- Contents in this wiki are for entertainment purposes only
AgentFlow Launcher Guide: Orchestrating Multi-Agent Workflows
XenoEngineer (talk | contribs)
Latest revision as of 15:49, 13 May 2026
# Launcher Guide: Orchestrating Multi-Agent Workflows
This guide covers how to use the **AgentFlow Launcher** to orchestrate complex multi-agent workflows, from simple agent creation to sophisticated inter-loquation (agent-to-agent dialogue) pipelines.
## Launcher Components
```
Launcher (REST API + CLI)
├── Agent Pool (registry of active agents)
├── Routing Engine (message path selection)
├── State Manager (lifecycle tracking)
├── Vector Broker (semantic alignment via CLIO)
└── Timeline Sync (causality recording)
```
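The Agent Pool at the top of this tree is, conceptually, a registry of active agents keyed by ID. A minimal Python sketch of that idea (the names and fields here are illustrative, not the launcher's actual internals):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One entry in the Agent Pool registry (illustrative fields)."""
    id: str
    model: str
    persona: str
    status: str = "active"

@dataclass
class AgentPool:
    """Registry of active agents, keyed by agent ID."""
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.id] = agent

    def lookup(self, agent_id: str) -> Agent:
        return self.agents[agent_id]

pool = AgentPool()
pool.register(Agent("code-asst", "codellama:70b", "Senior Go Engineer"))
print(pool.lookup("code-asst").persona)  # Senior Go Engineer
```

The other components (Routing Engine, State Manager, and so on) would sit on top of this registry, resolving agent IDs before dispatching messages.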
---
## 1. Agent Instantiation via CLI
### **Create a New Agent**
```bash
llm-launcher.exe new "CodeAssistant" \
--model "codellama:70b" \
--endpoint "http://localhost:11434" \
--persona "Senior Go Engineer" \
--role "Writes production-safe Go code"
```
### **Create Multiple Agents**
```bash
# Create a narrative agent
llm-launcher.exe new "Narrator" \
--model "llama3:8b" \
--persona "Creative Storyteller" \
--role "Generates engaging narratives"
# Create a documentation agent
llm-launcher.exe new "DocumentationAgent" \
--model "mistral:7b" \
--persona "Technical Writer" \
--role "Produces clear API documentation"
# Create a test engineer
llm-launcher.exe new "TestAgent" \
--model "codellama:70b" \
--persona "QA Specialist" \
--role "Writes comprehensive test suites"
```
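When you need many agents, the repeated `new` invocations above can be generated from a spec list. A hypothetical Python wrapper that builds the argv for each call (it only constructs the command lines; pass them to `subprocess.run` yourself if the launcher binary is on your PATH):

```python
AGENTS = [
    ("Narrator", "llama3:8b", "Creative Storyteller", "Generates engaging narratives"),
    ("DocumentationAgent", "mistral:7b", "Technical Writer", "Produces clear API documentation"),
    ("TestAgent", "codellama:70b", "QA Specialist", "Writes comprehensive test suites"),
]

def new_agent_cmd(name, model, persona, role):
    """Build the argv for one `llm-launcher.exe new` call."""
    return ["llm-launcher.exe", "new", name,
            "--model", model, "--persona", persona, "--role", role]

commands = [new_agent_cmd(*spec) for spec in AGENTS]
print(commands[0][2])  # Narrator
```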
### **List Active Agents**
```bash
llm-launcher.exe list
Output:
───────────────────────────────────────────
ID Model Status Persona
───────────────────────────────────────────
code-asst codellama:70b active Senior Go Engineer
narrator llama3:8b active Creative Storyteller
docs-agent mistral:7b active Technical Writer
test-agent codellama:70b active QA Specialist
───────────────────────────────────────────
```
---
## 2. Single-Agent Queries
### **Direct Chat with Agent**
```bash
llm-launcher.exe chat CodeAssistant "Write a Go function to validate email addresses"
Output:
🤖 CodeAssistant:
func ValidateEmail(email string) (bool, error) {
addr, err := mail.ParseAddress(email)
if err != nil {
return false, fmt.Errorf("invalid email: %w", err)
}
return addr != nil, nil
}
```
### **Get Agent Capabilities**
```bash
llm-launcher.exe info CodeAssistant
Output:
Name: CodeAssistant
Model: codellama:70b
Persona: Senior Go Engineer
Role: Writes production-safe Go code
Endpoint: http://localhost:11434
Status: active
Uptime: 2h 15m
```
---
## 3. Inter-LLM Communication (Inter-Loquation)
### **Two-Agent Dialogue**
Send a message from one agent to another:
```bash
llm-launcher.exe chat \
--from CodeAssistant \
--to DocumentationAgent \
--message "Explain this Go function: func Add(a, b int) int"
Output:
🤖 CodeAssistant sends to 📖 DocumentationAgent:
> The Add function takes two integers and returns their sum.
>
> Parameters:
> - a (int): First operand
> - b (int): Second operand
>
> Returns:
> - (int): The sum of a and b
>
> Example: Add(3, 5) returns 8
```
### **Enable Vector Diffusion (Semantic Alignment)**
```bash
llm-launcher.exe chat \
--from CodeAssistant \
--to DocumentationAgent \
--message "Explain this complex algorithm" \
--vector-diffusion true
# This ensures that the semantic meaning is preserved across models
# CLIO embeddings align CodeAssistant's code semantics with DocumentationAgent's narrative
```
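Under the hood, semantic alignment amounts to comparing embedding vectors produced by the two models. The CLIO alignment method itself is not documented here; the sketch below only shows the standard cosine-similarity formula that such a check would typically build on, with made-up embeddings:

```python
import math

def cosine_alignment(a, b):
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

code_vec = [0.2, 0.7, 0.1]    # hypothetical CodeAssistant embedding
docs_vec = [0.25, 0.65, 0.2]  # hypothetical DocumentationAgent embedding
print(cosine_alignment(code_vec, docs_vec) > 0.85)  # True: above threshold
```

A score near 1.0 means the two agents' representations of the message agree; Section 10 shows the launcher flagging scores below 0.85.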
---
## 4. Orchestrated Pipelines
### **Sequential Pipeline**
Execute agents in sequence:
```bash
llm-launcher.exe launch-pipeline \
--agents CodeAssistant,DocumentationAgent,Narrator,TestAgent \
--input "Design and implement a distributed cache with strong consistency" \
--output-file "outputs/20260513-cache-design.md"
```
**Execution Flow**:
1. CodeAssistant → Designs the cache system (code + architecture)
2. DocumentationAgent → Creates API documentation
3. Narrator → Writes a technical summary/narrative
4. TestAgent → Generates test suite
**Output**:
```
outputs/20260513-cache-design.md
# Distributed Cache Design
## Architecture (by CodeAssistant)
... [code and design details]
## API Reference (by DocumentationAgent)
... [documentation]
## Executive Summary (by Narrator)
... [narrative overview]
## Test Suite (by TestAgent)
... [test cases]
```
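Conceptually, a sequential pipeline folds the input through each agent in turn and concatenates the sections into one Markdown document. A toy sketch with stubbed agents standing in for real LLM calls (the launcher's actual execution loop is not documented here):

```python
def run_pipeline(agents, user_input):
    """Run each (name, fn) stage in order; later stages see earlier output."""
    context, sections = user_input, []
    for name, fn in agents:
        result = fn(context)
        sections.append(f"## {name}\n{result}")
        context = context + "\n" + result  # accumulate context for the next stage
    return "\n\n".join(sections)

# Stub agents standing in for real model calls
stages = [
    ("CodeAssistant", lambda ctx: "[architecture + code]"),
    ("DocumentationAgent", lambda ctx: "[API reference]"),
]
doc = run_pipeline(stages, "Design a distributed cache")
print(doc.splitlines()[0])  # ## CodeAssistant
```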
### **Parallel Consensus Pipeline**
Get multiple agents' perspectives on the same input:
```bash
llm-launcher.exe launch-consensus \
--agents CodeAssistant,TestAgent,SecurityAgent \
--query "Review this authentication implementation" \
--merge-strategy highest_confidence
Output:
📊 Consensus Results:
✅ CodeAssistant (confidence: 0.96)
"Implementation is clean and follows Go conventions. Good error handling."
✅ TestAgent (confidence: 0.94)
"Adequate test coverage. Edge cases covered: invalid tokens, expiration."
✅ SecurityAgent (confidence: 0.98)
"Uses bcrypt correctly. Recommend rate limiting on login endpoint."
📌 Merged Recommendation:
"Production-ready with rate limiting addition."
```
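The `highest_confidence` merge strategy presumably selects the result reported with the highest confidence score. A minimal sketch of that selection logic (the launcher's actual merging is not documented; this is only the obvious reading of the flag):

```python
def merge_highest_confidence(results):
    """Pick the review from the agent reporting the highest confidence."""
    return max(results, key=lambda r: r["confidence"])

results = [
    {"agent": "CodeAssistant", "confidence": 0.96, "review": "Clean, good error handling."},
    {"agent": "TestAgent", "confidence": 0.94, "review": "Adequate coverage."},
    {"agent": "SecurityAgent", "confidence": 0.98, "review": "Add rate limiting."},
]
winner = merge_highest_confidence(results)
print(winner["agent"])  # SecurityAgent
```

Other plausible strategies (majority vote, weighted blending) would replace the `max` call with their own reduction over `results`.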
---
## 5. REST API Endpoints
### **List Agents**
```bash
curl http://localhost:8080/api/v1/agents
Response:
{
"agents": [
{
"id": "code-asst",
"name": "CodeAssistant",
"model": "codellama:70b",
"status": "active",
"personification": {
"persona": "Senior Go Engineer",
"role": "Writes production-safe Go code"
}
},
...
]
}
```
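Client code would typically parse that JSON and filter on `status`. A sketch that works on a response shaped like the sample above, using only the standard library (swap in `urllib` or `requests` for the actual HTTP call):

```python
import json

# Abbreviated stand-in for the GET /api/v1/agents response body
response_body = """
{"agents": [
  {"id": "code-asst", "name": "CodeAssistant", "model": "codellama:70b", "status": "active"},
  {"id": "old-agent", "name": "Retired", "model": "llama3:8b", "status": "stopped"}
]}
"""

def active_agents(body):
    """Return the names of agents whose status is 'active'."""
    data = json.loads(body)
    return [a["name"] for a in data["agents"] if a["status"] == "active"]

print(active_agents(response_body))  # ['CodeAssistant']
```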
### **Create Agent (REST)**
```bash
curl -X POST http://localhost:8080/api/v1/agents \
-H "Authorization: Bearer [PAT_FROM_WIN11]" \
-H "Content-Type: application/json" \
-d '{
"name": "Narrator",
"personification": {
"persona": "Creative Storyteller",
"prompt": "Generate engaging narratives about technology"
},
"llm_config": {
"model": "llama3:8b",
"endpoint": "http://localhost:11434"
}
}'
Response:
{
"id": "narrator",
"name": "Narrator",
"status": "active",
"created_at": "2026-05-13T02:19:46Z"
}
```
### **Inter-Agent Chat (REST)**
```bash
curl -X POST http://localhost:8080/api/v1/agents/{from}/chat \
-H "Content-Type: application/json" \
-d '{
"to": "DocumentationAgent",
"message": "Explain this code in plain English",
"vector_diffusion": true
}'
Response:
{
"id": "msg-20260513-001",
"from": "CodeAssistant",
"to": "DocumentationAgent",
"response": "This code validates email addresses...",
"vector": [0.23, -0.67, 0.89, ...],
"confidence": 0.98,
"latency_ms": 234
}
```
### **Launch Pipeline (REST)**
```bash
curl -X POST http://localhost:8080/api/v1/pipelines \
-H "Content-Type: application/json" \
-d '{
"agents": ["CodeAssistant", "DocumentationAgent", "Narrator"],
"input": "Write a story about a spaceship",
"output_format": "markdown"
}'
Response:
{
"pipeline_id": "pipe-20260513-001",
"status": "running",
"progress": "0/3 agents completed",
"created_at": "2026-05-13T02:19:46Z"
}
# Poll for results
curl http://localhost:8080/api/v1/pipelines/pipe-20260513-001
Response:
{
"pipeline_id": "pipe-20260513-001",
"status": "completed",
"progress": "3/3 agents completed",
"output": "# Story: The Spaceship\n... [full output]"
}
```
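A client usually wraps that poll in a loop, retrying until `status` is `completed`. A sketch of the loop with a stubbed fetch generator in place of the real HTTP GET (add a `time.sleep` between real requests):

```python
import itertools

def fetch_status_stub():
    """Stand-in for GET /api/v1/pipelines/{id}: running twice, then completed."""
    states = [{"status": "running"}, {"status": "running"},
              {"status": "completed", "output": "# Story: The Spaceship"}]
    for s in states:
        yield s

def poll_pipeline(fetcher, max_polls=10):
    """Poll until the pipeline completes or the poll budget runs out."""
    for resp in itertools.islice(fetcher, max_polls):
        if resp["status"] == "completed":
            return resp
    raise TimeoutError("pipeline did not complete")

result = poll_pipeline(fetch_status_stub())
print(result["status"])  # completed
```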
---
## 6. Agent Dialogue Contracts
Define reusable agent communication patterns in YAML:
```yaml
# dialogues/code-to-docs.yaml
pipeline:
source: CodeAssistant
destination: DocumentationAgent
trigger: "Function implemented"
payload:
vector_diffusion: true
payload_type: "code"
# Transformation rules
transform:
- extract_docstring: true
- preserve_semantics: true
- add_examples: true
```
Use the contract:
```bash
llm-launcher.exe dialogue \
--contract "code-to-docs" \
--input "func Download(url string) ([]byte, error)"
```
---
## 7. Emergence Detection & Zenkin Moments
Monitor multi-agent interactions for sudden insights:
```bash
llm-launcher.exe monitor-emergence \
--agents CodeAssistant,SecurityAgent,ArchitectAgent \
--query "Design a zero-trust authentication system"
Output:
⚡ Zenkin Flash Detected (entropy: 0.02, confidence: 0.98)
Time: 2026-05-13 02:22:15
Pattern: CodeAssistant → SecurityAgent → ArchitectAgent (3 exchanges)
📌 Emergence: Hybrid approach combining OAuth 2.0 + hardware keys
First suggested by: SecurityAgent
Refined by: CodeAssistant
Architecture validated by: ArchitectAgent
```
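The entropy figure in a Zenkin flash suggests the detector watches for the agents' candidate answers collapsing onto one idea, i.e. low entropy over the distribution of proposals. A heavily hedged sketch using Shannon entropy (the launcher's actual detector is not documented; the threshold and distributions below are illustrative only):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def zenkin_flash(candidate_probs, threshold=0.1):
    """Flag an emergence event when agents converge (entropy below threshold)."""
    return shannon_entropy(candidate_probs) < threshold

spread_out = [0.4, 0.3, 0.3]      # agents still disagree: high entropy
converged = [0.99, 0.005, 0.005]  # near-consensus on one approach: low entropy
print(zenkin_flash(spread_out), zenkin_flash(converged))  # False True
```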
---
## 8. Timeline & Causality
Track all agent interactions:
```bash
llm-launcher.exe timeline \
--since "2026-05-13 02:00:00" \
--format json
Output:
{
"events": [
{
"timestamp": "2026-05-13T02:19:46Z",
"agent_from": "CodeAssistant",
"agent_to": "DocumentationAgent",
"vector_alignment": 0.98,
"latency_ms": 234
},
{
"timestamp": "2026-05-13T02:22:15Z",
"type": "emergence_detected",
"agents": ["CodeAssistant", "SecurityAgent", "ArchitectAgent"],
"pattern": "hybrid_auth_system"
}
]
}
```
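The timeline JSON is straightforward to post-process; for example, pulling out just the emergence events. A sketch against a payload shaped like the sample above:

```python
import json

timeline = json.loads("""
{"events": [
  {"timestamp": "2026-05-13T02:19:46Z", "agent_from": "CodeAssistant",
   "agent_to": "DocumentationAgent", "vector_alignment": 0.98},
  {"timestamp": "2026-05-13T02:22:15Z", "type": "emergence_detected",
   "agents": ["CodeAssistant", "SecurityAgent", "ArchitectAgent"],
   "pattern": "hybrid_auth_system"}
]}
""")

def emergence_events(tl):
    """Keep only events explicitly tagged as emergence detections."""
    return [e for e in tl["events"] if e.get("type") == "emergence_detected"]

print([e["pattern"] for e in emergence_events(timeline)])  # ['hybrid_auth_system']
```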
---
## 9. Configuration & Customization
### **Agent Personification Override**
```yaml
# agents/custom-code-assistant.yaml
name: CustomCodeAssistant
personification:
persona: "Expert Rust Developer"
role: "Writes memory-safe systems code"
system_prompt: |
You are an expert in Rust systems programming...
Always prioritize memory safety and zero-cost abstractions.
llm_config:
model: codellama:70b
endpoint: "http://localhost:11434"
temperature: 0.3 # More deterministic
max_tokens: 2048
```
### **Pipeline Configuration**
```yaml
# pipelines/full-development-workflow.yaml
agents:
- CodeAssistant:
input: "Build a TODO app in Go"
timeout: 60000
- TestAgent:
trigger_on: "CodeAssistant.done"
input: "Write tests for:"
context_from: CodeAssistant
- DocumentationAgent:
trigger_on: "TestAgent.done"
input: "Generate API docs for:"
context_from: [CodeAssistant, TestAgent]
output:
format: markdown
save_to: "outputs/app-build-{timestamp}.md"
include_vectors: true
```
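The `trigger_on` fields define a dependency graph over the stages: a stage runs only after the stages it waits on have finished. The resolution order can be sketched as a topological sort (stage names mirror the YAML above; whether the launcher sorts this way internally is an assumption):

```python
from graphlib import TopologicalSorter

# stage -> stages it waits on, taken from the trigger_on fields above
deps = {
    "CodeAssistant": set(),
    "TestAgent": {"CodeAssistant"},
    "DocumentationAgent": {"CodeAssistant", "TestAgent"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['CodeAssistant', 'TestAgent', 'DocumentationAgent']
```

A cycle in `trigger_on` (two stages waiting on each other) would make the sort raise `CycleError`, which is a reasonable validation step before launching.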
---
## 10. Troubleshooting
### **Agent Not Responding**
```bash
llm-launcher.exe debug CodeAssistant
Output:
Agent: CodeAssistant
Status: offline
Last Heartbeat: 5m 30s ago
Endpoint: http://localhost:11434
Error: Connection refused
Fix: Ensure Ollama is running on port 11434
ollama serve
```
### **Semantic Misalignment**
```bash
llm-launcher.exe diagnose-vector \
--from CodeAssistant \
--to DocumentationAgent \
--message "Example code"
Output:
Vector Alignment: 0.73 (below threshold 0.85)
Recommendation: Increase context window or use more specific prompts
```
---
## Next Steps
1. **Create Your First Agent** – Use the CLI to create an agent
2. **Run a Simple Pipeline** – Chain two agents together
3. **Monitor Emergence** – Watch for Zenkin moments in multi-agent dialogue
4. **Build Custom Contracts** – Define reusable agent patterns
See [ARCHITECTURE.md](ARCHITECTURE.md) for design details and [API_SPEC.md](API_SPEC.md) for full API reference.