
Notes on AgentFlow

From Catcliffe Development
==Stubby stub==


AgentFlow is a RESTful endpoint that launches agents by the orchestrated intent of a top-holon, or a mere mortal.
== LLM-Launcher.exe Help Listing ==
<pre>
C:\xenoKOS\AgentFlow>bin\llm-launcher.exe --help
AgentFlow Launcher - CLI Bridge for MindChat Multi-Agent Orchestration
Usage:
  llm-launcher.exe <command> [options]
Commands:
  launch-pipeline          Launch a multi-agent pipeline workflow
  chat                     Send a message between two agents (inter-loquation)
  monitor-emergence        Monitor for emergence events (Zenkin flash moments)
  help                     Show this help message

launch-pipeline Options:
  --agents <list>          Comma-separated agent IDs (required)
                           Example: CodeAssistant,Narrator,TestAgent
  --input <text>           Pipeline input text (required)
  --output-file <path>     Where to save output (optional)
  --output-format <fmt>    Output format: markdown, json, text (default: markdown)
  --topology <topo>        MindChat topology: interloquation, seminar, chorus,
                           convergence, whisper-chain, oracle (default: seminar)
  --turns <n>              Number of dialogue turns (default: 6)
  --participants <n>       Number of participants (overrides agent count)
  --vector-diffusion       Enable semantic vector alignment (default: true)
  --verbose                Enable verbose output
Examples:
  # Two-agent pipeline
  llm-launcher.exe launch-pipeline \
    --agents CodeAssistant,DocumentationAgent \
    --input "Design a caching system" \
    --output-file output.md \
    --turns 8
  # Code review consensus (3 agents)
  llm-launcher.exe launch-pipeline \
    --agents CodeAssistant,TestAgent,SecurityAgent \
    --input "Review this authentication code" \
    --topology seminar \
    --turns 4

chat Options:
  --from <agent>           Source agent ID (required)
  --to <agent>             Destination agent ID (required)
  --message <text>         Message to send (required)
  --vector-diffusion       Enable semantic alignment (default: true)
Examples:
  # Send message from one agent to another
  llm-launcher.exe chat \
    --from CodeAssistant \
    --to DocumentationAgent \
    --message "Explain this function in plain English"

monitor-emergence Options:
  --agents <list>          Comma-separated agent IDs (required)
  --query <text>           Query for emergence detection (required)
  --timeout <seconds>      Detection timeout (default: 120)
Examples:
  # Detect novel solutions from multi-agent collaboration
  llm-launcher.exe monitor-emergence \
    --agents CodeAssistant,SecurityAgent,ArchitectAgent \
    --query "Design a zero-trust authentication system" \
    --timeout 300

Global Options:
  -h, --help               Show this help message
  --verbose                Enable verbose output

Environment Variables:
  MINDCHAT_PORT            MindChat HTTP server port (default: 1000)
  VECTOR_DIFFUSION         Enable vector alignment globally
  VERBOSE                  Enable verbose logging

For more information, see: docs/LAUNCHER_GUIDE.md
C:\xenoKOS\AgentFlow>
</pre>
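The launch-pipeline command can be scripted from other tools. Below is a minimal illustrative sketch, not part of AgentFlow itself: a hypothetical Python helper (`build_pipeline_cmd` is an invented name) that assembles an argv list using only the flags documented in the help listing above, and assumes `llm-launcher.exe` is on the PATH if you choose to actually execute it.

```python
# Illustrative helper (hypothetical, not shipped with AgentFlow):
# builds an argv list for llm-launcher.exe launch-pipeline using only
# the options documented in the help listing above.
from typing import List, Optional
import subprocess


def build_pipeline_cmd(agents: List[str], input_text: str,
                       topology: str = "seminar", turns: int = 6,
                       output_file: Optional[str] = None) -> List[str]:
    """Assemble a launch-pipeline command line from documented options."""
    cmd = ["llm-launcher.exe", "launch-pipeline",
           "--agents", ",".join(agents),   # comma-separated agent IDs
           "--input", input_text,
           "--topology", topology,          # seminar is the documented default
           "--turns", str(turns)]
    if output_file:
        cmd += ["--output-file", output_file]
    return cmd


if __name__ == "__main__":
    cmd = build_pipeline_cmd(["CodeAssistant", "TestAgent"],
                             "Review this authentication code",
                             topology="seminar", turns=4)
    print(" ".join(cmd))
    # To actually run it (requires llm-launcher.exe on PATH):
    # subprocess.run(cmd, check=True)
```

Mirroring the three-agent seminar example above, swapping the agent list or topology only changes the corresponding argv entries.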




==See also==
* The ''Social Brain'' of Paul S. Prueitt, Ph.D. &mdash; http://www.blueberry-brain.org/winterchaos/PreuittTheSocialBrain.pdf

Revision as of 13:20, 13 May 2026
