Multi-agent parallelism applies to all LLMs, not just local
@@ -65,14 +65,17 @@ curl http://127.0.0.1:8080/v1/chat/completions \
 - No rate limits
 - No cost accumulation
 
-### 🚀 Parallel Work → **MULTI-AGENT LOCAL**
+### 🚀 Parallel Work → **MULTI-AGENT**
 When speed matters, spawn multiple workers:
 ```bash
-# Flynn can spawn sub-agents hitting local LLMs
+# Flynn can spawn sub-agents targeting any LLM
 # Each agent works independently, results merge
 ```
 - Use for: bulk analysis, multi-file processing, research tasks
-- Coordinate via sessions_spawn with local model routing
+- Coordinate via `sessions_spawn` with model param
+- **Local:** best for privacy + no rate limits
+- **Cloud:** best for complex tasks needing quality
+- Mix and match based on task requirements
 
 ### ⚡ Quick One-Shot → **COPILOT or LOCAL**
 ```bash
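The fan-out pattern described in the hunk above can be sketched in plain bash against the OpenAI-compatible endpoint the document already uses (llama-swap @ :8080). This is a sketch, not the repo's actual tooling: the `ask` and `fan_out` helper names, the `out-N.json` files, and the `"local"` model id are illustrative assumptions.

```bash
# Sketch of multi-agent fan-out: one background worker per prompt, each
# hitting the same OpenAI-compatible endpoint; results merge after `wait`.
# ENDPOINT, the "local" model id, and the helper names are assumptions.
ENDPOINT="${ENDPOINT:-http://127.0.0.1:8080/v1/chat/completions}"

ask() {
  # One worker: POST a single chat request; result lands in its own file.
  local prompt="$1" out="$2"
  curl -s "$ENDPOINT" \
    -H 'Content-Type: application/json' \
    -d "{\"model\":\"local\",\"messages\":[{\"role\":\"user\",\"content\":\"$prompt\"}]}" \
    > "$out"
}

fan_out() {
  # Spawn workers in parallel, then block until every one has finished.
  local i=0 prompt
  for prompt in "$@"; do
    ask "$prompt" "out-$i.json" &   # each worker runs independently
    i=$((i + 1))
  done
  wait                              # merge step happens after this point
}
```

For example, `fan_out "summarize src/a" "summarize src/b"` would produce `out-0.json` and `out-1.json` concurrently; a real merge step would read those files afterwards.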
@@ -55,7 +55,12 @@
 **Local First** — Use local LLMs (llama-swap @ :8080) when:
 1. **Privacy/Confidentiality** — Sensitive data never leaves the machine
 2. **Long-running tasks** — No API costs, no rate limits, no timeouts
 3. **Parallel work** — Spawn multiple agents hitting local endpoint
 
+**Multi-agent parallelism** — Spawn multiple agents for speed:
+- Works with local AND cloud LLMs
+- Local: best for privacy, no rate limits
+- Cloud: best for complex tasks needing quality
+- Mix based on task requirements
 
 **Always check availability** — Local LLMs may not be running:
 ```bash
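The availability check that the second hunk truncates can be fleshed out as a small probe, assuming the endpoint serves the standard OpenAI-compatible `/v1/models` listing route; the `pick_model` helper and the `cloud-fallback` model id are illustrative, not names from the repo.

```bash
# Sketch: probe the local endpoint before routing work to it, falling
# back to a cloud model when llama-swap is not running.
# `pick_model` and the "cloud-fallback" id are illustrative assumptions.
pick_model() {
  if curl -sf --max-time 2 http://127.0.0.1:8080/v1/models > /dev/null; then
    echo "local"           # llama-swap is up: route to the local model
  else
    echo "cloud-fallback"  # not running: use a cloud model instead
  fi
}
```

Running `MODEL="$(pick_model)"` before spawning agents lets the same dispatch code mix local and cloud workers based on what is actually reachable.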