Multi-agent parallelism applies to all LLMs, not just local
@@ -55,7 +55,12 @@
**Local First** — Use local LLMs (llama-swap @ :8080) when:
1. **Privacy/Confidentiality** — Sensitive data never leaves the machine
2. **Long-running tasks** — No API costs, no rate limits, no timeouts
3. **Parallel work** — Spawn multiple agents hitting the local endpoint

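The criteria above can be captured as a small routing helper. This is only a sketch: the function name is hypothetical, and the cloud URL is a placeholder, not part of the original setup; only the llama-swap address comes from this doc.

```bash
# Hypothetical router implementing the "Local First" criteria above.
# The cloud URL is a placeholder; llama-swap @ :8080 is from the doc.
pick_endpoint() {
  local sensitive=$1 long_running=$2 needs_quality=$3   # each "yes" or "no"
  if [ "$sensitive" = yes ] || [ "$long_running" = yes ]; then
    echo "http://localhost:8080"      # local: private, free, no rate limits
  elif [ "$needs_quality" = yes ]; then
    echo "https://cloud.example/v1"   # cloud: quality-critical tasks
  else
    echo "http://localhost:8080"      # default to local
  fi
}

pick_endpoint yes no no   # prints http://localhost:8080
```
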
**Multi-agent parallelism** — Spawn multiple agents for speed:
- Works with local AND cloud LLMs
- Local: best for privacy, no rate limits
- Cloud: best for complex tasks needing quality
- Mix local and cloud based on task requirements

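The fan-out described above can be sketched in bash. `run_agent` here is a hypothetical stand-in for whatever call dispatches a prompt to a local or cloud endpoint:

```bash
# Hypothetical agent call; in practice this would curl a local
# (llama-swap @ :8080) or cloud OpenAI-compatible endpoint.
run_agent() {
  echo "result for: $1"
}

# Spawn one background agent per prompt, then wait for all of them.
# Against a local endpoint there are no rate limits, so parallelism
# is bounded only by machine resources.
fan_out() {
  for prompt in "$@"; do
    run_agent "$prompt" &
  done
  wait
}

fan_out "summarize module A" "summarize module B"
```

Because each agent runs as a background job, total wall time is roughly that of the slowest agent rather than the sum of all of them.
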
**Always check availability** — Local LLMs may not be running:
```bash
# Probe the OpenAI-compatible endpoint served by llama-swap before
# dispatching agents (sketch; adjust the URL/route to your setup).
curl -sf http://localhost:8080/v1/models >/dev/null \
  || echo "local LLM not running; fall back to cloud"
```