William Valentin fece6b59c5 Add llama-swap local LLM setup
- Config at ~/.config/llama-swap/config.yaml
- Systemd user service (auto-starts)
- 6 models: qwen3, coder, glm, gemma, reasoning, gpt-oss
- Endpoint: http://127.0.0.1:8080
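The commit points at a config at ~/.config/llama-swap/config.yaml with six named models. A minimal sketch of what that file might look like, assuming llama-swap's upstream config shape (`models:` entries with a `cmd:` that uses the `${PORT}` macro) — the model file paths and llama-server flags here are assumptions, not taken from the repo:

```yaml
# Sketch of ~/.config/llama-swap/config.yaml.
# Model names come from the commit message; paths/flags are placeholders.
models:
  "qwen3":
    cmd: |
      llama-server --port ${PORT}
      -m /path/to/qwen3.gguf
  "coder":
    cmd: |
      llama-server --port ${PORT}
      -m /path/to/coder.gguf
  # glm, gemma, reasoning, and gpt-oss would follow the same shape
```

llama-swap loads whichever entry matches the `model` field of an incoming request, swapping the running llama-server process as needed.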
2026-01-26 22:19:41 -08:00
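The "systemd user service (auto-starts)" line suggests a user unit along these lines — a sketch only, assuming the llama-swap binary lives on the user's PATH-equivalent and that its `--config`/`--listen` flags match the upstream CLI; verify both against the actual install:

```ini
# Sketch of ~/.config/systemd/user/llama-swap.service (unit name assumed)
[Unit]
Description=llama-swap local LLM proxy
After=network.target

[Service]
ExecStart=%h/.local/bin/llama-swap --config %h/.config/llama-swap/config.yaml --listen 127.0.0.1:8080
Restart=on-failure

[Install]
WantedBy=default.target
```

With a unit like this, `systemctl --user enable --now llama-swap` starts the proxy, and the endpoint answers OpenAI-compatible requests, e.g. `curl http://127.0.0.1:8080/v1/models`.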
Repository size: 82 KiB
Languages: Shell 67.2%, Python 32.8%