feat(external-llm): standardize tiers and optimize model selection
- Rename tiers: opus/sonnet/haiku → frontier/mid-tier/lightweight
- Align with industry benchmarks (MMLU, GPQA, Chatbot Arena)
- Add /external command for LLM mode control
- Fix invoke.py timeout passthrough (now 600s default)

Tier changes:
- Promote gemini-2.5-pro to frontier (benchmark-validated)
- Demote glm-4.7 to mid-tier, then removed (unreliable)
- Promote gemini-2.5-flash to mid-tier

New models added:
- gpt-5-mini, gpt-5-nano (GPT family coverage)
- grok-code (Grok/X family)
- glm-4.5-air (lightweight GLM)

Removed (redundant/unreliable):
- o3 (not available)
- glm-4.7 (timeouts)
- gpt-4o, big-pickle, glm-4.5-flash (redundant)

Final: 11 models across 3 tiers, 4 model families

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
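The tier rename and timeout passthrough above can be sketched as a small registry. This is a hypothetical illustration, not the repository's actual code: the `TIERS` structure, the `select_model` helper, and any tier placement not stated in the commit message (e.g. where gpt-5-mini or grok-code landed) are assumptions; only the tier names, the 600s default, and the model IDs come from the message.

```python
# Hypothetical sketch of the renamed tier registry described in the commit.
# Placements marked "assumed" are NOT stated in the commit message.
DEFAULT_TIMEOUT_S = 600  # invoke.py now passes this through by default

TIERS = {
    "frontier": ["gemini-2.5-pro"],                  # promoted per commit
    "mid-tier": ["gemini-2.5-flash", "gpt-5-mini"],  # gpt-5-mini placement assumed
    "lightweight": ["gpt-5-nano", "glm-4.5-air"],    # gpt-5-nano placement assumed
}


def select_model(tier: str) -> str:
    """Return the first model registered for a tier, or raise for unknown tiers."""
    models = TIERS.get(tier)
    if not models:
        raise KeyError(f"unknown tier: {tier}")
    return models[0]
```

Usage would then look like `select_model("frontier")`, with the caller forwarding `DEFAULT_TIMEOUT_S` to the underlying invoke call unless overridden.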
@@ -199,6 +199,15 @@
     ],
     "invokes": "skill:usage"
   },
+  "/external": {
+    "description": "Toggle and use external LLM mode (GPT-5.2, Gemini, etc.)",
+    "aliases": [
+      "/llm",
+      "/ext",
+      "/external-llm"
+    ],
+    "invokes": "command:external"
+  },
   "/README": {
     "description": "TODO",
     "aliases": [],
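A dispatcher consuming the command table in this diff might resolve the new aliases like the sketch below. This is an assumption for illustration: the `resolve` helper and the in-memory `COMMANDS` dict are hypothetical, while the entry itself mirrors the added JSON.

```python
# Hypothetical alias resolution over the command table added in this diff.
# The table entry mirrors the diff; the lookup helper is an assumption.
COMMANDS = {
    "/external": {
        "description": "Toggle and use external LLM mode (GPT-5.2, Gemini, etc.)",
        "aliases": ["/llm", "/ext", "/external-llm"],
        "invokes": "command:external",
    },
}


def resolve(name: str) -> str:
    """Map a typed command or alias to its invoke target."""
    for cmd, spec in COMMANDS.items():
        if name == cmd or name in spec["aliases"]:
            return spec["invokes"]
    raise KeyError(f"unknown command: {name}")
```

With this table, `/external`, `/llm`, `/ext`, and `/external-llm` all route to the same `command:external` target.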