- Installed hooks: boot-md, session-memory, command-logger, bootstrap-extra-files
- Updated default model to litellm/copilot-claude-sonnet-4.6
- Added memory/startup-health.json + boot-last-run.json from boot check
- Added memory/startup-health.md warning log
- Captured new session memory files (2026-03-04/05)
- Minor MEMORY.md + USER.md: added git preference line
180 lines
4.6 KiB
Markdown
# Session: 2026-03-03 21:18:33 UTC

- **Session Key**: agent:main:main
- **Session ID**: 4e89d280-a179-4c69-ae5f-9d285ffb0f2c
- **Source**: gateway:sessions.reset

## Conversation Summary

user: [Tue 2026-03-03 21:09 UTC] hey

user: [Tue 2026-03-03 21:09 UTC] same

assistant: [[reply_to_current]] Thanks. I found a likely mismatch and patched it:

- Changed the LiteLLM provider base URL to include `/v1`:
  - `http://192.168.153.113:18804/v1`

This often fixes OpenAI-compatible clients that were otherwise hitting the wrong paths and surfacing a generic `Connection error`.
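
To see why the missing `/v1` matters: OpenAI-compatible clients build request URLs by appending a fixed route suffix (such as `/chat/completions`) to the configured base URL, so the `/v1` prefix has to be part of the base. A minimal sketch, using the server address from this session:

```bash
# OpenAI-compatible clients append the route suffix to the configured base URL,
# so the /v1 prefix must already be in the base. (Address from this session.)
suffix="/chat/completions"
echo "http://192.168.153.113:18804${suffix}"      # without /v1: wrong path
echo "http://192.168.153.113:18804/v1${suffix}"   # with /v1: the expected path
```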

Please do:

```bash
openclaw gateway restart
```

Then retest:

1) `model set litellm/claude-haiku-4-5`
2) `hey`

If it still fails, I’ll do one more hard fix: force the model list down to only known-good IDs and strip any dynamic reload leftovers.

user: [Tue 2026-03-03 21:10 UTC] model set to litellm/claude-haiku-4-5 hey run error: Connection error.

user: [Tue 2026-03-03 21:11 UTC] hey

user: [Tue 2026-03-03 21:11 UTC] model set to litellm/claude-haiku-4-5 hey run error: Connection error. model set to litellm/claude-haiku-4-5 hey

assistant: [[reply_to_current]] You’re not crazy. This now looks like an **OpenClaw-side LiteLLM integration bug** in your current build, not your LiteLLM server.

What we know:

- A direct LiteLLM API call to `claude-haiku-4-5` works (`OK`).
- An OpenClaw run with `provider=litellm model=claude-haiku-4-5` repeatedly fails with a generic `Connection error`.
- No auth/quota error is surfaced for that model path (unlike the Gemini 429s).
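
The direct check referenced above can be reproduced roughly like this. The endpoint and key appear elsewhere in this session; the payload shape is the standard OpenAI chat-completions body, and the exact command originally run is not recorded, so treat this as a sketch:

```bash
# Standard OpenAI-style chat-completions payload for a direct LiteLLM check.
# (Endpoint/key are quoted elsewhere in this session; the exact original
# command is unknown, so this is an illustrative reconstruction.)
payload='{"model":"claude-haiku-4-5","messages":[{"role":"user","content":"Reply with OK"}]}'
echo "$payload" | jq -e '.model'    # sanity-check the JSON before sending
# Requires network access to the proxy:
# curl -s http://192.168.153.113:18804/v1/chat/completions \
#   -H "Authorization: Bearer sk-3oxoovwm69RyWRREGPJfDQ" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```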

### Best next fix (likely)

You’re on **OpenClaw 2026.3.1**, and status keeps showing **update available: 2026.3.2**.
Please run:

```bash
openclaw update
openclaw gateway restart
```

Then retest:

```text
model set litellm/claude-haiku-4-5
hey
```

If it still fails after the update, I’ll do a hard workaround: rewire these models through a custom non-`litellm` provider block (same endpoint/key) so model selection works without the buggy path.

user: Sender (untrusted metadata):

```json
{
  "label": "openclaw-tui (gateway-client)",
  "id": "gateway-client",
  "name": "openclaw-tui",
  "username": "openclaw-tui"
}
```

[Tue 2026-03-03 21:14 UTC] hey

user: [Tue 2026-03-03 21:14 UTC] same issue

assistant: [[reply_to_current]] Got it. Since it still fails, let’s bypass the `litellm` provider path entirely.

Use a custom provider name (same LiteLLM endpoint/key), then select models from that provider.

Add this to `~/.openclaw/openclaw.json` under `models.providers`:

```json5
"liteproxy": {
  "baseUrl": "http://192.168.153.113:18804/v1",
  "apiKey": "sk-3oxoovwm69RyWRREGPJfDQ",
  "api": "openai-completions",
  "models": [
    { "id": "claude-haiku-4-5", "name": "Claude Haiku 4.5" },
    { "id": "gemini-2.5-flash", "name": "Gemini 2.5 Flash" },
    { "id": "gemini-2.5-flash-lite", "name": "Gemini 2.5 Flash Lite" },
    { "id": "zai-glm-4.7", "name": "GLM 4.7" }
  ]
}
```

And in `agents.defaults.models` add:

```json5
"liteproxy/claude-haiku-4-5": {},
"liteproxy/gemini-2.5-flash": {},
"liteproxy/gemini-2.5-flash-lite": {},
"liteproxy/zai-glm-4.7": {}
```

Then restart:

```bash
openclaw gateway restart
```

Test:

```text
model set liteproxy/claude-haiku-4-5
hey
```

If you want, I can give you one exact `jq` patch command to apply this automatically.
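
Such a `jq` patch could look roughly like this. It is a sketch run against a minimal stand-in config: the real `openclaw.json` layout is assumed from the snippets above, only one model entry is shown, and a real run would validate the output before moving it back into place:

```bash
# Sketch of the jq patch: add the liteproxy provider and one model entry.
# Run here against a minimal stand-in config; the real openclaw.json layout
# is assumed from the snippets above.
cfg=$(mktemp)
printf '%s\n' '{"models":{"providers":{}},"agents":{"defaults":{"models":{}}}}' > "$cfg"
jq '.models.providers.liteproxy = {
      baseUrl: "http://192.168.153.113:18804/v1",
      apiKey: "sk-3oxoovwm69RyWRREGPJfDQ",
      api: "openai-completions",
      models: [{id: "claude-haiku-4-5", name: "Claude Haiku 4.5"}]
    }
    | .agents.defaults.models["liteproxy/claude-haiku-4-5"] = {}' "$cfg"
rm -f "$cfg"
```

For the real file you would write to a temporary file and move it over the original after checking it, on top of the manual backup the chat already shows being taken.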

user: [Tue 2026-03-03 21:15 UTC] di it please

assistant: [[reply_to_current]] Done ✅ I added `liteproxy` as a separate provider and added its models to `/models`.

I also made a backup:

- `~/.openclaw/openclaw.json.backup-before-liteproxy-20260303T211619Z`

Now run:

```bash
openclaw gateway restart
```

Then test:

```text
model set liteproxy/claude-haiku-4-5
hey
```

If that still fails, paste the exact output and I’ll keep debugging live.

user: [Tue 2026-03-03 21:15 UTC] hey

user: [Tue 2026-03-03 21:16 UTC] ?

user: [Tue 2026-03-03 21:16 UTC] same