SESSION STATE - Live Memory
Last update: 2026-03-10 (builder glass preset + ignore cleanup)
Owner: Codex + user
Current objective
Stabilize the UB24 builder workflow with reproducible progress and no context loss between sessions.
Current context snapshot
- Canonical workspace: C:\word
- Active branch target: ai/ub24-builder-v1
- Runtime command: python -m demo.app
- Canonical URL: http://127.0.0.1:5001/elementor/1
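A minimal smoke check for the two canonical local endpoints can be sketched as below; this script is not part of the repo, and the 3-second timeout is an arbitrary choice:

```python
import urllib.error
import urllib.request

def is_up(url, timeout=3.0):
    """Return True if the URL answers with any HTTP status (even 4xx/5xx)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, just not with 2xx -- it is still "up".
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in [
        ("UB24 builder", "http://127.0.0.1:5001/elementor/1"),
        ("LLM assistant", "http://127.0.0.1:5055/assistant"),
    ]:
        print(f"{name}: {'UP' if is_up(url) else 'DOWN'}")
```

Treating 4xx/5xx as "up" keeps the check focused on process liveness rather than route correctness.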
Done recently
- Defined memory/versioning strategy based on stable rules + live state + history.
- Added startup/close hooks to enforce session continuity.
- Updated startup protocol to always read agent.md in addition to the core memory files.
- Disabled legacy routes /customizer2/<site_id> and /customizer3/<site_id> in demo/routes/customizer_ascii.py (they now return 404).
- Stabilized elementor_builder for the restaurante vertical in free-drag mode while preserving manual placement.
- Reset action now restores the base template for each business category (rubro) instead of wiping all blocks.
- Improved menu mapping to resolve links by semantic intent + block type (contact/map/social/review/cards).
- Added draft autosave and preview-final device propagation (desktop/tablet/phone).
- Added a universal LLM router toolkit for multi-project reuse at tools/llm_universal, with provider fallback, cooldown, and SQLite usage tracking.
- Stored the cross-project LLM execution plan in the root file llm.
- Configured and verified GITHUB_TOKEN and CLOUDFLARE_API_TOKEN / CLOUDFLARE_ACCOUNT_ID in the user env.
- Improved universal router behavior: missing-key fallback handling + round-robin balancing by route.
- Validated the unified multi-provider flow with real calls (fallback to GitHub Models when other candidates fail).
- Added a unified chat interface, tools/llm_universal/chat_cli.py, with persistent context via chat_history.json.
- Documented the full LLM implementation log in codex/LLM_UNIVERSAL_LOG.md.
- Enabled OpenRouter in the user env from local Cline secrets and validated live calls.
- Added a unified HTTP API server for the LLM (tools/llm_universal/api_server.py) with persistent sessions and context continuity.
- Added run/test scripts for immediate usage (run_server.ps1, test_api.ps1) and validated the E2E flow.
- Finalized the universal assistant as a standalone service in tools/llm_universal (detached from demo.app), with browser chat on :5055.
- Installed a Task Scheduler autostart task (GKACHELE-LLM-Chat-Autostart) to launch at user logon.
- Locked the assistant web UI to a fixed agent mode and route (agent) to avoid manual model switching.
- Added startup memory preload in agent sessions (AGENTS.md, agent.md, codex/SESSION_STATE.md, codex/VERSIONADO_IA.md).
- Verified LLM GUI server health on 127.0.0.1:5055 and clarified that 304 responses are normal cache behavior for GET /assistant.
- Verified the Groq API key is valid (direct GET /models and POST /chat/completions return 200).
- Started and validated both local GUIs:
  - LLM assistant: http://127.0.0.1:5055/assistant (health: GET /api/llm/health)
  - UB24 builder: http://127.0.0.1:5001/elementor/1
- Added xAI (Grok) provider routing to tools/llm_universal/provider_config.example.json and set XAI_API_KEY in the user env.
- xAI endpoints returned 403 for both GET /v1/models and POST /v1/chat/completions with the provided key (base URL https://api.x.ai/v1).
- Added an OpenAI provider using the Responses API with codex-mini-latest in routes; implemented Responses payload parsing in the router and updated the key setup/checklist and autostart env propagation.
- Added menu glass/dark/default presets and hover/blur styling in elementor/templates/elementor_builder.html; incoming presets are now logged to logs/elementor_save.log.
- Expanded .gitignore to drop local artifacts (db/logs/snapshots/pycache/gitea_data/free-llm resources, etc.).
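The router behavior described above (round-robin by route, per-provider cooldown, fallback to the next candidate on failure) can be sketched roughly as follows; the `Provider`/`Route` names and the `call` signature are illustrative, not the actual tools/llm_universal API:

```python
import time
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cooldown_until: float = 0.0  # epoch seconds; 0 means available now

    def available(self):
        return time.time() >= self.cooldown_until

@dataclass
class Route:
    providers: list
    rr_index: int = 0        # round-robin cursor
    cooldown_s: float = 60.0  # assumed cooldown window after a failure

    def call(self, send):
        """Try providers round-robin; on failure, put the provider on
        cooldown and fall back to the next candidate."""
        n = len(self.providers)
        for step in range(n):
            p = self.providers[(self.rr_index + step) % n]
            if not p.available():
                continue
            try:
                result = send(p)
            except Exception:
                p.cooldown_until = time.time() + self.cooldown_s
                continue
            # advance the cursor so the next call starts at the next provider
            self.rr_index = (self.rr_index + step + 1) % n
            return result
        raise RuntimeError("all providers failed or are cooling down")
```

The cooldown keeps a failing free-tier provider out of rotation for a while instead of retrying it on every request, which matches the fallback-to-GitHub-Models behavior noted above.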
In progress
- Final QA of restaurante flow end-to-end (order, menu links, responsive, publish persistence).
- Enable additional provider keys (beyond OpenRouter) for universal LLM fallback.
- Stabilize free-tier provider mix (monitor Groq/auth errors if they reappear and keep fallback stable).
- Prepare the first project integration using /api/llm/chat.
- Standalone service smoke test validated (/assistant + /api/llm/health + chat POST).
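A minimal client sketch for that first integration; the request shape (a JSON body with message and session_id, where the server keeps context per session) is an assumption, not the documented api_server.py contract:

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:5055/api/llm/chat"  # endpoint from the notes above

def build_payload(message, session_id=None):
    """Assumed request shape: the server continues context per session_id."""
    payload = {"message": message}
    if session_id:
        payload["session_id"] = session_id
    return payload

def chat(message, session_id=None):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(message, session_id)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

Reusing the same session_id across calls is what would give the integration the persistent-context behavior the service advertises.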
Blockers
- xAI endpoints returned 403 with the provided key; need a valid xAI API key and/or the correct base URL from the xAI console.
- OpenAI provider added but not validated yet (requires OPENAI_API_KEY).
- Agent UI occasionally shows raw tool-call JSON (e.g. {"type":"tool_call",...}) instead of executing the tool and returning a final natural-language response.
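One hedged way to guard against that last blocker is to detect a tool-call object in the model output before rendering and route it to execution instead of the user; everything beyond the `"type": "tool_call"` marker seen in the logged snippet is an assumption:

```python
import json

def extract_tool_call(text):
    """Return the parsed tool-call dict if the model emitted one as raw
    JSON, else None (meaning the text is a normal answer to render)."""
    stripped = text.strip()
    if not stripped.startswith("{"):
        return None
    try:
        obj = json.loads(stripped)
    except json.JSONDecodeError:
        return None
    if isinstance(obj, dict) and obj.get("type") == "tool_call":
        return obj
    return None
```

The agent loop would run this on every model turn: when it returns a dict, execute the tool and feed the result back; only when it returns None does the text reach the user.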
Next 3 steps
- If :5055 goes down again, inspect tools/llm_universal/_api_err.log for crash details and adjust the autostart task to restart on failure.
- Persist and auto-restore the last session_id in the web UI so context resumes automatically after a PC restart.
- Fix the agent tool-call parser/loop so raw JSON is never rendered to the user and every valid tool call is executed before the final answer.
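For the session_id step, a small server-side sketch of persisting the last session so it can be auto-restored after a restart; the file name and key are hypothetical, not existing parts of the service:

```python
import json
from pathlib import Path

STATE_FILE = Path("last_session.json")  # hypothetical location next to the server

def save_last_session(session_id, path=STATE_FILE):
    """Record the most recent session_id so the UI can resume it."""
    path.write_text(json.dumps({"session_id": session_id}), encoding="utf-8")

def load_last_session(path=STATE_FILE):
    """Return the saved session_id, or None on first run / corrupt file."""
    try:
        return json.loads(path.read_text(encoding="utf-8")).get("session_id")
    except (FileNotFoundError, json.JSONDecodeError):
        return None
```

The web UI could then ask the server for the saved id at load time instead of relying on browser state surviving the restart.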