chore(session): log glass preset changes

# SESSION STATE - Live Memory

Last update: 2026-03-10 (builder glass preset + ignore cleanup)
Owner: Codex + user

## Current objective

Stabilize the UB24 builder workflow with reproducible progress and no context loss.

6. Reset action now restores base template per rubro (instead of wiping all blocks).
7. Menu mapping improved to resolve links by semantic intent + block type (contact/map/social/review/cards).
8. Added draft autosave and preview-final device propagation (`desktop/tablet/phone`).
9. Added universal LLM router toolkit for multi-project reuse at `tools/llm_universal` with provider fallback, cooldown, and SQLite usage tracking.
10. Stored the cross-project LLM execution plan in the root file `llm`.
11. Configured and verified `GITHUB_TOKEN` and `CLOUDFLARE_API_TOKEN`/`CLOUDFLARE_ACCOUNT_ID` in the user env.
12. Improved universal router behavior: missing-key fallback handling + round-robin balancing by route.
13. Validated the unified multi-provider flow with real calls (fallback to GitHub Models when other candidates fail).
14. Added unified chat interface `tools/llm_universal/chat_cli.py` with persistent context via `chat_history.json`.
15. Documented the full LLM implementation log in `codex/LLM_UNIVERSAL_LOG.md`.
16. Enabled OpenRouter in the user env from local Cline secrets and validated live calls.
17. Added a unified HTTP API server for LLM (`tools/llm_universal/api_server.py`) with persistent sessions and context continuity.
18. Added run/test scripts for immediate usage (`run_server.ps1`, `test_api.ps1`) and validated the E2E flow.
19. Finalized the universal assistant as a standalone service in `tools/llm_universal` (detached from `demo.app`), with browser chat on `:5055`.
20. Installed Task Scheduler autostart (`GKACHELE-LLM-Chat-Autostart`) to launch at user logon.
21. Locked the assistant web UI to a fixed agent mode and route (`agent`) to avoid manual model switching.
22. Added startup memory preload in agent sessions (`AGENTS.md`, `agent.md`, `codex/SESSION_STATE.md`, `codex/VERSIONADO_IA.md`).
23. Verified LLM GUI server health on `127.0.0.1:5055` and clarified that `304` responses are normal cache behavior for `GET /assistant`.
24. Verified the Groq API key is valid (direct `GET /models` and `POST /chat/completions` return `200`).
25. Started and validated both local GUIs:
    - LLM assistant: `http://127.0.0.1:5055/assistant` (health: `GET /api/llm/health`)
    - UB24 builder: `http://127.0.0.1:5001/elementor/1`
26. Added xAI (Grok) provider routing to `tools/llm_universal/provider_config.example.json` and set `XAI_API_KEY` in the user env.
27. xAI endpoints returned `403` for both `GET /v1/models` and `POST /v1/chat/completions` with the provided key (base URL `https://api.x.ai/v1`).
28. Added an OpenAI provider using the Responses API with `codex-mini-latest` in routes; implemented Responses payload parsing in the router and updated the key setup/checklist and autostart env propagation.
29. Added menu glass/dark/default presets and hover/blur styling in `elementor/templates/elementor_builder.html`; incoming presets are now logged to `logs/elementor_save.log`.
30. Expanded `.gitignore` to drop local artifacts (db/logs/snapshots/`__pycache__`/gitea_data/free-llm resources, etc.).
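
The fallback + cooldown + round-robin behavior from items 9 and 12 can be sketched roughly as below. This is a minimal illustration, not the actual `tools/llm_universal` router: the class name, the `routes` shape, and the `call` callback are all hypothetical, and the SQLite usage tracking is omitted.

```python
import time
from collections import defaultdict


class RouterSketch:
    """Sketch of per-route round-robin with failure cooldown (names are hypothetical)."""

    def __init__(self, routes, cooldown_s=60):
        # routes: {"agent": ["groq", "openrouter", "github_models"], ...}
        self.routes = routes
        self.cooldown_s = cooldown_s
        self.cooldown_until = defaultdict(float)  # provider -> unix timestamp
        self.rr_index = defaultdict(int)          # route -> rotation offset

    def candidates(self, route):
        # Round-robin: rotate the provider list per route, then drop cooled-down ones.
        providers = self.routes[route]
        start = self.rr_index[route] % len(providers)
        self.rr_index[route] += 1
        rotated = providers[start:] + providers[:start]
        now = time.time()
        return [p for p in rotated if self.cooldown_until[p] <= now]

    def chat(self, route, prompt, call):
        # `call(provider, prompt)` raises on provider failure (429/5xx/missing key).
        for provider in self.candidates(route):
            try:
                return provider, call(provider, prompt)
            except Exception:
                # Park the failing provider so the next turns skip it for a while.
                self.cooldown_until[provider] = time.time() + self.cooldown_s
        raise RuntimeError(f"no provider available for route {route!r}")
```

A failing first candidate is cooled down and the next one is tried in the same turn, which matches the observed "fallback to GitHub Models when other candidates fail" behavior in item 13.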
## In progress

1. Final QA of restaurante flow end-to-end (order, menu links, responsive, publish persistence).
2. Enable additional provider keys (beyond OpenRouter) for universal LLM fallback.
3. Stabilize the free-tier provider mix (monitor Groq/auth errors if they reappear and keep fallback stable).
4. Prepare the first project integration using `/api/llm/chat`.
5. Standalone service smoke test validated (`/assistant` + `/api/llm/health` + chat POST).
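
For step 4, a first integration against `/api/llm/chat` could look like the sketch below. The endpoint path and the `:5055` base come from the notes above; the JSON field names (`message`, `session_id`, `route`) are assumptions and should be checked against `tools/llm_universal/api_server.py` before use.

```python
import json
from urllib import request

API_BASE = "http://127.0.0.1:5055"  # service address from this session log


def build_chat_request(message, session_id=None, route="agent"):
    # Field names below are assumed, not confirmed against api_server.py.
    body = {"message": message, "route": route}
    if session_id:
        body["session_id"] = session_id  # reuse to keep server-side context
    return request.Request(
        f"{API_BASE}/api/llm/chat",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Usage (requires the service running on :5055):
# resp = request.urlopen(build_chat_request("hola", session_id="demo"))
# print(json.load(resp))
```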
## Blockers

1. xAI endpoints returned `403` with the provided key; need a valid xAI API key and/or the correct base URL from the xAI console.
2. OpenAI provider added but not validated yet (requires `OPENAI_API_KEY`).
3. Agent UI occasionally shows raw tool-call JSON (e.g. `{"type":"tool_call",...}`) instead of executing the tool and returning a final natural-language response.
## Next 3 steps

1. If `:5055` goes down again, inspect `tools/llm_universal/_api_err.log` for crash details and adjust autostart to restart on failure.
2. Persist and auto-restore the last `session_id` in the web UI so context resumes automatically after a PC restart.
3. Fix the agent tool-call parser/loop so raw JSON is never rendered to the user and every valid tool call is executed before the final answer.
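
The persistence in step 2 could be sketched server-side as below (in the browser the equivalent would be `localStorage`). The file location and helper names are hypothetical, chosen only for illustration.

```python
import json
from pathlib import Path

# Hypothetical location next to the service; not a path the project defines.
STATE_FILE = Path("tools/llm_universal/last_session.json")


def save_session_id(session_id, path=STATE_FILE):
    # Record the active session so a restart can resume the same context.
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"session_id": session_id}), encoding="utf-8")


def load_session_id(path=STATE_FILE):
    # Returns None on first run or if the state file is missing/corrupt.
    try:
        return json.loads(path.read_text(encoding="utf-8")).get("session_id")
    except (FileNotFoundError, json.JSONDecodeError):
        return None
```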
## Quick handoff template (copy and fill at close)

### What changed today