2026-05-04
aide crossed an important line: it stopped being only a dashboard for AI
coding history and became a loop for turning that history into reviewed project
memory.
The original product shape was descriptive. Claude Code and Codex logs flow into SQLite, then the dashboard shows cost, token usage, tool patterns, sessions, projects, and diagnostic views. That is useful, but it mostly answers “what happened?” The more interesting question is whether previous AI work can make the next AI session start with better context.
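A minimal sketch of the descriptive layer, assuming a hypothetical `sessions` table — the real aide schema and column names aren't shown here:

```python
import sqlite3

# Hypothetical schema for illustration; aide's actual tables may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (
    id            TEXT PRIMARY KEY,
    project       TEXT,
    cost_usd      REAL,
    input_tokens  INTEGER,
    output_tokens INTEGER
);
""")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?, ?, ?)",
    [("s1", "aide", 0.42, 12000, 3000),
     ("s2", "aide", 0.10, 4000, 900),
     ("s3", "blog", 0.05, 1500, 400)],
)

# The "what happened?" view: per-project cost and token rollups.
rows = conn.execute("""
    SELECT project,
           ROUND(SUM(cost_usd), 2)           AS cost,
           SUM(input_tokens + output_tokens) AS tokens
    FROM sessions
    GROUP BY project
    ORDER BY cost DESC
""").fetchall()
```

This kind of aggregate is where a log dashboard stops; everything after this point in the post is about going further than the rollup.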
The new path is:

Claude/Codex logs -> normalized sessions -> investigation queue
  -> digest proposals -> reviewed artifacts -> runbook/brief previews
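The shape of that path can be sketched as composed stages. Everything here is invented for illustration (stage names, fields); the point is only that the trust gate sits between normalization and proposal:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    id: str
    project: str
    flags: list = field(default_factory=list)

def normalize(raw_log: dict) -> Session:
    # Raw Claude/Codex log -> one normalized session record.
    return Session(id=raw_log["id"], project=raw_log.get("project", "unknown"))

def investigate(session: Session) -> Session:
    # Trust gate: flag shaky inputs before they can become memory.
    if session.project == "unknown":
        session.flags.append("weak-attribution")
    return session

def digest(session: Session) -> list:
    # Propose artifacts from the session; nothing is saved by default.
    return [{"kind": "decision", "session": session.id, "status": "proposed"}]

proposals = digest(investigate(normalize({"id": "s1"})))
```

The review step is deliberately missing from the chain: it is a human action, not another function call.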
The investigation queue came first because the inputs need a trust gate. If project attribution, command classification, edit detection, or error categories are shaky, durable memory just preserves bad assumptions. Sessions now get flagged for weak attribution, permission friction, file-access failures, expensive no-edit work, suspiciously low active time, and residual error categories before anything becomes a durable artifact.
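The flags read like a checklist, so here is one as code. The thresholds and field names below are invented for illustration, not aide's actual heuristics:

```python
def flag_session(s: dict) -> list[str]:
    """Return trust-gate flags for a normalized session dict."""
    flags = []
    if s.get("project") in (None, "", "unknown"):
        flags.append("weak-attribution")
    if s.get("permission_denials", 0) >= 3:          # threshold is a guess
        flags.append("permission-friction")
    if s.get("file_access_errors", 0) > 0:
        flags.append("file-access-failures")
    if s.get("cost_usd", 0.0) > 1.0 and s.get("files_edited", 0) == 0:
        flags.append("expensive-no-edit")
    if s.get("active_seconds", 0) < 30:              # "suspiciously low"
        flags.append("low-active-time")
    return flags

flags = flag_session({"project": "", "cost_usd": 2.5,
                      "files_edited": 0, "active_seconds": 10})
# -> ["weak-attribution", "expensive-no-edit", "low-active-time"]
```

A flagged session still exists in the dashboard; it just cannot silently feed the memory layer.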
Above that, aide now has a semantic artifact schema for decisions, setup steps,
credential steps, verification recipes, known mistakes, risky actions, future
agent instructions, and planning signals. A digest command proposes artifacts
from a session, but does not save them by default. Saved proposals go through a
human review flow in the CLI or /artifacts dashboard. Only accepted artifacts
feed generated Markdown.
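A sketch of the artifact schema and the accept/reject gate, with the kinds taken from the list above; the type names and review function are assumptions, not aide's real API:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    DECISION = "decision"
    SETUP_STEP = "setup_step"
    CREDENTIAL_STEP = "credential_step"
    VERIFICATION_RECIPE = "verification_recipe"
    KNOWN_MISTAKE = "known_mistake"
    RISKY_ACTION = "risky_action"
    AGENT_INSTRUCTION = "agent_instruction"
    PLANNING_SIGNAL = "planning_signal"

class Status(Enum):
    PROPOSED = "proposed"
    ACCEPTED = "accepted"
    REJECTED = "rejected"

@dataclass
class Artifact:
    kind: Kind
    text: str
    session_id: str
    status: Status = Status.PROPOSED  # digest never saves as accepted

def review(artifact: Artifact, accept: bool) -> Artifact:
    # The human decision is the only path from proposal to durable memory.
    artifact.status = Status.ACCEPTED if accept else Status.REJECTED
    return artifact

a = review(Artifact(Kind.KNOWN_MISTAKE,
                    "migration ran against the wrong database", "s1"),
           accept=True)
```

Defaulting `status` to `PROPOSED` encodes the core rule in the type: there is no constructor path that produces an accepted artifact without a review call.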
The final loop is now visible: /runbook previews project runbooks generated
from accepted artifacts, and /brief generates task-specific handoff context
from the same accepted knowledge. The whole thing still runs locally and makes
zero LLM calls.
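The generation step is plain templating over accepted artifacts, roughly like this — the section grouping and output shape are assumptions about what a runbook preview might look like, not aide's actual output:

```python
def render_runbook(project: str, artifacts: list[dict]) -> str:
    """Render accepted artifacts into a Markdown runbook preview."""
    accepted = [a for a in artifacts if a["status"] == "accepted"]
    lines = [f"# Runbook: {project}", ""]
    for kind in ("setup_step", "verification_recipe", "known_mistake"):
        section = [a for a in accepted if a["kind"] == kind]
        if section:
            lines.append(f"## {kind.replace('_', ' ').title()}")
            lines += [f"- {a['text']}" for a in section]
            lines.append("")
    return "\n".join(lines)

md = render_runbook("aide", [
    {"kind": "setup_step", "text": "run uv sync before first launch",
     "status": "accepted"},
    {"kind": "known_mistake", "text": "digest without review",
     "status": "proposed"},  # proposed-only: must not appear in output
])
```

Because only accepted artifacts reach the template, the zero-LLM claim holds: the expensive reasoning already happened in past sessions, and this step just assembles what a human signed off on.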
The invariant that changed is subtle: logs are no longer treated only as analytics exhaust. They are evidence. Evidence can propose memory, but it should not become memory without review. A compounding AI tool needs that middle layer; otherwise every future brief inherits the parser’s mistakes and the agent’s confident guesses.