2026-03-06

A mutable predictions design had shipped two days earlier and, after a full day in production, had produced zero predictions. The intent had been: one active prediction per entity per horizon, with update verbs (reinforce, weaken, reverse) writing to a prediction_states table as new signals arrived. The volume gate _should_resynthesize ran before the prediction LLM. With the gate closed on most cycles, the LLM never decided anything. The model couldn’t even be wrong; it never spoke.
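The failure mode is easy to reproduce. A minimal sketch of the broken flow, with stand-in implementations: `_should_resynthesize` is the real gate's name, but its threshold and the `decide` stub here are hypothetical.

```python
def _should_resynthesize(signal_count: int, threshold: int = 5) -> bool:
    # Stand-in volume gate: only fires on a burst of new signals.
    return signal_count >= threshold

def decide(entity: str, signal_count: int) -> str:
    # Stand-in for the prediction LLM call.
    return f"prediction for {entity}"

def process_cycle(entity: str, signal_count: int):
    # The gate ran *before* the LLM, so quiet cycles produced no call
    # at all -- the model never even got the chance to answer "skip".
    if not _should_resynthesize(signal_count):
        return None
    return decide(entity, signal_count)

# A typical production day: mostly 1-2 new signals per cycle.
results = [process_cycle("AAPL", n) for n in [1, 2, 1, 0, 2, 1]]
print(sum(r is not None for r in results))  # 0 predictions all day
```

With realistic per-cycle signal volume the LLM is never invoked, which matches the day of zero predictions.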

The prediction layer is supposed to commit directional claims on tickers with active signals, then score them against actual price movement to build a track record. Mutable predictions optimize for the model’s convenience: clean state, no replay. They’re the wrong shape for a track record. A prediction you can edit is one you can’t be wrong about, and arbitrary target dates let the model pick a horizon that flatters the result.
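Why immutability matters for a track record can be shown in a few lines: a frozen claim with a pre-committed target date can be graded mechanically against price movement. The names below (`Prediction`, `score`) are illustrative, not the actual schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)  # immutable: no post-hoc edits to the claim
class Prediction:
    ticker: str
    direction: str     # "up" or "down"
    created: date
    horizon_days: int  # fixed at creation; target date is derived

    @property
    def target_date(self) -> date:
        return self.created + timedelta(days=self.horizon_days)

def score(pred: Prediction, price_at_creation: float,
          price_at_target: float) -> bool:
    # A hit iff the price moved in the claimed direction between
    # creation and the pre-committed target date.
    moved_up = price_at_target > price_at_creation
    return moved_up == (pred.direction == "up")

p = Prediction("AAPL", "up", date(2026, 3, 6), 7)
print(p.target_date)           # 2026-03-13
print(score(p, 100.0, 104.5))  # True
```

An editable prediction breaks both halves of this: mutate the direction and the hit rate is meaningless; let the model choose `target_date` freely and it can grade itself on a flattering horizon.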

The redesign replaced the mutable table with predictions_v2: immutable, timestamped claims at three fixed horizons (1w / 1m / 1q, mapped to 7 / 30 / 90 days). Every entity-update cycle, LLM #1 fires for any entity with new signals, no volume gate. The output is create (writes 3 rows, one per horizon, same direction/conviction/thesis), invalidate (closes a specific active prediction and links the replacement via parent_id), or skip (the default, and what the LLM was supposed to be deciding all along). Target dates are pre-computed from the horizon. Entity state synthesis stays gated by _should_resynthesize as a separate LLM #2.
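A minimal sketch of the predictions_v2 write path. The horizon map and the three outcomes come from the design above; the row shape and function names are assumptions.

```python
from datetime import date, timedelta
import uuid

HORIZONS = {"1w": 7, "1m": 30, "1q": 90}  # fixed horizons, in days

def create(entity: str, direction: str, conviction: float, thesis: str,
           today: date) -> list[dict]:
    # One create decision fans out into three immutable rows, one per
    # horizon, sharing direction/conviction/thesis. Target dates are
    # pre-computed -- the model never picks its own deadline.
    return [{
        "id": str(uuid.uuid4()),
        "entity": entity,
        "horizon": label,
        "target_date": today + timedelta(days=days),
        "direction": direction,
        "conviction": conviction,
        "thesis": thesis,
        "parent_id": None,
        "active": True,
    } for label, days in HORIZONS.items()]

def invalidate(old_row: dict, replacement: dict) -> None:
    # Close a specific active prediction and link its replacement.
    old_row["active"] = False
    replacement["parent_id"] = old_row["id"]

rows = create("AAPL", "up", 0.7, "earnings beat", date(2026, 3, 6))
print([r["target_date"].isoformat() for r in rows])
# ['2026-03-13', '2026-04-05', '2026-06-04']
```

Skip needs no code path at all: the LLM returns it and nothing is written, which is the point.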

Deleted: the mutable state table, the change-log table, the three update verbs, and the reverse_prediction_state() helper. Cost projected at ~$0.25-0.32/day (~$8-10/month) for 120-160 Haiku calls/day, against $0/day in the broken design.
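A back-of-envelope check on that projection. The per-call figure is implied by the numbers above, not measured; real Haiku cost varies with token counts.

```python
calls_per_day = (120, 160)
implied_per_call = 0.32 / 160  # ~$0.002 per call, implied by $0.32/160
daily = [n * implied_per_call for n in calls_per_day]
monthly = [round(d * 30, 1) for d in daily]
print(monthly)  # ~[7.2, 9.6] -- consistent with the ~$8-10/month figure
```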

The mistake was placing a volume gate before a model whose entire job was to be the gate. If the LLM is supposed to decide whether a claim is warranted, every other gate in front of it is a vote for not letting the model do its job. Architecturally, the model has to be allowed to say “skip.” The system can’t pre-empt that decision and call it conservatism.