
11 posts tagged with "memory"


Agent Memory Contamination: How One Bad Tool Response Poisons a Whole Session

· 10 min read
Tian Pan
Software Engineer

Your agent completes 80% of a multi-step research task correctly, then confidently delivers a conclusion that's completely wrong. You trace back through the logs and find the culprit at step three: a tool call returned stale data, the agent integrated that data as fact, and every subsequent reasoning step built on that poisoned premise. By the end of the session, the agent was correct about everything except the thing that mattered.

This is agent memory contamination — and it's one of the most insidious reliability failures in production agentic systems. Unlike a crash or timeout, it produces a confident wrong answer. Observability tooling records a successful run. The user walks away with bad information.

Ghost Context: How Contradictory Beliefs Break Long-Running Agent Memory

· 11 min read
Tian Pan
Software Engineer

Your agent has talked to the same user 400 times. Six months ago she said she preferred Python. Three months ago her team migrated to Go. Last week she mentioned a new TypeScript project. All three facts are sitting in your vector store right now — semantically similar, chronologically unordered, equally weighted. The next time she asks for code help, your agent retrieves all three, hands a contradictory mess to the model, and confidently generates Python with Go idioms for a TypeScript context.

This is ghost context: stale beliefs that never die, retrieved alongside their replacements, silently corrupting agent reasoning.

The problem is underappreciated because it doesn't produce visible errors. The agent doesn't crash. It doesn't refuse to respond. It produces fluent, confident output that's just subtly, expensively wrong.
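To make the retrieval failure concrete, here is a deliberately toy sketch (the similarity scores are made up and the store is a plain list, not any particular vector database): the timestamps exist, but the ranking never consults them, so every superseded belief rides along with its replacement.

```python
from datetime import datetime

# Toy memory store: each belief carries a timestamp, but naive retrieval
# ranks purely by semantic similarity, so all three land in the prompt.
facts = [
    {"text": "User prefers Python",               "stored": datetime(2024, 6, 1),   "similarity": 0.91},
    {"text": "User's team migrated to Go",        "stored": datetime(2024, 9, 1),   "similarity": 0.89},
    {"text": "User started a TypeScript project", "stored": datetime(2024, 11, 20), "similarity": 0.90},
]

def naive_retrieve(facts, k=3):
    # Chronology is ignored entirely: equally weighted, contradictory beliefs.
    return sorted(facts, key=lambda f: f["similarity"], reverse=True)[:k]

for f in naive_retrieve(facts):
    print(f["stored"].date(), f["text"])
# All three contradictory preferences go to the model with no ordering signal.
```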

Agent Memory Drift: Why Reconciliation Is the Loop You're Missing

· 11 min read
Tian Pan
Software Engineer

The most dangerous thing your long-running agent does is also the thing it does most confidently: answer from memory. The customer's address changed last Tuesday. The ticket the agent thinks is "open" was closed yesterday by a human. The product feature the agent has tidy explanatory notes about shipped in a different shape than the spec the agent read three weeks ago. None of this is hallucination in the textbook sense — the model is recalling exactly what it stored. The world simply moved while the agent was looking elsewhere.

Most teams treat memory like a write problem: what should the agent remember, how do we summarize, what's the embedding strategy, how do we keep the store from blowing up. That framing produces architectures that grow more confident as they grow more wrong. The harder problem — the one that determines whether your agent stays useful past week three — is reconciliation: the explicit, ongoing loop that compares what the agent thinks is true against what the underlying systems say is true right now.
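As a rough illustration of what that loop can look like (not the architecture this post lays out), here is a minimal reconciliation pass where `fetch_live` stands in for whatever source-of-truth lookup your systems actually expose:

```python
# A minimal reconciliation loop sketch: periodically re-check each remembered
# fact against the system of record, and overwrite or drop the memory when
# the world has moved. Key names and the fetch_live callable are assumptions.
def reconcile(memory: dict, fetch_live) -> list[str]:
    drifted = []
    for key, remembered in list(memory.items()):
        live = fetch_live(key)
        if live is None:
            memory.pop(key)          # the entity no longer exists; forget it
            drifted.append(key)
        elif live != remembered:
            memory[key] = live       # the world changed; memory follows
            drifted.append(key)
    return drifted                   # surface what changed for audit/logging

# Example: the ticket the agent thinks is "open" was closed yesterday.
memory = {"ticket:4812:status": "open", "customer:77:city": "Austin"}
live_state = {"ticket:4812:status": "closed", "customer:77:city": "Austin"}
print(reconcile(memory, live_state.get))   # -> ['ticket:4812:status']
```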

Chat History Is a Database. Stop Treating It Like Scrollback.

· 11 min read
Tian Pan
Software Engineer

The most common production complaint about agentic products is some version of "it forgot what we said." The complaint shows up at turn eight, or fifteen, or thirty — never at turn two — and the team's first instinct is always the same: bigger context window. Which is the wrong instinct, because the bug is not in the model. The bug is that the team is treating conversation history as scrollback in a terminal — append a line, render the tail, truncate when full — when what they actually built, without realizing it, is a read-heavy database with append-only writes, a hot working set, an eviction policy hiding inside their truncation rule, and a query pattern that depends on the kind of question being asked. Once you accept that, the entire shape of the problem changes.

The scrollback model is so seductive because the chat UI looks like a transcript. Messages flow downward, the user reads them top-to-bottom, and the natural way to feed the model is to splice the latest N turns into the prompt. The data structure feels free. There's no schema, no index, no query — just append, render, repeat. And for the first few turns, every architecture works. The model has the whole conversation in its context, the bill is small, and the demo is delightful.
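The contrast is easier to see in code. The sketch below is deliberately toy (a list for the log, substring matching for the query), but it shows the shift from "render the tail" to "query what this turn needs":

```python
# Same append-only log, two read paths: the scrollback model truncates,
# the database model queries. The scoring here is a stand-in, not the
# post's actual retrieval strategy.
class ChatHistory:
    def __init__(self):
        self._log = []                       # append-only writes

    def append(self, role: str, text: str):
        self._log.append({"turn": len(self._log), "role": role, "text": text})

    def tail(self, n: int):
        # Scrollback: the eviction policy is just truncation.
        return self._log[-n:]

    def query(self, needle: str, n: int = 3):
        # Database: which turns does *this* question actually need?
        hits = [m for m in self._log if needle.lower() in m["text"].lower()]
        return hits[-n:]

h = ChatHistory()
h.append("user", "Our invoice ID is INV-2291.")
for i in range(30):
    h.append("assistant", f"filler turn {i}")
h.append("user", "What was that invoice ID again?")
print(h.tail(5))              # the invoice turn fell outside the window
print(h.query("invoice"))     # a query over the full log still finds it
```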

Session Stitching: Why Your Conversation-ID Is a Lie

· 11 min read
Tian Pan
Software Engineer

A user starts negotiating a contract with your agent on her desktop at 9 a.m. She gets a Slack ping, switches to her phone over lunch to ask one clarifying question, and reopens the desktop tab at 4 p.m. to revise the draft. To her, that was one task — three hours of working through one contract. To your system, that was three sessions on two devices, each with its own conversation-id, each with its own memory window, each presenting a fresh greeting and asking her to re-paste the draft she'd already discussed twice.

The bug is not in the model. The bug is that your platform encoded "session" — a transport-layer artifact about a single connection — as the unit of context, while your user encoded "task" — the contract — as the unit of context. Every framework on the market quietly conflates the two, and the gap between them is where half of agent UX disappears.
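One way to picture the fix, as a hedged sketch rather than any framework's API: resolve every incoming conversation-id to a task-id and load memory by task. The stitching rule below (same user plus the same task handle) is purely illustrative.

```python
from collections import defaultdict

# Sessions are transport artifacts; tasks are the user's unit of work.
# This registry maps many conversation-ids onto one task-id.
class TaskRegistry:
    def __init__(self):
        self._task_of = {}                    # (user_id, task_handle) -> task_id
        self._sessions = defaultdict(list)    # task_id -> [conversation_ids]
        self._next = 0

    def resolve(self, user_id: str, task_handle: str, conversation_id: str) -> str:
        key = (user_id, task_handle)
        if key not in self._task_of:
            self._task_of[key] = f"task-{self._next}"
            self._next += 1
        task_id = self._task_of[key]
        self._sessions[task_id].append(conversation_id)
        return task_id                        # load memory by task, not session

reg = TaskRegistry()
print(reg.resolve("ada", "contract-acme", "desktop-9am"))    # task-0
print(reg.resolve("ada", "contract-acme", "phone-noon"))     # task-0 again
print(reg.resolve("ada", "contract-acme", "desktop-4pm"))    # still task-0
```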

Agent Memory Schema Evolution Is Protobuf on Hard Mode

· 11 min read
Tian Pan
Software Engineer

The first painful agent-memory migration always teaches the same lesson: there were two schemas, and you only migrated one of them. The storage layer is fine — every row was rewritten, every key is in its new shape, the backfill job logged success. The agent is broken anyway. It keeps writing to user.preferences.theme, retrieves nothing, then helpfully synthesizes a default from context as if the key never existed. The migration runbook reports green. Users report stale memory.

The asymmetry is structural. A traditional service that depends on a renamed column gets a hard error and you fix it. An agent that depends on a renamed memory key gets a soft miss and confabulates around it. The schema lives in two places — your store and the model's context — and you can only migrate one of them with a SQL script.

Protobuf solved a version of this problem twenty years ago by codifying an additive-only discipline: fields are forever, numbers are forever, wire types never change, and removal is replaced with deprecation. That discipline is the right starting point for agent memory, with one extra constraint that makes it harder. Protobuf receivers ignore unknown fields by design. Agents don't.
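Here is what that additive discipline can look like for memory keys, as a sketch with assumed key names rather than a prescribed schema: renames become append-only aliases, reads resolve through the alias map, and nothing is ever deleted.

```python
# Additive key evolution: never drop a key, register an alias from the old
# name to the new one, and resolve every read and write through the map so
# the agent's in-context references keep working after the rename.
ALIASES = {"user.preferences.theme": "user.ui.theme"}   # old -> new, append-only

def canonical(key: str) -> str:
    seen = set()
    while key in ALIASES and key not in seen:   # follow renames, guard cycles
        seen.add(key)
        key = ALIASES[key]
    return key

store = {"user.ui.theme": "dark"}

def read(key: str):
    return store.get(canonical(key))

def write(key: str, value):
    store[canonical(key)] = value

print(read("user.preferences.theme"))    # -> 'dark', the old key still resolves
write("user.preferences.theme", "light")
print(store)                             # new writes land on the new key only
```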

The Silent Corruption Problem in Parallel Agent Systems

· 12 min read
Tian Pan
Software Engineer

When a multi-agent system starts behaving strangely — giving inconsistent answers, losing track of tasks, making decisions that contradict earlier reasoning — the instinct is to blame the model. Tweak the prompt. Switch to a stronger model. Add more context.

The actual cause is often more mundane and more dangerous: shared state corruption from concurrent writes. Two agents read the same memory, both compute updates, and one silently overwrites the other. The resulting state is technically valid — no exceptions thrown, no schema violations — but semantically wrong. Every agent that reads it afterward reasons correctly over incorrect information.

This failure mode is invisible at the individual operation level, hard to reproduce in test environments, and nearly impossible to distinguish from model error by looking at outputs alone. O'Reilly's 2025 research on multi-agent memory engineering found that 36.9% of multi-agent system failures stem from inter-agent misalignment — agents operating on inconsistent views of shared information. It's not a theoretical concern.
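The lost update is easy to reproduce in a few lines. The toy store below is illustrative, not a recommended design; it shows the blind read-modify-write that drops one agent's finding, and a version-checked write that at least makes the conflict visible.

```python
import threading

class SharedMemory:
    def __init__(self):
        self._value, self._version = [], 0
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return list(self._value), self._version

    def write_unchecked(self, value):
        with self._lock:                       # blind write: last writer wins
            self._value = value
            self._version += 1

    def write_if_version(self, value, expected_version) -> bool:
        with self._lock:                       # optimistic, compare-and-swap style
            if self._version != expected_version:
                return False                   # someone else wrote; retry or merge
            self._value, self._version = value, self._version + 1
            return True

# Agent A and Agent B both read the same snapshot, then both append a finding.
mem = SharedMemory()
snap_a, ver_a = mem.read()
snap_b, ver_b = mem.read()
mem.write_unchecked(snap_a + ["finding from A"])
mem.write_unchecked(snap_b + ["finding from B"])   # silently drops A's finding
print(mem.read()[0])                               # ['finding from B']

# With version-checked writes, the second write fails instead of overwriting.
mem2 = SharedMemory()
snap_a, ver_a = mem2.read()
snap_b, ver_b = mem2.read()
print(mem2.write_if_version(snap_a + ["finding from A"], ver_a))  # True
print(mem2.write_if_version(snap_b + ["finding from B"], ver_b))  # False: conflict surfaced
```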

The Three Memory Systems Every Production AI Agent Needs

· 10 min read
Tian Pan
Software Engineer

Most AI agents fail the same way: they work perfectly in demos and fall apart after the tenth real conversation. The agent that helped a user configure a billing integration last Tuesday has no idea who that user is today. It asks for their company name again. Then their plan tier. Then re-explains concepts the user already knows. The experience degrades from "useful assistant" to "chatbot with amnesia."

The instinct is to throw more context at the problem — stuff the conversation history into the prompt and call it solved. That works until it doesn't. At scale, full-context approaches become prohibitively expensive, and more troublingly, performance degrades as input grows. Research shows LLM accuracy drops measurably as context length increases, even within a model's advertised limits. A 1M-token context window is not a memory system.

The agents that work in production treat memory as a first-class architectural concern, not an afterthought. And the ones that get it right distinguish between three fundamentally different types of information that need to persist — each with different storage patterns, retrieval strategies, and decay characteristics.

Memory Architectures for Production AI Agents

· 10 min read
Tian Pan
Software Engineer

Most teams add memory to their agents as an afterthought — usually after a user complains that the agent forgot something it was explicitly told three sessions ago. At that point, the fix feels obvious: store conversations somewhere and retrieve them later. But this intuition leads to systems that work in demos and fall apart in production. The gap between a memory system that stores things and one that reliably surfaces the right things at the right time is where most agent projects quietly fail.

Memory architecture is not a peripheral concern. For any agent handling multi-session interactions — customer support, coding assistants, research tools, voice interfaces — memory is the difference between a stateful assistant and a very expensive autocomplete. Getting it wrong doesn't produce crashes; it produces agents that feel subtly broken, that contradict themselves, or that confidently repeat outdated information the user corrected two weeks ago.

LLM-Powered Autonomous Agents: The Architecture Behind Real Autonomy

· 8 min read
Tian Pan
Software Engineer

Most teams that claim to have "agents in production" don't. Surveys consistently show that around 57% of engineering organizations have deployed AI agents — but when you apply rigorous criteria (the LLM must plan, act, observe feedback, and adapt based on results), only 16% of enterprise deployments and 27% of startup deployments qualify as true agents. The rest are glorified chatbots with tool calls bolted on.

This gap isn't about model capability. It's about architecture. Genuine autonomous agents require three interlocking subsystems working in concert: planning, memory, and tool use. Most implementations get one right, partially implement a second, and ignore the third. The result is a system that works beautifully in demos and fails unpredictably in production.

Context Engineering for Personalization: How to Build Long-Term Memory Into AI Agents

· 8 min read
Tian Pan
Software Engineer

Most agent demos are stateless. A user asks a question, the agent answers, the session ends — and the next conversation starts from scratch. That's fine for a calculator. It's not fine for an assistant that's supposed to know you.

The gap between a useful agent and a frustrating one often comes down to one thing: whether the system remembers what matters. This post breaks down how to architect durable, personalized memory into production AI agents — covering the four-phase lifecycle, layered precedence rules, and the specific failure modes that will bite you if you skip the engineering.