Agent Memory Contamination: How One Bad Tool Response Poisons a Whole Session
Your agent completes 80% of a multi-step research task correctly, then confidently delivers a conclusion that's completely wrong. You trace back through the logs and find the culprit at step three: a tool call returned stale data, the agent integrated that data as fact, and every subsequent reasoning step built on that poisoned premise. By the end of the session, the agent was correct about everything except the thing that mattered.
This is agent memory contamination — and it's one of the most insidious reliability failures in production agentic systems. Unlike a crash or a timeout, contamination produces a confident wrong answer: your observability tooling records a successful run, and the user walks away with bad information.
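The failure chain above can be sketched in a few lines. This is a hedged, minimal illustration, not a real framework: `stale_price_tool`, `AgentMemory`, and the step strings are all hypothetical names invented for this example. The point is structural: the tool output is written into memory with no freshness check, and every later step is conditioned on the full memory, so the flawless arithmetic at step four inherits the poisoned premise from step three.

```python
def stale_price_tool():
    # Hypothetical tool that silently serves a cached quote instead of a live one.
    return {"ticker": "ACME", "price": 12.50, "as_of": "2023-01-05"}  # stale


class AgentMemory:
    """Toy append-only memory: everything recorded is treated as fact."""

    def __init__(self):
        self.facts = []

    def record(self, fact):
        # No provenance or freshness check: tool output becomes ground truth.
        self.facts.append(fact)

    def context(self):
        # Every subsequent reasoning step reads the full, unfiltered memory.
        return list(self.facts)


memory = AgentMemory()
memory.record("step 1: user wants a valuation of ACME")
memory.record("step 2: plan -> fetch price, multiply by share count")

price = stale_price_tool()["price"]
memory.record(f"step 3: tool says price = {price}")  # contamination enters here

# Steps four onward build on the contaminated context: the math is correct,
# the premise is wrong, and nothing in the run signals a failure.
shares = 1_000_000
memory.record(f"step 4: valuation = {price * shares}")
```

Nothing in this loop errors or times out, which is exactly why a trace of it looks like a successful run.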
