Epistemic Trust in Agent Chains: How Uncertainty Compounds Through Multi-Step Delegation
Most teams building multi-agent systems spend a lot of time thinking about authorization trust: what Agent B is allowed to do, which tools it can call, what data it can access. That's an important problem. But there's a second trust problem that gets far less attention, and it's the one that actually kills production systems.
The problem is epistemic: when Agent A delegates a task to Agent B and gets back an answer, how much should A believe what B returned?
This isn't a question of whether B was authorized to answer. It's a question of whether B actually could.
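To make the compounding concrete, here is a minimal sketch (all names hypothetical, not from any real framework) of the simplest possible model: each agent in a delegation chain reports a confidence alongside its answer, and if every step must be correct for the final result to be correct, and the steps are treated as independent, the chain's overall confidence is the product of the per-step confidences.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    """Hypothetical wrapper for what a delegated agent hands back."""
    answer: str
    confidence: float  # agent's self-reported probability its answer is correct

def chain_confidence(results: list[AgentResult]) -> float:
    """Naive model: assuming independent steps where every step must be
    right for the final answer to be right, confidences multiply."""
    c = 1.0
    for r in results:
        c *= r.confidence
    return c

# Three-step delegation, each step 90% reliable on its own:
steps = [
    AgentResult("parsed the request", 0.9),
    AgentResult("fetched the records", 0.9),
    AgentResult("summarized the findings", 0.9),
]
print(round(chain_confidence(steps), 3))  # 0.729
```

Even under these generous assumptions, three 90%-reliable steps leave the top-level agent with well under 75% confidence in the final answer, which is the compounding problem the title refers to.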
