
30 posts tagged with "context-engineering"


Context Compression Artifacts: What Your Summarization Middleware Is Silently Losing

· 10 min read
Tian Pan
Software Engineer

Your agent said "Do NOT use eval()" at turn three. By turn thirty, it called eval(). Your insurance processor said "Never approve claims without valid ID." After fifteen compression cycles, it approved one. These aren't model failures — they're compression failures. The agent's reasoning was fine. The summarization middleware threw away the one constraint that mattered.

Context compression is now standard infrastructure in long-running agent systems. When conversation history grows too large for the context window, you compress it — roll up older turns into a summary, trim, chunk, or distill. The problem is that modern summarizers don't destroy information randomly. They destroy it predictably, along specific fault lines, and most teams only discover those fault lines in production.
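The teaser version of the fix: separate what can be summarized from what must survive verbatim. Below is a minimal sketch of that split, assuming a generic summarize() callable and a deliberately crude regex heuristic for spotting hard constraints; a production extractor would be a dedicated LLM pass or explicit constraint tagging at write time.

```python
import re

# Crude heuristic (an assumption of this sketch): constraint-shaped sentences.
CONSTRAINT_PATTERN = re.compile(
    r"\b(?:never|do not|don't|must not|always)\b[^.]*\.?", re.IGNORECASE
)

def extract_constraints(turns: list[str]) -> list[str]:
    """Collect phrases that look like hard constraints, before they can be lost."""
    found: list[str] = []
    for turn in turns:
        found.extend(m.group(0).strip() for m in CONSTRAINT_PATTERN.finditer(turn))
    return list(dict.fromkeys(found))  # dedupe, preserve order

def compress(turns: list[str], summarize) -> str:
    """Summarize old turns, then re-inject pinned constraints verbatim."""
    constraints = extract_constraints(turns)
    summary = summarize(turns)  # your existing summarization call
    pinned = "\n".join(f"- {c}" for c in constraints)
    return f"{summary}\n\nPinned constraints (verbatim, never summarized):\n{pinned}"
```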

The Context Length Arms Race: Why Filling the Window Is the Wrong Goal

· 7 min read
Tian Pan
Software Engineer

Every six months, a model ships with a bigger context window. GPT-4.1 hit 1 million tokens. Gemini 2.5 followed at 2 million. Llama 4 is now advertising 10 million. The implicit promise is: dump everything in, stop worrying about what to include, let the model figure it out.

That promise does not hold up in production. A 2024 study evaluating 18 leading LLMs found that every single model showed performance degradation as input length increased. Not some models — every model. The context window is a ceiling, not a floor, and the teams that treat it as a floor are discovering that the hard way.

The Context Window Is an API Surface: Treat Your Prompt Structure as a Contract

· 9 min read
Tian Pan
Software Engineer

Six months into a production LLM feature, an engineer files a bug: the model started giving incorrect output sometime last quarter. Nobody remembers changing the prompt. The git blame shows it was "cleaned up for readability." The previous version is gone. Debugging begins from scratch.

This is the moment teams discover that their context window was never really engineered — it was just assembled.

The context window is the contract between your system and the model. Every token that enters it — system instructions, retrieved documents, conversation history, tool schemas, the user query — is input to a function call that costs money, takes time, and produces non-deterministic output. Yet most teams treat context composition as an implementation detail rather than an API surface. Prompts get edited in place, without versioning. Sections grow by accumulation. Nobody owns the layout. Changes propagate silently. The debugging experience is worse than anything from the pre-LLM era, because at least stack traces tell you what changed.
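What "treat it as a contract" looks like in practice is small: name the sections, fix their order, and hash the rendered result so every request log can answer "what changed?". A minimal sketch; the section names and ordering are illustrative, not prescriptive.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextLayout:
    """The context window as an explicit contract: named sections, fixed order."""
    system_instructions: str
    tool_schemas: str
    retrieved_documents: str
    conversation_history: str
    user_query: str

    def render(self) -> str:
        # Reordering sections is a contract change, not a readability cleanup.
        return "\n\n".join([
            self.system_instructions,
            self.tool_schemas,
            self.retrieved_documents,
            self.conversation_history,
            self.user_query,
        ])

    def version(self) -> str:
        """Content hash, logged with every request, so 'what changed last
        quarter?' has an answer other than git archaeology."""
        return hashlib.sha256(self.render().encode()).hexdigest()[:12]
```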

The Feature Store Pattern for LLM Applications: Stop Retrieving What You Could Precompute

· 10 min read
Tian Pan
Software Engineer

Most teams building LLM applications eventually converge on the same ad-hoc architecture: a scatter of cron jobs computing user summaries, a vector database queried fresh on every request, a Redis cache added when latency got embarrassing, and three different codebases that all define "user preference" slightly differently. Only later, usually after a production incident, do they recognize what they built: a feature store — a bad one, assembled accidentally.

The feature store is one of the most battle-tested patterns in traditional ML infrastructure. Applied deliberately to LLM context assembly, it eliminates the latency, cost, and consistency problems that plague most retrieval pipelines. This post explains how.
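The shape of the pattern, sketched with an in-memory stand-in for whatever store you would actually use (Redis, Feast, a Postgres table); the names and the staleness policy here are illustrative.

```python
import time

class FeatureStore:
    """In-memory stand-in for the real thing; the interface is the point."""

    def __init__(self) -> None:
        self._rows: dict[str, tuple[float, dict]] = {}

    def put(self, user_id: str, features: dict) -> None:
        self._rows[user_id] = (time.time(), features)

    def get(self, user_id: str, max_age_s: float = 3600.0) -> dict | None:
        row = self._rows.get(user_id)
        if row is None or time.time() - row[0] > max_age_s:
            return None  # stale or missing: caller falls back to live retrieval
        return row[1]

# Offline path (cron or stream job): "user preference" is defined in ONE place.
def refresh_user_features(store: FeatureStore, user_id: str) -> None:
    features = {"summary": "...", "preference": "..."}  # precomputed upstream
    store.put(user_id, features)

# Request path: a key lookup at assembly time, not a fresh vector search.
def assemble_context(store: FeatureStore, user_id: str, query: str) -> str:
    feats = store.get(user_id) or {"summary": "(no precomputed profile)"}
    return f"User profile: {feats['summary']}\n\nQuery: {query}"
```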

The System Prompt Is a Software Interface, Not a Config String

· 9 min read
Tian Pan
Software Engineer

Most teams treat their system prompts the way early web developers treated CSS: paste something that works, modify it carefully to not break anything, commit it to a config file, and hope nobody touches it. Then a new team member "cleans it up," a model upgrade subtly changes behavior, and three weeks later a user files a bug that nobody can reproduce because nobody knows what the prompt actually said last Tuesday.

This isn't a workflow problem. It's a category error. System prompts aren't configuration — they're software interfaces. And until engineering teams treat them as such, the LLM features they build will remain fragile, hard to debug, and impossible to scale.
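Treating the prompt as an interface starts with two mechanical habits: versions you can roll back to, and regression tests for the constraints past incidents taught you to keep. A minimal sketch, with illustrative registry shape and constraint text.

```python
# Prompts as versioned artifacts with rollback targets, not strings edited
# in place. Names and contents here are illustrative.
PROMPTS = {
    "support_agent": {
        "v3": "You are a support agent. Never promise refunds. ...",
        "v2": "You are a support agent. ...",  # kept around as a rollback target
    },
}

def get_prompt(name: str, version: str) -> str:
    return PROMPTS[name][version]

def test_refund_constraint_survives_cleanup():
    """The regression test a past incident taught this team to keep:
    'cleaning up' the prompt must not silently drop the constraint."""
    assert "Never promise refunds" in get_prompt("support_agent", "v3")
```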

Intent Drift in Long Conversations: Why Your Agent's Goal Representation Goes Stale

· 9 min read
Tian Pan
Software Engineer

Most conversations about context windows focus on what the model can hold. The harder problem is what the model does with what it holds — specifically, how it tracks the evolving goal of the person it's talking to.

Intent isn't static. Users start vague, refine iteratively, contradict themselves, digress, and revise. What they need at message 40 is not necessarily what they expressed at message 2. An agent that treats context as a flat append log will accumulate all of that — and still get the current intent wrong.
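The alternative to the flat append log is an explicit, revisable intent state carried alongside the transcript. A minimal sketch; the field names are illustrative, and how you detect a goal change (an LLM pass, explicit user confirmation) is the hard part this sketch assumes away.

```python
from dataclasses import dataclass, field

@dataclass
class IntentState:
    """Explicit goal representation carried alongside the transcript."""
    goal: str = ""
    constraints: list[str] = field(default_factory=list)
    superseded: list[str] = field(default_factory=list)

def update_intent(state: IntentState, new_goal: str | None = None,
                  new_constraint: str | None = None) -> IntentState:
    """Revise the intent in place instead of appending another contradiction."""
    if new_goal and new_goal != state.goal:
        if state.goal:
            state.superseded.append(state.goal)  # history kept, but out of band
        state.goal = new_goal
    if new_constraint:
        state.constraints.append(new_constraint)
    return state

def render_for_context(state: IntentState) -> str:
    """Injected near the top of context each turn: the current intent wins,
    and superseded goals are labeled so the model doesn't revive them."""
    lines = [f"CURRENT GOAL: {state.goal}"]
    lines += [f"CONSTRAINT: {c}" for c in state.constraints]
    lines += [f"SUPERSEDED (do not pursue): {g}" for g in state.superseded]
    return "\n".join(lines)
```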

The 200-Token System Prompt That Beats Your 4000-Token One

· 10 min read
Tian Pan
Software Engineer

A team I worked with spent six months tuning a system prompt to roughly 4,000 tokens. It was their crown jewel: a careful accretion of edge-case handling, formatting rules, persona instructions, fallback behaviors, and a dozen few-shot examples. Then a junior engineer joined, asked why the prompt was so long, and rewrote it in an afternoon. The new version was 200 tokens. On their existing eval suite it scored four points higher. It was also twenty times cheaper on system-prompt tokens, and noticeably faster.

This is not an anecdote about a magic short prompt. It is a pattern I see almost every time I read a production system prompt that has lived past its first quarter. Long prompts grow by accretion, not by design. Every failure mode that surfaced in QA contributed a paragraph. Every stakeholder who watched a demo contributed a tone instruction. Every example that "seemed to help" got pinned to the bottom. The result is a prompt that is longer than the user input it is meant to instruct, full of internal contradictions the model has to silently resolve at inference time, with attention spread thinly across competing demands.

Persona Drift in Long-Running Agent Sessions: Why Your Agent Forgets Who It Is

· 10 min read
Tian Pan
Software Engineer

Most production agent failures look like model errors. The agent starts a session responding correctly to the system prompt — maintaining the right tone, respecting tool constraints, following the defined workflow. Then somewhere around turn 30 or 40, things subtly shift. The agent starts hedging where it should be direct. It calls tools it was told to avoid. It contradicts a decision it made 15 turns earlier. The system prompt hasn't changed, but the agent's behavior has.

This is persona drift: the progressive divergence between an agent's actual behavior and its original system instructions, caused by how transformers attend to increasingly buried context. Research quantifies it precisely — after 8–12 dialogue turns, persona self-consistency metrics degrade by more than 30%. Single-turn agents achieve roughly 90% task accuracy; multi-turn agents running the same tasks fall to around 65%. That 25-point drop isn't a model quality problem you can prompt your way around. It's an architectural property of how attention works over long sequences, and most teams discover it only after they've shipped a feature that degrades silently for hours before a user finally notices.
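The most common mitigation is mechanical: re-anchor the persona at fixed intervals so the system instructions never sink too deep into the transcript. A sketch, where the interval is an assumption you would tune against your own drift measurements, not a universal constant.

```python
REINJECT_EVERY = 10  # assumption: re-anchor before the ~8-12 turn drift window

def build_messages(system_prompt: str, history: list[dict]) -> list[dict]:
    """Assemble the message list with periodic persona re-anchoring."""
    messages = [{"role": "system", "content": system_prompt}]
    for i, turn in enumerate(history):
        messages.append(turn)
        if (i + 1) % REINJECT_EVERY == 0:
            # A compact reminder beats hoping attention reaches turn zero.
            messages.append({
                "role": "system",
                "content": f"Reminder of standing instructions:\n{system_prompt}",
            })
    return messages
```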

Pagination Is a Tool-Catalog Discipline: Why Agents Burn Context on List Returns

· 11 min read
Tian Pan
Software Engineer

Every well-designed HTTP API in your stack returns paginated results. Nobody loads a million rows into memory and hopes for the best. Yet the tools your agent calls return the entire list, and the agent dutifully reads it, because the function signature says list_orders() -> Order[] and the agent has no protocol for "give me the next page" the way a human user has scroll-and-load-more.

The agent burns tokens on rows it could have skipped. The long-tail customer with 50K records hits context-window failures the median customer never sees. The tool author cannot tell from the trace whether the agent needed all those rows or simply could not ask for fewer. And somewhere in your eval suite, the regression that would have flagged this never runs because every test fixture has fewer than 100 records.

Pagination is not a UI affordance. It is a load-shedding primitive — and the agent that consumes a tool without it is reimplementing every SELECT * FROM orders mistake the API designers in your company spent a decade learning to avoid.
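What a paginated tool signature looks like is unglamorous: a cursor the agent can decline to follow. A minimal sketch; fetch_orders stands in for your data layer, and every name here is illustrative.

```python
from dataclasses import dataclass

def fetch_orders(customer_id: str) -> list[dict]:
    """Stand-in for your data layer."""
    return [{"id": i, "customer": customer_id} for i in range(137)]

@dataclass
class Page:
    items: list[dict]
    next_cursor: str | None  # None: the agent has genuinely seen everything

def list_orders(customer_id: str, cursor: str | None = None,
                limit: int = 50) -> Page:
    """Paginated tool: the agent decides whether the next page is worth it."""
    all_orders = fetch_orders(customer_id)
    start = int(cursor) if cursor else 0
    window = all_orders[start:start + limit]
    more = start + limit < len(all_orders)
    return Page(items=window, next_cursor=str(start + limit) if more else None)
```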

Tool Output Compression: The Injection Decision That Shapes Context Quality

· 10 min read
Tian Pan
Software Engineer

Your agent calls a database tool. The query returns 8,000 tokens of raw JSON — nested objects, null fields, pagination metadata, and a timestamp on every row. Your agent needs three fields from that response. You just paid for 7,900 tokens of noise, and you injected all of them into context where they'll compete for attention against the actual task.

This is the tool output injection problem, and it's the most underrated architectural decision in agent design. Most teams discover it the hard way: the demo works, production degrades, and nobody can explain why the model started hedging answers it used to answer confidently.
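One workable answer is a projection layer: the full response goes to logs for audit, and only declared fields are injected into context. A minimal sketch, assuming the tool response parses to a flat list of row objects; the field names are illustrative.

```python
import json

NEEDED_FIELDS = {"order_id", "status", "total"}  # illustrative field names

def project(raw_json: str, fields: set[str] = NEEDED_FIELDS) -> str:
    """Keep only declared fields; drop nulls, metadata, and per-row noise."""
    rows = json.loads(raw_json)
    slim = [{k: row[k] for k in fields if row.get(k) is not None}
            for row in rows]
    return json.dumps(slim, separators=(",", ":"))  # compact encoding

# The full 8,000-token response goes to logs for audit; only the projection
# is injected into context, where it competes for the model's attention.
```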

Compaction Traps: Why Long-Running Agents Forget What They Already Tried

· 9 min read
Tian Pan
Software Engineer

An agent calls a file-writing tool. The tool fails with a permission error. The agent records this, moves on to a different approach, and eventually runs long enough that the runtime triggers context compaction. The summary reads: "the agent has been working on writing output files." What it drops: that the permission error ever happened, and why the original approach was abandoned. Three hundred tokens later, the agent tries the same write again.

This pattern — call it the compaction trap — is one of the most persistent reliability failures in production agent systems. It's not a model bug. It's an architecture mismatch between how compaction works and what agents actually need to stay coherent across long sessions.
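The fix is structural, not a better summarization prompt: keep a ledger of dead ends outside the summarizer's reach and re-attach it verbatim after every compaction. A minimal sketch, assuming a generic summarize() callable.

```python
from dataclasses import dataclass, field

@dataclass
class FailureLedger:
    """Dead ends recorded outside the summarizer's reach: compaction may
    rewrite the narrative, but it never touches this ledger."""
    entries: list[str] = field(default_factory=list)

    def record(self, tool: str, error: str, decision: str) -> None:
        self.entries.append(f"{tool} failed ({error}); decision: {decision}")

    def render(self) -> str:
        return "Known dead ends (do not retry):\n" + "\n".join(
            f"- {e}" for e in self.entries)

def compact(turns: list[str], ledger: FailureLedger, summarize) -> str:
    """Summarize freely, then re-attach the ledger verbatim."""
    return summarize(turns) + "\n\n" + ledger.render()

# e.g. ledger.record("write_file", "EACCES: permission denied",
#                    "switched to writing under /tmp")
```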

Long-Session Context Degradation: How Multi-Turn Conversations Go Stale

· 8 min read
Tian Pan
Software Engineer

The first time a user's 80-turn support conversation suddenly started contradicting advice given 60 turns ago, the team blamed a bug. There was no bug. The model was simply lost. Across all major frontier models, multi-turn conversations show an average 39% performance drop compared to single-turn interactions on the same tasks. Most teams never measure this. They assume context windows are roughly as powerful as their token limit suggests, and they build products accordingly.

That assumption is quietly wrong. Long sessions don't just get slower or more expensive — they get unreliable in ways that are nearly impossible to notice until users are already frustrated.
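Measuring the drop is cheap once you decide to do it: run the same task set single-turn and sharded across a long conversation, then track the gap. A sketch, where run_single_turn and run_multi_turn stand in for your own eval harness and return per-task scores.

```python
def relative_drop(tasks, run_single_turn, run_multi_turn) -> float:
    """Average score single-turn vs. the same tasks sharded across a long
    conversation; returns the relative drop (0.39 would mean a 39% drop)."""
    single = sum(run_single_turn(t) for t in tasks) / len(tasks)
    multi = sum(run_multi_turn(t) for t in tasks) / len(tasks)
    return (single - multi) / single

# Track this per model and per session length; alert when it moves, the same
# way you would for p99 latency.
```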