90 posts tagged with "mlops"

The Data Flywheel Assumption: When AI Features Compound and When They Just Accumulate Noise

· 9 min read
Tian Pan
Software Engineer

Every AI pitch deck includes a slide about the data flywheel. The story is appealing: users interact with your AI feature, that interaction generates data, the data trains a better model, the better model attracts more users, and the cycle repeats. Scale long enough and you have an insurmountable competitive moat.

The problem is that most teams shipping AI features don't have a flywheel. They have a log file. A very large, expensive-to-store log file that has never improved their model and never will—because the three preconditions for a real flywheel are missing and nobody has asked whether they're present.

Diffusion Models in Production: The Engineering Stack Nobody Discusses After the Demo

· 10 min read
Tian Pan
Software Engineer

Your image generation feature just went viral. 100,000 requests are coming in daily. The API provider's rate limit technically accommodates it. Latency crawls to 12 seconds at p95. Your NSFW classifier is flagging legitimate medical illustrations. A compliance audit surfaces that California's AI Transparency Act has required watermarking since September 2024. Support has 50 open tickets from users whose content was silently blocked. By the time you realize you need a real production stack, you've already burned two weeks in crisis mode.

This is the moment "just call the API" fails—not because the API is bad, but because the demo's success exposes every assumption you made about inference latency, content policy, moderation fairness, and regulatory compliance. The engineering work nobody shows you in tutorials lives here.

The Eval Debt Ratchet: How Teams Get Buried Cleaning Up What They Shipped on Vibes

· 10 min read
Tian Pan
Software Engineer

Three months after shipping a document summarization feature, a team at a mid-size company rolls out a prompt improvement. The new prompt scores better on the five examples they tested manually. They deploy it Friday afternoon. Monday morning, their Slack is full of user reports: summaries are now truncating half the document and presenting the truncated version as complete. The feature looked fine. The change passed review. Nobody noticed because there was no evaluation: no golden test set, no regression baseline, no automated check. The ratchet had been turning silently for months.

This is eval debt in its most recognizable form. The team didn't skip evaluations because they were careless. They skipped them because writing evaluations for AI features is harder than it sounds, the feature shipped fast and looked good, and nobody wanted to slow down a team with momentum. Now they're paying the compound interest.
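
The missing guardrail here is small. A pinned golden set plus a cheap invariant check, run on every prompt change, would have caught the truncation before Friday. Below is a minimal sketch, assuming a JSONL golden set and a `summarize` callable that wraps your model call; the tail-coverage heuristic is an illustrative proxy, not a full evaluation:

```python
import json

def truncation_gate(summarize, golden_path="golden_set.jsonl",
                    min_tail_overlap=0.05):
    """Fail the build if a prompt change starts truncating documents.

    Heuristic: a summary of a complete document should share at least
    a little vocabulary with the document's second half. Crude, but it
    catches exactly the "summarized only the first half" regression.
    """
    failures = []
    with open(golden_path) as f:
        for line in f:
            case = json.loads(line)
            words = case["document"].split()
            tail = set(words[len(words) // 2:])
            summary_words = set(summarize(case["document"]).split())
            overlap = len(tail & summary_words) / max(len(tail), 1)
            if overlap < min_tail_overlap:
                failures.append(case["id"])
    assert not failures, f"possible truncation on golden cases: {failures}"
```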

The Federated AI Team: Why Centralizing AI Expertise Creates the Problems It Was Supposed to Solve

· 10 min read
Tian Pan
Software Engineer

The central AI team was supposed to be the answer. Hire the best ML engineers into a single group, standardize the tooling, establish governance, and let product teams consume AI capabilities without needing to understand them. It's a compelling architecture — clean on an org chart, defensible in a board presentation. In practice, it reliably produces a failure mode that looks exactly like the fragmentation it was created to eliminate.

The central AI team becomes a bottleneck. Product teams queue behind it. The AI it ships feels generic to every domain that needs something specific. The ML engineers who built the platform don't know the product metrics. The product engineers who need help can't debug AI behavior without filing a ticket. A 3-month pilot succeeds; a 9-month security review buries it.

In 2025, companies reported abandoning the majority of their AI initiatives at more than twice the 2024 rate. Many of those failures happened at the transition from proof of concept to production, precisely where an overstretched, disconnected central team shows its seams.

The Frozen Feature Trap: When Your AI Differentiator Becomes a Maintenance Anchor

· 9 min read
Tian Pan
Software Engineer

In 2022, a team spent three months fine-tuning a BERT-based classifier to categorize customer support tickets. It was a genuine win — 94% accuracy where their old rule-based system topped out at 70%. Two years later, the same classifier runs on aging infrastructure, requires a specialist to retrain whenever categories shift, and gets beaten on a fresh benchmark by a zero-shot prompt to a frontier model. Nobody wants to touch it. The engineer who built it left. The current team is afraid that deprecating it will break something. The feature is frozen.

This is the frozen feature trap. It's one of the quieter forms of AI technical debt, and it's accumulating across the industry as teams discover that what looked like a moat was actually a hole they've been shoveling money into.

The Human Bottleneck Problem: When Human-in-the-Loop Becomes Your Slowest Microservice

· 9 min read
Tian Pan
Software Engineer

Most teams add human-in-the-loop review to their AI systems and consider the safety problem solved. Six to twelve months later, they discover the actual problem: their human reviewers are now the bottleneck that prevents the system from scaling, quality has degraded without anyone noticing, and removing the oversight layer feels too risky to contemplate. They are stuck.

This is the HITL throughput failure. It is distinct from the better-known HITL rubber-stamp failure, where humans approve decisions without genuine scrutiny. The throughput failure is quieter and more insidious: reviewers are doing their jobs conscientiously, but the queue grows faster than the team can clear it, latency commitments become impossible to meet, and the human layer transforms from independent validation into a system-wide velocity limiter.
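
The arithmetic behind the throughput failure is blunt: once daily arrivals exceed daily review capacity, the backlog grows every single day, and latency grows with it. A toy projection makes the shape obvious (every number below is hypothetical):

```python
def project_backlog(arrivals_per_day, reviewers, reviews_per_reviewer_day, days):
    """Project the HITL queue day by day under constant load."""
    capacity = reviewers * reviews_per_reviewer_day
    backlog = 0
    for _ in range(days):
        backlog = max(0, backlog + arrivals_per_day - capacity)
    # Expected wait for a newly enqueued item, in days of review work:
    return backlog, backlog / capacity

# 5,000 items/day against 8 reviewers x 500 reviews/day = 4,000/day:
# the queue gains 1,000 items daily. After a month, the newest item
# waits a week and a half before a human ever sees it.
print(project_backlog(5_000, 8, 500, 30))  # -> (30000, 7.5)
```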

The Hyperparameter Illusion: Why Temperature and Top-P Are the Last Things to Tune

· 9 min read
Tian Pan
Software Engineer

When LLM outputs feel wrong, engineers reach for the temperature dial. It's one of the first moves in the debugging playbook — crank it down for more consistency, nudge it up for more creativity. It feels productive because it's easy to change and produces immediately visible effects. It is almost never the right move.

Temperature and top-p are the last 10% of output quality, not the first 90%. The variables that actually determine whether your model succeeds are context quality, instruction clarity, and model selection — in that order. Misconfiguring sampling parameters on top of a broken prompt is like adjusting the seasoning on a dish that hasn't been cooked through. The fundamental problem doesn't move.
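
It helps to remember what these two knobs actually are. Here is a sketch of one decoding step in plain NumPy: temperature rescales the model's logits, top-p truncates the tail. Both only reshape the probability mass the model has already assigned; if broken context made the wrong tokens likely, no setting of either parameter moves mass to the right ones.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=1.0):
    """One step of temperature + nucleus (top-p) sampling."""
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Nucleus: keep the smallest set of tokens whose mass reaches top_p.
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    kept = order[:cutoff]
    return np.random.choice(kept, p=probs[kept] / probs[kept].sum())
```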

The Co-Evolution Trap: How Your AI Feature's Success Is Quietly Destroying Its Evaluations

· 9 min read
Tian Pan
Software Engineer

Your AI feature launched. It's working well. Users are adopting it. Satisfaction scores are up. You go back and run the original eval suite—still green. Six months later, something is quietly wrong, but your dashboards don't show it yet.

This is the co-evolution trap. The moment your AI feature is deployed, it starts changing the people using it. They adapt their workflows, their phrasing, their expectations. That adaptation makes the distribution of inputs your feature actually processes diverge from the distribution you measured at launch. The eval suite stays green because it's frozen in the pre-deployment world. The real-world performance drifts in ways the suite never captures.
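
The drift is measurable if you snapshot the launch-time input distribution and keep comparing against it. A minimal sketch using the population stability index over any scalar request feature (prompt length, detected-language score, and so on); the 0.2 alert threshold is a common rule of thumb, not a law:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between launch-time inputs and this week's inputs."""
    baseline, current = np.asarray(baseline), np.asarray(current)
    lo = min(baseline.min(), current.min())
    hi = max(baseline.max(), current.max())
    edges = np.linspace(lo, hi, bins + 1)
    b = np.histogram(baseline, edges)[0] / len(baseline) + 1e-6
    c = np.histogram(current, edges)[0] / len(current) + 1e-6
    return float(np.sum((c - b) * np.log(c / b)))

# PSI > 0.2 usually means the population has shifted enough that
# launch-time eval results no longer describe production traffic.
```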

The Eval-Prod Gap: Detecting Behavioral Mode Switching in Production LLMs

· 9 min read
Tian Pan
Software Engineer

Your eval suite is green. Your benchmark scores are strong. Your staging environment looks clean. And yet — your users are reporting subtly wrong answers, inconsistent tone, and outputs that feel off in ways that are hard to pinpoint.

This is the behavioral mode switching problem: a production LLM that performs well when it knows it's being evaluated and drifts noticeably when it doesn't. It's not hypothetical. It's the quiet, common failure mode of LLM deployments, the one teams discover late, after they've already assured stakeholders that the model's behavior was verified.

The problem isn't that your eval harness is lazy. It's that most eval harnesses are structurally incapable of detecting this class of failure.
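
One structural fix is a paired probe: send identical inputs through two request shapes, one that mirrors the eval harness and one that mirrors production, and track the score gap over time. Everything below is a stand-in for your own plumbing; `call_model`, the two wrappers, and `score` are assumptions:

```python
def eval_prod_gap(inputs, call_model, eval_wrap, prod_wrap, score):
    """Mean score gap between harness-shaped and production-shaped requests.

    eval_wrap reproduces the harness (clean system prompt, no history);
    prod_wrap reproduces production (real system prompt, session context,
    tool schemas). A persistent positive gap is mode switching made visible.
    """
    gaps = [
        score(x, call_model(eval_wrap(x))) - score(x, call_model(prod_wrap(x)))
        for x in inputs
    ]
    return sum(gaps) / len(gaps)
```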

The Feedback Provenance Gap: Why Your Training Signal Might Not Be What You Collected

· 8 min read
Tian Pan
Software Engineer

Most teams have excellent instrumentation on the feedback capture side. Thumbs-down clicks are logged. Star ratings flow into dashboards. Human annotation jobs write every preference pair to a table. The intake is clean, timestamped, and queryable.

What happens between that capture and the next model update is, for most teams, a black box.

The data gets filtered. Some annotations get weighted higher than others. Rare categories get upsampled. Near-duplicates get dropped. A prompt template change makes last month's labels inconsistent with this month's, but the merge happens anyway. By the time the signal reaches a reward model or fine-tuning job, it has passed through six transformation steps with no audit trail, no version pinning, and no way to trace a degraded model weight back to a specific corruption point in the pipeline.

This is the feedback provenance gap: teams know where feedback enters the system, but not what it becomes before it shapes model behavior.
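
Closing the gap doesn't require heavy tooling; it requires each transformation step to leave a fingerprint. A minimal sketch of a hash-chained lineage record follows; the schema is an assumption, not a standard, and the point is only that a degraded model can be walked backwards step by step:

```python
import hashlib
import json
import time

def stamp_step(name, config, rows, parent_digest=""):
    """Fingerprint one feedback-pipeline step (filter, dedupe, upsample...).

    Each digest covers the step's config, its output rows, and the
    previous step's digest, so the chain breaks exactly at the first
    step whose inputs or behavior changed.
    """
    payload = json.dumps({"step": name, "config": config, "rows": rows},
                         sort_keys=True, default=str)
    digest = hashlib.sha256((parent_digest + payload).encode()).hexdigest()
    return {"step": name, "at": time.time(), "rows_out": len(rows),
            "config": config, "parent": parent_digest, "digest": digest}

# lineage, digest = [], ""
# for name, config, rows in pipeline_steps:  # filter -> dedupe -> upsample
#     record = stamp_step(name, config, rows, digest)
#     lineage.append(record)
#     digest = record["digest"]
```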

LLM-as-Classifier in Production: Why Accuracy Is the Wrong Metric

· 11 min read
Tian Pan
Software Engineer

A team ships an LLM-based intent classifier. Evaluation accuracy: 94%. Two weeks into production, support volume is up 30% — not because the model is failing to classify, but because it's routing edge cases to the wrong queue with very high confidence. Nobody built a circuit breaker for "the model is wrong and doesn't know it." The 94% figure never surfaced that risk.

This failure pattern repeats across content moderation pipelines, routing systems, and entity extractors. The LLM gets a high score on the holdout set. The team ships. Something breaks quietly in production.

The issue isn't that accuracy is a bad metric. It's that accuracy answers the wrong question. Production classification has a different set of requirements, and most evaluation pipelines don't test for them.
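
One of those requirements is knowing how often the model is wrong while confident, because that is exactly the traffic no human ever re-checks. A sketch of the metric; `records` is assumed to be (predicted, actual, confidence) triples from a labeled sample of production traffic:

```python
def risk_at_confidence(records, threshold=0.9):
    """Error rate among predictions the model was confident about.

    Returns (risk, coverage): how often high-confidence predictions
    were wrong, and what fraction of traffic cleared the threshold.
    A 94%-accurate classifier can still carry substantial risk here,
    and that is the slice a confidence-based circuit breaker never
    trips on.
    """
    confident = [(pred, actual) for pred, actual, conf in records
                 if conf >= threshold]
    if not confident:
        return 0.0, 0.0
    risk = sum(pred != actual for pred, actual in confident) / len(confident)
    return risk, len(confident) / len(records)
```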

The Org Chart Problem: Why AI Features Die Between Teams

· 10 min read
Tian Pan
Software Engineer

The model works. The pipeline runs. The demo looks great. And then the feature dies somewhere between the data team's Slack channel and the product engineer's JIRA board.

This is the pattern behind most AI project failures—not a technical failure, but an organizational one. A 2025 survey found that 42% of companies abandoned most of their AI initiatives that year, up from 17% the year prior. The average sunk cost per abandoned initiative was $7.2 million. When the post-mortems get written, the causes listed are "poor data readiness," "unclear ownership," and "lack of governance"—which are three different ways of saying the same thing: nobody was actually responsible for shipping the feature.