April 2026
Why design systems fail (and what works instead)
Most design systems fail not from lack of components, but because they're treated as asset libraries. The ones that work are feedback loops.
A design system shipped without measurable adoption is theatre. It's a Figma library that looks like a deliverable. It convinces the steering committee. Six months later you can pull up the component analytics and see one thing: nobody is using it.
I've watched this pattern repeat for 26 years, across enterprise rollouts, agency engagements, and startup foundations. The failure isn't usually craft. The components are fine. The tokens are fine. The Figma file is, in fact, beautiful.
The failure is structural. Most teams build design systems as asset libraries — a one-way handoff from a central team to consuming product teams. The libraries that compound and pay for themselves over years are built as feedback loops — telemetry-driven, usage-aware, with a refactor cadence that takes priority over green-field components.
Here's what I mean.
The asset-library failure mode
The default mental model goes like this: a small design system team produces components. Product teams consume them. Adoption is measured by "did the team use the library?" — usually answered by inspecting Figma frames or asking in standup.
This mental model has three structural problems that show up in every failed rollout I've audited:
- Adoption is invisible. You can't see which components are actually used in production, on which pages, by which teams. Without telemetry, you're guessing. And when you're guessing, your roadmap is driven by whoever shouted loudest in the last component request meeting.
- The roadmap points upstream. New components get prioritized over fixing the ones already shipped. The library grows, the quality of any individual component degrades, and consuming teams stop trusting it.
- There's no cost to staying off-system. A product team can ship a one-off component, never get flagged, and quietly fork the design language. A year later, the system has 200 components and the product has 1,400 — none of which match.
Combine all three and you get the predictable end state: a system the design team is proud of, that engineering doesn't import, that product teams route around, and that gets quietly retired in the next reorg.
What changes when it's a feedback loop
A design system as a feedback loop has one core property: the system observes its own use, and the roadmap follows what the data says.
Concretely, that means three things have to be in place from day one:
- Component telemetry in production, not just in Figma. You need to know which components render on which page, in which app, and how often. The instrumentation is the deliverable.
- A weekly review of the off-system surface area. Which products shipped one-off components this sprint? Why? What was the friction with the on-system equivalent? That conversation drives next sprint's library work.
- A fixed ratio of refactor-to-new. I usually push for 60% refactor, 40% new for the first two quarters of any system. That ratio inverts the default and forces the team to defend new component requests against the cost of the existing surface area.
This isn't a tooling story. The tooling is straightforward — instrumentation lives in the component wrapper, telemetry lands in whatever you already use (PostHog, Amplitude, Datadog), the off-system audit is a recurring 30-minute meeting. The reason most teams don't do this is that it requires the design system team to stop being a publisher and start being a service.
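To make the tooling concrete, here's a minimal sketch of wrapper-level instrumentation, assuming a React codebase with PostHog as the sink. The hook, event name, and env var are illustrative, not a prescribed API:

```tsx
// A hypothetical telemetry hook plus a wrapped component, assuming React
// and posthog-js. Event name, hook name, and env var are illustrative.
import { useEffect, type ComponentProps } from 'react';
import posthog from 'posthog-js';

// Fires once per mount with enough context to answer "which component,
// which page, which app, how often".
function useComponentTelemetry(component: string, variant?: string) {
  useEffect(() => {
    posthog.capture('ds_component_rendered', {
      component,
      variant,
      path: window.location.pathname,
      app: process.env.NEXT_PUBLIC_APP_NAME, // however you tag the consuming app
    });
  }, [component, variant]);
}

type ButtonProps = ComponentProps<'button'> & { variant?: 'primary' | 'secondary' };

// Consumers get instrumentation for free by importing the component.
export function Button({ variant = 'primary', ...props }: ButtonProps) {
  useComponentTelemetry('Button', variant);
  return <button data-ds-variant={variant} {...props} />;
}
```

In practice you'd sample or batch the events. The point is that usage data exists at all, and that consuming teams produce it just by using the component.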
Publishers ship things. Services watch how things get used and adjust. The discipline is different. The headcount profile is different. The budget conversation with leadership is much, much different.
The AI-augmented version
This part has changed in the last 18 months and most teams haven't caught up.
When I started designing systems, the constraint was throughput — you couldn't produce enough components, enough variants, enough documentation. Today, that constraint is gone. With v0, Cursor, and Claude Code, a single designer-engineer can produce a complete component (variants, accessibility, tests, MDX docs) in a few hours.
Throughput isn't scarce anymore. Judgment is. Which means the failure mode I described above is now even more dangerous. You can produce 200 components in a quarter without breaking a sweat. You can also degrade adoption faster than ever, because consuming teams can't keep up with the churn — and they can route around you with their own AI-generated components.
The teams I see winning with AI-augmented design systems do something specific: they use AI to absorb the cost of refactoring. Mass migrations across consuming products. Codemods for the legacy surface. Auto-generated documentation. The throughput goes where it actually compounds: keeping the system clean, not adding to it.
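To make "codemods for the legacy surface" concrete, here's a jscodeshift sketch that migrates a hypothetical one-off LegacyButton to the system Button. The component and package names are assumptions; the shape of the work is the point:

```ts
// migrate-button.ts — run with:
//   npx jscodeshift -t migrate-button.ts src --extensions=tsx --parser=tsx
// Assumption: teams import a one-off LegacyButton from a local path; the
// system equivalent is Button from '@acme/design-system' (names hypothetical).
import type { Transform } from 'jscodeshift';

const transform: Transform = (file, api) => {
  const j = api.jscodeshift;
  const root = j(file.source);

  // Point the import at the system package and swap the specifier.
  root
    .find(j.ImportDeclaration, { source: { value: './components/LegacyButton' } })
    .forEach((p) => {
      p.node.source.value = '@acme/design-system';
      p.node.specifiers = [j.importSpecifier(j.identifier('Button'))];
    });

  // Rename every JSX usage: <LegacyButton /> becomes <Button />.
  root.find(j.JSXIdentifier, { name: 'LegacyButton' }).forEach((p) => {
    p.node.name = 'Button';
  });

  return root.toSource();
};

export default transform;
```

Transforms like this are exactly what AI tooling makes cheap to generate — and, more importantly, cheap to review across dozens of consuming repos.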
The agency angle
If you're an agency partnering with a client on a design system: the failure pattern is the same, but the timeline pressure makes it worse. You have eight weeks to ship a system the client will own. The temptation is to build a polished asset library and call it a day. The client signs off. You move on.
Six months later, your case study reads "we built a design system for X." The client's adoption metric reads "the design system is used by 3 of 14 product teams."
The version that survives is the one where the agency leaves behind:
- A telemetry pipeline already wired to production.
- A scheduled refactor cadence with the first three sprints planned.
- A decision log that survives the inevitable "should we extend this for the new initiative?" conversation eight months in.
That handoff makes the agency look slightly less heroic in the case study. It also produces clients who renew, refer, and credit the agency for compounding value over years.
What to do Monday
If you're already two quarters into a system and the adoption signal is soft, here's the order I'd run:
- Instrument first, not last. Wrap the existing components with usage telemetry. Wait two weeks. Look at the data.
- Audit off-system. Pick the three products with the worst adoption and ask the teams what specifically blocked them. The answers are usually small, mundane, fixable — and have been quietly killing your library for months. (A rough detection script follows this list.)
- Cut the new-component backlog by 70%. Replace it with a refactor backlog from the audit findings.
- Republish the roadmap. Externally, to consuming teams. The shift from "here's what we're shipping" to "here's what we're fixing" is what restores trust.
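Before those audit conversations, you need a candidate list of off-system components. Here's a rough detection sketch, assuming the system ships from a single package; the @acme/design-system name and the components/ path heuristic are placeholders for your own layout:

```ts
// off-system-audit.ts — run with: npx tsx off-system-audit.ts path/to/product/src
// Assumptions: the system ships from '@acme/design-system' (placeholder), and
// local one-off components live under some components/ directory.
import { readdirSync, readFileSync } from 'node:fs';
import { join, extname } from 'node:path';

const SYSTEM_PKG = '@acme/design-system';
const IMPORT_RE = /import\s+[^;]*?from\s+['"]([^'"]+)['"]/g;

// Recursively yield every JSX/TSX file under a directory.
function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory() && entry.name !== 'node_modules') yield* walk(full);
    else if (['.tsx', '.jsx'].includes(extname(entry.name))) yield full;
  }
}

let onSystem = 0;
const offSystem = new Map<string, number>(); // local component module → import count

for (const file of walk(process.argv[2] ?? 'src')) {
  for (const match of readFileSync(file, 'utf8').matchAll(IMPORT_RE)) {
    const source = match[1];
    if (source.startsWith(SYSTEM_PKG)) onSystem += 1;
    else if (/components?\//i.test(source)) {
      // Heuristic: relative imports from a components/ dir are one-offs.
      offSystem.set(source, (offSystem.get(source) ?? 0) + 1);
    }
  }
}

console.log(`on-system imports: ${onSystem}`);
console.log(`off-system component modules: ${offSystem.size}`);
for (const [src, n] of [...offSystem].sort((a, b) => b[1] - a[1]).slice(0, 20)) {
  console.log(`${String(n).padStart(4)}  ${src}`);
}
```

The top of that list is usually your refactor backlog for step three.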
Most design systems don't fail because the components are bad. They fail because the team builds an artifact instead of a service. That's a fixable problem. The fix starts with measuring adoption like you'd measure any other product — because that's what a design system actually is.
If you're rebuilding a system that's lost the room, or scoping a new one for a growth-stage product, the Sprint engagement is usually how this work starts. Five days, real prototypes, a roadmap your engineering team will actually follow.
Or if you're an agency working on this for a client and want a partner: see how I work with agencies.
Working on something similar?
Most engagements start with a 20-minute call.
You leave with a clearer read on the problem — even if we don't end up working together. No deck, no pitch.