March 2026
How I Used AI Feedback Loops to Build Incredible Systems
I've been building AI systems where the feedback doesn't just inform the human. It teaches the AI. Here's what happens when those loops start compounding.

84% of the world's population has never used AI. Not once.
Damian Player made a visualization that puts this in perspective. Picture a grid of 2,500 dots, each representing 3.2 million people. The entire world population. The overwhelming majority of that grid is grey. Never touched it. The green sliver at the bottom? 1.3 billion free chatbot users. The orange line within that? 15 to 25 million people who pay $20 a month. The red pixels you can barely see? 2 to 5 million who use AI coding tools.
We're not in the late stages of an AI revolution. We're in the opening minutes.
[Chart: Global AI adoption, Feb 2026 · Source: Damian Player — Never used AI: ~6.8B (84%) · Free chatbot user: ~1.3B (16%) · Pays $20/mo for AI: ~15-25M (0.3%) · Uses coding tools: ~2-5M (0.04%)]
88% of organizations say they use AI in some function. Only 6% report meaningful business impact. Anthropic measured "observed exposure," the gap between what AI could theoretically do in a job and what people actually use it for. In computer and math occupations, that gap is 61 percentage points. The tools exist. Almost nobody knows how to use them well.
[Chart: the gap between what AI could theoretically do in a job and what people actually use it for]
This is the context for everything I'm about to describe. The field is wide open. The people building real feedback loops, not just chatting with AI but teaching it, are operating in a space where almost nobody else is. That advantage compounds.
The feedback loop
Last autumn, I was building an AI technical support system for an industrial manufacturing company. The system searches thousands of technical documents (tolerances, alloy specs, surface treatment manuals) and generates answers for customer queries.
The first version did what first versions do. It pulled the right documents maybe 60% of the time. The answers were technically correct but practically useless. Textbook knowledge without operational context.
So we built a feedback button. Floating in the corner of every AI response. Thumbs up, thumbs down, and a text field for context. Simple.
Here's what happened: the technical support team started correcting the AI. Not just flagging wrong answers, but explaining why they were wrong. "This alloy recommendation is technically valid but we stopped using it for this application three years ago." "The tolerance is correct but the customer is asking about the post-anodized dimension, not the raw extrusion."
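For anyone building something similar, the mechanics are less exotic than they sound. Here's roughly the shape of a feedback record, a hypothetical sketch rather than the client's actual schema (the endpoint and field names are mine, for illustration):

```typescript
// The record behind the button. Field names are illustrative,
// not the client's actual schema.
interface FeedbackRecord {
  responseId: string;       // which AI answer this is about
  query: string;            // the customer question that produced it
  retrievedDocs: string[];  // IDs of the documents the system cited
  verdict: "up" | "down";
  correction?: string;      // the expert's free-text explanation
  author: string;           // which support engineer said so
  createdAt: string;        // ISO timestamp
}

// Persist it somewhere the retrieval layer can reach on the next query.
async function submitFeedback(record: FeedbackRecord): Promise<void> {
  await fetch("/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
}
```

The `correction` field is the whole game. Thumbs alone tell you something broke; the free text tells the system why.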
Each correction made the next answer better. Not incrementally. Exponentially. The corrections weren't just fixing individual responses. They were teaching the system how the company actually thinks about its products. The gap between textbook knowledge and operational knowledge started closing.
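The mechanism behind that is simple in outline: treat corrections as first-class documents. Before answering, the system retrieves matching corrections alongside the technical docs and lets them take precedence. A minimal sketch, with the two search functions standing in for whatever vector or keyword search layer you already run (their signatures here are assumptions):

```typescript
// Sketch: corrections become retrievable context that outranks the manuals.
interface Hit { text: string }

async function buildContext(
  query: string,
  searchDocs: (q: string, topK: number) => Promise<Hit[]>,
  searchCorrections: (q: string, topK: number) => Promise<Hit[]>,
): Promise<string> {
  const docs = await searchDocs(query, 5);
  const fixes = await searchCorrections(query, 3);
  return [
    "## Technical documents",
    ...docs.map((d) => d.text),
    "## Expert corrections (these override the documents above)",
    ...fixes.map((f) => `- ${f.text}`),
  ].join("\n");
}
```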
By the third iteration, we weren't debugging the AI anymore. We were refining it. The experts started spending less time correcting and more time pushing the system into new territory. Edge cases. Unusual alloy combinations. Customer-specific requirements that only existed in someone's head until now.
This isn't "using AI." This is building a system that learns from its own deployment.
The capability jump
The speed of change in AI tooling is hard to overstate:
- 41% of all code written in 2025 was AI-generated
- Anthropic says 70 to 90% of their own code is AI-generated
- Cursor hit $1B ARR faster than any SaaS company in history
- A quarter of the current YC batch runs on 95% AI-generated code
- Companies are hitting $10M revenue with fewer than 10 people
If you tried building something like this two years ago, you know the pain. AI coding tools circa 2023 broke things constantly. You'd get stuck in loops where the AI confidently produced code that looked right, tested wrong, and when you asked it to fix the problem, it introduced three new ones.
I built my own consulting website entirely with AI. The early versions were brutal. Multi-day debugging sessions for issues that would have taken a senior developer thirty minutes. The AI couldn't hold the whole system in its head. It would fix a component and break the layout. Fix the layout and break the routing. Fix the routing and forget the component it just fixed.
Something shifted in late 2025.
It wasn't one thing. It was a stack of things arriving together: tool calling that actually worked, persistent memory across sessions, MCP integrations that let the AI read your codebase and your design files simultaneously, RAG pipelines that gave it real context instead of guessing. Claude went from being a talented but unreliable intern to something closer to a senior collaborator who happens to work at machine speed.
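To make "tool calling that actually worked" concrete: the loop underneath most of these agents is small. The model either answers in plain text or names a tool; the harness runs the tool and feeds the result back into the context. A generic sketch, not any particular SDK's API:

```typescript
// Generic agent loop. Shapes and names are illustrative assumptions.
type ToolFn = (args: Record<string, unknown>) => Promise<string>;

interface ModelTurn {
  text?: string;
  toolCall?: { name: string; args: Record<string, unknown> };
}

async function agentLoop(
  callModel: (transcript: string[]) => Promise<ModelTurn>,
  tools: Record<string, ToolFn>,
  transcript: string[],
): Promise<string> {
  for (;;) {
    const turn = await callModel(transcript);
    if (!turn.toolCall) return turn.text ?? "";           // plain text means we're done
    const tool = tools[turn.toolCall.name];
    const result = tool
      ? await tool(turn.toolCall.args)
      : `unknown tool: ${turn.toolCall.name}`;            // fail soft so the loop recovers
    transcript.push(`[${turn.toolCall.name}] ${result}`); // result re-enters the context
  }
}
```

What changed in late 2025 wasn't the shape of this loop. It's that the model stopped derailing it.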
The "jagged frontier" that researchers talked about, where AI was brilliant at some tasks and catastrophically bad at adjacent ones, started smoothing out. Not in theory. In my daily work. The tasks where I used to hold my breath and pray now mostly just work. The frontier is still jagged. But the valleys got shallower.
What AI still can't do
I need to be honest about this part, because the evangelists skip it and it matters.
AI cannot see what's broken.
I mean this literally. I've fed screenshots of obviously broken layouts to the most capable models available. Overlapping text, missing images, buttons floating in white space. The response: "The layout looks clean and well-structured." Every time.
AI has no sense of totality. It can build a beautiful component in isolation and have no idea that it clashes with everything around it. It reinvents the wheel constantly, building a new utility function when an identical one exists three files away. It can't zoom out.
Taste, grids, creative use of negative space. These remain firmly human. You have to paint the picture in code for AI to understand what you're aiming for. I learned this the hard way with Framer: giving the AI your actual design system code produces dramatically better results than giving it a screenshot of what you want. Figma's visual output as a reference? Mediocre results. The code that produces that visual output? Suddenly the AI understands the relationships, the spacing logic, the systematic thinking behind the design.
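Concretely, "design system code" can be as small as a tokens file. These values are hypothetical, not my actual system, but the point is the relationships they encode:

```typescript
// Tokens encode the relationships a screenshot flattens away.
export const tokens = {
  space: { xs: 4, sm: 8, md: 16, lg: 32, xl: 64 },  // doubling scale: spacing is relational
  type: {
    body:    { size: 16, lineHeight: 1.6 },
    heading: { size: 32, lineHeight: 1.2 },          // heading = 2x body, not "looks about right"
  },
  grid: { columns: 12, gutter: 16, maxWidth: 1200 },
} as const;

// From this, an AI can infer that a new section gets tokens.space.lg
// between blocks instead of an eyeballed 37px.
```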
The real human value right now is quality assurance, attention to detail, and deep domain knowledge. AI multiplies what you already know. If you understand systems architecture, AI will help you build systems ten times faster. If you don't, it'll help you build bad systems ten times faster.
This is the part most people miss. AI doesn't close the expertise gap. It widens it. The more you know, the more you can extract. An experienced engineer using AI tools ships what used to require a small team. A novice using the same tools ships something that looks impressive until it hits production.
Here's a useful frame: AI is 80% good at a lot of things, but rarely 100% at anything. It writes solid first-draft code but misses edge cases. It structures a research summary but draws the wrong conclusion. It generates a layout that almost works but has a spacing problem that would embarrass you in a client review.
That last 20% is where expertise lives. And it's where most of the value is.
This means the highest-leverage application of AI isn't replacing experts. It's arming them. An experienced designer using AI skips the mechanical work and spends all their time on the 20% that actually matters. The judgment calls. The trade-offs. The "this technically works but it's wrong for this context" decisions. AI handles the volume. The expert handles the precision.
The companies getting the most from AI right now aren't the ones automating people out. They're the ones who figured out that AI plus a domain expert produces better results than either alone. And it's not even close.
The investment that compounds
Feedback loops require infrastructure most people won't build. The feedback button. The evaluation pipeline. A code-level design system AI can actually read. Structured prompts, context management, agent handoff protocols. You build the scaffolding before you get the results.
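The evaluation pipeline, for instance, doesn't need to be elaborate to start paying off. The smallest useful version replays queries that previously earned a thumbs-down and checks whether the expert's correction now shows up in the answer. A sketch, with the substring check standing in for a real grader (often another model):

```typescript
// Minimal regression eval over past feedback. The `answer` function is
// your whole pipeline; `mustMention` is the fact an expert corrected.
interface EvalCase { query: string; mustMention: string }

async function runEvals(
  answer: (query: string) => Promise<string>,
  cases: EvalCase[],
): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    const out = await answer(c.query);
    if (out.toLowerCase().includes(c.mustMention.toLowerCase())) passed++;
    else console.warn(`regression: "${c.query}" no longer reflects "${c.mustMention}"`);
  }
  return passed / cases.length; // track this number across iterations
}
```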
Each cycle gets easier. The client project's feedback patterns now inform how I build every AI system. The design system from my own website gives AI a foundation on client projects. Painful trial and error, now documented, reusable, compounding.

52% of large companies use AI, but only 17% of small ones do. Only 13% of workers have received any AI training. 77% of employers say they plan to upskill their people. 13% have actually done it.
Kodak invented the digital camera and couldn't bring themselves to use it. Nokia had a touchscreen phone prototype before the iPhone existed. In Europe, the EU AI Act has given risk-averse leadership a reason to wait. "We can't do anything until the regulatory framework is clear." Meanwhile the gap widens.
There's also the profession-as-identity problem. When AI threatens the specific activity you've built your identity around, the natural response is to dismiss it. Not because it doesn't work. Because accepting that it works means rethinking who you are.
The ceiling isn't the technology. It's our ability to imagine what to do with it.
The bigger pattern
The same pattern shows up everywhere. Lunar regolith, the dust that destroys equipment, used as raw material for shelters against that dust. AI decoding brain signals while brain architecture informs better AI. In my own work: each AI system I build generates feedback that improves the next one. The methodology becomes an input to the tool that delivers it.
Self-reinforcing loops. The internet followed this pattern. Every user made it more valuable for everyone already there. At some point, opting out became harder than opting in.

The people who compound are the ones who keep coming back. Try. Fail. Understand why. Adjust. Build the infrastructure so the next attempt starts from higher ground. That's the loop.
The honest middle
I'm not an AI evangelist. I've debugged too many hallucinated code paths to have illusions. These tools are powerful, inconsistent, occasionally brilliant, and reliably blind to their own failures.
But the client project captured knowledge that existed only in people's heads and made it available to the entire organization. Twenty years of operational expertise, now a living system that improves every day.
Vision remains human. No AI decided to build the feedback mechanism. A person saw what was missing and designed a way to close the gap. AI executed. The human directed.
The question isn't whether AI will change how you work. It's whether you'll be the one deciding how.
Want to work together?
If this resonates and you're facing similar challenges, let's talk.