
The critics are right. Let's get that out of the way first.
Vibe coding—the practice of building software through natural language conversations with AI—produces code that can be inconsistent, duplicative, and sometimes dangerously naive about the broader system it's being inserted into. The AI will happily reinvent a utility function that already exists three files away. It will write tests that test nothing meaningful. It will "fix" one bug by introducing two more in places you weren't watching.
These are valid criticisms. I know because I've experienced every one of them firsthand over the past few months while building several projects almost entirely through vibe coding—from a full consulting website to internal tools and everything in between. I've watched AI make baffling decisions, duplicate code I explicitly told it not to duplicate, and confidently produce solutions that completely missed the point.
But here's what I find interesting: I've heard every single one of these complaints before—about human engineers.
Part 1: The Dirty Secret of Software Teams
Anyone who has managed a development team knows the reality. You will have engineers who copy-paste code rather than abstract it properly. You will find tests that exist purely to satisfy coverage metrics. You will discover API documentation that diverged from the actual implementation six months ago and nobody noticed.
The difference isn't that humans are reliable and AI isn't. The difference is that we've developed decades of processes, tools, and cultural norms to catch these problems: code reviews, linting, CI pipelines, architectural decision records, pair programming, and the occasional heated Slack thread about why someone committed directly to main.
The Core Insight: We don't trust individual engineers implicitly. We trust systems that include engineers. The question isn't whether AI can write perfect code—it can't, and neither can we. The question is whether we can build systems that harness AI's genuine strengths while compensating for its genuine weaknesses.
Part 2: Portrait of an Unusual Collaborator
Here's how I've come to think about working with AI on code: imagine the most brilliant engineer you've ever met. They can read and comprehend an entire codebase in seconds. They know every language, every framework, every obscure library you throw at them. They can produce a working implementation faster than you can finish describing what you want.
Now imagine that this brilliant engineer has anterograde amnesia.
Every conversation starts fresh. They don't remember the architectural decisions you made together last week. They don't recall that you specifically chose to avoid that pattern because of performance implications you discovered the hard way. They will solve the same problem differently each time unless you explicitly remind them of the constraints.
| Trait | Brilliant Human Engineer | AI Coding Assistant |
|---|---|---|
| Code comprehension speed | Hours to days | Seconds |
| Language/framework knowledge | Specialized | Universal |
| Long-term project memory | Excellent | None (per session) |
| Consistency across sessions | High | Requires explicit context |
| Ego when receiving feedback | Variable | Zero |
This isn't a flaw to be fixed—it's a fundamental characteristic to be designed around. And honestly? It's not that different from onboarding a new contractor, except this contractor can get productive in minutes instead of weeks.
Part 3: The Art of Explicit Context
The single most important skill in vibe coding isn't prompting technique or knowing which model to use. It's developing the discipline to externalize your project's accumulated wisdom.
That design decision you made and keep in your head? Write it down. The coding conventions your team follows implicitly? Make them explicit. The reason you structured the database that way? Document it somewhere the AI can see it.
The Hidden Benefit: This might feel like overhead, but you should have been doing this anyway. The AI's need for explicit context simply forces good engineering hygiene that benefits everyone—including future-you, who will also forget why that code exists.
A well-maintained CLAUDE.md file (or whatever you call your project context document) isn't just for the AI. It's a living architectural decision record that would help any new team member, human or otherwise.
What to Include in Your Project Context
- Architectural decisions and the reasoning behind them
- Coding conventions specific to your project
- Patterns to follow with file references
- Anti-patterns to avoid and why
- Directory structure explanations
- Testing philosophy and requirements
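To make that list concrete, here is a sketch of what such a context file might look like. Every detail below—the file paths, the specific decisions—is invented for illustration; the point is the shape, not the content:

```markdown
# Project Context

## Architecture
- Routes stay thin; business logic lives in `lib/`. Routes must not
  query the database directly (decision: keeps handlers testable).

## Conventions
- Prefer extending existing utilities in `lib/utils` over new helpers.
- Follow the error-handling pattern established in `lib/errors`.

## Anti-patterns
- No duplicated validation logic; extend `lib/validate` instead.

## Testing
- Every bug fix includes a regression test that fails without the fix.
```

A file like this is short enough to paste into any session, which is exactly what makes it useful: it is the externalized memory the amnesiac collaborator lacks.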
Part 4: Guarding Against the Path of Least Resistance
AI will absolutely take the easy way out. Ask it to add a feature, and it might duplicate existing logic rather than extend it. Ask it to fix a bug, and it might patch the symptom rather than address the cause. This isn't malice—it's optimization for the immediate request without awareness of the broader context.
Sound familiar? This is exactly what happens when you give an engineer a ticket without context about the system, without time to explore the codebase, without understanding of the long-term direction.
The solution is the same in both cases: create guardrails.
- Establish a pre-commit checklist that includes scanning for duplication
- Build your CI pipeline to catch the patterns you know are problematic
- State constraints upfront: "Before writing new code, check if similar functionality exists. Prefer extending existing utilities over creating new ones. Follow the patterns established in [specific file]."
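As one example of the first two guardrails, a pre-commit or CI step can do a cheap mechanical scan for duplication. The sketch below (a hypothetical check, not part of any established tool) flags top-level function names defined in more than one Python file—a crude but useful proxy for the "AI reinvented a utility that already exists" failure mode:

```python
# Hypothetical duplication guardrail: flag top-level function names that
# are defined in more than one file under a source tree. The directory
# layout and the idea of failing the commit on matches are assumptions;
# adapt both to your project.
import ast
from collections import defaultdict
from pathlib import Path


def find_duplicate_functions(root: str) -> dict[str, list[str]]:
    """Map each top-level function name to the files defining it,
    keeping only names that appear in two or more files."""
    definitions: dict[str, list[str]] = defaultdict(list)
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in tree.body:  # top-level definitions only
            if isinstance(node, ast.FunctionDef):
                definitions[node.name].append(str(path))
    return {name: files for name, files in definitions.items() if len(files) > 1}
```

Wired into a pre-commit hook or CI job, a nonzero count of duplicates can block the merge and force the conversation: extend the existing utility, or justify the new one.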
You're not just prompting—you're establishing engineering standards. The AI becomes a team member who follows those standards as long as you're clear about what they are.
Part 5: The Vigilance Tax
Yes, vibe coding requires constant vigilance. You need to review what's being produced. You need to verify that the AI understood your intent. You need to catch the moments where efficiency became sloppiness.
But let's be honest about the alternative. Traditional software development also requires constant vigilance:
- Code reviews exist because we don't trust unreviewed code
- QA exists because we don't trust that developers caught everything
- Staging environments exist because we don't trust that production will behave like development
The Real Difference: The vigilance tax for AI isn't new. It's just different. And in some ways, it's easier—the AI doesn't get defensive when you reject its approach, doesn't argue about stylistic preferences, doesn't take it personally when you ask it to redo something completely.
Part 6: Why This Is Still the Way Forward
Despite everything I've described—the memory limitations, the tendency toward duplication, the need for constant oversight—I believe vibe coding represents a fundamental shift in how software gets built.
The reason is simple: the failure modes are manageable, and the success modes are extraordinary.
When it works, and it works more often than critics suggest, you can:
- Go from idea to working implementation in a fraction of the traditional time
- Explore approaches you wouldn't have had time to consider
- Maintain codebases in languages you're not expert in
- Focus on the what and why while delegating much of the how
And here's the part that makes me genuinely optimistic: every limitation I've described is a current limitation, not a fundamental one. Context windows are expanding. Memory and retrieval systems are improving. The models themselves are getting better at maintaining consistency and asking clarifying questions.
The vibe coders who develop good habits now—externalizing context, establishing clear standards, maintaining appropriate skepticism—aren't just managing today's limitations. They're building skills that will compound as the tools improve.
The Bottom Line
Vibe coding isn't magic, and it isn't going to replace the need for engineering judgment. What it does is shift where that judgment gets applied.
- Less time writing boilerplate. More time on architecture.
- Less time remembering syntax. More time on system design.
- Less time on implementation details. More time ensuring those implementations actually serve your users.
The critics who say "AI code requires too much oversight to be worthwhile" are measuring against an imaginary baseline where human code doesn't require oversight. It does. It always has.
The question isn't whether to trust AI with your codebase. It's whether you can build the systems—the documentation, the guardrails, the review processes—that let you trust the combination of AI capability and human judgment.
That combination, done right, is more powerful than either alone.
And it's only going to get better.