Relevat Digital

Vibe Coding Has a Place. It Is Not Production.

AI can now write enough working code that whole apps appear in a weekend. That is wonderful for prototypes and dangerous for anything else. Here is where we draw the line.

2 min read

Vibe coding - describing what you want and letting an AI write the code, often without reading it - is real, useful, and here to stay. It is also being applied in places it does not belong, with consequences that will land six months from now. Here is the line we draw on real projects.

Where vibe coding wins

For exploration, prototyping, and internal one-shot tools, vibe coding is genuinely transformative. A founder who could not write code can now build a working prototype to validate a customer conversation. An ops manager can spin up an internal dashboard in an afternoon. A senior engineer can sketch three architectures before lunch instead of one.

The value is speed-of-thought. The cost - that the code is not necessarily understood by anyone, including the model that wrote it - is acceptable when the artefact is disposable, single-user, or short-lived.

Where it goes wrong

The trouble starts when vibe-coded artefacts get treated like production systems. We have seen the same pattern repeatedly in 2025 and 2026: a prototype that worked surprisingly well gets pushed into customer use, nobody ever reads the code carefully, and six months in something breaks in a way nobody can debug, because nobody understands the system.

The failure modes are predictable. Security holes that a careful reviewer would have caught. Database schemas that work for ten users and collapse at ten thousand. Integration logic that handles the happy path and silently corrupts data on edge cases. None of this is the AI’s fault. It is what happens when “it works” is treated as the same thing as “it is correct.”
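The "silently corrupts data on edge cases" failure is worth making concrete. Below is a minimal, hypothetical sketch (the function names and formats are invented for illustration, not from any real client codebase): a happy-path parser that swallows anything unexpected by defaulting to zero, next to a reviewed version that normalises the formats it knows about and fails loudly on everything else.

```python
from decimal import Decimal

def parse_cents_naive(raw: str) -> int:
    """Vibe-coded style: works for '19.99', silently turns anything
    unexpected (e.g. '1,299.00' with a thousands separator) into 0."""
    try:
        return int(round(float(raw) * 100))
    except ValueError:
        return 0  # the silent-corruption step: bad input becomes a real-looking value

def parse_cents_reviewed(raw: str) -> int:
    """Reviewed style: normalise known formats, reject everything else
    loudly so bad data never reaches the database."""
    cleaned = raw.strip().replace(",", "")
    if not cleaned or not cleaned.replace(".", "", 1).lstrip("-").isdigit():
        raise ValueError(f"unparseable amount: {raw!r}")
    return int(Decimal(cleaned) * 100)
```

The naive version passes every demo with clean input; the corruption only shows up months later, when someone asks why a batch of orders imported as zero.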

The tests that decide it

Before promoting any vibe-coded system to production, we ask three questions. If any answer is “no”, the code needs to be properly reviewed and often largely rewritten before going live.

  • Can a human engineer on the team explain, in detail, what every important file does and why?
  • Are there tests that would catch a regression on a real edge case, not just the happy path?
  • Is there a clear plan for who debugs this when it breaks at 2am six months from now?
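The second question above is the easiest to check mechanically. A hedged sketch of the difference, using a tiny invented helper (the function and the inputs are illustrative, not from any real project): the happy-path test would pass even for a broken implementation, while the edge-case tests pin down the behaviour that actually matters.

```python
def dedupe_emails(emails):
    """Hypothetical helper: case-insensitive de-duplication,
    keeping the first occurrence, dropping blanks."""
    seen, out = set(), []
    for e in emails:
        key = e.strip().lower()
        if key and key not in seen:
            seen.add(key)
            out.append(e.strip())
    return out

# Happy-path test: clean, distinct inputs. Almost any implementation passes.
assert dedupe_emails(["a@x.com", "b@x.com"]) == ["a@x.com", "b@x.com"]

# Edge-case tests: mixed case, trailing whitespace, empty strings.
# These are the regressions that bite in production.
assert dedupe_emails(["A@x.com", "a@x.com "]) == ["A@x.com"]
assert dedupe_emails(["", "b@x.com"]) == ["b@x.com"]
```

If the only assertions a vibe-coded system ships with look like the first one, the honest answer to question two is "no".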

The hybrid that actually works

The most productive teams we work with use AI heavily, but not blindly. They let the model write the first draft, then a human reads every line, restructures what does not fit the codebase’s conventions, and is accountable for what ships. The throughput gain is still enormous - often 2x or 3x - but the resulting code is understood, maintained, and trustworthy.

This is a discipline, not a tool, and it does not happen by accident. It comes from a culture that treats AI as a very fast junior engineer whose work always needs review.

How we help

We build production AI systems and we use AI heavily to do it - while taking responsibility for understanding everything we ship. For clients with a prototype that needs to graduate to a real system, we run a hardening pass: review, refactor, test, and make it operable. If you have something that “kind of works” and you are about to put it in front of customers, that is exactly the moment to bring in another set of eyes.

Tags

#AI #VibeCoding #Engineering #Strategy

Want to talk?

Working on something similar?

A 30-minute call is usually enough. We respond within one business day.