Would you trust a “vibe coder” with your life?
Without a doubt, AI-powered coding tools are going to revolutionize the industry, and their future potential is undeniable. They’re a prime example of AI granting people superpowers - a compelling pitch for a killer AI use case.
After spending years studying for my Bachelor’s in Computer Science and thousands of hours writing code, I can attest that tools like Cursor and Windsurf are game-changers. Even as a founder who’s built mobile apps, front ends, back ends, and platforms across various programming languages, I’d have killed for access to such superpowers back when I was running my startup.
And yet… I’ve never, ever written anything bug-free (excluding code containing the words Hello World).
Bugs are part of the journey
Sometimes they are known - a trade-off to ship something quickly, commented for you to resolve later.
Sometimes they are surprising - why exactly does that scroll screw up after 10,000 rows?
But invariably, as the programmer, I always understood how something worked - or could break it down, troubleshoot, and ask others smarter than me for an assist. I was responsible for the code I shipped.
As I talk to friends, colleagues, and hobbyists about their growing ability to “vibe code” without fully grasping the underlying code, I wonder about the future implications of what gets promoted to production for use by everyday users like you or me.
Would I trust it with my life?
- Would I trust a vibe coder to build an advanced driver assistance system that I’d bet my life on? Not today.
- How about a robotic vision algorithm for navigating an autonomous forklift? I’m skeptical.
- What happens when a vibe-coded app leaks sensitive information, or vibe-coded firmware grants attackers root access to devices on my home network?
I would like to believe vibe coders have a greater burden of responsibility to test their solutions thoroughly, given their potential lack of understanding of the code. However, I’m a realist about human nature.
The vibe coder is after speed. They are the embodiment of “move quickly, learn, and iterate.”
Should we really believe they intend to put their black-box AI application through rigorous testing and hold up their launch?
Where does responsibility lie?
I firmly believe AI-coded solutions and AI-assistive coding tools are amazing productivity accelerants. Still, given a choice, I’d prefer a solution programmed by a human (or an AI-assisted human) rather than an AI directed by a human - at least with the current state of things.
The real concern I’ve been grappling with is that soon, we’ll never know if an AI or a human wrote the code… so where exactly does the responsibility lie?