13 Bugs That Would Have Been Shockingly Easy to Ship When Vibe Coding

I was feeling that coding flow state last week. You know the one: fingers flying across the keyboard, Cursor suggesting the perfect snippets, ChatGPT o3 filling in my function logic. Pure development bliss.

Then I caught myself before hitting deploy.


This content originally appeared on DEV Community and was authored by Monty

What if I ran one quick manual audit first?

Thank goodness I did. Hidden beneath the surface of my seemingly perfect code lurked thirteen bugs that would have wreaked absolute havoc in production. The scariest part? My AI coding companions happily generated every single one.

Let me walk you through what I found, why AI tools generate these bugs, and how you can catch them before they slip into your codebase.

1: Secrets Scattered in process.env Without Validation

My AI partner peppered process.env.API_KEY references throughout the codebase. Looks innocent enough, right?

Problem is, there's zero validation. If that environment variable is missing or empty, you'll face mysterious runtime failures when you least expect them.

Why AI does this: Most docs and Stack Overflow answers show the simple one-liner without validation, so that's what AI tools replicate.

The fix: Centralize all environment variables in a single config.ts file that validates everything at startup:

import { z } from "zod";

export const cfg = z
  .object({ OPENAI_KEY: z.string().min(1) })
  .parse(process.env);

2: Stored XSS Through Unsanitized Markdown

The blog feature looked great until I noticed the AI-generated code rendered markdown directly with dangerouslySetInnerHTML without any sanitization.

One malicious comment later, and attackers could inject scripts running in your users' browsers.

Why AI does this: Tutorial code often skips sanitization for brevity, and AI training data is full of these examples.

The fix: Always pipe content through rehype-sanitize or DOMPurify before rendering.
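
As a toy illustration of what sanitization buys you, here's a minimal HTML-escaping helper. It is not a substitute for a real sanitizer like DOMPurify, which also handles attributes, URLs, and nested markup; it only shows the principle.

```typescript
// Toy escaper: turn HTML-special characters into entities so user
// content can never become live markup. Use a real sanitizer in production.
export function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```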

3: Admin Endpoints Missing Role Checks

The API had a beautiful set of admin endpoints for user management. Only problem? Nothing actually checked if the caller was an admin.

Why AI does this: Generic CRUD scaffolding rarely includes role-based access control by default.

The fix: Wrap sensitive handlers with middleware that verifies permissions:

export const getAllUsers = requireRole("admin", async () => { ... });
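
The requireRole wrapper above is left abstract, so here's a hedged, framework-agnostic sketch of what it might look like. The request shape and the thrown error are assumptions; real middleware would translate the failure into a 403 response.

```typescript
// Hypothetical request shape carrying the authenticated user.
type Req = { user?: { role?: string } };

// Wrap a handler so it only runs when the caller has the required role.
export function requireRole<T>(
  role: string,
  handler: (req: Req) => Promise<T>,
): (req: Req) => Promise<T> {
  return async (req) => {
    if (req.user?.role !== role) {
      throw new Error("Forbidden"); // surface as a 403 in real middleware
    }
    return handler(req);
  };
}
```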

4: Webhooks That Trust Every Payload

The webhook handler would process any JSON body sent its way without verifying the sender. Free money for anyone who figures out your endpoint URL!

Why AI does this: Quick-start examples focus on parsing JSON, not security verification.

The fix: Verify signatures, cap body size, and reject unsigned payloads:

app.post(
  "/webhook",
  express.raw({ type: "application/json", limit: "100kb" }),
  verifySignature,
  handleWebhook
);
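
The verifySignature middleware is left abstract above. Here's a sketch of the underlying check, assuming an HMAC-SHA256 scheme like Stripe's or GitHub's; the exact header name, encoding, and secret handling vary by provider.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compare the sender's signature against an HMAC of the raw body.
// Uses a constant-time comparison to avoid timing attacks.
export function isValidSignature(
  rawBody: Buffer,
  signature: string,
  secret: string,
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const given = Buffer.from(signature);
  const want = Buffer.from(expected);
  return given.length === want.length && timingSafeEqual(given, want);
}
```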

5: Database Updates on Every Token

The chat UI streamed tokens nicely, but behind the scenes, each token triggered a separate database write. Hello, performance nightmare.

Why AI does this: Streaming examples usually just print to console, so AI naively swaps console.log for database calls without considering the performance impact.

The fix: Buffer tokens in memory and flush once per response:

let buffer = "";
stream.on("data", t => buffer += t);
stream.on("end", () =>
  prisma.message.update({ where: { id }, data: { text: buffer } })
);

6: New API Client on Every Request

Every incoming request spawned a fresh OpenAI client instance. That's not only wasteful, it can also push you into rate limits faster.

Why AI does this: Single-file examples instantiate clients inside handlers for simplicity.

The fix: Memoize clients at module scope:

let openai: OpenAI | undefined;
export const getOpenAI = () =>
  (openai ??= new OpenAI({ apiKey: cfg.OPENAI_KEY }));

7: Empty String Credential Fallbacks

Spotted this gem: const apiKey = process.env.API_KEY || "". Silent failure waiting to happen.

Why AI does this: AI adds these fallbacks to silence TypeScript errors about potential undefined values.

The fix: Validate credentials upfront and throw explicit errors when missing. No fallbacks.
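
A minimal sketch of the fail-fast approach (the helper name is my own):

```typescript
// Read a required environment variable, throwing at startup if absent,
// instead of silently falling back to "".
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("API_KEY"); // fails loudly, not at first use
```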

8: No Rate Limiting on Auth Routes

The login endpoint would happily process unlimited attempts, making it a juicy target for credential stuffing attacks.

Why AI does this: Authentication tutorials prioritize getting something working quickly over security.

The fix: Add rate limiting middleware and use generic error messages that don't reveal which part of the credentials was wrong.
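
To make that concrete, here's a minimal in-memory sliding-window limiter. This is an illustration only: in production, reach for a maintained middleware such as express-rate-limit backed by a shared store, since in-process state doesn't survive restarts or scale across instances.

```typescript
// Track recent attempt timestamps per key (e.g. IP or username).
const attempts = new Map<string, number[]>();

// Returns true when the caller has exceeded `limit` attempts in the window.
// The `now` parameter exists only to make the logic testable.
export function isRateLimited(
  key: string,
  limit = 5,
  windowMs = 60_000,
  now = Date.now(),
): boolean {
  const recent = (attempts.get(key) ?? []).filter(t => now - t < windowMs);
  if (recent.length >= limit) {
    attempts.set(key, recent);
    return true;
  }
  recent.push(now);
  attempts.set(key, recent);
  return false;
}
```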

9: Missing Security Headers

The Express app lacked Content Security Policy, HSTS, and other critical security headers.

Why AI does this: Quick-starts emphasize routing functionality over security configuration.

The fix: Enable Helmet middleware with appropriate security policies.

10: Long Polling With Fixed Intervals

The status checking logic used setInterval(checkStatus, 1000) with no backoff strategy. Server under load? You just made it worse.

Why AI does this: Sample polling code uses fixed intervals for simplicity.

The fix: Implement exponential backoff and early exit conditions.
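
A sketch of what that might look like; the base delay, cap, and attempt count are arbitrary choices:

```typescript
// Double the delay on each attempt, capped at a maximum.
export function nextDelay(attempt: number, baseMs = 1_000, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Poll until `check` yields a value, backing off between attempts,
// and exit early as soon as we have a result.
export async function pollWithBackoff<T>(
  check: () => Promise<T | undefined>,
  maxAttempts = 8,
): Promise<T | undefined> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check();
    if (result !== undefined) return result; // early exit
    await new Promise(r => setTimeout(r, nextDelay(attempt)));
  }
  return undefined;
}
```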

11: Queries on Unindexed Columns

The search feature looked great until I realized it was filtering on non-indexed columns. Table scans on every query!

Why AI does this: Schema examples rarely include index creation, focusing instead on data structure.

The fix: Add appropriate indexes to your schema:

model Post {
  // fields here
  @@index([status, createdAt])
}

12: Database Checks on Every Streamed Token

The streaming logic checked for request cancellation by querying the database on every token. That's potentially thousands of unnecessary database hits per request.

Why AI does this: It replaces simple in-memory flags with database lookups without considering performance implications.

The fix: Check once every few seconds or use a pub-sub channel for real-time cancellation signals.
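
As a sketch, the periodic variant could look like this. The interval is an arbitrary choice, and the `now` parameter exists only to make the throttle testable:

```typescript
// Wrap an expensive cancellation lookup so it runs at most once per interval,
// instead of once per streamed token.
export function makeThrottledCheck(
  isCancelled: () => Promise<boolean>,
  intervalMs = 3_000,
) {
  let lastChecked = -Infinity;
  let cancelled = false;
  return async (now: number = Date.now()): Promise<boolean> => {
    if (!cancelled && now - lastChecked >= intervalMs) {
      lastChecked = now;
      cancelled = await isCancelled();
    }
    return cancelled;
  };
}
```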

13: Console Logging Leaking Sensitive Data

Found console.log(user) sprinkled throughout the codebase, potentially dumping passwords, tokens, and personal info into logs.

Why AI does this: Example code uses console logs for debugging, and AI continues this pattern.

The fix: Use a structured logger like Pino with proper redaction of sensitive fields.
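
With Pino this is a one-line redact option. As a library-free illustration of the idea, here's a toy redactor; the field list is an assumption, and real loggers also handle nested paths:

```typescript
// Replace well-known sensitive fields before anything reaches the logs.
const SENSITIVE_FIELDS = new Set(["password", "token", "apiKey", "secret"]);

export function redact(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) =>
      [key, SENSITIVE_FIELDS.has(key) ? "[REDACTED]" : value],
    ),
  );
}
```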

When AI Becomes Your Pair Programmer

Coding with AI tools is magical.

The speed, the flow, the satisfaction of watching complete functions materialize before your eyes... it's addictive.

But we need to remember that these tools are trained on mountains of example code that prioritizes demonstration over production-readiness.

Most tutorials skip security, performance optimizations, and error handling because they're focused on teaching core concepts.

This means your AI pair programmer has learned the same shortcuts.

Pre-Deployment Checklist

After my close call, I created a quick audit routine for AI-assisted code:

  1. Schedule a dedicated review phase after coding with AI

  2. Check all environment variable usage for validation

  3. Audit user input handling and content rendering

  4. Verify all authentication and authorization logic

  5. Look for performance bottlenecks in database access patterns

  6. Review client instantiation patterns

  7. Check for proper error handling

  8. Test with security headers and best practices

The good news? Once you know what to look for, these issues are easy to spot and fix.

AI coding tools are still your friends. Just make sure you're the one doing the final review before hitting deploy.

Flow fast, then verify carefully.

What bugs have you caught in your AI-generated code? Drop me a note if you've spotted others I should add to my checklist!

