Vibe Coding Security: Best Practices for AI-Generated Code
Vibe coding -- the practice of describing what you want in natural language and letting AI tools like Cursor, GitHub Copilot, or Claude write the code -- has fundamentally changed how software gets built. In 2026, a single developer can ship in a week what used to take a team a month.
But there is a catch. AI models generate code that works, but working code is not the same as secure code. Models are trained on millions of public repositories, including repositories full of SQL injection vulnerabilities, hardcoded secrets, and broken authentication patterns. When you vibe code without reviewing security, you are shipping the internet's average security practices -- which are terrible.
This guide covers 18 security best practices for AI-generated code, with concrete vulnerable and secure code examples you can apply today.
Why AI-Generated Code Has Security Risks
Before diving into practices, it helps to understand why AI code is often insecure:
- Training data includes vulnerable code. Models learn from GitHub, Stack Overflow, and blog tutorials. Much of this code prioritizes simplicity over security.
- AI optimizes for "does it work?" not "is it safe?" The model's reward signal is whether the code compiles and runs, not whether it resists attack.
- Context window limitations. AI generates code one file at a time. It cannot see your full architecture, so it may create inconsistent security boundaries.
- Developers skip review. When code appears instantly, the temptation to ship without reading it is strong. This is where vulnerabilities slip through.
The solution is not to stop using AI -- it is to review AI output through a security lens. Here are the practices that matter most.
Input Validation & Sanitization
1. Never Trust User Input -- Even When AI Writes the Handler
AI-generated API routes often accept and use request body data directly without validation.
Vulnerable:
// AI-generated code that trusts input blindly
export async function POST(request: Request) {
const { email, role } = await request.json();
await db.user.create({
data: { email, role }, // User can set role to "admin"
});
return Response.json({ success: true });
}
Secure:
import { z } from "zod";
const CreateUserSchema = z.object({
email: z.string().email().max(255),
name: z.string().min(1).max(100),
// Role is NOT accepted from user input
});
export async function POST(request: Request) {
const body = await request.json();
const result = CreateUserSchema.safeParse(body);
if (!result.success) {
return Response.json(
{ error: result.error.flatten() },
{ status: 400 }
);
}
await db.user.create({
data: { ...result.data, role: "user" }, // Role set server-side
});
return Response.json({ success: true });
}
Use Zod or a similar schema validation library on every API endpoint. Never allow users to set privileged fields like role, isAdmin, or credits.
2. Sanitize HTML Output to Prevent XSS
AI often generates code that renders user content without escaping it.
Vulnerable:
// AI-generated component that renders raw HTML
function Comment({ content }: { content: string }) {
return <div dangerouslySetInnerHTML={{ __html: content }} />;
}
Secure:
import DOMPurify from "isomorphic-dompurify";
function Comment({ content }: { content: string }) {
const sanitized = DOMPurify.sanitize(content);
return <div dangerouslySetInnerHTML={{ __html: sanitized }} />;
}
Better yet, avoid dangerouslySetInnerHTML entirely unless you have a genuine need for rich text rendering. React escapes content by default -- the vulnerability only appears when you bypass that protection.
3. Validate and Restrict File Uploads
AI-generated upload handlers rarely check file types, sizes, or content.
Vulnerable:
export async function POST(request: Request) {
const formData = await request.formData();
const file = formData.get("file") as File;
// No validation -- accepts any file type and size
const buffer = Buffer.from(await file.arrayBuffer());
await writeFile(`/uploads/${file.name}`, buffer);
return Response.json({ path: `/uploads/${file.name}` });
}
Secure:
import { randomUUID } from "crypto";
import { writeFile } from "fs/promises";
import path from "path";
const ALLOWED_TYPES = ["image/jpeg", "image/png", "image/webp"];
const MAX_SIZE = 5 * 1024 * 1024; // 5MB
export async function POST(request: Request) {
const formData = await request.formData();
const file = formData.get("file") as File;
if (!file || !ALLOWED_TYPES.includes(file.type)) {
return Response.json({ error: "Invalid file type" }, { status: 400 });
}
if (file.size > MAX_SIZE) {
return Response.json({ error: "File too large" }, { status: 400 });
}
// Use a random filename to prevent path traversal
const ext = path.extname(file.name);
const safeName = `${randomUUID()}${ext}`;
const buffer = Buffer.from(await file.arrayBuffer());
await writeFile(path.join("/uploads", safeName), buffer);
return Response.json({ path: `/uploads/${safeName}` });
}
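To see why the random filename matters, here is a small sketch of the path traversal the naive handler allows (POSIX paths assumed):

```typescript
import path from "path";

// path.join happily resolves ".." segments, so a user-supplied
// filename can escape the upload directory entirely.
const escaped = path.join("/uploads", "../../etc/passwd");
// escaped is "/etc/passwd" on POSIX systems

// Defense in depth: even with random names, resolve the final path
// and confirm it still lives under the upload root before writing.
function isInsideDir(root: string, filename: string): boolean {
  const resolved = path.resolve(root, filename);
  return resolved.startsWith(path.resolve(root) + path.sep);
}
```

The `isInsideDir` helper is an illustration, not a library API; combine it with the random-filename approach above rather than relying on either alone.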
Authentication & Authorization
4. Check Auth on Every Protected Route
AI frequently generates API routes without any authentication checks, especially when you prompt it to "create a CRUD API."
Vulnerable:
// AI assumed this was an internal endpoint
export async function DELETE(
request: Request,
{ params }: { params: { id: string } }
) {
await db.project.delete({ where: { id: params.id } });
return Response.json({ success: true });
}
Secure:
import { getServerSession } from "@/lib/auth";
export async function DELETE(
request: Request,
{ params }: { params: { id: string } }
) {
const session = await getServerSession();
if (!session?.user) {
return Response.json({ error: "Unauthorized" }, { status: 401 });
}
// Verify the user owns this resource
const project = await db.project.findUnique({
where: { id: params.id },
});
if (!project || project.userId !== session.user.id) {
return Response.json({ error: "Forbidden" }, { status: 403 });
}
await db.project.delete({ where: { id: params.id } });
return Response.json({ success: true });
}
Notice the two-step check: first authenticate (is the user logged in?), then authorize (does this user own this resource?). AI almost never generates the authorization step.
5. Use Constant-Time Comparison for Tokens
When AI generates token or API key verification, it typically uses simple string comparison, which is vulnerable to timing attacks.
Vulnerable:
if (request.headers.get("x-api-key") === process.env.API_KEY) {
// Process request
}
Secure:
import { timingSafeEqual } from "crypto";
function verifyApiKey(provided: string, expected: string): boolean {
const a = Buffer.from(provided);
const b = Buffer.from(expected);
if (a.length !== b.length) return false;
return timingSafeEqual(a, b);
}
const apiKey = request.headers.get("x-api-key") ?? "";
if (!verifyApiKey(apiKey, process.env.API_KEY!)) {
return Response.json({ error: "Invalid API key" }, { status: 401 });
}
6. Never Expose User IDs in Predictable Patterns
AI tends to use auto-incrementing IDs, making it trivial to enumerate all users or resources.
Vulnerable:
// Users can guess other user IDs: /api/users/1, /api/users/2, etc.
const user = await db.user.findUnique({ where: { id: parseInt(id) } });
Secure:
// Use UUIDs -- not guessable
const user = await db.user.findUnique({ where: { id } }); // UUID string
Use UUIDs or CUIDs for all database primary keys. This is a one-line config change in most ORMs.
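In Prisma, for example, that one-line change is a schema default (the model shown is illustrative):

```prisma
model User {
  id    String @id @default(uuid())
  email String @unique
}
```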
API Security
7. Implement Rate Limiting
AI rarely adds rate limiting unprompted. Without it, your API is vulnerable to brute-force attacks, scraping, and abuse.
Vulnerable:
// No rate limiting -- can be called unlimited times
export async function POST(request: Request) {
const { email, password } = await request.json();
const user = await authenticate(email, password);
return Response.json({ token: user.token });
}
Secure:
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(5, "60 s"), // 5 requests per minute
});
export async function POST(request: Request) {
const ip = request.headers.get("x-forwarded-for") ?? "anonymous";
const { success } = await ratelimit.limit(ip);
if (!success) {
return Response.json(
{ error: "Too many requests" },
{ status: 429 }
);
}
const { email, password } = await request.json();
const user = await authenticate(email, password);
return Response.json({ token: user.token });
}
Apply stricter limits to authentication endpoints (5/minute) and looser limits to read endpoints (100/minute).
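If you do not want a hosted service during local development, the fixed-window idea can be sketched in a few lines. This is an in-memory illustration only -- serverless instances do not share memory, so use Redis or similar in production:

```typescript
type WindowState = { count: number; resetAt: number };

// Fixed-window limiter: allow up to `limit` calls per `windowMs` per key.
function createRateLimiter(limit: number, windowMs: number) {
  const windows = new Map<string, WindowState>();
  return (key: string, now: number = Date.now()): boolean => {
    const state = windows.get(key);
    if (!state || now >= state.resetAt) {
      // First request in a fresh window
      windows.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    if (state.count >= limit) return false; // over the limit
    state.count += 1;
    return true;
  };
}

// Stricter limit for auth endpoints, looser for reads
const allowLogin = createRateLimiter(5, 60_000);
const allowRead = createRateLimiter(100, 60_000);
```

Note that a fixed window admits bursts at window boundaries; the sliding-window algorithm used above smooths this out, which is one reason to prefer a library in production.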
8. Validate Content-Type Headers
AI-generated routes rarely check the Content-Type header, which can lead to unexpected parsing behavior.
Secure:
export async function POST(request: Request) {
const contentType = request.headers.get("content-type");
if (!contentType?.includes("application/json")) {
return Response.json(
{ error: "Content-Type must be application/json" },
{ status: 415 }
);
}
// ... process request
}
9. Set Security Headers
AI rarely adds security headers on its own. Define them in your next.config.ts and apply them to every route via the headers() function:
const securityHeaders = [
  { key: "X-Frame-Options", value: "DENY" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Permissions-Policy", value: "camera=(), microphone=()" },
  {
    key: "Strict-Transport-Security",
    value: "max-age=63072000; includeSubDomains; preload",
  },
];
const nextConfig = {
  async headers() {
    return [{ source: "/(.*)", headers: securityHeaders }];
  },
};
export default nextConfig;
Database Security
10. Use Parameterized Queries -- Always
This is the oldest vulnerability in the book, and AI still generates raw SQL concatenation regularly.
Vulnerable:
// AI-generated search endpoint
const results = await db.$queryRawUnsafe(
`SELECT * FROM products WHERE name LIKE '%${searchTerm}%'`
);
Secure:
const results = await db.$queryRaw`
SELECT * FROM products WHERE name LIKE ${"%" + searchTerm + "%"}
`;
Better yet, use your ORM's built-in query builder, which parameterizes automatically:
const results = await db.product.findMany({
where: { name: { contains: searchTerm } },
});
11. Limit Query Results
AI-generated endpoints often return all matching records with no pagination, which enables data scraping and can crash your server.
Secure:
const MAX_PAGE_SIZE = 100;
export async function GET(request: Request) {
const { searchParams } = new URL(request.url);
const page = Math.max(1, parseInt(searchParams.get("page") ?? "1") || 1);
const limit = Math.min(
  MAX_PAGE_SIZE,
  Math.max(1, parseInt(searchParams.get("limit") ?? "20") || 20)
);
const results = await db.product.findMany({
take: limit,
skip: (page - 1) * limit,
select: { id: true, name: true, price: true }, // Only return needed fields
});
return Response.json(results);
}
12. Apply Row-Level Security
Every database query should be scoped to the authenticated user's data. AI almost never adds tenant isolation.
Vulnerable:
// Returns ALL invoices, not just the user's
const invoices = await db.invoice.findMany();
Secure:
const invoices = await db.invoice.findMany({
where: { userId: session.user.id },
});
Dependency Management
13. Audit AI-Suggested Dependencies
When AI suggests installing a package, verify it before running npm install. Check:
- Download count on npm -- packages with fewer than 1,000 weekly downloads are riskier
- Last publish date -- abandoned packages do not get security patches
- Maintainer count -- single-maintainer packages are one compromised account away from a supply chain attack
- License compatibility -- GPL dependencies in proprietary code create legal risk
# Check package info before installing
npm info suspicious-package
npm audit
14. Pin Dependency Versions
AI-generated package.json files use caret ranges (^1.2.3), which auto-update to new minor and patch versions. A compromised update can affect your app.
Vulnerable:
"dependencies": {
"some-package": "^2.0.0"
}
Secure:
"dependencies": {
"some-package": "2.0.0"
}
Use npm ci in production to install exact versions from your lockfile. Run npm audit in your CI pipeline.
Secrets Management
15. Never Hardcode Secrets
AI frequently generates placeholder secrets that developers forget to replace, or it puts secrets directly in source code.
Vulnerable:
// AI-generated JWT signing
const token = jwt.sign(payload, "my-secret-key-123");
Secure:
const secret = process.env.JWT_SECRET;
if (!secret) {
throw new Error("JWT_SECRET environment variable is required");
}
const token = jwt.sign(payload, secret);
16. Do Not Log Sensitive Data
AI-generated error handlers often log the entire request body or user object, which can leak passwords, tokens, or PII into your logging service.
Vulnerable:
catch (error) {
console.error("Login failed:", { email, password, error });
}
Secure:
catch (error) {
console.error("Login failed:", {
email,
error: error instanceof Error ? error.message : "Unknown error"
});
}
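A small redaction helper makes the safe path the default instead of relying on every catch block to get it right. The key names below are illustrative:

```typescript
// Keys whose values must never reach a logging service
const SENSITIVE_KEYS = new Set(["password", "token", "apiKey", "secret"]);

// Shallow redaction: replace sensitive values before logging.
function redact(data: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(data)) {
    safe[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : value;
  }
  return safe;
}

console.error("Login failed:", redact({ email: "a@example.com", password: "hunter2" }));
```

This version is shallow -- nested objects need a recursive variant -- but even a simple allowlist or denylist beats logging raw request bodies.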
17. Validate Environment Variables at Startup
Fail fast if a required secret is missing, rather than discovering it at runtime when a user hits a broken flow.
// src/lib/env.ts
import { z } from "zod";
const envSchema = z.object({
DATABASE_URL: z.string().url(),
STRIPE_SECRET_KEY: z.string().startsWith("sk_"),
STRIPE_WEBHOOK_SECRET: z.string().startsWith("whsec_"),
JWT_SECRET: z.string().min(32),
RESEND_API_KEY: z.string().startsWith("re_"),
});
export const env = envSchema.parse(process.env);
18. Rotate Secrets That AI Has Seen
If you paste a real API key into an AI chat to debug an issue, consider that key compromised. Most AI providers state they do not train on user data, but your key has still left your machine.
Rotate the key immediately. Use a secrets manager (Vercel environment variables, AWS Secrets Manager, or Doppler) rather than .env files committed to git.
How PropelKit Handles Security Out of the Box
Building secure SaaS applications means getting dozens of details right across authentication, payments, API design, and infrastructure. Missing even one creates a vulnerability.
PropelKit is a Next.js 15 SaaS boilerplate that implements these security practices by default:
- Authentication: Supabase Auth with row-level security, session validation on every protected route, and proper CSRF protection
- Input Validation: Zod schemas on all API endpoints with type-safe request parsing
- Rate Limiting: Upstash Redis-based rate limiting on authentication and payment endpoints, with in-memory fallback for development
- Payment Security: Webhook signature verification for Stripe, Razorpay, and DodoPayments with idempotent fulfillment handlers
- Security Headers: X-Frame-Options, Content-Security-Policy, and HSTS configured in next.config.ts
- Error Monitoring: Sentry integration that captures errors without leaking sensitive data
- Environment Validation: All required environment variables are validated at build time
Instead of auditing every line of AI-generated code for security gaps, start with a foundation that has already been hardened. You can focus on building features while the security infrastructure is handled for you.
Conclusion
Vibe coding is not going away -- it is only getting faster and more capable. The developers who thrive will be the ones who use AI for speed while maintaining a strong security review habit.
You do not need to become a security expert. You need a checklist:
- Validate all input with schemas
- Authenticate and authorize every protected endpoint
- Use parameterized queries
- Rate limit your APIs
- Never hardcode secrets
- Audit dependencies before installing them
- Review every line of AI-generated code before shipping
Treat AI as a junior developer who writes fast but needs code review. The code is a starting point, not a finished product. Review it, harden it, and ship with confidence.
Ready to ship your SaaS?
PropelKit gives you everything you need -- auth, payments, AI tools, multi-tenancy, and more. Go from idea to revenue in a day.
Get PropelKit