March 15, 2026 · Variant Systems
10 Anti-Patterns Hiding in Every AI-Generated Codebase
The same 10 bugs show up in every AI-built codebase we audit. TypeScript without runtime validation, orphan migrations, flat auth, and more — with code examples.
We audit AI-generated codebases for a living. Cursor, Claude Code, Copilot, Bolt, Lovable, Replit Agent — we’ve reviewed code from all of them.
The tools are different. The anti-patterns are the same.
After dozens of audits, we’ve cataloged the ten issues that show up in nearly every AI-built codebase. Not edge cases. Not nitpicks. Structural problems that will break your product in production, leak your users’ data, or silently corrupt your business logic.
Here’s what to look for — and what the fix looks like.
1. The Phantom Validation
TypeScript types used as runtime validation. The code has interfaces and types everywhere but zero runtime checks on API inputs.
This is the most common anti-pattern we see. AI tools love TypeScript. They generate beautiful type definitions. Founders see those types and assume their API is validating input. It’s not. TypeScript types evaporate at compile time. They don’t exist at runtime. A malicious request with the wrong shape sails right through.
What AI generates:
interface CreateUserInput {
email: string;
name: string;
role: "admin" | "user";
}
app.post("/api/users", async (req, res) => {
const input: CreateUserInput = req.body;
// TypeScript is happy. Runtime is wide open.
const user = await db.users.create({ data: input });
return res.json(user);
});
What it should look like:
import { z } from "zod";
const CreateUserSchema = z.object({
email: z.string().email(),
name: z.string().min(1).max(200),
role: z.enum(["admin", "user"]),
});
app.post("/api/users", async (req, res) => {
const parsed = CreateUserSchema.safeParse(req.body);
if (!parsed.success) {
return res.status(400).json({ errors: parsed.error.flatten() });
}
const user = await db.users.create({ data: parsed.data });
return res.json(user);
});
The fix is straightforward: use Zod, Valibot, or ArkType to define schemas that validate at runtime. Derive your TypeScript types from those schemas with z.infer<>, not the other way around. Types flow from validation, not from hope.
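The same principle works even without a library. A minimal hand-rolled sketch (the function and type names are ours, purely illustrative): the runtime check is the source of truth, and the static type is derived from its return value, so the two can never drift apart.

```typescript
// Hand-rolled runtime validation: the check runs on every request,
// and the static type is derived from it rather than declared separately.
function parseCreateUserInput(body: unknown) {
  if (typeof body !== "object" || body === null) {
    throw new Error("body must be an object");
  }
  const b = body as Record<string, unknown>;
  if (typeof b.email !== "string" || !b.email.includes("@")) {
    throw new Error("email must be a valid email address");
  }
  if (typeof b.name !== "string" || b.name.length < 1 || b.name.length > 200) {
    throw new Error("name must be 1-200 characters");
  }
  if (b.role !== "admin" && b.role !== "user") {
    throw new Error('role must be "admin" or "user"');
  }
  // The cast is guaranteed safe by the check above.
  return { email: b.email, name: b.name, role: b.role as "admin" | "user" };
}

// Types flow from validation: this type is always in sync with the parser.
type CreateUserInput = ReturnType<typeof parseCreateUserInput>;
```

A schema library gives you the same guarantee with far less boilerplate, which is why we recommend it; the hand-rolled version just makes the mechanism visible.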
2. The Optimistic Auth
Auth middleware exists on early routes but not on routes added later. The AI doesn’t remember your auth requirements between sessions.
This one is subtle because the auth setup looks correct. The middleware is properly implemented. The first batch of routes is protected. But every new prompt session starts fresh. The AI generates new endpoints without the auth wrapper because it wasn’t part of the latest prompt context.
What AI generates:
// routes/users.ts — first session, auth is set up
router.get("/api/users", authMiddleware, async (req, res) => {
const users = await db.users.findMany();
return res.json(users);
});
// routes/reports.ts — added two weeks later, different session
router.get("/api/reports", async (req, res) => {
// No auth. Anyone on the internet can pull your reports.
const reports = await db.reports.findMany();
return res.json(reports);
});
What it should look like:
// middleware/auth.ts — applied at the router level
const protectedRouter = express.Router();
protectedRouter.use(authMiddleware);
// Every route on this router is automatically protected
protectedRouter.get("/api/users", async (req, res) => {
const users = await db.users.findMany();
return res.json(users);
});
protectedRouter.get("/api/reports", async (req, res) => {
const reports = await db.reports.findMany();
return res.json(reports);
});
// Only public routes go on the unprotected router
const publicRouter = express.Router();
publicRouter.post("/api/auth/login", loginHandler);
Default to protected. Make public routes the exception that requires deliberate opt-in. We found this pattern in 3 out of 5 Claude Code projects we audited last quarter.
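You can push the opt-in a step further and make it machine-checked. A hypothetical sketch (the helper and allowlist are ours, not an Express API): route registration fails at startup if a route is declared public without being on an explicit allowlist.

```typescript
// Hypothetical guard: a route can only be public if it is also on an
// explicit allowlist, so a careless `public: true` fails at startup.
const PUBLIC_ROUTES = new Set(["/api/auth/login", "/api/health"]);

type RouteSpec = { path: string; public?: boolean };

function checkRouteAuth(spec: RouteSpec): { path: string; protected: boolean } {
  if (spec.public && !PUBLIC_ROUTES.has(spec.path)) {
    throw new Error(`${spec.path} is declared public but not on the allowlist`);
  }
  // In a real app this decision would pick the protected or public router;
  // here we just report which one the route would land on.
  return { path: spec.path, protected: !spec.public };
}
```

The point is the failure mode: a new AI-generated endpoint that forgets auth lands on the protected router by default, and one that wrongly claims to be public crashes the boot instead of shipping.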
3. The Assertion Mirage
Tests that use expect(result).toBeDefined() without checking actual business logic. Tests that literally cannot fail.
AI tools are great at generating tests. The file structure is correct, the imports are right, the test descriptions read well. But the assertions check that something exists, not that it’s correct. A function that returns null, undefined, or a completely wrong object will still pass.
What AI generates:
test("createOrder processes payment", async () => {
const order = await createOrder({
items: [{ id: "abc", quantity: 2 }],
paymentMethod: "card",
});
expect(order).toBeDefined();
expect(order.status).toBeTruthy();
});
What it should look like:
test("createOrder charges correct total and sets status to confirmed", async () => {
const order = await createOrder({
items: [{ id: "abc", quantity: 2, unitPrice: 1500 }],
paymentMethod: "card",
});
expect(order.totalCents).toBe(3000);
expect(order.status).toBe("confirmed");
expect(order.chargeId).toMatch(/^ch_/);
expect(mockPaymentGateway.charge).toHaveBeenCalledWith(
expect.objectContaining({ amount: 3000, currency: "usd" }),
);
});
test("createOrder fails gracefully on declined card", async () => {
mockPaymentGateway.charge.mockRejectedValue(new CardDeclinedError());
const order = await createOrder({
items: [{ id: "abc", quantity: 2, unitPrice: 1500 }],
paymentMethod: "card",
});
expect(order.status).toBe("payment_failed");
expect(order.chargeId).toBeNull();
});
Good tests assert specific business outcomes. They check exact values, verify side effects, and test failure paths. If your test suite can’t catch a bug where every order is charged $0, the tests aren’t doing anything.
4. The God Prompt Pattern
One massive component or function that does everything because it was generated from a single long prompt. 500-line React components. 300-line API handlers.
When you describe an entire feature in one prompt, the AI generates it in one function. Fetching data, transforming it, handling errors, managing state, rendering UI — all in a single monolith.
What AI generates:
// 400+ lines. Data fetching, business logic, error handling,
// and rendering all jammed into one component.
export default function Dashboard() {
const [users, setUsers] = useState([]);
const [revenue, setRevenue] = useState(0);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
const [dateRange, setDateRange] = useState({ start: null, end: null });
const [filters, setFilters] = useState({});
const [sortBy, setSortBy] = useState("date");
const [page, setPage] = useState(1);
useEffect(() => {
// 50 lines of data fetching with nested try/catch
}, [dateRange, filters, sortBy, page]);
const handleExport = async () => {
// 40 lines of CSV generation
};
const calculateMetrics = () => {
// 60 lines of business logic
};
return (
// 200 lines of JSX with inline conditionals everywhere
);
}
What it should look like:
// hooks/useDashboardData.ts
export function useDashboardData(filters: DashboardFilters) {
return useQuery({
queryKey: ["dashboard", filters],
queryFn: () => fetchDashboardData(filters),
});
}
// lib/metrics.ts
export function calculateMetrics(data: DashboardData): Metrics {
// Business logic is testable in isolation
}
// components/Dashboard.tsx
export default function Dashboard() {
const [filters, setFilters] = useState<DashboardFilters>(defaultFilters);
const { data, isLoading, error } = useDashboardData(filters);
const metrics = data ? calculateMetrics(data) : null;
if (isLoading) return <DashboardSkeleton />;
if (error) return <DashboardError error={error} />;
return (
<DashboardLayout>
<MetricsPanel metrics={metrics} />
<UsersTable users={data.users} />
<ExportButton data={data} />
</DashboardLayout>
);
}
Break prompts into smaller pieces. Generate the data layer, the business logic, and the UI separately. Or generate the monolith first and immediately ask the AI to refactor it into composable pieces. The second approach works surprisingly well.
5. The Secret Sprinkle
API keys and connection strings hardcoded in source because the founder pasted them into the prompt. Sometimes in .env, sometimes directly in code, sometimes both with different values.
This is the one that makes us wince. We’ve found live Stripe secret keys, SendGrid API keys, and database connection strings committed directly to source code. Not in .env.example. In the actual source files. Because the founder’s prompt said “connect to my database at postgres://user:password@host/db” and the AI dutifully hardcoded it.
What AI generates:
// lib/db.ts
const pool = new Pool({
connectionString: "postgres://admin:s3cretPa$$@db.example.com:5432/myapp",
});
// lib/email.ts
const sgMail = require("@sendgrid/mail");
sgMail.setApiKey("SG.abc123xyz789-realkey");
// .env (also exists, with a DIFFERENT database password)
DATABASE_URL=postgres://admin:oldpassword@db.example.com:5432/myapp
What it should look like:
// lib/db.ts
const pool = new Pool({
connectionString: requireEnv("DATABASE_URL"),
});
// lib/env.ts
export function requireEnv(key: string): string {
const value = process.env[key];
if (!value) {
throw new Error(`Missing required environment variable: ${key}`);
}
return value;
}
Use a requireEnv helper that throws on startup if a variable is missing. Never paste credentials into prompts. If you already have, rotate every key immediately — they’re in your shell history, your AI tool’s context window, and possibly the AI provider’s logs.
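To fail at boot rather than on the first request, check every required variable up front. A sketch of what that could look like (the variable names are illustrative):

```typescript
// Validate all required environment variables once at startup, so a
// missing secret fails the deploy instead of the first user request.
const REQUIRED_ENV = ["DATABASE_URL", "SENDGRID_API_KEY"] as const;

function loadConfig(env: Record<string, string | undefined> = process.env) {
  const missing = REQUIRED_ENV.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    databaseUrl: env.DATABASE_URL as string,
    sendgridApiKey: env.SENDGRID_API_KEY as string,
  };
}
```

Call loadConfig() once in your entrypoint and pass the result around, instead of reaching into process.env from a dozen files.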
6. The Duplicate Divergence
Three different implementations of the same utility because each was generated in a different prompt session. Date formatting, error handling, API response wrapping — duplicated and slightly different each time.
Each prompt session generates code in isolation. The AI doesn’t know you already have a formatDate function in utils/date.ts when it generates a new feature that needs date formatting. So it writes another one. Inline. With slightly different behavior.
What AI generates:
// utils/format.ts (session 1)
export function formatDate(date: Date): string {
return date.toLocaleDateString("en-US");
}
// components/InvoiceList.tsx (session 2)
function formatInvoiceDate(d: string) {
return new Date(d).toISOString().split("T")[0];
}
// routes/reports.ts (session 3)
const fmtDate = (ts: number) =>
new Intl.DateTimeFormat("en-US", {
year: "numeric",
month: "short",
day: "numeric",
}).format(new Date(ts));
Three formats. Three input types. Three outputs. Your invoices show “3/15/2026”, your reports show “Mar 15, 2026”, and your API returns “2026-03-15”. Good luck debugging timezone issues.
What it should look like:
// lib/dates.ts — single source of truth
import { format as fnsFormat } from "date-fns";
export function formatDate(
input: Date | string | number,
style: "short" | "long" | "iso" = "short",
): string {
const date = new Date(input);
switch (style) {
case "iso":
return fnsFormat(date, "yyyy-MM-dd");
case "long":
return fnsFormat(date, "MMM d, yyyy");
case "short":
return fnsFormat(date, "MM/dd/yyyy");
}
}
Before starting a new prompt session, tell the AI about your existing utilities. Or after generation, grep for duplicates. formatDate, handleError, apiResponse, parseQuery — these are the usual suspects.
7. The Missing Middle
Happy path works perfectly. Error path returns generic 500. No distinction between “not found”, “unauthorized”, “validation failed”, and “server error”.
AI-generated code handles errors in the most literal sense: it has try/catch blocks. But everything caught becomes the same generic response. Your frontend can’t show useful error messages because the API doesn’t send any.
What AI generates:
app.get("/api/orders/:id", async (req, res) => {
try {
const order = await db.orders.findUnique({
where: { id: req.params.id },
});
return res.json(order);
} catch (error) {
return res.status(500).json({ message: "Something went wrong" });
}
});
What happens when order is null? The API returns null with a 200 status. What happens when the user doesn’t own this order? It returns the order anyway. What happens when the ID format is invalid? 500.
What it should look like:
app.get("/api/orders/:id", async (req, res) => {
const { id } = req.params;
if (!isValidUuid(id)) {
return res.status(400).json({
code: "INVALID_ID",
message: "Order ID must be a valid UUID",
});
}
const order = await db.orders.findUnique({ where: { id } });
if (!order) {
return res.status(404).json({
code: "NOT_FOUND",
message: "Order not found",
});
}
if (order.userId !== req.user.id) {
return res.status(403).json({
code: "FORBIDDEN",
message: "You don't have access to this order",
});
}
return res.json(order);
});
Every API endpoint should distinguish between at least five outcomes: bad input (400), not authenticated (401), not authorized (403), not found (404), and actual server error (500). If your frontend shows “Something went wrong” for every failure, this is why.
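One way to keep those five outcomes consistent across endpoints is a single helper that owns the code-to-status mapping. A sketch (the helper name and error codes are ours, purely illustrative):

```typescript
// One place that maps error codes to HTTP statuses, so every endpoint
// returns the same shape for the same kind of failure.
const ERROR_STATUS = {
  INVALID_ID: 400,
  UNAUTHENTICATED: 401,
  FORBIDDEN: 403,
  NOT_FOUND: 404,
  SERVER_ERROR: 500,
} as const;

type ErrorCode = keyof typeof ERROR_STATUS;

function apiError(code: ErrorCode, message: string) {
  return { status: ERROR_STATUS[code], body: { code, message } };
}
```

In a handler this reads as: const e = apiError("NOT_FOUND", "Order not found"); return res.status(e.status).json(e.body); — and the frontend can branch on the stable code field instead of parsing messages.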
8. The Orphan Migration
Schema changes made directly to the ORM models without corresponding migration files. Works on the developer’s machine, fails on deploy.
AI tools modify your Prisma schema or TypeORM entities when you ask for a new feature. They don’t generate migration files because you didn’t ask for them. The developer runs prisma db push locally (which force-syncs the schema), everything works, and the change gets committed. On the next deploy, the production database is out of sync with the code.
What AI generates:
// prisma/schema.prisma — AI added the `plan` and `trialEndsAt` fields
model User {
id String @id @default(uuid())
email String @unique
name String
plan String @default("free") // Added by AI
trialEndsAt DateTime? // Added by AI
createdAt DateTime @default(now())
}
// No migration file exists for these changes.
// Developer ran `prisma db push` locally and moved on.
What it should look like:
# Generate a named migration
npx prisma migrate dev --name add-user-plan-fields
# This creates:
# prisma/migrations/20260315_add_user_plan_fields/migration.sql
#
# -- AlterTable
# ALTER TABLE "User" ADD COLUMN "plan" TEXT NOT NULL DEFAULT 'free';
# ALTER TABLE "User" ADD COLUMN "trialEndsAt" TIMESTAMP(3);
Every schema change gets a migration file. Every migration file gets committed. Your deploy pipeline runs prisma migrate deploy, not prisma db push. This is the difference between a deployable app and a “works on my machine” demo.
9. The Console.log Observatory
No structured logging, no error monitoring, no metrics. The entire observability strategy is console.log statements that will never be seen in production.
This was a universal finding in our Claude Code audit report — every single project had zero error monitoring and zero structured logging. AI-generated code handles errors syntactically, but nothing gets recorded anywhere useful.
What AI generates:
app.post("/api/payments", async (req, res) => {
try {
const charge = await stripe.charges.create({
amount: req.body.amount,
currency: "usd",
source: req.body.token,
});
console.log("Payment successful:", charge.id);
return res.json({ success: true });
} catch (error) {
console.log("Payment failed:", error);
return res.status(500).json({ message: "Payment failed" });
}
});
When this payment fails in production, where does that console.log go? Nowhere you’ll ever see it. Your user gets a generic error. You get nothing.
What it should look like:
import { logger } from "@/lib/logger";
import * as Sentry from "@sentry/node";
app.post("/api/payments", async (req, res) => {
const requestId = req.headers["x-request-id"] || crypto.randomUUID();
try {
const charge = await stripe.charges.create({
amount: req.body.amount,
currency: "usd",
source: req.body.token,
});
logger.info("payment.success", {
requestId,
chargeId: charge.id,
amount: req.body.amount,
userId: req.user.id,
});
return res.json({ success: true, chargeId: charge.id });
} catch (error) {
logger.error("payment.failed", {
requestId,
userId: req.user.id,
amount: req.body.amount,
error: error instanceof Error ? error.message : "Unknown error",
});
Sentry.captureException(error, {
extra: { requestId, userId: req.user.id },
});
return res.status(500).json({
message: "Payment processing failed. Please try again.",
requestId,
});
}
});
Structured logging with context. Error monitoring that alerts you. A request ID the user can reference in support tickets. This is the minimum for any endpoint that touches money.
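If you are not ready to adopt a logging library, even a minimal structured logger beats bare console.log. A sketch, assuming stdout is shipped to a log aggregator in production (the function and field names are illustrative):

```typescript
// Emit one JSON object per log line: machine-parseable, greppable,
// and ready for whatever aggregator ingests stdout in production.
function formatLogLine(
  level: "info" | "error",
  event: string,
  fields: Record<string, unknown> = {},
): string {
  return JSON.stringify({ level, event, ts: new Date().toISOString(), ...fields });
}

function logInfo(event: string, fields?: Record<string, unknown>) {
  console.log(formatLogLine("info", event, fields));
}

function logError(event: string, fields?: Record<string, unknown>) {
  console.error(formatLogLine("error", event, fields));
}
```

The structured fields (requestId, userId, chargeId) are what make the difference: you can search for every log line belonging to one failed payment instead of scrolling raw text.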
10. The Flat Auth
No role-based access control, no row-level security, no tenant isolation. Every authenticated user can access every resource. IDOR vulnerabilities everywhere.
AI tools implement authentication — proving who you are. They rarely implement authorization — proving what you’re allowed to do. The result is a system where any logged-in user can access any other user’s data by changing an ID in the URL.
What AI generates:
// Any authenticated user can access any project
app.get("/api/projects/:id", authMiddleware, async (req, res) => {
const project = await db.projects.findUnique({
where: { id: req.params.id },
});
return res.json(project);
});
// Any authenticated user can update any project
app.put("/api/projects/:id", authMiddleware, async (req, res) => {
const project = await db.projects.update({
where: { id: req.params.id },
data: req.body,
});
return res.json(project);
});
Change the ID in the URL. You now have access to someone else’s project. This is an Insecure Direct Object Reference (IDOR) vulnerability, and it’s in almost every AI-generated codebase we audit.
What it should look like:
// Scoped to the authenticated user's organization
app.get("/api/projects/:id", authMiddleware, async (req, res) => {
const project = await db.projects.findFirst({
where: {
id: req.params.id,
organizationId: req.user.organizationId, // tenant isolation
},
});
if (!project) {
return res.status(404).json({ message: "Project not found" });
}
return res.json(project);
});
app.put(
"/api/projects/:id",
authMiddleware,
requireRole("editor"),
async (req, res) => {
const project = await db.projects.findFirst({
where: {
id: req.params.id,
organizationId: req.user.organizationId,
},
});
if (!project) {
return res.status(404).json({ message: "Project not found" });
}
const updated = await db.projects.update({
where: { id: project.id },
data: req.body,
});
return res.json(updated);
},
);
Every database query should be scoped to the authenticated user’s tenant. Every mutation should check the user’s role. Return 404 instead of 403 for resources that don’t belong to the user — don’t leak the fact that the resource exists.
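The requireRole middleware in the example above is assumed, not shown. A minimal sketch of what it could look like, with an illustrative role hierarchy (viewer < editor < admin) and assuming authMiddleware has already populated req.user:

```typescript
// Illustrative role hierarchy; a real app might store this in the database.
const ROLE_RANK: Record<string, number> = { viewer: 0, editor: 1, admin: 2 };

// A user satisfies a role requirement if their rank is at least the minimum.
function hasRole(userRole: string | undefined, minRole: string): boolean {
  if (userRole === undefined) return false;
  const userRank = ROLE_RANK[userRole];
  const minRank = ROLE_RANK[minRole];
  return userRank !== undefined && minRank !== undefined && userRank >= minRank;
}

// Express-style middleware factory: rejects with 403 before the handler runs.
function requireRole(minRole: string) {
  return (req: any, res: any, next: () => void) => {
    if (!hasRole(req.user?.role, minRole)) {
      return res.status(403).json({ code: "FORBIDDEN", message: "Insufficient role" });
    }
    next();
  };
}
```

Unknown roles rank as unauthorized rather than throwing, so a bad value in the database fails closed instead of open.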
The pattern under the patterns
These ten anti-patterns share a root cause: AI generates code for the prompt it received, not the system it’s being added to.
Each prompt is a fresh context. The AI doesn’t know about your auth middleware when generating a new route. It doesn’t know about your date utility when formatting timestamps. It doesn’t remember that you told it to use Zod three sessions ago.
The code it produces is locally correct and globally broken.
This isn’t a reason to stop using AI tools. They’re genuinely good at generating working code fast. But “working” and “production-ready” are different things. The gap between them is exactly where these anti-patterns live.
What to do about it
If you’re building with AI tools, audit your own code against this list. Grep for console.log as your only logging. Search for routes without auth middleware. Check if your types have corresponding Zod schemas. Look for duplicate utility functions.
If you’re not sure what you’re looking at, or if you’ve inherited an AI-generated codebase and need to know what’s actually in there, we can help. We run every audit through automated analyzers first, then a senior engineer reviews architecture, security, and business logic. You get a prioritized list of what to fix and in what order.
We’ve written about what we typically find in Claude Code projects specifically, but these ten anti-patterns show up across every AI tool. The tool doesn’t matter. The patterns are the same.
Get a code audit before your users find these problems for you.