
March 15, 2026 · Variant Systems

How to Commission a Code Audit (And What to Do With the Report)

A practical guide to hiring a code auditor, evaluating providers, understanding pricing, and turning audit findings into action. For founders, CTOs, and investors.


You know you need a code audit. Maybe someone on your board said it. Maybe you inherited a codebase and don’t trust it. Maybe you built something with AI tools and the voice in the back of your head is asking: does this actually work the way I think it does?

The problem isn’t motivation. It’s process. How do you find the right firm? How much should you pay? What should the report actually contain? And once you have it, what do you do with a 30-page document full of findings?

This is the guide we wish existed when we were on the buying side.

When you actually need one

Not every codebase needs an audit. Some genuinely don’t. Here’s when they do.

Pre-fundraise. Investors are increasingly asking for technical due diligence, especially at Series A and beyond. A clean audit report is leverage. A surprise finding during investor diligence is a kill shot.

Pre-acquisition. If you’re buying a software company, the code is the asset. You wouldn’t buy a building without an inspection. Same logic applies. We’ve seen acquisition deals renegotiated by seven figures based on audit findings.

Post-incident. Something broke in production. A breach happened. Data leaked. You need to understand the root cause and whether there are more problems hiding behind the one that surfaced.

Inheriting a codebase. New CTO joins a startup. Agency hands off a project. Previous developer disappears. You need to know what you’re working with before you start building on top of it.

AI-built app going to production. You used Cursor, Claude Code, Bolt, or Lovable to build your MVP. The code looks clean. The problems hide under that surface. Before real users touch it, get a human to review what the AI actually built.

Pre-launch. Your app is about to go live with real users, real data, real money flowing through it. An audit now is cheaper than a breach later.

When you don’t need one: hobby projects, early prototypes you’re still actively iterating on, internal tools with no sensitive data, or anything you’re planning to throw away and rebuild. Don’t audit code that hasn’t stabilized yet. You’ll just be auditing a moving target.

What a good audit covers

There are two fundamentally different things that both get called “code audits.” Understanding the difference saves you from buying the wrong one.

Automated scans

Tools like SonarQube, Snyk, Semgrep, and CodeQL run static analysis across your codebase. They check for known vulnerability patterns, dependency issues, secret leaks, code smells, and structural problems. They’re fast, cheap, and consistent.

They’re also blind to anything that requires judgment. An automated scanner can tell you that you’re using an outdated version of a library. It cannot tell you that your authentication architecture has a fundamental design flaw, or that your data model won’t survive your next 10x in users.

Human architectural review

A senior engineer reads your code. All of it, or the parts that matter most. They evaluate architecture decisions, security design, error handling patterns, data modeling, test coverage, deployment configuration, and compliance readiness. They understand your business context and assess the code against what you’re actually trying to do.

You need both. Automated scans catch the mechanical issues that humans miss through fatigue. Human review catches the architectural and design issues that no scanner is built to detect. An audit that only does one is incomplete.

A thorough audit should cover:

| Area | What’s evaluated |
| --- | --- |
| Security | Auth, input validation, secrets management, data exposure, OWASP Top 10 |
| Architecture | Component structure, separation of concerns, scalability patterns |
| Code quality | Consistency, readability, error handling, edge cases |
| Dependencies | Outdated packages, known CVEs, license risks, supply chain exposure |
| Test coverage | What’s tested, what’s not, quality of existing tests |
| Data model | Schema design, migrations, indexing, data integrity constraints |
| Deployment | CI/CD pipeline, environment configuration, infrastructure as code |
| Compliance | GDPR/SOC2/HIPAA readiness depending on your industry |

What a good report looks like

This is where most audit providers fall short. The audit itself might be solid, but if the report is a wall of automated scanner output with no prioritization, it’s not useful. You paid for judgment, not data.

A good report has:

Findings categorized by severity. Critical, warning, and informational. Critical means “fix this before anything else or you have a real risk.” Warning means “this will cause problems as you scale.” Informational means “worth knowing, fix when convenient.”

Each finding includes four things:

  1. What the problem is — specific, with code references
  2. Why it matters — the business impact, not just the technical description
  3. How to fix it — concrete guidance, not “consider improving this area”
  4. Estimated effort — hours or days, so you can plan remediation
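The four-part structure above maps naturally to a simple record. As a sketch, here is one way to model it; the field names, severity scale, and the example finding are illustrative, not a standard schema:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1       # fix before anything else; a real risk exists
    WARNING = 2        # will cause problems as you scale
    INFORMATIONAL = 3  # worth knowing, fix when convenient

@dataclass
class Finding:
    title: str           # what the problem is, with code references
    impact: str          # why it matters, in business terms
    remediation: str     # concrete guidance, not "consider improving this"
    effort_hours: float  # estimated effort, so remediation can be planned
    severity: Severity

# A hypothetical finding in this shape:
finding = Finding(
    title="Session tokens never expire (api/auth.py)",
    impact="A leaked token grants permanent access to that account",
    remediation="Add an expiry claim and reject tokens older than 24h",
    effort_hours=4,
    severity=Severity.CRITICAL,
)
print(finding.severity.name)  # prints CRITICAL
```

A report whose findings all fit a shape like this is also trivially machine-readable, which helps when tracking remediation in an issue tracker.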

An executive summary. One page that a non-technical board member or investor can read and understand the overall health of the codebase.

A prioritized remediation plan. Not just a list of problems, but a sequence. What to fix first, what can wait, what’s acceptable tech debt.

A bad report looks like: 847 findings from SonarQube, exported to PDF, with a cover page. That’s not an audit. That’s a tool output with a logo on it.

How to evaluate providers

The code audit market has no certification, no standard methodology, and no licensing. Anyone can hang a shingle. Here’s how to tell the real ones from the PDF factories.

Red flags

| Signal | Why it’s a problem |
| --- | --- |
| Won’t share a sample report | They don’t want you to see the quality of their output before you pay |
| No technical people on sales calls | The people selling it can’t explain what they’ll actually do |
| Pricing by LOC only | Lines of code is a rough input, not the only factor. A 10K LOC app with complex state management is harder to audit than a 30K LOC CRUD app |
| No industry expertise | An auditor who’s never seen a fintech app won’t know what compliance gaps to look for |
| Guaranteed timeline before seeing the code | They’re selling a template, not a review |
| “We audit any technology” | Nobody is an expert in everything. If they claim to be, they’re generalists at best |

Green flags

| Signal | Why it matters |
| --- | --- |
| Engineers do the audit, not just review it | The person reading your code should be the person writing the report |
| Clear methodology they can explain | They’ve done this enough to have a process |
| Fixed scope and price after initial assessment | They looked at your repo before quoting. That’s professionalism |
| Specific deliverables in the proposal | You know exactly what you’re getting |
| They ask about your business context | Architecture decisions can’t be evaluated without understanding what the software needs to do |
| They’ll walk you through the report live | A document dump is not a deliverable. You should be able to ask questions |

Ask for references. Ask to see a redacted sample report. Ask who specifically will be reading your code and what their background is. These are normal questions. Any firm that gets defensive about them is telling you something.

What it should cost

Pricing transparency is rare in this space. Here are realistic industry ranges based on what we’ve seen across the market, not specific to any one provider.

| Scope | Codebase size | Typical range |
| --- | --- | --- |
| Automated scan only | Any | $500 - $2,000 |
| Focused review (one area) | < 20K LOC | $3,000 - $8,000 |
| Comprehensive audit | 20K - 100K LOC | $8,000 - $20,000 |
| Investment-grade due diligence | Any | $20,000+ |

What drives the price up: multiple languages, urgent timelines, compliance requirements, investor-facing deliverables, and large or complex codebases. We wrote a detailed pricing breakdown if you want the granular view.

What drives the price down: narrow scope, flexible timeline, and a well-organized codebase with good documentation (the auditor spends less time figuring out what things are).

If someone quotes you $500 for a “comprehensive audit,” they’re running an automated scan and putting their logo on it. If someone quotes you $50,000 for a 15K LOC SaaS app, they’re overcharging. The ranges above are honest.

What to do with the report

The audit is not the end. It’s the beginning of the actual work. Here’s how to handle the output.

Triage the criticals first. Security vulnerabilities, data exposure risks, authentication bypasses — these get fixed before anything else. Not next sprint. Now.

Don’t try to fix everything at once. A comprehensive audit might have 30-60 findings across all severity levels. That’s normal. It doesn’t mean your code is garbage. It means someone looked at it carefully. Prioritize by severity, then by effort-to-impact ratio.
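One way to sketch that ordering in code, assuming each finding carries a severity rank plus rough effort and impact estimates (all of the names and numbers below are hypothetical):

```python
# Hypothetical triage helper: sort by severity first (lower number = more
# urgent), then by effort-to-impact ratio, so cheap high-impact fixes
# within a severity band come before expensive low-impact ones.
findings = [
    {"title": "Outdated lodash version",        "severity": 3, "effort": 1, "impact": 2},
    {"title": "SQL injection in search",        "severity": 1, "effort": 8, "impact": 10},
    {"title": "No index on users.email",        "severity": 2, "effort": 2, "impact": 6},
    {"title": "Missing rate limiting on login", "severity": 1, "effort": 4, "impact": 10},
]

def triage(findings):
    return sorted(findings, key=lambda f: (f["severity"], f["effort"] / f["impact"]))

for f in triage(findings):
    print(f["title"])
# Prints, in order: Missing rate limiting on login, SQL injection in search,
# No index on users.email, Outdated lodash version
```

The exact key function matters less than the principle: severity is the primary axis, and within a severity band you sequence by payoff, not by whichever finding appears first in the report.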

Use it for stakeholder communication. An audit report is one of the clearest ways to communicate technical health to non-technical stakeholders. Share the executive summary with your board, your investors, your co-founder. It gives them a concrete picture instead of “the engineering team says we need to refactor.”

Budget the remediation. Each finding should have an effort estimate. Add them up. That’s real engineering time you need to allocate. If you don’t budget for remediation, the audit was an expensive document that changes nothing.

Consider who does the fixes. Some audit firms offer remediation services. Some don’t. There’s an argument for having the same team that found the issues fix them — they already understand your codebase. There’s also an argument for keeping auditors independent. Either approach works. What doesn’t work is letting the report sit in a drawer.

Schedule a follow-up. After remediation, a quick re-check on the critical findings confirms they’re actually fixed. This doesn’t need to be a full re-audit. A focused review on the specific items is usually enough.

The AI-generated codebase twist

If your codebase was built primarily with AI tools — and an increasing number are — your audit needs a different lens.

Traditional audits evaluate code assuming a human wrote it. Human-written code has certain failure patterns: inconsistency, knowledge gaps in specific areas, shortcuts under deadline pressure. AI-generated code has entirely different failure patterns.

We’ve audited dozens of AI-built codebases and the issues are consistent: the code looks senior-level but has assumption gaps that no senior engineer would make. Auth flows that handle the happy path perfectly but fail silently on edge cases. Error handling that looks comprehensive but catches and swallows exceptions instead of surfacing them. Data validation that exists everywhere except the one place it matters most.
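To make the exception-swallowing pattern concrete, here is a minimal, invented illustration; the function names and the flaky gateway are hypothetical, not from any audited codebase:

```python
# Anti-pattern: the try/except looks defensive, but the failure is
# swallowed, so the caller cannot distinguish "failed" from "no result".
def charge_card_bad(gateway, amount):
    try:
        return gateway(amount)
    except Exception:
        return None  # the error vanishes silently here

# Surfacing the error with context keeps the failure visible upstream.
def charge_card_good(gateway, amount):
    try:
        return gateway(amount)
    except Exception as exc:
        raise RuntimeError(f"charge of {amount} failed") from exc

def flaky_gateway(amount):
    raise ConnectionError("payment provider timed out")

print(charge_card_bad(flaky_gateway, 100))  # prints None: silent failure
```

The first version passes a casual read and may even pass happy-path tests; it only reveals itself when a reviewer asks what the caller is supposed to do with that `None`.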

The specific problem with AI-generated code is that it lacks institutional knowledge. A human developer who builds an auth system has opinions about auth — learned from experience, from reading about breaches, from getting burned. An AI generates auth code that follows patterns from its training data. The patterns are correct. The judgment behind them is absent.

Make sure your auditor has experience reviewing AI-generated codebases specifically. Ask them what patterns they look for. If they don’t have a good answer, they haven’t done enough of them. We built a free automated audit tool specifically because AI-generated code needs a different first pass than human-written code.

The bottom line

A code audit is not a pass/fail test. It’s a diagnostic. The goal is not to prove your code is perfect — it’s to know where the risks are so you can make informed decisions about what to fix, when to fix it, and how much it will cost.

The best time to get one is before something forces you to. Before the investor asks. Before the breach. Before the new CTO opens the repo and starts asking uncomfortable questions.

If you’re evaluating whether your codebase needs a review, we offer a free initial assessment — no commitment, no sales pitch. We’ll tell you honestly whether a full audit makes sense for your situation, or whether you’re better off spending that money elsewhere.

For details on what our independent code audit covers, how we work, and what you get — check the service page.