
February 14, 2026 · Variant Systems

We Open-Sourced Our Code Audit as a Claude Code Plugin

A zero-dependency Claude Code plugin that runs 7 analyzers on any codebase — secrets, security, dependencies, structure, tests, and more.



We run code audits for founders raising rounds, investors evaluating acquisitions, and teams inheriting codebases they didn’t write. Every engagement starts the same way — scan for secrets, check dependencies, look at the test coverage, see how bad the import graph is. Same checklist, every time.

The manual part of an audit is the architecture review, the design judgment, the “should this codebase support what’s planned next” question. But that first pass — the structural scan — is mechanical. We kept doing it by hand anyway, until we stopped.

We automated it, used it internally for months, and just open-sourced it as a Claude Code plugin. It’s called code-audit — 7 analyzers, zero dependencies. Yes, we’re using AI to audit AI-generated code. That’s the kind of year it is.

Why This Exists

AI-generated code is everywhere now. Cursor, Copilot, Bolt, Lovable, Devin — every week there’s a new tool writing code for people. The output quality varies wildly.

The problem isn’t that AI writes bad code. Sometimes it writes great code. The problem is that nobody’s checking. A founder gets a working prototype from an AI tool, ships it, and moves on. Six months later, they’re raising a round and someone finally looks under the hood.

We’ve seen the same issues repeatedly:

  • Hardcoded API keys committed to version control
  • SQL injection vulnerabilities in user-facing endpoints
  • Dependencies with known CVEs that were never updated
  • Test files that exist but don’t actually test anything meaningful
  • Circular imports that make the codebase impossible to refactor

These aren’t edge cases. They’re the norm. And they’re all detectable automatically.
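For a concrete (and deliberately contrived) illustration, here is a hypothetical Express + pg handler that exhibits two of the findings above at once: a payment key hardcoded in source and user input interpolated straight into SQL. This is an example of the failure modes, not code from any real audit.

```ts
// Hypothetical Express + pg handler, contrived to show two common findings.
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool();

// Finding 1: a live secret hardcoded in source and committed to version control.
const STRIPE_KEY = "sk_live_XXXXXXXXXXXXXXXXXXXXXXXX"; // should come from the environment instead

app.get("/api/search", async (req, res) => {
  // Finding 2: user input interpolated directly into SQL, a classic injection vector.
  const result = await pool.query(
    `SELECT * FROM products WHERE name LIKE '%${req.query.q}%'`
  );
  res.json(result.rows);
});

app.listen(3000);
```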

The 7 Analyzers

We didn’t build a linter. We built a sanity check for modern development.

| Analyzer | Focus | Why it matters |
| --- | --- | --- |
| Secrets | Hardcoded keys, .env leaks | The #1 cause of AWS account takeovers. Should never be in version control. |
| Security | SQLi, XSS, OWASP Top 10 | AI prototypes almost never sanitize inputs. |
| Dependencies | CVEs, unpinned versions | Your dependency tree is an attack surface. This treats it like one. |
| Structure | File bloat, nesting, complexity | Structural signals that predict maintenance nightmares. |
| Tests | Weak assertions, coverage gaps | Tests that always pass are worse than no tests — false confidence. |
| Imports | Circular deps, coupling hotspots | Architectural rot that’s invisible in code review. |
| AI Patterns | Tool fingerprints, silent errors | The telltale signs of generated code accepted without review. |
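To make "pattern matching, zero dependencies" concrete, here is a rough sketch of what a secrets-style check can look like using only Node built-ins. The patterns and file filtering are illustrative, not the plugin's actual rules:

```ts
// Minimal secrets scan using only Node built-ins; patterns are illustrative.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const SECRET_PATTERNS: Record<string, RegExp> = {
  "AWS access key": /AKIA[0-9A-Z]{16}/,
  "Stripe live key": /sk_live_[0-9a-zA-Z]{20,}/,
  "Generic API key assignment": /api[_-]?key\s*[:=]\s*["'][^"']{16,}["']/i,
};

// Recursively walk the repo, skipping vendored and VCS directories.
function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry === ".git") continue;
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      yield* walk(full);
    } else {
      yield full;
    }
  }
}

for (const file of walk(process.cwd())) {
  const text = readFileSync(file, "utf8");
  for (const [label, pattern] of Object.entries(SECRET_PATTERNS)) {
    if (pattern.test(text)) {
      console.log(`[critical] ${label} in ${file}`);
    }
  }
}
```

The real analyzers layer more patterns and AST-level heuristics on top of this, but the shape is the same: walk the tree, match, report.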

Quick Start

If you’re already using Claude Code, two commands:

/plugin marketplace add variant-systems/skills
/plugin install code-audit@variant-systems-skills

Or with npx:

npx skills add variant-systems/skills --skill code-audit

For Claude.ai, upload the code-audit folder via Settings > Skills.

The only requirement is Node.js 18+. No Python, no Ruby, no system packages. If it runs Claude Code, it runs this.

Zero Dependencies, On Purpose

A code audit tool with 200 transitive dependencies would be ironic.

The core analyzers are pure Node.js. No npm install step. No dependency tree of their own to worry about. It’s light, it’s fast, and it won’t break when a random sub-dependency on npm gets hijacked.

The tradeoff: the built-in analyzers use pattern matching and AST-level heuristics rather than full static analysis. They catch the 70% of issues that are structurally obvious. For the remaining 30%, the plugin detects your ecosystem and recommends optional tools — Semgrep for deeper static analysis, Trivy for container scanning, TruffleHog or Gitleaks for secrets detection. If those tools are installed, the plugin uses them automatically. If not, it works fine without them.
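The "uses them automatically" part mostly comes down to knowing whether a binary is reachable. A minimal sketch of that detection, assuming a simple is-it-on-PATH probe rather than whatever the plugin actually does internally:

```ts
// Sketch of optional-tool detection: probe each binary and fall back gracefully.
import { spawnSync } from "node:child_process";

const OPTIONAL_TOOLS = ["semgrep", "trivy", "trufflehog", "gitleaks"];

function isOnPath(tool: string): boolean {
  // spawnSync sets .error (ENOENT) when the binary can't be found at all;
  // we only care whether it resolved, not what its exit code was.
  const result = spawnSync(tool, ["--version"], { stdio: "ignore" });
  return result.error === undefined;
}

for (const tool of OPTIONAL_TOOLS) {
  console.log(
    `${tool}: ${isOnPath(tool) ? "found, using it" : "not found, using built-in checks"}`
  );
}
```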

The 70/30 Split

This is how we think about code audits generally.

The 70% — automatable. Hardcoded secrets, known CVEs, obviously missing tests, files that are way too long. A script can find these reliably. That’s what the plugin does.

The 30% — human judgment. Is this architecture appropriate for the scale? Are these abstractions helping or hurting? Is the error handling strategy consistent? Does the data model make sense for the business domain?

The plugin gives you a structured report with findings, severity ratings, and remediation guidance. That report becomes the starting point for the human review — you’re not wasting time on things a script could have caught.

When we run code audits for clients, the automated pass happens first. The senior engineer’s time goes toward architecture, design decisions, and business logic — the stuff that actually requires experience.

What the Output Looks Like

The plugin generates a CODE_AUDIT_REPORT.md. Here’s what a typical summary looks like:

## Audit Summary: 12 Issues Found
- Critical: 2 (Hardcoded Stripe key in config.ts, SQL injection in /api/search)
- Warning: 5 (Circular imports, 1200-line file: UserController.ts)
- Info: 5 (Missing tests for 3 endpoints, unpinned dependency versions)

Below that, you get detailed results from each analyzer — specific file paths, line numbers, remediation guidance, and a prioritized action list.
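Conceptually, every entry in those detailed sections carries the same fields. As a hypothetical sketch of that shape (the actual report is markdown, not structured data):

```ts
// Hypothetical shape of a single finding as it appears in the report.
interface Finding {
  analyzer:
    | "secrets"
    | "security"
    | "dependencies"
    | "structure"
    | "tests"
    | "imports"
    | "ai-patterns";
  severity: "critical" | "warning" | "info";
  file: string;         // e.g. "config.ts"
  line?: number;        // omitted for project-wide findings
  message: string;      // what was found
  remediation: string;  // how to fix it
}
```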

It’s designed to be readable by both engineers and non-technical stakeholders. A founder doing due diligence can skim the summary. An engineer can drill into the specifics.

Works Across Ecosystems

The analyzers aren’t tied to a specific language or framework. They work on:

  • JavaScript/TypeScript (Node, React, Next.js, etc.)
  • Python (Django, Flask, FastAPI)
  • Elixir/Phoenix
  • Ruby/Rails
  • Go
  • And most other common stacks

Language-specific checks (like detecting unsafe code execution in JavaScript or insecure deserialization in Python) are pattern-matched per ecosystem. Structural checks (file size, nesting depth, import graphs) are language-agnostic.
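A sketch of what that split can look like in practice, with illustrative rules rather than the plugin's actual ones: language-specific patterns keyed by ecosystem, and a structural check that only needs the raw text.

```ts
// Illustrative rules: language-specific patterns plus one language-agnostic check.
const LANGUAGE_RULES = [
  { language: "javascript", rule: "unsafe code execution", pattern: /\beval\s*\(|new Function\s*\(/ },
  { language: "python", rule: "insecure deserialization", pattern: /\bpickle\.loads?\s*\(|yaml\.load\s*\(/ },
  { language: "ruby", rule: "unsafe deserialization", pattern: /\bMarshal\.load\s*\(/ },
];

// Structural checks need nothing language-specific: nesting depth is just indentation.
function maxNestingDepth(source: string, indentWidth = 2): number {
  const depths = source.split("\n").map((line) => {
    const leading = line.match(/^[ \t]*/)?.[0].length ?? 0;
    return Math.floor(leading / indentWidth);
  });
  return Math.max(0, ...depths);
}
```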

Get Started

The plugin is MIT-licensed and available now:

  • GitHub repo
  • Install: /plugin marketplace add variant-systems/skills

Run it on your codebase. See what it finds. Most codebases have at least a few surprises hiding in them.

If you want the other 30% — the architecture review, the design decisions, the stuff that actually requires experience — we do that too.


Need a thorough code audit? Variant Systems runs comprehensive code reviews covering architecture, security, and maintainability — learn more about our code audit service.