Technical Due Diligence for Software Acquisitions

February 8, 2026 · Variant Systems

A PE firm we worked with closed on a SaaS acquisition last year. Revenue was solid, churn was low, the product demoed well. Six weeks later, their integration team reported that the backend was a single 12,000-line file, the “microservices architecture” from the pitch deck was three Docker containers running the same monolith, and the engineer who understood the payment system had already left.

The remediation cost more than the technology asset itself.

This isn’t an outlier. It’s what happens when technical due diligence is treated as a checkbox instead of a deal gate. When you’re acquiring a software company, the technology isn’t incidental to the deal — it is the deal. And most deal teams spend less time evaluating it than they spend on the lease review.

Why Technical Due Diligence Is No Longer Optional

For years, tech DD was a formality. The deal team checked a box, a consultant skimmed the stack, and everyone moved on to revenue multiples. That worked when most codebases were written by known teams using established patterns. You could infer quality from the team’s pedigree and the product’s uptime.

Two things changed. First, the tools to build software got radically easier. AI-assisted development means a solo founder can ship a functional SaaS product in a weekend. That product can acquire customers, show growth, and look like a compelling acquisition target — while the code underneath is a single-file monolith with no tests and business logic entangled with UI rendering. The gap between “working product” and “maintainable product” has never been wider. We cover the specific risks of AI-generated codebases in a separate guide.

Second, software acquisitions got more frequent and smaller. Mid-market deals — $5M to $100M — don’t get the same DD rigor that a $500M acquisition does. But the technical risks are often worse, because smaller companies have less engineering process, fewer people, and more shortcuts baked in.

The result: post-acquisition technical surprises are the norm, not the exception. And the gap between what a thorough pre-close audit costs and what post-close remediation costs has widened. We routinely see six-figure cleanup bills on deals where a two-week review would have either killed the deal or repriced it correctly.

Technical due diligence directly impacts valuation, integration timeline, and post-acquisition engineering headcount. If your deal team doesn’t include someone who can read code, you’re negotiating blind.

What a Technical Due Diligence Engagement Covers

A proper tech due diligence engagement examines seven domains. Each one tells you something different about the risk profile of what you’re buying.

Architecture review. This is the structural assessment. Is the application a monolith, microservices, or something in between? Where are the single points of failure? What happens when traffic doubles? We map the component inventory, dependency graph, and deployment topology. We identify architectural decisions that constrain future growth — and the ones that were smart trade-offs given the team’s resources.
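To make the architecture pass concrete, here is a minimal sketch of one first-pass analysis: building the internal import graph of a Python codebase. The source root and package prefix are illustrative, and real engagements use language-appropriate tooling, but the underlying question (which modules depend on which) is the same everywhere.

    # Minimal sketch: map internal module dependencies in a Python codebase.
    # "src" and the package prefix "app" are illustrative placeholders.
    import ast
    from collections import defaultdict
    from pathlib import Path

    def internal_import_graph(src_root: str, package: str = "app") -> dict[str, set[str]]:
        graph: dict[str, set[str]] = defaultdict(set)
        for path in Path(src_root).rglob("*.py"):
            module = ".".join(path.relative_to(src_root).with_suffix("").parts)
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"))
            except SyntaxError:
                continue  # unparseable files are a finding in their own right
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    names = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                else:
                    continue
                for name in names:
                    if name.startswith(package):  # keep internal edges only
                        graph[module].add(name)
        return graph

    for module, deps in sorted(internal_import_graph("src").items()):
        print(f"{module} -> {', '.join(sorted(deps))}")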

Code quality and maintainability. This goes beyond “is the code clean.” We evaluate consistency of patterns, separation of concerns, naming conventions, and whether a new engineer could onboard in weeks rather than months. We look at the ratio of business logic to boilerplate, the abstraction quality, and whether the codebase has a discernible structure or is a collection of ad hoc solutions. A thorough code audit quantifies these findings into actionable categories.
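A crude but surprisingly predictive first pass is the file-size distribution. Here is a minimal sketch with an illustrative threshold; a handful of multi-thousand-line files usually marks exactly where the business logic got entangled.

    # First-pass maintainability signal: find oversized source files.
    # The 1,000-line threshold is an illustrative default, not a standard.
    from pathlib import Path

    def oversized_files(src_root: str, threshold: int = 1000) -> list[tuple[int, str]]:
        hits = []
        for path in Path(src_root).rglob("*.py"):
            with path.open(encoding="utf-8", errors="ignore") as f:
                lines = sum(1 for _ in f)
            if lines >= threshold:
                hits.append((lines, str(path)))
        return sorted(hits, reverse=True)

    for lines, path in oversized_files("src"):
        print(f"{lines:>6}  {path}")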

Security posture. We evaluate whether the team treats security as a practice or an afterthought. Has there ever been a security audit or penetration test? Are credentials managed through a vault, or are they sitting in source files? Is the dependency tree monitored for CVEs, or has nobody checked since launch? We scan for the vulnerabilities that create acquirer liability — the kind that, if discovered post-close, turn into breach notifications and regulatory scrutiny. Security findings have killed more deals we’ve been involved in than any other single domain.
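Checking the dependency tree against known CVEs does not require heavy tooling. The sketch below queries the public OSV database (osv.dev) for each pinned requirement, assuming a plain name==version requirements file; dedicated scanners such as pip-audit do the same job with more rigor.

    # Sketch: check pinned requirements against the OSV vulnerability
    # database. Assumes a simple "name==version" requirements file.
    import json
    import urllib.request

    OSV_URL = "https://api.osv.dev/v1/query"

    def known_vulns(name: str, version: str) -> list[str]:
        query = {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}
        req = urllib.request.Request(
            OSV_URL,
            data=json.dumps(query).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return [v["id"] for v in json.load(resp).get("vulns", [])]

    with open("requirements.txt") as f:
        for line in f:
            if "==" in line:
                name, version = line.strip().split("==")
                for vuln_id in known_vulns(name, version):
                    print(f"{name} {version}: {vuln_id}")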

Scalability assessment. Database design, query patterns, caching strategies, connection pooling, and resource utilization under load. We identify the bottleneck that will break first when the business grows. Sometimes it’s the database. Sometimes it’s a synchronous process that should be async. Sometimes it’s an architecture that fundamentally can’t scale without a rewrite.
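One way to find that first bottleneck without full load-testing infrastructure is a coarse concurrency probe: measure latency at increasing parallelism and watch where the curve bends. A sketch against a hypothetical staging endpoint; proper load tests use dedicated tooling, and never run against production.

    # Coarse load probe: p95 latency at increasing concurrency. The URL
    # is a hypothetical staging endpoint; never point this at production.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://staging.example.com/api/health"

    def timed_request(_: int) -> float:
        start = time.perf_counter()
        urllib.request.urlopen(URL, timeout=10).read()
        return time.perf_counter() - start

    for workers in (1, 4, 16, 64):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = sorted(pool.map(timed_request, range(workers * 10)))
        p95 = latencies[int(len(latencies) * 0.95)]
        print(f"{workers:>3} concurrent: p95 {p95 * 1000:.0f} ms")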

Team and process assessment. Commit patterns tell a story. We look at PR review practices, CI/CD maturity, deployment frequency, and the bus factor — how many people actually understand the system. A strong codebase built by one person who’s leaving is a different risk than a mediocre codebase maintained by a stable team.
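Commit share is a rough but fast proxy for bus factor. The sketch below pulls it straight from git history; one author owning 90%+ of a mature codebase is a key-person risk no matter how clean the code is.

    # Bus-factor proxy: share of commits per author email.
    import subprocess
    from collections import Counter

    def commit_share(repo: str) -> list[tuple[str, float]]:
        emails = subprocess.run(
            ["git", "-C", repo, "log", "--format=%ae"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        counts = Counter(emails)
        total = sum(counts.values())
        return [(author, n / total) for author, n in counts.most_common()]

    for author, share in commit_share("."):
        print(f"{share:6.1%}  {author}")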

IP and licensing. Open source license compliance is non-negotiable. We check for GPL-infected dependencies in proprietary products, verify license compatibility across the dependency tree, and flag AI-generated code ownership questions. This is increasingly complex as AI-generated code blurs the lines of authorship and licensing.
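A first-pass copyleft scan can run off package metadata alone. The sketch below checks installed Python dependencies; license metadata is self-reported and often incomplete, so treat this as triage, not a compliance verdict.

    # Triage scan: flag copyleft licenses in the installed dependency set.
    # Metadata is self-reported; a real review verifies against the actual
    # license files and how each dependency is linked and distributed.
    from importlib.metadata import distributions

    COPYLEFT = ("GPL", "AGPL", "LGPL")

    for dist in distributions():
        fields = [dist.metadata.get("License") or ""]
        fields += [v for k, v in dist.metadata.items() if k == "Classifier"]
        text = " ".join(fields)
        if any(tag in text for tag in COPYLEFT):
            print(f"{dist.metadata['Name']}: {text.strip()[:100]}")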

Technical debt quantification. Every codebase has debt. The question is how much, where it lives, and what it costs to fix. We categorize debt by severity and estimate remediation effort in engineering-weeks, giving your deal team concrete numbers for integration planning. The output isn’t “this code is bad.” It’s “fixing this will take X weeks at Y cost, and here’s what happens if you don’t.”

No single domain tells the full story. A codebase with great architecture but terrible security is a different risk profile than one with messy code but solid infrastructure. The value of comprehensive software due diligence is in the composite picture — understanding how these domains interact and where the compound risks live.

The Technical Due Diligence Checklist

This is the comprehensive tech due diligence checklist we work from when evaluating software acquisitions. Not every item applies to every deal, but skipping a category without conscious reasoning is how surprises happen. We recommend printing this and checking items off during the DD process — the gaps tell you as much as the findings.

Architecture

  • Complete component and service inventory
  • Dependency mapping (internal and external)
  • Deployment topology and environment diagram
  • Scaling strategy documentation (or lack thereof)
  • Third-party service dependencies and vendor lock-in assessment
  • API design patterns and versioning strategy
  • Data flow diagrams for critical paths

Code Quality

  • Test coverage metrics (unit, integration, end-to-end)
  • Code review practices and PR history
  • Documentation quality (inline, API docs, architectural decision records)
  • Coding standards and linting configuration
  • Dependency freshness (how outdated are packages?)
  • Dead code and unused dependency inventory
  • Build time and development environment setup complexity

Security

  • Vulnerability scanning results (SAST and DAST)
  • Penetration test history and remediation status
  • Encryption implementation (at rest and in transit)
  • Access control review (RBAC, ABAC, or ad hoc)
  • Secrets management (vault, env vars, or hardcoded)
  • OWASP Top 10 assessment
  • Authentication implementation (session management, token handling)
  • API security (rate limiting, input validation, output encoding)

Infrastructure

  • Cloud provider setup and IaC coverage
  • Monitoring and alerting configuration
  • Backup and disaster recovery procedures
  • CI/CD pipeline review and deployment automation
  • Cost structure and optimization opportunities
  • SSL/TLS configuration and certificate management
  • Container orchestration and service mesh (if applicable)

Operations

  • Incident history (last 12 months)
  • Uptime SLAs and actual uptime data
  • On-call procedures and escalation paths
  • Runbooks and operational documentation
  • Log aggregation and searchability
  • Performance baselines and trending

Data

  • Database design and normalization assessment
  • Migration strategy and schema evolution history
  • Backup validation (when was the last tested restore?)
  • GDPR, CCPA, and privacy compliance
  • Data retention policies
  • PII handling and data classification
  • Database performance (slow queries, index coverage, connection management)

People

  • Key person dependencies (bus factor analysis)
  • Documentation quality relative to team size
  • Estimated onboarding time for new engineers
  • Tribal knowledge inventory (what’s undocumented?)
  • Team retention risk and compensation benchmarks

Red Flags That Kill Deals

Not all findings are equal. Some are speed bumps. Others are deal-killers. Here are the red flags that should make you walk away or significantly renegotiate.

No version control history. If the git history was squashed, rewritten, or doesn’t exist, you can’t trace how the codebase evolved. You can’t identify who wrote what, when bugs were introduced, or how decisions were made. This also raises questions about what the seller is hiding. Legitimate reasons exist for history cleanup, but a complete absence of history is a red flag.

Zero test coverage on critical paths. Payments, authentication, data mutations — these are the paths where bugs cost real money. If there are no tests around Stripe integration, user login flows, or database writes, every future change to those systems is a coin flip. The cost to add test coverage retroactively is 3-5x the cost of writing tests alongside the code.
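What retrofitting that coverage looks like in practice: pin the money-moving behavior down before anyone refactors it. The create_charge function and gateway client below are hypothetical stand-ins, but the pattern (assert on amounts and failure modes, mock the external processor) is the one we expect to find.

    # Illustrative regression tests for a payment path (run with pytest).
    # create_charge and the gateway client are hypothetical stand-ins.
    import pytest
    from unittest.mock import MagicMock

    def create_charge(gateway, amount_cents: int, currency: str = "usd"):
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        return gateway.charge(amount=amount_cents, currency=currency)

    def test_rejects_non_positive_amounts():
        gateway = MagicMock()
        with pytest.raises(ValueError):
            create_charge(gateway, 0)
        gateway.charge.assert_not_called()

    def test_passes_amount_through_unmodified():
        gateway = MagicMock()
        create_charge(gateway, 1999)
        gateway.charge.assert_called_once_with(amount=1999, currency="usd")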

Hardcoded secrets in the codebase. We find production credentials committed to repositories in roughly 40% of the codebases we review. The credentials themselves are fixable in a day. But their presence is a leading indicator — teams that skip secrets management tend to skip input validation, access control, and audit logging too. When you find hardcoded secrets, widen the security review. The real cost isn’t rotating the keys — it’s the other security shortcuts you’ll discover alongside them.
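The scan itself is straightforward, which makes its absence all the more telling. A sketch with a small illustrative subset of patterns; real reviews combine tools like trufflehog or gitleaks with a pass over the full git history, since rotated-but-committed keys still live in old commits.

    # Sketch: grep the working tree for credential-shaped strings.
    # Patterns are a small illustrative subset of what real tools cover.
    import re
    from pathlib import Path

    PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "generic secret": re.compile(r"(?i)(?:api_key|secret|password)\s*[:=]\s*['\"][^'\"]{8,}"),
    }

    for path in Path(".").rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {label}")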

Single-developer dependency with no documentation. One person wrote the entire system, they’re not staying post-acquisition, and the only documentation is their commit messages. You’re not buying software. You’re buying a liability. Reverse-engineering an undocumented codebase built by someone who’s gone takes 2-3x longer than building it fresh.

Unpatched security vulnerabilities in production. Known CVEs in dependencies, with no remediation plan and no timeline. This means the team either doesn’t know about the vulnerabilities or doesn’t care. Both are bad. If there’s a breach between signing and closing, you own the fallout.

GPL-infected dependencies in proprietary software. If a proprietary product uses GPL-licensed code without proper compliance, the acquirer inherits a legal liability. This isn’t theoretical — it has resulted in forced open-sourcing of proprietary codebases. License compliance is easy to verify and expensive to ignore.

AI-generated code with no human review trail. Large chunks of code generated by AI tools with no evidence of human review, testing, or validation. The code might work today and fail unpredictably tomorrow. Without a review trail, you can’t assess what level of scrutiny was applied. For more on this specific risk, see our guide on due diligence for AI-generated codebases.

Infrastructure that can’t be reproduced. Manual server configurations, no infrastructure-as-code, no documentation of how the production environment was set up. If the server dies, can you rebuild it? If the answer is “ask Dave, he set it up two years ago,” that’s a material risk. We’ve seen acquisitions where the production environment took 3 months to reconstruct because nobody documented the original setup. That’s 3 months of integration delay, engineering salaries, and opportunity cost.

Any single red flag is negotiable. Two or three together should trigger a significant valuation adjustment. Five or more should make you seriously question whether you’re acquiring a product or a rewrite project.

How AI-Generated Code Changes the DD Playbook

If the target company used AI tools to build its product — Cursor, Bolt.new, Lovable, Copilot, or similar — the standard DD playbook needs adjustment. AI-generated code breaks the usual correlation between how code looks and how it actually performs.

The short version: AI code passes surface-level checks (formatting, naming, test presence) while hiding structural problems that inflate post-acquisition costs. Each tool leaves different fingerprints, the IP ownership questions aren’t settled, and the remediation scope is typically 30-60% larger than equivalent human-written codebases.

This is a big enough topic that we wrote a dedicated guide: Due Diligence for AI-Generated Codebases. It covers tool-specific patterns, the IP and licensing question, how to quantify remediation cost, and a five-step framework for evaluating AI-built targets.

The practical impact on your DD engagement: budget 40-50% of the review timeline for code analysis instead of the usual 20%. The proxy metrics that normally shortcut evaluation — commit frequency, test presence, documentation — are less reliable when a machine generated all of those artifacts automatically.

What the Report Should Include

A technical due diligence report isn’t useful if it’s a 200-page document that nobody reads. It needs to serve multiple audiences: the investment committee that needs a go/no-go recommendation, the deal team that needs negotiation leverage, and the integration team that needs an engineering plan.

Executive summary. One to two pages. The overall risk assessment, the three most critical findings, and a clear recommendation. This is what the investment committee reads. It should answer: “Should we proceed, and if so, what does it cost to get this codebase where it needs to be?”

Risk matrix. Every finding categorized by severity (critical, high, medium, low), likelihood of impact, and estimated remediation effort. This gives the deal team a structured way to discuss which risks they’re willing to accept, which they need remediated pre-close, and which they’ll price into the deal.

Remediation cost estimates. Engineering-weeks and dollar ranges for each major finding, with explicit confidence intervals. “The authentication system needs a rewrite: 6-10 engineering-weeks, $60K-$120K depending on scope” is actionable. “The code has issues” is not.
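The arithmetic behind those numbers is simple; the discipline is carrying low/high ranges through to the final figure instead of collapsing to false precision. An illustrative roll-up with a hypothetical blended rate:

    # Illustrative roll-up of findings into a remediation estimate.
    # All figures, including the weekly rate, are hypothetical.
    FINDINGS = [
        # (finding, severity, low weeks, high weeks)
        ("Rewrite authentication/session handling", "critical", 6, 10),
        ("Add tests around payment paths", "high", 3, 5),
        ("Rotate credentials, move to a vault", "high", 1, 2),
        ("Upgrade end-of-life framework version", "medium", 2, 4),
    ]
    COST_PER_WEEK = 10_000  # hypothetical loaded engineering cost, USD

    low = sum(lo for _, _, lo, _ in FINDINGS)
    high = sum(hi for _, _, _, hi in FINDINGS)
    print(f"Remediation: {low}-{high} engineering-weeks, "
          f"${low * COST_PER_WEEK:,}-${high * COST_PER_WEEK:,}")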

Integration timeline and headcount needs. How many engineers, for how long, to bring the codebase to the acquirer’s standards? This directly impacts the financial model. A codebase that needs 6 months of work from 4 engineers before it can be integrated into the parent company’s infrastructure is a different investment than one that can be deployed alongside existing systems in 2 weeks.

Comparative assessment. How does this codebase compare to similar acquisitions? Is the technical debt typical for a company this size and stage, or is it unusually high? Context matters. A seed-stage startup’s codebase should look different from a Series B company’s. The question is whether it looks different in expected ways or alarming ways.

Specific negotiation points. Findings that warrant valuation adjustments, escrow holdbacks, or earnout structures tied to remediation milestones. A good DD report gives the deal team concrete, defensible reasons to adjust terms. “We found X, remediation costs Y, therefore we recommend adjusting the purchase price by Z” is the format that moves negotiations.

The goal of the report isn’t to find problems. Every codebase has problems. The goal is to quantify what you’re actually buying, so the deal reflects reality rather than assumptions.

A word on timing: technical DD that happens after signing gives you regret, not leverage. The earlier in the deal process you start — even with just a technical questionnaire before you get code access — the more room you have to adjust terms based on what you find. We’ve written a practical guide to the pre-acquisition code review process that covers timeline, access structuring, and how to manage seller sensitivity. The typical engagement runs 2-4 weeks depending on codebase size, which is trivial relative to the overall deal timeline.


Evaluating a software acquisition? Variant Systems provides independent technical due diligence for investors, PE firms, and corporate M&A teams. We tell you what the code says — not what the seller wants you to hear.