The Deal That Almost Sank Us
In 2019, we almost acquired a company for $47 million.
On paper, it was perfect: $8M ARR, 40% growth, enterprise customers, a technology that would accelerate our roadmap by two years. The founders were eager, the financials were clean, the market was hot.
Then we did technical due diligence.
Seventy-two hours later, we walked away.
Here is what we found:
- The entire system ran on a single database server with no failover.
- The "microservices architecture" was actually 47 services that all called each other synchronously, meaning one failure cascaded everywhere.
- The lead engineer, who had built 80% of the codebase, had already accepted an offer at Google.
- The "automated testing" turned out to be a single Selenium script that took four hours to run and failed 30% of the time.
The rebuild cost? Conservatively $12 million and 18 months, if everything went right.
The lesson was worth more than the deal we did not do.
Why Most Due Diligence Fails
"Buyers discover technical problems. They just discover them after closing."
Traditional due diligence treats technology like a checkbox. Lawyers review IP assignments. Accountants verify software capitalization. Maybe someone asks if the code is "in good shape."
This is like buying a house by verifying the deed and asking if the foundation is "solid."
| What They Ask | What They Accept | What They Should Demand |
|---|---|---|
| "Is the code documented?" | "Yes, we have docs" | Show me a new engineer onboarding |
| "Any technical debt?" | "Normal amount" | Walk me through your debt register |
| "How is security?" | "We passed an audit" | When was the last penetration test? |
| "Can it scale?" | "Absolutely" | What happens at 10x current load? |
The Mental Model: Technology as a Living System
Stop evaluating technology. Start evaluating the system that produces technology.
A codebase is a snapshot. The team, processes, and practices that created it determine whether that snapshot gets better or worse over time.
The Four Pillars Assessment
Pillar 1: Architecture
What we are really asking: Can this system handle success?
| Dimension | Green Flag | Red Flag |
|---|---|---|
| Scaling Model | Horizontal scaling proven | "We will scale when we need to" |
| Data Architecture | Clear separation, documented schemas | Database as integration layer |
| Failure Handling | Graceful degradation, circuit breakers | Cascading failures likely |
| Dependencies | Managed, versioned, security-scanned | "It works, do not touch it" |
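The failure-handling row is the easiest to verify in a code review: look for something resembling a circuit breaker between services. A minimal sketch of the pattern, with illustrative names and thresholds not drawn from any particular codebase:

```python
import time

class CircuitBreaker:
    """Trips open after repeated failures so callers fail fast
    instead of piling synchronous calls onto a struggling service."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after  # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

A target whose 47 services call each other with nothing like this between them is the cascading-failure red flag in the table.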
Pillar 2: Code Quality
| Indicator | What It Reveals | How to Measure |
|---|---|---|
| Test Coverage | Confidence in changes | Coverage percentage (aim: 70%+ for critical paths) |
| Deployment Frequency | System health and team confidence | Deploys per week (healthy: 5+) |
| Lead Time | Development efficiency | Idea to production time |
| Change Failure Rate | Quality of changes | % of deployments causing incidents |
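Two of these indicators can be computed mechanically from a deploy log rather than taken on faith. A rough sketch, assuming the target can export deployments as dated records with an incident flag (the record format here is invented for illustration):

```python
from datetime import date

def deploy_metrics(deploys):
    """deploys: list of (deploy_date, caused_incident) tuples.
    Returns (deploys per week, change failure rate)."""
    if not deploys:
        return 0.0, 0.0
    dates = [d for d, _ in deploys]
    span_days = (max(dates) - min(dates)).days or 1
    per_week = len(deploys) / (span_days / 7)
    failure_rate = sum(1 for _, bad in deploys if bad) / len(deploys)
    return round(per_week, 1), round(failure_rate, 2)

log = [
    (date(2024, 1, 1), False),
    (date(2024, 1, 4), True),
    (date(2024, 1, 8), False),
    (date(2024, 1, 15), False),
]
print(deploy_metrics(log))  # (2.0, 0.25): 2 deploys/week, 25% failure rate
```

If the target cannot produce the input for a script like this, that itself answers the question.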
Pillar 3: Infrastructure and Security
The Infrastructure Audit Checklist:
- Encryption at rest implemented
- Encryption in transit (TLS 1.2+)
- Backup restoration tested within last 90 days
- Principle of least privilege enforced
- Credentials rotated on employee departure
- Penetration testing within last 12 months
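Several of these items decay: a backup restore tested two years ago proves nothing today. The audit should check dates, not just existence. A sketch with hypothetical policy windows (the 90- and 365-day limits mirror the checklist above; the control names are illustrative):

```python
from datetime import date, timedelta

# Maximum allowable age per control, in days (policy values are illustrative)
POLICY = {
    "backup_restore_tested": 90,
    "penetration_test": 365,
    "credential_rotation_review": 90,
}

def stale_controls(last_verified, today=None):
    """last_verified: control name -> date it was last demonstrated.
    Returns controls that are overdue or were never verified at all."""
    today = today or date.today()
    overdue = []
    for control, max_age in POLICY.items():
        done = last_verified.get(control)
        if done is None or today - done > timedelta(days=max_age):
            overdue.append(control)
    return overdue
```

Note that a control absent from the evidence counts as failed: "we do it, we just did not record it" is a yellow flag at best.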
Pillar 4: Team and Process
The Team Assessment Framework:
| Factor | Healthy Signal | Warning Signal |
|---|---|---|
| Bus Factor | Multiple people can handle any system | "Only Sarah knows that part" |
| On-Call Health | Rotations work, manageable pages | Same person always gets called |
| Documentation | Runbooks exist and are current | "We should write that down" |
| Onboarding | New engineers productive in 2-4 weeks | "It takes months to ramp up" |
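Bus factor does not have to be anecdotal; it can be estimated from version-control history. A rough sketch that counts how many authors it takes to cover half of a component's commits (the 50% threshold and the input shape are assumptions for illustration, not a standard metric):

```python
def bus_factor(commits_by_author, coverage=0.5):
    """commits_by_author: author -> commit count for one component.
    Returns the fewest authors whose commits reach `coverage` of the
    total. A result of 1 means a single person dominates the component."""
    total = sum(commits_by_author.values())
    covered, authors = 0, 0
    for count in sorted(commits_by_author.values(), reverse=True):
        covered += count
        authors += 1
        if covered >= coverage * total:
            return authors
    return authors

print(bus_factor({"sarah": 90, "alex": 5, "priya": 5}))  # 1: "only Sarah knows that part"
```

Running this per directory over `git log` output turns the warning-signal column into a number you can track.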
The 10 Questions That Reveal Everything
| # | Question | What Good Looks Like |
|---|---|---|
| 1 | Show me how a feature goes from idea to production | Clear workflow, 5-10 steps, measured cycle time |
| 2 | What happens when the site goes down at 3 AM? | Documented runbook, clear escalation |
| 3 | How do you know the system is healthy right now? | Dashboard pulled up in seconds |
| 4 | What is your biggest technical risk? | Honest answer, already being addressed |
| 5 | If you had unlimited engineering budget, what would you fix? | Prioritized list, rationale for ordering |
| 6 | How does a new engineer become productive? | Documented onboarding, 2-4 week ramp |
| 7 | How often do you deploy? | Daily+, automated, with rollback capability |
| 8 | Walk me through your last major outage | Blameless analysis, specific improvements made |
| 9 | Who has production database access? | Named list under 10 people |
| 10 | What technical decision do you most regret? | Thoughtful reflection, lessons applied |
Scoring the Responses:
- Immediate, detailed answer: Green flag
- Has to check or think hard: Yellow flag
- Defensive or evasive: Red flag
- "We do not track that": Walk away signal
The Scoring System
We score each pillar 1-5:
| Score | Meaning | Valuation Impact |
|---|---|---|
| 5 | Industry-leading, minimal risk | Premium justified |
| 4 | Solid foundation, minor improvements needed | No discount |
| 3 | Functional but requires investment | 10-20% technology discount |
| 2 | Significant remediation required | 20-40% discount or walk |
| 1 | Critical issues, potential value destruction | Walk away |
Calculating Aggregate Impact:
Technical Valuation Multiplier = (A + C + I + T) / 20
where A, C, I, and T are the 1-5 scores for Architecture, Code Quality, Infrastructure, and Team.
Multiplier Interpretation:
| Multiplier | Guidance |
|---|---|
| 0.90 - 1.00 | Full valuation |
| 0.70 - 0.89 | 10-30% discount |
| 0.50 - 0.69 | 30-50% discount |
| Below 0.50 | Walk away |
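The formula is simple enough to pin down in code, which removes any arguing over arithmetic in the deal room. A sketch mapping four pillar scores to the multiplier and its guidance band (band boundaries follow the interpretation above):

```python
def valuation_multiplier(architecture, code, infrastructure, team):
    """Each pillar is scored 1-5; returns (multiplier, guidance)."""
    for score in (architecture, code, infrastructure, team):
        assert 1 <= score <= 5, "pillar scores run 1-5"
    m = (architecture + code + infrastructure + team) / 20
    if m >= 0.9:
        band = "full valuation"
    elif m >= 0.7:
        band = "10-30% discount"
    elif m >= 0.5:
        band = "30-50% discount"
    else:
        band = "walk away"
    return m, band

print(valuation_multiplier(4, 4, 3, 4))  # (0.75, '10-30% discount')
```

Note that four "functional but requires investment" scores of 3 yield 0.6, already in discount-or-walk territory: mediocrity across every pillar compounds.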
Key Takeaways
| Insight | Implication | Action |
|---|---|---|
| Technology due diligence is team diligence | A strong team can fix weak code; a weak team cannot maintain strong code | Weight team assessment equally |
| The questions that get deflected reveal the most | Evasion indicates known problems | Push harder on non-answers |
| Hidden costs compound | $1M of technical debt today is $3M in 2 years | Discount accordingly |
| Systems over snapshots | Current code quality matters less than the system producing code | Evaluate processes, not just artifacts |
The Closing Question
Before any technical acquisition, we ask ourselves:
"If we knew everything about this codebase that the team knows, would we still do this deal at this price?"
Technical due diligence exists to close that knowledge gap.
The $47 million deal we walked away from? The company was acquired six months later for $52 million. Eighteen months after that, they wrote off the technology entirely and rebuilt from scratch.
The deal they thought they were getting was not the deal they got.
That is the cost of inadequate due diligence.