
Operational KPIs That Actually Matter

A KPI is only useful if you can complete the sentence "if this number moves significantly, we will do [specific action] within [specific timeframe]" — any metric that fails this test is decoration, not an instrument.

Diosh Lequiron
May 10, 2026 · 13 min read
kpis · metrics · venture-studio · operations · portfolio-management

The first portfolio review I ran across our active ventures, in early 2024, took six hours. It included nineteen slides per venture, color-coded dashboards in three different tools, and a Notion document with thirty-four metrics that someone — possibly me — had decided were important. At the end of six hours, I could not have told you which ventures were healthy and which were quietly bleeding. I could only have told you which ones had the most graphs.

That review was the moment I realized our KPI system was a performance, not an instrument. The dashboards existed because dashboards were what serious operators were supposed to have. Nobody was making decisions from them. Nobody was changing operational behavior because of them. They were a kind of ambient anxiety — proof that we cared about metrics, proof that we were "data-driven," proof that we understood our businesses. They were not, in any operational sense, telling us anything.

I rebuilt the system over the next four months. Most of what I deleted was advice that had come from SaaS-focused KPI frameworks — frameworks designed for a single product with one user type, one revenue stream, and one growth motion. We run nine active ventures across seven industries, and the frameworks built for Slack or Notion did not transfer. What follows is what survived the deletion, and why.

Why Most KPI Frameworks Fail Multi-Venture Operations

The standard SaaS KPI framework — the one that lives in every founder's bookmark folder, with MAU, NPS, CAC, LTV, ARPU, churn, expansion revenue, and net revenue retention — was designed for an operating environment that does not exist in our portfolio. It assumes one product. One pricing model. One acquisition channel that's optimizable through paid media. One target persona. One funnel that goes from awareness to activation to retention to expansion.

Bayanihan Harvest is an e-commerce platform for agricultural products. Its operational reality is inventory turnover, supplier reliability, and last-mile delivery performance. None of those metrics exist in the SaaS framework. AgriForge is an agritech infrastructure venture. Its KPIs are model accuracy, deployment uptime, and partner integration latency. HW88 Education is a teaching business. Its KPIs are completion rates, revenue per cohort, and instructor utilization. CapitalWizards operates in financial education and content. Its KPIs are content reach, conversion to paid resources, and audience trust signals.

If I forced all four of those ventures into a single KPI framework, I would lose the metrics that actually predict failure for each one. Inventory turnover does not appear in a SaaS dashboard. Cohort completion does not appear in a SaaS dashboard. Model accuracy does not appear in a SaaS dashboard. The metrics that matter are venture-specific, and the framework that aggregates them across the portfolio has to acknowledge that.

The other failure mode of standard KPI frameworks: they prioritize what is measurable over what is meaningful. Vanity metrics survive in dashboards because they are easy to track. The metrics that would actually tell you whether the business is healthy require manual work, custom queries, or honest conversations — and they get cut from dashboards because they don't auto-populate. We will return to this distinction shortly.

The 3-Metric Weekly Status Update

Every venture in our portfolio submits a weekly status update. It contains three numbers. There is no narrative. There is no commentary. There is no qualitative summary. Just three numbers and the date.

The three metrics are: one acquisition metric, one retention metric, and one operational metric. The specific metrics differ by venture, but the slot definitions are identical across the portfolio.

The acquisition metric tracks new customer reality at the top of the funnel — not impressions, not visits, not leads, but the count of new units of demand that crossed a threshold of real intent in the past seven days. For Bayanihan Harvest, that is new paying customers. For HW88 Education, it is new cohort enrollments. For Mr Pet Lover, it is new active subscriber accounts. The unit changes. The principle does not.

The retention metric tracks whether existing customers are still showing up and paying. For most ventures this is week-over-week active rate among the prior month's active base, with a hard rule that "active" must be defined operationally — what specific action makes someone active — and the definition cannot change without explicit approval. Definitions that move are not retention metrics, they are excuses.

The operational metric is venture-specific and tracks the one thing that, if it broke, would put the venture at risk within thirty days. For Bayanihan Harvest it is supplier on-time delivery rate. For AgriForge it is model deployment uptime. For HW88 Education it is instructor capacity utilization. The metric answers the question: what is the choke point that, if neglected, will quietly kill this business?

The reason for three and only three is operational. A weekly update with twenty metrics is not a status update — it is a report. Reports get skimmed. Three metrics get read. Operators have to think about which three matter most for their venture, which forces them to articulate their own theory of the business. We learned this from several months of operators sending weekly updates that listed everything they could measure, which translated to nobody — including the operator — knowing what to actually focus on.
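To make the slot structure concrete, here is a minimal sketch of the weekly update as a data structure. This is illustrative Python, not our actual tooling; the class and field names are hypothetical, and the values mirror the Bayanihan Harvest examples above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class WeeklyUpdate:
    """One venture's weekly status update: three numbers and the date."""
    venture: str
    week_ending: date
    acquisition: float   # new units of real demand in the past seven days
    retention: float     # active rate among the prior month's active base
    operational: float   # the venture's 30-day choke-point metric

# No narrative field exists on purpose. Values are illustrative.
update = WeeklyUpdate(
    venture="Bayanihan Harvest",
    week_ending=date(2026, 5, 8),
    acquisition=41,      # new paying customers this week
    retention=0.62,      # week-over-week active rate
    operational=0.94,    # supplier on-time delivery rate
)
```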

Vanity Metrics vs. Signal Metrics

The distinction between vanity metrics and signal metrics is not new. The mistake most founders make with the distinction is treating it as a binary — vanity bad, signal good — when in reality the same number can be either, depending on what decision it informs.

A vanity metric is a number that increases over time and feels good to report but does not change anyone's behavior when it moves. A signal metric is a number that, when it moves, immediately triggers a specific operational decision. The test is not the metric itself — it is whether the operator can, in a sentence, complete this template: "If this number drops by 20%, we will [specific action] within [specific timeframe]."

For Bayanihan Harvest, total page views is a vanity metric. If it drops 20%, we do not have a defined response. We could investigate, but the investigation is exploratory. There is no playbook. By contrast, supplier on-time delivery rate is a signal metric. If it drops 20%, we trigger a specific protocol: top three underperformers get a direct conversation within forty-eight hours, alternate suppliers get activated within seven days, and customer service messaging shifts to set delivery expectations until the rate recovers. The number is connected to a decision tree.

For HW88 Education, course landing-page conversion rate is a vanity metric in our context — we can measure it, but improving it is not the bottleneck. Cohort completion rate is a signal metric. A drop in completion rate triggers a specific response: instructor outreach to the cohort within seventy-two hours, root-cause review on session content, and a pause on the next cohort launch until the underlying issue is identified. The number forces action.

The test we run on every metric in our weekly updates: complete the sentence "If this moves significantly, we will…" If you cannot complete the sentence with a specific action and timeframe, the metric is decoration. Take it off the dashboard. Most operators discover, after running this test, that 70% of what they were tracking fails the test. That number is not a research finding — it is what we found in our own portfolio after running the audit on five ventures.
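Here is a minimal sketch of that audit as code. The metrics and responses mirror the examples above; the structure itself is hypothetical, and the point is only that a metric without an attached action and timeframe gets flagged as decoration.

```python
from typing import NamedTuple, Optional

class Metric(NamedTuple):
    name: str
    action: Optional[str]           # what we do if the number moves ~20%
    timeframe_hours: Optional[int]  # how quickly the action must happen

# Entries mirror the examples above; responses are paraphrased from the text.
metrics = [
    Metric("total page views", None, None),
    Metric("supplier on-time delivery rate",
           "call top 3 underperformers; activate alternate suppliers", 48),
    Metric("cohort completion rate",
           "instructor outreach; root-cause review; pause next launch", 72),
]

for m in metrics:
    verdict = "signal" if m.action and m.timeframe_hours else "decoration"
    print(f"{m.name}: {verdict}")
```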

The 4 Universal Metrics

We rejected the SaaS framework's universal metrics. We replaced them with four metrics that we found apply across every business model in our portfolio — e-commerce, content, education, agritech, fintech, infrastructure. These are real metrics, not metric categories.

The first is gross margin per unit of customer attention earned. This is more specific than "gross margin" — it asks how much profit you generate per hour, per session, per email open, or per active customer interaction. The unit of measurement varies by venture; the principle is that customer attention is the constraining resource, and margin per unit of that resource tells you whether the business model is healthy. A venture with high revenue and low margin per unit of attention is burning attention faster than it can replenish, and attention is harder to replace than money.

The second is time-to-recovery on operational failures. When something breaks — a payment processor goes down, a supplier misses a delivery, a class doesn't get delivered — how long until the venture is back to normal operations? This is not theoretical resilience. It is measured outage time per quarter, summed and divided by the number of failures. The metric exposes operational fragility before it shows up in revenue numbers. A venture with growing time-to-recovery is one quarter away from a customer crisis.

The third is revenue concentration risk. What percentage of revenue comes from the top 10% of customers, or the top single channel, or the top single product? Above 60% concentration in any of those three is a risk. Above 75% is an active threat. We track this monthly because concentration risk does not move slowly — a single customer churning, a single channel being deprioritized by an algorithm change, a single product hitting a regulatory issue, can take a venture from healthy to in-crisis in days. The metric is venture-specific in calculation, universal in importance.

The fourth is operator hours per dollar of revenue. How many hours does the venture's lead operator spend per dollar of revenue generated? This metric exposes whether the venture is scaling or whether it is just demanding more from the operator over time. A venture where this number is dropping is gaining leverage. A venture where this number is flat or rising is, regardless of top-line growth, a venture that will hit a personal-capacity ceiling. We measure this because we run a portfolio, and a single operator burning out has cascade effects across multiple ventures. The 73% operations-time reduction we achieved across the portfolio came from optimizing this metric specifically — not by working harder, but by automating or eliminating the work that wasn't producing dollar-per-hour returns.

These four are universal in our portfolio. They appear in every quarterly review, regardless of business model. The venture-specific metrics layer on top of them.
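For readers who want the arithmetic spelled out, here is a minimal sketch of all four universal metrics as plain calculations. Every input number is illustrative, not portfolio data; the 60% and 75% concentration thresholds are the ones stated above.

```python
# 1. Gross margin per unit of customer attention earned
gross_margin = 18_000.0          # quarterly gross margin, in currency units
attention_units = 4_500          # e.g. active customer sessions this quarter
margin_per_attention = gross_margin / attention_units

# 2. Time-to-recovery on operational failures
outage_hours = [3.5, 12.0, 1.5]  # measured outage time per failure this quarter
time_to_recovery = sum(outage_hours) / len(outage_hours)

# 3. Revenue concentration risk (top customer decile, channel, or product)
top_decile_share = 0.58          # share of revenue from the top 10% of customers
concentration = ("active threat" if top_decile_share > 0.75
                 else "risk" if top_decile_share > 0.60
                 else "acceptable")

# 4. Operator hours per dollar of revenue (dropping means gaining leverage)
operator_hours = 520             # lead operator hours this quarter
revenue = 65_000.0
hours_per_dollar = operator_hours / revenue

print(margin_per_attention, time_to_recovery, concentration, hours_per_dollar)
```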

How to Set Baselines When You Have No History

A common question from operators in their first quarter: how do I set targets when I have no historical data? The frameworks that exist mostly assume you have a year of operating data and can set targets as percentage improvements over baseline. New ventures don't have that.

The method we use is what we call 90-day calibration. For the first thirty days post-launch, you measure without any target. The goal is to establish what normal looks like — the natural variance of the venture's operating reality. You will see noise. Days where the metric spikes for reasons unrelated to performance. Days where it drops because of timing artifacts. After thirty days, you have a sense of the floor, the ceiling, and the typical band of variance.

For days thirty-one through sixty, you set provisional targets. Not aspirational targets — provisional ones. The provisional target is the median of the first thirty days, plus a small improvement increment that represents the natural learning curve of an operating team. If the provisional target is missed in days thirty-one through sixty, that is signal that you do not yet understand the venture's operating reality. The response is to revise the provisional target downward, not to push the team harder.

For days sixty-one through ninety, you set the actual quarterly target. This target is informed by sixty days of real data and one round of provisional-target calibration. By day ninety, you have a target that is grounded in evidence rather than wishful thinking, and you have a team that has practiced hitting and missing targets in a low-stakes environment. The target you set on day ninety is the one that goes into the quarterly review and becomes a real operational commitment.
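The provisional-target step in the middle of that sequence reduces to one line of arithmetic. A minimal sketch, assuming a 5% improvement increment (the method does not fix a number for the increment, so treat that parameter as an assumption):

```python
from statistics import median

def provisional_target(first_30_days: list[float], increment: float = 0.05) -> float:
    """Median of the calibration window, nudged by a learning-curve increment."""
    return median(first_30_days) * (1 + increment)

# Truncated sample from a 30-day calibration window (illustrative values).
daily_enrollments = [12, 9, 15, 11, 10, 14, 8, 13, 11, 12]
print(provisional_target(daily_enrollments))  # median 11.5 * 1.05 = 12.075
```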

The temptation new operators have is to skip directly to setting targets on day one. We have watched this fail repeatedly. The targets set on day one are always either too aggressive — based on what the operator hopes will happen — or too conservative — based on fear of missing. Neither version reflects reality, and neither produces useful operational pressure. The 90-day calibration sounds slow, but it produces targets that the team can actually hit and learn from, which means quarter two starts from a position of confidence rather than from yet another missed-target spiral.

The Quarterly Portfolio Review

Once a quarter, all active ventures undergo a portfolio review. The review is structured to be short, honest, and decision-producing — not exhaustive.

The review covers four sections per venture, in this order. First, the four universal metrics, with current value, prior-quarter value, and direction. Second, the venture-specific operational metric — the one chosen as the venture's choke point — with the same comparison. Third, a one-paragraph honest answer to the question "what would have to be true for this venture to be in serious trouble in six months?" Fourth, a resource ask: what does the venture need from the portfolio in the next quarter — engineering capacity, capital, content distribution, hiring support — and what's the expected return on that resource.

The reviewer is the founder. The participants are the venture operator and one peer operator from a different venture. The peer operator's job is to ask the questions that the venture operator has been avoiding. The peer-operator presence is non-negotiable. We tried running reviews founder-and-operator alone, and they reliably degraded into mutual reassurance. Adding a third party who has nothing to lose by asking hard questions changed the dynamic completely.

Reviews trigger one of four outcomes: continue current trajectory, accelerate (deploy more resources), de-prioritize (reduce resource allocation), or freeze (move to inactive portfolio). The freeze trigger is specifically a metric-based decision: if operator hours per dollar of revenue have been rising for two consecutive quarters, or if revenue concentration has crossed 75% in any single dimension and not been reduced within a quarter, or if time-to-recovery on operational failures has tripled from baseline — those are not subjective judgments. They are pre-committed triggers, and the review surfaces them.
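Because the triggers are pre-committed, they can be written down as a plain predicate. A minimal sketch, with hypothetical field names and the thresholds from the text:

```python
def should_freeze(hours_per_dollar_by_quarter: list[float],
                  concentration: float,
                  concentration_reduced_within_quarter: bool,
                  time_to_recovery: float,
                  ttr_baseline: float) -> bool:
    # Operator hours per dollar rising for two consecutive quarters
    # (two quarter-over-quarter increases, so three data points needed).
    rising_two_quarters = (
        len(hours_per_dollar_by_quarter) >= 3
        and hours_per_dollar_by_quarter[-1] > hours_per_dollar_by_quarter[-2]
        > hours_per_dollar_by_quarter[-3]
    )
    # Concentration above 75% in any dimension, not reduced within a quarter.
    concentration_breach = (concentration > 0.75
                            and not concentration_reduced_within_quarter)
    # Time-to-recovery tripled from baseline.
    recovery_breach = time_to_recovery >= 3 * ttr_baseline
    return rising_two_quarters or concentration_breach or recovery_breach
```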

The freeze trigger has fired three times in our portfolio. In each case, the operator was relieved when it fired, because the trigger named what they had been quietly feeling for weeks. The metrics did not surprise us. They forced us to act on what we already knew.

That is what KPIs are actually for. Not to inform decisions we have not yet made — to force decisions we are already avoiding.

