
From Dashboards to Decisions: How Engineers and Founders Should Use Metrics Without Getting Lost in Charts

At Meta, I once reviewed a dashboard that had 247 metrics on it. Two hundred and forty-seven. When I asked the team which metrics actually informed decisions, they pointed to 3.

The other 244 were "nice to know" or "someone asked for it once." The dashboard took 4 engineers 2 months to build. It was used by 6 people. And those 6 people still made decisions based on the same 3 metrics they'd been tracking in a spreadsheet before the dashboard existed.

This is the analytics trap: we build measurement systems that generate data, not decisions.

Now, as a founder, I've made the opposite mistake: flying blind because "we're too early for metrics." Both extremes are wrong. Here's what actually works.

The Instrument → Observe → Change → Review Loop

At Meta, good teams operated on a simple cycle:

  1. Instrument: Decide what to measure and set up tracking.
  2. Observe: Watch the metrics for patterns over 1-2 weeks.
  3. Change: Make ONE change based on what you learned.
  4. Review: Did the change work? Why or why not?

Most teams skip Observe and Review. They instrument everything, then immediately start changing things based on incomplete data, never validating if the changes worked.

Bad loop: Instrument → Change → Change → Change → "Why isn't anything working?"

Good loop: Instrument → Observe → Change → Review → Observe → Change → Review

The difference is patience and discipline.

How to Choose Which Metrics Matter

Early in my time at Meta, I tried to track everything. My manager said something that changed how I think about metrics:

"If you can't tell me what action you'll take based on this metric, stop tracking it."

Harsh, but correct. Here's the framework I use now:

1. Start with the Decision

Before you add a metric, ask:

  • What decision does this metric inform?
  • What action would I take if this metric goes up vs. down?
  • How often do I need to check this to make that decision?

If you can't answer all three, don't track it.

Example - Bad Metric: "Let's track total page views."

  • What decision does this inform? "Uh... if it goes up, that's good?"
  • What action would you take? "Keep doing what we're doing?"

This metric doesn't drive decisions. It's vanity.

Example - Good Metric: "Let's track conversion rate from landing page to signup."

  • What decision? Whether our messaging is clear
  • What action? If it drops below 10%, we test new copy
  • How often? Weekly, since we ship changes weekly

This metric has a threshold and an action tied to it.
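
To make that concrete, here's a rough sketch of what the weekly check could look like as a query. This is Postgres-style SQL, and the events table and event names are made up for illustration; the point is that the threshold and the action live right next to the number.

-- Weekly landing page -> signup conversion, with the 10% threshold
-- and its action attached (hypothetical events table).
SELECT
    week,
    signups::float / NULLIF(landing_views, 0) AS conversion_rate,
    CASE WHEN signups::float / NULLIF(landing_views, 0) < 0.10
         THEN 'below threshold: test new copy'
         ELSE 'ok' END AS action
FROM (
    SELECT
        date_trunc('week', occurred_at) AS week,
        COUNT(*) FILTER (WHERE event_name = 'landing_page_view') AS landing_views,
        COUNT(*) FILTER (WHERE event_name = 'signup')            AS signups
    FROM events
    WHERE occurred_at >= CURRENT_DATE - 56   -- last 8 weeks
    GROUP BY 1
) weekly
ORDER BY week;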

2. Use the Tier System

I learned this at Meta: organize metrics into tiers based on how often they inform decisions.

Tier 1 - North Star (1 metric): The single metric that best represents success. Check weekly/monthly. Example: Revenue, DAU growth rate, net retention.

Tier 2 - Key Drivers (3-5 metrics): Metrics that directly impact the North Star. Check daily/weekly. Example: Conversion rate, activation rate, feature usage.

Tier 3 - Health Metrics (5-10 metrics): Warning lights. You don't optimize them, but you watch for problems. Example: Error rates, load time, churn rate.

Tier 4 - Context Metrics (as many as needed): You don't track these regularly, but you can query them when debugging. Example: Specific feature engagement, user segments, cohort behavior.

Most dashboards show only Tier 4 metrics. You end up with 100 charts and no clarity.

Good dashboards show: 1 North Star, 5 Key Drivers, 10 Health Metrics. That's 16 metrics max.

3. Apply the "So What?" Test

For every metric on your dashboard, ask "So what?"

"Our signup rate is 15%." So what?

"That's down from 18% last week." So what?

"It means our landing page change hurt conversions." Good—now we have a decision: revert or iterate.

If you can't get to a decision within 3 "so whats," the metric probably doesn't matter.

Avoiding Analysis Paralysis

The worst thing about good data is it makes you want more data before making decisions. I've seen teams spend 3 months analyzing instead of 1 week testing.

Symptom 1: "We Need More Data"

If you find yourself saying this after 2+ weeks of observation, you don't need more data—you need a hypothesis.

Instead of: "Let's collect 3 more months of data on user behavior." Try: "I think users are dropping off because X. Let's test a change that addresses X and see if drop-off improves."

You learn more from a 1-week test than from 3 months of observation.

Symptom 2: Building Complex Dashboards Before You Have Users

I see pre-PMF startups building comprehensive analytics before they have 100 users. This is backward.

Before 100 users: Track 3 metrics manually in a spreadsheet. Talk to users. Make changes based on conversations, not data.

Before 1,000 users: Add basic instrumentation (signup flow, key actions, retention). Review weekly. Still talk to users.

Before 10,000 users: Build a simple dashboard (North Star + Key Drivers). Review daily. Still talk to users.

After 10,000 users: Now you can justify more sophisticated analytics.

Most founders build the 10,000-user analytics system when they have 10 users. Then they spend more time looking at dashboards than talking to customers.

Symptom 3: Optimizing Metrics That Don't Move the North Star

This one killed so many projects at Meta.

Example: A team optimized "time on page" for 3 months. Time on page went up 40%. Revenue didn't move.

Why? Because users were spending more time confused, not more time engaged.

The metric improved, but it didn't matter.

The fix: Before optimizing any Tier 2 or Tier 3 metric, validate that it actually correlates with your North Star. Run a regression, plot it, eyeball it. If there's no relationship, don't optimize it.
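
One rough way to do that eyeball check in SQL (Postgres-style; the daily_metrics table and its columns are hypothetical) is to compute the correlation and a simple regression between the candidate metric and the North Star over the last quarter:

-- Does time on page move with revenue at all? If correlation and
-- r-squared are near zero, don't spend a quarter optimizing it.
SELECT
    CORR(avg_time_on_page, revenue)       AS correlation,
    REGR_SLOPE(revenue, avg_time_on_page) AS slope,
    REGR_R2(revenue, avg_time_on_page)    AS r_squared
FROM daily_metrics
WHERE day >= CURRENT_DATE - 90;

A near-zero result doesn't prove anything causally, but it's usually enough to kill a three-month optimization project before it starts.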

Practical Decision Frameworks

Here are the frameworks I actually use to turn data into decisions:

Framework 1: The Threshold System

Set thresholds for key metrics. When a threshold is crossed, you take a predefined action.

Example:

| Metric | Threshold | Action |
|--------|-----------|--------|
| 7-day retention | < 20% | Pause growth, fix onboarding |
| Conversion rate | < 10% | Test new landing page |
| Error rate | > 1% | All hands to fix bugs |
| NPS | < 30 | User research sprint |

This removes decision fatigue. The dashboard tells you what to do.
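
Here's a sketch of how that can be wired up so you don't even have to stare at the dashboard: a query that returns a row only when a threshold is crossed, scheduled in whatever BI tool you use (many can notify you when a query returns results). The metrics_daily table and its columns are hypothetical.

-- Returns one row per crossed threshold, with the predefined action.
-- An empty result means nothing to do today.
WITH latest AS (
    SELECT * FROM metrics_daily ORDER BY day DESC LIMIT 1
)
SELECT 'retention_7d' AS metric, retention_7d AS value,
       'Pause growth, fix onboarding' AS action
FROM latest WHERE retention_7d < 0.20
UNION ALL
SELECT 'conversion_rate', conversion_rate, 'Test new landing page'
FROM latest WHERE conversion_rate < 0.10
UNION ALL
SELECT 'error_rate', error_rate, 'All hands to fix bugs'
FROM latest WHERE error_rate > 0.01
UNION ALL
SELECT 'nps', nps, 'User research sprint'
FROM latest WHERE nps < 30;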

Framework 2: Trend Over Snapshot

Never make decisions based on a single data point. Look at trends over 2-4 weeks.

Bad: "Signups are down today, let's change the CTA." Good: "Signups have been trending down for 3 weeks, down 25% overall. Let's test a new CTA."

Avoid knee-jerk reactions to noise.
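
For example, instead of eyeballing today's number, look at the weekly series with week-over-week change. A sketch against a hypothetical users table (Postgres-style window functions):

-- Weekly signups with week-over-week change, so one noisy day
-- doesn't trigger a knee-jerk CTA rewrite.
WITH weekly AS (
    SELECT
        date_trunc('week', signup_date) AS week,
        COUNT(*) AS signups
    FROM users
    WHERE signup_date >= CURRENT_DATE - 84   -- last 12 weeks
    GROUP BY 1
)
SELECT
    week,
    signups,
    signups - LAG(signups) OVER (ORDER BY week) AS change_vs_prev_week,
    ROUND(100.0 * (signups - LAG(signups) OVER (ORDER BY week))
          / NULLIF(LAG(signups) OVER (ORDER BY week), 0), 1) AS pct_change
FROM weekly
ORDER BY week;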

Framework 3: Cohort Comparison

Instead of looking at overall metrics, compare cohorts.

"Retention is 30%" doesn't tell you much.

"Users who complete onboarding have 45% retention, users who don't have 8% retention" tells you exactly where to focus.

Example query I run weekly:

-- Compare 7-day retention by onboarding completion.
-- Exclude the most recent week of signups so every user in the
-- cohort has had a full 7 days in which to come back.
SELECT 
    completed_onboarding,
    COUNT(DISTINCT user_id) AS users,
    AVG(CASE WHEN days_since_signup >= 7 
        AND returned THEN 1 ELSE 0 END) AS retention_7d
FROM user_activity
WHERE signup_date >= CURRENT_DATE - 30
  AND signup_date <= CURRENT_DATE - 7
GROUP BY completed_onboarding;

This tells me if onboarding matters. If it does, I optimize it. If it doesn't, I focus elsewhere.

Framework 4: The "Why Pyramid"

When a metric changes, drill down:

  1. What changed? (the metric)
  2. Where did it change? (which segment, cohort, feature)
  3. Why did it change? (correlation with other changes)
  4. What should we do? (the decision)

Example:

  1. Retention dropped from 35% to 28%
  2. Only for users who signed up via mobile
  3. Coincides with our mobile onboarding update last week
  4. Revert the mobile onboarding change, investigate what broke

Most people skip straight from 1 to 4 and make wrong decisions.
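
Step 2 is usually a grouped query. Here's a sketch of the drill-down for the retention example above, reusing the user_activity table from earlier and assuming (hypothetically) that it also records a signup_platform column and a precomputed retained_7d flag:

-- Where did retention change? Break it down by signup platform
-- and signup week to localize the drop before deciding anything.
SELECT
    signup_platform,
    date_trunc('week', signup_date) AS signup_week,
    COUNT(DISTINCT user_id) AS users,
    COUNT(DISTINCT user_id) FILTER (WHERE retained_7d)::float
        / NULLIF(COUNT(DISTINCT user_id), 0) AS retention_7d
FROM user_activity
WHERE signup_date >= CURRENT_DATE - 42
  AND signup_date <= CURRENT_DATE - 7
GROUP BY 1, 2
ORDER BY 2, 1;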

What Good Looks Like: Real Examples

Let me show you what this looks like in practice.

Example 1: Early-Stage Startup (My Current Company)

We're building tools for creators. Here's our full dashboard:

North Star: Weekly active creators (WAC)

Key Drivers:

  • Signup → first content created (activation rate)
  • Creators who publish 2+ pieces of content (power user %)
  • Average pieces of content per creator per week
  • Week 2 retention rate

Health Metrics:

  • Error rate on key flows
  • Time to first value (signup → first success)
  • NPS from weekly survey
  • Churn rate

That's it. 9 metrics total. Reviewed Monday mornings, takes 15 minutes.

When WAC is flat or declining, we look at the key drivers to see which one dropped. That tells us where to focus that week.
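
The North Star itself is a one-line query to compute. A rough sketch, assuming a hypothetical content_events table with creator_id and created_at columns:

-- Weekly active creators: distinct creators who created content
-- in each of the last 12 weeks.
SELECT
    date_trunc('week', created_at) AS week,
    COUNT(DISTINCT creator_id)     AS weekly_active_creators
FROM content_events
WHERE created_at >= CURRENT_DATE - 84
GROUP BY 1
ORDER BY 1;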

Example 2: Meta Ads Team

My team at Meta tracked performance for ads measurement systems processing 500B+ events per day.

North Star: Measurement accuracy (how close our numbers were to ground truth)

Key Drivers:

  • Event delivery latency (P50, P99)
  • Data completeness rate
  • Processing error rate
  • Schema validation pass rate

Health Metrics:

  • Infrastructure costs per billion events
  • On-call pages per week
  • Time to detect/resolve issues
  • Customer support tickets related to data accuracy

For a team of 20 engineers supporting billions in ad revenue, we reviewed 12 metrics weekly and 4 metrics daily.

That's it. The rest lived in Tier 4 (queryable when debugging, but not on the main dashboard).

Common Mistakes and How to Fix Them

Mistake 1: Death by Dashboard

Symptom: You have 5+ dashboards and don't know which to check.

Fix: Pick ONE dashboard to review daily/weekly. Archive the rest. If you realize you need something from an archived dashboard, add just that metric to the main one.

Mistake 2: Metrics Theater

Symptom: You review dashboards, nod, then make decisions based on gut feel anyway.

Fix: For every decision, write down: "Based on [metric], we decided to [action]." If you can't fill in the blanks, you're not actually using the data.

Mistake 3: Lagging Indicators Only

Symptom: All your metrics are outcomes (revenue, users, retention) with no leading indicators.

Fix: For each outcome metric, identify what predicts it. If revenue is the outcome, conversion rate is the leading indicator. Track both.
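
One lightweight way to keep the pair visible is to put them side by side in the same weekly query, with the outcome shifted forward a week so you can eyeball whether the leading indicator actually leads. A sketch against a hypothetical weekly_metrics summary table:

-- Leading indicator (conversion rate) next to the lagging outcome
-- (revenue), plus next week's revenue for a quick lead/lag eyeball.
SELECT
    week,
    conversion_rate,
    revenue,
    LEAD(revenue) OVER (ORDER BY week) AS revenue_next_week
FROM weekly_metrics
ORDER BY week;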

Mistake 4: No Action Thresholds

Symptom: You see metrics go up or down but don't know what to do about it.

Fix: Set thresholds. "If X drops below Y, we do Z." Remove the decision-making friction.

What to Do This Week

If your metrics situation is a mess, here's how to fix it:

Day 1: Pick Your North Star

Answer: "If we could only track ONE metric for the next 6 months, what would it be?"

That's your North Star. Everything else supports it.

Day 2: Identify 3-5 Key Drivers

What metrics directly influence your North Star? If your North Star is revenue:

  • Potential drivers: signups, conversion rate, average order value, purchase frequency
  • Pick the 3-5 you can actually influence

Day 3: Set Thresholds and Actions

For each key driver, define:

  • What's "good" (threshold)
  • What action you'll take if it drops below "good"
  • How often you'll check it

Day 4: Build a Simple Dashboard

One page. North Star at the top, key drivers below, health metrics at the bottom. Nothing else.

Tools: Mode, Metabase, even Google Sheets. The tool doesn't matter; the discipline does.

Day 5: Schedule Review Cadence

  • North Star: Weekly review, 15 minutes
  • Key Drivers: Daily check, 5 minutes (automated alert if threshold crossed)
  • Health Metrics: Weekly scan, 5 minutes

Put these on your calendar. Make them non-negotiable.

The Real Goal: Decisions, Not Data

The best dashboard I ever saw at Meta had 6 metrics on it. The team using it shipped 2x faster than teams with 50-metric dashboards.

Why? Because they spent 5 minutes looking at data and 4 hours building. Everyone else spent 2 hours looking at data, debating what it meant, then building the wrong thing.

Metrics exist to help you make decisions faster and better, not to give you more things to look at.

If your dashboard doesn't make the next action obvious, it's not working.

Start simple. Instrument the minimum needed to make your next decision. Observe until you see a pattern. Make one change. Review if it worked.

Then repeat.

That's the loop. Everything else is just decoration.
