How Product Managers Can Measure What Matters

Everybody knows metrics are important for product managers. The key is using the right data to inform and influence your product decisions. With so many metrics to track and so much data to analyze, it can be difficult to know which numbers are truly important and what actions to take based on them. In this article, I’ll explore how product managers can measure what matters, so they can make data-driven decisions that drive product success.

There are 3 types of data: The good, the bad, and, yes, the ugly.

The Good, the Bad and the Ugly

Let’s start off with the bad — Vanity Metrics

What are vanity metrics? They measure something without context, they're easy to manipulate, and they don't lead to actions you can take to grow your business. Examples include app downloads, page views, time spent on a website, Twitter followers, Facebook likes, etc.

The number of times your app has been downloaded does not correlate directly with how successful your business is. If a million people have downloaded it but only a tenth of them are using it, that's not good. The same goes for page views: lots of traffic does not mean you have a successful business.
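To illustrate, a quick sketch with made-up numbers: the download count alone looks impressive, but pairing it with active usage tells the real story.

```python
# Hypothetical numbers: downloads alone say little; pair them with usage.
downloads = 1_000_000
monthly_active_users = 100_000  # users who actually opened the app this month

# The ratio tells you far more than either raw count on its own.
active_ratio = monthly_active_users / downloads
print(f"Active-user ratio: {active_ratio:.0%}")  # → Active-user ratio: 10%
```

A million downloads sounds like success; a 10% active-user ratio reframes it as a retention problem worth investigating.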

It is important for companies to properly measure the right data so that they can get a handle on the true health of their business.

If you focus primarily on vanity metrics, you can get a false sense of success from numbers that make you feel good but don't actually tell you how well your business is doing. This is an especially important lesson for start-ups, which are often tempted by vanity metrics because they haven't yet figured out which metrics actually matter.

Let’s take a look at some examples of vanity metrics…

[Chart: daily website traffic with a large spike on August 14th]

On this chart, it looks like something really great happened on August 14th. Wow, web traffic really spiked that day! The mistake many people make is attributing that solitary spike to an action they took directly beforehand. Multiple people at the company may have done things that contributed to the spike, or external events could have caused it. The key takeaway: if the spike didn't contribute to the company's bottom line by generating revenue, it's useless.

Speaking of useless, this entire page is useless!

Many product managers use Google Analytics. But if you use its default dashboard, pictured here, there's nothing in these metrics you can act on. Customize your dashboard to focus on the metrics that measure how well your product is achieving key outcomes. Data points such as the number of users, page views, and average session duration don't tell you what's working, and you can't improve what you're not effectively measuring.
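One way to move past raw counts is to derive an outcome metric from your event data instead of reporting the events themselves. Here's a minimal sketch with a hypothetical event log; the event names and structure are invented for illustration.

```python
# Hypothetical event log. Counting page_view events is a vanity metric;
# deriving visitor-to-signup conversion from the same data is actionable.
events = [
    {"user": "a", "event": "page_view"},
    {"user": "a", "event": "signup"},
    {"user": "b", "event": "page_view"},
    {"user": "c", "event": "page_view"},
    {"user": "c", "event": "signup"},
]

visitors = {e["user"] for e in events if e["event"] == "page_view"}
signups = {e["user"] for e in events if e["event"] == "signup"}

# Share of unique visitors who went on to sign up.
conversion = len(signups & visitors) / len(visitors)
print(f"Visitor-to-signup conversion: {conversion:.0%}")
```

The page-view count (3) tells you nothing you can act on; the conversion rate derived from it is a number you can try to move.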

The Ugly – Flawed Charts

As often as product managers talk about making data-informed decisions, it’s amazing to me how often data is presented in ways that are completely misleading or downright wrong.

[Image: the campaign's Venn diagram of Americans and gun owners who support background checks]

Many of you may remember the Venn diagram incident from Hillary Clinton’s campaign.

The general point her campaign team was trying to make was true: the majority of Americans support universal background checks, including a majority of gun owners. If you think about it, though, the yellow circle should sit completely inside the blue circle, because the gun owners referenced here are all Americans. But that's not how Venn diagrams work! This information should've been presented using a different type of chart.

[Image: Google marketing email with a doughnut chart labeled 41%]

In this Google example, it appears the email designer messed up the doughnut chart: the green arc extends around more than half of the circle, even though the number referenced is 41%.

The chart below is bad for several reasons. If you follow the use of colors, you'll see Iceland, Finland, Portugal, and Spain each nicely represented with its own color. But then the UK, Denmark, Australia, Venezuela, and Kenya all share a single color, as if they were part of the same country. This would've worked much better as a simple bar chart!

[Chart: country comparison in which five countries share one color]

The lesson here is to make sure you use the right type of visualization for displaying your data and that your chart is correct.
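Putting the lesson into practice takes only a few lines of plotting code. Here's a minimal sketch using matplotlib; the country values are hypothetical stand-ins for the chart's data.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Hypothetical values standing in for the flawed chart's country data.
countries = ["Iceland", "Finland", "Portugal", "Spain", "UK"]
values = [12, 18, 25, 31, 44]

# One labeled bar per category: no shared colors, no ambiguity
# about which value belongs to which country.
fig, ax = plt.subplots()
ax.bar(countries, values)
ax.set_ylabel("Value (hypothetical units)")
ax.set_title("A simple bar chart avoids the color-mapping confusion")
fig.savefig("countries.png")
```

With categorical data like this, a bar chart maps each country to its own labeled bar, so the reader never has to decode a color legend.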

Where can you go for help with selecting and creating charts?

If you’re struggling with deciding which type of chart to use, I highly recommend From Data to Viz.

[Screenshot: the From Data to Viz website]

Whether you’re new to the world of creating graphs or you’ve been doing this for years, this website is a treasure trove of information about how to present data.  

Based on the type of data you have, such as chronological or numerical, it will guide you through the relevant types of visualizations that are best suited for your particular case. 

Depending on the type of chart you’ve selected, it will also inform you about the pitfalls to avoid.

Another great source of information is Edward Tufte's book The Visual Display of Quantitative Information, widely regarded as the bible of the field. I can't recommend this book enough! And, if you ever get the chance to attend one of his seminars, it's well worth it!

The Good – Measuring for Outcomes

We’ve talked a lot about vanity metrics. Vanity metrics are typically raw numbers taken without context, such as page views and app downloads. They don’t correlate directly to customer value. And they don’t provide guidance for what future product changes you should make.

What do we mean by outcome-oriented metrics? Those are metrics that typically do 3 things:

  1. They link actions to results.
  2. They focus on delivering customer value.
  3. They provide insight into the health of your product or business.

Remember the differences between vanity and outcome-focused metrics. Vanity metrics might make us feel good, but they don’t help us improve or optimize our business.

Hopefully, by now, you know which metrics to avoid.  And you know a little more about the types of metrics that you should focus on.

Where should you start when you’re ready to embrace an outcome-focused approach to measuring success? It’s really quite simple.


Start with your company’s strategy.

When your company has a strong, clear vision, that should serve as your north star and guide you to prioritize the most important metric. Based on that vision, what is the one metric that matters most to the success of your company and that you can rally your team around?

Let’s look at a great example of leading from strategy with Zappos. Their CEO literally wrote the book on modern customer service. Zappos disrupted online retail sales by focusing on its support department as an opportunity to market and generate revenue rather than as a cost center. Its entire strategy revolves around creating loyalty among its customers using effective KPIs that lead to what they call ‘wow’ moments. And they did this because they found that repeat customers spend more than first-time customers and drive referrals.


Here’s a more personal, real-world example from Sonos, where I worked for nearly twelve years as a product management leader. The Sonos mission is to fill every home with music. We would use that as our north star when defining key outcomes. For example, ‘fastest time to music’ was a key outcome by which we would measure the success of a particular feature. The rationale for that was the faster the music starts playing, the better Sonos is doing its job of filling the home with music.

Ask yourself: what key outcome can you focus on and set actionable KPIs against, based on your company's strategy?

With your company strategy in hand, I recommend selecting a useful framework for measuring key outcomes, and the one I like is Pirate Metrics.

Pirate Metrics

Pirate Metrics were coined by Dave McClure back in 2007. Here's a link to his presentation where he first describes this framework. He breaks the customer lifecycle down into 5 components, often abbreviated AARRR:

Acquisition – how well are you getting customers to your site or app?

Activation – are your customers having a great ‘first run’ experience?

Retention – how often are your customers coming back?

Referral – are they telling others about your product?

Revenue – are they paying for your service? Are you able to monetize your customers?

Let’s dig deeper into this and come up with some examples to help you better understand pirate metrics.

To better understand how to apply pirate metrics, pretend we're product managers for an online learning platform that lets you watch videos on how to do everything from learning to knit to learning to code and everything in between. What are some outcomes we might care about that could be measured using the pirate metrics framework?


  1. For starters, let’s look at acquisition. We don’t necessarily care how many people come to the homepage. That’s a vanity metric. What we probably do care about, though, is how many people are signing up for a free trial of the service within a specific time period, say each month.
  2. From there, how can we measure activation and whether or not our customers are having a great first run experience? A good metric, building on the previous one, would be what percentage of the people signing up for a free trial are watching a video from start to finish? If they’re only watching a few seconds of a video and then leaving, you could hypothesize that they’re not having a great first run experience.
  3. To measure retention, we could measure what percentage of those people come back to watch another video within the month. This is one of the most important metrics product managers care about. Your MAU, or monthly active users, might even be your company's top-line metric, as it is for a company like Facebook.
  4. For measuring referrals, we could measure what percentage of people share links to videos with their network. Some companies like to measure NPS, or Net Promoter Score, which is based on asking customers whether they would recommend your product to friends or colleagues. More on this later.
  5. The final, and probably most important, metric, especially if you work on a SaaS product, is how much revenue your customers generate. ARPU, or average revenue per user, is often the top-line metric for a SaaS business. How might we measure this? One possibility is to measure the percentage of people who go from the trial to paying for a monthly subscription, which most companies refer to as their key conversion rate.
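The five examples above can be made concrete with a small funnel sketch. The numbers here are hypothetical, and each rate ties one AARRR stage back to an earlier one.

```python
# Hypothetical monthly numbers for the learning-platform example.
funnel = {
    "visitors":        50_000,  # raw traffic (a vanity metric on its own)
    "trial_signups":    5_000,  # Acquisition
    "finished_video":   3_000,  # Activation: watched a first video to the end
    "returned":         1_800,  # Retention: came back within the month
    "shared_link":        450,  # Referral
    "paid":               600,  # Revenue: converted to a subscription
}

def rate(numerator: int, denominator: int) -> float:
    """Express one funnel stage as a share of an earlier stage."""
    return numerator / denominator

activation_rate = rate(funnel["finished_video"], funnel["trial_signups"])
retention_rate = rate(funnel["returned"], funnel["finished_video"])
conversion_rate = rate(funnel["paid"], funnel["trial_signups"])  # key conversion

print(f"Activation: {activation_rate:.0%}, retention: {retention_rate:.0%}, "
      f"trial-to-paid conversion: {conversion_rate:.0%}")
```

Note that every number is a ratio between two adjacent outcomes, not a raw count; that's what makes each one actionable when it moves.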

Putting outcome-based metrics into practice

Now that you have a framework to help you identify all of the possible types of outcomes you can measure, let’s talk about how to harness this information and put it into practice.

We can boil it down to four key steps:

  1. What is the key outcome based on your company's vision and strategy? What is the most important thing you can improve upon? Perhaps it's an increase in your conversion rate.
  2. Form a hypothesis. The important thing is to start small: look for the most meaningful lever you can pull and focus on that. Don't change five different things on your landing page and then start measuring your conversion rate; you won't be able to tell which of those changes affected it. Change only one thing. Your hypothesis could be something like: “If we reduce the price of our product by 10%, we'll see an increase in our conversion rate of at least 11%.”
  3. Build your experiment. This is where you dissect your hypothesis into key components so that you can collect the right data to validate whether it holds. Before you build anything, make sure you know your current state, or baseline metric; in this case, make sure you can state your current conversion rate. Once your experiment is ready, set up your analytics to measure the KPI against the baseline and the goal you've set.
  4. Measure and analyze. Once you’ve got the new data coming in from your experiment, you should be able to quickly analyze if it was a success or not.
  • If it wasn’t, that means your hypothesis was incorrect. Remember that you should not view these moments as failures or a waste of your precious developer resources. This is a learning moment. Failing fast and learning early is key to allowing you to eventually zero in on what works.
  • If the experiment was mildly successful, I encourage you to tweak your hypothesis based on the new data you have.
  • If it was wildly successful, celebrate! And then look for the next lever you can pull to help your business be even more successful.

What I just described with those 4 steps is a framework for continuous learning. Taken from the Lean Startup methodology, it can be used by anyone.

Lean Startup Methodology by Eric Ries

By creating this virtuous loop of building experiments, measuring your KPIs and learning each step of the way, you can quickly and successfully create value for your customers (and your business).
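As a concrete sketch of the measure-and-analyze step, here's one way to check whether an observed lift in conversion rate is larger than chance would explain. The counts are hypothetical, and the two-proportion z-test used here is one common choice among several.

```python
from math import sqrt, erfc

# Hypothetical counts from a single-change pricing experiment.
baseline_visitors, baseline_paid = 10_000, 1_000   # 10.0% conversion
variant_visitors, variant_paid = 10_000, 1_150     # 11.5% conversion

p1 = baseline_paid / baseline_visitors
p2 = variant_paid / variant_visitors
pooled = (baseline_paid + variant_paid) / (baseline_visitors + variant_visitors)

# Two-proportion z-test: how surprising is this lift under "no real change"?
se = sqrt(pooled * (1 - pooled) * (1 / baseline_visitors + 1 / variant_visitors))
z = (p2 - p1) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"Lift: {p2 - p1:+.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value (here well under 0.05) suggests the lift isn't noise, so the hypothesis survives; a large one would send you back to step 2 with what you learned.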

Key Takeaways

  1. Measure outcomes, not vanity metrics. Tie actions to results.
  2. Make sure your data is correct. Examine a sample of your data before moving on.
  3. Choose the right type of visualization for the data. From Data to Viz is a great resource for this.
  4. Define KPIs based on your company’s strategy and top-line metric.
  5. Experiment with small changes and foster a culture of continuous learning.