
Operational Metrics: The Swiss Army Knife of Product Development

Updated: Oct 17, 2023



In the competitive world of product development, leveraging operational metrics effectively can be the key to success. Be it success metrics, counter metrics or stickiness metrics, understanding and utilizing these critical indicators can help steer your product towards its desired goals. But, are you interpreting these metrics correctly? Let's dive in to discover what your metrics are really saying.


Understanding the Power of Metrics

Metrics, in essence, are the compass guiding your product direction. They enable you to quantify success and track progress towards defined objectives. They are often categorized into two primary types: success metrics and counter metrics.


Success Metrics: Setting the Course

Success metrics, also known as goal metrics, are quantifiable measures of whether a product is achieving its goals. They drive the prioritization of features and enhancements in a product. To focus their efforts, product teams often select a single "North Star metric".


Take, for instance, a health and fitness application. The app's primary goal might be to enhance the physical fitness of its users. To measure this, the team could decide on a North Star metric such as "time spent exercising with the app's fitness content". All features would then be designed to boost this key metric.

However, focusing solely on success metrics can sometimes lead to unforeseen impacts on the product.


Counter Metrics: The Checks and Balances

This is where counter metrics, also known as guardrail or health metrics, come into play. They measure the aspects of a product that may be impacted when optimizing for success metrics. These metrics ensure that in the pursuit of success, the overall health of the product isn't compromised.


For every success metric, there should be one or more corresponding counter metrics. These serve as a sanity check, ensuring a holistic perspective in decision-making.

For example, in our fitness app scenario, "time spent viewing informational content" could serve as a counter metric to "time spent exercising". This would ensure overall app engagement isn't negatively affected in the pursuit of increased workout time.
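
To make the relationship concrete, here's a minimal sketch of a guardrail check a team might run before shipping a change. The function name, metric deltas, and the 2% tolerance are all illustrative assumptions, not a prescribed implementation:

```python
# Hypothetical guardrail check: ship only if the success metric improves
# and the counter metric hasn't dropped beyond an agreed tolerance.

def passes_guardrail(success_lift: float, counter_delta: float,
                     max_counter_drop: float = -0.02) -> bool:
    """success_lift and counter_delta are relative changes vs. a baseline,
    e.g. +0.05 means a 5% increase; max_counter_drop is the largest
    acceptable decline in the counter metric (here, 2%)."""
    return success_lift > 0 and counter_delta >= max_counter_drop

# Workout time up 8%, informational content time down 1%: ship.
print(passes_guardrail(success_lift=0.08, counter_delta=-0.01))  # True
# Workout time up 8%, informational content time down 6%: hold and investigate.
print(passes_guardrail(success_lift=0.08, counter_delta=-0.06))  # False
```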


Deciphering the Stickiness Metric

One popular and often misunderstood engagement metric is the stickiness metric. It measures the frequency of user engagement with your product. A common way to calculate stickiness is through the DAU/MAU (Daily Active Users / Monthly Active Users) ratio.
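
As a concrete sketch, stickiness can be computed from a simple log of (day, user) activity: average the daily active-user counts over the period and divide by the number of distinct users seen in that period. The event data below is invented purely for illustration:

```python
from collections import defaultdict
from datetime import date

# Toy activity log: (day, user_id) pairs over part of a month (illustrative data only).
events = [
    (date(2023, 10, 1), "u1"), (date(2023, 10, 1), "u2"),
    (date(2023, 10, 2), "u1"),
    (date(2023, 10, 3), "u1"), (date(2023, 10, 3), "u3"),
]

def stickiness(events, days_in_period=30):
    """DAU/MAU: mean daily active users over the period divided by
    the distinct users active at any point in the period."""
    daily_users = defaultdict(set)
    period_users = set()
    for day, user in events:
        daily_users[day].add(user)
        period_users.add(user)
    avg_dau = sum(len(users) for users in daily_users.values()) / days_in_period
    return avg_dau / len(period_users) if period_users else 0.0

print(f"Stickiness: {stickiness(events):.1%}")  # roughly 5.6% with this toy data
```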

However, interpreting stickiness can be tricky. The same stickiness result can be achieved with entirely different data sets, leading to significantly diverse implications for your product.


The Many Faces of Stickiness

Let's examine various scenarios to see how different data sets can yield the same stickiness results.


Scenario A: High User Engagement, High Stickiness

Imagine a scenario where you have 1,000 unique users over a week, all of whom engage with your app every single day. Measured over that week, this yields a stickiness ratio of 100%, a perfect score.

However, this high stickiness ratio is not necessarily indicative of a successful product. If the total number of users is not significant, then the high stickiness ratio may not hold much value. In other words, the behavior of a small user base is not predictive of the larger potential user base.


Scenario B: Low User Engagement, Low Stickiness

Conversely, consider a case where you have 1,000 unique users for a week, but only a fraction of them engage with your app each day. This leads to a lower stickiness ratio. While it might initially seem like a negative indicator, it could signify diverse user preferences and the need for a broader product offering.


Scenario C: Skewed Usage, Misleading Stickiness

In another scenario, all users engage only on specific days. For example, every user is active on the first six days of a month and then goes completely silent for the rest of it. Measured over that burst of activity, the stickiness ratio looks high, but it is misleading: the spike may reflect a launch, a campaign, or some other external event rather than the product's inherent stickiness.
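
To see the contrast in numbers, here's a small sketch that runs each scenario through the same DAU-over-window calculation; the helper and the figures are assumptions for illustration, not real product data:

```python
def stickiness(daily_active_counts, unique_users, days_in_period):
    """Average daily actives over the window divided by distinct users in the window."""
    avg_dau = sum(daily_active_counts) / days_in_period
    return avg_dau / unique_users

# Scenario A: 1,000 users, all active every day of the week.
print(f"A: {stickiness([1000] * 7, unique_users=1000, days_in_period=7):.0%}")   # 100%

# Scenario B: 1,000 users over the week, but only ~200 distinct users active per day.
print(f"B: {stickiness([200] * 7, unique_users=1000, days_in_period=7):.0%}")    # 20%

# Scenario C: 1,000 users all active on the first six days of a month, then silence.
# The window you measure over, not the product, drives the number.
print(f"C, full month: {stickiness([1000] * 6, unique_users=1000, days_in_period=30):.0%}")  # 20%
print(f"C, burst only: {stickiness([1000] * 6, unique_users=1000, days_in_period=6):.0%}")   # 100%
```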


Dangers of Averages in Stickiness Metrics

Stickiness metrics often rely on averages, which can lead to misleading conclusions. For example, consider two user cohorts: Cohort A, with very high stickiness, and Cohort B, with much lower stickiness. If you simply average the two ratios, the result sits somewhere in between and reflects neither cohort's behavior, especially when the cohorts differ in size.
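
A quick numeric sketch makes the problem visible. The cohort sizes below are made up; the point is the gap between a naive average of per-cohort ratios and the stickiness of the combined user base:

```python
# Two hypothetical cohorts: a small, very sticky one and a large, less sticky one.
cohorts = [
    {"name": "A", "avg_dau": 90,  "mau": 100},    # 90% stickiness
    {"name": "B", "avg_dau": 900, "mau": 9000},   # 10% stickiness
]

# Naive average of the per-cohort ratios.
naive = sum(c["avg_dau"] / c["mau"] for c in cohorts) / len(cohorts)

# Stickiness of the pooled population, which weights cohorts by their size.
pooled = sum(c["avg_dau"] for c in cohorts) / sum(c["mau"] for c in cohorts)

print(f"Naive average: {naive:.0%}")   # 50%
print(f"Pooled:        {pooled:.0%}")  # about 11%
```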


Therefore, it's crucial to understand the limitations of averages and aim to dive deep into your raw engagement data to get the clearest picture of stickiness.


Conclusion: A Multifaceted Approach to Metrics

In the end, it's evident that metrics are a powerful tool in product development. However, a one-size-fits-all approach can be misleading. Understanding the nuances of each metric, their interplay and their implications is crucial in making informed decisions.

Remember, success metrics, counter metrics and stickiness metrics are just tools in your toolbox. Use them wisely, and they can guide your product to success.
