As a product owner, my goal has always been simple: figure out the most valuable thing the team can build, and help them build it. Whether we’re working on a new product, refining an existing powerhouse, or driving a broader digital transformation, letting data inform direction and prioritization is a cheat code to positive results - not just for the company’s bottom line, but also for team morale. Using data insights up front keeps us from building the wrong thing and lets us move on to the next initiative sooner by reducing the number of iterations needed.
The idea of building with data-driven insights is almost always an easy sell to stakeholders, but executing it effectively is more complicated. Data can be expensive and time-consuming to gather, and understanding its limitations is critical to ensuring the team draws correct conclusions. In this article, we’ll dive into some good rules for which data to use and when, so we can effectively inform our product plan.
Starting from the basics, there are two main types of data to gather when informing the product plan: quantitative and qualitative. They differ in how they’re gathered and what insights can be drawn from them. Quantitative data, characterized by numerical values, provides a bird’s-eye view of behavior: it can pinpoint where the problems and successes sit within your user journey. Quantitative data is everywhere in product, and for good reason: it provides concrete analysis of broad user behavior and is easy to digest, whether you’re communicating with leadership or the development team. From website traffic and product usage statistics to key performance indicators (KPIs), it can be an invaluable and clear measurement. It scales easily too: although the tools used to capture quantitative metrics may need to evolve throughout the product cycle, something like conversion rate can always be measured, whether traffic is 50 daily users or 50 million.
Tools for collecting quantitative data don’t need to be fancy or expensive. For small teams or new products, defining simple KPIs and tracking them in an Excel document can be an excellent place to start. As products grow, more advanced solutions such as GA4 or Power BI can provide insights into feature adoption, bounce rate, or ad interaction. Quantitative tracking can also act as a first alert for identifying problems when KPIs dip, and tools such as GA4 can even send an automated alert when a KPI falls outside a defined range. Although collecting quantitative metrics requires resources to set up, it generally provides continuous insight with relatively low maintenance.
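To make the “simple KPIs” point concrete, here’s a minimal sketch of a conversion-rate KPI computed from a raw event log. The event names and the log format are hypothetical; the point is that a useful KPI can start as a few lines of counting, long before a dedicated analytics tool enters the picture.

```python
from collections import Counter

# Hypothetical event log: (user_id, event_name) pairs, exported from whatever
# the product already records - even a spreadsheet works at this stage.
events = [
    ("u1", "visit"), ("u2", "visit"), ("u3", "visit"),
    ("u1", "signup"), ("u3", "signup"),
]

counts = Counter(name for _, name in events)
conversion_rate = counts["signup"] / counts["visit"]  # the KPI itself
print(f"Conversion rate: {conversion_rate:.1%}")      # 66.7%
```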
For more mature products, the true test of collecting quantitative data lies in A/B testing, where normal variation in metrics is controlled by running multiple live versions of the product with small, thoughtful differences that can support or refute a hypothesis. This technique is best suited to refining existing features but can also be invaluable for verifying that new features are adding to the platform as expected. There are many tools that enable A/B and multivariate testing - Optimizely, Convert.com, or Kameleoon, to name a few. Some are geared toward experienced developers, while others let a marketing team effectively test new ideas with little to no technical support.
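As a rough illustration of what these tools do under the hood (a sketch of the common bucketing technique, not any particular vendor’s implementation), a stable A/B assignment can be as simple as hashing a user ID together with the experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user so they see the same variant every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Keying the hash on the experiment name means each experiment shuffles users
# independently, so one test doesn't bias the population of another.
assert assign_variant("user-42", "cta-copy") == assign_variant("user-42", "cta-copy")
```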
If there is anything I commonly see lacking in companies’ adoption of quantitative data, it’s a thorough understanding of, and respect for, its limitations. The reason quantitative data is so easy to understand is that it’s often just a simplification of the whole story. It can only tell us about the things we’ve set it up to track, and we must keep in mind that we’re not looking at the whole picture. It’s like comparing a line drawing to a full-color picture: sometimes the lines are enough to tell us where everything is and we don’t need the color, but we always need to ask whether we have enough information to form a hypothesis or act on it. Additionally, just because a metric looks like it has improved by some percentage doesn’t mean it actually has; inherent randomness and normal fluctuation have to be ruled out first. That’s where statistical significance comes into play. Statistical significance measures how unlikely it is that the result of an experiment or change arose by chance. The stronger the significance (conventionally, the smaller the p-value), the more you can trust that the result is not due to randomness and that it truly represents a new baseline.
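To make that concrete, here’s a minimal sketch of the two-proportion z-test that sits behind most A/B significance calculators. The conversion counts are invented for illustration; in practice, your testing tool runs this math for you.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # rate if the variants were identical
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))   # 2 * (1 - Phi(|z|))

# Hypothetical experiment: 4.8% vs. 5.4% conversion on 10,000 users each.
p = two_proportion_ztest(480, 10_000, 540, 10_000)
print(f"p-value: {p:.3f}")  # ~0.054: trending positive, but shy of the usual 0.05 bar
```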
So, wait for statistical significance - easy enough, right? The reality is that unless you see a drastic change in metrics and/or have a substantial number of samples, you’ll likely be waiting weeks, maybe months, for data to reach statistical significance. It’s tempting to call a winner early when a metric is trending in a positive direction, but the trend could be due to randomness rather than actual change - i.e., a false positive. Is that always wrong? Not necessarily; it depends on a company’s risk tolerance. As long as the possibility of a false positive is recognized and the risk is deemed acceptable, calling a winner early isn’t necessarily a bad idea.
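How long that wait is comes down to sample size. A back-of-the-envelope power calculation (a standard approximation; the baseline, lift, and traffic numbers here are made up) shows why modest lifts on modest traffic take months to confirm:

```python
import math

def users_per_variant(baseline: float, rel_lift: float) -> int:
    """Approximate sample size per variant for a two-sided 5% alpha, 80% power test."""
    z_alpha, z_beta = 1.96, 0.84              # standard normal quantiles
    delta = baseline * rel_lift               # absolute difference to detect
    p_bar = baseline * (1 + rel_lift / 2)     # average rate across both variants
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# Hypothetical: 5% baseline conversion, hoping to detect a 10% relative lift.
n = users_per_variant(0.05, 0.10)
print(n)              # ~31,200 users per variant
print(2 * n / 1_000)  # ~62 days on a 1,000-visitor/day site split 50/50
```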
So ruling out randomness can take a while. In the meantime, is there anything we can do to make sure we’re getting the full picture? Often, the true answer to why a KPI is changing hides between the quantitative metrics we’ve set up to measure. By putting in the effort to gather qualitative data as well, we can begin to see the whole picture and gain confidence that we really understand our product.
If quantitative data is the “what” of product, qualitative data is invaluable for illuminating the “why”. Without fully understanding the why, products may have to iterate for several cycles until the desired outcome is achieved. Qualitative data can be gathered through methods such as usability testing, surveys, user interviews, or Customer Experience Management (CXM) software such as FullStory or SessionStack. These tools help create a narrative, uncover pain points, and inform the design of an ideal solution. Through them, visibility into the user journey can reveal whether a product is meeting users’ needs and how it could improve.
For example, if a call to action (CTA) isn’t receiving the click-through rate a team was hoping for, is it because the text is difficult to read? Is it because there are other competing CTAs around the page? Or maybe the value of the CTA isn’t resonating with users as intended. If we understand the root cause, we can quickly and effectively design and deploy a solution with an actionable plan to achieve the intended results. If all we know is that the click-through rate is bad, observing sessions or conducting user interviews can be invaluable for uncovering the why.
Because qualitative data opens a broader space for feedback than quantitative data, it’s often much harder to compile and make sense of. It can be expensive to gather and may still lack clarity: it’s not uncommon for answers to something like a user survey to have no overlap, or even to conflict. And even when answers do overlap, qualitative data, like quantitative data, is prone to randomness, so reaching statistical significance can be truly prohibitive.
At the heart of data-driven decision making is the interplay between qualitative and quantitative data. Although each can be useful on its own, each category presents its own challenges. Quantitative data can seem to provide clarity, but only because it offers a simplified view and tracks what we’ve enabled ourselves to track. Qualitative data lets us see the nuance, but it can be so open-ended that it lacks clarity. Utilizing both types together is the best way to create a holistic picture to guide your product plan. Maybe you’ve uncovered an interesting new feature request in your qualitative user interviews, but you’re not sure how valuable the feature is to your broader user base. You can then restructure your interview or survey questions to collect quantitative data around that feature and decide whether to act on it. We might never have found that feature request without the qualitative data, but we might not have felt confident investing the resources to develop it without the quantitative data.
Similarly, qualitative data can clarify quantitative data. We may see a change in a KPI and think we know the cause. However, especially if it’s a negative change, it’s difficult, and often not advisable, to wait until the change reaches statistical significance before acting. Instead, we can take our theories, test them qualitatively, and uncover whether the movement in the metric is real - and even understand the “why” so we can act accordingly. If multiple pieces of data point to the same conclusion, the risk of acting without statistical significance becomes more tolerable.
Okay, so we have data, qualitative and quantitative, collected with as much integrity and intent as possible. It’s important to highlight that the journey doesn’t stop here. Interpreting data involves both art and science, requiring a deep understanding of context and of analytical techniques. It can take even experienced analysts time to get comfortable with what the data is telling them, when it’s okay to extrapolate, and when more data is required. Given the volume of data organizations typically collect, managing and analyzing this information can easily constitute a full-time role, or even several dedicated positions. That alone highlights the commitment and resources needed to effectively translate data into actionable insights. Committing these resources can enable a more effective overall team and - maybe most importantly - a happier team that’s free from individual biases blindly driving priorities.
Building data-driven decision-making into the product lifecycle not only refines the product but also strategically aligns it with market demands. One key area to explore further is the integration of real-time analytics. By harnessing real-time data, product teams can respond swiftly to changes in user behavior or market conditions, ensuring the product remains relevant and competitive. Additionally, incorporating predictive analytics can forecast future trends and user needs, enabling proactive adjustments to the product strategy.
It’s important to use data up front to decide what to build, but it’s perhaps even more important to use data afterward to validate the expected result. One of my favorite statements I’ve heard in my career is “The internet is humbling.” It came from a senior executive reviewing the data from a carefully crafted feature that wasn’t received as expected. It turned out our data didn’t sufficiently capture the story of our users. Although it wasn’t the outcome we expected, it allowed us to build a more insightful data model with more accurate considerations and more representative KPIs. Because of that retrospective insight, future features could be planned with less risk.
Developing a data model to inform the product’s evolution is an iterative process that warrants constant feedback for best results. Companies that commit to this approach can rapidly and efficiently respond to change with pinpoint accuracy.