Hey designers, while you're consumed with finding the right metric, you should know a few pragmatic realities of putting a metric in place. Here's what I've learned over the years.
There is a difference between measures and metrics
Measures are numbers that give life to metrics. An Error Rate might be 10%, but that rate consists of three measures:
- # of total possible errors
- # of participants
- # of errors for each participant
You have to track measures in order to calculate the metric.
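To make that concrete, here's a minimal sketch (Python, with made-up numbers) of how those three measures roll up into the metric. The participant counts here are hypothetical, not from a real study:

```python
# Hypothetical usability test: deriving an Error Rate metric
# from the three underlying measures.

# Measures you have to track:
possible_errors_per_participant = 10         # of total possible errors
errors_per_participant = [1, 0, 2, 1, 1]     # of errors for each participant
num_participants = len(errors_per_participant)  # of participants

# Metric calculated from the measures:
error_rate = sum(errors_per_participant) / (
    possible_errors_per_participant * num_participants
)

print(f"Error Rate: {error_rate:.0%}")  # -> Error Rate: 10%
```

If you aren't tracking all three measures, you can't recompute (or defend) the 10%.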
There is a difference between business relevance and statistical significance
You want to be confident that the measure is relevant and real, not that you just got lucky. You want your measures to mean something at your company, and you want statistical significance so you can deliver your message with confidence.
Business relevance is all about measures that connect to a business outcome. Many of the articles you read about metrics only talk about business relevance. When it comes to measuring, this is only half the equation.
For statistical significance, you need to think about two things:
- The size of the sample of people
- The variation among the people you sample
Relevant measures that aren't real amount to guessing. Guessing is risky, and while some executives are comfortable with it, most aren't so audacious.
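As a rough sketch of why both things matter, here's a simple normal-approximation confidence interval for an error rate (this is an illustration, not a prescription for your stats stack; the counts are hypothetical):

```python
import math

def error_rate_confidence_interval(errors, opportunities, z=1.96):
    """Approximate 95% confidence interval for an error rate (a proportion).

    The interval gets narrower with more opportunities (sample size)
    and wider with more variation (rates closer to 50%).
    """
    p = errors / opportunities
    margin = z * math.sqrt(p * (1 - p) / opportunities)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# The same 10% error rate, very different confidence:
print(error_rate_confidence_interval(5, 50))    # ~(0.10, 0.02, 0.18) -- wide
print(error_rate_confidence_interval(50, 500))  # ~(0.10, 0.07, 0.13) -- tighter
```

A 10% error rate measured on a handful of people could plausibly be anywhere from 2% to 18%; the same rate on a larger sample is something you can actually act on.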
Choosing a new target metric frames the work
Your target is really the outcome your experiment is trying to achieve or relate to. Think about it as an if/then statement.
"If we reduce login errors by 5%, then we expect to convert more trial subscribers."
Measures and metrics come with delayed gratification
This is perhaps the biggest reality to overcome. Getting a metric in place that you can rely on will take more than a 3-week sprint. You will have to make difficult trade-off decisions to just get a baseline measure in place.
Tracking and monitoring measures takes away from other work
Getting measures in place isn't just technical work; it's also solving people and process problems. You will have to find new ways to get measures in place while your colleagues or stakeholders worry about slowing down or missing shipping dates. These additional factors are what we, as an industry, don't talk about enough, and that's a disservice to us all.
The more we argue about what the right metrics are, the more we delay having actual measures and metrics in place. We can only learn from them once they're there.
TLDR: You can't choose where to go without knowing where you are. Knowing where you are takes time and energy. There are no right metrics. There is only learning from imperfect measures and metrics to figure out which ones are more right. IME, the benefit of doing all this work is that you will have MUCH more confidence in your decisions once you have imperfect measures and metrics in place.
You have to let go of the idea that there are "right metrics" for design. It's preventing you from getting closer to the answer.