To those who know me, I'm often in search of a little bit of daylight: off finding some space to be outside, going for a walk, and letting my brain wander. While on a walk yesterday, my pal Lucas called, and we got to chatting.
What started as a casual check-in turned into a 90-minute discussion about some of our past experiences and how they led us down the paths we've followed. One path that I'm often asked about is how I started showing my colleagues the value of design.
In 2009, I was challenged to do something seemingly simple—recreate the scorecards of measures and metrics my colleagues at Apple were using. My colleagues used scorecards to track and monitor the impact of their decisions over time on key business measures. My goal was to track and monitor the impact of the decisions I was making as a designer on those key business measures.
At the time, I worked inside AppleCare, Apple's customer support division. Rather than designing the products you know and (perhaps) love, I designed enterprise-grade applications for AppleCare advisors and the teams that supported them. We were developing capabilities you might find today in applications like Salesforce, ServiceNow, or Zendesk, but surfacing them inside proprietary applications called iLog and iDesk.
The design was very Apple, with a high bar placed on Interaction Design and Information Architecture. The expectations, though, were very Tim Cook: a high bar set on data-informed decisions, vetted business cases, and a desire to positively impact multiple parts of the business at once.
But where to start?
As a designer, I wanted to give my leadership team a quick overview of what was happening with our products' use, adoption, performance, and so on, and, in turn, provide them with something they could skim in two minutes to understand the most significant risks and opportunities moving forward.
The problem was, I hadn't seen anyone do something like this before, so I started the only way I knew how: with research.
As I began exploring potential options to try, it was essential to see what already existed. I was already using scoring methods like the System Usability Scale and Technology Acceptance Model, but I also discovered desirability studies by Christian Rohrer, and techniques that came from the consulting world like Net Promoter Score.
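For anyone unfamiliar with a couple of these scoring methods, here's a rough sketch of how they're tallied. The responses below are made up, but the formulas are the standard published ones: SUS alternates item inversion and scales the sum to a 0-100 score, and NPS subtracts the percentage of detractors from the percentage of promoters.

```python
# Illustrative only: made-up responses, standard scoring formulas.

def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5.
    Odd items contribute (rating - 1), even items (5 - rating);
    the sum is multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # indexes 0, 2, ... are items 1, 3, ...
        for i, r in enumerate(responses)
    )
    return total * 2.5

def nps(ratings):
    """Net Promoter Score: 'how likely are you to recommend...' rated 0-10.
    Percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
print(nps([10, 9, 8, 7, 3, 10, 9, 6]))            # 25.0
```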
While these techniques helped me and my analysis, they were less effective when I presented them to my colleagues and stakeholders. I found myself explaining what the methods were over and over rather than focusing on the risks and opportunities I saw. That did not help me meet my goal of providing a skimmable two-minute overview. I had to keep going.
It was at this point, after trying a couple of approaches to explain the value of my work, that I adjusted my focus.
To help my colleagues make more informed business decisions, I didn't need to devise a new metric or scale. I needed to close a communication gap.
When I made this adjustment, I pivoted my research approach. Rather than searching for measurement tools, I searched for alternate communication methods. In particular, I wanted to address two key concerns: sharing my rationale for my design decisions and communicating in language my colleagues already knew.
This search led me to new approaches that seemed to better fit my use case, and serendipitously, I found two techniques that fit the bill: the UX Health Check Poster that Lívia Labate and Austin Govella presented at the 2009 IA Summit, and The Balanced Scorecard, popularized by Robert S. Kaplan and David P. Norton.
I loved Lívia and Austin's approach because they defined the rubric of what good design looked like. As designers, that’s our responsibility. I loved The Balanced Scorecard approach because it was a method and tool already used by my colleagues, and I didn't need to teach them something new.
After learning more about each, I sought to combine the two in a new way that might work for me. I borrowed the categories my colleagues used in their scorecard, Business Performance, Technical Performance, Adoption, and Communication, and applied my rubrics of design to them, as sketched after the examples below.
Examples of these were:
- Business Performance = Users can perform key tasks
- Technical Performance = Users can avoid and mitigate errors
- Adoption = Users understand how new features integrate with existing tools or processes
- Communication = Users understand when a new feature is available to them and why
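To make the remix concrete, here's a hypothetical sketch of what that mapping might look like as a simple data structure. The four category names come straight from my colleagues' scorecard; the rubric ratings and the roll-up scheme are invented for illustration.

```python
# A hypothetical sketch of the remixed scorecard: business categories
# colleagues already reported on, each mapped to plain-language design rubrics.
# Category names are from the post; ratings and the roll-up are invented.

from dataclasses import dataclass

@dataclass
class Rubric:
    statement: str  # what good design looks like, in plain language
    rating: int     # e.g. 1 (failing) to 5 (excellent), from a design review

scorecard = {
    "Business Performance": [Rubric("Users can perform key tasks", 4)],
    "Technical Performance": [Rubric("Users can avoid and mitigate errors", 3)],
    "Adoption": [Rubric("Users understand how new features integrate "
                        "with existing tools or processes", 2)],
    "Communication": [Rubric("Users understand when a new feature is "
                             "available to them and why", 5)],
}

# Roll each category up to a single number leadership can skim.
for category, rubrics in scorecard.items():
    avg = sum(r.rating for r in rubrics) / len(rubrics)
    print(f"{category}: {avg:.1f} / 5")
```

The point isn't the math; it's that every number leadership skims is backed by a plain-language statement of what good design looks like, in categories they already use.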
This slight change was a tremendous success.
As it turned out, my colleagues, at least at Apple, were desperate to know what my rationale and argument were as a designer, and how the decisions I made affected the overall outcomes they were reporting on. I had been involved with big product decisions all along, but for the first time I was able to articulate how my decisions correlated with the outcomes they were reporting.
They really appreciated that.
Remixing is my favorite skill
By remixing what others had already tried and applied, I discovered vital areas to improve the scorecard for my colleagues at Apple. By remixing what worked for me at Apple, I have since adapted this scorecard approach to new contexts.
Before I knew it, I was developing different scorecards at other companies. Over the last 12 years, I've found that this approach has unlocked answers to so many of the questions my colleagues have about OKRs, metrics, and showing the value of design.
What's really cool is that, after teaching others how to do this for the last 18 months, I've watched so many other design leaders relieve some of the anxiety and burden they feel in showing their value.
You don’t always need to make or build. Remixing can be one of your greatest assets.