When you hear a customer problem, it can be tempting to dive right in and devise a solution. However, it’s important to remember that there is never just one good solution to a problem. A problem can be solved in many different ways, depending on which aspects you choose to prioritize.
Users can also be unpredictable. What we think might solve their pain points may not even begin to address the problems they are facing. Therefore, it’s advisable to test your ideas before you start building a solution. One way to do this is to write and test hypotheses.
A hypothesis is, at its core, an assumption: a statement about what you believe to be true today, which can be proven or disproven using research.
A strong hypothesis is usually driven by existing evidence. Ask yourself: Why do you believe your assumption to be true? Perhaps your hunch was sparked by a passing conversation with a customer, something you read in a support ticket or issue, or even something you spotted in GitLab’s usage data.
There are lots of different structures for hypotheses, but a good approach is to use this simple statement:
We believe [doing this] for [these people] will achieve [this outcome].
The statement consists of three elements.
We believe [doing this] should detail your proposed solution to users’ problems.
for [these people] should identify who you are targeting.
will achieve [this outcome] is where you should document your measure of success. What is your expected result?
For example: We believe storing information about how an incident was resolved, how long the resolution took, and what the outcome was in a way that’s easy for engineers responsible for incident management to access will achieve a 20% faster resolution time for incidents. This is because referring to past incident information helps to inform potential solutions for remediation.
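To make the success criterion in that example concrete, here is a minimal sketch of how the 20% target could be checked as a binary pass/fail performance metric. The resolution-time figures are entirely hypothetical, used only for illustration:

```python
# Hypothetical incident resolution times (minutes), before and after the change.
baseline_minutes = [120, 95, 240, 60, 180]
trial_minutes = [80, 70, 150, 55, 130]

baseline_mean = sum(baseline_minutes) / len(baseline_minutes)
trial_mean = sum(trial_minutes) / len(trial_minutes)

# Relative improvement in mean resolution time.
improvement = (baseline_mean - trial_mean) / baseline_mean

# Binary pass/fail metric: did we hit the 20% target stated in the hypothesis?
hypothesis_validated = improvement >= 0.20
print(f"Improvement: {improvement:.0%}, validated: {hypothesis_validated}")
```

In practice you would also want enough data points for the comparison to be meaningful, but the shape of the check — a clear numeric target turned into a yes/no answer — is what makes the hypothesis testable.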
When writing your hypothesis, focus on simple solutions first and keep the scope small. If you’re struggling to articulate your assumptions about users, it’s probably better to develop a deeper understanding of users first, rather than forming weak hypotheses and running aimless research studies.
A good user research project creates conditions where you can judge whether your hypothesis holds by evaluating both performance and attitudinal data. Performance data is gathered using a simple binary metric (yes/no, pass/fail, and so on). Attitudinal data, such as sentiments and opinions, helps you describe your participants' experience.
A strong hypothesis is easy to test. It shouldn’t take you much time to design a research study to validate or invalidate your hypothesis.
If your hypothesis is invalidated by users, don’t feel disheartened. You’ve stopped precious Engineering time from being spent on building a solution that simply doesn’t solve users’ problems. A good measure of being iterative is throwing something away because user research proved that it wasn’t going to work. You’re not always going to get things right the first time. We learn more about user needs by testing multiple hypotheses and, in turn, we generate new ideas for future rounds of testing.