As more teams take part in rapid prototyping to innovate within the Artificial Intelligence (AI) space, it’s natural to pause and ask yourself: What problem does this aim to solve?
Exciting breakthrough technologies often lead to product innovations. However, this can result in teams presenting a technical solution in search of a user problem, instead of the other way around.
We need to keep users’ needs (including unmet needs) in mind while innovating. If we don’t, we risk building powerful AI solutions that don’t address identified user problems.
The guidelines on this page aim to provide direction on how to include the user’s perspective throughout the design and development of AI solutions.
If you have an assigned UX Researcher in your stage: Connect with them, following the research prioritization process, when you need UX Research support. Your AI-specific research topic will be prioritized against the other research projects already identified within the stage.
If you DON’T have an assigned UX Researcher in your stage: Nick Hertz is managing research requests for stage groups without an assigned UX Researcher. The research prioritization process still applies, and you can add your topic to the AI research-specific prioritization calculator after you have opened a research issue.
AI solutions themselves won’t reveal the user problem they are meant to solve. To identify and understand user needs, and to determine whether an AI solution addresses a real user problem, you can take several approaches:
Review existing research
Use case definition (recommended option for medium confidence)
Extended solution validation
Generative research (recommended option for low confidence)
Did you know that you can validate your future AI-powered feature in parallel with the engineering team building it by using Wizard of Oz prototyping? Validating before the AI solution is available is a great way to capture users’ expectations and requirements early on, and these insights can inform engineering efforts in training the AI.
Here are a few things to keep in mind when preparing the prototype:
It may be tempting, but don’t ask users if they would use this AI feature. People are poor predictors of their future behavior, so their answers won’t be accurate or useful. To get closer to knowing whether people may use a solution, it’s best to understand:
Once an AI-powered solution is available for validation, make sure not only to collect feedback on its usability, but also:
We are piloting a set of AI metrics and recommend including them in your solution validation.
To get robust feedback during solution validation, it’s recommended to collect at least three data points. Because AI output varies, it’s not sufficient to rely on the first output only. You can do this by having participants complete three similar tasks, so you can see how they react to the AI’s responses in three different scenarios.
Tip: Avoid asking the tempting “Would you use this?” question.
If you are maturing your AI feature towards Generally Available, take a look at the UX maturity requirements (internal link) for further guidance on metrics and success criteria.
AI will make mistakes due to its probabilistic nature. It’s important to understand how AI mistakes may affect users. Will certain mistakes turn users away from the feature, or from GitLab? Here's what you can do:
AI evolves as users engage with it over time. As a result, users’ mental models of how it works may also change (it’s a continuous loop). To ensure we continue to offer AI solutions of value, it’s important to understand how mental models change over time and to evaluate the performance of AI solutions as use cases and users increase.
We are piloting a set of AI metrics that allow you to evaluate and track users’ experience with AI-powered features over time.
We developed, and are currently piloting, a set of metrics to evaluate how well AI-powered features meet user needs. These metrics can be used during Solution Validation and to track a user’s experience with an AI-powered feature over time.
The metrics focus on the following eight constructs, which we identified in a literature review, and are captured in 11 survey questions.
A survey with these metrics is available for you to send to participants who are working with AI features. To use this survey, ask Anne Lasch for access to the Qualtrics project.