The Plan UX team supports Product Planning, Project Management, and Optimize. Product Planning and Project Management are focused on the work items architecture effort. This page focuses mainly on how we support that effort, since it requires alignment and cross-group collaboration.
When designing for objects that use the work items architecture, we follow this process to ensure that we are providing value-rich experiences that meet users' needs. The work items architecture enables code efficiency and consistency, and the UX team supports the effort by identifying user needs and the places where those needs converge into similar workflows.
The first objects built using the work items architecture support the Parker, Delaney and Sasha personas in tasks related to planning and tracking work. Additional objects will be added in the future, supporting a variety of user personas.
Read more about work items
Work items refers to objects that use the work items architecture. You can find more terms related to the architecture defined here: work items terminology.
When we talk about the user experience, we avoid using the term 'work items' for user-facing concepts, because it's not specific to the experience and introduces confusion. Instead, we use descriptors that are specific to the part of the product we're talking about and that support a similar JTBD. Here are examples of how we categorize these:
This enables us to differentiate these by persona and workflow. While they may share a common architecture on the backend and similar layout on the frontend, in the UI they may:
When designing with the work items architecture, Product Designers should understand roughly how the architecture works and what implications it has for the user experience.
If the quad discovers that the desired user experience would require a greater contribution to the work item architecture than initially thought, they would discuss trade-offs as a team in order to decide whether to proceed or leave the object separate.
The quad that owns the code for the object (incident, epic, etc.) decides if something should use the work item architecture based on trade-offs around code reuse and user experience. This should be a cross-functional decision, and the group Product Designer should advise their team on how well the user's ideal workflow could or could not be supported by the work items architecture. This allows the team to evaluate how much of the existing frontend pieces of the architecture could be reused, and what would need to be added or customized to support the desired experience.

1. As part of the decision-making process, Product Designers should conduct problem validation user research (or leverage existing research) to understand the desired user experience, including user goals, tasks, content/data field needs, and whether this work item type has relationships and the nature of those relationships.
1. During this phase, the Product Designer and Product Manager should ensure that success metrics are defined per our work item research process (link TBD).
1. High-level wireframes should be produced to ensure everyone has a shared understanding of what is wanted and to establish a medium-term vision for the work.
After the quad decides the work item architecture is suitable, the Product Designer will design the experience in detail. As part of the detailed design, Product Designers, in collaboration with the quad, will:
1. Design how existing widgets will be used, identify any new widgets needed, and determine whether existing widgets could be abstracted to fit a new use case (see the illustrative sketch after this list). For example: the Timeline widget for incidents was designed in isolation, specific to the incident use case. It could be reworked slightly to support more use cases, such as objective or key result check-ins.
2. Define how users will access this work item, and design how it will appear in existing views, such as lists, as well as any new views needed for it.
- Ensure new components and patterns are contributed back to Pajamas.
3. Solution validation should be conducted as needed to ensure the workflow and usability meet users' needs.
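To make widget reuse more concrete, here is a minimal, purely illustrative sketch. The names, types, and shapes below are hypothetical and are not GitLab's actual frontend code; the sketch only shows the general idea that different work item types can be composed from a shared set of widgets:

```typescript
// Hypothetical widget identifiers; real widget names and APIs may differ.
type WidgetType =
  | 'assignees'
  | 'labels'
  | 'description'
  | 'timeline'; // e.g. the Timeline widget, originally designed for incidents

// A work item type is composed from a set of shared widgets.
interface WorkItemTypeDefinition {
  name: string;
  widgets: WidgetType[];
}

// Both incidents and key results can reuse the same Timeline widget,
// rather than each object implementing its own timeline from scratch.
const incident: WorkItemTypeDefinition = {
  name: 'Incident',
  widgets: ['assignees', 'labels', 'description', 'timeline'],
};

const keyResult: WorkItemTypeDefinition = {
  name: 'Key Result',
  widgets: ['assignees', 'description', 'timeline'],
};

// A renderer only needs to implement each widget once.
function renderWorkItem(definition: WorkItemTypeDefinition): void {
  definition.widgets.forEach((widget) => {
    console.log(`Rendering ${widget} widget for ${definition.name}`);
  });
}

renderWorkItem(incident);
renderWorkItem(keyResult);
```

In this framing, abstracting a widget (like the Timeline example above) is mostly a matter of generalizing its content and labels so it can be included by more work item types, rather than rebuilding it per object.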
This section describes the research program that supports the Product Planning and Project Management groups. The research plan may be iterated on to support any team working on user experiences that support the Work Items initiative.
With such a large scope of work that touches upon several personas, the efforts associated with building out the experiences related to Product Planning and Project Management can be overwhelming. This research plan is meant to guide the team: starting with problem validation where research needs are defined, and ending in solution validation where usability testing is conducted to give us confidence in our designs.
The following research plan is designed to fit within the way the Plan teams operate, with flexibility and efficiencies called out. The research plan is based on our standard software development process, with some adjustments to accommodate the scope of Product Planning and Project Management.
With whatever we’re building, we should be able to say:
Naturally, when we start designing a solution, we start by understanding our users, the relevant personas, the problems they face, their unmet needs, and so on, all in the context of Plan.
The first three steps are to be completed across the relevant personas. While it may be too arduous to conduct detailed research for each persona on a given topic, it’s recommended to include relevant personas in each step to look for differences and similarities. If there are more differences than similarities, that’s a clue that it’s probably time to dig deeper into those particular personas to learn more.
Step 1: Define research needs: This is perhaps the most important step; it’s where everyone in the quad gains a common understanding of the work to do and where research questions start. Start by working with stakeholders to:
Output: a set of problem validation research questions to field
Step 2: Review existing research: It’s always good practice to first check whether existing insights address your research questions. This can be done by searching within Dovetail or conducting a more exhaustive internet search. To help stay focused during this step, it’s recommended to first document the following:
To stay efficient during this step, it’s best to focus your attention on the research questions you are less confident in and research questions that contain a large knowledge gap. Note that it’s not uncommon for a search to yield results that aren’t applicable to your research questions. However, just knowing that is important - and can justify the next step.
Step 3: Conduct research: Now it’s time to start conducting problem validation research to address the knowledge gaps identified in earlier steps. Along the way, you’ll be:
EFFICIENCY BONUS: At this point, there’s an opportunity to increase efficiency with participant recruitment by creating a common screener, where a mini-database of qualified participants is created to use throughout the course of the work.
Step 4: Analyze the data: After the research is conducted, it’s time to take a step back and analyze the data you collected. You should be able to learn:
Output: at the end of this step, we have a known user problem. We understand what they need, why they need it, how important it is, etc. These aspects are critical in understanding how those fit into the goal you’re working towards. If something doesn’t feel right or isn’t aligned, this is the time to discuss with stakeholders to reassess.
Step 5: Establish measures and a baseline: By Step 4, we learned about areas of the experience that are important to our users. Now is the time to establish measures on how our users feel about those experiences. In doing so, we’re also creating a baseline of the experience as it is today. The goal is to provide us with a measured indication to ultimately understand if our proposed designs are resulting in a better user experience than the baseline experience. This can be accomplished by:
Output: a set of reusable measures and a benchmark.
Note: This isn’t currently part of our standard research process. We would like to introduce a method to baseline the experience and measure changes over time to confirm whether the experience is improving. We feel this is important for Product Planning and Project Management, because there are so many opportunities to improve and we want to focus on impact to aid decision making.
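As a purely illustrative sketch of what comparing the Step 5 baseline with a later re-measurement might look like, assuming a simple per-task rating scale (the metric, scale, and scores below are placeholders, not an agreed-upon instrument for Plan research):

```typescript
// Illustrative only: the metric, scale, and scores are placeholders,
// not an agreed-upon instrument for Plan research.
interface StudyResult {
  participantScores: number[]; // e.g. post-task ease ratings on a 1-7 scale
}

function meanScore(result: StudyResult): number {
  const total = result.participantScores.reduce((sum, score) => sum + score, 0);
  return total / result.participantScores.length;
}

// Baseline captured in Step 5; the same measure is repeated after a redesign (Step 8).
const baseline: StudyResult = { participantScores: [4, 5, 3, 4, 5] };
const redesign: StudyResult = { participantScores: [6, 5, 6, 7, 5] };

const delta = meanScore(redesign) - meanScore(baseline);
console.log(`Change vs. baseline: ${delta.toFixed(2)} points`);
```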
Step 6: Design the solution(s): Now that problem validation is complete and you have a baseline, it’s time to design the solution(s). It’s important to:
Output: Mockups or prototypes
---
ARE WE CONFIDENT? We can pause at this point and ask ourselves if we’re confident in the design solution(s). Sometimes a design solution is straightforward enough that we’re confident moving ahead without solution validation. However, there are times when we’re unsure how the design solution will perform, resulting in a low level of confidence. This is when we decide whether we need to conduct solution validation research.
Step 7: Prepare for testing: This is where we identify the research questions we need to answer for solution validation. During this step, the best 1-3 designs are selected for testing. Often, prototypes are built for this kind of testing.
Output: assets to run participants through solution validation.
Step 8: Conduct research: It’s time to run participants through the design solutions using our standard solution validation research approach with the relevant personas. However, there’s one exception: you’ll be using the measures you established in Step 5. Within this step, you’ll be:
Output: a design that performs better than the baseline.
EFFICIENCY BONUS: As in Step 3, the common screener and its mini-database of qualified participants can be reused here to speed up recruitment.
We may also be able to leverage some of our previous studies to more rapidly build tasks and scenarios for our participants to complete. For example, we could use the scenarios from the benchmarking study to evaluate the experience of editing a task from a drawer.
---
DONE! At this point, we have a valid solution. It solves the problem, it’s easy to use and understand, and it’s a better solution than what users currently have.
When the design is released, even if solution validation wasn’t done, it’s still important to measure against the baseline scores. With extended real-life use in their own environments, users may score differently than they did during the study.