The goal of this page is to create, share, and iterate on the Jobs to be Done (JTBD) and their corresponding job statements for the Testing categories within the Verify stage. We use the JTBD framework to better understand our buyers' and users' needs.

Each Testing category below has a main JTBD, followed by a table of the job statements that support it.
When I run CI for a web app or website, I want to automatically test for accessibility, so I can be confident everyone can get value from my changes.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|
| When I make changes to my website, I want to automatically see how those changes impacted the accessibility of the site, so that I can be confident everyone can get value from my changes. | | Researched | Issue |
| When I review my website source, I want to see a list of accessibility issues, so that I can proactively fix those issues in a future change. | | | Issue |
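One way teams commonly meet this need is with an automated accessibility scan in the pipeline. The sketch below is illustrative only, not a committed design: it uses GitLab's bundled `Verify/Accessibility.gitlab-ci.yml` template (which runs Pa11y against the listed pages), and the URLs are placeholders for wherever the change is deployed.

```yaml
# Minimal sketch: automated accessibility scanning in CI.
# The URLs are placeholders for a deployed instance of the site under test.
include:
  - template: "Verify/Accessibility.gitlab-ci.yml"

a11y:
  variables:
    # Space-separated list of pages to scan for accessibility issues.
    a11y_urls: "https://example.com https://example.com/pricing"
```

The template's `a11y` job publishes an accessibility report artifact, which is what allows findings to be surfaced per change rather than discovered after release.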
When I build my project, I want to review test result data, so that I can stop and investigate test failures before bugs get into production.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|
| When new or existing tests are failing in a build, I want to be able to identify them and see where they are in the code as easily as possible, so that I can fix them quickly and get back to pushing features into production. | | Researched | Issue |
| When I open a merge request, I want to see if any of the code changes are not covered by tests, so I can figure out what tests I need to add to maintain or improve the test coverage of the project. | | | Issue |
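A hedged sketch of how these statements can map onto pipeline configuration, assuming a Python project tested with pytest (the image, commands, and file paths are illustrative): a JUnit report lists failing tests in the merge request, and a Cobertura-format coverage report annotates untested changed lines in the diff.

```yaml
# Sketch: surface failing tests and untested changed lines in the merge request.
# Assumes a Python project using pytest; adapt commands and paths to your stack.
test:
  stage: test
  image: python:3.12
  script:
    - pip install pytest pytest-cov
    - pytest --junitxml=report.xml --cov=app --cov-report=xml:coverage.xml
  artifacts:
    when: always                 # keep reports even when tests fail
    reports:
      junit: report.xml          # failed tests appear in the MR test summary
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml       # untested changed lines are annotated in the diff
```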
When I am reviewing the software projects my team works on, I want to see the trend of test coverage over time, so I can tell whether coverage is improving or declining.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|
| When I am reviewing the software projects my team works on, I want to see the trend of test coverage over time, so I can see how our improvement efforts are going, or identify an issue that could cause bugs before it is released. | | Researched | Issue |
| When I am reviewing the software projects my team works on, I want to see a list of possible flaky tests, so that I know what to focus on to reduce time wasted by the team. | | | Issue |
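For the trend portion of this need, a minimal sketch of one common approach: each pipeline records a single coverage percentage via the `coverage:` keyword, which the project can then chart over time. The regex assumes pytest-cov's `TOTAL ... 87%` summary line and would need to match whatever your test runner prints; flaky-test detection is a separate concern not shown here.

```yaml
# Sketch: record one coverage figure per pipeline so coverage can be
# charted over time. The regex assumes pytest-cov's "TOTAL ... 87%" line.
test:
  stage: test
  script:
    - pytest --cov=app
  coverage: '/TOTAL.*\s+(\d+%)$/'
```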
When a code change is made, I want to know if the change introduces latency for my end users, so that I can meet response-time quality standards and maintain usability.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|
| When evaluating the performance of a new feature before it goes live, I want to validate that it performs as it would in a real usage context, so I can trust that I am not introducing new latency. | | | Issue |
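As a rough illustration of how latency could be checked per change before release, the sketch below uses GitLab's bundled `Verify/Browser-Performance.gitlab-ci.yml` template (sitespeed.io under the hood) against a deployed copy of the change; the URL is a placeholder for a review deployment created earlier in the pipeline.

```yaml
# Sketch: measure page performance of a deployed copy of the change.
# The URL is a placeholder for a review deployment of this branch.
include:
  - template: "Verify/Browser-Performance.gitlab-ci.yml"

browser_performance:
  variables:
    URL: "https://review-example.example.com"
```

Running the same measurement on the target branch makes it possible to compare the two and spot a regression before the change goes live.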
When a user-facing product change is being made, I want to gather usability feedback before the changes are live, so I can be confident that the feature works as expected.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|
| When I review a user interface change before it goes live, I want to test the various flows where the change appears, so I can evaluate how it performs in different circumstances. | | | Issue |
| When reviewing a user interface change before the software is released, I want to provide feedback on which visual elements can be improved, so that my team and I can discuss it in the context of the built changes. | | | Issue |
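A minimal sketch of the kind of setup that supports this, assuming the project can stand up short-lived environments (the deploy script and domain are placeholders): each merge request is deployed to a temporary review environment so reviewers can click through the built change and leave feedback before it merges.

```yaml
# Sketch: deploy each merge request to a short-lived review environment.
# deploy-review.sh and the review domain are placeholders for your own tooling.
deploy_review:
  stage: deploy
  script:
    - ./deploy-review.sh "$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```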
When reviewing a user interface change before the software is released, I want to reduce unexpected negative impacts to the end user, so that we can retain usability while releasing changes.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|
When new or existing jobs are failing, I want to be able to easily trace the failures back to the code, so that I can fix them quickly and get back to pushing features into production.
| Job statements | Maturity | Confidence | Source |
|---|---|---|---|