The goal of this page is to document, share, and iterate on the Jobs to be Done (JTBD) and their corresponding job statements for the Pipeline Execution group. Using the JTBD framework, we intend to arrive at the more specific problems to be solved in relation to Continuous Integration (CI) workflows.
Utilize JTBD and job statements to:
When making a code change, I want to integrate it into the target branch quickly and safely, so I can save time and focus on developing.
| Job statement | Status | Issue |
| --- | --- | --- |
| When integrating code changes, I want to automatically run relevant checks on them, so I can avoid worrying about unexpected conflicts while merging. | Researched | Issue |
| When running automated checks on code changes, I want to see the specific reason a check failed, so I can resolve it and move on to another task. | Researched | Issue |
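As a concrete sketch of this job in GitLab CI, a minimal `.gitlab-ci.yml` could run checks automatically on every merge request. The job names and `npm` commands below are placeholder assumptions, not prescriptions:

```yaml
# Minimal sketch: run checks automatically whenever a merge request pipeline runs.
# Job names and script commands are hypothetical placeholders.
lint:
  stage: test
  script:
    - npm run lint        # placeholder lint command
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

unit-tests:
  stage: test
  script:
    - npm test            # placeholder test command
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```

With `rules` scoped to `merge_request_event`, the checks run against the changes before they reach the target branch.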
When integrating changes to a target branch, I want to be alerted about unforeseen issues, so I can avoid upsetting our users with downtime.
| Job statement | Status | Issue |
| --- | --- | --- |
| When integrating changes to a target branch, I want the deployed changes to be automatically monitored, so I can focus on developing. | Researched | Issue |
When working in a large team, I want my changes to be merged without hassle or delay, so my performance is unaffected.
When running, reviewing, and interacting with automated checks, I want the platform to respond without delay or failure, so I can deliver on my tasks in a timely manner.
| Job statement | Status | Issue |
| --- | --- | --- |
| When making changes to software, I want to quickly see results and troubleshoot build failures, so I can get the change ready to merge and get back to working on new changes. | To be Researched | Issue |
When a user-facing product change is being made, I want to gather usability feedback before the changes are live, so I can be confident that the feature works as expected.
| Job statement | Status | Issue |
| --- | --- | --- |
| When I review a user interface change before it goes live, I want to test the various flows where the change appears, so I can evaluate how it performs in different circumstances. | | Issue |
| When reviewing a user interface change before a software release, I want to give feedback on which visual elements can be improved, so that my team and I can discuss them in the context of the built changes. | | Issue |
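These review jobs are often supported with a review app per merge request, deployed via a dynamic `environment`; a sketch, assuming a placeholder deploy script and URL pattern:

```yaml
# Sketch: deploy each merge request to its own review environment
# so UI changes can be exercised and discussed before release.
deploy-review:
  stage: deploy
  script:
    - ./deploy-review.sh   # placeholder deploy script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com   # placeholder URL pattern
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```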
When reviewing a user interface change before a software is released, I want to reduce unexpected negative impacts to the end user, so we can retain usability while releasing changes.
When I run CI for a web app or website, I want to automatically test for accessibility, so I can be confident everyone can get value from my changes.
| Job statement | Status | Issue |
| --- | --- | --- |
| When I make changes to my website, I want to automatically see how those changes impacted the site's accessibility, so that I can be confident everyone can get value from my changes. | Researched | Issue |
| When I review my website source, I want to see a list of accessibility issues, so that I can proactively fix them in a future change. | | Issue |
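GitLab ships a CI template for accessibility scanning (it runs Pa11y against a list of URLs); a sketch of including it, with placeholder URLs:

```yaml
# Sketch: include GitLab's accessibility testing template and point it
# at the pages to scan. The URLs are placeholders.
include:
  - template: Verify/Accessibility.gitlab-ci.yml

variables:
  a11y_urls: "https://example.com https://example.com/pricing"
```

The template's `a11y` job produces an accessibility report artifact that merge requests can surface.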
When I build my project, I want to review test result data, so that I can stop and review test failures before bugs get into production.
| Job statement | Status | Issue |
| --- | --- | --- |
| When new or existing tests fail in a build, I want to identify them and locate them in the code as easily as possible, so that I can fix them quickly and get back to pushing features into production. | Researched | Issue |
| When I open a Merge Request, I want to see whether any of the code changes are not covered by tests, so I can figure out what tests to add to maintain or improve the project's test coverage. | | Issue |
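Both sub-jobs map onto CI report artifacts: a JUnit report surfaces failing tests in the merge request, and a Cobertura-format coverage report drives coverage visualization on the diff. A sketch, assuming a Python project using `pytest` with `pytest-cov` (the commands and regex are placeholders):

```yaml
# Sketch: publish test results and coverage so the merge request can
# show failing tests and uncovered lines. Commands are placeholders.
unit-tests:
  stage: test
  script:
    - pytest --junitxml=report.xml --cov --cov-report=xml
  coverage: '/TOTAL.*\s+(\d+%)$/'   # regex that extracts the total coverage figure
  artifacts:
    reports:
      junit: report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```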
When I am reviewing the software projects my team works on, I want to see the trend of test coverage over time, so I can tell whether it is improving or declining.
| Job statement | Status | Issue |
| --- | --- | --- |
| When I am reviewing the software projects my team works on, I want to see the trend of test coverage over time, so I can see how our improvement efforts are going, or identify an issue that could cause bugs before it is released. | Researched | Issue |
| When I am reviewing the software projects my team works on, I want to see a list of possible flaky tests, so that I know what to focus on to reduce time wasted by the team. | | Issue |
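Spotting flaky tests ultimately requires test-result history across builds, but the `retry` keyword is a common stop-gap for jobs that fail intermittently; a sketch with a placeholder test command:

```yaml
# Sketch: retry an intermittently failing job up to two extra times
# on script failure. The test command is a placeholder.
integration-tests:
  stage: test
  script:
    - npm run test:integration   # placeholder test command
  retry:
    max: 2
    when: script_failure
```

A job that only passes on retry is itself a useful signal when hunting for flaky tests.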