This handbook page outlines the competitor comparison process to use in conjunction with Category Maturity Scorecards.
Our existing Category Maturity Scorecard process focuses on usability from the perspective of our users. What that process lacks is a view into how GitLab compares in functionality and features against our key competitors, with our Jobs to be Done (JTBD) in mind. In other words, adding a competitor comparison gives a more rigorous view into “here’s the problem we’re trying to solve” (JTBD), combined with “here’s how it’s being solved” (features/functionality) and “here’s how pleasant it is to use” (Category Maturity Scorecard).
To address that gap, this competitor comparison piece is designed to fit within the existing Category Maturity Scorecard process as an add-on component and another data point to use when measuring category maturity.
In this new process, we’re not conducting a direct feature comparison. Instead, we’re focusing on the problems we’re trying to solve (otherwise known as the JTBDs) and how they’re being solved (otherwise known as features) within GitLab vs. our competitors. Ultimately, this will help us understand what ‘feature complete’ may look like for a given JTBD within the industry.
You can conduct a competitor comparison at any time, but it’s strongly recommended to conduct it early, prior to reaching Minimal or Viable. Completing the competitor comparison early on helps you to:
If you’re not able to conduct a competitor comparison early on, that’s ok; there’s still tremendous value in conducting it during the later stages of Category Maturity Scoring.
Conduct a competitor comparison at least annually once your category is at Lovable; otherwise, conduct one when you intend to make a category maturity shift.
A Product Manager leads their quad team, in collaboration with a Product Marketing Manager or Technical Marketing Manager, in conducting the competitor comparison. The Product Manager is best suited to lead the comparison effort because they have a detailed view into the JTBDs, competitors, and features. The following steps walk through the process for a competitor comparison:
For this process to be sustainable, we need to provide an adaptable and simple methodology to add to the Category Maturity Scorecard framework that answers the question, “Is GitLab the best-in-class for this category?”
As a Category Maturity Scorecard is based on Jobs to be Done (JTBD), it makes sense to use the same target jobs in the competitor comparison. So, we use our own judgment and market knowledge to evaluate each JTBD separately and inform the overall best in class rating.
Throughout the Category Maturity Scorecard process, we assess one or more JTBDs to understand the representative tasks that users will perform within GitLab (a JTBD comprises multiple tasks). For the competitor comparison, we heuristically evaluate those same tasks within the competitor product. For each task, we objectively assess whether GitLab or the competitor has the superior experience and/or feature offering.
To indicate if a JTBD is best in class, we use a simple rating in the form of an amendment to the existing Category Maturity Score:
Example:
Another way of asking this is: “What are we looking at with competitors in these comparisons?” The short answer: focus on what’s important for the user in the context of the JTBD. Heuristics must apply to both GitLab and the competitor(s), so that you can compare across applications.
It’s also critical to keep the heuristics objective and factual. That means doing your best to exclude your own personal opinions from the heuristics being used. Here are some examples of objective vs. subjective heuristics:
| Objective heuristics 👍 | Subjective heuristics 👎 |
|---|---|
| Number of steps to complete the JTBD | Quicker |
| Time it takes to complete the JTBD | Faster |
| Ability to complete the JTBD within a single page | More convenient |
| Can complete the JTBD | Better |
| How the JTBD can be completed | More/fewer features |
Sometimes a heuristic simply captures how something is done, for comparison’s sake, and may not be rateable. For example, it could describe how a call is made to a database. If it’s a differentiator and important for the user trying to complete a JTBD, it can surface as a heuristic. If you have any questions on this topic, or would like a review, consult your UX Researcher.
If you’re having trouble determining whether your heuristics are objective or subjective, consider the following:
Tip: it’s sometimes best to go into a comparison without heuristics identified ahead of time. Heuristics can often reveal themselves to you once the comparison is underway. A real example:
Aim for 3 or 4 heuristics per JTBD, focusing on the important differentiators that help tell the story of ‘best in class’.
JTBD: When signing up for a service, I want to move through the process as quickly as possible, so I can start using the product effectively.
Potential heuristics:
To conduct the competitor comparison for this JTBD, you would complete the sign-up process in both GitLab and the competitor product, recording your findings against the heuristics you identified.
After going through and documenting each heuristic across both applications, there are 3 possible outcomes:

- GitLab is better
- The competitor is better
- It’s a tie (neither application is clearly better)
This gives us a rating for each JTBD, indicating which application is better. For the category rating, the same tie-breaking logic applies across the JTBD ratings, and that’s how you arrive at your overall score for the competitor comparison.
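If it helps to see that roll-up concretely, here’s a minimal sketch in Python. The JTBD names, the per-heuristic observations, and the simple “most wins” tie-breaking rule are illustrative assumptions for this sketch only, not a prescribed part of the process or any GitLab tooling.

```python
from collections import Counter

# Illustrative only: hypothetical JTBDs and per-heuristic observations.
# Each entry records which application was better on one heuristic;
# "tie" means neither was clearly better.
jtbd_observations = {
    "Sign up for the service": ["gitlab", "competitor", "gitlab"],
    "Invite a team member": ["competitor", "competitor", "tie"],
}

def rate(results):
    """Return 'gitlab', 'competitor', or 'tie' based on which side won more often."""
    counts = Counter(r for r in results if r != "tie")
    if counts["gitlab"] > counts["competitor"]:
        return "gitlab"
    if counts["competitor"] > counts["gitlab"]:
        return "competitor"
    return "tie"

# One rating per JTBD, based on its heuristic observations...
jtbd_ratings = {jtbd: rate(obs) for jtbd, obs in jtbd_observations.items()}

# ...then the same tie-breaking logic rolled up into the category rating.
category_rating = rate(jtbd_ratings.values())

print(jtbd_ratings)     # {'Sign up for the service': 'gitlab', 'Invite a team member': 'competitor'}
print(category_rating)  # tie
```

This is just one way to make the tie-breaking explicit; in practice, the comparison lives in your Category Maturity Scorecard documentation rather than in code.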