Category Maturity - Competitor Comparison

Leveraging the existing Category Maturity Scorecard process and Jobs to be Done (JTBDs), this competitor comparison process allows us to establish best in class by looking closely at how we solve (features) the problems we’re targeting (jobs) compared to our competitors.

This handbook page outlines the competitor comparison process to use in conjunction with Category Maturity Scorecards.

Why it’s important to look at competitors

Our existing Category Maturity Scorecard process focuses on usability from the perspective of our users. What that process lacks is a look into how we, as an application, compare in functionality and features against our key competitors with our Jobs to be Done (JTBD) in mind. In other words, adding a competitor comparison gives a more rigorous view into “here’s the problem we’re trying to solve” (JTBD) combined with “here’s how it’s being solved” (features/functionality) and “here’s how pleasant it is to use” (Category Maturity Scorecard).

To address that gap, this competitor comparison piece is designed to fit within the existing Category Maturity Scorecard process as an add-on component and another data point to use when measuring category maturity.

In this new process, we’re not conducting a direct feature comparison. Instead, we’re focusing on the problems we’re trying to solve (otherwise known as the JTBDs) and how they’re being solved (otherwise known as features) within GitLab vs. our competitors. Ultimately, this will help us understand what ‘feature complete’ may look like for a given JTBD within the industry.

Fitting into the existing Category Maturity Scorecard process

You can conduct a competitor comparison at any time, but it’s strongly recommended to conduct it early, prior to reaching Minimal or Viable. Completing the competitor comparison early on helps you to:

  • Identify industry trends related to your JTBDs/workflows
  • Identify gaps in our own feature offerings
  • Use the comparison findings for planning purposes
  • Address the identified gaps before evaluating for Competitive and Complete maturity

If you’re not able to conduct a competitor comparison early on, that’s ok; there’s still tremendous value in conducting it during the later stages of Category Maturity Scoring.

Conduct a competitor comparison at least annually if you’re at Complete. Otherwise, conduct one when you intend to make a category shift.

Steps to follow

A Product Manager will lead their quad team, in collaboration with a Product Marketing Manager or Developer Advocate, in conducting the competitor comparison. A Product Manager is best suited to lead the comparison effort because they have a detailed view into the JTBDs, competitors, and features. The following steps walk through the process for a competitor comparison:

  • Step 1 - Choose the competitor(s)

    • If there are many competitors in the space, it’s advised to focus on the top competitor(s).
    • The competitor(s) being evaluated should be justifiable as a direct competitor with the JTBDs in mind. Components such as market share, feature overlap, etc. should be considered when determining which competitors to compare against.
    • If the competitor requires a paid subscription, you will need to conduct the comparison to the best of your ability using publicly available sources. Also, reach out to the #competition Slack channel to see if we can leverage any third-party vendors to help expand our understanding of the competitor.
    • Depending on the competitor, you may need to gain approval through the Individual Use Software process.
  • Step 2 - Follow Legal guidelines

  • Step 3 - Decide on the JTBDs to focus on

    • The JTBDs should align directly with the Category Maturity Scorecard testing you’re also evaluating (or already have evaluated). The process on how to identify JTBDs is outlined here.
    • Sometimes, the scope of a JTBD can span multiple stage groups, requiring tighter collaboration across different teams. One suggestion for how to approach that scenario is to hold an async workshop to land on a set of agreed-upon JTBDs while also discussing the division of labor and timing considerations for the competitor comparison.
  • Step 4 - Identify the top competitor(s)

    • Keep this in the context of the JTBDs that you’re evaluating.
  • Step 5 - Copy the template & populate the JTBDs you’re evaluating

    • Review the example within the template for guidance.
    • Link the deck from the issue and make sure the “View” settings of your completed deck are set to GitLab internal only, as these decks should not be viewed outside of GitLab. As a reminder, it’s important not to share your screen if you’re presenting the deck on a public stream or in a recorded meeting that will appear on YouTube.
  • Step 6 - Compare the functionality/features that solve for the JTBDs

    • Be sure to include screenshots of both GitLab and the competitor to illustrate differences in the experiences/features related to the JTBDs.
    • If you discover that either GitLab or the competitor(s) doesn’t offer a solution (feature) for the JTBD being compared, that’s probably an indicator of a gap.
  • Step 7 - Rate each JTBD

    • Each slide should be an opportunity to show your work and justification behind your rating.
    • Remember: the competitor comparison should be completely data/evidence driven.
  • Step 8 - Document/update the scoring

    • Either add or amend the existing category score on this handbook page with the appropriate signal
  • Step 9 - Identify actionable steps to take

    • Now that the gaps are identified and ratings are complete, it’s time to develop an action plan to move maturity up and/or achieve best in class.
    • (A handbook page with suggestions on how to do this is a work in progress - coming soon.)

Scoring best in class

For this process to be sustainable, we need to provide an adaptable and simple methodology to add to the Category Maturity Scorecard framework that answers the question, “Is GitLab the best-in-class for this category?”

The approach to rating

As a Category Maturity Scorecard is based on Jobs to be Done (JTBD), it makes sense to use the same target jobs in the competitor comparison. So, we use our own judgment and market knowledge to evaluate each JTBD separately and inform the overall best in class rating.

Throughout the Category Maturity Scorecard process, we assess one or more JTBDs to understand the representative tasks that users will perform within GitLab (a JTBD comprises multiple tasks). For the competitor comparison, we heuristically evaluate those same tasks within the competitor product. For each task, we objectively assess whether GitLab or the competitor has the superior experience and/or feature offering.

Indicating best in class

To indicate if a JTBD is best in class, we use a simple rating in the form of an amendment to the existing Category Maturity Score:

  • (+) Indicates best in class
  • Bold (+) indicates a GitLab sweep, where GitLab scored better than the competitor on every JTBD
  • (-) Indicates not best in class
  • No marking means competitor analysis was not performed.

Example:

  • ‘Viable’ - The absence of any marking indicates that a competitor analysis was not performed
  • ‘Viable (+)’ - (+) indicates best in class
  • ‘Viable (+)’ (with the (+) in bold) - indicates best in class and a GitLab sweep, where GitLab scored better than the competitor on every JTBD
  • ‘Viable (-)’ - (-) indicates not best in class
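
As a minimal sketch of how these markings could map to a score string (the function name and the use of markdown bold for a sweep are assumptions for illustration, not an official convention):

```python
from typing import Optional

def amended_score(level: str, best_in_class: Optional[bool], sweep: bool = False) -> str:
    """Render a Category Maturity score with the competitor comparison amendment.

    best_in_class=None means no competitor analysis was performed, so no marking is added.
    A sweep (best in class on every JTBD) is rendered with a bold (+), shown here as markdown.
    """
    if best_in_class is None:
        return level
    if best_in_class:
        marker = "**(+)**" if sweep else "(+)"
    else:
        marker = "(-)"
    return f"{level} {marker}"

print(amended_score("Viable", None))              # Viable
print(amended_score("Viable", True))              # Viable (+)
print(amended_score("Viable", True, sweep=True))  # Viable **(+)**
print(amended_score("Viable", False))             # Viable (-)
```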

Identifying heuristics

Another way of asking this is: “what are we looking at with competitors in these comparisons?” The short answer is: focus on what’s important for the user while considering the JTBD. Heuristics must apply to both GitLab and the competitor(s) so that they can be compared across both applications.

It’s also critical to keep the heuristics objective and factual. That means doing your best to exclude your own personal opinions from the heuristics being used. Here are some examples of objective vs. subjective heuristics:

| Objective heuristics 👍 | Subjective heuristics 👎 |
| --- | --- |
| Number of steps to complete the JTBD | Quicker |
| Time it takes to complete the JTBD | Faster |
| Ability to complete the JTBD within a single page | More convenient |
| Can complete the JTBD | Better |
| How the JTBD can be completed | More/less features |

Sometimes, a heuristic can simply describe how something is done, for comparison’s sake, meaning it may not be rateable. For example, it could describe how a call is made to a database. If it’s a differentiator and important for the user trying to complete a JTBD, it can surface as a heuristic. If you have any questions on this topic, or would like a review, please consult your UX Researcher.

If you’re having trouble determining if your heuristics are objective vs. subjective, consider the below:

  • If you’re using words like ‘better’, ‘faster’, or ‘more’ to define your heuristics, they’re probably subjective heuristics and need to be adjusted to be more objective.
  • If your heuristics can be compared using numbers, yes/no, or descriptions, then they’re probably objective heuristics.

Tip: it’s sometimes best to go into a comparison without heuristics identified ahead of time. Heuristics can often reveal themselves to you once the comparison is underway. A real example:

  • In comparing features, it became apparent that one application required far more steps than the other to complete the same task. In this example, ‘number of steps’ became a comparable heuristic.

A good number of heuristics to aim for per JTBD is 3 or 4. Focus on what you think are the important differentiators that can help tell the story of ‘best in class’.

Example case study

JTBD: When signing up for a service, I want to move through the process as quickly as possible, so I can start using the product effectively.

Potential heuristics:

  • Time
  • Number of steps

To conduct the competitor analysis for this, you would go through the sign-up process in both GitLab and the competitor product while keeping track of the heuristics you identified.
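
For illustration, the documented measurements might be captured like the hypothetical sketch below; the structure, field names, and numbers are assumptions and simply show objective, comparable values (numbers, yes/no, descriptions) recorded per application.

```python
# Hypothetical measurements for the sign-up JTBD; all values are illustrative, not real data.
signup_comparison = {
    "jtbd": "Sign up for the service and start using the product",
    "heuristics": {
        "time_to_complete_seconds": {"gitlab": 95, "competitor": 140},  # lower is better
        "number_of_steps": {"gitlab": 6, "competitor": 9},              # lower is better
    },
}
```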

After going through and documenting each heuristic across both applications, there are 3 possible outcomes:

  1. GitLab is better in more heuristics. (GitLab is best in class for this JTBD.)
  2. Competitor is better in more heuristics. (GitLab is not best in class for this JTBD.)
  3. It is an even split. (GitLab is not best in class for this JTBD, because “best” does not imply a tie. You are not the best if someone is as good as you are.)

This gives us a rating for each JTBD, indicating which application is better. For the category rating, the same tie-breaking rule applies: an even split does not count as best in class.

Thus you arrive at your overall score for the competitor analysis.
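
To summarize the rating logic end to end, here is a minimal Python sketch (not an official GitLab tool); the function name, JTBD labels, and win counts are hypothetical, and it assumes the category-level rating reuses the same tie-breaking rule described above.

```python
def best_in_class(wins_gitlab: int, wins_competitor: int) -> bool:
    """GitLab is best in class only with strictly more heuristic wins; a tie is not a win."""
    return wins_gitlab > wins_competitor

# Per-JTBD ratings from hypothetical heuristic win counts (all numbers are illustrative).
jtbd_results = {
    "Sign up quickly":      best_in_class(wins_gitlab=2, wins_competitor=0),  # GitLab wins both heuristics
    "Invite a team member": best_in_class(wins_gitlab=1, wins_competitor=1),  # even split -> not best in class
    "Configure a pipeline": best_in_class(wins_gitlab=1, wins_competitor=2),
}

# Category-level rating: apply the same tie-breaking rule to the JTBD-level results.
# Assumption: JTBDs where GitLab is not best in class (including even splits) count against GitLab.
gitlab_jtbd_wins = sum(jtbd_results.values())
category_best_in_class = gitlab_jtbd_wins > len(jtbd_results) - gitlab_jtbd_wins

# A bold (+) on the scorecard means a GitLab sweep: best in class on every JTBD.
gitlab_sweep = all(jtbd_results.values())

print(f"Category best in class: {category_best_in_class}, GitLab sweep: {gitlab_sweep}")
```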