Fuzz testing, or fuzzing, is the act of providing unexpected, malformed, or random data to an application or service to measure its response and stability. This is accomplished by monitoring for unexpected behavior or application and service crashes. Fuzz testing finds issues that traditional testing and QA methods typically do not.
The goal of fuzz testing is to discover software defects that need to be addressed as these defects may lead to (exploitable) vulnerabilities that other QA methods and security scanning miss. Fuzz testing can find not only security issues, but also flaws in the business logic of an application or service.
GitLab offers two types of fuzz testing:
Coverage-guided fuzz testing uses contextual information from the source code to better inform fuzz tests and to correlate a fuzz testing crash directly with the region of code that is vulnerable. It can be thought of as an "inside-out" approach. This dramatically improves the cycle time from an initial fuzz test, to a crash, to an update of the vulnerable areas.
A large benefit of coverage-guided fuzz testing is that it does not require development of an app to be complete or a Review App to be created for live testing. Instead, coverage-guided fuzz testing can be done iteratively on small parts of the app. A common workflow is to extend unit tests to perform fuzz testing on small parts of the app until the whole app is complete. This means that you can integrate fuzz testing earlier into your SDLC and shift security further left. Getting these results sooner means that developers can act on them sooner, reducing cycle times.
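As a rough illustration of that workflow (a sketch, not a description of GitLab's specific integration), the example below uses Go's built-in fuzzing support to turn a unit-test-style harness into a coverage-guided fuzz target. `ParseQuery` is a hypothetical function under test; here it simply wraps the standard library so the example runs on its own.

```go
package parser

import (
	"net/url"
	"testing"
)

// ParseQuery is a stand-in for the code under test; it wraps the standard
// library parser only so that this sketch is self-contained.
func ParseQuery(query string) (url.Values, error) {
	return url.ParseQuery(query)
}

// FuzzParseQuery extends a unit-test-style harness into a coverage-guided
// fuzz target: the fuzz engine mutates the seed inputs and uses code-coverage
// feedback to explore ParseQuery more deeply.
func FuzzParseQuery(f *testing.F) {
	// Seed corpus: known-good inputs that existing unit tests already cover.
	f.Add("name=gitlab&page=1")
	f.Add("")

	f.Fuzz(func(t *testing.T, query string) {
		result, err := ParseQuery(query)
		if err != nil {
			return // malformed input may be rejected, but it must not panic
		}
		if result == nil {
			t.Errorf("ParseQuery(%q) returned a nil result with a nil error", query)
		}
	})
}
```

Because the fuzz target is just a test function, it can live alongside existing unit tests and grow incrementally as more of the app is built.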
Traditional fuzz testing is a technique that uses a definition of the inputs a target application is expecting in order to better understand the Implementation Under Test (IUT). This allows the fuzzer to smartly make mutations that are very close to valid, in contrast to fuzzers that don't understand what the expected inputs of the IUT are. Compared to coverage-guided fuzz testing, this is more of an "outside-in" approach.
Additionally, traditional fuzz testing can observe how the behavior of the IUT changes and make different decisions for subsequent fuzz tests. For example, if a traditional fuzz test notices that a specific type of input causes HTTP 500 errors, it can make similar kinds of mutations to find errors in the same parts of the app. This approach improves the quality of results by inducing more faults and helping to pinpoint where they are in the app.
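To make that "outside-in" feedback loop concrete, here is a minimal sketch in Go: it mutates a known-valid request, replays it against a running service, and flags HTTP 5xx responses as server-side faults. The endpoint, payload, and single-byte mutation strategy are illustrative assumptions, not a description of GitLab's fuzz engine.

```go
package main

import (
	"bytes"
	"fmt"
	"math/rand"
	"net/http"
)

// mutate flips one random bit of the seed, keeping the input close to valid.
func mutate(seed []byte) []byte {
	out := append([]byte(nil), seed...)
	i := rand.Intn(len(out))
	out[i] ^= byte(1 << rand.Intn(8))
	return out
}

func main() {
	// Hypothetical known-valid payload and endpoint for a running service.
	seed := []byte(`{"name":"gitlab","page":1}`)

	for i := 0; i < 1000; i++ {
		payload := mutate(seed)

		resp, err := http.Post("http://localhost:8080/search", "application/json", bytes.NewReader(payload))
		if err != nil {
			fmt.Printf("connection fault for input %q: %v\n", payload, err)
			continue
		}
		// An HTTP 5xx suggests an unhandled fault; a real traditional fuzzer
		// would prioritize similar mutations to probe that part of the app.
		if resp.StatusCode >= 500 {
			fmt.Printf("server fault (%d) for input %q\n", resp.StatusCode, payload)
		}
		resp.Body.Close()
	}
}
```

A production-grade traditional fuzzer would feed these observations back into how the next round of inputs is generated, as described above; this sketch only reports the faults it finds.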
Fuzz testing can be performed using either of these two strategies, but the most valuable results come from combining them.
GitLab will provide more valuable fuzz testing results by combining both traditional and coverage-guided fuzz testing techniques. By using both approaches together, we can find more faults and vulnerabilities than one approach would find on its own.
Traditional fuzz testing tests what an app should be doing, based on its published interfaces. Because we also have the source code of the app available at the time of test, we can use coverage-guided fuzz testing techniques to identify parts of the app that aren't mentioned in the publicly facing interfaces. We can also use the source code of the app to provide context and better understand why the app is crashing. This allows us to intelligently generate new test cases to reach other parts of the app's code.
A final, and very important, benefit of combining the techniques is that because we have context around where in the code a fault has occurred, we can point users to specific parts of their app where the fault occurred so it can be fixed. Generally, the process of determining where in source code a fault occurred is difficult, so allowing users to immediately see this location is incredibly valuable and will directly help them to mitigate identified issues.
We have created a playlist of videos called Fuzzing 101 that discusses the different types of fuzz testing, common techniques, and how to use fuzz testing; it contains lots of information you may find useful.
In a survey GitLab conducted in May 2020, 81% of our respondents said that they think fuzz testing is important. However, over 60% said that the most difficult part of fuzz testing is setting it up and integrating it into their CI systems. Because of this difficulty, only 36% of respondents were actually using fuzz testing.
Despite many people thinking fuzz testing is important, the large number of users not using it shows there is an opportunity for GitLab to provide new value if we can help users overcome those initial pain points related to adopting fuzz testing.
There are two primary target audiences for fuzz testing:
There is an additional role that may be included in the target audience. Product verification (PV) engineers may leverage fuzz testing as part of their unit or regression tests. If a software development team does not directly handle its own unit test creation, a PV engineer may be partnered with that team to create and maintain unit tests on its behalf.
Sasha is the primary persona we will initially focus on when building fuzz testing. Ensuring they are able to successfully set up and consume the results of a fuzz test is critical to drive adoption of fuzz testing.
Usability is one of our key areas of emphasis. We prefer to get 80% of fuzz testing quality while only requiring users like Sasha to do 20% of the work. Even though better results are possible with more work by the end-user, we do not want to go in this direction initially since it means users will get frustrated and not use fuzz testing at all.
In the future, we will focus on building out deeper fuzz testing workflows to support Sam and the security research they are doing. We will also deepen the fuzz testing experience so that users like Sasha can easily learn more about fuzz testing and become a more advanced user without requiring a large amount of time to be invested. At that point, we will be focusing on getting the 20% of better quality results that require the 80% of work by end users.
Approaching and prioritizing personas in this way will help us drive adoption of fuzz testing and address the pain points identified in our research.
GitLab believes there are different types of markets and users that will find value in fuzz testing. To this end, we are prioritizing what use cases and where we want to focus first. Read more about them in full on our market segments page.
As we mature fuzz testing, we will work closely with existing GitLab users to add fuzz testing to their apps. Users who already have Source Code Management and CI/CD set up for their projects will find it easy to add fuzz testing to what they already have. It will be more work for a brand-new GitLab user to add SCM, CI/CD, and our other capabilities in addition to adding fuzz testing. Getting up and running with those other capabilities can be done quickly with our quick start guides, examples, and projects created with Auto DevOps, but those are extra steps to complete before adding fuzz testing.
Existing GitLab users who use other Secure scanners will be good candidates to work with to add fuzz testing. They are already familiar with Secure-related workflows and how to process vulnerabilities reported as part of the Security Dashboard. Fuzz testing is a great complement to the other scans that they use.
Focusing on existing users will help us move more quickly in getting users onboarded with fuzz testing and will also help increase our Stages per User metric.
In keeping with GitLab's seed then nurture approach, we are initially focused on adding support for different technologies, languages, and use cases to ensure fuzz testing can be broadly adopted by all our users. Over time, we will build deeper experiences and more capabilities where they will be most impactful.
New language support for coverage-guided fuzz testing (release post)
Over the last several iterations, we have delivered additional language support for coverage-guided fuzz testing. This enables users to apply fuzz testing to an even wider variety of use cases.
Support for Java Spring (release post)
After our initial release of Java support, built on top of the JQF fuzz engine, we realized it wouldn't work for Java Spring apps. Our new engine enables users to fuzz Java Spring apps and requires no extra steps on their part to adapt their code.
First delivery of API fuzz testing (release post)
This first delivery allows users to test their APIs with an OpenAPI specification, a Postman collection, or a HAR archive. This is a great first iteration for users and enables them to easily get started with fuzz testing their APIs.
GitLab's aspiration with fuzz testing is that it becomes a security technique that all of our users take advantage of to find and fix issues in their apps before attackers can exploit them.
Since fuzz testing is traditionally difficult to use, our emphasis is on making it an accessible technology that our users can take advantage of without becoming technical experts in the space or forming a dedicated fuzz testing team.
GitLab will provide its users with application robustness and security visibility testing solutions that validate the entire application stack. This will be provided by verifying the user’s application or service both at rest and while running. It will include historical trending and recommendations for next steps to provide peace of mind to our users.
Furthermore, GitLab will provide real-time remediation for user solutions running in production. This will be made possible by analyzing the issues found in GitLab Secure and applying “virtual patches” to the user’s production application or service, leveraging GitLab Protect. This will allow your organization to remain secure and continue functioning until the underlying vulnerability can be remediated in your code.
GitLab ultimately envisions our fuzz testing solutions becoming a critical part of every organization's DevSecOps workflow, whether that is for web APIs, traditional desktop software, or hardware devices. Our long-term vision is to shift fuzz testing solutions left for these use cases so that fuzz testing becomes a common part of every developer's workflow. We will start with the web API and web application use cases, which are already well supported in GitLab, as part of an iterative approach to delivering fuzz testing.
Our near-term goals are to drive additional adoption of fuzz testing and deliver the capabilities needed to drive that adoption.
Since we are moving forward quickly and have limited resources, it is important we focus our efforts where they will be most impactful. To this end, our high-level near-term priorities are:
Provide users with a better way to easily consume, triage, and remediate fuzz testing findings.
The next step is to expand our support for fuzz testing and to make it straightforward for users to consume the results of their fuzz testing jobs. In 13.3 we made it easy to add coverage-guided fuzz testing to pipelines, but we want to do more to make the results easy to consume. To this end, we are focusing on making our fuzz testing results richer and providing more information in the Security Dashboard, pipeline, and MR views. Specifically, we will be moving the API fuzz testing results into these screens, rather than leaving them in their current location.
Allow coverage-guided fuzz users to manage their corpus objects more easily.
A pain point coverage-guided fuzz users face is managing the inputs that the fuzz engine uses. Users currently must commit these files to their Git repo, which can cause difficulties if the files need review and also increases the size of the repo. We plan to improve this by creating a separate area in GitLab, the corpus registry, to hold these corpus objects, so nothing needs to be added to the Git repo. This will allow users to easily create, edit, and update their corpuses for use with fuzzing.
Build greater awareness of fuzz testing among GitLab team members, customers, users, and the market.
One of the areas we are focused on is educating internal teams, our users, and the broader market on fuzz testing and the value it provides. We need to dispel older notions that fuzz testing can only be applied in limited situations and that it is difficult to use. To this end, we are conducting multiple training sessions with internal teams, hosting live sessions, writing articles for external audiences, educating analysts, and producing other content to build greater awareness.
Broaden our support for coverage-guided fuzz testing use cases.
We are also committed to making sure that users can use fuzz testing, regardless of the technology they use to build their apps. To achieve this, our next steps are to offer support for additional languages such as JavaScript, Python, C#, and Ruby in upcoming releases.
Allow users to do API fuzz testing with the artifacts they already have.
To make it easier to get started with API fuzz testing, we will focus on using the files and artifacts that users already have to quickly set up API fuzz testing. An example of this is adding support for fuzz testing OpenAPI v3 specifications. This lets users quickly start fuzz testing, find issues, and resolve them, rather than spending more time setting up fuzz testing. We are also going to focus on how to leverage other scanners users already use, such as DAST, to automatically create the artifacts needed to run fuzz testing.
Enable users to easily run fuzz testing continuously.
Users today must balance how long they run fuzz test jobs as part of pipelines against the needs of developers to get results quickly while they iterate on features. Today users can create a scheduled pipeline to start long-running fuzz testing jobs, but this is not ideal. We are focusing on building a rich experience for configuring continuous fuzz testing jobs. This will allow users to get the benefits of running fuzz testing for long periods of time while still giving developers quick fuzz tests on individual MRs. We will start by focusing on coverage-guided fuzz testing for this approach and then allow API fuzz testing to be done in the same way.
Create an open source protocol fuzz testing offering.
As part of the acquisition of Peach Tech, GitLab acquired mature protocol fuzz testing technology. We are considering how to best bring this to our users. We will initially start by open-sourcing the core engine for protocol fuzz testing as a first iteration. This will allow the community to contribute and for users to build support for their own protocols and begin fuzzing them.
After open sourcing parts of protocol fuzz testing, we will focus on how to integrate protocol fuzz testing into GitLab, so users can configure the fuzz engine, manage fuzz runs, and triage results alongside other security results for their projects.
We have several longer-term, more strategic objectives we will be focusing on as well. Those are discussed in more detail in our longer-term strategy page.
Our initial focus is on web applications and REST APIs, so we are not focusing on fuzz testing local desktop or mobile applications at this time.
Applications that require special hardware, such as wireless, Bluetooth, and automotive-based fuzz testing, are not where we are focused now. This is because requiring proprietary hardware is not where GitLab is strong today, and it makes it more difficult for us to quickly iterate. As we work on protocol fuzzing support, which generally involves these industries, we will focus on how to interact with hardware via software but not on bringing custom hardware directly into GitLab.
The fuzz testing competitive landscape is composed of both commercial and open source solutions, outlined below:
GitLab already uses OWASP ZAP today to power the DAST capability within the GitLab Secure offerings. ZAP is a web application security scanner, leveraging a proxy to perform testing, and does not perform protocol-level fuzz testing. All fuzz testing by ZAP is performed within the context of the web application being tested.
We expect to see various research efforts around fuzz testing. These may be led by university groups, innovation teams inside companies, or independent security researchers. Since research is intended to advance the state of the art, GitLab is open to collaboration on open-source efforts, rather than viewing those sorts of projects as competitive by default.
GitLab announced the acquisition of Fuzzit and Peach Tech on June 11, 2020. This immediately enhanced and accelerated our plans for fuzz testing. Watch the video below to get all the details!
Analysts consider fuzz testing to be part of Application Security Testing, and it is generally discussed as part of those reports rather than as a standalone capability. There are challenges in making it part of the DevSecOps paradigm, and analysts are interested to hear more about how we address them.
Gartner wrote "Outsourcing or leveraging managed services to perform fuzzing is the recommendation in the absence of internal subject matter expertise and staffing." as part of their "Structuring Application Security Practices and Tools to Support DevOps and DevSecOps" research. This recommendation is a result of the high-level of manual configuration many fuzzers need. This is a pain point GitLab should be mindful of as we work on bringing our own fuzz testing solutions out so that users can be successful without needing to be a fuzz testing expert nor need to do a large amount of configuration.
Gartner also published their How to Deploy Application and Perform Application Security Testing research, in which they say "You should reserve fuzzing for nonweb applications." We disagree with this conclusion and think that it comes primarily from the difficulty normally associated with doing fuzz testing. This underscores the importance of our emphasis on making fuzz testing straightforward and easy to use.
Additionally, as APIs become more and more of a staple of modern application development, they should be prioritized for fuzz testing. Because APIs are one of the primary interfaces exposed to end users, fuzz testing them can reveal critical bugs and security issues before they are exposed to the world, even though APIs are not traditional web applications.
By combining our fuzz testing solution with our DAST, IAST, and SAST solutions into a GitLab Secure suite, GitLab can be placed into the Gartner Magic Quadrant for Application Security Testing. This will give GitLab additional exposure as well as show security thought leadership.
Other relevant analyst firms include 451 Research and IDC, as they have focused security practices in which GitLab Secure can be highlighted and demonstrate leadership.
We will continue to update this section with customer success and sales issues as we begin getting feedback.
We will continue to update this section with specific GitLab issues as we begin getting feedback.
We have begun working with internal teams to add fuzz testing in various places, such as GitLab Runner. Working with internal GitLab teams is giving us good feedback and is also finding bugs to fix, which means fuzz testing is providing value.
We have several pages that provide more info about our direction for fuzz testing: