Published on March 7, 2024

How to put generative AI to work in your DevSecOps environment

Learn how artificial intelligence, when integrated throughout the platform, can reap tangible rewards for organizations and their DevSecOps teams.


Generative AI has ushered in a new wave of innovation that's poised to help alleviate many tedious manual and time-consuming aspects of software development and delivery, and, as a result, accelerate DevSecOps workflows. But to realize the full potential of generative AI, the technology has to be sprinkled not just at the point of code creation, but everywhere.

According to our 2023 State of AI in Software Development report, code creation accounts for only 25% of a developer's time. There are so many other critical tasks that happen from the first commit through to production that could benefit from the power of AI. I discussed this in depth with GitLab Field CTO organization leaders, Lee Faus and Brian Wald, in our recent webinar, "Explore the power of AI and GitLab Duo" (now available on-demand).

We focused on all the opportunities for AI to be infused to help shepherd software to delivery, creating better, more secure software faster. For instance, something as commonplace as examining a failed build can be improved by using AI to assess what went wrong and how to fix it. Although AI does not eliminate the task, it can help reduce the steps and time required to complete it.

Here are a few takeaways from our conversation to help you get started leveraging AI in your DevSecOps environment.


Start with an assessment of your workflows

Before you can fully realize the impact of AI, you’ll have to do some upfront work, including revisiting your workflows. You want to understand the ideal workflow you can build out to have consistency in your approach to using AI and have the proper guardrails in place to reduce any risks that AI might introduce.

For instance, if you're writing code with generative AI, some of that generated code might include security vulnerabilities; models trained on public code can reproduce its insecure patterns. So you'll need a workflow in place to catch those vulnerabilities and reduce the chance of them making it into production. Once you have this workflow, you can start to introduce AI functionality in a more consistent manner that will increase the velocity of your development.
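One sketch of such a guardrail, using GitLab's standard CI scanner templates (adjust stages and rules to fit your own pipeline):

```yaml
# .gitlab-ci.yml — run GitLab's built-in scanners on every commit,
# so AI-generated code is checked before it can reach production.
include:
  - template: Security/SAST.gitlab-ci.yml                 # static analysis
  - template: Security/Secret-Detection.gitlab-ci.yml     # leaked credentials
  - template: Security/Dependency-Scanning.gitlab-ci.yml  # vulnerable dependencies

stages:
  - test
```

Because the scanners run on every branch, vulnerabilities surface in the merge request itself rather than after release.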

Here's an example of how assessing your workflow upfront can improve the benefits you'll get from AI. While AI can automatically build tests for you, you wouldn't want it to do so after the code's already written. Developers are kept separate from the QA team precisely because they tend to test only what they've written; generative AI has the same blind spot, so your workflow for AI-generated tests needs to start earlier, where developers can use details in issues to interactively generate unit tests for the code they intend to write. By considering the workflow, they can create the merge request with the tests first. Then, when they pull the branch to start working on the implementation, their code suggestions are more robust because the context now includes the proper tests, and the acceptance rate of those suggestions will be much higher than if they had started with the code directly.
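The test-first flow above can be sketched in miniature (the function and test names here are hypothetical illustrations, not GitLab Duo output): the tests are derived from an issue's acceptance criteria and committed first, and the implementation is then written against them.

```python
# Step 1: tests derived from the issue's acceptance criteria, committed
# to the merge request before any implementation exists.
def test_parses_major_minor_patch():
    assert parse_semver("2.10.3") == (2, 10, 3)

def test_rejects_malformed_version():
    try:
        parse_semver("2.10")
        assert False, "expected ValueError"
    except ValueError:
        pass

# Step 2: the implementation is written with the tests already in the
# branch, so code suggestions have them as context for what "correct" means.
def parse_semver(version: str) -> tuple[int, int, int]:
    parts = version.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"not a semantic version: {version!r}")
    major, minor, patch = (int(p) for p in parts)
    return (major, minor, patch)

# A test runner such as pytest would collect these automatically.
test_parses_major_minor_patch()
test_rejects_malformed_version()
```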

You can't revamp all your workflows at once, so make sure to focus on those related to your biggest software development and delivery challenges, such as modernizing legacy code bases, handling an increase in security issues, or operating on ever-thinning budgets and staff.

Establish guardrails for AI

You'll also want to consider the risk of AI in terms of the data it's interacting with and make sure you're putting guardrails in place to mitigate that risk and meet your unique compliance needs. You'll want to consider the AI models you're using, whether you're accessing vector databases, and how large language models (LLMs) are being trained.

For these questions, you'll want to pull your legal, compliance, and DevSecOps teams together to ask tough questions of your AI providers. We provide some helpful guidance in the GitLab AI Transparency Center and our blog post on building a transparency-first AI strategy.

Another critical guardrail is limiting the number of separate AI tools you use throughout the software development lifecycle and across your organization. The more tools you use, the more complexity you introduce, potentially causing operational issues, oversight challenges, and security risks. In addition, numerous tools result in increased overhead costs.

Measure the impact of AI

Measuring the changes in productivity and other key metrics is going to be essential to truly understanding the impact of AI in your organization. Typically, organizations would look at output from the perspective of how often they are shipping code into production, the four DORA metrics, or the time it takes to remediate bugs. But that doesn't provide a holistic picture.

At GitLab, we measure the impact of AI by building out the standardization of workflows inside our hierarchy structure of groups and projects so we can roll up metrics from teams to business units and analyze those outputs directly inside the user interface.

When you implement AI on top of this structure, you're able to see the increase in velocity, including the time it takes to resolve vulnerabilities and validate that merge requests have the right reviewers and the right tests, which reduces the time it takes to go through the code review process. You can see each stage inside GitLab, including dependencies, and the time it takes the development team to get through those stages. Dashboards show what that speed looks like and make it easier to pivot based on that data. For instance, you can decide whether or not to release software into production. Want to know more? You can dig deeper with my blog on measuring AI effectiveness.
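One concrete way to baseline velocity before and after an AI rollout is a DORA-style "lead time for changes" calculation. This sketch uses illustrative timestamps rather than data pulled from the GitLab API:

```python
from datetime import datetime
from statistics import median

# Illustrative merge request records: (created_at, deployed_at).
merge_requests = [
    (datetime(2024, 2, 1, 9, 0),  datetime(2024, 2, 2, 17, 0)),
    (datetime(2024, 2, 3, 10, 0), datetime(2024, 2, 3, 15, 0)),
    (datetime(2024, 2, 5, 8, 0),  datetime(2024, 2, 8, 12, 0)),
]

# DORA "lead time for changes": elapsed hours from creation to deploy.
lead_times_h = [
    (deployed - created).total_seconds() / 3600
    for created, deployed in merge_requests
]

# Median is more robust than the mean against a single slow outlier MR.
print(f"median lead time: {median(lead_times_h):.1f} hours")  # → 32.0 hours
```

Comparing this distribution across pre- and post-rollout periods, team by team, gives a more honest picture than a single aggregate number.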

GitLab Duo: Your one-stop shop for impactful, generative AI features

We're building GitLab Duo, our expanding toolbox of AI features for the DevSecOps platform, with powerful generative AI models and cutting-edge technologies from hyperscale cloud providers. Today, GitLab Duo has features in general availability, beta, and experimental phases, ranging from a code assistant to a conversational chat assistant to a vulnerability explainer. When used consistently across the software development lifecycle, GitLab Duo will drive a 10x faster cycle time, helping organizations do more with less and allowing employees to spend their time on higher-value tasks.

The "Omdia Market Radar: AI-Assisted Software Development, 2023–24" report highlighted GitLab Duo as one of the products the analyst firm considers “suitable for enterprise-grade application development," noting that its “AI assistance is integrated throughout the SDLC pipeline.”


Practical uses for GitLab Duo

Here are some practical ways to use GitLab Duo throughout the software development lifecycle.

  • Write merge request descriptions: GitLab Duo can automate the creation of comprehensive merge request descriptions, quickly and accurately capturing the essence of an MR's string of commits. It can also surface missing tasks based on the code that was written and the intent of the MR's linked issue.

  • Explain code in natural language: QA testers can use the Code Explanation feature to quickly and easily understand code. For instance, if an MR includes code written in Rust and a complex set of methods, the QA tester can highlight the methods and receive a natural language readout of what the change is trying to do. This allows the QA tester to write much better test cases that will cover not just the sunny day but also rainy day scenarios.

  • Root cause analysis of pipeline errors: As pipelines grow larger, refactoring them can break something that is difficult to troubleshoot — especially if you're executing a series of bash scripts or running a Docker image that relies on commands internal to the image. You can run the errors you receive through generative AI, and it will explain a possible root cause and recommend a solution that you can copy and paste directly back into your CI job.

  • Vulnerability resolution: In the rush to shift security left, engineering teams have had to quickly become security experts. With generative AI, engineers can access Duo Chat to learn what the vulnerability is, where it is in the code, and even open an automated MR with a possible fix – all within the development window, so no context-switching.

Live demo! Discover the future of AI-driven software development at our GitLab 17 virtual launch event. Register today!

We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
