Published on February 14, 2024
5 min read

New report on AI-assisted tools points to rising stakes for DevSecOps

Read the key findings from the "Omdia Market Radar: AI-Assisted Software Development, 2023-24" report, including the state of AI-based code assistants.


Small wonder that the buzz about deploying generative AI and large language models (LLMs) for code completion and code generation has focused almost exclusively on developer productivity. That is a significant milestone, but it isn't the entire story. Less widely understood is what AI-assisted tools can do for development teams and, more broadly, for organizational competitiveness. Combining AI-powered tools with integrated development environments (IDEs) doesn't just boost developer efficiency; it transforms the entire software development lifecycle (SDLC) while adding "layers" of safety enhancements.

DevSecOps teams see firsthand that AI-assisted software tools help reduce software testing bottlenecks and improve security as they streamline workflows. In this new era, DevSecOps can simultaneously shorten the software development cycle, enforce security standards, and enhance output. In short, the right tools make organizations more competitive.

Just as LLM quality improvements amplify the value of generative AI, the new class of AI-powered development tools must offer privacy and transparency controls to harness these models effectively. With rigorous controls in place, DevSecOps teams gain efficiencies and improve collaboration while reducing the security and compliance risks of AI adoption.

An analyst's take on what matters

One of the key findings of a new report called “Omdia Market Radar: AI-Assisted Software Development, 2023–24” is that “the use of AI-based code assistants has reached a level of proficiency such that enterprises not using this technology will be at a disadvantage.”

Read the Omdia Market Radar report.

Few may have anticipated how quickly the development community would integrate AI-powered application development. Until recently, adoption had been a gradual build. According to Omdia, "The application of AI to code assistance has been ongoing for the last decade with a focus on assisting professional developers." After years of development, the report emphasizes, "this technology is now a permanent part of the landscape."

Omdia’s finding also tracks with the GitLab 2023 Global DevSecOps Report: The State of AI in Software Development, which gathered input from 1,000 global leaders in development, IT operations, and security. Today, nearly one in four DevSecOps teams have adopted AI tools, and another two-thirds plan to use AI in software development. In the GitLab report, more than half (55%) of teams cited the promise of improved efficiency. At the same time, two in five respondents expressed concern that AI-generated code may introduce security vulnerabilities.

Advocating a layered approach

Given potential risks such as LLM inaccuracy, including widely documented hallucinations, Omdia cautions brands that “careless use of LLM output could harm and tarnish” their reputation. “To increase the accuracy of this technology and ensure that developers can use this technology safely and without violating license rules in the data used to train the models, there is a need to add layers on top of the foundation model.”

By layers, Omdia emphasizes the value of “safety and enhancement” safeguards and filters. These layers create a “major differentiator” for AI-assisted development tools because they manage “training data licensing rules, the quality and accuracy of the generated output, and the prevention of insecure code.” The report's authors caution that “generated outputs need to be carefully evaluated” to ensure they are “safe and of high quality.”

In effect, the safeguards and filters in AI-assisted software development establish a “defense-in-depth” strategy for coding. That’s a concept in which “attacks missed by one technology are caught by another,” which can also apply to any elevated digital risk, such as reputational harm.
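To make the layering idea concrete, here is a minimal sketch of how independent checks can be chained over model output so that what one filter misses, another catches. This is not GitLab's or Omdia's implementation; the filter names and rules below are illustrative assumptions only.

```python
import re

def license_filter(snippet: str) -> bool:
    """Illustrative check: reject output that reproduces license headers from training data."""
    return "SPDX-License-Identifier" not in snippet

def secret_filter(snippet: str) -> bool:
    """Illustrative check: reject output containing obvious hardcoded credentials."""
    return re.search(r"(api[_-]?key|password)\s*=\s*['\"]", snippet, re.IGNORECASE) is None

def insecure_call_filter(snippet: str) -> bool:
    """Illustrative check: reject output that uses known-dangerous calls."""
    return not any(call in snippet for call in ("eval(", "os.system(", "pickle.loads("))

LAYERS = (license_filter, secret_filter, insecure_call_filter)

def accept_suggestion(snippet: str) -> bool:
    # Defense in depth: a suggestion must pass every layer,
    # so what one filter misses another may catch.
    return all(layer(snippet) for layer in LAYERS)

print(accept_suggestion('password = "hunter2"'))    # False: the secret filter rejects it
print(accept_suggestion('total = sum(range(10))'))  # True: all layers pass
```

In practice each layer would be far more sophisticated (license attribution, static analysis, vulnerability scanning), but the structure is the same: every generated output must clear every independent safeguard before it reaches a developer.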

A new perspective on GitLab Duo

Omdia highlighted GitLab Duo, the company’s suite of AI capabilities, as one of the products it considers “suitable for enterprise-grade application development,” noting that its “AI assistance is integrated throughout the SDLC pipeline.”

Among the report highlights:

  • “GitLab places an emphasis on respecting user privacy and being transparent in how it operates. In its selection of AI technology, it is agnostic to the models adopted and will use what it considers the best model for each use case.”

  • “When GitLab looked at where developers were spending their time, it was only 25% on coding, and 75% was taken up by other necessary tasks: planning, onboarding, testing, documentation, and security. Therefore, GitLab applies AI to all these tasks, not just code generation assistance.”

  • “To ensure privacy, GitLab does not let its AI retain user data in any way and does not use client code to train its models.”

  • GitLab’s AI gateway is model agnostic, and “GitLab uses models from Google and Anthropic to power GitLab Duo.”

  • Beyond code suggestions, developers “can ask Duo Explanation using natural language to explain what the code does.”

GitLab Duo introduces stronger controls

For DevSecOps teams, there’s no tradeoff between efficiency and security; both are essential. GitLab Duo includes features such as Code Suggestions and Chat, which deliver AI-powered code completion, code generation, and conversational assistance, improving collaboration between development, security, and operations teams.

With GitLab Duo, customer privacy is never subject to tradeoffs. All customer code stays private; it is never used for model training or fine-tuning. These practices are core to GitLab’s privacy- and transparency-first approach to team collaboration and security, and they reduce the compliance risks of AI adoption.

The Omdia report notes that “software developers face greater complexity and hurdles today in producing code.” As a result, “There is a need to build in application security, including enforcing standards and triaging security vulnerabilities.” The report finds that GitLab has “security guardrails consistently applied throughout.”

Adopters need tools that can help them tap AI’s benefits without introducing vulnerabilities or undermining compliance standards in ways that jeopardize trust with customers, partners, employees, and other critical stakeholders. DevSecOps teams seek tools to reduce the time, stress, and complexity of the entire application lifecycle.

Read the Omdia Market Radar report.

Rusty Weston is an award-winning data-driven storyteller, editor, researcher, and writer. He formerly served as Editor of InformationWeek.com, Managing Editor at Yahoo!, and Vice President and Managing Editor for the Ogilvy content team.

We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
