AI has taken the world by storm, creating a tectonic shift across industries and society as a whole. With it, important discussions have emerged about its beneficial applications and the potential for negative repercussions, especially in the field of software development. AI is already changing how software is designed, built, secured, and deployed. With all of the industry buzz around the technology, it’s more important than ever to separate the hype from reality.
That’s why I’m pleased to share that GitLab has recently released its Global DevSecOps Report: The State of AI in Software Development. With this survey, we had one goal in mind: to uncover whether AI is living up to its promise. We surveyed more than 1,000 global senior technology executives, developers, and security and operations professionals to understand how organizations use AI in software development today, and what they hope to achieve with it in the future.
Many of the sentiments reflected in the report echo what I hear firsthand from customers across industries and around the world: an eagerness to harness the benefits of AI for business innovation, tempered by caution about its potential risks.
Perhaps unsurprisingly, privacy and security, productivity, and training emerged as key challenges to successful AI adoption. Organizations recognize that the policies and strategies put into place now, and the ways in which we shift our workflows to incorporate AI, will shape the future of software development.
Let’s dive into a few of the key findings.
Download the full Global DevSecOps Report: The State of AI in Software Development.
As the excitement around AI increases, so do security concerns
Security and privacy are top concerns, but they don’t detract from the urgency to implement AI. While 83% of those surveyed said that implementing AI in their software development processes is essential to avoid falling behind, 79% noted that they are concerned about AI tools having access to private information or intellectual property.
The top concern, by far, is the potential exposure of sensitive information such as customer data (72%), and nearly half of respondents (48%) said they were concerned that trade secrets may be exposed.
Similarly, when it comes to the biggest concerns around introducing AI into the software development lifecycle, 48% said that AI-generated code may not be subject to the same copyright protection as human-generated code.
Given these results, it’s not surprising that 95% of senior technology executives said they prioritize privacy and protection of intellectual property when selecting an AI tool.
To benefit from AI safely, organizations can avoid pitfalls such as data leakage and security vulnerabilities by first deploying it in a low-risk environment. This enables teams to learn by trial and error and build best practices before additional teams adopt AI, ensuring it scales safely and sustainably.
AI is poised to increase the productivity of some teams — and increase the workload of others
Different business functions have different goals and use cases for AI, highlighting conflicting areas of opportunity and concern.
Our survey findings show that 40% of security practitioners are worried that AI-powered code generation will increase their workload. However, code generation is just one of many areas where AI can add value. In our survey, developers told us that they spend only 25% of their total time writing code. The rest is spent improving existing code, understanding code, testing and maintaining code, and identifying and mitigating security vulnerabilities. Organizations stand to see major productivity and collaboration benefits by applying AI across the software development lifecycle.
Approximately 50% of respondents expressed interest in AI-powered use cases across the software development lifecycle beyond code generation. In other words, there’s a strong appetite for more — and more integrated — AI spanning the breadth of the software development lifecycle.
Companies and employees are at odds over how to bridge the AI skills gap
While organizations reported optimism about their company’s use of AI, our survey shows a discrepancy between organizations’ and practitioners’ satisfaction with AI training resources.
Despite 75% of respondents saying their organization provides training and resources for using AI, a roughly equal proportion also said they are finding resources on their own, suggesting that the available resources and training within organizations may be insufficient.
With AI introducing a new set of skills to learn, 34% of respondents said they need training to use or interpret AI, and developers were significantly more likely to lack confidence in AI-generated output than security or operations respondents (38% versus 28% for both groups).
Organizations should focus on providing AI training and resources to every job role and functional area that will use AI. It is especially important that the resources for development teams are relevant, up to date, and cover the latest AI technologies and applications.
But wait, there’s more
These findings reinforce that for organizations to benefit from AI, it needs to be secure and delivered in a single application that is embedded across the entire software development lifecycle.
These core tenets guide our vision for the future of the GitLab AI-powered DevSecOps platform and GitLab Duo, our suite of AI capabilities that enables organizations to boost efficiency, productivity, and collaboration. Only by adopting a privacy-first, integrated approach to implementing AI can organizations be confident that their intellectual property is safe while empowering everyone to deliver better, more secure software faster.
To explore the full report, download the Global DevSecOps Report: The State of AI in Software Development.
“83% of DevSecOps professionals say implementing AI is essential, but privacy and security concerns are blockers to deeper adoption.” – Ashley Kramer