
As AI becomes standard, watch for these 4 DevSecOps trends

Harnessing AI to drive innovation and deliver enhanced customer value will be critical to staying competitive in the AI-driven marketplace.

January 17, 2024
David DeSanto, Chief Product Officer

AI’s role in software development is reaching a pivotal moment — one that will compel organizations and their DevSecOps leaders to be more proactive in advocating for effective and responsible AI utilization.

Simultaneously, developers and the wider DevSecOps community must prepare to address four global trends in AI: the increased use of AI in code testing, ongoing threats to IP ownership and privacy, a rise in AI bias, and — despite all of these challenges — an increased reliance on AI technologies. Successfully aligning with these trends will position organizations and DevSecOps teams for success. Ignoring them could stifle innovation or, worse, derail your business strategy.

From luxury to standard: Organizations will embrace AI across the board

Integrating AI into products and services will become standard, not a luxury, across all industries, with DevSecOps teams building AI functionality alongside the software that will use it. Harnessing AI to drive innovation and deliver enhanced customer value will be critical to staying competitive in the AI-driven marketplace.

Based on my conversations with GitLab customers and my monitoring of industry trends, I expect that as organizations push the boundaries of efficiency through AI adoption, more than two-thirds of businesses will embed AI capabilities within their offerings by the end of 2024. Organizations are evolving from experimenting with AI to becoming AI-centric.

To prepare, organizations must invest in revising software development governance and emphasizing continuous learning and adaptation in AI technologies. This will require a cultural and strategic shift. It demands rethinking business processes, product development, and customer engagement strategies. And it requires training — which DevSecOps teams say they want and need. In our latest Global DevSecOps Report, 81% of respondents said they would like more training on how to use AI effectively.

As AI becomes more sophisticated and integral to business operations, companies will need to navigate the ethical implications and societal impacts of their AI-driven solutions, ensuring that they contribute positively to their customers and communities.

AI will dominate code-testing workflows

The evolution of AI in DevSecOps is already transforming code testing, and the trend is expected to accelerate. GitLab’s research found that only 41% of DevSecOps teams currently use AI for automated test generation as part of software development, but that number is expected to reach 80% by the end of 2024 and approach 100% within two years.

As organizations integrate AI tools into their workflows, they are grappling with the challenges of aligning their current processes with the efficiency and scalability gains that AI can provide. This shift promises a radical increase in productivity and accuracy, but it also demands significant adjustments to traditional testing roles and practices. Adapting to AI-powered workflows requires training DevSecOps teams in AI oversight and fine-tuning AI systems so that their integration into code testing improves software products’ overall quality and reliability.
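To make this concrete, the sketch below shows the kind of unit tests an AI assistant might propose for a small helper function, with a human reviewer adding an edge case the model missed. The function and tests are hypothetical illustrations, not output from any specific tool:

```python
# A small function under test, and the kind of unit tests an AI
# assistant might generate for it (illustrative only; run with pytest).

def normalize_version(tag: str) -> str:
    """Strip surrounding whitespace and a leading 'v' from a version tag."""
    return tag.strip().lstrip("v")

# --- AI-suggested tests (hypothetical output) ---

def test_strips_leading_v():
    assert normalize_version("v1.2.3") == "1.2.3"

def test_strips_whitespace():
    assert normalize_version("  v2.0 ") == "2.0"

def test_plain_version_unchanged():
    assert normalize_version("3.1.4") == "3.1.4"

# --- Reviewer-added edge case the AI missed ---

def test_empty_string():
    assert normalize_version("") == ""
```

The division of labor is the point: the AI drafts broad coverage quickly, while the human reviewer supplies judgment about edge cases and intent.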

Additionally, this trend will redefine the role of quality assurance professionals, requiring them to evolve their skills to oversee and enhance AI-based testing systems. It’s impossible to overstate the importance of human oversight, as AI systems will require continuous monitoring and guidance to be highly effective.

AI’s threat to IP and privacy in software security will accelerate

The growing adoption of AI-powered code creation increases the risk of AI-introduced vulnerabilities and the chance of widespread IP leakage and data privacy breaches affecting software security, corporate confidentiality, and customer data protection.

To mitigate those risks, businesses must prioritize robust IP and privacy protections in their AI adoption strategies and ensure that AI is implemented with full transparency about how it’s being used. Implementing stringent data governance policies and employing advanced detection systems will be crucial to identifying and addressing AI-related risks. Fostering heightened awareness of these issues through employee training and encouraging a proactive risk management culture are vital to safeguarding IP and data privacy.
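As a minimal illustration of what such a detection system might check, the sketch below scans a prompt for patterns that resemble credentials or internal infrastructure before it is sent to an external AI service. The patterns and hostnames are simplified, hypothetical examples; production scanners rely on far larger rule sets and entropy-based detection:

```python
import re

# Illustrative patterns only; real secret scanners use much broader rules.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Internal hostname": re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),  # hypothetical domain
}

def scan_for_leaks(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Why does connect('db01.internal.example.com', key='AKIAABCDEFGHIJKLMNOP') time out?"
findings = scan_for_leaks(prompt)
if findings:
    # Block, redact, or route to an approved internal model instead.
    print(f"Prompt blocked; sensitive content detected: {findings}")
```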

The security challenges of AI also underscore the ongoing need to implement DevSecOps practices throughout the software development life cycle, where security and privacy are not afterthoughts but are integral parts of the development process from the outset. In short, businesses must keep security at the forefront when adopting AI, similar to the shift-left concept within DevSecOps, to ensure that innovations leveraging AI do not come at the cost of security and privacy.

Brace for a rise in AI bias before we see better days

While 2023 was AI’s breakout year, its rise put a spotlight on bias in algorithms. AI tools that rely on internet data for training inherit the full range of biases expressed across online content. This development poses a dual challenge: exacerbating existing biases and creating new ones that impact the fairness and impartiality of AI in DevSecOps.

To counteract pervasive bias, developers must focus on diversifying their training datasets, incorporating fairness metrics, and deploying bias-detection tools in AI models, as well as explore AI models designed for specific use cases. One promising avenue is using AI feedback to evaluate AI models against a clear set of principles, or a “constitution,” that establishes firm guidelines about what AI will and won’t do. Establishing ethical guidelines and conducting training interventions are crucial to ensuring unbiased AI outputs.
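As one small example of such a fairness metric, the sketch below computes the demographic parity gap, the difference in positive-outcome rates between groups, for a set of model predictions. The data is synthetic and the 0.1 review threshold is an arbitrary assumption; real bias audits combine several metrics over much larger samples:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Synthetic predictions (1 = positive outcome) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.60 - 0.20 = 0.40
if gap > 0.1:  # arbitrary illustrative threshold
    print("Potential bias: flag this model for review.")
```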

Organizations must establish robust data governance frameworks to ensure the quality and reliability of the data in their AI systems. AI systems are only as good as the data they process, and bad data can lead to inaccurate outputs and poor decisions.

Developers and the broader tech community should demand and facilitate the development of unbiased AI through techniques such as constitutional AI or reinforcement learning from human feedback (RLHF) aimed at reducing bias. This requires a concerted effort across AI providers and users to ensure responsible AI development that prioritizes fairness and transparency.
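To illustrate the constitutional approach, here is a minimal sketch of a critique-and-revise loop in which a model evaluates its own draft against a short list of principles. The generate function is a hypothetical stand-in for any LLM call, and the two principles are simplified examples:

```python
# Hypothetical constitution: a short list of plain-language principles.
CONSTITUTION = [
    "Do not make assumptions about a person based on group membership.",
    "Responses should be equally helpful regardless of who is asking.",
]

def generate(prompt: str) -> str:
    """Stand-in for a real model call; swap in your provider's API here."""
    return f"[model output for: {prompt[:40]}...]"  # stub so the sketch runs

def critique_and_revise(draft: str) -> str:
    """Have the model critique its draft against each principle, then revise."""
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle.\n"
            f"Principle: {principle}\nResponse: {draft}"
        )
        draft = generate(
            f"Rewrite the response to address this critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft

print(critique_and_revise("Draft answer to a user question."))
```

In practice, constitutional AI typically uses this kind of AI feedback to produce preference data for training, rather than running the loop on every request.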

Preparing for the AI revolution in DevSecOps

As organizations ramp up their shift toward AI-centric business models, it’s not just about staying competitive — it’s also about survival. Business leaders and DevSecOps teams will need to confront the challenges that AI amplifies, whether threats to privacy, doubts about trusting what AI produces, or cultural resistance.

Collectively, these developments represent a new era in software development and security. Navigating these changes requires a comprehensive approach encompassing ethical AI development and use, vigilant security and governance measures, and a commitment to preserving privacy. The actions organizations and DevSecOps teams take now will set the course for the long-term future of AI in DevSecOps, ensuring its ethical, secure, and beneficial deployment.

This article was originally published January 7, 2024, on TechCrunch.

Key takeaways
  • AI in DevSecOps demands proactive advocacy for responsible use and attention to global trends like AI bias and privacy risks.
  • Embracing AI in code testing will redefine QA roles, requiring new skills and oversight for improved software quality.
  • GitLab Duo offers AI benefits with clear ownership and privacy commitments.