
AI agents are reshaping software: What CISOs need to know

Most executives believe AI agents will dominate software development by 2028. Here’s what security leaders must do to prepare today.

October 21, 2025 · 5 min read
Josh Lemos, Chief Information Security Officer

New research from GitLab shows that 89% of C-level executives surveyed expect AI agents will become the standard approach for building software within three years. This transformation brings significant security implications, as 85% of these leaders recognize that AI agents will introduce never-before-seen security challenges.

The findings highlight a critical dilemma facing CISOs and security professionals: They can’t afford to pause AI adoption, but they must address the emerging risks it creates. With 91% of executives surveyed planning to boost their AI investments in software development over the next 18 months, each new AI breakthrough intensifies these security concerns.

AI governance gaps create adoption barriers

Security leaders clearly understand the primary risks associated with AI agents. Survey participants identified cybersecurity threats (52%), data privacy and security concerns (51%), and governance challenges (45%) as their top worries. These interconnected risks continue to evolve as the technology advances.

Organizations need robust AI governance frameworks to adapt their security approaches as threats emerge. However, this is easier said than done, since AI touches multiple technology areas, from data governance to identity and access management. GitLab’s research indicates that organizations are falling behind: 47% of surveyed leaders said their organizations haven’t implemented regulatory-aligned governance for AI, and 48% said they lack internal AI policies.

This governance gap is the result of legitimate industry-wide challenges that make it difficult for leaders to focus their efforts effectively. AI agents behave unpredictably due to their non-deterministic nature, which disrupts traditional security boundaries. Additionally, new universal protocols such as Model Context Protocol (MCP) and Agent2Agent (A2A), which simplify data access and improve how agents work together, increase security complexity because they expand the attack surface and create new pathways for unauthorized access across interconnected systems.
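To make the attack-surface point concrete, here is a minimal, hypothetical sketch of a deny-by-default policy gate in front of agent tool calls. The agent names, tool names, and policy table are illustrative assumptions, not part of any MCP or A2A SDK.

```python
# Hypothetical sketch: a deny-by-default allowlist in front of agent
# tool calls. Protocols like MCP let agents reach many tools; a policy
# layer like this narrows that expanded attack surface. All names below
# are illustrative.
ALLOWED_TOOLS = {
    "code-review-agent": {"read_file", "post_comment"},
    "triage-agent": {"read_issue", "label_issue"},
}

def gate_tool_call(agent_id: str, tool: str) -> bool:
    permitted = tool in ALLOWED_TOOLS.get(agent_id, set())
    if not permitted:
        # Denied calls are high-signal events: log and alert on them.
        print(f"BLOCKED: {agent_id} attempted {tool}")
    return permitted

gate_tool_call("code-review-agent", "post_comment")   # permitted
gate_tool_call("code-review-agent", "delete_branch")  # blocked, logged
```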

However, these challenges shouldn’t stop security leaders from prioritizing AI governance. Organizations waiting for comprehensive AI best practices will find themselves constantly behind the curve, and those that avoid AI altogether will still be exposed to AI risks through vendor relationships and unauthorized AI use within their environments.

Practical steps CISOs can take for AI agent readiness

Security leaders should start by establishing AI observability systems that can track, audit, and attribute agent behaviors across all environments. Here are a few steps CISOs can take today to reduce AI risk and improve governance.

Establish identity policies that create accountability for agent actions

As AI systems proliferate, managing non-human identities will be just as critical as controlling human user access. Composite identities offer one solution by connecting AI agent credentials with the human users who direct them. This approach helps organizations authenticate and authorize agents while maintaining clear accountability for their actions.
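As an illustration, the following minimal Python sketch shows one way a composite identity might be modeled. The field names, scope strings, and authorization rule are assumptions made for this example, not a specific product’s API.

```python
# Hypothetical sketch: a composite identity pairing an AI agent's
# service credential with the human who directed it.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompositeIdentity:
    agent_id: str             # non-human identity, e.g., a service account
    delegating_user: str      # human who initiated the agent's task
    user_scopes: frozenset    # permissions granted to the human
    agent_scopes: frozenset   # permissions granted to the agent itself

    def effective_scopes(self) -> frozenset:
        # The agent may act only where BOTH identities are authorized,
        # so a compromised agent cannot exceed its human principal.
        return self.user_scopes & self.agent_scopes

def authorize(identity: CompositeIdentity, required_scope: str) -> bool:
    allowed = required_scope in identity.effective_scopes()
    # Every decision is attributable to both the agent and the human.
    print(f"{identity.agent_id} on behalf of {identity.delegating_user}: "
          f"{required_scope} -> {'ALLOW' if allowed else 'DENY'}")
    return allowed

identity = CompositeIdentity(
    agent_id="code-review-agent",
    delegating_user="jlemos",
    user_scopes=frozenset({"repo:read", "mr:comment", "repo:write"}),
    agent_scopes=frozenset({"repo:read", "mr:comment"}),
)
authorize(identity, "mr:comment")  # ALLOW: both identities hold it
authorize(identity, "repo:write")  # DENY: the agent was never granted it
```

The intersection rule is the key design choice: the agent inherits nothing beyond what its human principal holds, and the human’s access never silently expands through the agent.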

Implement comprehensive monitoring frameworks

Development, operations, and security teams require visibility into AI agent activities across various workflows, processes, and systems. Monitoring cannot stop at code repositories. Teams must track agent behavior in staging environments, production systems, connected databases, and all applications the agents can access.
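One way to make that attribution concrete is to emit a single structured audit record per agent action from every environment the agent touches. The sketch below is hypothetical; the field names are illustrative and would need to be mapped onto your SIEM’s schema.

```python
# Hypothetical sketch: one structured audit record per agent action,
# emitted wherever the agent operates (repo, staging, production, DB).
import json
import uuid
from datetime import datetime, timezone

def audit_agent_action(agent_id: str, on_behalf_of: str, environment: str,
                       resource: str, action: str, outcome: str) -> dict:
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,          # non-human identity
        "on_behalf_of": on_behalf_of,  # human accountability
        "environment": environment,    # repo / staging / prod / database
        "resource": resource,
        "action": action,
        "outcome": outcome,            # allowed / denied / error
    }
    # Ship to the central log pipeline; stdout stands in here.
    print(json.dumps(event))
    return event

audit_agent_action("deploy-agent", "jlemos", "staging",
                   "payments-service", "rollout", "allowed")
```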

Develop team AI capabilities

AI literacy is now a must-have for security teams. In GitLab’s survey, 43% of respondents acknowledged a growing AI skills gap, and this is likely to expand unless technical leaders invest in team education. Training should cover model behavior, prompt engineering, and critical evaluation of model inputs and outputs.

Knowing where models excel and where they underperform helps teams avoid unnecessary security risks and technical debt. For instance, models trained on known anti-patterns detect those specific issues effectively but struggle with unfamiliar logic bugs. And when a model underperforms in an area where security engineers or developers also lack experience, the resulting gaps go unnoticed by both. The remedy is to ensure teams have enough expertise to validate AI outputs and catch the errors models miss.
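In practice, validating AI outputs can mean gating AI-suggested changes through the same checks human-written code must pass before anyone accepts them. The sketch below assumes a team that already runs a static analyzer and a test suite; the specific commands (semgrep, pytest) are examples, not a prescribed toolchain.

```python
# Hypothetical sketch: never accept AI-suggested code on model
# confidence alone; run it through the same gates as human code.
import subprocess

def validate_ai_patch(patch_dir: str) -> bool:
    checks = [
        ["semgrep", "scan", "--error", patch_dir],  # static analysis
        ["pytest", patch_dir],                      # existing test suite
    ]
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"Check failed: {' '.join(cmd)}\n{result.stdout}")
            return False  # route to a human reviewer, don't auto-merge
    return True
```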

CISOs should consider dedicating a portion of learning and development budgets to continuous technical education. This builds internal AI security expertise, creating AI champions who can train colleagues and reinforce good practices.

Security benefits outweigh AI adoption risks

Properly monitored and implemented, AI actually enhances security outcomes. In fact, 45% of survey respondents ranked security as the top area where AI can add value for software development. When used to accelerate rather than replace human expertise, AI can democratize security knowledge across development teams by automating routine security tasks, providing intelligent coding suggestions, and offering security context within developer workflows.

For example, AI can explain vulnerabilities, enabling developers to resolve issues quickly without waiting for security team guidance. These capabilities help improve security outcomes, reduce risk exposure, and increase understanding between development and security teams.

Success belongs to organizations that embrace AI — but do so carefully. Even imperfect foundational controls help teams adapt as conditions change. If the executives surveyed are right, the three-year clock is already ticking. Leaders who guide their teams toward the right AI use cases won’t just minimize risk; they will gain a competitive advantage. After all, the security of your software is a core component of its quality.

Next steps

Research Report: The Economics of Software Innovation

Learn what global C-suite executives are saying about AI-powered business growth, agentic AI adoption, upskilling, and how to demonstrate the impact of software innovation.

Read the report
Key takeaways
  • Nearly 9 in 10 executives expect AI agents to become standard in software development within three years, creating urgent security challenges.
  • Organizations lack proper AI governance, with nearly half missing regulatory compliance and internal policies for artificial intelligence systems.
  • Security leaders can prepare by implementing identity policies, monitoring frameworks, and upskilling teams for the AI-driven software future.
