Published on: July 31, 2025

3 min read

Securing AI together: GitLab’s partnership with security researchers

Learn how GitLab collaborates with security researchers to identify and defend against emerging threats.

As GitLab's Senior Director of Application Security, my primary mission is straightforward: to protect our customers from harm caused by software vulnerabilities. In an era where AI is transforming how we build software, this mission has taken on new dimensions and urgency. Here’s how we're working with the global security research community to make GitLab Duo Agent Platform secure against emerging threats.

The AI security challenge

AI-powered platforms create incredible productivity for engineers. However, the ability to create code also brings a crucial need for robust security. For example, prompt injection attacks embed hidden instructions in comments, source code, and merge request descriptions. These can guide AI into making attacker-controlled recommendations to the user or, in some cases, autonomously taking unintended action. Addressing these risks helps ensure the responsible and secure evolution of AI in development.

GitLab’s security and engineering teams work diligently to provide customers with a safe and secure platform. Partnerships with external security researchers, such as Persistent Security, are an integral part of that approach.

Our commitment to transparent collaboration

GitLab's AI Transparency Center details how we uphold ethics and transparency in our development and use of AI-powered features. This commitment extends to our collaboration with security researchers.

When Persistent Security reached out to GitLab to discuss a complex prompt injection issue with industry-wide impact, they were quickly connected to the GitLab Product Security Response Team to investigate if any of our products were affected.

Through this dialogue, we were able to quickly identify and implement mitigations that were deployed prior to the public beta of GitLab Duo Agent Platform in July 2025. This rapid response exemplifies our approach to working with security researchers and collaborating transparently throughout the process to coordinate remediation and disclosure to protect customers.

Why external research matters for AI security

AI systems present unique security challenges that require diverse perspectives and specialized expertise.

External researchers are essential for:

  • Rapid Threat Evolution: AI security threats evolve quickly. The research community helps us stay ahead of emerging attack patterns, from prompt injection techniques to novel ways of manipulating AI responses.
  • Real-World Testing: External researchers test our systems in ways that mirror actual attacker behavior, providing invaluable insights into how our defenses perform under pressure.
  • Diverse Expertise: External security researchers often demonstrate exceptional creativity, with reports standing out for innovative approaches to identifying complex vulnerabilities. This diversity of thinking strengthens our overall security posture.

Our ongoing commitment

The security research community remains a crucial partner in our mission to protect customers. We're committed to:

  • Providing clear guidance to researchers about our AI systems and security boundaries
  • Maintaining rapid response times for security disclosures
  • Sharing our learnings with the broader community through public disclosure and research

The future of AI security depends on collaboration between organizations like GitLab and the security research community. By working together, we can ensure that AI remains a force for productivity and innovation while protecting our customers and users from harm.

To our security research partners: thank you for your partnership, making us stronger, more secure, and better prepared for the challenges ahead. I’ll be at Black Hat August 6-7, 2025, and look forward to connecting with AI security researchers there. You can reach me through the Black Hat mobile app or on LinkedIn.

Do you want to play a role in keeping GitLab secure? Visit our HackerOne program to get started, or learn more about our AI security practices at our AI Transparency Center.

We want to hear from you

Enjoyed reading this blog post or have questions or feedback? Share your thoughts by creating a new topic in the GitLab community forum.
Share your feedback

50%+ of the Fortune 100 trust GitLab

Start shipping better software faster

See what your team can do with the intelligent

DevSecOps platform.