[{"data":1,"prerenderedAt":890},["ShallowReactive",2],{"/en-us/topics/devops/secure-ai-code-completion":3,"navigation-en-us":224,"banner-en-us":631,"footer-en-us":641,"next-steps-en-us":880},{"id":4,"title":5,"body":6,"category":6,"config":6,"content":7,"description":6,"extension":214,"meta":215,"navigation":216,"path":217,"seo":218,"slug":6,"stem":222,"testContent":6,"type":6,"__hash__":223},"pages/en-us/topics/devops/secure-ai-code-completion/index.yml","",null,[8,22,29,153,175,212],{"type":9,"componentName":9,"componentContent":10},"CommonBreadcrumbs",{"crumbs":11},[12,16,20],{"config":13,"title":15},{"href":14},"/topics/","Topics",{"title":17,"config":18},"DevOps",{"href":19},"/topics/devops/",{"title":21},"Secure AI-powered code completion",{"type":23,"componentName":23,"componentContent":24},"CommonArticleHero",{"title":25,"text":26,"config":27},"The complete guide to secure AI-powered code completion","AI coding tools help teams accelerate delivery cycles and reduce cognitive load on engineering teams. But rapid adoption has outpaced the security, privacy, and compliance frameworks designed to govern it, creating challenges that traditional frameworks weren’t designed to address.",{"id":28},"the-complete-guide-to-secure-ai-powered-code-completion",{"type":30,"componentName":30,"componentContent":31},"CommonSideNavigationWithTree",{"components":32,"anchors":112},[33,40,46,52,58,64,70,76,82,88,94,100,106],{"type":34,"componentName":34,"componentContent":35},"TopicsCopy",{"header":36,"text":37,"config":38},"What is AI-powered code completion?","AI-powered code completion analyzes your codebase's context and structure to suggest the next line or block of code in real time. 
These tools use machine learning models trained on millions of lines of code to automate repetitive tasks, minimize syntax errors, and help developers discover APIs and libraries faster.",{"id":39},"what-is-ai-powered-code-completion",{"type":34,"componentName":34,"componentContent":41},{"header":42,"text":43,"config":44},"What are AI code security risks?","[AI code assistants](/solutions/code-suggestions/) introduce security challenges beyond traditional software vulnerabilities. The most critical risk is insecure code generation, where AI models suggest patterns containing known security flaws, missing input validation, weak authentication, or inadequate encryption.\n\nAI models trained on public repositories learn from existing security flaws in open source code. When a model encounters vulnerable patterns repeatedly during training, it may reproduce similar insecure implementations. This creates a feedback loop where historical security mistakes become embedded in new codebases.\n\n### What is prompt injection?\nPrompt injection is an attack vector unique to AI development tools. Attackers embed adversarial instructions inside code comments, variable names, or documentation strings, causing the AI to generate malicious code or expose sensitive information. The model cannot distinguish legitimate context from crafted attack instructions.\n\n### How can AI tools expose sensitive data?\nSome AI code assistants transmit code snippets to cloud services for processing, potentially exposing proprietary algorithms, credentials, or customer data. 
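One common mitigation is a client-side filter that redacts obvious secrets before a snippet ever leaves the developer's machine. The sketch below is a minimal illustration with two example patterns; real secret scanners use far larger rulesets, and the patterns here are assumptions, not a production policy.

```python
import re

# Illustrative patterns only; production secret detection uses
# hundreds of rules and entropy checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*\S+"),  # hardcoded credential assignment
]

def redact(snippet: str) -> str:
    """Replace likely secrets with a placeholder before the snippet
    is sent to a cloud completion service."""
    for pattern in SECRET_PATTERNS:
        snippet = pattern.sub("[REDACTED]", snippet)
    return snippet

print(redact('password = "hunter2"'))  # prints [REDACTED]
```

A filter like this reduces, but does not eliminate, exposure risk; it should complement, not replace, a tool's own privacy controls.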
Even tools that claim to anonymize data can leak sensitive information through model outputs or training data contamination.\n\n### What real-world vulnerabilities have AI code tools generated?\nDocumented examples include [AI tools suggesting code](/topics/devops/ai-for-coding/) that logs sensitive user data without encryption, recommending deprecated libraries with known Common Vulnerabilities and Exposures (CVEs), and generating authentication logic without rate limiting. In one case, an AI tool suggested hardcoding database credentials directly in source files rather than using environment variables or a secret manager.",{"id":45},"what-are-ai-code-security-risks",{"type":34,"componentName":34,"componentContent":47},{"header":48,"config":49,"text":51},"What is an SBOM in AI development?",{"id":50},"what-is-an-sbom-in-ai-development","A Software Bill of Materials (SBOM) provides comprehensive visibility into every component, library, and dependency in a software project. For AI-assisted development, an [SBOM](/blog/the-ultimate-guide-to-sboms/) tracks which code the AI suggested, which third-party packages it incorporated, and how these elements affect your security posture.\n\nSBOMs are the foundation for rapid vulnerability response. When a new security advisory affects a specific library version, an SBOM lets teams immediately identify all affected projects and prioritize remediation. This is especially critical when AI tools rapidly introduce dependencies that developers may not fully vet.\n\n### What standards should teams use to generate SBOMs?\nTeams should adopt standardized formats like SPDX or CycloneDX. These specifications define how to document component names, versions, licenses, suppliers, and dependency relationships in machine-readable formats. 
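As a rough sketch of what such a machine-readable record looks like, the snippet below builds a minimal CycloneDX-style document in Python and queries it the way a vulnerability-response workflow might. The component names, versions, and the `affected` helper are illustrative assumptions; in practice the CI platform generates and queries the SBOM.

```python
import json

# A minimal, hand-built CycloneDX-style SBOM document (illustrative).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "requests",
            "version": "2.31.0",
            # The package URL pins the exact artifact for advisory matching.
            "purl": "pkg:pypi/requests@2.31.0",
        },
    ],
}

def affected(sbom: dict, name: str, bad_versions: set) -> list:
    """Return components matched by a (hypothetical) security advisory."""
    return [c for c in sbom["components"]
            if c["name"] == name and c["version"] in bad_versions]

print(json.dumps(affected(sbom, "requests", {"2.31.0"}), indent=2))
```

Because every component carries a name, version, and package URL, matching a new advisory against all projects becomes a simple lookup rather than a manual audit.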
[Modern CI/CD](/topics/ci-cd/) platforms, including GitLab, can automatically generate SBOMs during builds, keeping documentation synchronized with the live codebase.",{"type":34,"componentName":34,"componentContent":53},{"header":54,"text":55,"config":56},"How do you integrate AI security tooling?","Automated security tooling forms the technical infrastructure of secure AI-assisted development. SAST, DAST, and SCA tools must run continuously within [CI/CD pipelines](/topics/ci-cd/cicd-pipeline/) to catch vulnerabilities before they reach production.\n\nThe integration follows four stages:\n1. AI generates code based on developer context and prompts.\n2. Automated tools immediately scan for security issues, known vulnerability patterns, and policy violations.\n3. Results are presented to the developer with severity ratings and remediation guidance.\n4. The developer fixes identified issues or approves passing code for merge.\n\n### Why must security scans run automatically on every code change?\nManual security scanning creates gaps where vulnerable code slips through. AI assistants generate dozens, if not hundreds, of suggestions daily, making thorough manual review impractical. Automated scans on every change, including AI suggestions, are the only reliable way to maintain consistent security. The resulting scan logs also provide auditable evidence for compliance verification.",{"id":57},"how-do-you-integrate-ai-security-tooling",{"type":34,"componentName":34,"componentContent":59},{"header":60,"config":61,"text":63},"How do you manage AI dependencies safely?",{"id":62},"how-do-you-manage-ai-dependencies-safely","Dependencies represent one of the highest risks in AI-assisted development because code assistants frequently suggest importing external libraries. 
Without careful management, these dependencies introduce vulnerabilities, licensing conflicts, or supply chain security risks.\n\nDependency auditing should run continuously, comparing your SBOM against current vulnerability databases. When a new CVE affects a dependency, automated systems should flag the issue immediately and create remediation tickets. The SBOM is the authoritative source for identifying affected projects and prioritizing updates.\n\n### Dependency hygiene practices for development teams\nHere are several best practices to follow to prevent vulnerabilities within dependencies:\n* Verify package sources and maintainer reputation before adding new dependencies.\n* Lock dependency versions in manifest files to ensure reproducible builds.\n* Schedule regular vulnerability scans and prioritize updates by severity and exploitability.\n* Remove unused dependencies to reduce attack surface.\n* Monitor for dependency confusion attacks using names similar to internal libraries.",{"type":34,"componentName":34,"componentContent":65},{"header":66,"config":67,"text":69},"How do you review AI-generated code?",{"id":68},"how-do-you-review-ai-generated-code","Security reviews are the critical human checkpoint in AI-assisted workflows. Automated tools catch many vulnerability classes, but they cannot assess business logic flaws, evaluate security architecture decisions, or identify context-specific risks that require human judgment.\n\nDevelopers can insert TODO comments or security review tags when AI generates functions handling authentication or sensitive data. 
These markers prevent code from merging until a security engineer approves it, making security review an explicit and trackable step rather than an implicit expectation.\n\nHigh-risk categories that require mandatory human security review include:\n* Authentication and authorization logic\n* Data encryption and decryption operations\n* Input validation for user-facing features\n* Database queries\n* Infrastructure-as-Code (IaC) for production environments\n\n### Is human judgment still required?\nAutomated tools excel at finding known vulnerability patterns at scale. Human reviewers, on the other hand, can identify novel security issues, assess risk in context, and make judgment calls about acceptable trade-offs. When securing AI-generated code, both automated and human review are necessary. Defense in depth is what makes AI-generated code secure.",{"type":34,"componentName":34,"componentContent":71},{"header":72,"text":73,"config":74},"Can AI enforce secure coding standards?","AI code assistants can actively reinforce secure coding standards when properly configured, generating code that follows organizational security policies from the start. 
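One concrete example of the kind of rule such standards encode is allow-list input validation. The Python sketch below is a minimal illustration; the username rule is an assumption for demonstration, not an organizational standard.

```python
import re

# Allow-list validation: accept only what is explicitly permitted.
# Deny-lists (stripping known-bad characters) are easy to bypass.
USERNAME = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(raw: str) -> str:
    """Reject anything outside the allow-list before it reaches
    business logic or a database query."""
    if not USERNAME.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

validate_username("alice_01")         # passes
# validate_username("alice; drop")    # would raise ValueError
```

When a guideline is expressed as a reusable validator like this, an AI assistant can be prompted to call it consistently instead of improvising ad-hoc checks.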
Implementation begins with defining clear, technology-specific secure coding guidelines covering input validation, output encoding, error handling, logging, and cryptographic requirements.\n\nSome secure coding requirements AI can actively help enforce include:\n* Input validation that sanitizes all user-provided data before processing\n* Output escaping that prevents injection attacks in web applications\n* Error handling that logs security events without exposing sensitive information to users\n* Secure credential management using environment variables or secret management services\n* Cryptographic operations using approved algorithms and key lengths\n\n### Why do Infrastructure-as-Code files deserve special security attention?\n[IaC](/topics/gitops/infrastructure-as-code/) and cloud provisioning scripts can expose entire environments when misconfigured. AI assistants generating Terraform, CloudFormation, or [Kubernetes](/solutions/kubernetes/) manifests should follow principles including least privilege access, encryption in transit and at rest, network segmentation, and audit logging. Organizations should maintain secure IaC template libraries for AI tools to reference.",{"id":75},"can-ai-enforce-secure-coding-standards",{"type":34,"componentName":34,"componentContent":77},{"header":78,"text":79,"config":80},"How do you defend against prompt injection attacks?","Prompt injection exploits how AI models treat all text in their context window as potentially relevant input. 
An attacker embedding instructions in pull request comments, documentation strings, or variable names may cause the AI to generate malicious code or disable security features without the developer realizing it.\n\nDefense requires multiple layers:\n* Input filtering to detect and remove suspicious patterns from code comments and documentation before AI tools process them\n* Automated monitoring to flag when generated code modifies authentication logic, changes access controls, or introduces new external dependencies\n* Mandatory human review for all security-critical functions before code merges\n* Limiting the context AI tools can access, particularly for sensitive projects\n\n### Why must developers treat AI-generated code as untrusted input?\nDevelopers and security reviewers must understand that AI-generated code should never be trusted blindly, especially for security-critical functions. Establishing a culture where teams question and verify AI suggestions helps prevent both accidental vulnerabilities and deliberate prompt injection attacks from succeeding.",{"id":81},"how-do-you-defend-against-prompt-injection-attacks",{"type":34,"componentName":34,"componentContent":83},{"header":84,"text":85,"config":86},"How do teams integrate human oversight in AI workflows?","To create effective, collaborative workflows, teams integrate AI as a team member that requires supervision, not as a fully autonomous agent. Developers generate code with AI assistance, reviewers evaluate functional correctness and security implications, and security engineers examine authentication, authorization, and data handling logic.\n\n### Roles required for secure AI-assisted development\nTeams must work collaboratively across the software lifecycle to assess the functionality and security of AI-assisted development. 
\n* **Developers** prompt and guide AI tools while understanding feature requirements and user workflows\n* **Reviewers** verify code quality and functionality, and flag maintainability issues\n* **Security engineers** specifically assess vulnerability patterns and attack vectors in AI-generated code\n\n### Why is documentation important for AI-generated code?\nThe teams who inherit this code need to understand not just what the code does but why it was written that way. Comments should indicate when AI generated the code and what prompts or context guided the creation. This transparency helps teams identify patterns of AI-suggested vulnerabilities and refine their AI usage practices over time.",{"id":87},"how-do-teams-integrate-human-oversight-in-ai-workflows",{"type":34,"componentName":34,"componentContent":89},{"header":90,"text":91,"config":92},"How do you evaluate AI code tools?","[Selecting an AI code completion tool](/gitlab-duo-agent-platform/) requires evaluating security capabilities, privacy protections, and compliance features. Key criteria include built-in security scanning, data privacy, and audit trails. 
\n\nSome organizations require certifications from AI tools, such as:\n* **SOC 2** verifies controls around security, availability, and confidentiality\n* **GDPR compliance** demonstrates appropriate handling of European user data\n* **HIPAA eligibility** confirms the tool can be used with protected health information when properly configured",{"id":93},"how-do-you-evaluate-ai-code-tools",{"type":34,"componentName":34,"componentContent":95},{"header":96,"text":97,"config":98},"What is an AI governance framework?","An [AI governance framework](/blog/a-developers-guide-to-building-an-ai-security-governance-framework/) provides the organizational structure for managing AI tool adoption, usage policies, and risk management.\n\nA governance framework defines:\n* Who can approve new AI tools\n* What security reviews are required before deployment\n* How AI-generated code is tracked and audited\n* How the organization responds when AI tools suggest vulnerable code or expose sensitive data\n\nAs AI models get better at detecting complex vulnerability patterns and business logic flaws, human oversight will shift away from routine detection. Instead, teams will focus on novel attack vectors and strategic security decisions that AI cannot assess.",{"id":99},"what-is-an-ai-governance-framework",{"type":34,"componentName":34,"componentContent":101},{"header":102,"text":103,"config":104},"What is real-time compliance monitoring?","Real-time compliance monitoring means AI tools continuously verify that code meets regulatory requirements as developers write it. 
Rather than discovering HIPAA, PCI-DSS, or GDPR violations during audits, AI assistants flag compliance issues immediately, preventing non-compliant code from ever being committed.",{"id":105},"what-is-real-time-compliance-monitoring",{"type":34,"componentName":34,"componentContent":107},{"header":108,"text":109,"config":110},"What is the future of secure AI coding?","Secure AI coding is becoming increasingly sophisticated with capabilities that will reshape security practices.\n\nMany workflows today rely on a single AI assistant and human reviewer, but future workflows will orchestrate multiple agents working in parallel, catching vulnerabilities faster and earlier.\n\nAs AI systems take on more of the detection and generation work, human oversight doesn't diminish. Security engineers will spend less time on routine pattern matching and more time on the threats AI cannot anticipate: novel attack vectors, business logic flaws, and decisions that require organizational context.\n\nTo successfully secure AI development, teams will build clear boundaries between what AI handles and what humans own, and create collaborative 
workflows.",{"id":111},"what-is-the-future-of-secure-ai-coding",{"data":113},[114,117,120,123,126,129,132,135,138,141,144,147,150],{"config":115,"text":36},{"href":116},"#what-is-ai-powered-code-completion",{"config":118,"text":42},{"href":119},"#what-are-ai-code-security-risks",{"config":121,"text":48},{"href":122},"#what-is-an-sbom-in-ai-development",{"config":124,"text":54},{"href":125},"#how-do-you-integrate-ai-security-tooling",{"config":127,"text":60},{"href":128},"#how-do-you-manage-ai-dependencies-safely",{"config":130,"text":66},{"href":131},"#how-do-you-review-ai-generated-code",{"config":133,"text":72},{"href":134},"#can-ai-enforce-secure-coding-standards",{"config":136,"text":78},{"href":137},"#how-do-you-defend-against-prompt-injection-attacks",{"config":139,"text":84},{"href":140},"#how-do-teams-integrate-human-oversight-in-ai-workflows",{"config":142,"text":90},{"href":143},"#how-do-you-evaluate-ai-code-tools",{"config":145,"text":96},{"href":146},"#what-is-an-ai-governance-framework",{"config":148,"text":102},{"href":149},"#what-is-real-time-compliance-monitoring",{"config":151,"text":108},{"href":152},"#what-is-the-future-of-secure-ai-coding",{"type":154,"componentName":154,"componentContent":155},"CommonFaq",{"header":156,"groups":157},"Frequently Asked Questions",[158],{"questions":159},[160,163,166,169,172],{"question":161,"answer":162},"How does AI-powered code completion work?","AI-powered code completion analyzes your code's context and structure to offer real-time suggestions, using machine learning models trained on large codebases to predict the next line or block.",{"question":164,"answer":165},"What security risks should developers be aware of with AI code assistants?","Developers should watch for generated code that introduces vulnerabilities, follows outdated practices, or accidentally exposes sensitive data, and always review AI-suggested code before deployment.",{"question":167,"answer":168},"How can teams ensure AI-generated code is 
secure before deployment?","Teams should review AI-generated code, use automated security scanning tools in their CI/CD pipeline, and enforce secure coding standards consistently across all contributions.",{"question":170,"answer":171},"Are AI code completion tools safe for handling private or proprietary code?","Some tools offer features that keep code private or run locally, but developers should review privacy settings and policies before using them with sensitive information.",{"question":173,"answer":174},"What best practices should be followed when using AI code completion in development?","Follow secure coding standards, validate all AI-generated code with automated tools, and perform regular code reviews to ensure both code quality and security.",{"type":176,"componentName":176,"componentContent":177},"CommonResourcesContainer",{"header":178,"tabs":179},"Related resources",[180],{"name":181,"config":182,"items":183},"resources",{"key":181},[184,194,204],{"header":185,"type":186,"image":187,"link":190},"10 AI prompts to speed your team’s software delivery","Blog",{"altText":185,"config":188},{"src":189},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772632341/duj8vaznbhtyxxhodb17.png",{"text":191,"config":192},"Learn more",{"href":193,"icon":186},"https://about.gitlab.com/blog/10-ai-prompts-to-speed-your-teams-software-delivery/",{"header":195,"type":196,"image":197,"link":200},"Is AI achieving its promise at scale?","Web",{"altText":195,"config":198},{"src":199},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772134325/grvjf4696dexax95lytn.jpg",{"text":201,"config":202},"Get your AI maturity score",{"href":203,"icon":196},"https://about.gitlab.com/assessments/ai-modernization-assessment/",{"header":205,"type":186,"image":206,"link":209},"Introduction to GitLab Duo Agent 
Platform",{"altText":205,"config":207},{"src":208},"https://res.cloudinary.com/about-gitlab-com/image/upload/f_auto,q_auto,c_lfill/v1765809212/noh0mdfn9o94ry9ykura.png",{"text":191,"config":210},{"href":211,"icon":186},"https://about.gitlab.com/blog/introduction-to-gitlab-duo-agent-platform/",{"type":213,"componentName":213},"CommonNextSteps","yml",{},true,"/en-us/topics/devops/secure-ai-code-completion",{"config":219,"title":25,"ogTitle":25,"description":221,"ogDescription":221},{"noIndex":220},false,"Secure AI code completion requires SBOM tracking, automated SAST/DAST scanning, prompt injection defenses, and human oversight of all security-critical code.","en-us/topics/devops/secure-ai-code-completion/index","MTLFTKMMMuh80a9G1VEK8dVu72Ztm40pWTaFzvQ24SA",{"data":225},{"logo":226,"freeTrial":231,"sales":236,"login":241,"items":246,"search":551,"minimal":582,"duo":601,"switchNav":610,"pricingDeployment":621},{"config":227},{"href":228,"dataGaName":229,"dataGaLocation":230},"/","gitlab logo","header",{"text":232,"config":233},"Get free trial",{"href":234,"dataGaName":235,"dataGaLocation":230},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":237,"config":238},"Talk to sales",{"href":239,"dataGaName":240,"dataGaLocation":230},"/sales/","sales",{"text":242,"config":243},"Sign in",{"href":244,"dataGaName":245,"dataGaLocation":230},"https://gitlab.com/users/sign_in/","sign in",[247,273,368,373,472,532],{"text":248,"config":249,"cards":251},"Platform",{"dataNavLevelOne":250},"platform",[252,258,266],{"title":248,"description":253,"link":254},"The intelligent orchestration platform for DevSecOps",{"text":255,"config":256},"Explore our Platform",{"href":257,"dataGaName":250,"dataGaLocation":230},"/platform/",{"title":259,"description":260,"link":261},"GitLab Duo Agent Platform","Agentic AI for the entire software lifecycle",{"text":262,"config":263},"Meet GitLab 
Duo",{"href":264,"dataGaName":265,"dataGaLocation":230},"/gitlab-duo-agent-platform/","gitlab duo agent platform",{"title":267,"description":268,"link":269},"Why GitLab","See the top reasons enterprises choose GitLab",{"text":191,"config":270},{"href":271,"dataGaName":272,"dataGaLocation":230},"/why-gitlab/","why gitlab",{"text":274,"left":216,"config":275,"link":277,"lists":281,"footer":350},"Product",{"dataNavLevelOne":276},"solutions",{"text":278,"config":279},"View all Solutions",{"href":280,"dataGaName":276,"dataGaLocation":230},"/solutions/",[282,306,329],{"title":283,"description":284,"link":285,"items":290},"Automation","CI/CD and automation to accelerate deployment",{"config":286},{"icon":287,"href":288,"dataGaName":289,"dataGaLocation":230},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[291,295,298,302],{"text":292,"config":293},"CI/CD",{"href":294,"dataGaLocation":230,"dataGaName":292},"/solutions/continuous-integration/",{"text":259,"config":296},{"href":264,"dataGaLocation":230,"dataGaName":297},"gitlab duo agent platform - product menu",{"text":299,"config":300},"Source Code Management",{"href":301,"dataGaLocation":230,"dataGaName":299},"/solutions/source-code-management/",{"text":303,"config":304},"Automated Software Delivery",{"href":288,"dataGaLocation":230,"dataGaName":305},"Automated software delivery",{"title":307,"description":308,"link":309,"items":314},"Security","Deliver code faster without compromising security",{"config":310},{"href":311,"dataGaName":312,"dataGaLocation":230,"icon":313},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[315,319,324],{"text":316,"config":317},"Application Security Testing",{"href":311,"dataGaName":318,"dataGaLocation":230},"Application security testing",{"text":320,"config":321},"Software Supply Chain Security",{"href":322,"dataGaLocation":230,"dataGaName":323},"/solutions/supply-chain/","Software supply chain 
security",{"text":325,"config":326},"Software Compliance",{"href":327,"dataGaName":328,"dataGaLocation":230},"/solutions/software-compliance/","software compliance",{"title":330,"link":331,"items":336},"Measurement",{"config":332},{"icon":333,"href":334,"dataGaName":335,"dataGaLocation":230},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[337,341,345],{"text":338,"config":339},"Visibility & Measurement",{"href":334,"dataGaLocation":230,"dataGaName":340},"Visibility and Measurement",{"text":342,"config":343},"Value Stream Management",{"href":344,"dataGaLocation":230,"dataGaName":342},"/solutions/value-stream-management/",{"text":346,"config":347},"Analytics & Insights",{"href":348,"dataGaLocation":230,"dataGaName":349},"/solutions/analytics-and-insights/","Analytics and insights",{"title":351,"items":352},"GitLab for",[353,358,363],{"text":354,"config":355},"Enterprise",{"href":356,"dataGaLocation":230,"dataGaName":357},"/enterprise/","enterprise",{"text":359,"config":360},"Small Business",{"href":361,"dataGaLocation":230,"dataGaName":362},"/small-business/","small business",{"text":364,"config":365},"Public Sector",{"href":366,"dataGaLocation":230,"dataGaName":367},"/solutions/public-sector/","public sector",{"text":369,"config":370},"Pricing",{"href":371,"dataGaName":372,"dataGaLocation":230,"dataNavLevelOne":372},"/pricing/","pricing",{"text":374,"config":375,"link":376,"lists":380,"feature":459},"Resources",{"dataNavLevelOne":181},{"text":377,"config":378},"View all resources",{"href":379,"dataGaName":181,"dataGaLocation":230},"/resources/",[381,414,431],{"title":382,"items":383},"Getting started",[384,389,394,399,404,409],{"text":385,"config":386},"Install",{"href":387,"dataGaName":388,"dataGaLocation":230},"/install/","install",{"text":390,"config":391},"Quick start guides",{"href":392,"dataGaName":393,"dataGaLocation":230},"/get-started/","quick setup 
checklists",{"text":395,"config":396},"Learn",{"href":397,"dataGaLocation":230,"dataGaName":398},"https://university.gitlab.com/","learn",{"text":400,"config":401},"Product documentation",{"href":402,"dataGaName":403,"dataGaLocation":230},"https://docs.gitlab.com/","product documentation",{"text":405,"config":406},"Best practice videos",{"href":407,"dataGaName":408,"dataGaLocation":230},"/getting-started-videos/","best practice videos",{"text":410,"config":411},"Integrations",{"href":412,"dataGaName":413,"dataGaLocation":230},"/integrations/","integrations",{"title":415,"items":416},"Discover",[417,422,426],{"text":418,"config":419},"Customer success stories",{"href":420,"dataGaName":421,"dataGaLocation":230},"/customers/","customer success stories",{"text":186,"config":423},{"href":424,"dataGaName":425,"dataGaLocation":230},"/blog/","blog",{"text":427,"config":428},"Remote",{"href":429,"dataGaName":430,"dataGaLocation":230},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":432,"items":433},"Connect",[434,439,444,449,454],{"text":435,"config":436},"GitLab Services",{"href":437,"dataGaName":438,"dataGaLocation":230},"/services/","services",{"text":440,"config":441},"Community",{"href":442,"dataGaName":443,"dataGaLocation":230},"/community/","community",{"text":445,"config":446},"Forum",{"href":447,"dataGaName":448,"dataGaLocation":230},"https://forum.gitlab.com/","forum",{"text":450,"config":451},"Events",{"href":452,"dataGaName":453,"dataGaLocation":230},"/events/","events",{"text":455,"config":456},"Partners",{"href":457,"dataGaName":458,"dataGaLocation":230},"/partners/","partners",{"backgroundColor":460,"textColor":461,"text":462,"image":463,"link":467},"#2f2a6b","#fff","Insights for the future of software development",{"altText":464,"config":465},"the source promo card",{"src":466},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758208064/dzl0dbift9xdizyelkk4.svg",{"text":468,"config":469},"Read the 
latest",{"href":470,"dataGaName":471,"dataGaLocation":230},"/the-source/","the source",{"text":473,"config":474,"lists":476},"Company",{"dataNavLevelOne":475},"company",[477],{"items":478},[479,484,490,492,497,502,507,512,517,522,527],{"text":480,"config":481},"About",{"href":482,"dataGaName":483,"dataGaLocation":230},"/company/","about",{"text":485,"config":486,"footerGa":489},"Jobs",{"href":487,"dataGaName":488,"dataGaLocation":230},"/jobs/","jobs",{"dataGaName":488},{"text":450,"config":491},{"href":452,"dataGaName":453,"dataGaLocation":230},{"text":493,"config":494},"Leadership",{"href":495,"dataGaName":496,"dataGaLocation":230},"/company/team/e-group/","leadership",{"text":498,"config":499},"Team",{"href":500,"dataGaName":501,"dataGaLocation":230},"/company/team/","team",{"text":503,"config":504},"Handbook",{"href":505,"dataGaName":506,"dataGaLocation":230},"https://handbook.gitlab.com/","handbook",{"text":508,"config":509},"Investor relations",{"href":510,"dataGaName":511,"dataGaLocation":230},"https://ir.gitlab.com/","investor relations",{"text":513,"config":514},"Trust Center",{"href":515,"dataGaName":516,"dataGaLocation":230},"/security/","trust center",{"text":518,"config":519},"AI Transparency Center",{"href":520,"dataGaName":521,"dataGaLocation":230},"/ai-transparency-center/","ai transparency center",{"text":523,"config":524},"Newsletter",{"href":525,"dataGaName":526,"dataGaLocation":230},"/company/contact/#contact-forms","newsletter",{"text":528,"config":529},"Press",{"href":530,"dataGaName":531,"dataGaLocation":230},"/press/","press",{"text":533,"config":534,"lists":535},"Contact us",{"dataNavLevelOne":475},[536],{"items":537},[538,541,546],{"text":237,"config":539},{"href":239,"dataGaName":540,"dataGaLocation":230},"talk to sales",{"text":542,"config":543},"Support portal",{"href":544,"dataGaName":545,"dataGaLocation":230},"https://support.gitlab.com","support portal",{"text":547,"config":548},"Customer 
portal",{"href":549,"dataGaName":550,"dataGaLocation":230},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":552,"login":553,"suggestions":560},"Close",{"text":554,"link":555},"To search repositories and projects, login to",{"text":556,"config":557},"gitlab.com",{"href":244,"dataGaName":558,"dataGaLocation":559},"search login","search",{"text":561,"default":562},"Suggestions",[563,565,569,571,575,579],{"text":259,"config":564},{"href":264,"dataGaName":259,"dataGaLocation":559},{"text":566,"config":567},"Code Suggestions (AI)",{"href":568,"dataGaName":566,"dataGaLocation":559},"/solutions/code-suggestions/",{"text":292,"config":570},{"href":294,"dataGaName":292,"dataGaLocation":559},{"text":572,"config":573},"GitLab on AWS",{"href":574,"dataGaName":572,"dataGaLocation":559},"/partners/technology-partners/aws/",{"text":576,"config":577},"GitLab on Google Cloud",{"href":578,"dataGaName":576,"dataGaLocation":559},"/partners/technology-partners/google-cloud-platform/",{"text":580,"config":581},"Why GitLab?",{"href":271,"dataGaName":580,"dataGaLocation":559},{"freeTrial":583,"mobileIcon":588,"desktopIcon":593,"secondaryButton":596},{"text":584,"config":585},"Start free trial",{"href":586,"dataGaName":235,"dataGaLocation":587},"https://gitlab.com/-/trials/new/","nav",{"altText":589,"config":590},"Gitlab Icon",{"src":591,"dataGaName":592,"dataGaLocation":587},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":589,"config":594},{"src":595,"dataGaName":592,"dataGaLocation":587},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":597,"config":598},"Get Started",{"href":599,"dataGaName":600,"dataGaLocation":587},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/get-started/","get started",{"freeTrial":602,"mobileIcon":606,"desktopIcon":608},{"text":603,"config":604},"Learn more about GitLab 
Duo",{"href":264,"dataGaName":605,"dataGaLocation":587},"gitlab duo",{"altText":589,"config":607},{"src":591,"dataGaName":592,"dataGaLocation":587},{"altText":589,"config":609},{"src":595,"dataGaName":592,"dataGaLocation":587},{"button":611,"mobileIcon":616,"desktopIcon":618},{"text":612,"config":613},"/switch",{"href":614,"dataGaName":615,"dataGaLocation":587},"#contact","switch",{"altText":589,"config":617},{"src":591,"dataGaName":592,"dataGaLocation":587},{"altText":589,"config":619},{"src":620,"dataGaName":592,"dataGaLocation":587},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1773335277/ohhpiuoxoldryzrnhfrh.png",{"freeTrial":622,"mobileIcon":627,"desktopIcon":629},{"text":623,"config":624},"Back to pricing",{"href":371,"dataGaName":625,"dataGaLocation":587,"icon":626},"back to pricing","GoBack",{"altText":589,"config":628},{"src":591,"dataGaName":592,"dataGaLocation":587},{"altText":589,"config":630},{"src":595,"dataGaName":592,"dataGaLocation":587},{"title":632,"button":633,"config":638},"See how agentic AI transforms software delivery",{"text":634,"config":635},"Watch GitLab Transcend now",{"href":636,"dataGaName":637,"dataGaLocation":230},"/events/transcend/virtual/","transcend event",{"layout":639,"icon":640,"disabled":216},"release","AiStar",{"data":642},{"text":643,"source":644,"edit":650,"contribute":655,"config":660,"items":665,"minimal":869},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":645,"config":646},"View page source",{"href":647,"dataGaName":648,"dataGaLocation":649},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":651,"config":652},"Edit this page",{"href":653,"dataGaName":654,"dataGaLocation":649},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":656,"config":657},"Please 
contribute",{"href":658,"dataGaName":659,"dataGaLocation":649},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":661,"facebook":662,"youtube":663,"linkedin":664},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[666,713,764,808,835],{"title":369,"links":667,"subMenu":682},[668,672,677],{"text":669,"config":670},"View plans",{"href":371,"dataGaName":671,"dataGaLocation":649},"view plans",{"text":673,"config":674},"Why Premium?",{"href":675,"dataGaName":676,"dataGaLocation":649},"/pricing/premium/","why premium",{"text":678,"config":679},"Why Ultimate?",{"href":680,"dataGaName":681,"dataGaLocation":649},"/pricing/ultimate/","why ultimate",[683],{"title":684,"links":685},"Contact Us",[686,689,691,693,698,703,708],{"text":687,"config":688},"Contact sales",{"href":239,"dataGaName":240,"dataGaLocation":649},{"text":542,"config":690},{"href":544,"dataGaName":545,"dataGaLocation":649},{"text":547,"config":692},{"href":549,"dataGaName":550,"dataGaLocation":649},{"text":694,"config":695},"Status",{"href":696,"dataGaName":697,"dataGaLocation":649},"https://status.gitlab.com/","status",{"text":699,"config":700},"Terms of use",{"href":701,"dataGaName":702,"dataGaLocation":649},"/terms/","terms of use",{"text":704,"config":705},"Privacy statement",{"href":706,"dataGaName":707,"dataGaLocation":649},"/privacy/","privacy statement",{"text":709,"config":710},"Cookie preferences",{"dataGaName":711,"dataGaLocation":649,"id":712,"isOneTrustButton":216},"cookie preferences","ot-sdk-btn",{"title":274,"links":714,"subMenu":723},[715,719],{"text":716,"config":717},"DevSecOps platform",{"href":257,"dataGaName":718,"dataGaLocation":649},"devsecops platform",{"text":720,"config":721},"AI-Assisted Development",{"href":264,"dataGaName":722,"dataGaLocation":649},"ai-assisted 
development",[724],{"title":15,"links":725},[726,731,736,739,744,749,754,759],{"text":727,"config":728},"CICD",{"href":729,"dataGaName":730,"dataGaLocation":649},"/topics/ci-cd/","cicd",{"text":732,"config":733},"GitOps",{"href":734,"dataGaName":735,"dataGaLocation":649},"/topics/gitops/","gitops",{"text":17,"config":737},{"href":19,"dataGaName":738,"dataGaLocation":649},"devops",{"text":740,"config":741},"Version Control",{"href":742,"dataGaName":743,"dataGaLocation":649},"/topics/version-control/","version control",{"text":745,"config":746},"DevSecOps",{"href":747,"dataGaName":748,"dataGaLocation":649},"/topics/devsecops/","devsecops",{"text":750,"config":751},"Cloud Native",{"href":752,"dataGaName":753,"dataGaLocation":649},"/topics/cloud-native/","cloud native",{"text":755,"config":756},"AI for Coding",{"href":757,"dataGaName":758,"dataGaLocation":649},"/topics/devops/ai-for-coding/","ai for coding",{"text":760,"config":761},"Agentic AI",{"href":762,"dataGaName":763,"dataGaLocation":649},"/topics/agentic-ai/","agentic ai",{"title":765,"links":766},"Solutions",[767,769,771,776,780,783,787,790,792,795,798,803],{"text":316,"config":768},{"href":311,"dataGaName":316,"dataGaLocation":649},{"text":305,"config":770},{"href":288,"dataGaName":289,"dataGaLocation":649},{"text":772,"config":773},"Agile development",{"href":774,"dataGaName":775,"dataGaLocation":649},"/solutions/agile-delivery/","agile delivery",{"text":777,"config":778},"SCM",{"href":301,"dataGaName":779,"dataGaLocation":649},"source code management",{"text":727,"config":781},{"href":294,"dataGaName":782,"dataGaLocation":649},"continuous integration & delivery",{"text":784,"config":785},"Value stream management",{"href":344,"dataGaName":786,"dataGaLocation":649},"value stream management",{"text":732,"config":788},{"href":789,"dataGaName":735,"dataGaLocation":649},"/solutions/gitops/",{"text":354,"config":791},{"href":356,"dataGaName":357,"dataGaLocation":649},{"text":793,"config":794},"Small 
business",{"href":361,"dataGaName":362,"dataGaLocation":649},{"text":796,"config":797},"Public sector",{"href":366,"dataGaName":367,"dataGaLocation":649},{"text":799,"config":800},"Education",{"href":801,"dataGaName":802,"dataGaLocation":649},"/solutions/education/","education",{"text":804,"config":805},"Financial services",{"href":806,"dataGaName":807,"dataGaLocation":649},"/solutions/finance/","financial services",{"title":374,"links":809},[810,812,814,816,819,821,823,825,827,829,831,833],{"text":385,"config":811},{"href":387,"dataGaName":388,"dataGaLocation":649},{"text":390,"config":813},{"href":392,"dataGaName":393,"dataGaLocation":649},{"text":395,"config":815},{"href":397,"dataGaName":398,"dataGaLocation":649},{"text":400,"config":817},{"href":402,"dataGaName":818,"dataGaLocation":649},"docs",{"text":186,"config":820},{"href":424,"dataGaName":425,"dataGaLocation":649},{"text":418,"config":822},{"href":420,"dataGaName":421,"dataGaLocation":649},{"text":427,"config":824},{"href":429,"dataGaName":430,"dataGaLocation":649},{"text":435,"config":826},{"href":437,"dataGaName":438,"dataGaLocation":649},{"text":440,"config":828},{"href":442,"dataGaName":443,"dataGaLocation":649},{"text":445,"config":830},{"href":447,"dataGaName":448,"dataGaLocation":649},{"text":450,"config":832},{"href":452,"dataGaName":453,"dataGaLocation":649},{"text":455,"config":834},{"href":457,"dataGaName":458,"dataGaLocation":649},{"title":473,"links":836},[837,839,841,843,845,847,849,853,858,860,862,864],{"text":480,"config":838},{"href":482,"dataGaName":475,"dataGaLocation":649},{"text":485,"config":840},{"href":487,"dataGaName":488,"dataGaLocation":649},{"text":493,"config":842},{"href":495,"dataGaName":496,"dataGaLocation":649},{"text":498,"config":844},{"href":500,"dataGaName":501,"dataGaLocation":649},{"text":503,"config":846},{"href":505,"dataGaName":506,"dataGaLocation":649},{"text":508,"config":848},{"href":510,"dataGaName":511,"dataGaLocation":649},{"text":850,"config":851},"Sustaina
bility",{"href":852,"dataGaName":850,"dataGaLocation":649},"/sustainability/",{"text":854,"config":855},"Diversity, inclusion and belonging (DIB)",{"href":856,"dataGaName":857,"dataGaLocation":649},"/diversity-inclusion-belonging/","Diversity, inclusion and belonging",{"text":513,"config":859},{"href":515,"dataGaName":516,"dataGaLocation":649},{"text":523,"config":861},{"href":525,"dataGaName":526,"dataGaLocation":649},{"text":528,"config":863},{"href":530,"dataGaName":531,"dataGaLocation":649},{"text":865,"config":866},"Modern Slavery Transparency Statement",{"href":867,"dataGaName":868,"dataGaLocation":649},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":870},[871,874,877],{"text":872,"config":873},"Terms",{"href":701,"dataGaName":702,"dataGaLocation":649},{"text":875,"config":876},"Cookies",{"dataGaName":711,"dataGaLocation":649,"id":712,"isOneTrustButton":216},{"text":878,"config":879},"Privacy",{"href":706,"dataGaName":707,"dataGaLocation":649},{"header":881,"blurb":882,"button":883,"secondaryButton":888},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":884,"config":885},"Get your free trial",{"href":886,"dataGaName":235,"dataGaLocation":887},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":687,"config":889},{"href":239,"dataGaName":240,"dataGaLocation":887},1776403431949]