Security has developed an internal security notification dashboard. This dashboard will only be used for high-priority security notifications appropriate for the entire organization. Notifications will be sent via Slack and email to GitLab team members.
Information security encompasses a variety of different working groups. These security best practices support the functions of business operations, infrastructure, and product development, to name a few. Everybody is responsible for maintaining a level of security to support compliance (available internal-only), while raising the bar of our security posture.
The GitLab Security Teams are available 24/7/365 and are ready to assist with questions, concerns, or issues you may have.
There are some common scenarios faced by GitLab team members:
To contact the Security Team for any other reason, see Contacting the Team or Engaging the Security On-Call.
The CEO (and Executive team) will not send you an email to wire cash, a text message asking for gift cards, or anything else that feels like CEO fraud or a CEO scam. These types of spear-phishing attacks will become more common as we grow. Feel free to verify any unusual requests via the #ceo Slack channel.
What should you do if you receive a potential phishing email or text (smishing) from GitLab's CEO?
If you want to implement a process, code, or some other procedure that could impact the security posture of GitLab or its products, one resource the Security Team uses is Threat Modeling. The Security Team highly encourages change and improvements, and also ensures that changes and improvements are done securely. The Security Team uses a threat framework based upon the PASTA methodology. For more information, including an issue template for doing your own threat modeling, check out the Threat Modeling page.
If you have a question or concern and need to speak with the Security Team, you can contact Security.
User accounts without 2FA enabled that have been stale for over 30 days will be blocked or suspended until resolved. This improves the security posture of both the user and GitLab.
If any systems provide an option to use SMS text as a second factor, this is highly discouraged.
Phone company security can be easily subverted by attackers allowing them to take over a phone account.
(Ref: 6 Ways Attackers Are Still Bypassing SMS 2-Factor Authentication / two-minute YouTube social engineering attack with a phone call and crying baby)

The following instructions are for Apple (MacBook Pro or Air) users. Linux users, please go to the Linux Tools section of the handbook.
When backing up data, team members should use GitLab's Google Drive. Our deployment is regularly tested and data at rest is encrypted by default. For alternative options, please reach out to IT.
The firewall settings can be found in System Preferences -> Security & Privacy, under the Firewall tab. If the option reads Firewall: Off, you will need to click on the lock at the bottom of the dialog box to make changes, then click Turn Firewall On (see screenshot).

Sometimes a team member needs to test a particular scenario that requires bypassing the firewall. If this is the case, ensure one of the following network scenarios/configurations is used for your laptop:
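On macOS, the application firewall can also be checked and enabled from a terminal. This is a minimal sketch using Apple's built-in socketfilterfw utility; it only applies on macOS, and enabling the firewall requires administrator rights:

```shell
# Check the current application firewall state (prints enabled/disabled)
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate

# Turn the firewall on (prompts for an administrator password)
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on

# Optionally enable stealth mode so the machine ignores probe traffic
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode on
```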
Reach out in the #security Slack channel if you have questions about this.

All GitLab team members must keep their computers locked when not actively in use, and any sensitive GitLab information must be stored and secured when working from a shared or public space.
Refer to this guide for setting up a dedicated WiFi so that your work notebook is isolated from other personal devices in your home network.
Many services that team members use such as Slack and Zoom have mobile applications that can be loaded onto iOS or Android devices, allowing for use of those resources from a mobile phone. Refer to the acceptable use policy for more information on using a mobile device.
Most major applications (Slack, Zoom, Okta Verify) have been examined and vetted by the Security Team, but some applications are limited in the data they can access or have known security issues. In such cases, use the mobile device's web browser to access the resource. If you have a question about the security of a mobile app and want to know if you should be using it to access GitLab data, review the security tips on this page or contact the Security Team via Slack in the #security channel.
Some Google Cloud resources, if deployed with default settings, may introduce risk to shared environments. For example, you may be deploying a temporary development instance that will never contain any sensitive data. But if that instance is not properly secured, it could potentially be compromised and used as a gateway to other, more sensitive resources inside the same project.
Below are some steps you can take to reduce these risks.
By default, Google will attach what is called the Compute Engine default service account to newly launched Compute Instances. This grants every process running on your new Compute Instance 'Project Editor' rights, meaning that if someone gains access to your instance they gain access to everything else in the project as well.
This default account should not be used. Instead, you should choose one of the following two options:
Append the --no-service-account --no-scopes flags if using the gcloud command, or select the following option in the web interface.

When permitting access to Compute Instances via firewall rules, you should ensure you are exposing only the minimum ports to only the minimum instances required.
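As a sketch, assuming the gcloud CLI is already configured for your project (the instance name and zone below are hypothetical), launching an instance with no service account looks like this:

```shell
# Launch a development instance with no service account attached, so a
# compromise of the instance does not grant project-wide editor rights.
gcloud compute instances create dev-scratch-01 \
  --zone=us-east1-b \
  --no-service-account \
  --no-scopes
```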
When creating a new firewall rule, you can choose to apply it to one of the following "Targets":
- All instances in the network: This is probably not the option you want. Selecting this option is a common mistake and may expose insecure services on instances other than your own.
- Specified target tags: This is probably the option you want. This allows you to limit the rule to instances that are marked with a specific network tag. You should create a descriptive tag name like "allow-https-from-all" so that it can be easily identified and used when needed.
- Specified service account: This is a less likely option, but perfectly viable if you have already done some design around custom service accounts. It is similar to a tag but will be assigned automatically to all instances using a specific service account.

When choosing "Ports and Protocols" to expose, you should never select "Allow All" and should never manually enter entire ranges such as 1-65535. Instead, you should choose only the specific required TCP/UDP ports you need to expose.
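For illustration, a rule following these recommendations might look like the gcloud sketch below (the rule name, tag, instance name, and zone are hypothetical):

```shell
# Allow inbound HTTPS only, and only to instances carrying the matching tag
gcloud compute firewall-rules create allow-https-from-all \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --target-tags=allow-https-from-all \
  --source-ranges=0.0.0.0/0

# Attach the tag to just the instance that should be reachable
gcloud compute instances add-tags dev-scratch-01 \
  --zone=us-east1-b \
  --tags=allow-https-from-all
```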
GKE nodes are Compute Instances, and by default use the same Compute Engine default service account described above. Despite making it their default, Google specifically states: "You should create and use a minimally privileged service account to run your GKE cluster instead of using the Compute Engine default service account."
Whether deploying a GKE cluster manually or automatically via Terraform, you can follow these instructions to create and attach a service account with the minimum permissions required for a GKE cluster node to function.
In addition, you should enable Workload Identity and Shielded Nodes on all new clusters. This can be done by appending the --workload-pool=[PROJECT-ID].svc.id.goog --enable-shielded-nodes
flags if using the gcloud command, or by selecting the following options in the web interface (located under the "Security" menu):
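Putting these recommendations together, a hedged gcloud sketch might look like the following. The PROJECT_ID variable, service account name, and cluster name are assumptions for illustration; the roles granted reflect the minimal logging and monitoring roles Google's GKE hardening guidance describes for node agents:

```shell
# Create a minimally privileged service account for cluster nodes
gcloud iam service-accounts create gke-node-minimal \
  --display-name="Minimal GKE node service account"

# Grant only the logging/monitoring roles node agents need
for role in roles/logging.logWriter roles/monitoring.metricWriter roles/monitoring.viewer; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member="serviceAccount:gke-node-minimal@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="${role}"
done

# Create the cluster with that account, Workload Identity, and Shielded Nodes
gcloud container clusters create example-cluster \
  --service-account="gke-node-minimal@${PROJECT_ID}.iam.gserviceaccount.com" \
  --workload-pool="${PROJECT_ID}.svc.id.goog" \
  --enable-shielded-nodes
```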
When creating a Cloud Function with a "trigger type" of HTTP
, Google provides two layers of access control. The first is an identity check, via the following two options under Authentication:
The second is network-based access control, via the following options under Advanced Settings -> Connections -> Ingress Settings. You should choose the least permissive option that will still allow your function to work:
Some use cases will prevent you from choosing the "best practice" when it comes to authenticating an inbound request. For example, you may wish to host a webhook target for an external service that doesn't support the use of Google Cloud credentials. For this use case, you can store a complex, machine-generated secret as an environment variable inside your function and then ensure the requesting service includes that secret inside the request headers or JSON payload. More details and examples can be found here.
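As a sketch of that pattern (the header name and function URL below are hypothetical), you might generate and use the shared secret like this:

```shell
# Generate a long machine-generated secret to store as an environment
# variable on the function (e.g. WEBHOOK_SECRET); 32 random bytes hex-encoded
WEBHOOK_SECRET="$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "Secret length: ${#WEBHOOK_SECRET}"

# The external service then includes the secret in every request, e.g.:
#   curl -X POST "https://REGION-PROJECT.cloudfunctions.net/my-webhook" \
#     -H "X-Webhook-Secret: ${WEBHOOK_SECRET}" \
#     -d '{"event": "example"}'
# Inside the function, compare the received header against the stored value
# with a constant-time comparison before processing the payload.
```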
Similar to Compute Instances and GKE clusters, Cloud Functions also bind to a service account by default. And once again, Google states that "it's likely too permissive for what your function needs in production, and you'll want to configure it for least privilege access".
For most simple functions, this shouldn't be an issue. However, it is possible that a complex function could be abused to allow the person invoking the function to impersonate that service account. For this reason, you'll want to configure a new service account with the bare minimum permissions required for your function to operate.
You can then choose to use this new service account via the option under Advanced Settings -> Advanced -> Service account.
Announcements are made in the #whats-happening-at-gitlab Slack channel, or you can use the /security Slack command.

Passwords are one of the primary mechanisms that protect GitLab information systems and other resources from unauthorized use. Follow GitLab's password guidelines when constructing secure passwords and ensuring proper password management to keep GitLab secure. To learn what makes a password truly secure, read this article or watch this conference presentation on password strength.
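If you ever need a quick throwaway password outside of a password manager, one portable way to generate a random one in a terminal is sketched below; a password manager's built-in generator remains the preferred option:

```shell
# Draw 24 random characters from a broad character set; /dev/urandom is
# available on both macOS and Linux.
PASSWORD="$(LC_ALL=C tr -dc 'A-Za-z0-9!@#%^&*' < /dev/urandom | head -c 24)"
echo "${PASSWORD}"
```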
GitLab requires all new hires to complete New Hire security orientation training as part of the onboarding process and annual training thereafter.
New Hire security training will be automatically assigned to you on day 1 of orientation as soon as you access the ProofPoint tile via Okta. Training is required to be completed within your first 30 days.
The purpose of the annual training is to mature our internal posture through regular training while satisfying external regulatory requirements. The training is meant to be fun, engaging and not time-consuming.
GitLab conducts routine phishing simulations at a minimum of once per quarter. All team members may occasionally receive emails that are designed to look like legitimate business-related communications but will in actuality be simulated phishing attacks. Real phishing attacks are designed to steal credentials or trick the recipient into downloading or executing dangerous attachments. No actual attempts will be made by GitLab to steal credentials or execute malicious code.
The goal of these campaigns is not to catch people clicking on dangerous links or to punish those who do, but rather to get people thinking about security and the techniques attackers use via email to trick you into running malicious software or disclosing web passwords. If you fall victim to one of these simulated attacks, feel free to take the training courses again or ask the security team for more information on what could have been done to recognize the attack. You shouldn't feel any shame for having clicked on the link or entered any data, nor should you feel like you need to confess to the security team and let them know you made a mistake. Making a mistake online is practically the reason the Internet was invented.
When you receive an email with a link, hover your mouse over the link or view the source of the email to determine the link's true destination.
If you hover your mouse cursor over a link in Google Chrome it will show you the link destination in the status bar at the bottom left corner of your browser window.
In Safari the status bar must be enabled to view the true link destination (View -> Show Status Bar).
Some examples of methods used to trick users into entering sensitive data into phishing forms include:
When viewing the source of an HTML email, it is important to remember that the text inside the href attribute is the actual link destination/target, and the text before the </a> tag is the text that will be displayed to the user.
<a href="http://evilsite.example.org">Google Login!</a>
In this case, "Google Login!" will be displayed to the user but the actual target of the link is "evilsite.example.org".
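The mismatch between the displayed text and the real target can be seen by pulling both out of the anchor tag. This quick shell sketch uses sed purely for illustration; a proper HTML parser is more robust against real-world markup:

```shell
# A suspicious anchor tag copied from an email's HTML source
EMAIL_HTML='<a href="http://evilsite.example.org">Google Login!</a>'

# Extract the real destination (href) and the displayed link text
HREF="$(printf '%s' "$EMAIL_HTML" | sed -n 's/.*href="\([^"]*\)".*/\1/p')"
TEXT="$(printf '%s' "$EMAIL_HTML" | sed -n 's/.*>\(.*\)<\/a>.*/\1/p')"

echo "Displayed text: $TEXT"   # Google Login!
echo "Real target:   $HREF"    # http://evilsite.example.org
```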
After clicking on a link always look for the green lock icon and "secure" label that signify a validated SSL service. This icon alone is not enough to verify the authenticity of a website, however the lack of the green icon does mean you should never enter sensitive data into that website.
If you think an email is suspicious, it may be a phishing attempt targeted at you or GitLab, or it may be a security test. Please report the email to Security by using the PhishAlarm button in Gmail. It is located in the right hand sidebar of your Gmail workspace. If you don't see the button, the right hand sidebar may be collapsed and you'll need to click the arrows at the bottom right hand corner of the window to expand the sidebar.
If you are on a mobile device and using the Gmail app, the PhishAlarm button is toward the bottom of your Gmail app, in the available add-ons section.
If you are using another email client, you may not be able to submit the email using the PhishAlarm button. In this case, you may manually submit the phishing email by forwarding it to phishing@gitlab.com.
Note: Forwarding phishing emails to phishing@gitlab.com
requires additional steps by following the manual submission instructions below.
If you use the Report Phishing button at the top right of the Gmail client, this will also require you to follow the manual submission instructions below. This is because Gmail's Report Phishing button does not provide our security team with the email itself; it only provides us with a notification.
PhishAlarm is found on the right side panel in Gmail. You can hide or show this panel by clicking on the > or < symbol in the bottom right corner of your web browser window.
To submit an email via PhishAlarm to GitLab's Security Team using Gmail, open the email and click the PhishAlarm button.

To forward the email as an attachment to GitLab's Security Team using Gmail, select Forward as attachment and send it to phishing@gitlab.com.
Phishing and other social engineering attacks aren't only sent via email. You might receive a suspicious text/SMS message, a weird direct message on social media platforms like LinkedIn, or a phone call. If it's work related, please use the /security Slack command.
Even if you're unsure or it feels insignificant, you can always ask in the #security Slack channel.
Should a team member lose a device such as a thumb drive, YubiKey, mobile phone, tablet, laptop, etc. that contains their credentials or other GitLab-sensitive data they should report the issue using the /security
command in Slack to engage SIRT.
GitLab provides a panic@gitlab.com
email address for team members to use in situations when Slack is inaccessible and immediate security response is required.
This email address is only accessible to GitLab team members and can be reached from their gitlab.com or personal email address as listed in Workday. Using this address provides an excellent way to limit the damage caused by a loss of one of these devices.
Additionally if a GitLab team member experiences a personal emergency the People Group also provides an emergency contact email.
As part of raising that bar, GitLab is implementing Zero Trust, or the practice of shifting access control from the perimeter of the org to the individuals, the assets and the endpoints. You can learn more about this strategy from the Google BeyondCorp whitepaper: A New Approach to Enterprise Security.
In our case, Zero Trust means that all devices trying to access an endpoint or asset within our GitLab environment will need to authenticate and be authorized. Because Zero Trust relies on dynamic, risk-based decisions, this also means that users must be authorized and validated: what department are they in, what role do they have, how sensitive is the data and the host that they are trying to access? We’re at the beginning stages in our Zero Trust roadmap, but as we move along in the journey, we’ll document our lessons learned, process and progress in our Security blog.
To learn more about the concept of Zero Trust and our roadmap for implementation, see this GitLab presentation from GoogleNext19: https://www.youtube.com/watch?v=DrPiCBtaydM
You can also check out our Zero Trust Networking (ZTN) blog series where we detail the ZTN implementation challenges we foresee ahead, some we've already managed to work through, and where we'll go from here:
Head over to the /r/netsec subreddit to see our October 29, 2019 Reddit AMA on Zero Trust where we fielded questions around our ZTN implementation, roadmap, strategy and more.
Identity is a critical element of the implementation of a ZTN framework. GitLab is moving forward with an implementation of Okta to allow us to standardize authentication for Cloud Application access and implement user-friendly SSO. See our Okta page for more details.
In many enterprise environments, virtual private networks (VPN) are used to allow access to less secured resources, typically also protected by an enterprise firewall. Adding corporate VPN connectivity only marginally improves the security of using those systems and assumes a network perimeter is in place. At GitLab, as an all remote company, we do most of our work using other Software-as-a-Service (SaaS) providers that we rely on to maintain confidentiality of communication and data.
In relation to Zero Trust, a corporate VPN is a perimeter, which ZTN architecture deemphasizes as a basis for making authorization decisions. Current access to critical systems is managed through alternative controls.
While a corporate VPN is not implemented at this time, there are other valid use cases for which individual team members may still wish to use a personal VPN, such as privacy or preventing traffic aggregation. Team members that wish to use a personal VPN service for any reason may still expense one.
For the use case of laptop usage in untrusted environments, such as coffee shops and coworking spaces, team members should prioritize a baseline of always-on host protections, such as up-to-date security patching, host firewalls, and antivirus, by following the system configuration guidelines at a minimum. That said, a personal VPN may provide additional protections in these situations. For more on personal VPNs see the Personal VPN page.
The Security Department provides essential security operational services, is directly engaged in the development and release processes, and offers consultative and advisory services to better enable the business to function while minimising risk.
To reflect this, we have structured the Security Department around four key tenets, which drive the structure and the activities of our group. These are:
2021 was a productive and accomplished year for GitLab Security. You can find the many ways we made GitLab and our customers more secure in FY22. In FY23 (Feb 2022 - Jan 2023) we will continue moving the security needle forward as we focus on increased involvement in product features, diversifying our certification roadmap, and increased visibility of our threat landscape.
The Security Assurance sub-department continues to improve customer engagement and advance our SaaS security story. Independent security validation (compliance reports and certifications) is a critical component to ensuring transparency and adequacy of our security practices. Current and prospective customers highly value independent attestations of security controls and rely on these to reaffirm security of the software and inherent protection of their data. FY22 saw expansion of GitLab’s SOC 2 report to include the Security and Confidentiality criteria along with achievement of GitLab’s very first ISO/IEC 27001 certification. In FY23 we will continue to grow GitLab’s certification portfolio through SOC and ISO expansion with an additional focus on compliance offerings geared towards heavily regulated markets like FIPS 140-2 and FedRAMP. These audits will greatly expand our ability to reach new markets, attract new customers, increase contract values and make GitLab even more competitive in the enterprise space. A heavy focus will be placed on tooling and automation in FY23 to enable our rapid growth.
The Security Engineering sub-department's focus in FY23 will continue to be in the direction of a proactive security posture. Adoption of additional automation and key technology integrations will help further increase efficiency and effectiveness. After the shift left accomplished last year, our ability to detect and remediate risks pre-production has improved. Building on this capability, improving visibility and alerting on vulnerabilities detected as close to code development as possible will be a new focus. Continued maturity of our infrastructure security, log aggregation, alerting, and monitoring will build upon the increased infrastructure visibility and observability accomplished last year. All of this will contribute towards minimizing risk and exposure in a proactive manner.
For FY23 the Security Operations sub-department will be committed to a focus on anti-abuse and incident response process maturity. Using established maturity frameworks, the program will focus on utilizing existing technologies with new, expanded datasets supported by refined processes, resulting in faster time to triage and shorter time to remediate. Additional focus on gaining a deeper understanding of security incidents, abuse, and their causes will drive additional preventative practices. Altogether, this will result in fewer security incidents, less abuse, and a more secure, more reliable service for all GitLab users.
Our newest sub-department, Threat Management: FY23 began with the creation of a new sub-department known as Threat and Vulnerability Management. This department will contain our Red Team, Security Research team, and a newly formed Vulnerability Management team. While the focus of the Red Team and Security Research teams will not change, the newly formed Vulnerability Management team will take an iterative approach to better understanding and managing vulnerabilities across all of GitLab. Initially, Vulnerability Management will focus on implementing a process to better track and analyze cloud assets (GCP, AWS, Azure, DO) for vulnerabilities. Once this process is in place and being executed on, we will begin expanding coverage to the GitLab product, specific business-critical projects, and other potential weaknesses. The overall goal of this team will be to create a holistic view of GitLab’s attack surface and ensure that the necessary attention is given to remediating issues. FY23 will also see the introduction of several new security teams. In addition to the Vulnerability Management team mentioned above, we are also adding a Log Management team. This team will report into the Security Engineering sub-department and will be responsible for creating a more holistic approach to log management, incident response, and forensic investigation.
Lastly, we value the opinions and feedback of our team members and encourage them to submit ideas handbook first (directly to the handbook in the form of an MR). We saw incredible gains in our culture amp survey results in FY22 and going forward we are committed to continuous improvement of our leadership team, team growth and development, and GitLab culture within the Security Department.
Unlike typical companies, part of the mandates of our Security, Infrastructure, and Support Departments is to contribute to the development of the GitLab Product. This follows from these concepts, many of which are also behaviors attached to our core values:
As such, everyone in the department should be familiar with, and be acting upon, the following statements:
This topic is part of our Engineering FY23 Direction.
Our vision is to be the leading example in security, innovation and transparency.
Our mission is to enable our business to succeed in the most secure way possible by managing risk, empowering people and developing a healthy security culture. This will be achieved through 3 prioritized areas of focus:
To help achieve the vision of being the most Transparent Security Group in the world, the Security Department has nominated a Security Culture Committee.
The Security Department is made up of four sub-departments: Security Engineering, Security Operations, Threat Management, and Security Assurance.
The Security Engineering teams below are primarily focused on Securing the Product. This reflects the Security Department’s current efforts to be involved in the Application development and Release cycle for Security Releases, Security Research, our HackerOne bug bounty program, Security Automation, External Security Communications, and Vulnerability Management.
The term “Product” is interpreted broadly and includes the GitLab application itself and all other integrations and code that is developed internally to support the GitLab application for the multi-tenant SaaS. Our responsibility is to ensure all aspects of GitLab that are exposed to customers or that host customer data are held to the highest security standards, and to be proactive and responsive to ensure world-class security in anything GitLab offers.
Application Security specialists work closely with development, product security PMs, and third-party groups (including paid bug bounty programs) to ensure pre- and post-deployment assessments are completed. Initiatives for this specialty also include:
The Infrastructure Security team consists of cloud security specialists that serve as a stable counterpart to the Infrastructure Department and their efforts. The team is focused on two key aspects of security:
The Security Logging team is focused on guaranteeing that GitLab has the data coverage required to:
Security Automation specialists help us scale by creating tools that perform common tasks automatically. Examples include building automated security issue triage and management, proactive vulnerability scanning, and defining security metrics for executive review. Initiatives for this specialty also include:
The External Communications Team leads customer advocacy, engagement and communications in support of GitLab Security Team programs. Initiatives for this specialty include:
Security Operations Sub-department teams are primarily focused on protecting GitLab the business and GitLab.com. This encompasses protecting company property as well as preventing, detecting, and responding to risks and events targeting the business and GitLab.com. This sub-department includes the Security Incident Response Team (SIRT), the Trust and Safety team, and the Red Team.
These functions have the responsibility of shoring up and maintaining the security posture of GitLab.com to ensure enterprise-level security is in place to protect our new and existing customers.
The SIRT team is here to manage security incidents across GitLab. These stem from events that originate from outside of our infrastructure, as well as those internal to GitLab. This is often a fast-paced and stressful environment where responding quickly and maintaining one's composure is critical.
More than just being the first to acknowledge issues as they arise, SIRT is responsible for leading, designing, and implementing the strategic initiatives to grow the Detection and Response practices at GitLab. These initiatives include:
SIRT can be contacted on Slack via our handle @sirt-members
or in a GitLab issue using @gitlab-com/gl-security/security-operations/sirt
. If your request requires immediate attention please review the steps for engaging the security on-call.
Trust & Safety specialists investigate and mitigate the malicious use of our systems, which is defined under Section 3 of the GitLab Website Terms of Use. This activity primarily originates from inside our infrastructure.
Initiatives for this specialty include:
For more information, please see our Resources Section.
Code of Conduct Violations are handled by the Community Relations Team. For more information on reporting these violations please see the GitLab Community Code of Conduct page.
Threat Management Sub-department teams are cross-functional. They are responsible for collaborating across the Security department to identify, communicate, and remediate threats or vulnerabilities that may impact GitLab, our Team Members or our users and the community at large.
GitLab's internal Red Team emulates adversary activity to better GitLab’s enterprise and product security. This includes activities such as:
Security Research team members focus on security problems that require a high level of expertise, and development of novel solutions. This includes in-depth security testing against FOSS that is critical to GitLab, and development of new security capabilities. Initiatives for this specialty include:
Security research specialists are subject matter experts (SMEs) with highly specialized security knowledge in specific areas, including reverse engineering, incident response, malware analysis, network protocol analysis, cryptography, and so on. They are often called upon to take on security tasks for other security team members as well as other departments when highly specialized security knowledge is needed. Initiatives for SMEs may include:
Security research specialists often promote GitLab thought leadership by engaging as all-around security experts, letting the public know that GitLab doesn’t just understand DevSecOps or application security, but has a deep knowledge of the security landscape. This can include the following:
Security Threat & Vulnerability Management is responsible for the recurring process of identifying, classifying, prioritizing, mitigating, and remediating vulnerabilities. This process is designed to provide insight into our environments, leverage GitLab for vulnerability workflows, promote healthy patch management among other preventative best-practices, and remediate risk; all with the end goal to better secure our environments, our product, and the company as a whole.
The Security Assurance sub-department comprises the teams below. They target Customer Assurance projects among their responsibilities. This reflects the need for us to provide resources to our customers to assure them of the security and safety of GitLab as an application to use within their organisation and as an enterprise-level SaaS. This also involves providing appropriate support, services and resources to customers so that they trust GitLab as a Secure Company, a Secure Product, and a Secure SaaS.
The Field Security team serves as the public representation of GitLab's internal Security function. We are tasked with providing high levels of security assurance to internal and external customers through the completion of Customer Assurance Activities, maintenance of Customer Assurance Collateral, and evangelism of Security Best Practices.
Initiatives for this specialty include:
Operating as a second line of defense, Security Compliance's core mission is to implement a best-in-class governance, risk, and compliance program that encompasses SaaS, on-prem, and open source instances. Initiatives for this specialty include:
For additional information about the Security Compliance program see the Security Compliance team handbook page or refer to GitLab's security controls for a detailed list of all compliance controls organized by control family.
We support GitLab's growth by effectively and appropriately identifying, tracking, and treating Security Operational and Third Party risks.
Initiatives for this specialty include:
It’s important to note that these tenets do not operate independently of each other, and every team within the Security Department provides an important function to perform in order to progress these tenets. For example, Application Security may be strongly focused on Securing the Product, but it still has a strong focus around customer assurance and protecting the company in performing its functions. Similarly, Security Operations functions may be engaged on issues related to Product vulnerabilities, and the resolution path for this deeply involves improving the security of product features, as well as scoping customer impact and assisting in messaging to customers.
Security Program Management is responsible for the complete overview of, and for driving, security initiatives across Product, Engineering, and Business Enablement. This includes tracking, monitoring, and influencing the priority of significant security objectives, goals, and plans/roadmaps from all security sub-departments. See the Security Program Manager Job Family.
Security Architecture plans, designs, tests, implements, and maintains the security strategy and solutions across the entire GitLab ecosystem.
At GitLab, we believe that the security of the business should be a concern of everyone within the company and not just the domain of specialists. If you have identified an urgent security issue or need immediate assistance from the Security Department, please refer to Engaging the Security Engineer On-Call.
Please be aware that the Security Department can only be paged internally. If you are an external party, please proceed to the Vulnerability Reports and HackerOne section of this page.
- `/security` Slack command to be guided through a form that engages the Security Engineer On-Call
- `#security` channel in GitLab Slack.
- `@sirt-members` in Slack or by opening an issue with `/security` in Slack. Please be advised the SLA for Slack mentions is 6 hours on business days.

Many teams follow a convention of having a GitLab group `team-name-team` with a primary project used for issue tracking underneath `team-name` or similar.

- `~meta` and backend tasks, and a catch-all for anything not covered by other projects

Security crosses many teams in the company, so you will find `~security` labelled issues across all GitLab projects, especially:

- `gl-security/runbooks` should only be used for documenting specifics that would increase risk and/or have customer impact if publicly disclosed.
- In the `GitLab.com` environment, consider if it's possible to release when the `~security` issue becomes non-confidential. This group can also be used for private demonstration projects for security issues.

When opening issues, please follow the Creating New Security Issues process for using labels and the confidential flag.

- `@trust-and-safety` in the channel to alert the team to anything urgent.
- `#security-department-standup` - Private channel for daily standups.
- `#incident-management` and other infrastructure department channels
- `#security-alert-manual` - New reports for the security department from various intake sources, including ZenDesk and new HackerOne reports.
- `#hackerone-feed` - Feed of most activity from our HackerOne program.
- `#security-alert-*` and `#abuse*` - Multiple channels for different notifications handled by the Security Department.

External researchers or other interested parties should refer to our Responsible Disclosure Policy for more information about reporting vulnerabilities. Customers can contact Support or the Field Security team.
If you suspect you've received a phishing email and have not engaged with the sender, please see: What to do if you suspect an email is a phishing attack.
If you have engaged a phisher by replying to an email, clicking on a link, sending and receiving text messages, or purchasing goods requested by the phisher, please engage the Security Engineer on-call.
Further information on GitLab's security response program is described in our Incident Response guide.
Ransomware is a persistent threat to many organizations, including GitLab. In the event of a ransomware attack involving GitLab assets, it's important to know the existing response procedures in place. Given the variability of targets in such attacks, it's critical to adapt to existing circumstances and understand that disaster recovery processes are in place to avoid paying any ransom. GitLab's red team has done extensive research to determine the most likely targets to be affected. As a result, the following guidelines are intended to help bootstrap an efficient response to protect the organization.
Critical First Steps:
Relevant Teams:
Depending on the impacted resources, the following teams should be engaged and made aware of the issue created for the rapid engineering response. Note that this list is not comprehensive and will vary with the impacted assets.
Communications:
Once we've determined that we need to communicate externally about an incident, the SIMOC should kick off our Security incident communications plan and key stakeholders will be engaged for collaboration, review and approval on any external-facing communications. Note: if customer data is exposed, external communications may be required by law.
The company-wide mandate is justification for mapping Security headcount to around 5% of total company headcount. Tying Security Department growth headcount to 5% of total company headcount ensures adequate staffing support for the following (below are highlights and not the entire list of responsibilities of the Security Department):
Career opportunities at GitLab, personal growth, and development are important and encouraged. Security team members and managers are encouraged to use Individual Development Plans to help foster, guide, and assist with career growth.
Information regarding growth and development benefits available to GitLab team members is available on the General & Entity Specific Benefits page, with specific information regarding general budgeting strategy, reimbursement requirements, and budget exceptions for tuition available in the Growth and Development Benefit section of that page. Eligibility information and directions on how to apply for growth and development benefits can be found on the Growth and Development Benefit page. Be sure to review the administration process for growth and development costs exceeding $1000 before proceeding with payment as the reimbursement process and timing differs depending on category.
For information on the security internship, see the Internship page.
The Security Organization is piloting a fully immersive on-the-job cross-training program among our various sub-organizations and teams. Participants will get a true behind the scenes look at how the Security Organization protects, defends, and assures our customers and team members day in and day out.
For more information, see the Security Shadow Program page.
Gearing ratios related to the Security Department have been moved to a separate page.
The Security department will collaborate with development and product management for security-related features in GitLab. The Secure Sub-Department must not be confused with the Security Teams.
We work closely with bounty programs, as well as security assessment and penetration testing firms to ensure external review of our security posture.
GitLab releases patches for vulnerabilities in dedicated security releases. There are two types of security releases: a monthly, scheduled security release, and ad-hoc security releases for critical vulnerabilities. For more information, you can visit our security FAQ. You can see all of our regular and security release blog posts here. In addition, the issues detailing each vulnerability are made public on our issue tracker 30 days after the release in which they were patched.
Our team targets release of the scheduled, monthly security release around the 28th, or 6-10 days after the monthly feature release (which is released on the 22nd of each month) and communicates the release via blog and email notification to subscribers of our security notices.
More details are available in `release/docs`.

Information Security Policies are reviewed annually by the Director of Security Assurance. Significant policy changes are reviewed and approved by the code owners.
Information security considerations such as regulatory, compliance, confidentiality, integrity and availability requirements are most easily met when companies employ centrally supported or recommended industry standards. While GitLab operates under the principle of least privilege, we understand that centrally supported or recommended industry technologies are not always feasible for a specific job function or company need. Deviations from the aforementioned standard or recommended technologies are discouraged. However, a deviation may be considered provided that there is a reasonable, justifiable business and/or research case for an information security policy exception; resources are sufficient to properly implement and maintain the alternative technology; the process outlined in this and other related documents is followed; and other policies and standards are upheld.
In the event a team member requires a deviation from the standard course of business or otherwise allowed by policy, the Requestor must submit a Policy Exception Request to the GitLab Security Compliance team, which contains, at a minimum, the following elements:
Exception request approval requirements are documented within the issue template. The requester should tag the appropriate individuals who are required to provide an approval per the approval matrix.
If the business wants to appeal an approval decision, such appeal will be sent to Legal at legal@gitlab.com. Legal will draft an opinion as to the proposed risks to the company if the deviation were to be granted. Legal’s opinion will be forwarded to the CEO and CFO for final disposition.
Any deviation approval must:
Learn more about awards for security initiatives.
The Security team needs to be able to communicate the priorities of security related issues to the Product, Development, and Infrastructure teams. Here's how the team sets priorities internally for subsequent communication (inspired in part by how the Support team does this).
Use the Vulnerability Disclosure issue template to report a new security vulnerability, or use our HackerOne bug bounty program.
New security issues should follow these guidelines when being created on `GitLab.com`:

- Mark the issue `confidential` if unsure whether the issue is a potential vulnerability or not. It is easier to make an issue that should have been public open than to remediate an issue that should have been confidential. Consider adding the `/confidential` quick action to a project issue template.
- Label the issue with `~security` at a minimum. If you're reporting a vulnerability (or something you suspect may possibly be one) please use the Vulnerability Disclosure template while creating the issue. Otherwise, follow the steps here (with a security label).
- Add `~"type::bug"`, `~"type::maintenance"`, or `~"type::feature"` if appropriate.
- Add `~customer` if the issue is a result of a customer report.
- `~internal customer` should be added by team members when the issue impacts GitLab operations.
- Add `~dependency update` if the issue is related to updating to newer versions of the dependencies GitLab requires.
- Add `~featureflag::` scoped labels if the issue is for functionality behind a feature flag.
- `~keep confidential`: if possible, avoid this by linking resources only available to GitLab team members, for example, the originating ZenDesk ticket. Label the link with `(GitLab internal)` for clarity.

Occasionally, data that should remain confidential, such as the private project contents of a user that reported an issue, may get included in an issue. If necessary, a sanitized issue may need to be created with more general discussion and examples appropriate for public disclosure prior to release.
For review by the Application Security team, @-mention `@gitlab-com/gl-security/appsec`.
For more immediate attention, refer to Engaging security on-call.
Severity and Priority Labels on `~security` Issues

Severity and priority labels are set by an application security engineer at the time of triage if and only if the issue is determined to be a vulnerability.
To identify such issues, the engineer will add the ~bug::vulnerability
label.
Severity label is determined by CVSS score, using the GitLab CVSS calculator.
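For illustration, the CVSS-to-label mapping can be sketched using the standard CVSS v3 qualitative bands; the function and the exact cut-offs here are assumptions, and the GitLab CVSS calculator remains the authoritative source:

```python
# Sketch only: maps a CVSS v3 base score to a GitLab severity label using
# the standard qualitative bands (Critical/High/Medium/Low). The exact
# cut-offs used in triage are an assumption here.
def severity_label(cvss_score: float) -> str:
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss_score >= 9.0:
        return "severity::1"  # Critical
    if cvss_score >= 7.0:
        return "severity::2"  # High
    if cvss_score >= 4.0:
        return "severity::3"  # Medium
    return "severity::4"      # Low

print(severity_label(9.8))  # severity::1
```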
If another team member feels that the chosen ~severity
/ ~priority
labels
need to be reconsidered, they are encouraged to begin a discussion on the relevant issue.
The presence of the `~bug::vulnerability` label modifies the standard severity labels (`~severity::1`, `~severity::2`, `~severity::3`, `~severity::4`)
by additionally taking into account
likelihood as described below, as well as any
other mitigating or exacerbating factors. The priority of addressing
~security
issues is also driven by impact, so in most cases, the priority label
assigned by the security team will match the severity label.
Exceptions must be noted in issue description or comments.
The intent of tying ~severity/~priority
labels to remediation times is to measure and improve GitLab's
response time to security issues to consistently meet or exceed industry
standard timelines for responsible disclosure. Mean time to remediation (MTTR) is
an external
metric that may be evaluated by users as an indication of GitLab's commitment
to protecting our users and customers. It is also an important measurement that
security researchers use when choosing to engage with the security team, either
directly or through our HackerOne Bug Bounty Program.
Vulnerabilities must be mitigated and remediated according to specific timelines. The timelines are specified in the Vulnerability Management handbook (a controlled document).
If a better understanding of an issue leads us to discover the severity has changed, recalculate the time to remediate from the date the issue was opened. If that date is in the past, the issue must be remediated on or before the next security release.
Due Dates on `~security` Issues

For `~security` issues with the `~bug::vulnerability` label and a severity of `~severity::3` or higher, the security engineer assigns the `Due date`, which is the target date of when fixes should be ready for release. This due date should account for the `Time to remediate` times above, as well as monthly security releases on the 28th of each month. For example, suppose today is October 1st, and a new `severity::2` `~security` issue is opened. It must be addressed in a security release within 60 days, which is November 30th; therefore, it must catch the November 28th security release. Furthermore, the Security Release Process deadlines say that the code fix should be ready by November 23rd, so the due date in this example should be November 23rd.
Note that some ~security
issues may not need to be part of a product release, such as
an infrastructure change. In that case, the due date will not need to account for
monthly security release dates.
On occasion, the due date of such an issue may need to be changed if the security team needs to move up or delay a monthly security release date to accommodate for urgent problems that arise.
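The due-date arithmetic in the worked example above can be sketched as follows. The remediation windows (other than the 60-day severity::2 window stated above), the 5-day fix-ready lead time, and the function itself are illustrative assumptions, not GitLab tooling:

```python
from datetime import date, timedelta

# Assumed remediation windows in days per severity label; only the 60-day
# severity::2 window is stated in the text, the rest are illustrative.
TIME_TO_REMEDIATE = {
    "severity::1": 30,
    "severity::2": 60,
    "severity::3": 90,
    "severity::4": 120,
}

RELEASE_DAY = 28         # scheduled monthly security release
FIX_READY_LEAD_DAYS = 5  # assumed lead time between "fix ready" and release

def due_date(opened: date, severity: str) -> date:
    """Target 'fix ready' date that still catches a scheduled security
    release inside the remediation window."""
    deadline = opened + timedelta(days=TIME_TO_REMEDIATE[severity])
    # Latest monthly release on or before the remediation deadline.
    release = date(deadline.year, deadline.month, RELEASE_DAY)
    if release > deadline:
        last_of_prev_month = release.replace(day=1) - timedelta(days=1)
        release = last_of_prev_month.replace(day=RELEASE_DAY)
    return release - timedelta(days=FIX_READY_LEAD_DAYS)

# The worked example: a severity::2 issue opened October 1st
print(due_date(date(2024, 10, 1), "severity::2"))  # 2024-11-23
```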
Reproducibility on `~security` Issues

The issue description should have a `How to reproduce` section to ensure clear replication details are in the description. Add additional details as needed:

- `curl` command that triggers the issue

Non-vulnerability `~security` issues

Issues labelled with `~security` but without `~type::bug` + `~bug::vulnerability`
labels are not considered vulnerabilities, but rather security enhancements, defense-in-depth mechanisms, or other security-adjacent bugs. For example, issues labeled ~"type::feature"
or ~"type::maintenance"
. This means the security team does not set the ~severity
and ~priority
labels or follow the vulnerability triage process as these issues will be triaged by product or other appropriate team owning the component.
Implementation of security feature issues should be done publicly in line with our Transparency value, i.e. not following the security developer workflow.
Conversely, note that issues with the security
, ~type::bug
, and severity::4
labels are considered Low
severity vulnerabilities and will be handled according to the standard vulnerability triage process.
The security team may also apply ~internal customer
and ~security request
to issues as an
indication that the feature is being requested by the security team to meet
additional customer requirements, compliance or operational needs in
support of GitLab.com.
Some ~security
issues are neither vulnerabilities nor security enhancements and yet are labeled ~security
. An example of this would be a non-security ~"type::bug"
in the login mechanism. Such an issue will be labeled ~security
because it is security-sensitive but it isn't a vulnerability and it isn't a ~"type::feature"
either. In those cases the ~"securitybot::ignore"
label is applied so that the bot doesn't trigger the normal vulnerability workflow and notifications as those issues aren't subject to the "time to remediation" requirements mentioned above.
The security engineer must:

- add the appropriate group label (`~group::editor`, `~group::package`, etc.)
- add the `~merge request` label.
- mention `@pm` for scheduling.
The product manager will assign a Milestone
that has been assigned a due
date to communicate when work will be assigned to engineers. The Due date
field, severity label, and priority label on the issue should not be changed
by PMs, as these labels are intended to provide accurate metrics on
~security
issues, and are assigned by the security team. Any blockers,
technical or organizational, that prevents ~security
issues from being
addressed as our top priority
should be escalated up the appropriate management chains.
Note that issues are not scheduled for a particular release unless the team leads add them to a release milestone and they are assigned to a developer.
Issues with a severity::1
or severity::2
rating should be immediately brought to the
attention of the relevant engineering team leads and product managers by
tagging them in the issue and/or escalating via chat and email if they are
unresponsive.
Issues with a severity::1
rating have priority over all other issues and should be
considered for a critical security release.
Issues with a severity::2
rating should be scheduled for the next scheduled
security release, which may be days or weeks ahead depending on severity and
other issues that are waiting for patches. A severity::2
rating is not a guarantee
that a patch will be ready prior to the next security release, but that
should be the goal.
Issues with a severity::3
rating have a lower sense of urgency and are assigned a
target of the next minor version. If a low-risk or low-impact vulnerability
is reported that would normally be rated severity::3
but the reporter has
provided a 30 day time window (or less) for disclosure the issue may be
escalated to ensure that it is patched before disclosure.
It is possible that a ~security issue becomes irrelevant after it was initially triaged, but before a patch was implemented. For example, the vulnerable functionality was removed or significantly changed resulting in the vulnerability not being present anymore.
If an engineer notices that an issue has become irrelevant, they should @-mention the person that triaged the issue to confirm that the vulnerability is not present anymore. Note that it might still be necessary to backport a patch to previous releases according to our maintenance policy. In case no backports are necessary, the issue can be closed.
With the approval of an Application Security Engineer a security issue may be fixed on the current stable release only, with no backports. Follow the GitLab Maintenance Policy and apply the ~reduced backports
label to the issue.
For systems built (or significantly modified) by Departments that house customer and other sensitive data, the Security Team should perform applicable application security reviews to ensure the systems are hardened. Security reviews aim to help reduce vulnerabilities and to create a more secure product.
The short questionnaire below should help you quickly decide if you should engage the application security team:
If the change is doing one or more of the following:
You should engage @gitlab-com/gl-security/appsec
.
There are two ways to request a security review depending on how significant the changes are. It is divided between individual merge requests and larger scale initiatives.
Loop in the application security team by /cc @gitlab-com/gl-security/appsec
in your merge request or issue.
These reviews are intended to be faster, more lightweight, and have a lower barrier of entry.
To get started, create an issue in the security tracker using the Appsec Review template. The complete process can be found here.
Some use cases of this are for epics, milestones, reviewing for a common security weakness in the entire codebase, or larger features.
No, code changes do not require security approval to progress. Non-blocking reviews give our code the freedom to keep shipping fast, and they align more closely with our values of iteration and efficiency. They operate as guardrails rather than a gate.
To help speed up a review, it's recommended to provide any or all of the following:
The current process for larger scale internal application security reviews can be found here.
Security reviews are not proof or certification that the code changes are secure. They are best effort, and additional vulnerabilities may exist after a review.
It's important to note here that application security reviews are not a one-and-done, but can be ongoing as the application under review evolves.
If you are using third-party libraries, make sure that:
GitLab receives vulnerability reports by various pathways, including:

- apply `~bug::vulnerability` and @-mention `@gitlab-com/gl-security/appsec` on issues.

For any reported vulnerability:

- Add the `~security` and `~bug::vulnerability` labels to the issue. Add the appropriate group label if known.
- Communicate on `dev` or in other non-public ways, even if there is a reason to believe that the vulnerability is already out in the public domain (e.g. the original report was made in a public issue that was later made confidential).

See the dedicated page to read about our Triage Rotation process.
See the dedicated page to read about our HackerOne process.
See the dedicated page to read about our dashboard review process.
We use CVE IDs to uniquely identify and publicly define vulnerabilities in our products. Since we publicly disclose all security vulnerabilities 30 days after a patch is released, CVE IDs must be obtained for each vulnerability to be fixed. The earlier an ID is obtained the better, and it should be requested either during or immediately after a fix is prepared.
We currently request CVEs through our CVE project. Keep in mind that some of our security releases contain security related enhancements which may not have an associated CWE or vulnerability. These particular issues are not required to obtain a CVE since there's no associated vulnerability.
On the day of the security release several things happen in order:
The GitLab issue should then be closed and - after 30 days - sanitized and made public. If the report was received via HackerOne, follow the HackerOne process.
At GitLab we value being as transparent as possible, even when it costs us. Part of this is making confidential GitLab issues about security vulnerabilities public 30 days after a patch. The process is as follows:
- Check the issue for the `~keep confidential` tag. If one exists, the issue must remain confidential.
- If there is no `~keep confidential` tag, remove sensitive information from the description and comments, e.g.
To facilitate this process the GitLab Security Bot comments on confidential issues 30 days after issue closure when they are not labelled `~keep confidential`.
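That bot check can be sketched like this; the issue dictionary shape, field names, and function are hypothetical illustrations, not the actual GitLab Security Bot implementation:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the sanitization reminder check described in the text. The
# issue dict shape and the 30-day constant are assumptions, not code from
# the actual GitLab Security Bot.
DISCLOSURE_DELAY = timedelta(days=30)

def needs_disclosure_comment(issue: dict, now: datetime) -> bool:
    """True when a confidential issue was closed 30+ days ago and is not
    labelled 'keep confidential'."""
    return (
        issue["confidential"]
        and issue["closed_at"] is not None
        and now - issue["closed_at"] >= DISCLOSURE_DELAY
        and "keep confidential" not in issue["labels"]
    )

issue = {
    "confidential": True,
    "closed_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "labels": ["security", "bug::vulnerability"],
}
print(needs_disclosure_comment(issue, datetime(2024, 2, 5, tzinfo=timezone.utc)))  # True
```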
Even though many of our 3rd-party dependencies, hosted services, and the static
about.gitlab.com
site are listed explicitly as out of scope, they are sometimes
targeted by researchers. This results in disruption to normal GitLab operations.
In these cases, if a valid email can be associated with the activity, a warning
such as the following should be sent to the researcher using an official channel
of communication such as ZenDesk.
Dear Security Researcher,
The system that you are accessing is currently out-of-scope for our bounty
program or has resulted in activity that is disruptive to normal GitLab
operations. Reports resulting from this activity may be disqualified from
receiving a paid bounty. Continued access to this system causing disruption to
GitLab operations, as described in policy under "Rules of Engagement,
Testing, and Proof-of-concepts", may result in additional restrictions on
participation in our program:
Activity that is disruptive to GitLab operations will result in account bans and disqualification of the report.
Further details and some examples are available in the full policy available at:
https://hackerone.com/gitlab
Best Regards,
Security Department | GitLab
Security Engineers typically act as Subject Matter Experts and advisors to GitLab's engineering teams. Security Engineers may wish to make a larger contribution to GitLab products, for example a defense-in-depth measure or new security feature.
Like any contributor, follow the Contributor and Development Docs, paying particular attention to the issue workflow, merge requests workflow, style guides, and testing standards.
Security Engineers will need to collaborate with and ultimately hand over their work to a team in the Development Department. That team will be responsible for prioritisation, review, rollout, error budget, and maintenance of the contribution. Security Engineers should ideally open an Issue or Epic as early as possible, labelled with the candidate owning team. Early engagement lets that team inform implementation or architectural decisions, highlight existing or upcoming work that may impact yours, and plan capacity for reviewing your work.
If a team does not have capacity or a desire to assist, a Security Engineer's work can still continue; everyone can contribute.
Requests from Security Engineers for new features and enhancements should follow the process in "Requesting something to be scheduled".
This does not apply to addressing security vulnerabilities or dependency updates, which have separate processes for triage and patching.
We have a process in place to conduct security reviews for externally contributed code, especially if the code functionality includes any of the following:
The Security Team works with our Community Outreach Team to ensure that security reviews are conducted where relevant. For more information about contributing, please reference the Contribute to GitLab page.
The packages we ship are signed with GPG keys, as described in the GitLab documentation. The process around how to make and store the key pair in a secure manner is described in the runbooks. The Distribution team is responsible for updating the package signing key. For more details that are specific to key locations and access at GitLab, find the internal google doc titled "Package Signing Keys at GitLab" on Google Drive.
Along with the internal security testing done by the Application Security, Security Research, and Red teams, GitLab annually contracts a 3rd-party penetration test of our infrastructure. For more information on the goals of these exercises, please see our Penetration Testing Policy.
The following process is followed for these annual tests:
GitLab customers can request a redacted copy of the report. For steps on how to do so, please see our External Testing page.