Published on: August 28, 2019

6 min read

Getting [meta] with GitLab CI/CD: Building build images

Let's talk about building build images with GitLab CI/CD. The power of Docker as a build platform is unleashed when you get meta.

An alternative title for this post could have been:

I heard you liked Docker, so I put Docker in your Docker (dind).

Getting started

It should be clear by now that I love building stuff with GitLab CI/CD. From DNS to breakfast, GitLab CI/CD offers a pretty wide range. However, past those "fun" use cases, I also like to share some ~~best~~ practices I have acquired during my years of using [GitLab CI/CD](/solutions/continuous-integration/), both for software and non-software projects alike.

I crossed out "best" above because I don't really like the term "best practices." It implies that there is only one right answer to a given question, which is the opposite of the point of computer science. Sure, there are better and worse ways to do something, but like many things in life, you have to find what works for you. "The best camera is the one you have with you" comes to mind when building CI/CD for projects: something that works is better than something that's pretty.

But enough of my digression, let's get to the practice I wanted to share in this post: building build images as part of the build process. Yes, it is precisely as meta as it sounds.

Why?

Often when building a particular project, you may have several unique build dependencies. In many languages, package managers solve for the majority, if not all, of these dependencies, at least at build time (think npm, RubyGems, Maven). However, when we are building and deploying (CI/CD, let's remember) from a machine that is not our own, that may not be enough. There may be a few dependencies we need from elsewhere.

The language runtimes themselves are one such dependency: to build Java I'm going to need the JDK or JRE, to build Node I'll need... well, Node, and so on. In a Docker-based environment, those languages and their core dependencies typically have an official image on Docker Hub (the JRE from Oracle or Node from Node.js, for instance). Assume, however, that I need a few other things not included in either those official Docker images or the package manager I'm using. For instance, maybe I need a CLI tool for deployment (AWS, Heroku, Firebase, etc.), a testing framework or tool like Selenium or headless Chrome, or other tools for packaging and release.

Sometimes there is a Docker image on Docker Hub for these combinations, or some of them, but not always a maintained one. One easy solution could be to just run the installation of these tools before every job that needs them. This can even be "automated" using something like the `before_script` syntax.
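A minimal sketch of that approach, assuming a Node-based job and the `firebase-tools` CLI we'll use later in this post (the job and stage names here are just placeholders):

```yaml
# Hypothetical deploy job that installs its extra tooling on every run.
deploy:
  image: node:10
  stage: deploy
  before_script:
    # Re-downloaded and reinstalled every single time this job runs.
    - npm install -g firebase-tools
  script:
    # The actual deploy step (authentication is covered later in the post).
    - firebase deploy --only hosting
```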

However, this adds time to every pipeline run and seems inefficient: is there a better way?

Enter the GitLab Docker registry

Since GitLab is a single application for the entire DevOps lifecycle, it actually ships out of the box with a built-in Docker registry. This can be a useful tool when deploying code in a containerized environment: we can build our application into a container and send it off to Kubernetes or some other Docker orchestrator.

However, I also see this registry as an opportunity to save time in my pipeline (and to save round trips to Docker Hub and back every time). For builds that require some of these extra dependencies, I like to build a "build" Docker image, so that I have an image with all of those dependencies baked right in. Then, as part of my pipeline, I can build that image at the start (either only when it changes or on every run), and the rest of the pipeline can consume it as its base image.

Putting it in practice

For example, let's see what it looks like to build a simple Docker image to use when deploying to Google Firebase.

Firebase is a "backend as a service" tool that provides a database, authentication, and other services across platforms (web, iOS, and Android). It also includes web hosting and several other pieces that can be deployed through a CLI. This makes getting started really easy: you can deploy the whole stack with `firebase deploy`, or deploy just one part (like serverless functions) with a command such as `firebase deploy --only functions`.

Making this work in a CI/CD world requires a few extra steps, though. We'll need a Node Docker image that has the `firebase` CLI in it, so let's make a simple Dockerfile to do that.

Putting this Dockerfile in `.meta/Dockerfile`:

```dockerfile
FROM node:10

RUN npm install -g firebase-tools
```
Next, I'll add a job to the front of my pipeline.

Added to the front of my `.gitlab-ci.yml`:

```yaml
meta-build-image:
  image: docker:stable
  services:
    - docker:dind
  stage: prepare
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd .meta
    - docker build -t $CI_REGISTRY/group/project/buildimage:latest .
    - docker push $CI_REGISTRY/group/project/buildimage:latest
  only:
    refs:
      - main
    changes:
      - .meta/Dockerfile
```

Let's break down that job:

  1. We use the `docker:stable` image and a service of `docker:dind`.

  2. The stage is my first stage, called `prepare`.

  3. In the script, we log in to the GitLab registry with the built-in CI/CD variables, then build and push the image. For more details, see the GitLab documentation for building Docker images.

  4. We only run this on `main` and only when `.meta/Dockerfile` changes. This makes sure we are specific about when we change the Docker image. We could also tag the image with the commit SHA or another identifier here to pin exact image versions, as sketched below.
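For instance, a sketch of that variant, using GitLab's predefined `CI_COMMIT_SHORT_SHA` variable and the same `group/project/buildimage` path as above:

```yaml
# Sketch: tag the build image with the commit SHA as well as "latest",
# so downstream jobs can pin an exact image version if they need to.
meta-build-image:
  image: docker:stable
  services:
    - docker:dind
  stage: prepare
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd .meta
    - docker build -t $CI_REGISTRY/group/project/buildimage:latest -t $CI_REGISTRY/group/project/buildimage:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY/group/project/buildimage:latest
    - docker push $CI_REGISTRY/group/project/buildimage:$CI_COMMIT_SHORT_SHA
  only:
    refs:
      - main
    changes:
      - .meta/Dockerfile
```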

Now, in jobs further down the pipeline, I can use the latest build of the Docker image like this:


```yaml
firestore:
  image: registry.gitlab.com/group/project/buildimage
  stage: deploy 🚢🇮🇹
  script:
    - firebase deploy --only firestore
  only:
    changes:
      - .firebase-config/firestore.rules
      - .firebase-config/firestore.indexes.json
```

In this job, we only run the deploy if something about the Firestore (the database from Firebase) configuration changes. And when it does, we run the `firebase deploy --only firestore` command in CI. I also added a deploy token as a GitLab CI/CD variable, based on the Firebase documentation for using the CLI with CI.
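As a sketch of how that fits together, assuming the token generated by `firebase login:ci` is stored in a CI/CD variable I've named `FIREBASE_TOKEN` (that name is my choice, not something GitLab or Firebase requires), the deploy step can pass it explicitly with the CLI's `--token` flag:

```yaml
firestore:
  image: registry.gitlab.com/group/project/buildimage
  stage: deploy 🚢🇮🇹
  script:
    # FIREBASE_TOKEN is an assumed variable name, set in the project's
    # CI/CD settings with the value generated by `firebase login:ci`.
    - firebase deploy --only firestore --token "$FIREBASE_TOKEN"
  only:
    changes:
      - .firebase-config/firestore.rules
      - .firebase-config/firestore.indexes.json
```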

Summary

In the end, this approach helps speed up pipelines by ensuring that you have a custom-built build image that you control. You don't have to rely on unstable or unmaintained Docker Hub images, or even have a Docker Hub account yourself, to get started.

To learn more about GitLab CI/CD, you can read the GitLab website or the CI/CD docs. Also, there's a lot more to learn about the GitLab Docker registry.

Cover image by Hack Capital on Unsplash.

