Tired of long CI builds slowing down your workflow? This article dives into an unconventional approach to boost your Docker builds—by skipping them altogether. Discover strategies to streamline your testing pipeline, reduce unnecessary builds, and enhance developer productivity without complex optimizations or intricate Dockerfile tweaks.
Conditional Builds Idea
We usually use the image to run the test CI job itself. This ensures the setup is equivalent: the right environment, the correct dependencies, and the same OS and Python versions.
However, what changes most of the time is the codebase, not the dependency layers that precede it. And since the GitLab CI runner already checks the current codebase out inside the job's container, we have probably built the same image several times already.
These are the conditions under which we actually need to rebuild the image:
The Dockerfile changed: this one is pretty obvious.
The dependencies changed: in this example that means pyproject.toml and poetry.lock for Python, but it could be any dependency manifest.
Enter GitLab Workflows
GitLab offers a rich feature set in the form of workflow rules, which let us adjust the pipeline, for example override variables, based on various conditions.
Step 0: Define the default CI image
For our tests, we will reference the image through a variable rather than hard-coding it in each job.
variables:
  CI_IMAGE: "$CI_REGISTRY_IMAGE:latest"
Step 1: Identify if a rebuild is needed
With the changes keyword, we can check if certain files or even folders have changed.
workflow:
  rules:
    - changes: # watch these files for any change
        - Dockerfile
        - pyproject.toml
        - poetry.lock
        - .dockerignore
      variables: # change these variables only if there will be a rebuild
        CI_IMAGE: "$CI_REGISTRY_IMAGE:ci-$CI_COMMIT_SHA"
        REBUILD: "true" # we will use this to control if builds run
    - when: always # allow all other pipelines (pass through)
Step 2: Do the CI build if necessary
If we do need to run the actual build (identified by the changes rule), we build and push the image to the path held in $CI_IMAGE, and the later test jobs will use that image instead of the default.
build:ci:
  extends: .build # the build itself will be defined later
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $REBUILD'
      when: always
The main change in the build job is that the destination must be set to the correct path!
.build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # --cache-repo helps with caching
    # IMPORTANT: --destination ensures the image gets pushed to the correct path
    - /kaniko/executor
      --cache=true
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --cache-repo $CI_REGISTRY_IMAGE/cache
      --destination $CI_IMAGE
Step 3: Use the correct image in the tests
Now, whenever we want to use this image for the tests, we simply reference $CI_IMAGE, which resolves either to the default image or to the freshly rebuilt one tagged with the commit SHA.
.test:
  stage: test
  image: $CI_IMAGE
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: always
    - when: never
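A concrete test job then only needs to extend this template and supply its own script; the job name and the pytest command below are placeholders for whatever your project actually runs:
test:unit:
  extends: .test # inherits the stage, the $CI_IMAGE image and the MR-only rules
  script:
    - pytest tests/ # hypothetical test command; replace with your own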
Step 4: Ensure other builds for “default” and prod
This can of course be adapted to your taste, but the key point is that each build job sets the $CI_IMAGE variable, which controls the path the image gets pushed to.
build:default:
  extends: .build
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $CI_COMMIT_TAG == null'
      when: always
build:tag:
  extends: .build
  variables:
    CI_IMAGE: "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  rules:
    - if: '$CI_COMMIT_TAG'
      when: always
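Putting it all together, the snippets above assume the pipeline declares the two stages they reference; a minimal sketch (adjust the names if your pipeline already defines its own stages):
stages:
  - build
  - test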