Setting up integration tests in GitLab CI
As the lead developer for the Website Template Project, maintaining quality is a top priority for me. We recently came to a point in our growth at which our regular manual testing was getting costly. Our quality efforts needed to scale. It was time to automate our website tests.
In this article, I’ll review a basic setup for automated integration tests. The process is fairly straightforward, but there are a couple of gotchas that this article might help you avoid. To make it even easier, here’s a little repo with a simple example that we’ll use throughout the discussion.
• Overview
• Prerequisites
• Setup
• Docker setup
— Test server docker setup
— Test runner docker setup
— Docker Compose setup
• Running tests locally
• GitLab CI setup
• Wrapping up
Overview
The example consists of a GatsbyJS server to test and a simple integration test to run against it.
hello-world/
integration-tests/
We want to ensure the tests are scalable and keep the environment consistent no matter where we run the tests. We’ll use Docker to spin up both the test server and test runner, and manage the communication between them.
We just need to set that up, then get it running in GitLab CI.
Here are the files directly relevant to the Docker and CI setups:
# Docker setup
hello-world/Dockerfile
hello-world/.dockerignore
integration-tests/Dockerfile
integration-tests/.dockerignore

# Docker Compose setup
docker-compose.yaml

# GitLab CI
.gitlab-ci.yml
Prerequisites
It’s a good idea to have some basic knowledge of the following.
- Cucumber. We’ll be using cucumber-js with selenium for our integration tests (there’s a small illustrative step definition right after this list). What’s most important for this discussion is that we’re running integration tests that require talking to a server. Everything we’re doing could easily be adapted to test an API, for example.
- Docker. We’re going to run everything in little controlled environments. For this purpose, you can liken Docker containers to virtual machines.
- Docker Compose. Makes it easier to work with Docker containers and manage the communication between them.
- GatsbyJS. A frontend framework that we’ll use to kick off a server. You could use Flask or another preferred way of serving web pages.
- GitLab CI/CD. Use a simple yaml file to test, deploy, build, and perform other tasks that you’d rather leave to machines.
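If you haven’t used cucumber-js with Selenium before, a step definition looks roughly like the sketch below. It assumes a recent cucumber-js (@cucumber/cucumber) and selenium-webdriver; the file name, step wording, and page content are made up for illustration and aren’t copied from the example repo.

// features/step_definitions/homepage.steps.js (illustrative name, not the repo's actual file)
const assert = require("assert");
const { Given, Then, After } = require("@cucumber/cucumber");
const { Builder, By, until } = require("selenium-webdriver");

Given("I open the home page", async function () {
  // TEST_SERVER is the environment variable we set up later in this article
  this.driver = await new Builder().forBrowser("firefox").build();
  await this.driver.get(process.env.TEST_SERVER || "http://localhost:9000");
});

Then("I should see the heading {string}", async function (expected) {
  // Wait for the first h1 to appear, then compare its text
  const heading = await this.driver.wait(until.elementLocated(By.css("h1")), 5000);
  assert.strictEqual(await heading.getText(), expected);
});

After(async function () {
  if (this.driver) await this.driver.quit();
});

The mechanics of Cucumber itself aren’t the focus here; what matters is that the steps drive a real browser against a running server.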
Setup
- Fork this repo or copy it into a blank GitLab project.
- Install Docker.
- Install Docker Compose.
If you want to run the server or tests outside of docker, you’ll need to take a few additional steps. See the README if you’d like to do that.
The README also contains a brief “Troubleshooting” section that can help you overcome common hurdles.
Docker setup
Test server docker setup
First, let’s look at the test server’s Dockerfile:
# hello-world/Dockerfile

# Use node as the base image
FROM node

# Copy in the server code
RUN mkdir -p /test-server
COPY . /test-server
WORKDIR /test-server

# Set up the server
RUN npm install
RUN npm install -g gatsby-cli
RUN gatsby build

# Run the server
CMD gatsby serve --host 0.0.0.0
Let’s go over that one piece at a time.
# Use node as the base image
FROM node
We use npm to install our server’s node modules and set up Gatsby’s CLI, so we base our Docker image off of the standard node image. If your server uses Flask or Django, you could use a Python image instead.
# Copy in the server code
RUN mkdir -p /test-server
COPY . /test-server
WORKDIR /test-server
This section just copies everything in hello-world/ into the image, then sets /test-server/ as the directory where we’ll run our commands.
# Set up the server
RUN npm install
RUN npm install -g gatsby-cli
RUN gatsby build
Once we have our server code in the image, we need to install node modules and build the public-facing files. If you’re using a Python-based server like Flask or Django, this phase would involve something like pip install -r requirements.txt instead. Similarly for other servers.
# Run the server
CMD gatsby serve --host 0.0.0.0
This is how to spin up a production server with GatsbyJS. Note the --host 0.0.0.0 part. This makes the server accessible to other devices on the network. It’s crucial for our setup because the test runner needs to talk to our server. Other servers have similar flags or settings.
Test runner docker setup
Next is the test runner Dockerfile.
# integration-tests/Dockerfile

# Use node as our base image
FROM node

# Install Firefox
RUN apt-get update && apt-get install -y firefox-esr

# Set up the test runner code
RUN mkdir -p /test-runner
COPY . /test-runner
WORKDIR /test-runner
RUN npm install

# Install Firefox webdriver (geckodriver)
# https://github.com/mozilla/geckodriver/releases
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.29.0/geckodriver-v0.29.0-linux64.tar.gz
RUN tar xvzf geckodriver-v0.29.0-linux64.tar.gz
ENV PATH="${PATH}:/test-runner"

# Run browser tests in headless mode
ENV HEADLESS=true

# The current setup relies on docker-compose to set TEST_SERVER

# Execute tests
CMD npm run test
Again, let’s take it step-by-step.
# Use node as our base image
FROM node
We’re using node as our base image again because the integration tests are written in JavaScript. If your tests are written in Ruby, for example, you’d choose a Ruby image.
# Install Firefox
RUN apt-get update && apt-get install -y firefox-esr
This is a standard Debian-style install command. Note that I chose Firefox ESR, the Extended Support Release. You can choose whatever version you want your code to support. If you want to test with a different browser, like Chrome, you can install that, instead.
# Set up the test runner code
RUN mkdir -p /test-runner
COPY . /test-runner
WORKDIR /test-runner
RUN npm install
This section copies the integration tests into the image, sets the entry directory to the code’s location, and installs the node modules. As usual, adjust the installation command for your test code. If your tests are written in Ruby, you might use a bundle install.
# Install Firefox webdriver (geckodriver)
# https://github.com/mozilla/geckodriver/releases
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.29.0/geckodriver-v0.29.0-linux64.tar.gz
RUN tar xvzf geckodriver-v0.29.0-linux64.tar.gz
ENV PATH="${PATH}:/test-runner"
In order to run integration tests against a browser using Selenium, you’ll need the web driver corresponding to that browser. The code above downloads the web driver for Firefox, decompresses it in the same directory as the test code, and adds that directory to the PATH. There are different web drivers corresponding to different browsers, so choose the one for the browser(s) you want to test with.
# Run browser tests in headless mode
ENV HEADLESS=true
This line sets an environment variable that we use to control whether the browser runs in headless mode. See this section of the cucumber setup code. I always run in headless mode through Docker, but sometimes I like to watch the tests click through the site if I’m running the tests directly on my local machine.
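For a concrete picture of what that toggle can look like, here’s a rough sketch of a driver factory that honors HEADLESS. It assumes selenium-webdriver’s Firefox options and is illustrative, not a copy of the repo’s actual support code.

// Illustrative sketch: build a Firefox driver, headless when HEADLESS=true.
const { Builder } = require("selenium-webdriver");
const firefox = require("selenium-webdriver/firefox");

function buildDriver() {
  const options = new firefox.Options();
  if (process.env.HEADLESS === "true") {
    options.addArguments("--headless"); // no visible browser window
  }
  return new Builder().forBrowser("firefox").setFirefoxOptions(options).build();
}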
# The current setup relies on docker-compose to set TEST_SERVER
The location and name of the test server change depending on the context. In the Docker context, we direct our tests towards http://test-server:9000. Outside of the little Docker park, we use http://localhost:9000 if running the server locally, or something like https://remote-server.com/ to target a remote server. [Note that the port may vary depending on your server. 9000 is the default port for Gatsby’s production server.] We’ll use Docker Compose to set the TEST_SERVER variable for the Docker context. If running the tests on your local machine, you can use TEST_SERVER=http://localhost:9000 npm run test. See the README for details.
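In the test code itself, picking up that variable can be as simple as the snippet below (a sketch with a hypothetical module name, not the repo’s exact code):

// config.js (hypothetical name): resolve the server under test.
// Docker Compose sets TEST_SERVER=http://test-server:9000; locally you might
// export TEST_SERVER=http://localhost:9000, or point it at a remote deployment.
const TEST_SERVER = process.env.TEST_SERVER || "http://localhost:9000";
module.exports = { TEST_SERVER };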
# Execute tests
CMD npm run test
Does what it says. The command is set to run Cucumber here.
Docker Compose setup
Here are the contents of docker-compose.yaml:
services:
  test-server:
    image: test-server
    build: ./hello-world
    ports:
      - "9000:9000"
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:9000/"]
      interval: 1s
      timeout: 5s
      retries: 5

  test-runner:
    image: test-runner
    build: ./integration-tests
    environment:
      TEST_SERVER: "http://test-server:9000"
    depends_on:
      test-server:
        condition: service_healthy
That sets up a couple of services, test-server and test-runner, which can interact via a default network.
For test-server, we tell Docker Compose where to find the Dockerfile and code, and that we want to use port 9000 (Gatsby’s default production server port). We also define a healthcheck so Docker Compose understands what it means for the server to be healthy.
For test-runner, we again start by telling Docker Compose where to find the Dockerfile and code. We then set up the TEST_SERVER environment variable, as discussed in the previous section. Finally, we state that the test runner depends on the server being healthy.
Heads up: I still had to set a timeout for Cucumber tests to pass on slower hosts like the cute little GitLab runners. So there’s some room to optimize the healthcheck.
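If you hit the same issue, one way to buy headroom (a sketch, assuming cucumber-js) is to raise the default step timeout in a support file:

// Illustrative: cucumber-js defaults to a 5 second step timeout, which a slow
// runner can exceed while the browser starts up or the first page loads.
const { setDefaultTimeout } = require("@cucumber/cucumber");
setDefaultTimeout(30 * 1000); // 30 seconds; tune to taste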
Running tests locally
So far, we’ve handled the Docker and Docker Compose setup. Execute the following from within the project dir:
docker-compose up --exit-code-from test-runner
That kicks off docker containers for the test server and test runner, according to the specifications we gave in docker-compose.yaml. The command will exit when test-runner finishes, and with the same exit code, so we’ll know if the tests fail. You can check the exit code with echo $? right after running the tests, or do something like:
docker-compose up --exit-code-from test-runner && \
echo "Yay success" || \
echo "It's on fire, yo"
See the README if you’d like to run the tests outside of the Docker context. If you run into trouble, the README has a little troubleshooting section.
GitLab CI setup
GitLab CI will automatically pick up instructions in .gitlab-ci.yml:
stages:
  - test

integration tests:
  stage: test
  image: docker/compose:1.27.4
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
  script:
    - docker-compose up --exit-code-from test-runner
This just sets up an integration test step. It’s easy to incorporate this step into a CI that includes other steps.
First, we set the Docker Compose image in our .gitlab-ci.yml. Do not use latest. I tried and found myself with an older, incompatible version. You can set the version to whatever works for you locally. Check your local version with:
docker-compose version
Next, we need to equip the test stage with Docker-in-Docker (dind). We do so by setting the service and related Docker variables as shown. See GitLab’s documentation for more info.
Finally, the script executed is the same one we used to run the tests locally:
docker-compose up --exit-code-from test-runner
Since that command exits with the same exit code as test-runner, the CI step will fail if the tests fail, which is the desired behavior.
Wrapping up
That’s it! Now you should be able to push changes to the project and watch the tests run.