Using Reports with GitLab Runners

Nick Steele April 23rd, 2019 (Last Updated: April 23rd, 2019)

01. Background

With the Duo Labs team making more and more use of GitLab Runners for building, testing, and deployment, we’ve begun to explore how we can better construct workflows for team members deploying code. Nick Mooney did a great job of laying out a set of guidelines for deploying code, with different schemes depending on whether the code will only be viewed internally, reach a wider audience within Duo, or be published publicly.

If we’re working on a project that could become a publicized repo, even if it is only published to the greater Duo org, its quality should reflect our team’s dedication to producing great research and development projects. To support this, we should leverage GitLab Runners to a greater capacity: not just running our own tests against the codebase, but also making sure that the code and its dependencies are of good quality and are secure.

We shouldn’t, however, have to spend much time developing our own runner jobs to make this possible. GitLab provides two major ways to use pre-built CI tasks: Auto DevOps, and runner artifacts with artifact reports. The latter is what we will discuss here.

Why not use Auto DevOps?

Auto DevOps allows us to automate a large amount of the CI/CD pipeline and also include custom rules. However, Auto DevOps only supports the Google Kubernetes Engine for the time being, so until this changes we need to support a more abstract build scheme.

What Type of CI/CD Templates Exist for Runners?

A list of natively supported report types can be found here; it includes support for things like code quality checking, dependency vulnerability scanning, and license compliance checking. We can also add additional templates if we choose to.
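For instance, enabling several of these checks can be as simple as including their templates in .gitlab-ci.yml. (The template names below match GitLab’s bundled templates at the time of writing; check the template list for your GitLab version to confirm.)

include:
  - template: Code-Quality.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml
  - template: License-Management.gitlab-ci.yml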

02. GitLab Runner Jobs and Artifacts

GitLab Runners execute CI/CD jobs, primarily when new commits are pushed to a given VCS repo. When these jobs successfully complete, they can create artifacts: files or directories of files that are then attached to the job. Artifacts are helpful for a bunch of reasons, such as:

- collecting test and job results,
- building and storing media or text documents produced by a job,
- building binaries and files that are consumed by other CI/CD jobs, and
- in the same vein, creating reports that can be used to initialize other jobs or be read directly in GitLab, which we’ll talk about more below.

But first, let’s look at creating an example artifact.

In the example below, let’s say we want to include a GitLab step that produces a PDF file as the result of a script being run against our repository. In the .gitlab-ci.yml file, which defines our GitLab Runner jobs, we could define a job like this:

pdf:
  script: make_pdf my_document.txt
  artifacts:
    paths:
      - my_document.pdf
    expire_in: 1 week

In this job, we request that a runner with access to the make_pdf function take my_document.txt, which exists in our code repository, and convert it into a PDF document. The artifacts section tells the runner to store the generated PDF with its job output for one week; during that time, we are able to retrieve this PDF through the GitLab site.
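As a quick illustration of artifacts being consumed by other CI/CD jobs, a later job can list the pdf job under dependencies to fetch its artifact. (upload_pdf here is a hypothetical script, not something GitLab provides.)

publish:
  stage: deploy
  script: upload_pdf my_document.pdf
  dependencies:
    - pdf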

Templated CI/CD job reports can be created either by jobs that produce these artifacts, or by using artifact results to initialize other jobs.
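For example, a test job can publish its results as a report artifact that GitLab understands natively. Here’s a minimal sketch assuming a Python project whose runner image already has pytest available; the artifacts:reports:junit keyword tells GitLab to parse the JUnit XML output:

test:
  script: pytest --junitxml=report.xml
  artifacts:
    reports:
      junit: report.xml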

03. Reports

Reports can be generated from templated CI/CD jobs by including them in our .gitlab-ci.yml document. Each included job will then generate a report with its results. This can be as simple as adding the following to your CI YAML:

include:
  template: SAST.gitlab-ci.yml

SAST stands for Static Application Security Testing, and allows us to analyze our applications for certain known vulnerabilities. The job, once included, will then generate a list of vulnerabilities found in the repository.
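Under the hood, the template’s job has the same shape as any other report-producing job. A rough sketch of what it amounts to, where run-sast-scan is a stand-in for the template’s actual scanner invocation (the real template produces gl-sast-report.json):

sast:
  stage: test
  script: run-sast-scan --output gl-sast-report.json
  artifacts:
    reports:
      sast: gl-sast-report.json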

Requirements

Some reports require information to be available in certain locations. For example, the browser performance metrics reporter requires that the JavaScript files we are attempting to test are retrievable from inside the repository. Because of this, we need to make certain the runner, most likely on AWS, is able to retrieve this data with the permissions allotted to it.
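The shape of such a job follows the same pattern as the reports above. A minimal sketch, with measure_performance.sh standing in for whatever tooling actually produces performance.json:

performance:
  stage: test
  script: ./measure_performance.sh
  artifacts:
    reports:
      performance: performance.json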

Additionally, most artifact jobs require Docker images, or docker-in-docker, to be supported. We already support this for our VCS, but it is something to keep in mind.