Automate docker builds using Jenkins

Vedarth Sharma
Oct 25, 2020

Docker images are very useful for keeping ready-to-use environments in a container. Instead of going through the hassle of figuring out the dependencies and paths you need to set up on your system, you can simply pull the docker image of that environment and use it to play around with the tech stack you want to try. It is pretty convenient, but as a docker image provider you have a responsibility that can be very inconvenient: making sure that your latest tagged image actually contains the latest stable versions of the packages your environment uses. That is not so difficult if you are only talking about a single docker image, but once that number crosses 100 it becomes a real problem. Doing it manually is not an option, so automation is the way to go.

One way to automate a task like this is with Jenkins. The idea is to build docker images at a scheduled interval, which ensures that the docker images pushed to the artifactory are always up to date. So that's what I did!

Jenkins provides a lot of flexibility in how a pipeline can be configured. The preferred method is to use a `Jenkinsfile`. This file contains the instructions for the different stages of the pipeline. For our use case those stages will be: getting the source code, building the docker images, pushing the built docker images, and finally removing all the docker images left behind.

A Pipeline-type job is the perfect fit for this use case.

There are a few best practices that we should follow while writing a Jenkinsfile:

Use Groovy code just to connect steps

Instead of adding multiple steps to the pipeline, try to achieve more with a single command.

Pipelines shouldn’t contain complex logic

Groovy code executes on and takes up the resources of the controller. The following are the most common Groovy methods to avoid using (a sketch of an alternative follows the list):

  1. JsonSlurper: this loads the local file into memory on the controller twice.
  2. HttpRequest: the response is stored in memory twice, and the controller has to make the request.
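
For instance, rather than parsing a JSON file with JsonSlurper on the controller, the parsing can be pushed down to the agent with a single shell step. This is only a rough sketch: the file name and the `jq` filter are placeholders, and `jq` is assumed to be installed on the agent.

    // Avoid: parses the file in controller memory.
    // def version = new groovy.json.JsonSlurper().parseText(readFile('config.json')).version

    // Prefer: let the agent do the parsing and hand back only the small result.
    def version = sh(returnStdout: true, script: "jq -r '.version' config.json").trim()
    echo "Building version ${version}"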

Reducing repetition of similar Pipeline steps

Try to combine steps as much as possible to reduce the load on the controller.
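
For example, several consecutive shell steps can usually be collapsed into one. The commands below are purely illustrative:

    // Avoid: three separate steps, each tracked by the controller.
    // sh 'make deps'
    // sh 'make test'
    // sh 'make build'

    // Prefer: one step doing the same work.
    sh '''
        make deps
        make test
        make build
    '''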

Avoiding calls to the Jenkins API

Avoid accessing the Jenkins API from the Jenkinsfile; instead, consider building a plugin that acts as a wrapper for that API.

Do not override built-in Pipeline steps

Avoid overriding built-in Pipeline steps: an overridden step is prone to break after an API update, for example.

Avoiding large global variable declaration files

Unreasonably large global variable declaration files are a waste of memory.

Avoiding very large shared libraries

These can lead to increased memory overhead and slower execution times.

Jenkins has a plugin for docker that can be used to execute commands like build and push. Jenkins also provides a credential manager, which lets the user store credentials on the Jenkins server and use them in the Jenkinsfile for that pipeline.
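
As a minimal sketch of how the two fit together, assuming a credentials entry with the ID 'dockerhub-creds' has already been created in the credential manager (both the ID and the image name below are placeholders):

    node {
        // Build the image with the Docker Pipeline plugin.
        def app = docker.build('user/image_name')
        // withRegistry logs in with the stored credentials for the duration of the block;
        // 'dockerhub-creds' is a placeholder credentials ID.
        docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-creds') {
            app.push('latest')
        }
    }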

You can set up different environments as different slave nodes for your master node. This is especially useful if you want to run pipelines concurrently on different slave nodes.

For setting up the project, the GitHub repo needs to be configured by creating a webhook for Jenkins. Once that is done, the repo can be used by Jenkins.

Although docker images can be built from a shell script as well, for pushing them with stored credentials the docker plugin for Jenkins makes more sense.
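
For comparison, pushing purely from a shell step means wiring up the login by hand, for example with the credentials-binding step. This is just a sketch; 'dockerhub-creds' and the image name are placeholders:

    // Shell-only route: bind the stored username/password credential to
    // environment variables and log in manually.
    withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                      usernameVariable: 'DOCKER_USER',
                                      passwordVariable: 'DOCKER_PASS')]) {
        sh '''
            echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
            docker build -t user/image_name .
            docker push user/image_name
        '''
    }

The docker plugin's withRegistry block handles the login and logout for you, which is why it is the cleaner option here.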

Build triggers can be set to any schedule using the cron time format. Usually somewhere between one week and one month is enough, but it can even be made daily if the requirement calls for it.
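
In a scripted pipeline, one way to declare such a schedule is through the properties step. The weekly schedule below is just an example:

    // Run this pipeline once a week, on Sundays during the 03:00 hour.
    // 'H' lets Jenkins pick the exact minute to spread the load.
    properties([
        pipelineTriggers([cron('H 3 * * 0')])
    ])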

Refer to the example below to get a better idea of what the Jenkinsfile will look like.

node('project') {
    // Declared outside the stages so the image built in one stage
    // is still visible in the Push stage.
    def app
    // ID of the registry credentials stored in Jenkins' credential manager
    // (placeholder value).
    def registryCredential = 'docker-registry-credentials'

    stage('Clone repository') {
        checkout scm
    }
    stage('Build image') {
        // Resolve the Docker installation configured in Jenkins' global tool configuration.
        DOCKER_HOME = tool "docker"
        app = docker.build("user/image_name")
    }
    stage('Push') {
        // withRegistry logs in with the stored credentials; an empty URL defaults to Docker Hub.
        docker.withRegistry('', registryCredential) {
            app.push()
        }
    }
    stage('Clean') {
        sh "docker rmi user/image_name"
    }
}

All in all, Jenkins is an amazing tool for automation.
