Building a CI/CD Pipeline in Space

In the previous post we learned how to push artifacts to a package repository. Let's assume that we've implemented the first use cases for our backend application, together with some unit tests. Now imagine that two engineers join the project, and you want to practice continuous integration from the beginning of your greenfield project, because it is one of the best practices of high-performing teams.

To get a CI/CD pipeline, if you're not so lucky, you need to create a ticket describing everything upfront, join a few meetings, and wait until somebody with the right permissions starts working on it. If you're lucky, your team hosts the source code and the package(s) of your application on JetBrains Space, and you can just start working on it right away.

Open Space, navigate to your project and click on "Jobs" in the menu on the left. With the "Create .space.kts" button you can get started easily. Space automation is based on jobs implemented in Kotlin. If you're not into Kotlin yet, no worries: Space generates a "hello world" job for you and offers many examples that you can simply apply and tweak for your own needs.

Maven is not (yet) a first-class citizen of Space automation like Gradle is, but the documentation gives examples that can be used right away. For our application, the first version of our automation script looks like this:

job("Build and run tests") { 
   container("maven:3.6.3-openjdk-15-slim") { 
       shellScript { 
           content = """ 
	            mvn clean install 
           """ 
       } 
   } 
}

This is quite self-explanatory and means:

  • The job with the name “Build and run tests”
  • should create a container from the image “maven:3.6.3-openjdk-15-slim”
  • and run “mvn clean install”

As soon as this is committed to the main branch, the job launches, and two minutes later we've got our first successful build:

By the way, if you prefer to use the Maven Wrapper, just use a plain JDK Docker image and call the `./mvnw clean install` command in the shellScript part:

job("Build and run tests") {
   container("openjdk:15-alpine") {
       shellScript {
           content = """
              ./mvnw clean install
           """
       }
   }
}

Jobs and Steps

An automation script consists of jobs, and a job consists of steps, so the simplest automation script has one job with one step, which is basically what we have here. A script can contain multiple jobs (up to 100), and all jobs in one script run in parallel. One job can contain up to 50 steps, and steps can be configured to run in parallel or in sequence. Currently there are some restrictions regarding the environment in which steps are executed. The documentation under https://www.jetbrains.com/help/space/jobs-and-actions.html#main-features-of-jobs-and-steps gives more details.
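
As a hypothetical illustration (the job name, images and Maven goals are placeholders, and the parallel block is the mechanism described in the documentation for running steps side by side), a job with two parallel steps could look roughly like this:

job("Run checks") {
   // hypothetical example: two container steps wrapped in a parallel block
   parallel {
       container("maven:3.6.3-openjdk-15-slim") {
           shellScript {
               content = """
                   mvn test
               """
           }
       }
       container("maven:3.6.3-openjdk-15-slim") {
           shellScript {
               content = """
                   mvn javadoc:javadoc
               """
           }
       }
   }
}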

 

Publishing the generated artifact

Usually a build pipeline delivers an artifact that you can publish somewhere for further processing, or for your peers if you work on a library. Since we've already configured our maven package repository, we need to change our script to push the resulting .jar file.

As we've seen in the post about how to connect to a package repository, you need a settings.xml file. Usually you have one persisted on your local machine, which Maven uses to retrieve the credentials for the repository to publish to. Since the containers where the build jobs run are ephemeral, the only place to store these settings is the source repository itself. So you need to create a settings.xml in which you configure the username and password as placeholders that are replaced at runtime.

<settings>
   <servers>
       <server>
           <id>nordhof-demo-space-maven</id>
           <!-- provide credentials via the command-line args: -->
           <!-- 'spaceUsername' and 'spacePassword' -->
           <username>${spaceUsername}</username>
           <password>${spacePassword}</password>
       </server>
   </servers>
</settings>

Next we need to adapt the build script in order to publish the artifact.

job("Build, run tests, publish") {
   container("openjdk:15-alpine") {
       shellScript {
           content = """
              echo Build and run tests...
               ./mvnw clean install
               echo Publish artifacts...
               ./mvnw versions:set -DnewVersion=${'$'}JB_SPACE_EXECUTION_NUMBER
               ./mvnw deploy -s settings.xml \
                   -DspaceUsername=${'$'}JB_SPACE_CLIENT_ID \
                   -DspacePassword=${'$'}JB_SPACE_CLIENT_SECRET
           """
       }
   }
}

We should have a closer look at two commands:

./mvnw versions:set -DnewVersion=${'$'}JB_SPACE_EXECUTION_NUMBER

sets the version of the .jar file. The ${'$'} placeholder is the Kotlin way of writing a literal $ inside a raw string, so the shell sees $JB_SPACE_EXECUTION_NUMBER and replaces it with the value of this environment variable, which is provided by Space.

./mvnw deploy -s settings.xml \
                   -DspaceUsername=${'$'}JB_SPACE_CLIENT_ID \
                   -DspacePassword=${'$'}JB_SPACE_CLIENT_SECRET

is used to publish the artifact to the maven repository. With the `-s` option we tell Maven to use the settings.xml file we have in our repository, and with `-DspaceUsername=${'$'}JB_SPACE_CLIENT_ID` we replace the placeholder `spaceUsername` from the settings.xml file with the value of the JB_SPACE_CLIENT_ID environment variable. JB_SPACE_CLIENT_ID and JB_SPACE_CLIENT_SECRET hold the credentials the job uses to authenticate against various Space modules such as package repositories. (The server id in the settings.xml has to match the repository id in the <distributionManagement> section of the pom.xml, which we configured in the previous post.)

There are many environment variables which might be helpful in the build pipeline, e.g. JB_SPACE_GIT_REVISION. Have a look at the documentation (https://www.jetbrains.com/help/space/automation-environment-variables.html) to find out more.
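
As a tiny, hypothetical illustration, a job could simply echo some of these values in its shell script:

job("Print build metadata") {
   container("openjdk:15-alpine") {
       shellScript {
           content = """
               # hypothetical job that only prints Space-provided environment variables
               echo Execution number: ${'$'}JB_SPACE_EXECUTION_NUMBER
               echo Git revision: ${'$'}JB_SPACE_GIT_REVISION
           """
       }
   }
}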

After those changes are pushed, the job starts automatically, and if we did everything right, there is a new .jar file in our maven package repository:

 

Building and publishing a docker image

Since we're planning to deploy Docker containers, we also want a Docker image as a result of our build process. Therefore we need to add a new step to our build job:

docker {
   beforeBuildScript {
       content = """
           echo Copy files from previous step
           cp -r /mnt/space/share docker
       """
   }
   build {
       context = "docker"
       labels["vendor"] = "nordhof-demo"
   }
   push("nordhof-demo.registry.jetbrains.space/p/demo/containers/hello-world-backend") {
       tag = "0.0.\$JB_SPACE_EXECUTION_NUMBER"
   }
}

docker is a special step, actually a special container with Docker installed, that is used to build and publish Docker images. Let's look at the details:

  • In the “beforeBuildScript” section, we copy files that the previous step shares via a file share (https://www.jetbrains.com/help/space/sharing-execution-context.html#accessing-file-share-directly)
  • In the build section the docker build is executed with the path to the docker context.
  • In the push section the docker push command is executed. With the tag configuration, it’s possible to set a certain tag. In our example here, we use the JB_SPACE_EXECUTION_NUMBER to set a specific version.

Further documentation about building Docker images in Space automation can be found under https://www.jetbrains.com/help/space/docker.html.
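
The Dockerfile itself is not shown in this post. Assuming the build produces a single executable Spring Boot .jar in the target directory that is copied into the build context, a minimal sketch could look like this (image and file names are placeholders):

# hypothetical Dockerfile for the Spring Boot backend
FROM openjdk:15-alpine
# the build context contains the target directory copied from the previous step
COPY target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]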

The entire automation script now looks like this:

job("Build, tests, publish jar, publish docker") {
   container("openjdk:15-alpine") {
       shellScript {
           content = """
              echo Build and run tests...
               ./mvnw clean test
               echo Publish artifacts...
               ./mvnw versions:set -DnewVersion=${'$'}JB_SPACE_EXECUTION_NUMBER
               ./mvnw deploy -s settings.xml \
                   -DspaceUsername=${'$'}JB_SPACE_CLIENT_ID \
                   -DspacePassword=${'$'}JB_SPACE_CLIENT_SECRET
               cp -rv target /mnt/space/share
               cp -v Dockerfile /mnt/space/share
           """
       }
   }
   docker {
       beforeBuildScript {
           content = """
               echo Copy files from previous step
               cp -r /mnt/space/share docker
           """
       }
       build {
           context = "docker"
           labels["vendor"] = "nordhof-demo"
       }
       push("nordhof-demo.registry.jetbrains.space/p/demo/containers/hello-world-backend") {
           tag = "0.0.\$JB_SPACE_EXECUTION_NUMBER"
       }
   }
}

Note the changes in the Maven step compared to the initial version in this post. We changed the build command from "./mvnw clean install" to "./mvnw clean test", because "test" doesn't create a .jar file. With "install" we would end up with two .jar files in the target directory: one with the version configured in the pom.xml, and one with the version we set via "./mvnw versions:set", which is built by "./mvnw deploy". It is also necessary to use the "cp" commands to share the target directory and the Dockerfile with the next step.

As soon as this script is pushed, it successfully

  • builds the jar file
  • pushes it to our maven package repository
  • builds the docker image
  • pushes the docker image to our docker repository

And finally, there is our newly created Docker image in the container registry of our project in Space.

 

Conclusion

With just a few lines of code, developers are able to craft the build pipeline for their repositories. This enables teams to work autonomously, and with Docker at their disposal they have plenty of options to do whatever their application needs.

There's a lot more to explore about Space Automation in general. For example, there is an interesting blog post by Maarten Balliauw on how to optimize routine workflows using Space Automation (https://blog.jetbrains.com/space/2021/01/18/using-space-automation-to-optimize-routine-workflows/).

Creating package repositories in Space

As a small company or team building online services, you usually do not want to run your own application for storing artifacts like .jar files or Docker images. This is something that should be co-located with the source code repositories and should just be available. Even though there are some open-source solutions, setting up and running such a storage system takes time and resources.

This is where JetBrains Space comes in very handy. Package repositories, aka Packages, are part of a project in Space, so you can configure dedicated package repositories for each project. Currently it is possible to use the following package repository types:

  • Container registry: for Docker images, OCI images and Helm charts
  • Maven repository: for .jar, .klib, .pom, .war files
  • NuGet Feed: for NuGet packages
  • npm registry: for npm packages

Let’s assume we are building a Spring Boot based backend application. Since the deployment artifact of such an application is a .jar file, we want to push it to a maven repository.

Creating a maven package repository and publishing a .jar file

The first thing you need to do is open JetBrains Space and navigate to your project. On the left you will find the "Packages" menu, which leads to an empty page from where you can create a new repository.

Then select “Maven Repository” from the list of types.

Afterwards, a good name needs to be set, as well as some other settings:

Now the maven repository is ready to use. In order to publish artifacts from your local machine, a few configurations have to be set. The "Get started" button opens a popup and provides the settings you need to configure.

The access token, which I obfuscated in the screenshot (I've done my best ;-)), can easily be added by pushing the "Generate personal token" button. This is also quite handy, because otherwise you would have to go to your settings, find the "Access tokens" page, create a new access token and copy it from there.

If you want to publish artifacts from your local machine, go to the "Publish" section and copy the <distributionManagement> settings.
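
They look roughly like the following sketch; the URL is only a placeholder (take the real one from your repository's "Publish" section), and the repository id has to match the <server> id in your settings.xml:

<!-- sketch only: replace the URL with the one shown in the "Publish" section -->
<distributionManagement>
   <repository>
       <id>nordhof-demo-space-maven</id>
       <url>https://maven.pkg.jetbrains.space/YOUR-ORG/p/YOUR-PROJECT/nordhof-demo-space-maven</url>
   </repository>
</distributionManagement>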

As soon as this is done, you can run

mvn deploy

and the artifact will be published successfully.

If you publish a library that is supposed to be used in other applications, you just need to configure the repository as in the “Connect” section and define the dependency. That’s it.
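
For the consuming project this boils down to two pom.xml snippets, sketched here with placeholder coordinates and the same placeholder URL as above:

<!-- sketch only: in the <repositories> section, the URL comes from the "Connect" section -->
<repositories>
   <repository>
       <id>nordhof-demo-space-maven</id>
       <url>https://maven.pkg.jetbrains.space/YOUR-ORG/p/YOUR-PROJECT/nordhof-demo-space-maven</url>
   </repository>
</repositories>

<!-- sketch only: in the <dependencies> section; groupId/artifactId/version are placeholders -->
<dependency>
   <groupId>com.example</groupId>
   <artifactId>your-library</artifactId>
   <version>1.0.0</version>
</dependency>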

Creating a container registry and publishing a docker image

At some point you usually want to create a Docker image for your application. As soon as you have the Dockerfile in place, it's time to create a container registry. On the "Packages" page, use the "New repository" button again, this time selecting the container registry type.

The settings are quite similar to the ones of our maven repository, and again, as soon as you have created the registry, the "Get started" button helps you with your local configuration.

Depending on the tool you select (Docker or Helm), you see what you must do to connect, consume or publish artifacts from your local machine.

Applying this to our backend application, we need to 

docker login nordhof-demo.registry.jetbrains.space -u moldaschl -p $PW_DOCKER_NORDHOF_DEMO

in order to connect to our newly created container registry. Then we build the image:

docker build -t nordhof-demo.registry.jetbrains.space/p/demo/containers/hello-world-backend:latest .

and push it to our registry:

docker push nordhof-demo.registry.jetbrains.space/p/demo/containers/hello-world-backend:latest

Having pushed everything, our package repositories look like this:

So we have our artifacts ready for further processing.

Conclusion

As a Java backend engineer I was able to set up a Maven repository and a Docker registry where my team can publish and consume artifacts. With JetBrains Space this is really easy, and it works like a charm. Being able to easily set up such services for software engineers makes your team autonomous, which is an essential characteristic of high-performing teams.