Nico
Creator of this small website
Jan 15, 2018

Jenkins Job DSL and JSON


Why?

Because maintaining a Jenkins server is not necessarily rewarding, especially when it comes to creating more and more jobs that all look the same. Fortunately, if you have read my previous post, you know that the “Job DSL” can take most of that chore away. But even with the DSL, things can get repetitive and boring. A while ago, Jenkins introduced the Jenkinsfile, a file that lives in your project repository and lets you define your CI pipeline right there. There is no magic, though: you still need to define a job that scans the repository for branches and changes. That is a repetitive, error-prone and time-consuming task for the little value it brings.

The good news is that it is possible to automate the creation of these jobs and make it easy for your developers to do it themselves, using the DSL of course.
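As a quick reminder of the setup, a DSL script like the one below is usually processed by a “seed job”. A minimal sketch of such a seed job, itself written with the Job DSL (the repository URL and paths are made up for this example):

```groovy
// Hypothetical seed job: checks out the repository holding the DSL script
// and the JSON data, then processes the DSL script.
job('seed_multibranch_jobs') {
  scm {
    git('https://git.corp.company.tild/the-awesome-ops/jenkins-jobs.git')
  }
  steps {
    dsl {
      external('jobs/multibranch_jobs.groovy')
      removeAction('DELETE') // drop jobs for repos removed from the JSON
    }
  }
}
```

The `removeAction` part is optional but convenient: when a repository disappears from the data source, the corresponding job is deleted instead of lingering forever.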

Some code

Since the DSL gives us the full power of Groovy, we are going to let Groovy do the work. All the jobs are really similar, and writing the same code over and over is not what makes this job interesting. We also want to pull the data used to build the jobs from a source that is not code, but a data structure. Things are kept simple here and a JSON file is used as the data source. It is a local file, but you could just as well pull the data from a remote API.

With the following JSON file:

{
  "organizations": [
    {
      "name": "our_developers",
      "user": "build_user",
      "repositories": [
        "project1",
        "project2",
        "project3",
        "project4",
      ]
    },
    {
      "name": "the-awesome-ops",
      "user": "my_user",
      "repositories": [
        "repo1",
        "repo2",
        "repo3"
      ]
    }
  ]
}

And the associated Groovy script:

import groovy.json.JsonSlurper

// readFileFromWorkspace is provided by the Job DSL plugin and resolves the
// path relative to the seed job workspace, even when the job runs on an agent
def config = new JsonSlurper().parseText(readFileFromWorkspace('data/multibranch_jobs.json'))

config["organizations"].each { org ->
  org["repositories"].each { repo ->
    multibranchPipelineJob("${org["name"]}_${repo}") {
      orphanedItemStrategy {
        discardOldItems {
          daysToKeep(365) // keep one year
        }
      }

      branchSources {
        github {
          id("${org["name"]}_${repo}")                    // create a unique ID, see below
          apiUri('https://git.corp.company.tild/api/v3')  // this is for a GH:E, no need to specify for public GH
          scanCredentialsId("${org['user']}_git_token")   // this is the ID of the token in jenkins credentials
          repoOwner(org['name'])
          repository(repo)
        }
      }
    }
  }
}
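As mentioned above, the data does not have to come from a local file. A minimal sketch of the remote-API variant (the URL is made up; add authentication as your API requires):

```groovy
import groovy.json.JsonSlurper

// Hypothetical endpoint serving the same JSON structure as the local file
def apiUrl = 'https://config.corp.company.tild/jenkins/multibranch_jobs.json'
def config = new JsonSlurper().parseText(new URL(apiUrl).text)
```

Everything after the `config` assignment stays exactly the same; only the data source changes.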

Small gotcha

You need to specify the id part of the branch source when you work with the Job DSL. If you don’t, it is randomly generated each time the Job DSL script is processed, and Jenkins loses track of what has already been done. You then have to rescan the repository after every change, which defeats most of the automation. This is referenced in this Jira ticket
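To make the gotcha concrete, this is the shape of the anti-pattern (values taken from the JSON above, shown inline for illustration):

```groovy
branchSources {
  github {
    // BAD: no id() call. A random id is generated on every seed run, so
    // Jenkins treats the branch source as brand new and triggers a full rescan.
    scanCredentialsId('build_user_git_token')
    repoOwner('our_developers')
    repository('project1')
  }
}
```

Any id works as long as it is stable across seed-job runs; deriving it from the organization and repository names, as the script above does, is the simplest option.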