Category Archives: Jenkins

Programmatically configure Jenkins pipeline shared groovy libraries

If you’re a Jenkins user adopting Jenkins pipelines, it is likely that you have run in to the scenario where you want to break out common functionality from your pipeline scripts, instead of having to duplicate it in multiple pipeline definitions. To cater for this scenario, Jenkins provides the Pipeline Shared Groovy Libraries plugin.
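Once a library is configured, a pipeline script can load it explicitly with the @Library annotation (or rely on it implicitly, as in the configuration below). Here is a minimal sketch, where deployToEnvironment is a hypothetical custom step defined in the library's vars/ directory:

@Library('my-shared-library') _
node {
    // call a step provided by the shared library (hypothetical example step)
    deployToEnvironment('staging')
}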

You can configure your shared global libraries through the Jenkins UI (Manage Jenkins > Configure System), which is what most resources online seem to suggest. But if you are like me and looking to automate things, you do not want to configure your shared global libraries through the UI, but rather do it programmatically. This post aims to describe how to automate the configuration of your shared global libraries.

Jenkins stores plugin configuration in its home directory as XML files. The expected configuration file name for the Shared Groovy Libraries plugin (version 2.8) is: org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml

Here’s an example configuration to add an implicitly loaded global library, checked out from a particular Git repository:

<?xml version="1.0" encoding="UTF-8"?>
<org.jenkinsci.plugins.workflow.libs.GlobalLibraries plugin="workflow-cps-global-lib@2.8">
    <libraries>
        <org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
            <name>my-shared-library</name>
            <retriever class="org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever">
                <scm class="jenkins.plugins.git.GitSCMSource" plugin="git@3.4.1">
                    <id>7356ba0d-a25f-4f61-8c56-b3c565a39929</id>
                    <remote>ssh://git@github.com/Diabol/jenkins-pipeline-shared-library-template.git</remote>
                    <credentialsId>ssh</credentialsId>
                    <traits>
                        <jenkins.plugins.git.traits.BranchDiscoveryTrait/>
                    </traits>
                </scm>
            </retriever>
            <defaultVersion>master</defaultVersion>
            <implicit>true</implicit>
            <allowVersionOverride>true</allowVersionOverride>
        </org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
    </libraries>
</org.jenkinsci.plugins.workflow.libs.GlobalLibraries>

You can also find a copy/paste friendly version of the above configuration on GitHub: https://gist.github.com/tommysdk/e752486bf7e0eeec0ed3dd32e56f61a4

If your version control system requires authentication, e.g. SSH credentials, create those credentials in Jenkins and use the associated IDs in your org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml.
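If you want to script the credentials setup as well, a Groovy init script (placed in $JENKINS_HOME/init.groovy.d/) can create them at startup. A sketch, assuming the credentials and ssh-credentials plugins are installed; the key file path is purely illustrative:

import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey

// Create an SSH private key credential with id 'ssh', matching <credentialsId> above
def sshKey = new BasicSSHUserPrivateKey(
        CredentialsScope.GLOBAL,
        'ssh',                // credentials id
        'git',                // username
        new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource(
                new File('/run/secrets/jenkins-ssh-key').text),  // illustrative key location
        null,                 // no passphrase
        'SSH key for shared library checkout')

SystemCredentialsProvider.instance.credentials.add(sshKey)
SystemCredentialsProvider.instance.save()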

Store the XML configuration file in your Jenkins home directory (echo "$JENKINS_HOME") and start your Jenkins master for it to load the configuration.
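Alternatively, you can skip the XML file altogether and apply the same configuration with a Groovy init script in $JENKINS_HOME/init.groovy.d/. A sketch using the same classes that appear in the XML above (the GitSCMSource constructor below matches the git plugin 3.x line; other plugin versions use setters instead):

import org.jenkinsci.plugins.workflow.libs.GlobalLibraries
import org.jenkinsci.plugins.workflow.libs.LibraryConfiguration
import org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever
import jenkins.plugins.git.GitSCMSource

// Source repository for the shared library, authenticated with the 'ssh' credentials
def source = new GitSCMSource('my-shared-library',
        'ssh://git@github.com/Diabol/jenkins-pipeline-shared-library-template.git',
        'ssh', '*', '', false)

def library = new LibraryConfiguration('my-shared-library', new SCMSourceRetriever(source))
library.defaultVersion = 'master'
library.implicit = true
library.allowVersionOverride = true

GlobalLibraries.get().libraries = [library]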

If you are running Jenkins with Docker, adding the configuration is just a copy directive in your Dockerfile:

FROM jenkins:2.60.1
...
COPY org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml $JENKINS_HOME/org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml

Jenkins environment used: 2.60.1, Pipeline Shared Groovy Libraries plugin 2.8

Tommy Tynjä
@tommysdk
http://www.diabol.se

How to use the Delivery Pipeline plugin with Jenkins pipelines

Since we’ve just released support for Jenkins pipelines in the Delivery Pipeline plugin (you can read more about it here), this article aims to describe how to use the Delivery Pipeline plugin from your Jenkins pipelines.

First, you need to have a Jenkins pipeline or a multibranch pipeline. Here is an example configuration:
node {
  stage('Commit stage') {
    echo 'Compiling'
    sleep 2

    echo 'Running unit tests'
    sleep 2
  }

  stage 'Test'
  echo 'Running component tests'
  sleep 2

  stage('Deploy') {
    echo 'Deploy to environment'
    sleep 2
  }
}

You can group your stages either with or without curly braces, as shown in the example above, depending on which version of the Pipeline plugin you use. The stages in the pipeline configuration are used for the visualization in the Delivery Pipeline views.

To create a new view, press the + tab on the Jenkins landing page as shown below.
Jenkins landing page.

Give your view a name and select “Delivery Pipeline View for Jenkins Pipelines”.
Create new view.

Configure the view according to your needs, and specify which pipeline project to visualize.
Configure view.

Now your view has been created!
Delivery Pipeline view created.

If you selected the option to “Enable start of new pipeline build”, you can trigger a new pipeline run by clicking the build button to the right of the pipeline name. Otherwise you can navigate to your pipeline and start it from there. Now you will see how the Delivery Pipeline plugin renders your pipeline run! Since the view is not able to evaluate the pipeline in advance, the pipeline will be rendered according to the progress of the current run.
Jenkins pipeline run in the Delivery Pipeline view.

If you prefer to model each stage in a more fine-grained fashion, you can specify tasks for each stage. Example:
node {
  stage 'Build'
  task 'Compile'
  echo 'Compiling'
  sleep 1

  task 'Unit test'
  sleep 1

  stage 'Test'
  task 'Component tests'
  echo 'Running component tests'
  sleep 1

  task 'Integration tests'
  echo 'Running integration tests'
  sleep 1

  stage 'Deploy'
  task 'Deploy to UAT'
  echo 'Deploy to UAT environment'
  sleep 1

  task 'Deploy to production'
  echo 'Deploy to production'
  sleep 1
}

Which will render the following pipeline view:
Jenkins pipeline run in the Delivery Pipeline view.

The pipeline view also supports a full screen view, optimized for visualization on information radiators:
Delivery Pipeline view in full screen mode.

That’s it!

If you are making use of Jenkins pipelines, we hope that you will give the Delivery Pipeline plugin a spin!

To continue to drive adoption and development of the Delivery Pipeline plugin, we rely on feedback and contributions from our users. For more information about the project and how to contribute, please visit the project page on GitHub: https://github.com/Diabol/delivery-pipeline-plugin

We use the official Jenkins JIRA to track issues, using the component label “delivery-pipeline-plugin”: https://issues.jenkins-ci.org/browse/JENKINS

Enjoy!

Tommy Tynjä
@tommysdk
http://www.diabol.se

Announcing Delivery Pipeline Plugin 1.0.0 with support for Jenkins pipelines

We are pleased to announce the first version of the Delivery Pipeline Plugin for Jenkins (1.0.0), which includes support for Jenkins pipelines! Even though Jenkins pipelines have been around for a few years (formerly known as the Workflow plugin), it is during the last year that the Pipeline plugin has started to gain momentum in the community.

The Delivery Pipeline plugin is a mature project that was primarily developed with one purpose in mind: to allow visualization of Continuous Delivery pipelines, suitable for display on information radiators. We believe that a pipeline view should be easy to understand, yet contain all the information needed to follow a certain code change through the pipeline, without having to click through a UI. Although there are alternatives to the Delivery Pipeline plugin, there is not yet a plugin as well suited for delivery pipeline visualization on big screens as the Delivery Pipeline plugin.

Delivery Pipeline View using a Jenkins Pipeline.

Delivery Pipeline plugin has a solid user base and a great community that helps us to drive the project forward, giving feedback, feature suggestions and code contributions. Until now, the project has supported the creation of pipelines using traditional Jenkins jobs with downstream/upstream dependencies with support for more complex pipelines using e.g. forks and joins.

Now, we are pleased to release a first version of the Delivery Pipeline plugin with basic support for Jenkins pipelines. This feature has been developed and tested together with a handful of community contributors. We couldn’t have done it without you! The Jenkins pipeline support still has some known limitations, but we felt that the time was right to release a first version that could get traction and usage out in the community. To continue to drive adoption and development we rely on feedback and contributions from the community. So if you are already using Jenkins pipelines, please give the new Delivery Pipeline plugin a try. It gives your pipelines the same look and feel you have grown accustomed to from the Delivery Pipeline plugin. We welcome all feedback and contributions!

Other big features in the 1.0.0 version include the ability to show which upstream project triggered a given pipeline (issue JENKINS-22283), updated style sheets, and Java 7 and Jenkins 1.642.3 as the minimum supported versions.

For more information about the project and how to contribute, please visit the project page on GitHub: https://github.com/Diabol/delivery-pipeline-plugin

Tommy Tynjä
@tommysdk
http://www.diabol.se

The power of Jenkins JobDSL

At my current client we define all of our Jenkins jobs with Jenkins Job Builder so that we can have all of our job configurations version controlled. Every push to the version control repository which hosts the configuration files will trigger an update of the jobs in Jenkins so that we can make sure that the state in version control is always reflected in our Jenkins setup.

We have six jobs that are essentially the same for each of our projects but with only a slight change of the input parameters to each job. These jobs are responsible for deploying the application binaries to different environments.

Jenkins Job Builder supports templating, but our experience is that such templates usually become quite hard to maintain in the long run. And since it is just YAML files, you are quite restricted in what you are able to do. JobDSL Jenkins job configurations, on the other hand, are built using Groovy. This allows you to insert actual code execution in your job configuration scripts, which can be very powerful.

In our case, with JobDSL we can create one common job configuration and simply iterate over it with the job-specific parameters to create the necessary jobs for each environment. We can also use utility methods written in Groovy, which we can invoke in our job configurations. So instead of having to maintain similar Jenkins Job Builder configurations in YAML, we can do it much more concisely with JobDSL.

Below, an example of a JobDSL configuration file (Groovy code), which generates six jobs according to the parameterized job template:

// Helper methods to derive display names from an environment qualifier such as "us-staging"
class Utils {
    static String environment(String qualifier) { qualifier.substring(0, qualifier.indexOf('-')).toUpperCase() }
    static String environmentType(String qualifier) { qualifier.substring(qualifier.indexOf('-') + 1) }
}

// One deploy job is generated for each environment qualifier in this list
[
    [qualifier: "us-staging"],
    [qualifier: "eu-staging"],
    [qualifier: "us-uat"],
    [qualifier: "eu-uat"],
    [qualifier: "us-live"],
    [qualifier: "eu-live"]
].each { Map environment ->

    job("myproject-deploy-${environment.qualifier}") {
        description "Deploy my project to ${environment.qualifier}"
        parameters {
            stringParam('GIT_SHA', null, null)
        }
        scm {
            git {
                remote {
                    url('ssh://git@my_git_repo/myproject.git')
                    credentials('jenkins-credentials')
                }
                branch('$GIT_SHA')
            }
        }
        deliveryPipelineConfiguration("Deploy to " + Utils.environmentType("${environment.qualifier}"),
                Utils.environment("${environment.qualifier}") + " environment")
        logRotator(-1, 30, -1, -1)
        steps {
            shell("""deployment/azure/deploy.sh ${environment.qualifier}""")
        }
    }
}

If you need help getting your Jenkins configuration into good shape, just contact us and we will be happy to help you! You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

Slow Jenkins Start up – Think twice before updating files in a loop

Some time ago I spent a day trying to figure out why our Jenkins master was so slow (~30 min) to start up. We have around 1000 jobs and about 100 plugins. The large number of jobs makes us hit performance issues that are never a problem in smaller installations. The number of plugins makes the list of possible culprits long. Furthermore, it might be a combination of plugins causing the problem. And we may in fact have several separate issues.

Template Project + Job Config History = @#*$%*!?!

One issue that we have found is the combination of the Template Project and Job Config History plugins. The Template Project plugin implements ItemListener.onLoaded() and in a loop updates (twice) all projects using it (and we use it a lot). However, this seems to be a workaround that never(?) actually does any real work. Since each job is updated, this triggers the Job Config History plugin (and possibly others listening for the event), which of course the Template Project plugin didn’t account for.

The Job Config History plugin potentially writes to disk several times for each job when triggered. Disk I/O is, as every programmer should know, relatively slow, but one write operation per job would be acceptable at startup if there were no other way. HOWEVER, there is a sleep 500 ms statement to avoid some clashing when writing to disk. 500 ms is an eon in the computer world! Disk operations are normally a lot faster than that. 50 ms would be a more reasonable value, provided that you can’t avoid the sleep call completely.

1000 jobs x 2 calls/job x 500 ms = 1000 s ≈ 17 minutes!
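In pseudo-Groovy, the pattern looks something like this (a hypothetical sketch, not the actual plugin code):

import jenkins.model.Jenkins

void onLoaded() {
    Jenkins.instance.allItems.each { job ->   // ~1000 jobs
        job.save()                            // fires onChange listeners...
        job.save()                            // ...twice per job
    }
}

void onChange(item) {
    Thread.sleep(500)        // 500 ms to 'avoid clashing' when writing to disk
    writeHistoryCopy(item)   // hypothetical disk write per configuration change
}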

Oops! Well, that explains a large part of our slow startup time. The funny thing is that load, disk I/O, CPU and memory usage are all low; we’re mostly just waiting. I had to do thread analysis to find this problem.

I reported JENKINS-24915. It is still marked unresolved, but may of course have been resolved anyway. I haven’t tested recently, since we chose to uninstall the Job Config History plugin, even though we like it, and you could argue that it’s the Template Project plugin that is the real culprit.

Summary

Jenkins loads all jobs on startup, which is reasonable. But if, for a large number of jobs, this triggers a slow operation, then all hell can break loose.

In general, you should think twice before you implement remote or disk calls in a loop. Specifically for Jenkins, doing it in an event method that potentially is called for all jobs is not a good idea.

A comment on ‘Continuous Delivery Pipelines: GoCD vs Jenkins’

In Continuous Delivery Pipelines: GoCD vs Jenkins, there are some good points on modeling continuous delivery, but to make his point, that GoCD is better (at CD) than Jenkins, the author chooses to represent Jenkins with the Build Pipeline Plugin rather than the Delivery Pipeline Plugin by Diabol, which has superior visualization. I haven’t used GoCD and I would like to find the time to take a deeper look at it. As far as I can gather, it’s a competent tool, quite possibly better at CD than Jenkins, but I have to say that it is quite possible to do CD and pipelines in Jenkins; I am doing it as I write.

His post brings up some interesting points, like: do we need pipelines as first class citizens rather than just ‘visual doodles’ (in my tools)? I am a bit skeptical. Pipelines are indeed central in CD, but my thoughts on what’s lacking in Jenkins go a bit differently. I think we need a set of tools to do CD and pipelines.

Pipelines should probably be expressed outside of the (build/CI/deploy) tool, because they will span several domains and abstraction levels. There will be a need for different visualizations of the pipeline state and configuration. In fact, what matters at run-time (as opposed to design-time) is not the pipeline itself, but questions like:

  • Where is my commit?
    • Example: It’s passed commit phase and has been deployed to the first environment where automated tests are running
  • What is the state of this environment?
    • Example: It has version 1.4.123 of app X stemming from commit Y and version 1.3.567 of the VM template stemming from commit Z.

The pipeline will also be subjected to continuous improvements. Then, in design-time (and debug-time), there are questions like:

  • What are the up-stream dependencies of this job?
  • Where is this artifact produced?

Your tools should help you answer questions like that, simply and quickly, by good visualization and design.

Diabol now a Cloudbees partner

Diabol is now a Jenkins gold service partner to Cloudbees: Diabol AB partner page Cloudbees.com

Cloudbees, the ‘Jenkins Enterprise company’, is a continuous delivery (CD) leader. They provide solutions that enable IT organizations to respond rapidly to the software delivery needs of the business. Their offerings are powered by Jenkins CI, the world’s most popular open source continuous integration (CI) server. The CloudBees CD Platform provides a range of solutions for use on-premise and in the cloud that meet the security, scalability and manageability needs of enterprises. Their solutions support many of the world’s largest and most business-critical deployments.

Diabol is proud to collaborate with Cloudbees.

Agile Configuration Management – part 1

On June 5 I held a lightning talk on Agile Configuration Management at the Agila Sverige 2014 conference. The 10 minute format does not allow for digging very deep. In this series of blog posts I expand on this topic.

For the last year I have led a long journey towards agile and devopsy automated configuration management for infrastructure at my client, a medium sized IT department. It’s part of a larger initiative of moving towards mature continuous delivery. We sit in a team that traditionally has had the responsibility to maintain the test environments, but as part of the CD initiative we’ve been pushing to transform this into providing and maintaining a delivery platform for all environments.

The infrastructure part was initiated when we were to set up a new system and had a lot of machines to configure for that. Here was a golden window of opportunity to introduce modern configuration management (CM) automation tools. Note that nobody asked us to do this, it was just the only decent thing to do. Consequently, nobody told us what tools to use and how to do it.

The requirement was thus to configure the servers up to the point where our delivery pipeline, implemented with Jenkins, could deploy the applications, and then to maintain them. The main challenge was that we needed to support a large number of Java web applications with slightly different configuration requirements.

Goals

So we set out to find tools and build a framework that would support agile and devopsy CM. We’re building something PaaS-like. More specifically the goals we set up were:

  1. Self service model
    It’s important not to create a new silo. We want the developers to be able to get their work done without involving us. There is no configuration manager or other command-and-control function. The developers are already doing application CM, it’s just not acknowledged as CM.
  2. Infrastructure as Code
    This means that all configuration for servers is managed and versioned together as code, and that the code, and only the code, can affect the configuration of the infrastructure. When we do this we can apply all the good practices we know well from software development, such as unit testing, collaboration, diff, merge, etc.
  3. Short lead times for changes
    Short means minutes to hours rather than weeks. Who wants to wait 5 days rather than 5 minutes to see the effect of a change? Speeding up the feedback cycle is the most important factor for being able to experiment, learn and get things done.

Project phases

Our journey had different phases, each with their special context, goals and challenges.

1. Bootstrap

At the outset we address a few systems and use cases. The environments are addressed one after the other. The goal is to build up knowledge and create drafts for frameworks. We evaluate some, but not all, tools. Focus is on getting something simple working. We look at Puppet and Ansible but go for the former, as Ansible was very new and not yet at 1.0. The support systems, such as the Puppet master, are still manually managed.

We use a centralized development model in this phase. There are few committers. We create a svn repository for the Puppet code and the code is all managed together, although we luckily realize already at this point that it must be structured and modularized, inspired by Craig Dunn’s blog post.

2. Scaling up

We address more systems and the production environment. This leads to the framework expanding to handle more variations in use cases. There are more committers now, as some phase-one early adopters are starting to contribute. It’s a community development model. The code is still shared between all teams, but as outlined below each team deploys independently.

The framework is a moving target and the best way to not become legacy is to keep moving:

  • We increase automation, e.g. the puppet installations are managed with Ansible.
  • We migrate from svn to git.
  • Hiera is introduced to separate code and data for puppet.
  • Full pipelines per system are implemented in Jenkins.
    We use the Puppet dynamic environments pattern, keep the Puppet agent daemon stopped, and use Ansible to trigger a puppet agent run from the Jenkins job, so that we can update the systems independently (see the sketch below).
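As a sketch of what that looks like (hypothetical system and host group names, expressed here in JobDSL for brevity), the TEST apply job for one system boils down to a shell step that lets Ansible run the agent once on that system's hosts:

job('systemx-apply-test') {
    steps {
        // Run the puppet agent once on the systemx TEST hosts, since the daemon is stopped
        shell('ansible systemx-test -m command -a "puppet agent --onetime --no-daemonize"')
    }
}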

The Pipeline

As continuous delivery consultants, we of course wanted to build a pipeline for the infrastructure changes that we could be proud of.

Steps

  1. Static checks (Parse, Validate syntax, Compile)
  2. Apply to CI (for all systems)
  3. Apply to TEST (for given system)
  4. Dry-run (–noop) in PROD (for given system)
  5. PROD Release notes/request generation (for given system)
  6. Apply in PROD (for given system)

The first two steps are automatic and executed for all systems on each commit. Then the pipeline forks, and the remaining steps are triggered manually, per system.
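As an illustration of the first step (hypothetical names, again in JobDSL), the static checks amount to validating and linting the Puppet code on each commit:

job('puppet-static-checks') {
    scm {
        git {
            remote {
                url('ssh://git@example.com/puppet.git')  // illustrative repository
            }
        }
    }
    triggers {
        scm('* * * * *')  // poll for new commits
    }
    steps {
        shell('puppet parser validate manifests/')          // syntax check
        shell('puppet-lint --fail-on-warnings manifests/')  // style and validity checks
    }
}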

Complexity increases

We’re doing well, but the complexity has increased. There is some coupling, in that the code base is monolithic and shared between several teams/systems. There are upsides to this: everyone benefits from improvements and additions. We also had to structure the code base early on, rather than end up with a big ball of mud that solves only one use case.

Another form of coupling is that some servers (e.g. load balancers) are shared which forces us to implement blocks in the Jenkins apply jobs so that they do not collide.

There is some unfamiliarity with the development model, so there is some uncertainty about the responsibilities: who tests and deploys what, and when? Most developers, including my team, are also largely unfamiliar with how to test infrastructure.

Looking at our pipeline we could tell that something was not quite right:

Puppet Complex Pipeline in Jenkins

In the next part of this blog series we will see how we addressed these challenges in phase 3: Increase independence and quality.

Recent blogs about the Delivery Pipeline plugin

The Delivery Pipeline plugin from Diabol is getting some traction, now with over 600 installations. Here are some recent blog posts about it.

The first one is from none other than Mr Jenkins himself, Kohsuke Kawaguchi, together with Andrew Phillips, VP of Products at XebiaLabs:

InfoQ: Orchestrating Your Delivery Pipelines with Jenkins

The second is about a first experience with the Jenkins/Hudson Build and Delivery Pipeline plugins:

Oracle SOA / Java blog: The Jenkins Build and Delivery Pipeline plugins

Marcus Philip
@marcus_phi

Introducing Delivery Pipeline Plugin for Jenkins

In Continuous Delivery, visualisation is one of the most important areas. When using Jenkins as a build server, it is now possible with the Delivery Pipeline Plugin to visualise one or more delivery pipelines in the same view, even in full screen. Perfect for information radiators.

The plugin uses the upstream/downstream dependencies of jobs to visualize the pipelines.

Fullscreen view
Work view

A pipeline consists of several stages; usually one stage will be the same as one job in Jenkins. But for a pipeline consisting of, for example, build, unit test, packaging and analysis steps, the pipeline can become quite long if every Jenkins job is a stage. So in the Delivery Pipeline Plugin it is possible to group several jobs into the same stage, calling the individual Jenkins jobs tasks instead, as in the sketch below.
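For traditional jobs, this grouping is configured per job with a stage name and a task name. A sketch in JobDSL (hypothetical job names), placing a build job and a unit test job in the same 'Commit' stage as two tasks, linked with a downstream trigger:

job('myapp-build') {
    deliveryPipelineConfiguration('Commit', 'Build')   // stage, task
    publishers {
        downstream('myapp-unit-test')
    }
}

job('myapp-unit-test') {
    deliveryPipelineConfiguration('Commit', 'Unit test')
}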

The version shown in the header is the version/display name of the first Jenkins job in the pipeline, so the first job has to define the version.

The plugin also has the possibility to show what we call an Aggregated View, which shows the latest execution of every stage and displays the version for that stage.