Category Archives: Pipelines

Programmatically configure Jenkins pipeline shared groovy libraries

If you're a Jenkins user adopting Jenkins pipelines, it is likely that you have run into the scenario where you want to break out common functionality from your pipeline scripts instead of having to duplicate it in multiple pipeline definitions. To cater for this scenario, Jenkins provides the Pipeline Shared Groovy Libraries plugin.

You can configure your shared global libraries through the Jenkins UI (Manage Jenkins > Configure System), which is what most resources online seem to suggest. But if you are like me and looking to automate things, you do not want to configure your shared global libraries through the UI but rather do it programmatically. This post aims to describe how to automate the configuration of your shared global libraries.

Jenkins stores plugin configuration in its home directory as XML files. The expected configuration file name for the Shared Groovy Libraries plugin (version 2.8) is: org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml

Here’s an example configuration to add an implicitly loaded global library, checked out from a particular Git repository:

<?xml version='1.0' encoding='UTF-8'?>
<org.jenkinsci.plugins.workflow.libs.GlobalLibraries plugin="workflow-cps-global-lib@2.8">
    <libraries>
        <org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
            <name>my-shared-library</name>
            <retriever class="org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever">
                <scm class="jenkins.plugins.git.GitSCMSource" plugin="git@3.4.1">
                    <id>7356ba0d-a25f-4f61-8c56-b3c565a39929</id>
                    <remote>ssh://git@github.com:Diabol/jenkins-pipeline-shared-library-template.git</remote>
                    <credentialsId>ssh</credentialsId>
                    <traits>
                        <jenkins.plugins.git.traits.BranchDiscoveryTrait/>
                    </traits>
                </scm>
            </retriever>
            <defaultVersion>master</defaultVersion>
            <implicit>true</implicit>
            <allowVersionOverride>true</allowVersionOverride>
        </org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
    </libraries>
</org.jenkinsci.plugins.workflow.libs.GlobalLibraries>

You can also find a copy/paste friendly version of the above configuration on GitHub: https://gist.github.com/tommysdk/e752486bf7e0eeec0ed3dd32e56f61a4

If your version control system requires authentication, e.g. ssh credentials, create those credentials and use the associated IDs in your org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml.
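
If you want to automate the credential creation as well, one option is a Groovy init script placed in $JENKINS_HOME/init.groovy.d/. The sketch below is an assumption on my part (it is not part of the Shared Groovy Libraries plugin itself) and relies on the credentials and ssh-credentials plugins; the id 'ssh' matches the <credentialsId> used in the XML above, while the user name and key material are placeholders:

import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.SystemCredentialsProvider
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey

// Create an SSH private key credential with id "ssh" (placeholder key material)
def sshKey = new BasicSSHUserPrivateKey(
        CredentialsScope.GLOBAL,
        'ssh',                 // id, referenced by <credentialsId> in the library configuration
        'git',                 // ssh user name
        new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource('-----BEGIN RSA PRIVATE KEY----- ...'),
        '',                    // passphrase
        'SSH key for shared library checkout')

def provider = SystemCredentialsProvider.getInstance()
provider.getCredentials().add(sshKey)
provider.save()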

Store the XML configuration file in your Jenkins home directory (echo "$JENKINS_HOME") and start your Jenkins master for it to load the configuration.

If you are running Jenkins with Docker, adding the configuration is just a copy directive in your Dockerfile:

FROM jenkins:2.60.1
...
COPY org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml $JENKINS_HOME/org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml
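
Once Jenkins has loaded the configuration, your pipeline scripts can use the library directly, without a @Library annotation, since it is configured as implicit. A minimal sketch of a Jenkinsfile, where sayHello is a hypothetical step (vars/sayHello.groovy) used for illustration and not necessarily part of the referenced template repository:

node {
  stage('Build') {
    // Resolved from vars/sayHello.groovy in the implicitly loaded shared library
    sayHello 'world'
  }
}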

Jenkins environment used: 2.60.1, Pipeline Shared Groovy Libraries plugin 2.8

Tommy Tynjä
@tommysdk
http://www.diabol.se

How to use the Delivery Pipeline plugin with Jenkins pipelines

Since we've just released support for Jenkins pipelines in the Delivery Pipeline plugin (you can read more about it in the announcement further down this page), this article aims to describe how to use the Delivery Pipeline plugin from your Jenkins pipelines.

First, you need to have a Jenkins pipeline or a multibranch pipeline. Here is an example configuration:
node {
  stage('Commit stage') {
    echo 'Compiling'
    sleep 2

    echo 'Running unit tests'
    sleep 2
  }

  stage 'Test'
  echo 'Running component tests'
  sleep 2

  stage('Deploy') {
    echo 'Deploy to environment'
    sleep 2
  }
}

You can group your stages either with or without curly braces, as shown in the example above, depending on which version of the Pipeline plugin you use. The stages in the pipeline configuration are used for the visualization in the Delivery Pipeline views.

To create a new view, press the + tab on the Jenkins landing page as shown below.
Jenkins landing page.

Give your view a name and select "Delivery Pipeline View for Jenkins Pipelines".
Create new view.

Configure the view according to your needs, and specify which pipeline project to visualize.
Configure view.

Now your view has been created!
Delivery Pipeline view created.

If you selected the option to “Enable start of new pipeline build”, you can trigger a new pipeline run by clicking the build button to the right of the pipeline name. Otherwise you can navigate to your pipeline and start it from there. Now you will see how the Delivery Pipeline plugin renders your pipeline run! Since the view is not able to evaluate the pipeline in advance, the pipeline will be rendered according to the progress of the current run.
Jenkins pipeline run in the Delivery Pipeline view.

If you prefer to model each stage in a more fine-grained fashion, you can specify tasks for each stage. Example:
node {
  stage 'Build'
  task 'Compile'
  echo 'Compiling'
  sleep 1

  task 'Unit test'
  sleep 1

  stage 'Test'
  task 'Component tests'
  echo 'Running component tests'
  sleep 1

  task 'Integration tests'
  echo 'Running integration tests'
  sleep 1

  stage 'Deploy'
  task 'Deploy to UAT'
  echo 'Deploy to UAT environment'
  sleep 1

  task 'Deploy to production'
  echo 'Deploy to production'
  sleep 1
}

This will render the following pipeline view:
Jenkins pipeline run in the Delivery Pipeline view.

The pipeline view also supports a full screen view, optimized for visualization on information radiators:
Delivery Pipeline view in full screen mode.

That’s it!

If you are making use of Jenkins pipelines, we hope that you will give the Delivery Pipeline plugin a spin!

To continue to drive adoption and development of the Delivery Pipeline plugin, we rely on feedback and contributions from our users. For more information about the project and how to contribute, please visit the project page on GitHub: https://github.com/Diabol/delivery-pipeline-plugin

We use the official Jenkins JIRA to track issues, using the component label “delivery-pipeline-plugin”: https://issues.jenkins-ci.org/browse/JENKINS

Enjoy!

Tommy Tynjä
@tommysdk
http://www.diabol.se

Announcing Delivery Pipeline Plugin 1.0.0 with support for Jenkins pipelines

We are pleased to announce the first version of the Delivery Pipeline Plugin for Jenkins (1.0.0), which includes support for Jenkins pipelines! Even though Jenkins pipelines have been around for a few years (formerly known as the Workflow plugin), it is during the last year that the Pipeline plugin has started to gain momentum in the community.

The Delivery Pipeline plugin is a mature project that was primarily developed with one purpose in mind: to allow visualization of Continuous Delivery pipelines, suitable for display on information radiators. We believe that a pipeline view should be easy to understand, yet contain all the information necessary to follow a certain code change through the pipeline, without the need to click through a UI. Although there are alternatives to the Delivery Pipeline plugin, there is not yet a plugin as well suited for delivery pipeline visualization on big screens as the Delivery Pipeline plugin.

Delivery Pipeline View using a Jenkins Pipeline.

The Delivery Pipeline plugin has a solid user base and a great community that helps us drive the project forward with feedback, feature suggestions and code contributions. Until now, the project has supported the creation of pipelines using traditional Jenkins jobs with downstream/upstream dependencies, with support for more complex pipelines using e.g. forks and joins.

Now, we are pleased to release a first version of the Delivery Pipeline plugin with basic support for Jenkins pipelines. This feature has been developed and tested together with a handful of community contributors. We couldn't have done it without you! The Jenkins pipeline support still has some known limitations, but we felt the time was right to release a first version that could gain traction and usage in the community. To continue to drive adoption and development we rely on feedback and contributions from the community. So if you are already using Jenkins pipelines, please give the new Delivery Pipeline plugin a try. It gives your Jenkins pipelines the same look and feel you have grown accustomed to from the Delivery Pipeline plugin. We welcome all feedback and contributions!

Other big features in the 1.0.0 version include the ability to show which upstream project triggered a given pipeline (issue JENKINS-22283), updated style sheets, and a requirement on Java 7 and Jenkins 1.642.3 as the minimum supported core version.

For more information about the project and how to contribute, please visit the project page on GitHub: https://github.com/Diabol/delivery-pipeline-plugin

Tommy Tynjä
@tommysdk
http://www.diabol.se

Delivery pipelines with GoCD

At my most recent assignment, one of my missions was to set up delivery pipelines for a bunch of services built in Java and some front-end apps. When I started they already had some rudimentary automated builds using Hudson, but we really wanted to start from scratch. We decided to give GoCD a chance because it pretty much satisfied our demands. We wanted something that could:

  • orchestrate a deployment pipeline (Build -> CI -> QA -> Prod)
  • run on-premise
  • deploy to production in a secure way
  • show a pipeline dashboard
  • handle user authentication and authorization
  • support fan-in

GoCD is the open source "continuous delivery server" backed by ThoughtWorks (also famous for Selenium).

GoCD key concepts

A pipeline in GoCD consists of stages which are executed in sequence. A stage contains jobs which are executed in parallel. A job contains tasks which are executed in sequence. The smallest schedulable unit is the job, which is executed by an agent. An agent is a Java process that normally runs on its own separate machine. An idling agent fetches jobs from the GoCD server and executes their tasks. A pipeline is triggered by a "material", which can be any of the following version control systems:

  • Git
  • Subversion
  • Mercurial
  • Team Foundation Server
  • Perforce

A pipeline can also be triggered by a dependency material, i.e. an upstream pipeline. A pipeline can have more than one material.

Technically you can put the whole delivery pipeline (CI -> QA -> PROD) inside one pipeline as stages, but there are several reasons why you would want to split it into separate chained pipelines. The most obvious reason for doing so is the GoCD concept of environments. A pipeline is the smallest entity that you can place inside an environment. The concept of environments is explained in more detail in the security section below.

You can logically group pipelines in "pipeline groups", so in a typical setup for a micro-service you might have a pipeline group containing the following pipelines:

example-server-pl

A pipeline group can be used as a view and as a placeholder for access rights. You can give a specific user or role view, execution or admin rights to all pipelines within a pipeline group.

To avoid repeating yourself, GoCD offers the possibility to create templates, which are parameterized workflows that can be reused. Typically you could have templates for:

  • building (maven, static code analysis)
  • deploying to a test environment
  • deploying to a production environment

For more details on concepts in GoCD, see: https://docs.go.cd/current/introduction/concepts_in_go.html

Administration

Installation

The GoCD server and agents are bundled as rpm, deb, Windows and OS X install packages. We decided to install them using Puppet (https://forge.puppet.com/unibet/go) since we already had Puppet in place. One nice feature is that agents auto-upgrade when the GoCD server is upgraded, so in the normal case you only need to care about GoCD server upgrades.

User management

We use the LDAP integration to authenticate users in GoCD. When a user defined in LDAP logs in for the first time, they are automatically registered as a new user in GoCD. If you use role based authorization, an admin user needs to assign roles to the new user.

Disk space management

All artifacts created by pipelines are stored on the GoCD server, and you will sooner or later face the fact that disks are getting full. We have used the tool "GoCD janitor", which analyses the value stream (the collection of chained upstream and downstream pipelines) and automatically removes artifacts that won't make it to production.

Security

One of the major concerns when deploying to production is the handling of deployment secrets such as ssh keys. At my client they use Ansible extensively as a deployment tool, so we need the ability to handle ssh keys on the agents in a secure way. It's quite obvious that you don't want to use the same ssh keys in test and production, so GoCD has a feature called environments for this purpose. You can place an agent and a pipeline in an environment (ci, qa, production), so that anytime a production pipeline is triggered it will run on an agent in the production environment.

There is also the possibility to store encrypted secrets within a pipeline configuration using so-called secure variables. A secure variable can be used within a pipeline like an environment variable, with the only difference that it's encrypted with a shared secret stored on the GoCD server. We have not used this feature much since we solved this issue in other ways. You can also define secure variables at the environment level, so that a pipeline running in a specific environment will inherit all secure variables defined in that environment.

Pipelines as code

This was one of the features GoCD was lacking at the very beginning, but at the same time there were API endpoints for creating and managing pipelines. Since GoCD version 16.7.0 there is support for "pipelines as code" via the yaml-plugin or the json-plugin. Unfortunately, templates are not supported, which can lead to duplicated code in your pipeline configuration repo(s).

For further reading please refer to: https://docs.go.cd/current/advanced_usage/pipelines_as_code.html

Example

Let's wrap it up with a fully working example where the key concepts explained above are used. In this example we will set up a deployment pipeline (BUILD -> QA -> PROD) for a Dropwizard application. It will also set up a basic example where fan-in is used. In that example you will notice that the downstream pipeline "prod" won't be triggered unless both "test" and "perf-test" have finished. We use the concept of "pipelines as code" to configure the deployment pipeline using "gocd-plumber". GoCD-plumber is a tool written in Go (by me, by the way), which uses the GoCD API to create pipelines from yaml code. In contrast to the yaml-plugin and json-plugin, it does support templates by merging a pipeline hash over a template hash.
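
To illustrate what merging a pipeline hash over a template hash means (this is only a conceptual sketch, not gocd-plumber's actual implementation, which is written in Go), the pipeline's keys simply win over the template's defaults. In Groovy terms:

// Conceptual sketch: pipeline-specific values override template defaults on key collisions
def template = [group: 'example', environment: 'qa', timeout: 30]
def pipeline = [name: 'example-server-pl', environment: 'prod']

def merged = template + pipeline   // the right-hand map wins for duplicate keys
assert merged == [group: 'example', environment: 'prod', timeout: 30, name: 'example-server-pl']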

Preparations

This example requires Docker, so if you don’t have it installed, please install it first.

  1. git clone https://github.com/dennisgranath/gocd-docker.git
  2. cd gocd-docker
  3. docker-compose up
  4. go to http://127.0.0.1:8153 and log in as 'admin' with password 'badger'
  5. Press the play button on the "create-pipelines" pipeline
  6. Press the pause button to "unpause" the pipelines

The power of Jenkins JobDSL

At my current client we define all of our Jenkins jobs with Jenkins Job Builder so that we can have all of our job configurations version controlled. Every push to the version control repository which hosts the configuration files will trigger an update of the jobs in Jenkins so that we can make sure that the state in version control is always reflected in our Jenkins setup.

We have six jobs that are essentially the same for each of our projects but with only a slight change of the input parameters to each job. These jobs are responsible for deploying the application binaries to different environments.

Jenkins Job Builder supports templating, but our experience is that those templates usually become quite hard to maintain in the long run. And since it is just YAML files, you are quite restricted in what you are able to do. Jenkins JobDSL job configurations, on the other hand, are written in Groovy. This allows you to insert actual code execution in your job configuration scripts, which can be very powerful.

In our case, with JobDSL we can create one common job configuration which we then iterate over with the job specific parameters to create the necessary jobs for each environment. We can also use utility methods written in Groovy which we invoke in our job configurations. So instead of having to maintain similar Jenkins Job Builder configurations in YAML, we can do it much more concisely with JobDSL.

Below, an example of a JobDSL configuration file (Groovy code), which generates six jobs according to the parameterized job template:

// Helper methods to split a qualifier such as "us-staging" into "US" and "staging"
class Utils {
    static String environment(String qualifier) { qualifier.substring(0, qualifier.indexOf('-')).toUpperCase() }
    static String environmentType(String qualifier) { qualifier.substring(qualifier.indexOf('-') + 1) }
}

[
    [qualifier: "us-staging"],
    [qualifier: "eu-staging"],
    [qualifier: "us-uat"],
    [qualifier: "eu-uat"],
    [qualifier: "us-live"],
    [qualifier: "eu-live"]
].each { Map environment ->

    job("myproject-deploy-${environment.qualifier}") {
        description "Deploy my project to ${environment.qualifier}"
        parameters {
            stringParam('GIT_SHA', null, null)
        }
        scm {
            git {
                remote {
                    url('ssh://git@my_git_repo/myproject.git')
                    credentials('jenkins-credentials')
                }
                branch('$GIT_SHA')
            }
        }
        deliveryPipelineConfiguration("Deploy to " + Utils.environmentType("${environment.qualifier}"),
                Utils.environment("${environment.qualifier}") + " environment")
        logRotator(-1, 30, -1, -1)
        steps {
            shell("""deployment/azure/deploy.sh ${environment.qualifier}""")
        }
    }
}
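
To have Jenkins pick up a script like this, a common approach (hedged here, since the exact setup at this client is not described above) is a seed job with a Job DSL build step that processes the scripts from the configuration repository. The seed job can itself be expressed in JobDSL; the repository URL and script path below are hypothetical:

// Hypothetical seed job that processes all JobDSL scripts checked out from the config repo
job('seed') {
    scm {
        git {
            remote {
                url('ssh://git@my_git_repo/jenkins-job-configuration.git')  // hypothetical repo
                credentials('jenkins-credentials')
            }
            branch('master')
        }
    }
    steps {
        dsl {
            external('jobs/**/*.groovy')   // e.g. the deploy job script shown above
        }
    }
}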

If you need help getting your Jenkins configuration into good shape, just contact us and we will be happy to help you! You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

Top class Continuous Delivery in AWS

Last week Diabol arranged a workshop in Stockholm where we invited Amazon together with Klarna and Volvo Group Telematics that both practise advanced Continuous Delivery in AWS. These companies are in many ways pioneers in this area as there is little in terms of established practices. We want to encourage and facilitate cross company knowledge sharing and take Continuous Delivery to the next level. The participants have very different businesses, processes and architectures but still struggle with similar challenges when building delivery pipelines for AWS. Below follows a short summary of some of the topics covered in the workshop.

Centralization and standardization vs. fully autonomous teams

One of the most interesting discussions among the participants wasn't even technical but covered the differences in how they are organized and how that affects the work with Continuous Delivery and AWS. Some come from a traditional functional organisation and have placed their delivery teams somewhere in between the development teams and the operations team. The advantage is that they have been able to standardize the delivery platform to a large extent and have a very high level of reuse. They have built custom tools and standardized services that all teams are more or less forced to use. This approach depends on being able to stay at least one step ahead of the dev teams and being able to scale out to many dev teams without increasing headcount. One problem with this approach is that it is hard to build deep AWS knowledge out in the dev teams, since they feel detached from the technical implementation. Others have a very explicit strategy of team autonomy, where each team basically is in charge of their complete process all the way to production. In this case each team must have quite deep competence both in AWS and in the delivery pipelines and how they are set up. The production awareness is extremely high and you can e.g. visualize each team's cost of AWS resources. One problem with this approach is a lower level of reusability and difficulties in sharing knowledge and implementation between teams.

Both of these approaches have pros and cons but in the end I think less silos and more team empowerment wins. If you can manage that and still build a common delivery infrastructure that scales, you are in a very good position.

Infrastructure as code

Another topic that was thoroughly covered was the different ways to deploy both applications and infrastructure to AWS. CloudFormation is popular and very powerful but has its shortcomings in some scenarios. One participant felt that CF is too verbose and noisy and has built their own YAML configuration language on top of CF. They have been able to do this since they have a strong standardization of their micro-service architecture and the deployment structure that follows from it. Other participants felt the same problem with CF being too noisy and have broken out a large portion of configuration from the stack templates to Ansible, leaving just the core infrastructure resources in CF. This also allows them to apply different deployment patterns and more advanced orchestration. We also briefly discussed 3rd party tools, e.g. Terraform, but the general opinion was that they all have a hard time keeping up with new features in AWS. On the other hand, if you have infrastructure outside AWS that needs to be managed in conjunction with what you have in AWS, Terraform might be a compelling option. Both participants expressed that they would like to see some kind of execution plan / dry-run feature in CF, much like Terraform has.

Docker on AWS

The use of Docker is growing quickly right now and was, not surprisingly, a hot topic at the workshop. One participant described how they deploy their micro-services in Docker containers, with the obvious advantage of being portable and lightweight (compared to baking AMIs). This is however done with stand-alone EC2 instances using a shared common base AMI, and not on ECS, an approach that adds redundant infrastructure layers to the stack. They have just started exploring ECS and it looks promising, but questions around how to manage centralized logging, monitoring, disk encryption etc. are still not resolved. Docker is a very compelling deployment alternative, but both Docker itself and the surrounding infrastructure need to mature a bit more; e.g. docker push takes an unreasonably long time and easily becomes a bottleneck in your delivery pipelines. Another pain is the need for a private Docker registry, which at this level of continuous delivery needs to be highly available and secure.

What’s missing?

The discussions also identified some feature requests for Amazon to bring home. E.g. we discussed security quite a lot and got into the technicalities of IAM roles, accounts, security groups etc. It was expressed that there might be a need for explicit compliance checks and controls as a complement to the more crude approaches such as penetration testing. You can certainly do this by extracting information from the APIs and processing it according to your specific compliance rules, but it would be nice if there was a higher level of support for this from AWS.

We also discussed canary releasing and A/B testing. Since this is becoming more of a common practice, it would be nice if Amazon could provide more services to support it, e.g. content based routing and more sophisticated analytics tools.

Next step

All in all, I think the workshop was very successful and the discussions and experience sharing were valuable to all participants. Diabol will continue to push Continuous Delivery maturity in the industry by arranging meetups and workshops and by involving more companies that can contribute to and benefit from this collaboration.