Category Archives: Pipelines

Delivery pipelines with GoCD

At my most recent assignment, one of my missions was to set up delivery pipelines for a bunch of services built in Java and some front-end apps. When I started they already had some rudimentary automated builds using Hudson, but we really wanted to start from scratch. We decided to give GoCD a chance because it pretty much satisfied our demands. We wanted something that could:

  • orchestrate a deployment pipeline (Build -> CI -> QA -> Prod)
  • run on-premise
  • deploy to production in a secure way
  • show a pipeline dashboard
  • handle user authentication and authorization
  • support fan-in

GoCD is an open source “continuous delivery server” backed by Thoughtworks (also famous for Selenium).

GoCD key concepts

A pipeline in GoCD consists of stages which are executed in sequence. A stage contains jobs which are executed in parallel. A job contains tasks which are executed in sequence. The smallest schedulable unit is the job, which is executed by an agent. An agent is a Java process that normally runs on its own separate machine. An idle agent fetches jobs from the GoCD server and executes their tasks. A pipeline is triggered by a “material”, which can be any of the following version control systems:

  • Git
  • Subversion
  • Mercurial
  • Team Foundation Server
  • Perforce

A pipeline can also be triggered by a dependency material, i.e. an upstream pipeline. A pipeline can have more than one material.
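To make the hierarchy concrete, here is a minimal sketch (in the yaml-plugin syntax described later under “Pipelines as code”) of a pipeline with one git material, one stage, one job and one task. The pipeline name, group, repository URL and Maven command are placeholders, not part of our actual setup:

format_version: 2            # the version number depends on the plugin release
pipelines:
  example-server-pl:
    group: example-group
    materials:
      app-code:
        git: ssh://git@example.com/example-server.git
        branch: master
    stages:                  # stages run in sequence
      - build:
          jobs:              # jobs within a stage run in parallel, on agents
            compile:
              tasks:         # tasks within a job run in sequence
                - exec:
                    command: mvn
                    arguments:
                      - clean
                      - package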

Technically you can put the whole delivery pipeline (CI -> QA -> PROD) inside a single pipeline as stages, but there are several reasons why you would want to split it into separate chained pipelines. The most obvious reason for doing so is the GoCD concept of environments. A pipeline is the smallest entity that you can place inside an environment. The concept of environments is explained in more detail in the security section below.

You can logically group pipelines in “pipeline groups”, so in a typical setup for a microservice you might have a pipeline group containing the following pipelines:

example-server-pl

A pipeline group can be used as a view and a placeholder for access rights. You can give a specific user or role view, execution or admin rights to all pipelines within a pipeline group.

To avoid repeating yourself, GoCD offers the possibility to create templates, which are parameterized workflows that can be reused. Typically you could have templates for:

  • building (maven, static code analysis)
  • deploying to a test environment
  • deploying to a production environment

For more details on concepts in GoCD, see: https://docs.go.cd/current/introduction/concepts_in_go.html

Administration

Installation

The GoCD server and agents are bundled as RPM, DEB, Windows and OS X install packages. We decided to install them using Puppet (https://forge.puppet.com/unibet/go) since we already had Puppet in place. One nice feature is that agents auto-upgrade when the GoCD server is upgraded, so in the normal case you only need to care about GoCD server upgrades.

User management

We use the LDAP integration to authenticate users in GoCD. When a user defined in LDAP logs in for the first time, they are automatically registered as a new user in GoCD. If you use role-based authorization, an admin user needs to assign roles to the new user.

Disk space management

All artifacts created by pipelines are stored on the GoCD server, so you will sooner or later face the fact that disks are getting full. We have used the tool “GoCD janitor”, which analyses the value stream (the collection of chained upstream and downstream pipelines) and automatically removes artifacts that won’t make it to production.

Security

One of the major concerns when deploying to production is the handling of deployment secrets such as SSH keys. My client uses Ansible extensively as a deployment tool, so we needed the ability to handle SSH keys on the agents in a secure way. It’s quite obvious that you don’t want to use the same SSH keys in test and production, so GoCD has a feature called environments for this purpose. You can place an agent and a pipeline in an environment (ci, qa, production) so that any time a production pipeline is triggered it will run on an agent in the production environment.

It is also possible to store encrypted secrets within a pipeline configuration using so-called secure variables. A secure variable can be used within a pipeline like an environment variable, with the only difference that it’s encrypted with a shared secret stored on the GoCD server. We have not used this feature that much since we solved the issue in other ways. You can also define secure variables at the environment level, so that a pipeline running in a specific environment will inherit all secure variables defined in that environment.
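As a rough illustration of how an environment ties agents, pipelines and secure variables together, here is a sketch in yaml-plugin syntax (not how we actually configured it; the environment name, variable name, agent UUID and encrypted value are all placeholders, and the encrypted value would normally be produced by the GoCD server):

environments:
  production:
    secure_variables:
      # placeholder: a value encrypted with the GoCD server's shared secret
      DEPLOY_KEY_PASSPHRASE: "placeholder-cipher-text=="
    pipelines:
      - example-server-deploy-prod-pl
    agents:
      - 1a2b3c4d-placeholder-agent-uuid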

Pipelines as code

This was one of the features GoCD was lacking at the very beginning, but at the same time there were API endpoints for creating and managing pipelines. Since GoCD version 16.7.0 there is support for “pipelines as code” via the yaml-plugin or the json-plugin. Unfortunately templates are not supported, which can lead to duplicated code in your pipeline configuration repo(s).

For further reading please refer to: https://docs.go.cd/current/advanced_usage/pipelines_as_code.html

Example

Let’s wrap it up with a fully working example where the key concepts explained above are used. In this example we will set up a deployment pipeline (BUILD -> QA -> PROD) for a Dropwizard application. It also sets up a basic example where fan-in is used: you will notice that the downstream pipeline “prod” won’t be triggered unless both “test” and “perf-test” have finished. We use the concept of “pipelines as code” to configure the deployment pipeline using “gocd-plumber”. GoCD-plumber is a tool written in Go (by me, by the way), which uses the GoCD API to create pipelines from yaml code. In contrast to the yaml-plugin and json-plugin it does support templates, by merging a pipeline hash over a template hash.
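gocd-plumber uses its own yaml format, but to show what fan-in means, here is a rough sketch (in yaml-plugin syntax, with placeholder group, stage names and deploy command) of how a “prod” pipeline could declare both upstream pipelines as dependency materials, so it only triggers once both “test” and “perf-test” have passed:

pipelines:
  prod:
    group: example-group
    materials:               # two dependency materials give the fan-in behaviour
      test:
        pipeline: test
        stage: deploy
      perf-test:
        pipeline: perf-test
        stage: run
    stages:
      - deploy:
          jobs:
            deploy:
              tasks:
                - exec:
                    command: ansible-playbook
                    arguments:
                      - "-i"
                      - "production"
                      - "deploy.yml"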

Preparations

This example requires Docker, so if you don’t have it installed, please install it first.

  1. git clone https://github.com/dennisgranath/gocd-docker.git
  2. cd gocd-docker
  3. docker-compose up
  4. Go to http://127.0.0.1:8153 and log in as ‘admin’ with password ‘badger’
  5. Press the play button on the “create-pipelines” pipeline
  6. Press the pause button to “unpause” the pipelines

The power of Jenkins JobDSL

At my current client we define all of our Jenkins jobs with Jenkins Job Builder so that we can have all of our job configurations version controlled. Every push to the version control repository hosting the configuration files triggers an update of the jobs in Jenkins, so that the state in version control is always reflected in our Jenkins setup.

We have six jobs that are essentially the same for each of our projects but with only a slight change of the input parameters to each job. These jobs are responsible for deploying the application binaries to different environments.

Jenkins Job Builder supports templating, but our experience is that those templates tend to be quite hard to maintain in the long run. And since it is just YAML files, you’re quite restricted in what you’re able to do. JobDSL Jenkins job configurations, on the other hand, are built using Groovy. It allows you to insert actual code execution in your job configuration scripts, which can be very powerful.

In our case, with JobDSL we can easily create one common job configuration which we then iterate over with the job-specific parameters to create the necessary jobs for each environment. We can also use utility methods written in Groovy which we can invoke in our job configurations. So instead of having to maintain similar Jenkins Job Builder configurations in YAML, we can do it much more concisely with JobDSL.

Below is an example of a JobDSL configuration file (Groovy code), which generates six jobs according to the parameterized job template:

// Helper methods: derive e.g. "US" and "staging" from a qualifier such as "us-staging"
class Utils {
    static String environment(String qualifier) { qualifier.substring(0, qualifier.indexOf('-')).toUpperCase() }
    static String environmentType(String qualifier) { qualifier.substring(qualifier.indexOf('-') + 1)}
}

[
    [qualifier: "us-staging"],
    [qualifier: "eu-staging"],
    [qualifier: "us-uat"],
    [qualifier: "eu-uat"],
    [qualifier: "us-live"],
    [qualifier: "eu-live"]
].each { Map environment ->

    job("myproject-deploy-${environment.qualifier}") {
        description "Deploy my project to ${environment.qualifier}"
        parameters {
            stringParam('GIT_SHA', null, null)
        }
        scm {
            git {
                remote {
                    url('ssh://git@my_git_repo/myproject.git')
                    credentials('jenkins-credentials')
                }
                branch('$GIT_SHA')
            }
        }
        deliveryPipelineConfiguration("Deploy to " + Utils.environmentType("${environment.qualifier}"),
                Utils.environment("${environment.qualifier}") + " environment")
        logRotator(-1, 30, -1, -1) // keep only the last 30 builds
        steps {
            shell("""deployment/azure/deploy.sh ${environment.qualifier}""")
        }
    }
}

If you need help getting your Jenkins configuration into good shape, just contact us and we will be happy to help you! You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

Top class Continuous Delivery in AWS

Last week Diabol arranged a workshop in Stockholm where we invited Amazon together with Klarna and Volvo Group Telematics, who both practise advanced Continuous Delivery in AWS. These companies are in many ways pioneers in this area, as there is little in terms of established practices. We want to encourage and facilitate cross-company knowledge sharing and take Continuous Delivery to the next level. The participants have very different businesses, processes and architectures but still struggle with similar challenges when building delivery pipelines for AWS. Below follows a short summary of some of the topics covered in the workshop.

Centralization and standardization vs. fully autonomous teams

One of the most interesting discussions among the participants wasn’t even technical but covered the differences in how they are organized and how that affects the work with Continuous Delivery and AWS. Some come from a traditional functional organisation and have placed their delivery teams somewhere in between the development teams and the operations team. The advantage is that they have been able to standardize the delivery platform to a large extent and have a very high level of reuse. They have built custom tools and standardized services that all teams are more or less forced to use. This approach depends on being able to keep at least one step ahead of the dev teams and being able to scale out to many dev teams without increasing headcount. One problem with this approach is that it is hard to build deep AWS knowledge out in the dev teams, since they feel detached from the technical implementation. Others have a very explicit strategy of team autonomy where each team basically is in charge of its complete process all the way to production. In this case each team must have quite deep competence both in AWS and in how the delivery pipelines are set up. The production awareness is extremely high and you can e.g. visualize each team’s cost of AWS resources. One problem with this approach is a lower level of reusability and difficulties in sharing knowledge and implementation between teams.

Both of these approaches have pros and cons but in the end I think less silos and more team empowerment wins. If you can manage that and still build a common delivery infrastructure that scales, you are in a very good position.

Infrastructure as code

Another topic that was thoroughly covered was different ways to deploy both applications and infrastructure to AWS. CloudFormation is popular and very powerful but has its shortcomings in some scenarios. One participant felt that CF is too verbose and noisy and has built their own YAML configuration language on top of it. They have been able to do this since they have a strong standardization of their micro-service architecture and the deployment structure that follows. Other participants felt the same about CF being too noisy and have broken out a large portion of configuration from the stack templates to Ansible, leaving just the core infrastructure resources in CF. This also allows them to apply different deployment patterns and more advanced orchestration. We also briefly discussed third-party tools, e.g. Terraform, but the general opinion was that they all have a hard time keeping up with new features in AWS. On the other hand, if you have infrastructure outside AWS that needs to be managed in conjunction with what you have in AWS, Terraform might be a compelling option. Both participants expressed that they would like to see some kind of execution plan / dry-run feature in CF, much like Terraform has.

Docker on AWS

Use of Docker is growing quickly right now and was, not surprisingly, a hot topic at the workshop. One participant described how they deploy their micro-services in Docker containers, with the obvious advantage of being portable and lightweight (compared to baking AMIs). This is however done with standalone EC2 instances using a shared common base AMI and not on ECS, an approach that adds redundant infrastructure layers to the stack. They have just started exploring ECS and it looks promising, but questions around how to manage centralized logging, monitoring, disk encryption etc. are still not resolved. Docker is a very compelling deployment alternative, but both Docker itself and the surrounding infrastructure need to mature a bit more; e.g. docker push takes an unreasonably long time and easily becomes a bottleneck in your delivery pipelines. Another pain is the need for a private Docker registry that, at this level of continuous delivery, needs to be highly available and secure.

What’s missing?

The discussions also identified some feature requests for Amazon to bring home. E.g. we discussed security quite a lot and got into the technicalities of IAM roles, accounts, security groups etc. It was expressed that there might be a need for explicit compliance checks and controls as a complement to the cruder approach of e.g. penetration testing. You can certainly do this by extracting information from the APIs and processing it according to your specific compliance rules, but it would be nice if there was a higher level of support for this from AWS.

We also discussed canary releasing and A/B testing. Since this is becoming more of a common practice it would be nice if Amazon could provide more services to support it, e.g. content-based routing and more sophisticated analytics tools.

Next step

All in all I think the workshop was very successful and the discussions and experience sharing were valuable to all participants. Diabol will continue to push Continuous Delivery maturity in the industry by arranging meetups and workshops and involving more companies that can contribute to and benefit from this collaboration.