Tag Archives: Continuous Delivery

Programmatically configure Jenkins pipeline shared groovy libraries

If you’re a Jenkins user adopting Jenkins pipelines, it is likely that you have run into the scenario where you want to break out common functionality from your pipeline scripts, instead of duplicating it in multiple pipeline definitions. To cater for this scenario, Jenkins provides the Pipeline Shared Groovy Libraries plugin.
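To make the use case concrete, here is a minimal sketch of what a pipeline script using such a shared library might look like. The library name my-shared-library and the custom step deployToEnvironment are invented for illustration; they are not part of the plugin:

```groovy
// Jenkinsfile (sketch) — "my-shared-library" and "deployToEnvironment"
// are hypothetical names, not part of the plugin itself.
@Library('my-shared-library') _

node {
    stage('Deploy') {
        // deployToEnvironment would be implemented in
        // vars/deployToEnvironment.groovy in the shared library repository
        deployToEnvironment 'staging'
    }
}
```

If the library is configured to be loaded implicitly, the @Library annotation can be omitted.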

You can configure your shared global libraries through the Jenkins UI (Manage Jenkins > Configure System), which is what most resources online seem to suggest. But if you are like me and looking to automate things, you do not want to configure your shared global libraries through the UI, but rather to do it programmatically. This post aims to describe how to automate the configuration of your shared global libraries.

Jenkins stores plugin configuration in its home directory as XML files. The expected configuration file name for the Shared Groovy Libraries plugin (version 2.8) is: org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml

Here’s an example configuration to add an implicitly loaded global library, checked out from a particular Git repository (the library name and repository URL below are placeholders):

<?xml version='1.0' encoding='UTF-8'?>
<org.jenkinsci.plugins.workflow.libs.GlobalLibraries plugin="workflow-cps-global-lib@2.8">
    <libraries>
        <org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
            <name>my-shared-library</name>
            <retriever class="org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever">
                <scm class="jenkins.plugins.git.GitSCMSource" plugin="git@3.4.1">
                    <id>my-shared-library</id>
                    <remote>ssh://git@example.com/shared-library.git</remote>
                </scm>
            </retriever>
            <defaultVersion>master</defaultVersion>
            <implicit>true</implicit>
            <allowVersionOverride>true</allowVersionOverride>
        </org.jenkinsci.plugins.workflow.libs.LibraryConfiguration>
    </libraries>
</org.jenkinsci.plugins.workflow.libs.GlobalLibraries>

You can also find a copy/paste friendly version of the above configuration on GitHub: https://gist.github.com/tommysdk/e752486bf7e0eeec0ed3dd32e56f61a4

If your version control system requires authentication, e.g. SSH credentials, create those credentials and use the associated IDs in your org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml.

Store the XML configuration file in your Jenkins home directory (echo "$JENKINS_HOME") and start your Jenkins master for it to load the configuration.

If you are running Jenkins with Docker, adding the configuration is just a copy directive in your Dockerfile:

FROM jenkins:2.60.1
COPY org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml $JENKINS_HOME/org.jenkinsci.plugins.workflow.libs.GlobalLibraries.xml

Jenkins environment used: 2.60.1, Pipeline Shared Groovy Libraries plugin 2.8

Tommy Tynjä

How to use the Delivery Pipeline plugin with Jenkins pipelines

Since we’ve just released support for Jenkins pipelines in the Delivery Pipeline plugin (you can read more about it here), this article aims to describe how to use the Delivery Pipeline plugin from your Jenkins pipelines.

First, you need to have a Jenkins pipeline or a multibranch pipeline. Here is an example configuration:
node {
  stage('Commit stage') {
    echo 'Compiling'
    sleep 2

    echo 'Running unit tests'
    sleep 2
  }

  stage('Test') {
    echo 'Running component tests'
    sleep 2
  }

  stage('Deploy') {
    echo 'Deploy to environment'
    sleep 2
  }
}
You can group your stages either with or without curly braces, as shown in the examples, depending on which version of the Pipeline plugin you use. The stages in the pipeline configuration are used for the visualization in the Delivery Pipeline views.

To create a new view, press the + tab on the Jenkins landing page as shown below.
Jenkins landing page.

Give your view a name and select “Delivery Pipeline View for Jenkins Pipelines”.
Create new view.

Configure the view according to your needs, and specify which pipeline project to visualize.
Configure view.

Now your view has been created!
Delivery Pipeline view created.

If you selected the option to “Enable start of new pipeline build”, you can trigger a new pipeline run by clicking the build button to the right of the pipeline name. Otherwise you can navigate to your pipeline and start it from there. Now you will see how the Delivery Pipeline plugin renders your pipeline run! Since the view is not able to evaluate the pipeline in advance, the pipeline will be rendered according to the progress of the current run.
Jenkins pipeline run in the Delivery Pipeline view.

If you prefer to model each stage in a more fine grained fashion you can specify tasks for each stage. Example:
node {
  stage 'Build'
  task 'Compile'
  echo 'Compiling'
  sleep 1

  task 'Unit test'
  sleep 1

  stage 'Test'
  task 'Component tests'
  echo 'Running component tests'
  sleep 1

  task 'Integration tests'
  echo 'Running integration tests'
  sleep 1

  stage 'Deploy'
  task 'Deploy to UAT'
  echo 'Deploy to UAT environment'
  sleep 1

  task 'Deploy to production'
  echo 'Deploy to production'
  sleep 1
}

This will render the following pipeline view:
Jenkins pipeline run in the Delivery Pipeline view.

The pipeline view also supports a full screen view, optimized for visualization on information radiators:
Delivery Pipeline view in full screen mode.

That’s it!

If you are making use of Jenkins pipelines, we hope that you will give the Delivery Pipeline plugin a spin!

To continue to drive adoption and development of the Delivery Pipeline plugin, we rely on feedback and contributions from our users. For more information about the project and how to contribute, please visit the project page on GitHub: https://github.com/Diabol/delivery-pipeline-plugin

We use the official Jenkins JIRA to track issues, using the component label “delivery-pipeline-plugin”: https://issues.jenkins-ci.org/browse/JENKINS


Tommy Tynjä

Announcing Delivery Pipeline Plugin 1.0.0 with support for Jenkins pipelines

We are pleased to announce the first version of the Delivery Pipeline Plugin for Jenkins (1.0.0) which includes support for Jenkins pipelines! Even though Jenkins pipelines have been around for a few years (formerly known as the Workflow plugin), it is during the last year that the Pipeline plugin has started to gain momentum in the community.

The Delivery Pipeline plugin is a mature project that was primarily developed with one purpose in mind: to allow visualization of Continuous Delivery pipelines, suitable for display on information radiators. We believe that a pipeline view should be easy to understand, yet contain all the information necessary to follow a certain code change through the pipeline, without the need to click through a UI. Although there are alternatives to the Delivery Pipeline plugin, there is not yet a plugin as well suited for delivery pipeline visualization on big screens.

Delivery Pipeline View using a Jenkins Pipeline.

The Delivery Pipeline plugin has a solid user base and a great community that helps us drive the project forward with feedback, feature suggestions and code contributions. Until now, the project has supported the creation of pipelines using traditional Jenkins jobs with downstream/upstream dependencies, with support for more complex pipelines using e.g. forks and joins.

Now, we are pleased to release a first version of the Delivery Pipeline plugin that has basic support for Jenkins pipelines. This feature has been developed and tested together with a handful of community contributors. We couldn’t have done it without you! The Jenkins pipeline support still has some known limitations, but we felt that the time was right to release a first version that could get traction and usage out in the community. To continue to drive adoption and development, we rely on feedback and contributions from the community. So if you are already using Jenkins pipelines, please give the new Delivery Pipeline plugin a try: it gives your pipelines the same look and feel you have grown accustomed to from the Delivery Pipeline plugin. We welcome all feedback and contributions!

Other big features in the 1.0.0 version include the ability to show which upstream project triggered a given pipeline (issue JENKINS-22283), updated style sheets, and a requirement on Java 7 and Jenkins 1.642.3 as the minimum supported core version.

For more information about the project and how to contribute, please visit the project page on GitHub: https://github.com/Diabol/delivery-pipeline-plugin

Tommy Tynjä

Delivery pipelines with GoCD

At my most recent assignment, one of my missions was to set up delivery pipelines for a bunch of services built in Java and some front-end apps. When I started they already had some rudimentary automated builds using Hudson, but we really wanted to start from scratch. We decided to give GoCD a chance because it pretty much satisfied our demands. We wanted something that could:

  • orchestrate a deployment pipeline (Build -> CI -> QA -> Prod)
  • run on-premise
  • deploy to production in a secure way
  • show a pipeline dashboard
  • handle user authentication and authorization
  • support fan-in

GoCD is an open source “continuous delivery server” backed by ThoughtWorks (also famous for Selenium).

GoCD key concepts

A pipeline in GoCD consists of stages which are executed in sequence. A stage contains jobs which are executed in parallel. A job contains tasks which are executed in sequence. The smallest schedulable units are jobs, which are executed by agents. An agent is a Java process that normally runs on its own separate machine. An idling agent fetches jobs from the GoCD server and executes their tasks. A pipeline is triggered by a “material”, which can be any of the following version control systems:

  • Git
  • Subversion
  • Mercurial
  • Team Foundation Server
  • Perforce

A pipeline can also be triggered by a dependency material, i.e. an upstream pipeline. A pipeline can have more than one material.

Technically you can put the whole delivery pipeline (CI -> QA -> PROD) inside one pipeline as stages, but there are several reasons why you would want to split it into separate chained pipelines. The most obvious reason for doing so is the GoCD concept of environments. A pipeline is the smallest entity that you can place inside an environment. The concept of environments is explained in more detail in the security section below.

You can logically group pipelines in “pipeline groups”, so in the typical setup for a micro-service you might have a pipeline group containing the following pipelines:


A pipeline group can be used as a view and a placeholder for access rights. You can give a specific user or role view, execution or admin rights to all pipelines within a pipeline group.

For the convenience of not repeating yourself, GoCD offers the possibility to create templates which are parameterized workflows that can be reused. Typically you could have templates for:

  • building (maven, static code analysis)
  • deploying to a test environment
  • deploying to a production environment

For more details on concepts in GoCD, see: https://docs.go.cd/current/introduction/concepts_in_go.html



Installation

The GoCD server and agents are bundled as rpm, deb, Windows and OSX install packages. We decided to install it using Puppet (https://forge.puppet.com/unibet/go) since we already had Puppet in place. One nice feature is that agents auto-upgrade when the GoCD server is upgraded, so in the normal case you only need to care about GoCD server upgrades.

User management

We use the LDAP integration to authenticate users in GoCD. When a user defined in LDAP logs in for the first time, they are automatically registered as a new user in GoCD. If you use role based authorization, an admin user needs to assign roles to the new user.

Disk space management

All artifacts created by pipelines are stored on the GoCD server, so you will sooner or later face the fact that disks are getting full. We have used the tool “GoCD janitor”, which analyses the value stream (the collection of chained upstream and downstream pipelines) and automatically removes artifacts that won’t make it to production.


Security

One of the major concerns when deploying to production is the handling of deployment secrets such as SSH keys. At my client they use Ansible extensively as a deployment tool, so we needed the ability to handle SSH keys on the agents in a secure way. It’s quite obvious that you don’t want to use the same SSH keys in test and production, so GoCD has a feature called environments for this purpose. You can place an agent and a pipeline in an environment (ci, qa, production), so that anytime a production pipeline is triggered it will run on an agent in the production environment.

There is also a possibility to store encrypted secrets within a pipeline configuration using so called secure variables. A secure variable can be used within a pipeline like an environment variable with the only difference that it’s encrypted with a shared secret stored on the GoCD server. We have not used this feature that much since we solved this issue in other ways. You can also define secure variables on the environment level so that a pipeline running in a specific environment will inherit all secure variables defined in that specific environment.

Pipelines as code

This was one of the features GoCD was lacking at the very beginning, but at the same time there were API endpoints for creating and managing pipelines. Since GoCD version 16.7.0 there is support for “pipelines as code” via the yaml-plugin or the json-plugin. Unfortunately, templates are not supported, which can lead to duplicated code in your pipeline configuration repo(s).
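As an illustration of the format, a minimal pipeline definition for the yaml-plugin might look like the following (the pipeline, group, job and repository names are made up for this example):

```yaml
# gocd.yaml (sketch) — names and the repository URL are illustrative
format_version: 2
pipelines:
  build:
    group: example-service
    materials:
      app:
        git: https://github.com/example/app.git
    stages:
      - build:
          jobs:
            compile:
              tasks:
                - exec:
                    command: make
```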

For further reading please refer to: https://docs.go.cd/current/advanced_usage/pipelines_as_code.html


Let’s wrap it up with a fully working example where the key concepts explained above are used. In this example we will set up a deployment pipeline (BUILD -> QA -> PROD) for a Dropwizard application. It will also set up a basic example where fan-in is used. In that example you will notice that the downstream pipeline “prod” won’t be triggered unless both “test” and “perf-test” have finished. We use the concept of “pipelines as code” to configure the deployment pipeline using “gocd-plumber”. GoCD-plumber is a tool written in Go (by me, btw), which uses the GoCD API to create pipelines from yaml code. In contrast to the yaml-plugin and json-plugin, it does support templates, by merging a pipeline hash over a template hash.


This example requires Docker, so if you don’t have it installed, please install it first.

  1. git clone https://github.com/dennisgranath/gocd-docker.git
  2. cd gocd-docker
  3. docker-compose up
  4. Go to the GoCD server in your browser and log in as ‘admin’ with password ‘badger’
  5. Press the play button on the “create-pipelines” pipeline
  6. Press the pause button to “unpause” the pipelines



Reasons why code freezes don’t work

This article is a continuation on my previous article on how to release software with quality and confidence.

When the big e-commerce holidays such as Black Friday, Cyber Monday and Christmas are looming around the corner, many companies are gearing up to make sure their systems are stable and able to handle the expected increase in traffic.

What many organizations do is introduce a full code freeze for the holiday season, where no new code changes or features are allowed to be released to the production systems during this period of the year. Another approach is to only allow updates to the production environment during a few hours within office hours. These approaches might sound logical, but they are in reality anti-patterns that neither reduce risk nor ensure stability.

What you’re doing when you stop deploying software is interrupting the pace of the development teams. The teams’ feedback loops break and the ways of working are forced to deviate from the standard process, which leads to decreased productivity.

When your normal process allows deploying software on a regular basis in an automated and streamlined fashion, it becomes just as natural as breathing. When you stop deploying your software, it’s like holding your breath. This is what happens to your development process. When you finally open the floodgates after the holiday season, bugs and deployment failures are much more likely to occur. The more changes that are stacked up, the bigger the risk of unwanted consequences for each deployment. Changes might also be rushed, with less quality, to be able to make it into production in time before a freeze. Keeping track of which changes are pending release also becomes more challenging.

Keeping your systems stable with exceptional uptime should be a priority throughout the whole year, not only during holiday season. The ways of working for the development teams and IT organization should embrace this mindset and expectations of quality. Proper planning can go a long way to reduce risk. It might not make sense to push through a big refactoring of the source code during the busiest time of the year.

A key component for allowing a continuous flow of evolving software throughout the entire year is organizational maturity regarding monitoring, logging and planning. It should be possible to monitor, preferably visualised on big screens in the office, how the systems are behaving and functioning in real-time. Both technical and business metrics should be metered and alarm thresholds configured accordingly based on these metrics.

Deployment strategies are also of importance. When deploying a new version of a system, it should always be cheap to roll back to a previous version. This should be automated and possible to do by clicking just one button. Then, if there is a concern after deploying a new version, discovered by closely monitoring the available metrics, it is easy to revert to a previously known good version. Canary releasing is another strategy through which possible issues can be discovered before rolling out new code changes to all users. Canary releasing allows you to route a specific amount of traffic to a specific version of the software, and thus compare metrics, outcomes and possible errors between the different versions.

When the holiday season begins, keep calm and keep breathing.

Tommy Tynjä
LinkedIn profile

Summary of Eurostar 2016

About Eurostar

Eurostar is Europe’s largest conference focused on testing, and this year the conference was held in Stockholm October 31 – November 3. Since I have been working with test automation lately, it seemed like a good opportunity to go to my first testing conference (I was there for two days). The conference had the usual mix of tutorials, presentations and expo: very much a traditional conference setup, unlike community-driven events.

Key takeaway

Continuous delivery and DevOps change much of the conventional thinking around testing. The change is not primarily that you should automate everything related to testing, but rather that, in the same way as you drive user experience testing with things like A/B testing, a key enabler of quality is monitoring in production and the ability to quickly respond to problems. This does not mean that all automated testing is useless. But it calls for a very different mindset compared to conventional quality wisdom, where the focus has been on finding problems as early as possible (in terms of phases of development). Instead there is an increased focus on deploying changes fast in a gradual and controlled way, with a high degree of monitoring and diagnostics, thereby being able to diagnose and remedy any issues quickly.


Roughly in order of how much I liked the sessions, here is what I participated in:

Sally Goble et al. – How we learned to love quality and stop testing

This was both well presented and thought provoking. They described the journey at the Guardian from having a long (two week) development cycle with a considerable amount of testing, to the current situation where they deploy continuously. The core of the story was how the focus had shifted from testing to quality with a DevOps setup. When they first started their automation journey they took the approach of creating Selenium tests for their full manual regression test suite. This is pretty much scrapped now, and they rely primarily on the ability to quickly detect problems in production and fix them. Canary releases, good use of monitoring/APM, and investments in improved logging were the key enablers here.

Automated tests are still written at the unit, API and integration levels, but as noted above there is not much automation of front-end tests.

Declan O’Riordan – Application security testing: A new approach

Declan is an independent consultant and started his talk by claiming that the number of security related incidents continues to increase, and that there is a long list of potential security breaches one needs to be aware of. He also talked about how continuous delivery has shrunk the time frame available for security testing to almost nothing. I.e., it is not getting any easier to secure your applications. He then went on to claim that there has been a breakthrough in the last 1-2 years in terms of what tools can do with regard to security testing. These new tools are categorised as IAST (Interactive Application Security Testing) and RASP (Runtime Application Self-Protection). While traditional automated security testing tools find 20-30% of the security issues in an application, IAST tools find as much as 99% of the issues automatically. He gave a demo and it was impressive. He used the toolset from Contrast, but there are other suppliers with similar tools and many hustling to catch up. It seems to me that an IAST tool should be part of your pipeline before going to production, and a RASP solution should be part of your production monitoring/setup. Overall an interesting talk and lots to evaluate, follow up on, and possibly apply.

Jan van Moll – Root cause analysis for testers

This was both entertaining and well presented. Jan is head of quality at Philips Healthcare but he is also an independent investigator / software expert that is called in when things go awfully wrong or when there are close escapes/near misses, like when a plane crashes.

There was no clear takeaway for me from the talk that can immediately be put to use, but I got a list of references to different root cause analysis techniques that I hope to find the time to look into at some point. It would have been interesting to hear more, as this talk only scratched the surface of the subject.

Julian Harty – Automated testing of mobile apps

This was interesting but it is not a space that I am directly involved in so I am not sure that there will be anything that is immediately useful for me. Things that were talked about include:

  • Monkey testing: there is apparently some tooling included in the Android SDK that is quite useful for this.
  • An analysis that Microsoft Research has done on 30,000 app crash dumps indicates that over 90% of all crashes are caused by 10 common implementation mistakes. Failing to check HTTP status codes and always expecting a 200 comes to mind as one of the top ones.
  • appdiff.com, a free, robot-based approach to automated testing where the robots apply machine-learned heuristics. Simple and free to try; if you are doing mobile and not already using it, you should probably have a look.

Ben Simo – Stories from testing healthcare.gov

The presenter rose to fame at the launch of the Obamacare website back in 2013. As you may remember, there were lots of problems in the weeks and months after the launch. Ben approached this as a user from the beginning, but after a while, when things worked so poorly, he started to look at things from a tester’s point of view. He then uncovered a number of issues related to security, usability, performance, etc. He started to share his experience on social media, mostly to help others trying to use the site, but also rose to fame in mainstream media. The presentation was fun and entertaining, but I am not sure there was much to learn, as it mostly was a run-through of all the problems he found and how poorly the project/launch had been handled. So it was entertaining and interesting, but did not offer much in terms of insight or learning.

Jay Sehti – What happened when we switched our data center off?

The background was that a couple of years ago the Financial Times had a major outage in one of their data centres, and the talk was about what went down in relation to that. I think the most interesting lesson was that they had built a dashboard in Dashing showing service health across their key services/applications, each of which is made up of a number of micro services. But when they went to the dashboard to see what was still working and where there were problems, they realised that the dashboard had a single point of failure related to the data centre that was down. Darn. Lesson learned: secure your monitoring at least as well as your applications.

In addition to that specific lesson, I think the most interesting part of this presentation was the journey they had gone through towards continuous delivery and micro services. In many ways this was similar to the Guardian story, in that they now relied more on monitoring and being able to quickly respond to problems rather than on extensive automated (front-end) tests. He mentioned, for example, that they still had some Selenium tests, but that coverage was probably around 20% now compared to 80% before.

Similar to the Guardian, they had plenty of test automation at the unit/API levels, but less automation of front-end tests.

Tutorial – Test automation management patterns

This was mostly a walk-through of the website/wiki testautomationpatterns.wikispaces.com and how to use it. The content on the wiki is not bad as such but it is quite high-level and common-sense oriented. It is probably useful to browse through the Issues and Automation Patterns if you are involved in test automation and have a difficult time to get traction. The diagnostics tool did not appear that useful to me.

No big revelations for me during this tutorial; if anything, it was more of a confirmation that the approach we have taken at my current customer around testing of backend systems is sound.

Liz Keogh – How to test the inside of your head

Liz, an independent consultant of BDD fame, talked among other things about Cynefin and how it is applicable in a testing context. Kind of interesting, but it did not create much new insight for me (it refreshed some old ones, and that is ok too).

Bryan Bakker – Software reliability: Measuring to know

In this presentation Bryan presented an approach (process) to reliability engineering that he has developed together with a couple of colleagues/friends. The talk was a bit dry and academic, and quite heavily geared towards embedded software. Surveillance cameras were the primary example used. Some interesting stuff here, in particular in terms of how to quantify reliability.

Adam Carmi – Transforming your automated tests with visual testing

Adam is CTO of an Israeli tools company called Applitools, and the talk was close to a marketing pitch for their tool. Visual testing is distinct from functional testing in that it is only concerned with visuals, i.e. what the human eye can see. It seems to me that if you are doing a lot of cross-device, cross-browser testing, this kind of automated test might be of merit.

Harry Collins – The critique of AI in the age of the net

Harry is a professor of sociology at Cardiff University. This could have been an interesting talk about scientific theory, sociology, AI, philosophy, theory of mind and a bunch of other things. I am sure the presenter has the knowledge to make a great presentation on any of these subjects, but this was ill-prepared, incoherent, pretty much without a point, and not very well presented. More of a late-night rant in a bar than a keynote.


As with most conferences there was a mix of good and not quite so good content, but overall I felt that it was more than worthwhile to be there, as I learned a bunch of things and maybe even had an insight or two. Hopefully there will be opportunities to apply some of the things I learned at the customers I am working with.

Svante Lidman
LinkedIn profile

How to release software frequently with quality and confidence

Continuous Delivery is a way of working that reduces the cost, time and risk of releasing software. The idea is that small and frequent code changes impose less risk for bugs and disturbances. Key aspects of Continuous Delivery are to always have your source code in a releasable state and to have a fully automated software delivery process. These automated processes are typically called delivery pipelines.

In many companies, releasing software is a painful process, one that is not done on a frequent basis. Many companies are not even able to release software once a month during a given year. In order to make something less painful, you have to practice it. If you’re about to run a marathon for the first time, it might seem painful. But you won’t get better at running if you don’t do it frequently enough. It will still hurt.

As humans, we tend to be bad at repetitive work. It is therefore both time consuming and error prone to involve manual steps in the process of deploying software. That is reason enough to automate the whole process. Automation ensures that the process is fast, repeatable and provides traceability.

Never let a human do a script’s job.

With automation in place you can start building confidence in your release process and start practicing it more frequently. Releasing software should be practiced as often as possible so that you don’t even think about it happening.

Releasing new code changes should be so natural that you don’t need to think about it. It should be like breathing, not something as painful as giving birth.

A prerequisite for successfully practicing Continuous Delivery is test automation. Without test automation in place, testing becomes the bottleneck in your delivery process. It is therefore crucial that quality is built into the software. Writing automated tests (unit, integration and acceptance tests) should be a natural part of the development process. You cannot ship a piece of code that has no proper test coverage. If you are not testing the code you’re developing, you cannot be certain that it works as intended. If there is a need for regression tests, automate them and make them cheap to run, so that they can be repeated for every code change.

With Continuous Delivery it becomes evident that a release and a deployment are not the same thing. A deployment is an exercise that happens all the time, for each new code change. A release, however, is a business or marketing decision on when to launch a new feature to your end users.

Tommy Tynjä
LinkedIn profile

JavaOne Latin America summary

Last week I attended the JavaOne Latin America software development conference in São Paulo, Brazil. This was a joint event with Oracle Open World, a conference with focus on Oracle solutions. I was accepted as a speaker for the Java, DevOps and Methodologies track at JavaOne. This article intends to give a summary of my main takeaways from the event.

The main point Oracle made during the opening keynote of Oracle Open World was their commitment to cloud technology. Oracle CEO Mark Hurd made the prediction that in 10 years, most production workloads will be running in the cloud. Oracle currently engineers all of their solutions for the cloud, which is something that their on-premise solutions can also leverage. Security is a very important aspect in all of their solutions. Quite a few sessions during JavaOne showcased Java applications running on Oracle’s cloud platform.

The JavaOne opening keynote demonstrated Java Flight Recorder, which enables you to profile your applications with near zero overhead. The aim of the Flight Recorder is to have as minimal an overhead as possible, so that it can be enabled in production systems by default. It has a user interface which provides valuable data in case you want to get an understanding of how your Java applications perform.

The next version of Java, Java 9, will feature long-awaited modularity through Project Jigsaw. Today, all public APIs in your source code packages are exposed and can be used by anyone who has access to the classes. With Jigsaw, you as a developer decide which packages are exported and exposed. This gives you more control over which APIs are internal and which are intended for public use. This was quickly demonstrated on stage. You will also be able to declare which parts of the JDK your application depends on. For instance, it is quite unlikely that you need UI-related libraries such as Swing, or audio support, if you develop back-end software. I once tried to dissect the Java rt.jar for this particular purpose (just for fun), so it is nice to finally see this becoming a reality. The keynote also mentioned Project Valhalla (e.g. value types for classes) and Project Panama, but at this date it is still uncertain whether they will be included in Java 9.
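As a rough illustration of what such a module declaration could look like, here is a sketch of a module-info.java. The module and package names are hypothetical, invented for this example:

```java
// module-info.java -- an illustrative sketch, not from any real project.
module com.example.orders {
    // Only this package is visible to consumers of the module;
    // com.example.orders.internal stays hidden even though its
    // classes may be declared public.
    exports com.example.orders.api;

    // Declare exactly which parts of the JDK the application needs,
    // so that e.g. java.desktop (Swing/AWT) can be left out of a
    // back-end build entirely.
    requires java.sql;
}
```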

Mark Heckler from Pivotal had an interesting session on the Spring framework and the Spring Cloud projects. Pivotal works closely with Netflix, a well-known open source contributor and one of the industry leaders when it comes to developing and adopting new technologies. However, since Netflix is committed to running their applications on AWS, Spring Cloud aims to make use of Netflix's work to create portable solutions for a wider community by introducing appropriate abstraction layers. This enables Spring Cloud users to avoid changing their code if the underlying technology needs to change. Spring Cloud has support for Netflix tools such as Eureka, Ribbon, Zuul and Hystrix, among others.

Arquillian, considered by many the best integration testing framework for Java EE projects, was dedicated a full one-hour session at the event. As a long-time contributor to the project, it was nice to see a presentation about it and how its Cube extension can make use of Docker containers to run your tests against.

One interesting session was a non-technical one, also given by Mark Heckler, which focused on financial equations and the factors that drive business decisions. The purpose was to give an understanding of how businesses calculate paybacks and returns on investment, and when and why it makes sense for a company to invest in new technology. For instance, the payback period for an initiative should be as short as possible, preferably less than a year. The presentation also covered net present values and quantifications. Transforming a monolithic architecture to a microservice-style approach was the example used for the calculations. The average release cadence for key monolithic applications in our industry is just one release per year, which in many cases is even optimistic! Compare these numbers with Amazon, who have developed their applications with a microservice architecture since 2011. Their average release cadence is 7448 deployments a day, which means that they perform a production release once every 11.6 seconds! This presentation certainly laid a good foundation for my own session on continuous delivery.
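The back-of-the-envelope arithmetic behind these numbers is easy to verify. The sketch below recomputes the release interval from the cited daily cadence; the investment and savings figures in the payback example are made-up illustrations, not numbers from the talk:

```java
public class CadenceMath {
    // Average seconds between releases given a daily release count.
    static double secondsBetweenReleases(int releasesPerDay) {
        return 86_400.0 / releasesPerDay; // seconds in a day
    }

    // Simple payback period in months: how long until cumulative
    // monthly savings cover the up-front investment.
    static double paybackMonths(double investment, double monthlySavings) {
        return investment / monthlySavings;
    }

    public static void main(String[] args) {
        // Amazon's cited cadence of 7448 releases/day:
        // 86400 / 7448 is roughly one release every 11.6 seconds.
        System.out.printf("%.1f s between releases%n",
                secondsBetweenReleases(7448));
        // Hypothetical figures: a 120k automation investment saving
        // 15k per month pays back in 8 months, comfortably under a year.
        System.out.printf("%.0f months payback%n",
                paybackMonths(120_000, 15_000));
    }
}
```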

Then it was finally time for my presentation! When I arrived 10 minutes before the talk there was already a long line outside the room, and it filled up nicely. In my presentation, The Road to Continuous Delivery, I talked about how a Swedish company revamped its organization to implement a completely new distributed system running on the Java platform, using continuous delivery ways of working. I started with a quick poll of how many in the room were able to perform production releases every day, and I was apparently the only one. So I am glad that there was a lot of interest in this topic at the conference! If you are interested in having me present this or another talk on the topic at your company or at an event, please contact me! It was great fun to give this presentation, and I got some good questions and interesting discussions afterwards. You can find the slides for my presentation here.

Java Virtual Machine (JVM) architect Mikael Vidstedt gave an interesting presentation with JVM insights. It is apparently common for String objects to take up a large share of the Java heap footprint (25-50% is not unusual), and many of those strings have the same value. JDK 8, however, introduced an important JVM improvement to address this memory footprint concern: string deduplication, delivered as part of the G1 garbage collector improvement effort and enabled with the -XX:+UseStringDeduplication flag.

I was positively surprised that many sessions were quite non-technical. A few sessions talked about the possibilities of the Raspberry Pi embedded device, but there were also a couple of presentations that touched on software development in a broader sense. I think it is good and important for conferences of this size to have such a balance in content.

All in all, I thought it was a good event. JavaOne and Oracle Open World combined drew around 1300 attendees and a big exhibition hall. JavaOne was multi-track, but most sessions were in Portuguese. There was usually one track dedicated to sessions in English, but this obviously limited the available content somewhat for non-Portuguese-speaking attendees. An interesting feature was that the sessions held in English were live-translated into Portuguese for those who borrowed headphones for that purpose.

The best part of the conference was that I got to meet a lot of people from many different countries. It is great fun to meet colleagues from all around the world who share my enthusiasm and passion for software development. The strive to make this industry better continues!

Tommy Tynjä
LinkedIn profile

Diabol helps Klarna develop a new platform and become experts in Continuous Delivery

Since its founding in 2005, Klarna has experienced strong growth and, in a very short time, grown into a company with more than 1000 employees. To meet the global market's demand for its payment services, Klarna needed to make major changes to its technology, organization and processes. Klarna engaged consultants from Diabol to reach its ambitious goals of developing a new service platform and becoming a leader in DevOps and Continuous Delivery.


After great success in the Nordic market and several years of strong growth, Klarna needed to develop a new platform for its payment services in order to address the global market. The new platform had to handle millions of transactions daily and be robust and scalable, while at the same time supporting an agile way of working with rapid changes in a growing organization. The timeline was very challenging, and in addition to developing all the services, both ways of working and infrastructure had to change to meet the challenges of large-scale operation and short lead times.


Diabol's experienced consultants, with expert competence in Java, DevOps and Continuous Delivery, were entrusted with reinforcing the development teams building the new platform, while also automating the release process using, among other things, cloud technology from Amazon AWS. Competence in automation and tooling was also built up in an internal support team, whose purpose was to support the development teams with tools and processes for delivering their services quickly, safely and automatically, independently of each other. Diabol had a central role in this team and acted as a coach for Continuous Delivery and DevOps across the development and operations organizations.


Klarna was able to go live with the new platform in record time and launch in several large international markets. Autonomous development teams with a strong delivery focus can today deliver changes and new functionality to production fully automatically on their own, which when needed can be several times a day.

Comprehensive automated tests run continuously for every code change, and the provisioning of test environments in AWS is also fully automated. Some teams even practice so-called continuous deployment, delivering code changes to their production environments without any manual intervention whatsoever.

“Diabol has been a key player in reaching our ambitious goals in DevOps and Continuous Delivery.”

– Tobias Palmborg, Manager Engineering Support, Klarna



Diabol migrates Abdona to AWS and introduces an automated delivery process

Abdona provides business travel management services to a number of public-sector organizations. In connection with a major development effort, they also wanted to review the infrastructure for production and test environments in order to reduce costs and reliably guarantee high quality and short delivery times. Diabol was engaged for an end-to-end commitment to modernize the infrastructure, development environment, and test and delivery process.


Abdona's system consists of a classic three-tier Java Enterprise architecture, and since its launch seven years ago only minor updates had been made. The technology and infrastructure had not been kept up to date and had over time become outdated and hard to manage: manually configured servers, inadequate documentation and traceability, rudimentary version control, no continuous integration or stable build environment, and manual testing and deployment. In addition to these structural problems, the cost of the manually provisioned hardware for both test and production environments was unjustifiably high compared to today's cloud-based alternatives.


Diabol started by mapping out the problems and, first of all, taking control of the codebase, which was spread across several version control systems. All code was moved to Atlassian Bitbucket, and a build server running Jenkins was set up to build and test the system in a repeatable way. Nexus was chosen to manage dependencies and archive the artifacts produced by the build server. The infrastructure was migrated to Amazon AWS, both for cost reasons and to take advantage of modern automation tooling and the possibilities of dynamic infrastructure. The application tier was moved to EC2 and the database to RDS. Terraform was chosen to automate the provisioning of AWS resources, and Puppet was introduced for automated server configuration management. A complete delivery pipeline with automated deployment was implemented in Jenkins.


The migration to Amazon AWS has drastically reduced Abdona's operating costs. In addition, they now have a scalable, modern infrastructure, full traceability, and an automated delivery pipeline that guarantees high quality and short lead times. The entire system can be recreated from the codebase, and test environments can be created fully automatically when needed.