Delivery pipelines with GoCD

At my most recent assignment, one of my missions was to set up delivery pipelines for a number of services built in Java and some front-end apps. When I started, they already had some rudimentary automated builds using Hudson, but we really wanted to start from scratch. We decided to give GoCD a chance because it pretty much satisfied our demands. We wanted something that could:

  • orchestrate a deployment pipeline (Build -> CI -> QA -> Prod)
  • run on-premise
  • deploy to production in a secure way
  • show a pipeline dashboard
  • handle user authentication and authorization
  • support fan-in

GoCD is the open source “continuous delivery server” backed by Thoughtworks (also known for Selenium).

GoCD key concepts

A pipeline in GoCD consists of stages, which are executed in sequence. A stage contains jobs, which are executed in parallel, and a job contains tasks, which are executed in sequence. The job is the smallest schedulable unit and jobs are executed by agents. An agent is a Java process that normally runs on its own separate machine. An idle agent fetches jobs from the GoCD server and executes their tasks. A pipeline is triggered by a “material”, which can be any of the following version control systems:

  • Git
  • Subversion
  • Mercurial
  • Team Foundation Server
  • Perforce

A pipeline can also be triggered by a dependency material, i.e. an upstream pipeline. A pipeline can have more than one material.

Technically you can put the whole delivery pipeline (CI -> QA -> PROD) inside one pipeline as stages, but there are several reasons why you would want to split it into separate chained pipelines. The most obvious reason for doing so is the GoCD concept of environments. A pipeline is the smallest entity that you can place inside an environment. The concept of environments is explained in more detail in the security section below.

You can logically group pipelines in “pipeline groups”, so in the typical setup for a microservice you might have a pipeline group containing the following pipelines:

example-server-pl

A pipeline group can be used as a view and as a placeholder for access rights. You can give a specific user or role view, execution or admin rights to all pipelines within a pipeline group.

To avoid repeating yourself, GoCD offers the possibility to create templates, which are parameterized workflows that can be reused. Typically you could have templates for:

  • building (maven, static code analysis)
  • deploying to a test environment
  • deploying to a production environment

For more details on concepts in GoCD, see: https://docs.go.cd/current/introduction/concepts_in_go.html

Administration

Installation

The GoCD server and agents are bundled as rpm, deb, Windows and OS X install packages. We decided to install them using Puppet (https://forge.puppet.com/unibet/go) since we already had Puppet in place. One nice feature is that agents auto-upgrade when the GoCD server is upgraded, so in the normal case you only need to care about GoCD server upgrades.

User management

We use the LDAP integration to authenticate users in GoCD. When a user defined in LDAP logs in for the first time, it is automatically registered as a new user in GoCD. If you use role-based authorization, an admin user needs to assign roles to the new user.

Disk space management

All artifacts created by pipelines are stored on the GoCD server, so sooner or later you will face the fact that disks are getting full. We have used the tool “GoCD janitor”, which analyses the value stream (the collection of chained upstream and downstream pipelines) and automatically removes artifacts that won’t make it to production.

Security

One of the major concerns when deploying to production is the handling of deployment secrets such as SSH keys. My client uses Ansible extensively as a deployment tool, so we needed the ability to handle SSH keys on the agents in a secure way. You obviously don’t want to use the same SSH keys in test and production, and GoCD has a feature called environments for this purpose. You can place agents and pipelines in an environment (ci, qa, production) so that any time a production pipeline is triggered, it runs on an agent in the production environment.

It is also possible to store encrypted secrets within a pipeline configuration using so-called secure variables. A secure variable can be used within a pipeline like an environment variable, with the only difference that it is encrypted with a shared secret stored on the GoCD server. We have not used this feature much since we solved this issue in other ways. You can also define secure variables at the environment level, so that a pipeline running in a specific environment inherits all secure variables defined in that environment.

Pipelines as code

This was one of the features GoCD was lacking at the very beginning, but at the same time there were API endpoints for creating and managing pipelines. Since GoCD version 16.7.0 there is support for “pipelines as code” via the yaml-plugin or the json-plugin. Unfortunately templates are not supported, which can lead to duplicated code in your pipeline configuration repo(s).

For further reading please refer to: https://docs.go.cd/current/advanced_usage/pipelines_as_code.html

Example

Let’s wrap it up with a fully working example where the key concepts explained above are used. In this example we will set up a deployment pipeline (BUILD -> QA -> PROD) for a Dropwizard application. It also sets up a basic example where fan-in is used: you will notice that the downstream pipeline “prod” won’t be triggered until both “test” and “perf-test” have finished. We use the concept of “pipelines as code” to configure the deployment pipeline using “gocd-plumber”. GoCD-plumber is a tool written in Go (by me, by the way), which uses the GoCD API to create pipelines from yaml code. In contrast to the yaml-plugin and json-plugin, it does support templates by merging a pipeline hash over a template hash.

Preparations

This example requires Docker, so if you don’t have it installed, please install it first.

  1. git clone https://github.com/dennisgranath/gocd-docker.git
  2. cd gocd-docker
  3. docker-compose up
  4. Go to http://127.0.0.1:8153 and log in as ‘admin’ with password ‘badger’
  5. Press the play button on the “create-pipelines” pipeline
  6. Press the pause button to “unpause” the pipelines


Reasons why code freezes don’t work

This article is a continuation of my previous article on how to release software with quality and confidence.

When the big e-commerce holidays such as Black Friday, Cyber Monday and Christmas are looming around the corner, many companies are gearing up to make sure their systems are stable and able to handle the expected increase in traffic.

What many organizations do is introduce a full code freeze for the holiday season, where no new code changes or features are allowed to be released to the production systems during this period of the year. Another approach is to only allow updates to the production environment during a few hours within office hours. These approaches might sound logical but are in reality anti-patterns that neither reduce risk nor ensure stability.

What you’re doing when you stop deploying software is interrupting the pace of the development teams. The teams’ feedback loops break and the ways of working are forced to deviate from the standard process, which leads to decreased productivity.

When your normal process allows deploying software on a regular basis in an automated and streamlined fashion, it becomes just as natural as breathing. When you stop deploying your software, it’s like holding your breath. This is what happens to your development process. When you finally open the floodgates after the holiday season, bugs and deployment failures are much more likely to occur. The more changes that are stacked up, the bigger the risk of unwanted consequences for each deployment. Changes might also be rushed, at lower quality, to make it into production in time before the freeze. Keeping track of which changes are pending release becomes more challenging.

Keeping your systems stable with exceptional uptime should be a priority throughout the whole year, not only during the holiday season. The ways of working for the development teams and the IT organization should embrace this mindset and these expectations of quality. Proper planning can go a long way to reduce risk. It might not make sense to push through a big refactoring of the source code during the busiest time of the year.

A key component for allowing a continuous flow of evolving software throughout the entire year is organizational maturity regarding monitoring, logging and planning. It should be possible to monitor, preferably visualised on big screens in the office, how the systems are behaving and functioning in real time. Both technical and business metrics should be measured, with alarm thresholds configured accordingly.
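
To make this a bit more concrete, here is a minimal sketch of what metering a business metric could look like in Java. It assumes the Dropwizard Metrics library and uses made-up metric names; a real setup would report to Graphite, CloudWatch or similar rather than the console.

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

import java.util.concurrent.TimeUnit;

public class CheckoutMetrics {

    private static final MetricRegistry registry = new MetricRegistry();
    // Hypothetical business metric: the rate of completed checkouts
    private static final Meter completedCheckouts = registry.meter("business.checkouts.completed");

    public static void main(String[] args) throws InterruptedException {
        // Print metrics every 10 seconds; a real setup would feed a dashboard and alarms instead
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(10, TimeUnit.SECONDS);

        // Mark the meter whenever a checkout completes (simulated here)
        for (int i = 0; i < 100; i++) {
            completedCheckouts.mark();
            Thread.sleep(200);
        }
    }
}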

Deployment strategies are also important. When deploying a new version of a system, it should always be cheap to roll back to a previous version. This should be automated and possible to do by clicking just one button. Then, if there is a concern after deploying a new version, discovered by closely monitoring the available metrics, it is easy to revert to a previously known good version. Canary releasing is another strategy where possible issues can be discovered before rolling out new code changes to all users. Canary releasing allows you to route a specific amount of traffic to a specific version of the software, and thus to compare metrics, outcomes and possible errors between the different versions.
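
The exact canary mechanics depend on your load balancer or router, but the core of it is just a weighted routing decision. A minimal, hypothetical sketch in Java (the version names and the 5% share are made up):

import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: send a configurable share of requests to the canary version
// and keep the rest on the stable version, so metrics for the two can be compared.
public class CanaryRouter {

    private final double canaryShare;

    public CanaryRouter(double canaryShare) {
        this.canaryShare = canaryShare; // e.g. 0.05 routes roughly 5% of traffic to the canary
    }

    public String chooseVersion() {
        return ThreadLocalRandom.current().nextDouble() < canaryShare ? "v2-canary" : "v1-stable";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(0.05);
        for (int i = 0; i < 10; i++) {
            System.out.println("request " + i + " -> " + router.chooseVersion());
        }
    }
}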

When the holiday season begins, keep calm and keep breathing.

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

Summary of Eurostar 2016

About Eurostar

Eurostar is Europe’s largest conference focused on testing, and this year the conference was held in Stockholm October 31 – November 3. Since I have been working with test automation lately, it seemed like a good opportunity to go to my first test conference (I was there for two days). The conference had the usual mix of tutorials, presentations and expo: very much a traditional conference setup rather than a community-driven one.

Key takeaway

Continuous delivery and DevOps change much of the conventional thinking around test. The change is not primarily that you should automate everything related to test, but that, in the same way as you drive user experience testing with things like A/B testing, a key enabler of quality is monitoring in production and the ability to quickly respond to problems. This does not mean that all automated testing is useless, but it calls for a very different mindset compared to conventional quality wisdom, where the focus has been on finding problems as early as possible (in terms of phases of development). Instead there is increased focus on deploying changes fast in a gradual and controlled way, with a high degree of monitoring and diagnostics, thereby being able to diagnose and remedy any issues quickly.

Presentations

Roughly in order of how much I liked the sessions, here is what I participated in:

Sally Goble et al. – How we learned to love quality and stop testing

This was both well presented and thought provoking. They described the journey at the Guardian from having a long (two week) development cycle with a considerable amount of testing, to the current situation where they deploy continuously. The core of the story was how the focus had changed from test to quality with a DevOps setup. When they first started their automation journey they took the approach of creating Selenium tests for their full manual regression test suite. This is pretty much scrapped now, and they rely primarily on the ability to quickly detect problems in production and fix them. Canary releases, good use of monitoring/APM, and investments in improved logging were the key enablers here.

Automated tests are still written at the unit, API and integration levels, but as noted above there is not much automation of front-end tests.

Declan O’Riordan – Application security testing: A new approach

Declan is an independent consultant and started his talk claiming that the number of security-related incidents continues to increase and that there is a long list of potential security breaches one needs to be aware of. He also talked about how continuous delivery has shrunk the time frame available for security testing to almost nothing, i.e. it is not getting any easier to secure your applications. He then went on to claim that there has been a breakthrough in the last 1-2 years in what tools can do with regard to security testing. These new tools are categorised as IAST (Interactive Application Security Testing) and RASP (Runtime Application Self-Protection). While traditional automated security testing tools find 20-30% of the security issues in an application, IAST tools find as much as 99% of the issues automatically. He gave a demo and it was impressive. He used the toolset from Contrast, but there are other suppliers with similar tools and many scrambling to catch up. It seems to me that an IAST tool should be part of your pipeline before going to production, and a RASP solution should be part of your production monitoring/setup. Overall an interesting talk and lots to evaluate, follow up on, and possibly apply.

Jan van Moll – Root cause analysis for testers

This was both entertaining and well presented. Jan is head of quality at Philips Healthcare, but he is also an independent investigator/software expert who is called in when things go awfully wrong or when there are close escapes/near misses, like when a plane crashes.

There was no clear takeaway for me from the talk that can immediately be put to use, but I got a list of references to different root cause analysis techniques that I hope to find the time to look into at some point. It would have been interesting to hear more, as this talk only scratched the surface of the subject.

Julian Harty – Automated testing of mobile apps

This was interesting, but it is not a space that I am directly involved in, so I am not sure there will be anything immediately useful for me. Things that were talked about include:

  • Monkey testing: there is apparently some tooling included in the Android SDK that is quite useful for this.
  • An analysis that Microsoft Research has done on 30 000 app crash dumps indicates that over 90% of all crashes are caused by 10 common implementation mistakes. Failing to check HTTP status codes and always expecting a 200 comes to mind as one of the top ones.
  • appdiff.com: a free, robot-based approach to automated testing where the robots apply machine-learned heuristics. Simple and free to try, and if you are doing mobile and not already using it, you should probably have a look.

Ben Simo – Stories from testing healthcare.gov

The presenter rose to fame at the launch of the Obamacare website a few years ago. As you may remember, there were lots of problems in the weeks and months after the launch. Ben approached this as a user from the beginning, but after a while, when things worked so poorly, he started to look at things from a tester’s point of view. He then uncovered a number of issues related to security, usability, performance, etc. He started to share his experiences on social media, mostly to help others trying to use the site, and eventually gained attention in mainstream media as well. The presentation was fun and entertaining, but I am not sure there was that much to learn, as it was mostly a run-through of all the problems he found and how poorly the project/launch had been handled. So it was entertaining and interesting but did not offer much in terms of insight or learning.

Jay Sehti – What happened when we switched our data center off?

The background was that a couple of years ago the Financial Times had a major outage in one of their data centres, and the talk was about what went down in relation to that. I think the most interesting lesson was that they had built a dashboard in Dashing showing service health across their key services/applications, each of which is made up of a number of microservices. But when they went to the dashboard to see what was still working and where there were problems, they realised that the dashboard had a single point of failure related to the data centre that was down. Darn. Lesson learned: secure your monitoring in the same way as, or better than, your applications.

In addition to that specific lesson, I think the most interesting part of this presentation was the kind of journey they had gone through moving to continuous delivery and microservices. In many ways this was similar to the Guardian story in that they now relied more on monitoring and being able to quickly respond to problems rather than having extensive automated (front-end) tests. He mentioned, for example, that they still had some Selenium tests, but that coverage was probably around 20% now compared to 80% before.

Similar to the Guardian, they had plenty of test automation at the unit/API levels but less automation of front-end tests.

Tutorial – Test automation management patterns

This was mostly a walk-through of the website/wiki testautomationpatterns.wikispaces.com and how to use it. The content on the wiki is not bad as such, but it is quite high-level and common-sense oriented. It is probably useful to browse through the Issues and Automation Patterns if you are involved in test automation and are having a difficult time getting traction. The diagnostics tool did not appear that useful to me.

There were no big revelations for me during this tutorial; if anything, it was more of a confirmation that the approach we have taken at my current customer around testing of backend systems is sound.

Liz Keogh – How to test the inside of your head

Liz, an independent consultant of BDD fame, talked among other things about Cynefin and how it is applicable in a testing context. Kind of interesting, but it did not create much new insight for me (it refreshed some old ones, and that is OK too).

Bryan Bakker – Software reliability: Measuring to know

In this presentation Bryan presented an approach (process) to reliability engineering that he has developed together with a couple of colleagues/friends. The talk was a bit dry and academic and quite heavily geared towards embedded software; surveillance cameras were the primary example used. Some interesting stuff here, in particular in terms of how to quantify reliability.

Adam Carmi – Transforming your automated tests with visual testing

Adam is CTO of an Israeli tools company called Applitools, and the talk was close to a marketing pitch for their tool. Visual testing is distinct from functional testing in that it is only concerned with visuals, i.e. what the human eye can see. It seems to me that if you are doing a lot of cross-device, cross-browser testing, this kind of automated test might be of merit.

Harry Collins – The critique of AI in the age of the net

Harry is a professor in sociology at Cardiff University. This could have been an interesting talk about scientific theory, sociology, AI, philosophy, theory of mind and a bunch of other things. I am sure the presenter has the knowledge to make a great presentation on any of these subjects, but this was ill-prepared, incoherent, pretty much without a point, and not very well presented. More of a late-night rant in a bar than a keynote.

Summary

As with most conferences there was a mix of good and not quite so good content, but overall I felt it was more than worthwhile to be there, as I learned a bunch of things and maybe even had an insight or two. Hopefully there will be opportunities to apply some of the things I learned at the customers I am working with.

Svante Lidman
@svante_lidman
LinkedIn profile
http://www.diabol.se

How to release software frequently with quality and confidence

Continuous Delivery is a way of working that reduces the cost, time and risk of releasing software. The idea is that small and frequent code changes impose less risk for bugs and disturbances. Key aspects of Continuous Delivery are to always have your source code in a releasable state and to have a fully automated software delivery process. These automated processes are typically called delivery pipelines.

In many companies, releasing software is a painful process, and one that is not done on a frequent basis. Many companies are not even able to release software once a month. In order to make something less painful you have to practice it. If you’re about to run a marathon for the first time, it might seem painful. But you won’t get better at running if you don’t do it frequently enough. It will still hurt.

As humans, we tend to be bad at repetitive work. It is therefore both time consuming and error prone to involve manual steps in the process of deploying software. That is reason enough to automate the whole process. Automation ensures that the process is fast, repeatable and provides traceability.

Never let a human do a script’s job.

With automation in place you can start building confidence in your release process and start practicing it more frequently. Releasing software should be practiced so often that it becomes something you don’t even think about.

Releasing new code changes should be so natural that you don’t need to think about it. It should be like breathing, not something as painful as giving birth.

A prerequisite for successfully practicing Continuous Delivery is test automation. Without test automation in place, testing becomes the bottleneck in your delivery process. It is therefore crucial that quality is built into the software. Writing automated tests, whether unit, integration or acceptance tests, should be a natural part of the development process. You cannot ship a piece of code that has no proper test coverage. If you are not testing the code you’re developing, you cannot be certain that it works as intended. If there is a need for regression tests, automate them and make them cheap to run so that they can be run repeatedly for every code change.

With Continuous Delivery it becomes evident that a release and a deployment are not the same thing. A deployment is an exercise that happens all the time, for each new code change. A release, however, is a business or marketing decision about when to launch a new feature to your end users.
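
One common way to keep the two apart is a feature toggle: the code for a new feature is deployed continuously but only exposed to end users once the business decides to release it. A minimal sketch (the toggle name and the in-memory map are made up; real setups read toggles from configuration or a toggle service):

import java.util.Collections;
import java.util.Map;

// Hypothetical sketch of a feature toggle: the code is deployed with every change,
// but the feature is only released to users once its toggle is switched on.
public class FeatureToggles {

    private final Map<String, Boolean> toggles;

    public FeatureToggles(Map<String, Boolean> toggles) {
        this.toggles = toggles; // in practice read from configuration, a database or a toggle service
    }

    public boolean isEnabled(String feature) {
        return toggles.getOrDefault(feature, false);
    }

    public static void main(String[] args) {
        FeatureToggles toggles = new FeatureToggles(Collections.singletonMap("new-checkout-flow", false));
        if (toggles.isEnabled("new-checkout-flow")) {
            System.out.println("Render the new checkout flow");
        } else {
            System.out.println("Render the existing checkout flow");
        }
    }
}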

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

JavaOne Latin America summary

Last week I attended the JavaOne Latin America software development conference in São Paulo, Brazil. This was a joint event with Oracle Open World, a conference focused on Oracle solutions. I was accepted as a speaker for the Java, DevOps and Methodologies track at JavaOne. This article intends to give a summary of my main takeaways from the event.

The main point Oracle made during the opening keynote of Oracle Open World was their commitment to cloud technology. Oracle CEO Mark Hurd made the prediction that in 10 years, most production workloads will be running in the cloud. Oracle currently engineers all of their solutions for the cloud, which their on-premise solutions can also benefit from. Security is a very important aspect of all of their solutions. Quite a few sessions during JavaOne showcased Java applications running on Oracle’s cloud platform.

The JavaOne opening keynote demonstrated Java Flight Recorder, which enables you to profile your applications with near-zero overhead. The aim of Flight Recorder is to have as little overhead as possible so it can be enabled in production systems by default. It has a user interface which provides valuable data in case you want to understand how your Java applications perform.

The next version of Java, Java 9, will feature long-awaited modularity through project Jigsaw. Today, all of the public APIs in your source code packages are exposed and can be used by anyone who has access to the classes. With Jigsaw, you as a developer can decide which packages are exported and exposed. This gives you more control over which APIs are internal and which are intended for use. This was quickly demonstrated on stage. You will also have the possibility to declare which parts of the JDK your application depends on. For instance, it is quite unlikely that you need UI-related libraries such as Swing, or audio, if you develop back-end software. I once tried to dissect the Java rt.jar for this particular purpose (just for fun), so it is nice to finally see this becoming a reality. The keynote also mentioned project Valhalla (e.g. value types for classes) and project Panama, but at this date it is still uncertain whether they will be included in Java 9.
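
As a small illustration of what this looks like (the module and package names are made up for the example), a Jigsaw module descriptor declares both what the module depends on and what it exports:

// module-info.java, placed in the root of the module's source tree
module com.example.backend {
    // Only pull in the parts of the JDK (and other modules) the application actually needs
    requires java.sql;

    // Only this package is visible to other modules; everything else stays internal
    exports com.example.backend.api;
}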

Mark Heckler from Pivotal had an interesting session on the Spring framework and their Spring Cloud projects. Pivotal works closely with Netflix, a well-known open source contributor and one of the industry leaders when it comes to developing and using new technologies. However, since Netflix is committed to running their applications on AWS, Spring Cloud aims to make use of Netflix’s work to create portable solutions for a wider community by introducing appropriate abstraction layers. This enables Spring Cloud users to avoid changing their code if there is a need to change the underlying technology. Spring Cloud has support for Netflix tools such as Eureka, Ribbon, Zuul and Hystrix, among others.
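
As a rough, hypothetical sketch of what this abstraction looks like from the application’s point of view, the following assumes the Spring Cloud Netflix starters are on the classpath and that a @LoadBalanced RestTemplate bean is configured; the service and method names are made up:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// Hypothetical client: the Spring Cloud annotations hide the Netflix machinery,
// so the circuit breaker and service discovery can change without touching this code.
@Service
public class RecommendationClient {

    private final RestTemplate restTemplate;

    // Assumed to be injected as a @LoadBalanced RestTemplate bean, so the logical name
    // "recommendation-service" is resolved through Eureka/Ribbon.
    public RecommendationClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @HystrixCommand(fallbackMethod = "defaultRecommendations")
    public String recommendationsFor(String userId) {
        return restTemplate.getForObject(
                "http://recommendation-service/recommendations/{userId}", String.class, userId);
    }

    public String defaultRecommendations(String userId) {
        return "[]"; // degrade gracefully when the downstream service is unavailable
    }
}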

Arquillian, by many considered the best integration testing framework for Java EE projects, was dedicated a full one-hour session at the event. As a long-time contributor to the project, it was nice to see a presentation about it and how its Cube extension can make use of Docker containers to execute your tests against.

One interesting session was a non-technical one, also given by Mark Heckler, which focused on financial equations and what factors drive business decisions. The purpose was to give an understanding of how businesses calculate paybacks and returns on investment, and when and why it makes sense for a company to invest in new technology. For instance, the payback period for an initiative should be as short as possible, preferably less than a year. The presentation also covered net present values and quantifications. Transforming a monolithic architecture to a microservice-style approach was the example used for the calculations. The average release cadence for key monolithic applications in our industry is just one release per year, which in many cases is even optimistic! Compare these numbers with Amazon, who have developed their applications with a microservice architecture since 2011. Their average release cadence is 7448 times a day, which means that they perform a production release once every 11.6 seconds! This presentation certainly laid a good foundation for my own session on continuous delivery.

Then it was finally time for my presentation! When I arrived 10 minutes before the talk there was already a long line outside the room, and it filled up nicely. In my presentation, The Road to Continuous Delivery, I talked about how a Swedish company revamped its organization to implement a completely new distributed system running on the Java platform, using continuous delivery ways of working. I started with a quick poll of how many in the room were able to perform production releases every day, and I was apparently the only one. So I’m glad there was a lot of interest in this topic at the conference! If you are interested in having me present this or another talk on the topic at your company or at an event, please contact me! It was great fun to give this presentation and I got some good questions and interesting discussions afterwards. You can find the slides for my presentation here.

Java Virtual Machine (JVM) architect Mikael Vidstedt had an interesting presentation about JVM insights. It is apparently common for String objects to take up a large share of the Java heap footprint (25-50% is not unusual), and many of them have the same value. JDK 8, however, introduced string deduplication in the G1 garbage collector (enabled with the -XX:+UseStringDeduplication flag) to address this memory footprint concern.

I was positively surprised that many sessions were quite non-technical. A few sessions talked about the possibilities with the Raspberry Pi embedded device, but there were also a couple of presentations that touched on software development in a broader sense. I think it is good and important for conferences of this size to have such a balance in content.

All in all I thought it was a good event. JavaOne and Oracle Open World combined drew around 1300 attendees, and there was a big exhibition hall. JavaOne was multi-track, but most sessions were in Portuguese. There was usually one track dedicated to English-speaking sessions, but this obviously limited the available content somewhat for non-Portuguese-speaking attendees. An interesting feature was that the sessions held in English were live-translated to Portuguese for those who borrowed headphones for that purpose.

The best part of the conference was that I got to meet a lot of people from many different countries. It is great fun to meet colleagues from all around the world who share my enthusiasm and passion for software development. The strive to make this industry better continues!

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

AWS Summit recap

This week, the annual AWS Summit took place in sunny Stockholm. This article aims to provide a recap of my impressions from the event.

It was evident that the event had grown since last year, with approximately 2000 people attending this year’s one-day event at the Waterfront Congress Centre. Only a few sessions were technical, as most of the presentations just gave an overview of the different services and various use cases. I really appreciated the talks from AWS customers who spoke about their use of AWS technologies, what problems they solved and how. I found it valuable to hear from different companies how they leverage certain products in their production environments.

The opening keynote was long (2 hours!) and included a lot of sales talk. The main keynote speaker mentioned that 20 percent of the audience had never used any AWS services at all, which explains the thorough walkthrough of the different AWS products. One product that stood out was Amazon Inspector, which can detect and help remediate security issues early in your AWS environment. It is not yet available in all regions, but is available in e.g. eu-west-1 (Ireland). It was also interesting to hear about migration of large amounts of data using Snowball, a physical device shipped to your datacenter, which allows you to move your data faster than over the Internet (except for the physical delivery of the device to and from your own datacenter).

It is undeniable that the Internet of Things (IoT) is gaining traction and that the number of connected devices has grown exponentially in the past few years. AWS provides several services for developing and running IoT services. With AWS IoT, your devices can securely communicate with your backend servers. What I found most interesting was the concept of device shadows. A shadow is an interface which allows you to communicate with a device even if it is offline at the moment. In your application, you communicate with the shadow without needing to care about whether the device is online or not. If you want to change the state of a device that is currently offline, you update the shadow, and when the device connects again, it will get the new desired state from the shadow.

At the startup track, we got to hear how Mojang leverages AWS for their Minecraft Realms concept. Instead of letting external parties host their game servers, they decided to go with AWS for Minecraft Realms, to allow for a more flexible infrastructure. An interesting aspect is that they had to develop their own algorithm for scaling out quickly, as in a gaming environment it is not acceptable to wait five minutes for an auto scaling group to spin up new machines to meet the current demand from users. Instead, they have to use quite large instance types and keep new servers on standby to be able to take on new traffic as it arrives. It is not trivial either to terminate instances where people are playing; even if there are only a few, that wouldn’t provide a good user experience. Instead, they kindly inform the users that the server will terminate in five minutes, and that usually makes the users change server. Not ideal, but live migration is too far away at the moment. They still use old EC2-Classic instances and will have to do some heavy lifting to modernise their stack on AWS.

There was also a presentation from QuizUp on how they use infrastructure as code with Terraform to manage their AWS resources. A benefit they get from using Terraform instead of CloudFormation is an execution plan before changes are actually applied. A drawback is that it is not possible to query Terraform for the current resources and their state directly from AWS.

In the world of relational databases in AWS (RDS), Aurora is an AWS-developed database designed to maximise reliability, scalability and cost-effectiveness. It delivers up to five times the throughput of standard MySQL running on the same hardware. It is designed to scale and to handle failures, and it even provides an SQL extension to simulate failures:
ALTER SYSTEM CRASH [INSTANCE | DISPATCHER | NODE ];
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT
* READ REPLICA FAILURE
* DISK FAILURE
* DISK CONGESTION
FOR INTERVAL quantity [ YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND ]

Probably the most interesting session of the day was about serverless architecture using AWS Lambda. Lambda allows you to upload snippets of code, functions, to AWS, which runs them for you. There is no need to provision servers or think about scalability; AWS does that for you, and you only pay for the time your code executes, in units of 100 ms. The best thing about this talk was the peek under the hood. AWS leverages Linux containers (not Docker) to isolate the resources of the uploaded functions and to be able to run and scale them quickly. It also offers predictive capacity planning. An interesting part is that you can upload libraries which your code depends on as part of your function, so you could basically run a small microservice just by using Lambda. To deploy your function, you package it in a zip archive and use CloudFormation (specified as type AWS::Lambda::Function). You are able to run your function inside your VPC and thus leverage other resources available within it.
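
Lambda also supports Java, so a function can be as small as a class implementing the RequestHandler interface from the AWS Lambda Java core library. A minimal sketch (the event shape and greeting logic are made up):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Minimal handler sketch: this class plus its dependencies go into the zip archive,
// and AWS invokes handleRequest for every incoming event.
public class GreetingHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        context.getLogger().log("Received event: " + event);
        String name = event.getOrDefault("name", "world");
        return "Hello, " + name + "!";
    }
}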

All in all I thought this was a great event. If you didn’t attend I really recommend attending the next one – especially if you’re already using AWS.

As we at Diabol are standard partners with Amazon, we can not only assist you with your cloud platform strategy but also tie that together with the full view of your systems development process. Don’t hesitate to contact us!

You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

The power of Jenkins JobDSL

At my current client we define all of our Jenkins jobs with Jenkins Job Builder so that we can have all of our job configurations version controlled. Every push to the version control repository which hosts the configuration files will trigger an update of the jobs in Jenkins so that we can make sure that the state in version control is always reflected in our Jenkins setup.

We have six jobs that are essentially the same for each of our projects but with only a slight change of the input parameters to each job. These jobs are responsible for deploying the application binaries to different environments.

Jenkins Job Builder supports templating, but our experience is that those templates usually become quite hard to maintain in the long run. And since it is just YAML files, you are quite restricted in what you are able to do. JobDSL job configurations, on the other hand, are written in Groovy. This allows you to insert actual code execution in your job configuration scripts, which can be very powerful.

In our case, with JobDSL we can easily create one common job configuration which we then iterate over with the job-specific parameters to create the necessary jobs for each environment. We can also use utility methods written in Groovy which we invoke in our job configurations. So instead of having to maintain similar Jenkins Job Builder configurations in YAML, we can do it much more concisely with JobDSL.

Below, an example of a JobDSL configuration file (Groovy code), which generates six jobs according to the parameterized job template:

class Utils {
    static String environment(String qualifier) { qualifier.substring(0, qualifier.indexOf('-')).toUpperCase() }
    static String environmentType(String qualifier) { qualifier.substring(qualifier.indexOf('-') + 1)}
}

[
    [qualifier: "us-staging"],
    [qualifier: "eu-staging"],
    [qualifier: "us-uat"],
    [qualifier: "eu-uat"],
    [qualifier: "us-live"],
    [qualifier: "eu-live"]
].each { Map environment ->
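    // One deployment job is generated per qualifier above; only the parameters differ.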

    job("myproject-deploy-${environment.qualifier}") {
        description "Deploy my project to ${environment.qualifier}"
        parameters {
            stringParam('GIT_SHA', null, null)
        }
        scm {
            git {
                remote {
                    url('ssh://git@my_git_repo/myproject.git')
                    credentials('jenkins-credentials')
                }
                branch('$GIT_SHA')
            }
        }
        deliveryPipelineConfiguration("Deploy to " + Utils.environmentType("${environment.qualifier}"),
                Utils.environment("${environment.qualifier}") + " environment")
        logRotator(-1, 30, -1, -1)
        steps {
            shell("""deployment/azure/deploy.sh ${environment.qualifier}""")
        }
    }
}

If you need help getting your Jenkins configuration into good shape, just contact us and we will be happy to help you! You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

Choosing Atlassian cloud or on-premise: things to consider

Many companies are considering moving from their Atlassian cloud instance to a self-hosted (on-premise) installation, or hiring a third-party hosting service. Here are some pros and cons that you and your company should consider:

Pros (on-premise):
+ Customization of the source code
+ Access to the entire Atlassian plugin library
+ No forced application upgrades
+ Access to the log files
+ Access to the database
+ Potential cost savings on licensing fees
+ Ability to restrict network access (e.g. host JIRA on a server only accessible to your company via a VPN)
+ Commercial and Academic licenses give you an additional developer license that you can use in a second instance (usually a non-production instance where you can replicate production and test new features, plugins, upgrades, …)
+ Use your own domain name
+ No storage limit
+ LDAP

Cons:
– Managing a self-signed or CA-signed certificate (you probably want to enforce SSL)
– Server administration (including the allocated resources)
– Planning upgrades yourself
– Maintaining your own infrastructure
– Reliability (what if your production instance goes down?)

However, to figure out what best fits your company’s needs, I’d advise you to generate an evaluation license (my.atlassian.com) and give it a try on a test server (importing your Cloud data) to see if it fulfills your final goal. Instructions for doing that can be found here.


Being Atlassian Experts in Sweden, Diabol can help you improve your Atlassian usage. Just contact us. You can read more about us at diabol.se.

Diabol helps Klarna develop a new platform and become experts in Continuous Delivery

Since its start in 2005, Klarna has experienced strong growth and in a very short time grown into a company with over 1000 employees. To meet the global market’s demand for its payment services, Klarna needed to make major changes to its technology, organization and processes. Klarna engaged consultants from Diabol to reach its ambitious goals of developing a new service platform and becoming a leader in DevOps and Continuous Delivery.

Challenge

After great success in the Nordic market and several years of strong growth, Klarna needed to develop a new platform for its payment services in order to address the global market. The new platform had to handle millions of transactions daily and be robust and scalable, while supporting an agile way of working with rapid changes in a growing organization. The timeline was very challenging, and in addition to developing all the services, both ways of working and infrastructure had to change to meet the challenges of large-scale operations and short lead times.

Solution

Diabol’s experienced consultants, with expert competence in Java, DevOps and Continuous Delivery, were entrusted with reinforcing the development teams building the new platform, while also automating the release process using, among other things, cloud technology from Amazon AWS. Competence in automation and tooling was also built up in an internal support team whose purpose was to support the development teams with tools and processes so that they could deliver their services quickly, securely and automatically, independently of each other. Diabol had a central role in this team and acted as a coach for Continuous Delivery and DevOps broadly across the development and operations organization.

Results

Klarna was able to go live with the new platform in record time and launch in several large international markets. Autonomous development teams with a strong delivery focus can now deliver changes and new functionality to production on their own, fully automatically, and when needed several times a day.

Extensive automated tests are run continuously for every code change, and test environments in AWS are also set up fully automatically. Some teams even practice “continuous deployment” and deliver code changes to their production environments without any manual intervention at all.

“Diabol has been a key player in achieving our ambitious goals in DevOps and Continuous Delivery.”

– Tobias Palmborg, Manager Engineering Support, Klarna


Diabol migrates Abdona to AWS and introduces an automated delivery process

Abdona provides business travel management services to a number of public sector organizations. In connection with a larger development effort, they also wanted to overhaul the infrastructure for production and test environments in order to reduce costs and reliably guarantee high quality and short delivery times. Diabol was engaged for an end-to-end commitment to modernize the infrastructure, development environment, and test and delivery process.

Challenge

Abdona’s system consists of a classic 3-tier Java Enterprise architecture, and since its launch 7 years ago only minor updates have been made. The technology and infrastructure had not been kept up to date and had over time become outdated and hard to manage: manually configured servers, inadequate documentation and traceability, scant version control, no continuous integration or stable build environment, and manual testing and deployment. In addition to these structural problems, the cost of the manually provisioned hardware for both test and production environments was unjustifiably high compared to today’s cloud-based alternatives.

Solution

Diabol started by mapping out the problems and, first of all, taking control of the code base, which was spread across several version control systems. All code was moved to Atlassian Bitbucket, and a build server running Jenkins was set up to build and test the system in a repeatable way. Nexus was chosen to manage dependencies and archive the artifacts produced by the build server. The infrastructure was migrated to Amazon AWS, both for cost reasons and to be able to take advantage of modern automation tools and the possibilities of dynamic infrastructure. The application tier was moved to EC2 and the database to RDS. Terraform was chosen to automate the provisioning of AWS resources, and Puppet was introduced for automated configuration management of the servers. A complete delivery pipeline with automated deployment was implemented in Jenkins.

Results

The migration to Amazon AWS has led to drastically reduced operating costs for Abdona. In addition, they now have a scalable, modern infrastructure, full traceability and an automated delivery pipeline that guarantees high quality and short lead times. The system can be rebuilt entirely from the code base, and test environments can be created fully automatically when needed.

We Continuously Deliver!