Tag Archives: Continuous Delivery

Next week is conference week!

Next week will be a conference-heavy one in Stockholm! The inaugural CoDe Continuous Delivery & DevOps conference takes place on June 2nd, while Agila Sverige (Agile Sweden) occupies June 3rd and 4th. Both conferences target agile software development practices, and Diabol is very well represented at these events. We will have one speaker at each conference, presenting talks related to Continuous Delivery.

Marcus Philip will present “Continuous Delivery for databases” at the CoDe conference, while Tommy Tynjä will give his talk “Next Generation Continuous Delivery” at Agila Sverige, where he takes a look at how the software of tomorrow can be built and delivered.

So if you’re interested in learning more about Continuous Delivery and what we do, don’t hesitate to come by!

See you at a conference very soon!

AWS CloudFormation introduction

AWS CloudFormation is a concept for modelling and setting up your Amazon Web Services resources in an automated and unified way. You create templates that describe your AWS resources, e.g. EC2 instances, EBS volumes, load balancers etc., and CloudFormation takes care of all provisioning and configuration of those resources. When CloudFormation provisions resources, they are grouped into something called a stack. A stack is typically represented by one template.

The templates are specified in JSON, which allows you to easily version control the templates describing your AWS infrastructure alongside your application code.
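To illustrate, a minimal single-resource template could look something like the sketch below. The resource name, instance type and AMI id are made up for the example:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example stack with a single EC2 instance",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  },
  "Outputs": {
    "WebServerPublicIp": {
      "Value": { "Fn::GetAtt": ["WebServer", "PublicIp"] }
    }
  }
}
```

Real templates grow from here with parameters, security groups, load balancers and so on, but the Resources section is the only mandatory part.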

CloudFormation fits perfectly into Continuous Delivery ways of working, where the whole AWS infrastructure can be automatically set up and maintained by a delivery pipeline. There is no need for a human to provision instances, set up security groups, DNS record sets etc., as long as the CloudFormation templates are developed and maintained properly.

You use the AWS CLI to execute the CloudFormation operations, e.g. to create or delete a stack. What we’ve done at my current client is to put the AWS CLI calls into bash scripts, which are version controlled alongside the templates. This saves us from having to remember all the options and arguments necessary to perform the CloudFormation operations, and we can use the same scripts both on developer workstations and in the delivery pipeline.
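A sketch of what such a wrapper script could look like. The stack name, template file name and the DRY_RUN flag are assumptions for this example, not the actual client setup:

```shell
#!/bin/sh
# Hypothetical wrapper around the AWS CLI for creating a CloudFormation stack.
# Version controlled alongside the templates, so nobody has to remember the
# CLI options by heart.
set -eu

create_stack() {
    stack_name="$1"
    template_file="$2"
    cmd="aws cloudformation create-stack --stack-name $stack_name --template-body file://$template_file --capabilities CAPABILITY_IAM"
    if [ "${DRY_RUN:-true}" = "true" ]; then
        # Dry run (the default here): print the invocation instead of calling AWS.
        echo "$cmd"
    else
        eval "$cmd"
    fi
}

# With DRY_RUN unset this just prints the command that would be run:
create_stack example-stack example-stack.json
```

The same script can then be called from a developer workstation or from a pipeline job, with DRY_RUN=false once you actually want to provision the stack.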

A drawback of CloudFormation is that not all features of AWS are available through that interface, which might force you to create workarounds depending on your needs. For instance, CloudFormation does not currently support the creation of private DNS hosted zones within a Virtual Private Cloud. We solved this by using the AWS CLI to create the private DNS hosted zone in the bash script responsible for setting up our DNS configuration, prior to performing the actual CloudFormation operation which makes use of that zone.
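As an illustration of that kind of pre-step, the hosted zone could be created with a plain AWS CLI call along these lines. The zone name, VPC id and region are made up, and the command is echoed rather than executed so the sketch can be read without AWS credentials:

```shell
#!/bin/sh
# Hypothetical pre-step: create the private DNS hosted zone with the plain
# AWS CLI before running the CloudFormation operation that depends on it.
set -eu

ZONE_NAME="internal.example.com"   # made-up zone name
VPC_ID="vpc-12345678"              # made-up VPC id
VPC_REGION="eu-west-1"

create_private_zone() {
    # The caller reference must be unique per request; a timestamp will do.
    # Drop the leading 'echo' to actually execute the call.
    echo aws route53 create-hosted-zone \
        --name "$ZONE_NAME" \
        --vpc "VPCRegion=$VPC_REGION,VPCId=$VPC_ID" \
        --caller-reference "$(date +%s)"
}

create_private_zone
```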

Since CloudFormation is a superb way of setting up resources in AWS, in contrast to managing those resources manually, e.g. through the web UI, you can actually enforce restrictions on your account so that resources can only be created through CloudFormation. This is something we currently use at my client for our production environment setup, to ensure that the proper ways of working are followed.
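One way such a restriction can be sketched is an IAM policy that only allows resource-mutating actions when the request is made via CloudFormation. Note that the aws:CalledVia condition key and the action list below are my assumptions for the example, not necessarily how it was set up at the client; check the IAM documentation for your account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEc2OnlyViaCloudFormation",
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:TerminateInstances"],
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:CalledVia": ["cloudformation.amazonaws.com"]
        }
      }
    }
  ]
}
```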

I’ve created a basic CloudFormation template example, which can be found here.

Tommy Tynjä

Is your delivery pipeline an array or a linked list?

The fundamental data structure of a delivery pipeline and its implications

A delivery pipeline is a system. A system is something that consists of parts that create a complex whole, where the essence lies largely in the interaction between the parts. In a delivery pipeline we can see the activities in it (build, test, deploy, etc.) as the parts, and their input/output as the interactions. There are two fundamental ways to define interactions in order to organize a set of parts into a whole, a system:

  1. Top-level orchestration, aka array
  2. Parts interact directly with other parts, aka linked list

You could also consider sub-levels of organization. This would form a tree. The sub-level of interaction could be defined in the same way as its parents or not.

My question is: Is one approach better than the other for creating delivery pipelines?

I think the number one requirement on a pipeline is maintainability. So better here would mean mainly more maintainable, that is: easier and quicker to create, to reason about, to reuse, to modify, extend and evolve even for a large number of complex pipelines. Let’s review the approaches in the context of delivery pipelines:

1. Top-level orchestration

This means having one config (file) that defines the whole pipeline. It is like an array.

An example config could look like this:

  scm: github org=Diabol repo=delivery-pipeline-plugin.git
  - name: commit
      - build
      - unit_test
  - name: test
      env: test
      - deploy: continue_on_fail=true
      - smoke_test
      - system_test
  - name: prod
      env: prod
      - deploy
      - smoke_test

The tasks, like build, are defined (in isolation) elsewhere. Travis, Bamboo and Go do it this way.

2. Parts interact directly

This means that as part of the task definition, you have not only the main task itself, but also what should happen (e.g. trigger other jobs) when the task succeeds or fails. It is like a linked list.

An example task config:

name: build
  - scm: github org=Diabol repo=delivery-pipeline-plugin.git
  - mvn: install
  - email: committer
    when: on_fail
  - trigger: deploy_test
    when: on_success

The default way of creating pipelines in Jenkins seems to be this approach: using upstream/downstream relationships between jobs.


There is also a supplementary approach to creating order: tagging parts, aka Inversion of Control. In this case, the system materializes bottom-up. You could say that the system behavior is an emergent property. An example config where the tasks are tagged with a stage:

- name: build
  stage: commit
    - mvn: install

- name: integration_test
  stage: commit
    - mvn: verify -PIT

Unless complemented with something, there is no way to order things in this approach. But it’s useful for adding another layer of organization, e.g. for an alternative view.

Comparisons to other systems

Maybe we can enlighten our question by comparing with how we organize other complex systems around us.

Example A: (Free-market) Economic Systems, aka getting a shirt

1. Top-level organization

Go to the farmer, buy some cotton, hand it to the weaver, get the fabric from there, and hand that to the tailor together with your size measurements.

2. Parts interact directly

There are some variants.

  1. The farmer sells the cotton to the weaver, who sells the fabric to the tailor, who sews a lot of shirts and sells one that fits.
  2. Buy the shirt from the tailor, who bought the fabric from the weaver, who bought the cotton from the farmer.
  3. The farmer sells the cotton to a merchant who sells it to the weaver. The weaver sells the fabric to a merchant who sells it to the tailor. The tailor sells the shirts to a store. The store sells the shirts.

The variations are basically about different flows of information, pull or push, and whether or not there are middlemen.


Economic systems tend to be organized the second way. There is an efficient system coordination mechanism through demand and supply, with price as the arbiter; ultimately, the system is driven by the self-interest of the actors. It’s questionable whether this is a good metaphor for a delivery pipeline. You can consider deploying the artifact as the interest of a deploy job, but what is the arbitrating (price) mechanism? And unless we have a common shared value measurement, such as money, how can we optimize globally?

Example B: Assembly line, aka build a car

Software processes have historically suffered a lot from broken metaphors to factories and construction, but let’s do it anyway.

1. Top-level organization

The chief engineer designs the assembly line using the blueprints. Each worker knows how to do his task, but does not know what’s happening before or after.

2. Parts interact directly

Well, strictly speaking this is more of an old-style workshop than an assembly line. The lathe worker gets some raw material, machines the cylinders and brings them to the engine assembler, who assembles the engine and hands it over to …, etc.


It seems the assembly line approach has won, but not in its Tayloristic form. I might do the wealth of experience and research on this subject injustice by oversimplifying here, but to me it seems that two frameworks for achieving desired quality and cost with an assembly line have emerged:

  1. The Toyota way: The key to the quality and cost goals is that everybody cares and that everybody counts. Everybody is concerned about global quality and looks out for improvements, and everybody has the right to ‘stop the line’ if there is a concern. The management layer underpins this by focusing on long-term goals such as the global quality vision and the learning organization.
  2. Teams: A multi-functional team follows the product from start to finish. This requires a wider range of skills in each worker, so it entails higher labour costs. The benefit is strong ownership, which leads to higher quality and continuous improvement.

The approaches are not mutually exclusive and in software development we can actually see both combined in various agile techniques:

  • Continuous improvement is part of Scrum and Lean for Software methodologies.
  • It is every team member’s responsibility if a commit fails in a pipeline step.


For parts interacting directly, it seems that unless we have an automatic arbitration mechanism, we will need a ‘planned economy’, and that failed, right? And top-level organization needs to be complemented with grass-roots involvement, or quality will suffer.


My take is that top-level organization is superior, because you need to stress the holistic view. But it needs to be complemented with the possibility for steps to be improved without always having to consider the whole. This is achieved by having the team that uses the pipeline own it, and by management supporting them with modern lean and agile management ideas.

Final note

It should be noted that many desirable general features of a system framework that can ease maintenance if rightly used, such as inheritance, aggregation, templating and cloning, are orthogonal to the organizational principle discussed here. These features can actually be more important for maintainability. But my experience is that the organizational principle puts a cap on the level of complexity you can manage.

Marcus Philip

Continuous Delivery in the cloud

During the last year or so, I have been implementing Continuous Delivery and delivery pipelines for several different companies, from 100+ employees down to fewer than 5, and in several different businesses. It is striking to see that the problems, once you dig them out, are so similar in all companies. Even more striking, maybe, is that the solutions to those problems are so similar. Implementing a low cycle time, high throughput delivery pipeline in an IT department is very hard, but the challenges are very similar at most of the customers I have worked with. Implementing the somewhat immature tools and automation solutions that need to be connected and in sync (Go, Jenkins, Nolio, Rundeck, Puppet, Chef, Maven, Grails, Redmine, Liquibase, Selenium, Twist, etc.) is something that needs to be done over and over again.

This observation has given us the idea that continuous delivery is perfectly suited for a cloud offering. Therefore my colleagues and I have started an ambitious initiative to implement a complete “from-requirement-to-production-to-validation” delivery pipeline as a cloud service. We will do it using our collective experience from the different solutions we have built for our various customers.

Why is it that every IT department needs to be an expert not only in its business, but also in the craft of systems development? This is really strange, in particular since the challenges seem to be so similar in every single IT department. Systems development should be all about implementing the business requirements as efficiently and correctly as possible, not so much about discussing how testing should be increased to maintain quality, or implementing solutions for deploying war files to JBoss. That kind of “development” is done in each and every company these days. Talk about breaking the DRY rule.

The solution to the problem above has traditionally been to try to purchase “off-the-shelf” systems that, at least on paper, fulfill the business needs. The larger the “off-the-shelf” system, the more business needs are fulfilled. But countless are the businesses that have come to realize that fulfilling on paper is not the same as fulfilling in production, with customers, making money. And worse, once the “off-the-shelf” system has been incorporated into every single corner of the company, it becomes very hard to adapt to changing markets and growing business requirements. And markets change rapidly these days – if in doubt, ask Nokia.

A continuous delivery pipeline solution might seem like a trivial thing when talking about large markets like the one Nokia is acting in. But there are many indicators that huge improvements can be achieved by relatively small means that are related to IT. Let me mention two strong indicators:

  1. Most of the really large new companies in the world have leaders with a heavy legacy in hardcore systems development/programming (Oracle, Google, Apple, Flickr, Facebook, Spotify and Microsoft, to mention a few).
  2. A relatively new concept, “Lean Startup” by Eric Ries, takes an extremely IT-close approach to business development as a whole. This concept has had a huge impact on Fortune 500 companies around the world, and Eric Ries has busy days coaching and talking to companies that are struggling to be innovative when the market they are in is moving faster than they can handle. As an example, one of the large European banks recently put their new internet bank system “on ice”, simply because they realized that it was going to be out of date before it was launched.

All in all, to cope with the extreme pace, systems development is becoming more and more industry-like. And traditional industries have spent more than 80 years optimizing production lines. No one would ever get the stupid idea to try to build a factory to develop a toy or a new component for a car without seeking professional guidance and assistance. I mean, what kind of robot from Motoman Robotics would you choose? That is simply not core business value!

More on our continuous delivery pipeline in the cloud solution will follow on this channel!

Retrospective from a JFokus presentation

I was thrilled to get the chance to present my experience with DevOps and Continuous Delivery at JFokus 2012! This is a short story describing the experience.

Back in October 2011, when my proposal was approved, I immediately started to read the great Presentation Zen book by Garr Reynolds. This book is great! It makes presenting seem really simple and the recommendations are crystal clear. However, it turns out that the normal guidance a bullet point presentation gives is considered a disaster in the book. This leaves the presentation clean and lean, but I soon realized it also leaves the presenter alone on the stage. In front of 150 or so people, you need to remember everything you have planned to say on each slide. I thought I was going to pull it off, but a rehearsal one week before the event made me think twice. It was a disaster! But being somewhat stubborn, I decided to stick to the plan, and luckily it went 100% better at the actual event.

DevOps and Continuous Delivery really is something that I’m passionate about, and I hope that helped in my attempt to deliver something meaningful to those who showed up. I got the feeling that people were listening, and hopefully they left the presentation with some thoughts that can help their companies be more productive and successful.

One question I got afterwards was: “Is anyone actually doing this?” Well, I’d like to pass the question forward: send me a tweet @danielfroding if you are doing this. I know of at least four places, two of which I have been highly involved in, that have implemented an automated deployment pipeline and are developing their systems according to Continuous Delivery principles.

My slides are here: http://www.jfokus.se/jfokus12/preso/jf12_RetrospectiveFromTheYearOfDevOps.pdf

JFokus is really a great conference and I’m proud that we have such an event in Stockholm! A huge thanks to the organizers for setting this up!

See you next year!