
Top class Continuous Delivery in AWS

Last week Diabol arranged a workshop in Stockholm where we invited Amazon together with Klarna and Volvo Group Telematics, who both practise advanced Continuous Delivery in AWS. These companies are in many ways pioneers in this area, as there is little in the way of established practice. We want to encourage and facilitate cross-company knowledge sharing and take Continuous Delivery to the next level. The participants have very different businesses, processes and architectures but still struggle with similar challenges when building delivery pipelines for AWS. Below follows a short summary of some of the topics covered in the workshop.

Centralization and standardization vs. fully autonomous teams

One of the most interesting discussions among the participants wasn’t even technical but covered the differences in how they are organized and how that affects their work with Continuous Delivery and AWS. Some come from a traditional functional organization and have placed their delivery teams somewhere in between the development teams and the operations team. The advantage is that they have been able to standardize the delivery platform to a large extent and have a very high level of reuse. They have built custom tools and standardized services that all teams are more or less forced to use. This approach depends on being able to keep at least one step ahead of the dev teams and being able to scale out to many dev teams without increasing headcount. One problem with this approach is that it is hard to build deep AWS knowledge out in the dev teams, since they feel detached from the technical implementation. Others have a very explicit strategy of team autonomy, where each team basically is in charge of its complete process all the way to production. In this case each team must have quite deep competence both in AWS and in the delivery pipelines and how they are set up. The production awareness is extremely high and you can, for example, visualize each team’s cost of AWS resources. One problem with this approach is a lower level of reusability and difficulties in sharing knowledge and implementations between teams.

Both of these approaches have pros and cons, but in the end I think fewer silos and more team empowerment wins. If you can manage that and still build a common delivery infrastructure that scales, you are in a very good position.

Infrastructure as code

Another topic that was thoroughly covered was different ways to deploy both applications and infrastructure to AWS. CloudFormation is popular and very powerful but has its shortcomings in some scenarios. One participant felt that CF is too verbose and noisy and has built their own YAML configuration language on top of it. They have been able to do this since they have a strong standardization of their micro-service architecture and the deployment structure that follows. Other participants had the same problem with CF being too noisy and have broken out a large portion of the configuration from the stack templates into Ansible, leaving just the core infrastructure resources in CF. This also allows them to apply different deployment patterns and more advanced orchestration. We also briefly discussed third-party tools, e.g. Terraform, but the general opinion was that they all have a hard time keeping up with features in AWS. On the other hand, if you have infrastructure outside AWS that needs to be managed in conjunction with what you have in AWS, Terraform might be a compelling option. Both participants expressed that they would like to see some kind of execution plan / dry-run feature in CF, much like the one Terraform has.

Docker on AWS

Use of Docker is growing quickly right now and it was, not surprisingly, a hot topic at the workshop. One participant described how they deploy their micro-services in Docker containers, with the obvious advantage of being portable and lightweight (compared to baking AMIs). This is however done with stand-alone EC2 instances using a shared common base AMI and not on ECS, an approach that adds redundant infrastructure layers to the stack. They have just started exploring ECS and it looks promising, but questions around how to manage centralized logging, monitoring, disk encryption etc. remain open. Docker is a very compelling deployment alternative, but both Docker itself and the surrounding infrastructure need to mature a bit more; e.g. docker push takes an unreasonably long time and easily becomes a bottleneck in your delivery pipelines. Another pain is the need for a private Docker registry, which at this level of Continuous Delivery needs to be highly available and secure.

What’s missing?

The discussions also identified some feature requests for Amazon to bring home. For example, we discussed security quite a lot and got into the technicalities of IAM roles, accounts, security groups etc. It was expressed that there might be a need for explicit compliance checks and controls as a complement to more crude approaches such as penetration testing. You can certainly do this by extracting information from the APIs and processing it according to your specific compliance rules, but it would be nice if there was a higher level of support for this from AWS.

We also discussed canary releasing and A/B testing. Since this is becoming more of a common practice, it would be nice if Amazon could provide more services to support it, e.g. content-based routing and more sophisticated analytics tools.

Next step

All in all I think the workshop was very successful, and the discussions and experience sharing were valuable to all participants. Diabol will continue to push Continuous Delivery maturity in the industry by arranging meetups and workshops and involving more companies that can contribute to and benefit from this collaboration.


Conference retrospectives

The first week of June was a busy one for us, with representation at a couple of conferences taking place in Stockholm: the inaugural CoDe Continuous Delivery & DevOps conference on June 2nd and Agila Sverige (Agile Sweden) on June 3rd and 4th. Here’s a small retrospective on both of those events.

The CoDe conference was quite small (a bit less than 200 people) but had a very good vibe to it. We were silver sponsors of the event and had our Jenkins Delivery Pipeline Plugin on display in our booth, which gained a lot of interest from attendees. It led to many interesting discussions on CI/CD servers, automation, capabilities and visualization. Our feeling was that the attendees had a lot of genuine interest in, knowledge of and awareness about Continuous Delivery, which fueled these interesting discussions.

Our Marcus Philip presented “Continuous Delivery for Databases” at the CoDe conference, where he talked about how to transition a legacy database application with a lot of PL/SQL code into a streamlined process where all changes are version controlled, traceable and automatically deployed. The talk covered the whole spectrum of the problem: why they weren’t doing Continuous Delivery, why they should do it, thoughts on different solutions to the problem, how they did the actual implementation and how it panned out. Many attendees enjoyed the presentation and Marcus also got the opportunity to present the talk again at the Oracle Stockholm Meetup on June 16th. Cool!

Slides from the talk can be found here: https://speakerdeck.com/marcusphi/continuous-delivery-for-databases.

Agila Sverige is a conference which strongly encourages discussions among the attendees. All talks at the conference are lightning talks and there is plenty of time allocated for open space discussions and breaks, allowing you to network and share experiences with others. The conference is in Swedish but is probably one of the best when it comes to agile software development practices. Two or three years ago many talks and discussions at this conference focused on “why to practice agile development”. This year it was apparent that the industry has matured on all levels, since the majority of talks and open space sessions instead focused on “how to practice agile development”.

Tommy Tynjä presented a visionary talk on how tomorrow’s software can be structured and delivered in his presentation “Next Generation Continuous Delivery”. The talk gained a lot of interest, the room was packed for the presentation and Tommy had some interesting discussions on the topic afterwards.

Slides from his talk can be found here: http://www.slideshare.net/tommysdk/next-generation-continuous-delivery
There is also a video recording available at: https://agilasverige.solidtango.com/video/next-generation-continuous-delivery.

We give our thumbs up for both of these conferences and we hope to see you at these events next year as well! We always look forward to meeting people to discuss thoughts, experiences, problems and visions regarding Continuous Delivery, DevOps and agile software development methodologies. But if you don’t want to wait until next year, don’t hesitate to contact us!

Next week is conference week!

Next week will be a conference-heavy one in Stockholm! The inaugural CoDe Continuous Delivery & DevOps conference takes place on June 2nd, while Agila Sverige (Agile Sweden) occupies June 3rd and 4th. Both conferences target agile software development practices and Diabol is very well represented at these events. We will have one speaker at each conference presenting talks related to Continuous Delivery.

Marcus Philip will present “Continuous Delivery for databases” at the CoDe conference, while Tommy Tynjä will give his talk “Next Generation Continuous Delivery” at Agila Sverige, where he takes a look at how the software of tomorrow can be built and delivered.

So if you’re interested in learning more about Continuous Delivery and what we do, don’t hesitate to come by!

See you at a conference very soon!

AWS CloudFormation introduction

AWS CloudFormation is a service for modelling and setting up your Amazon Web Services resources in an automatic and unified way. You create templates that describe your AWS resources, e.g. EC2 instances, EBS volumes, load balancers etc., and CloudFormation takes care of all provisioning and configuration of those resources. When CloudFormation provisions resources, they are grouped into something called a stack. A stack is typically represented by one template.

The templates are specified in JSON, which allows you to easily version control the templates describing your AWS infrastructure alongside your application code.

CloudFormation fits perfectly into Continuous Delivery ways of working, where the whole AWS infrastructure can be automatically set up and maintained by a delivery pipeline. There is no need for a human to provision instances, set up security groups, create DNS record sets etc., as long as the CloudFormation templates are developed and maintained properly.

You use the AWS CLI to execute the CloudFormation operations, e.g. to create or delete a stack. What we’ve done at my current client is to put the AWS CLI calls into bash scripts, which are version controlled alongside the templates. This means we don’t have to remember all the options and arguments necessary to perform the CloudFormation operations, and we can use the same scripts both on developer workstations and in the delivery pipeline.
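
The post uses the AWS CLI wrapped in bash scripts. Purely as an illustration of what such a create-stack operation could look like programmatically, here is a minimal sketch using the AWS SDK for Java; the stack name, template path and parameter are made-up examples and not taken from the actual setup:

import com.amazonaws.services.cloudformation.AmazonCloudFormationClient;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.Parameter;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CreateStack {

    public static void main(String[] args) throws Exception {
        // Read the version controlled CloudFormation template from disk (hypothetical path)
        String templateBody = new String(
                Files.readAllBytes(Paths.get("templates/web-stack.json")), StandardCharsets.UTF_8);

        // Credentials are resolved through the default provider chain; region setup omitted for brevity
        AmazonCloudFormationClient cloudFormation = new AmazonCloudFormationClient();

        // Create the stack, passing parameters much like the --parameters flag of the AWS CLI
        CreateStackRequest request = new CreateStackRequest()
                .withStackName("my-web-stack")
                .withTemplateBody(templateBody)
                .withParameters(new Parameter()
                        .withParameterKey("InstanceType")
                        .withParameterValue("t2.micro"));

        String stackId = cloudFormation.createStack(request).getStackId();
        System.out.println("Created stack: " + stackId);
    }
}

Whether you go through the CLI or the SDK, the important part is that the calls are kept in version control next to the templates, so the exact same operation can be run from a developer workstation and from the delivery pipeline.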

A drawback of CloudFormation is that not all features of AWS are available through that interface, which might force you to create workarounds depending on your needs. For instance, CloudFormation does not currently support the creation of private DNS hosted zones within a Virtual Private Cloud. We solved this by using the AWS CLI to create the private DNS hosted zone in the bash script responsible for setting up our DNS configuration, prior to performing the actual CloudFormation operation which makes use of that hosted zone.

As CloudFormation is a superb way of setting up resources in AWS, in contrast to managing those resources manually, e.g. through the web UI, you can actually enforce restrictions on your account so that resources can only be created through CloudFormation. This is something we currently use at my current client for our production environment setup, to ensure that the proper ways of working are followed.

I’ve created a basic CloudFormation template example which can be found here.

Tommy Tynjä
@tommysdk

Past and upcoming events

We have been unusually busy at Diabol during the past few months, speaking at various software conferences on a variety of Continuous Delivery related topics. We enjoy sharing our experiences and meeting new and familiar faces to discuss topics that we’re passionate about, such as Continuous Delivery, DevOps and automation.

We have also arranged a Continuous Delivery seminar of our own, which attracted 20 top IT-management professionals from various well-known Swedish enterprises. The seminar was a great success, with interesting presentations and good discussions among the attendees.

The next event where we will be presenting is the first edition of the Continuous Delivery Conference in Bussum, Netherlands on December 4th. Andreas Rehn will present “From dinosaur to unicorn in 12 months: how to push continuous delivery maturity to the next level”.

Past events where we have been presenting lately, together with video recording or presentation material:

If you plan to attend a conference where we’re speaking or just attending, come by and say hi! We look forward to talking to you!

A comment on ‘Continuous Delivery Pipelines: GoCD vs Jenkins’

In Continuous Delivery Pipelines: GoCD vs Jenkins, there are some good points on modeling continuous delivery, but to make his point, that Go is better (at CD) than Jenkins, the author chooses to represent Jenkins with the Build Pipeline Plugin rather than the Delivery Pipeline Plugin by Diabol, which has superior visualization. I haven’t used Go and I would like to find the time to have a deeper look at it. As far as I gather, it’s a competent tool, quite possibly better at CD than Jenkins, but I have to say that it is quite possible to do CD and pipelines in Jenkins; I am doing it as I write.

His post brings up some interesting points, like: do we need pipelines as first-class citizens rather than just ‘visual doodles’ (in my tools)? I am a bit skeptical. Pipelines are indeed central in CD, but my thoughts on what’s lacking in Jenkins go in a slightly different direction. I think we need a set of tools to do CD and pipelines.

Pipelines should probably be expressed outside of the (build/CI/deploy) tool because they will span several domains and abstraction levels. There will be a need for different visualizations of the pipeline state and configuration. In fact, what matters at run-time (as opposed to design-time) is not the pipeline itself, but questions like:

  • Where is my commit?
    • Example: It’s passed commit phase and has been deployed to the first environment where automated tests are running
  • What is the state of this environment?
    • Example: It has version 1.4.123 of app X stemming from commit Y and version 1.3.567 of the VM template stemming from commit Z.

The pipeline will also be subject to continuous improvement. Then, at design-time (and debug-time), there are questions like:

  • What are the up-stream dependencies of this job?
  • Where is this artifact produced?

Your tools should help you answer questions like that, simply and quickly, by good visualization and design.

Diabol proudly presents Continuous Delivery seminar

Diabol is proud to arrange a seminar completely dedicated to Continuous Delivery, kicking off in less than a week, on September 30th in Stockholm. This is an exclusive, invite-only event where top IT-management attendees will learn how Continuous Delivery can help their organizations become more efficient in developing and delivering software. Our hand-picked speakers will present how Continuous Delivery and delivery process automation have changed their respective organizations into lean business machines. Instead of dealing with the painful, repetitive manual tasks commonly associated with a traditional release and deploy process, their employees can now focus on innovation and creating business value.

Event speakers:

  • Stefan Berg, former CIO at Com Hem, will present: “From average to top performer in less than a year!”
  • Tomas Riha, Agile Architect at Volvo Group Telematics, will present: “From hobby project to Continuous Delivery as a Service for the entire organization”

Make sure you keep visiting this channel for more news on Continuous Delivery!

Diabol now a CloudBees partner

Diabol is now a Jenkins gold service partner to CloudBees: see the Diabol AB partner page at Cloudbees.com.

CloudBees, the ‘Jenkins Enterprise company’, is a continuous delivery (CD) leader. They provide solutions that enable IT organizations to respond rapidly to the software delivery needs of the business. Their offerings are powered by Jenkins CI, the world’s most popular open source continuous integration (CI) server. The CloudBees CD Platform provides a range of solutions for use on-premise and in the cloud that meet the security, scalability and manageability needs of enterprises. Their solutions support many of the world’s largest and most business-critical deployments.

Diabol is proud to collaborate with CloudBees.

Feature switches in practice

Feature switches (or feature flags, toggles etc.) are a programming technique which has gained a lot of attention through the concepts of Trunk Based Development and Continuous Delivery. Feature switches allow you to shield not-yet-production-ready code while still committing it to mainline in version control. This allows you to work on development tasks on mainline and continuously integrate your code while avoiding the burdens of branching. Another useful benefit is that you can decide which functionality to run in production by switching functionality on/off. The best thing is that this technique is very easy to implement; you basically just need to start doing it! In this blog post I’ll show you how easy it is to do this in Java.

In my current project we are integrating with a third-party service which our system depends heavily on. While our system will continue to work if that third-party service becomes unavailable, an outage still means a loss in revenue for the business. Therefore we want to be able to monitor this integration point closely and provide mechanisms to troubleshoot it efficiently. As the communication between these systems is web service based, through SOAP, we found it very useful to be able to log the entire payloads sent and received between the two systems. This feature is an ideal candidate for feature switching.

I implemented a feature which allows us to decide at runtime whether we should log every SOAP message sent and received to the file system. This would also happen asynchronously, so as not to affect application throughput too much. The feature would be switched off in production by default, but we would be able to turn it on if we needed to troubleshoot integration failures.

The most basic feature switch to implement would just be a simple if-statement:

boolean xmlLogFeatureIsEnabled = false;
if (xmlLogFeatureIsEnabled) {
	logToFile(xml);
}

But instead of hardcoding the feature switch state, we want it to be dynamically evaluated so we can change the behavior of a running system without restarts or too much manual labor. To do this we use a small framework called Togglz, which allows you to very easily create feature switches which you can then manage at runtime.

First, we create a feature definition enumeration which implements org.togglz.core.Feature:

public enum FeatureDefinition implements Feature {

    @Label("Log XML to file")
    LOG_XML_TO_FILE;

    public boolean isActive() {
        return FeatureContext.getFeatureManager().isActive(this);
    }
}

Then, we implement org.togglz.core.manager.TogglzConfig which will keep track of the feature states:

@ApplicationScoped
public class FeatureConfiguration implements TogglzConfig {

    @Resource
    private DataSource datasource;

    public Class<? extends Feature> getFeatureClass() {
        return FeatureDefinition.class;
    }

    public StateRepository getStateRepository() {
        return new CachingStateRepository(new JDBCStateRepository(datasource), 10, TimeUnit.MINUTES);
    }

    public UserProvider getUserProvider() {
        return new NoOpUserProvider();
    }
}

We use dependency injection in our project, which allows us to easily inject a datasource into our feature configuration for Togglz to store the feature states in. We then apply a 10 minute cache for the feature state reloads so that Togglz won’t have to look up the state in the database each time a feature state is evaluated. Please note that you might want to implement the configuration a bit more robustly than in the example above. When we want to switch a feature on or off, it is merely a matter of updating a database column value.

At last, we just change the if-statement encapsulating the feature method call to:

if (FeatureDefinition.LOG_XML_TO_FILE.isActive()) {
    logToFile(xml);
}

And that’s it! This is all we need to do to be able to dynamically switch features on/off in a running Java system. This technique is very useful when exercising Continuous Delivery ways of working, where each commit is a potential production release. As you can see, feature switches allow you to commit your changes to version control without necessarily exposing them to your end users.
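
To tie the pieces together, here is a minimal sketch of how the asynchronous file logging described earlier could be wired up behind the feature switch. The executor setup, the log directory and the file naming are simplified assumptions for the example, not the actual implementation from the project:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class XmlMessageLogger {

    // A single background thread so that SOAP processing is never blocked by disk I/O
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Hypothetical log directory
    private final Path logDirectory = Paths.get("/var/log/app/soap");

    public void logIfEnabled(final String xml) {
        // FeatureDefinition is the enum defined above; the state is evaluated at runtime
        if (FeatureDefinition.LOG_XML_TO_FILE.isActive()) {
            executor.submit(new Runnable() {
                public void run() {
                    logToFile(xml);
                }
            });
        }
    }

    private void logToFile(String xml) {
        try {
            Path file = logDirectory.resolve("soap-" + System.nanoTime() + ".xml");
            Files.write(file, xml.getBytes(StandardCharsets.UTF_8), StandardOpenOption.CREATE_NEW);
        } catch (Exception e) {
            // The logging feature must never break the actual integration
            e.printStackTrace();
        }
    }
}

Since the feature state is checked per message and backed by the cached JDBC state repository, flipping the database value takes effect within the 10 minute cache window, without any restart or redeploy.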

To see this in action, feel free to check out my Togglz example project which uses a simple servlet to demonstrate the behavior.


Tommy Tynjä
@tommysdk

Slimmed down immutable infrastructure

Last weekend we had a hackathon at Diabol. The topics all related in some way to DevOps and Continuous Delivery. My group of four focused on slim microservices with immutable infrastructure. Since we believe in automated delivery pipelines for software development and infrastructure setup, the next natural step would be to merge these two together. Ideally, one would produce a machine image that contains everything needed to run the current application. The servers would be immutable, since we don’t want anyone making manual changes to a running environment. Rather, changes should be checked in to version control, and a new server would be created based on the automated build pipeline for the infrastructure.

The problem with traditional machine images running on e.g. VMware or Amazon is that they tend to be very large; a couple of gigabytes is not an unusual size. Images of that size become cumbersome to work with as they take a long time to create and ship over a network. Therefore it is desirable to keep server images as small as possible, especially since you might create and tear down servers ad hoc, e.g. for test purposes, in your delivery pipeline. Linux is a very common server operating system, but many Linux distributions ship with features that we are very unlikely to ever use on a server, such as C compilers or utility programs. And since we adopt immutable servers, we don’t even need things like editors, man pages or even SSH!

Docker is an interesting solution for slimmed down infrastructure and full stack machine images, which we evaluated during the hackathon. After getting our hands dirty for a couple of hours, we were quite pleased with its capabilities. We’ll definitely keep it on our radar and continue our evaluation of it.

Since we’re mostly operating in the Java space, I also spent some time looking at how we could save some size on our machine images by slimming down the JVM. Since a delivery pipeline will be triggered several times a day to deploy, test etc., every megabyte saved will increase pipeline throughput. But why should you slim down the JVM? Well, the JVM also contains features (or libraries) that are highly unlikely to ever be used on a server, such as audio, the AWT and Swing UI frameworks, JavaFX, fonts, cursor images etc. The standard installation of the Java 8 JRE is around 150 MB. It didn’t take long to shave off a third of that size by removing libraries such as the aforementioned ones. Unfortunately the core library of Java, rt.jar, is 66 MB in size, which puts a lower bound on the minimal possible size of a working JVM (unless you start removing the class files inside it too). Without too much work, I was able to safely remove a third of the size of the standard JRE installation, landing at a bit under 100 MB and still being able to run our application. Although this practice might not be suitable for production use, for technical or even legal reasons, it’s still interesting to see how much we typically install on our servers even though it will never be used. The much anticipated Project Jigsaw, which will introduce modularity to Java SE, has been postponed several times. Hopefully it can be incorporated into Java 9, enabling us to decide which modules we actually want to use for our particular use case.

Our conclusion for the time spent on this topic during the hackathon is that Docker is an interesting alternative to traditional machine image solutions, which not only allows, but also encourages slim servers and immutable infrastructure.

Tommy Tynjä
@tommysdk