A kind of Scrum

When I talk to companies in the software industry about how they work, I often hear statements like "we use a sort of Scrum" or "we have our own version of Scrum". Statements like that set off a warning bell for me. Agile and Scrum are buzzwords that most companies like to boast that they use, especially since many software developers prefer to work that way. But how much can you tweak Scrum and still get the benefits out of it that we proponents promise?

It is true that "agile" means easy to change, and that agile development is largely about adapting your approach based on the feedback you receive. In all agile methods, the aim is to have frequent and short feedback loops. But it is a common misunderstanding that agile methods are so lightweight that you can easily change them as you like. It is not the frameworks that are agile; they make you agile if applied properly.

It is easy to introduce scrum meetings each morning. It is easy to hold planning and retrospective meetings. It is easy to put up a Scrum board for all to see. But it is hard to get all the pieces to work together as a whole, and to get the entire organization to be permeated by the agile values:

  • Deliver often
  • Respect people
  • Respond quickly to change

Scrum is often implemented in isolation in a development department, and the change is often driven by the developers on the floor. This is perhaps not surprising, since the movement was built by developers and it entails an inherent shift of power from traditional managers down to the developers.

Such initiatives from below often encounter obstacles and resistance when trying to fit the agile way of working into an organization that is not prepared for it. That is when you easily begin to stretch the agile values and create "our own variant of Scrum". What happens is that the transparency of Scrum exposes dysfunctions in the organization, but instead of resolving the root causes, you change the process and make workarounds, and thus continue to hide the root causes. This pattern is so common that it has a name, ScrumBut, and the effect is often that the team does not deliver a "potentially shippable product" each sprint.

For less mature organizations, the best advice is to adhere strictly to a methodology such as Scrum, to have something to hold on to, and the result can still be relatively good. Scrum as it is described in The Scrum Guide is a very mature approach that has evolved and adapted over two decades. More mature organizations know what effects changes to the process will cause, and therefore have more freedom to stretch the methods to their own advantage.

Change can be costly, not least in the form of an altered balance of power, but there is a lot to gain by getting everybody involved and pulling together. A good agile organization is like a Formula 1 car: it drives fast and responds to the slightest input from the driver, but it also has a well-drilled team in the pits ready to fix anything that might happen during a race.

If you intend to introduce Scrum in your organization, and unless you are really mature, you would do well to stick to Scrum by the book and pay attention to all the temptations to deviate. Take them as a signal that some things in the organization are not working optimally, and try to fix the root causes. We all start out as beginners, and imitation can be an effective way to begin a change.

Conference retrospectives

The first week of June was a busy one for us, with representation at a couple of conferences taking place in Stockholm: the inaugural CoDe Continuous Delivery & DevOps conference on June 2nd and Agila Sverige (Agile Sweden) on June 3rd and 4th. Here's a small retrospective on both of those events.

The CoDe conference was quite small (a bit less than 200 people) but had a very good vibe to it. We were silver sponsors of the event and had our Jenkins Delivery Pipeline Plugin on display in our booth, which gained a lot of interest from attendees. It led to many interesting discussions on CI/CD servers, automation, capabilities and visualization. Our feeling was that the attendees had a lot of genuine interest, knowledge and awareness of Continuous Delivery, which fueled these interesting discussions.

Our Marcus Philip presented "Continuous Delivery for Databases" at the CoDe conference, where he talked about how to transition a legacy database application with a lot of PL/SQL code into a streamlined process where all changes are version controlled, traceable and automatically deployed. The talk covered the whole spectrum of the problem: why they weren't doing Continuous Delivery, why they should do it, thoughts on different solutions, how they did the actual implementation and how it panned out. Many attendees enjoyed the presentation, and Marcus also got the opportunity to give the talk again at the Oracle Stockholm Meetup on June 16th. Cool!

Slides from the talk can be found here: https://speakerdeck.com/marcusphi/continuous-delivery-for-databases.

Agila Sverige is a conference that strongly encourages discussions among the attendees. All talks are lightning talks, and there is plenty of time allocated for open space discussions and breaks, allowing you to network and share experiences with others. The conference is in Swedish but is probably one of the best when it comes to agile software development practices. Two or three years ago, many talks and discussions at this conference focused on why to adopt agile development practices. This year it was apparent that the industry has matured on all levels, since the majority of talks and open space sessions instead focused on how to practice agile development.

Tommy Tynjä presented a visionary talk on how tomorrow's software can be structured and delivered in his presentation "Next Generation Continuous Delivery". The talk gained a lot of interest, the room was packed, and Tommy had some interesting discussions on the topic afterwards.

Slides from his talk can be found here: http://www.slideshare.net/tommysdk/next-generation-continuous-delivery
There is also a video recording available at: https://agilasverige.solidtango.com/video/next-generation-continuous-delivery.

We give our thumbs up for both of these conferences and hope to see you at these events next year as well! We always look forward to meeting people to discuss thoughts, experiences, problems and visions regarding Continuous Delivery, DevOps and agile software development methodologies. But if you don't want to wait until next year, don't hesitate to contact us!

Next week is conference week!

Next week will be a conference-heavy one in Stockholm! The inaugural CoDe Continuous Delivery & DevOps conference takes place on June 2nd, while Agila Sverige (Agile Sweden) occupies June 3rd and 4th. Both conferences target agile software development practices, and Diabol is very well represented at these events. We will have one speaker at each conference, presenting talks related to Continuous Delivery.

Marcus Philip will present "Continuous Delivery for Databases" at the CoDe conference, while Tommy Tynjä will give his talk "Next Generation Continuous Delivery" at Agila Sverige, where he takes a look at how the software of tomorrow can be built and delivered.

So if you’re interested in learning more about Continuous Delivery and what we do, don’t hesitate to come by!

See you at a conference very soon!

Diabol sponsors and speaks at CoDeSthlm

Diabol is a proud sponsor of and speaker at CoDeSthlm, the Continuous Delivery & DevOps one-day conference on June 2 in Stockholm.

Our Marcus Philip will hold the presentation "Continuous Delivery for Databases", talking about why databases are lagging behind in the move towards CD, why this is a problem and what we can do about it.

Also note that the new meetup group for CD lovers, Continuous-Delivery-Stockholm, has received a discount offer from the conference organizers, available to all meetup members; see this post.

AWS CloudFormation introduction

AWS CloudFormation is a concept for modelling and setting up your Amazon Web Services resources in an automatic and unified way. You create templates that describe your AWS resources, e.g. EC2 instances, EBS volumes, load balancers etc., and CloudFormation takes care of all provisioning and configuration of those resources. When CloudFormation provisions resources, they are grouped into something called a stack. A stack is typically represented by one template.

The templates are specified in JSON, which allows you to easily version control the templates describing your AWS infrastructure alongside your application code.
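Just to give a feel for the format, here is a minimal template sketch describing a single EC2 instance (the resource name, AMI id and instance type are made-up placeholders, not from a real setup):

cat > ec2-instance.json <<'EOF'
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Example stack: a single EC2 instance",
  "Resources": {
    "ExampleInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t1.micro"
      }
    }
  }
}
EOF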

CloudFormation fits perfectly into a Continuous Delivery way of working, where the whole AWS infrastructure can be automatically set up and maintained by a delivery pipeline. There is no need for a human to provision instances, set up security groups, DNS record sets etc., as long as the CloudFormation templates are developed and maintained properly.

You use the AWS CLI to execute the CloudFormation operations, e.g. to create or delete a stack. What we've done at my current client is to put the AWS CLI calls into bash scripts, which are version controlled alongside the templates. This means we don't have to remember all the options and arguments necessary to perform the CloudFormation operations, and we can use the same scripts both on developer workstations and in the delivery pipeline.
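A sketch of what such a wrapper script could look like (the stack name and template file name are examples, not our actual setup):

#!/bin/bash
# create_stack.sh -- thin, version controlled wrapper around the AWS CLI
# (the stack name and template file name are examples)
set -e

STACK_NAME="my-application-stack"
TEMPLATE="ec2-instance.json"

# Create the stack from the version controlled template
aws cloudformation create-stack \
    --stack-name "$STACK_NAME" \
    --template-body "file://$TEMPLATE"

A corresponding teardown script would simply call aws cloudformation delete-stack --stack-name "$STACK_NAME".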

A drawback of CloudFormation is that not all features of AWS are available through that interface, which might force you to create workarounds depending on your needs. For instance, CloudFormation does not currently support the creation of private DNS hosted zones within a Virtual Private Cloud. We solved this by using the AWS CLI to create the private DNS hosted zone in the bash script responsible for setting up our DNS configuration, prior to performing the actual CloudFormation operation that makes use of that private DNS hosted zone.
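A sketch of that workaround (the zone name, region and VPC id below are placeholders):

# Create a private DNS hosted zone associated with a VPC using the AWS CLI,
# before running the CloudFormation operation that depends on it
# (zone name, region and VPC id are placeholders)
aws route53 create-hosted-zone \
    --name "myapp.internal." \
    --vpc VPCRegion="eu-west-1",VPCId="vpc-12345678" \
    --caller-reference "myapp-internal-$(date +%s)"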

As CloudFormation is a superb way of setting up resources in AWS, in contrast to managing those resources manually, e.g. through the web UI, you can actually enforce restrictions on your account so that resources can only be created through CloudFormation. This is something we currently use at my current client for our production environment setup, to assure that the proper ways of working are followed.

I’ve created a basic CloudFormation template example which can be found here.

Tommy Tynjä

Jfokus 2015 main takeaways

The biggest Java conference in Sweden, Jfokus, wrapped up its three-day conference on Wednesday. This was my fifth consecutive one, and I thought it was the best in recent years. In this blog post I’ll summarize my main takeaways.

During the first day, the tutorial day, I truly enjoyed Ken Sipe from Mesosphere and his talk “Docker at Production Scale”. It was supposed to be a tutorial, but it wasn’t focused on one subject, which one might expect from the title. The talk was divided into several parts, where he first talked about infrastructure in general and what’s problematic in today’s datacenters regarding resource consumption. He then gave a good intro to Docker, where even I, who have been using Docker on a day-to-day basis for the last two months, learned a couple of new things. The last part of the talk focused on Apache Mesos, the problems it solves and how. He also briefly touched on subjects such as service discovery. For me, the main takeaway from this talk is that the way we structure our datacenters and our applications is definitely changing, and that there are a lot of exciting technologies taking form in this area right now.

Christian Heilmann held an interesting opening keynote where he talked a lot about mobile apps and their profitability. He pinpointed that many think apps are a sure way to make good money, but in reality that is not the case. Very few of all the apps in the marketplace are actually that profitable. Your company will have a greater chance of making money from apps if you target enterprise customers instead of individuals. Enterprises are constantly looking for ways for their employees to be more efficient, and certain apps could possibly help them achieve that. Individuals are also very conservative nowadays when it comes to installing new apps. Most people just go with the apps they have and install at most one new app per month. He had some interesting figures backing up those statements.

A great talk was “Thinking fast and slow in software development” by Daniel Bryant. His talk was based on the book Thinking, Fast and Slow by Daniel Kahneman, but applied to software development. One good quote from this talk was “look for actual problems instead of solutions”. As developers we tend to start elaborating solutions instead of trying to understand the actual problem. Do we really understand what the actual problem is? Or do we just think we know, when in fact we do not? He also mentioned that some of the most common factors behind failures in software projects (source: IEEE) are poor communication among customers, developers and users, the use of immature technology, and sloppy development practices. These are things that we definitely could do better at in our industry! There is no reason why we should accept this. We should all care more about software craftsmanship.

Jeremy Deane held an interesting presentation on concurrent processing techniques, using e.g. plain java.util.concurrent primitives and actors. He had a few good tips on how to decouple a web service that under the hood depends on a slow-responding third-party web service, by making the communication between them asynchronous. All of his examples can be found in his GitHub repo.

As usual at Jfokus, Arun Gupta presented what’s to come in the next Java EE version, in this case Java EE 8. The focus will be on improving the new features introduced in Java EE 7 to make them more usable. Support for HTTP/2 and HTML5 will be added to help those technologies gain traction. Servlet 4.0 will be introduced, as well as MVC 1.0. The JAX-RS, JMS and JSON APIs will also get facelifts. The Batch processing API will not be tied to Java EE only, but will in the future be available in conjunction with Java SE as well. Obviously, Java EE 8 will contain improvements for the new language features introduced in Java 8, such as functional interfaces.

The second day kicked off with a talk by Jez Humble called “21st Century Software Delivery”. He really put emphasis on how important continuous delivery is, along with continuous experimentation. As an industry, we are bad at experimenting. We try to build this big thing that we think our customers want (but we don’t really know). Instead we should try out the thing we’re building at a small scale first: not only as a prototype, but as a fully functional, nice and shiny feature that the customers will appreciate, with very limited scope. We can then measure how well the experiment turns out by conducting A/B testing and comparing the analytics for the feature. Should we continue to build on top of that experiment or not? You wouldn’t set out to construct a very large, complex building without first building it at a small scale to assure that techniques and functionality match expectations. We should do the same in software development.

Jez Humble’s second talk, on automated acceptance testing, was valuable. For me, who am passionate about delivering quality software, there wasn’t much new in the content, but it was still refreshing to be reminded of why we actually want to do testing in a certain way, even if you are doing it already. Most people have probably encountered flaky tests: tests that for one reason or another go red, with or without an apparent cause, only to go green in a later build containing a totally unrelated code change. Flaky tests lead to less confidence and trust in the tests, which eventually leads to people ignoring them or not even running them at all. One interesting technique to battle flaky tests is to move known flaky tests to a separate suite, which is run separately from (preferably after) your actual suite. Then you can remain confident that your main test suite is always reliable. When a flaky test has become stable again, it should move back into the main suite. It is important to remember that a test suite shouldn’t be static: tests should come and go, and be moved appropriately, alongside the development of the codebase. Also, don’t be afraid to actually delete tests that no longer bring value. We are horrible at identifying and removing those tests.

In between talks I also enjoyed meeting a lot of people I’ve worked with in various projects throughout my career. It is always a real pleasure to catch up with old friends and to get to know a few new ones. I also got a brief chat with Jez Humble regarding continuous delivery and strategies for balancing maintenance against development of new features in highly autonomous teams with end-to-end responsibility, a discussion which was very much appreciated. Even though the vast majority of sessions were awesome, I almost wish there had been some more breaks as well, so I could have had the opportunity for even more socializing.

All in all, a great event. Thanks to everyone involved in organizing this event and to all the speakers and attendees. See you in 2016!

Tommy Tynjä

Agile Configuration Management – intermezzo

Why do I need agile configuration management?

The main reason for doing agile configuration management is that it is a necessary means to achieve agile infrastructure and architecture. When you have an agile architecture, it becomes easy to make the correct architectural choices.

Developers make decisions every day without realizing it. Solving a requirement by adding a little more code to the monolith and a new table in the DB is a choice, even though it may not be perceived as such. When you have an agile architecture, it’s easy to create (and maintain!) a new server or middleware to address the requirement in a better way.

Docker on Mac OS X using CoreOS

Docker is on everybody’s lips these days. It’s an open-source software project that leverages Linux kernel resource isolation to allow independent, so-called containers to run within a single Linux instance, thus avoiding the overhead of virtual machines/hypervisors while still offering full container isolation. Docker is therefore a feasible approach for automated and scalable software deployments.

Many developers (including myself) nowadays develop on Mac OS X, which is not Linux. It is, however, possible to use Docker on OS X, but one should be aware of what this implies. As OS X is not based on Linux and therefore lacks the kernel features that would allow you to run Docker containers natively, you still need a Linux host somewhere. Docker provides something called boot2docker, which essentially is a Linux distribution (based on Tiny Core Linux) built specifically for running Docker containers.
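Getting started with boot2docker is roughly as follows (a sketch, assuming boot2docker has already been installed on your Mac, e.g. through Homebrew):

# Initialize the boot2docker VM, boot it and ssh into it
boot2docker init
boot2docker up
boot2docker ssh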

In case you want a more general Linux VM, for instance if you want to use it for other tasks than just running Docker containers, an alternative to boot2docker is to use CoreOS as your Docker host VM. CoreOS is quite lightweight (but obviously bigger than boot2docker) and comes bundled with Docker. Setting up a fresh CoreOS instance to run with Vagrant is easy:

mkdir ~/coreos
cd ~/coreos
echo 'Vagrant.configure("2") do |config|
  config.vm.box = "coreos"
  config.vm.box_url = "http://storage.core-os.net/coreos/amd64-generic/dev-channel/coreos_production_vagrant.box"
  config.vm.network "private_network",
    ip: "" # fill in a private network IP of your choice
end' > Vagrantfile
vagrant up
vagrant ssh
core@localhost ~ $ docker --version
Docker version 0.9.0, build 2b3fdf2

Now you have a CoreOS Linux VM available which you can use as your Docker host.
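To verify that the host works, you could for instance start a throwaway container (the image is just an example):

core@localhost ~ $ docker run -i -t busybox /bin/echo "hello from a container"
hello from a container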

If you want to mount a directory to be shared between OS X and your CoreOS host, just add the following line, with proper existing paths, to the Vagrantfile:
config.vm.synced_folder "/Users/tommy/share", "/home/core/share", id: "core", :nfs => true, :mount_options => ['nolock,vers=3,udp']

Happy hacking!

Past and upcoming events

We have been unusually busy at Diabol during the past few months, speaking at various software conferences on a variety of Continuous Delivery related topics. We enjoy sharing our experiences and meeting new and familiar faces to discuss topics that we’re passionate about, such as Continuous Delivery, DevOps and automation.

We have also arranged a Continuous Delivery seminar of our own, which attracted 20 top IT management professionals from various well-known Swedish enterprises. The seminar was a great success, with interesting presentations and good discussions among the attendees.

The next upcoming event where we will be presenting is the first edition of the Continuous Delivery Conference in Bussum, Netherlands on December 4th. Andreas Rehn will present “From dinosaur to unicorn in 12 months: how to push continuous delivery maturity to the next level”.

Past events where we have presented lately, together with video recordings or presentation material:

If you plan to attend a conference where we’re speaking, or are just attending, come by and say hi! We look forward to talking to you!

A comment on ‘Continuous Delivery Pipelines: GoCD vs Jenkins’

In Continuous Delivery Pipelines: GoCD vs Jenkins, there are some good points on modeling continuous delivery, but to make his point that Go is better (at CD) than Jenkins, the author chooses to represent Jenkins with the Build Pipeline Plugin rather than the Delivery Pipeline Plugin by Diabol, which has superior visualization. I haven’t used Go, and I would like to find the time to have a deeper look at it. As far as I can gather, it’s a competent tool, quite possibly better at CD than Jenkins, but I have to say that it is quite possible to do CD and pipelines in Jenkins; I am doing it as I write.

His post brings up some interesting points, like: do we need pipelines as first-class citizens in our tools, rather than just ‘visual doodles’? I am a bit skeptical. Pipelines are indeed central to CD, but my thoughts on what’s lacking in Jenkins go in a different direction. I think we need a set of tools to do CD and pipelines.

Pipelines should probably be expressed outside of the (build/CI/deploy) tool, because they will span several domains and abstraction levels. There will be a need for different visualizations of the pipeline state and configuration. In fact, what matters at run-time (as opposed to design-time) is not the pipeline itself, but questions like:

  • Where is my commit?
    • Example: It’s passed commit phase and has been deployed to the first environment where automated tests are running
  • What is the state of this environment?
    • Example: It has version 1.4.123 of app X stemming from commit Y and version 1.3.567 of the VM template stemming from commit Z.

The pipeline will also be subjected to continuous improvement. Then, at design-time (and debug-time), there are questions like:

  • What are the up-stream dependencies of this job?
  • Where is this artifact produced?

Your tools should help you answer questions like these, simply and quickly, through good visualization and design. As a rough illustration of the kind of tooling I have in mind, see the sketch below.
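Even today, a small script against the Jenkins REST API can answer “where is my commit?”. This is just a sketch; the Jenkins URL and job name are made up for the example:

#!/bin/bash
# where_is_my_commit.sh -- sketch: find which builds of a job contain a
# given commit, by querying the Jenkins REST API
# (the Jenkins URL and job name are examples)
JENKINS_URL="http://jenkins.example.com"
JOB="commit-phase"
SHA="$1"

# List all build numbers of the job, then grep each build's JSON for the commit
for build in $(curl -s "$JENKINS_URL/job/$JOB/api/json?tree=builds[number]" \
               | grep -o '"number":[0-9]*' | cut -d: -f2); do
    if curl -s "$JENKINS_URL/job/$JOB/$build/api/json" | grep -q "$SHA"; then
        echo "Commit $SHA is included in $JOB #$build"
    fi
done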

We Continuously Deliver!