All posts by Tommy Tynjä

Twitter: @tommysdk

Experiences from a highly productive development team

During the past year I’ve been working in a development team which is arguably the most productive team I’ve been part of in my ten-year career in software development. During the year, we have been able to achieve great things. We have taken new systems and services live, delivered new features to both internal and external customers and decommissioned old services which have served their purpose, all at a fast pace. This article addresses some of the key aspects I have observed that make this team so productive and efficient in how it works.

We seldom, if ever, perform work on our own. We have been practicing so-called mob programming for a year now, which means the whole team works on the same problem, in front of a big shared monitor, with one keyboard and mouse being passed around the participants. The team simply decided to try this after attending a conference which had a presentation on mob programming, and it then evolved into our standard way of working. The team’s manager does not care how we work, as long as we deliver. Every morning we finish our daily scrum/standup meeting by deciding on the team formation. Sometimes a small task needs to be performed that might not be suitable for the mob. In that case we pair program on it while the mob continues. The benefits of everyone working together include knowledge sharing, everyone knowing what’s happening, and architecture decisions being made as we go. We also prefer constant pre-commit code review, because at that point everybody is familiar with the code changes, the intent behind them and what possible solutions were discarded, something that is not as clear in a separate post-commit review, which also adds unnecessary overhead. A team moves quicker than an individual and the shared knowledge of the team is greater than that of any individual. These effects were more or less immediate when we started with mob programming.

Continuous Delivery. We have a streamlined process where the whole delivery process is automated. Each code change that is pushed to a source code repository triggers a new build of the software. For each build we run unit, subsystem and regression tests in a fully automated fashion. This is our safety net, and if all tests pass, we automatically continue deploying the code to our different environments. In roughly 30 minutes, a new code change has been deployed to production, provided it has passed all the gatekeeping. No manual interventions needed. This allows us to go fast and get quick feedback on whether a given code change is fit for purpose. A benefit of shipping new code changes as soon as possible is that you do not build up an inventory of undeployed code changes. If a problem arises, the developers are still up to speed and have the context of the particular change fresh in mind should they need to make any additional changes. We also tag our deployments automatically for traceability, which makes it very easy to know what code is actually deployed in production.

Code that is in production brings value, code that isn’t, doesn’t.

Teams with end-to-end responsibility. Amazon CTO Werner Vogels has been quoted as saying: “You build it, you run it”. This aspect is really important for a development team to be able to take responsibility and accountability for the code it writes. If you are on call for the code you’ve written, you will pay attention to quality, since you do not want to be woken up in the middle of the night due to technical issues. And if problems do arise, you will be sure to solve them for good. Having to think about the operational aspects of software on a daily basis has made me a better developer. Not only do I write code that is easy to test (an effect of test-driven development), but I also keep in mind how the code is going to be deployed and run.

A team-dedicated product owner (PO) is key. A year ago we had a PO that was split between our team and another. When that person became dedicated to our team only, a few months later, we noticed great effects. The PO was much more available for the needs of the development team. The PO was able to answer questions and make decisions, which shortened the feedback loop for the developers substantially. These days the PO even joins the programming mob, both to gain an understanding of how particular problems are solved and to bring domain-specific expertise to the team.

Automate everything. Not everything is worth automating at first, but if it is obvious that a task is repetitive, and thus error-prone, it will be worth automating. Automate the tedious task together in the team, and everyone will understand the benefits of it and know the operating procedure. Time is a valuable asset and we only have so much of it. We need to use it wisely to be able to bring as much value as possible to the company and our customers.

Don’t let a human do a script’s job.

The willingness to experiment and try out new things is of utmost importance. Not all experiments will bring the desired effect and some might even have a negative effect. But the culture should embrace continuous improvement and experimentation. It should be easy to discard an experiment and roll back.

You should never stop improving. No matter how good you feel about yourself, which processes you have in place, or how fast you are able to deliver new features to your customers, you must not stop improving. The journey has no end. If you stop improving, one of your competitors will eventually outplay you. Why has Usain Bolt been the best sprinter for such a long period of time? He probably works harder than any of his competitors to stay on top of the game. You should too.

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

Reasons why code freezes don’t work

This article is a continuation on my previous article on how to release software with quality and confidence.

When the big e-commerce holidays such as Black Friday, Cyber Monday and Christmas are looming around the corner, many companies are gearing up to make sure their systems are stable and able to handle the expected increase in traffic.

What many organizations do is introduce a full code freeze for the holiday season, where no new code changes or features are allowed to be released to the production systems during this period of the year. Another approach is to only allow updates to the production environment during a small window within office hours. These approaches might sound logical but are in reality anti-patterns that neither reduce risk nor ensure stability.

What you’re doing when you stop deploying software is interrupting the pace of the development teams. The team’s feedback loop breaks and the ways of working are forced to deviate from the standard process, which leads to decreased productivity.

When your normal process allows deploying software on a regular basis in an automated and streamlined fashion, it becomes just as natural as breathing. When you stop deploying your software, it’s like holding your breath. This is what happens to your development process. When you finally open the floodgates after the holiday season, bugs and deployment failures are much more likely to occur. The more changes that are stacked up, the bigger the risk of unwanted consequences for each deployment. Changes might also be rushed, at lower quality, to make it into production in time before a freeze. Keeping track of which changes are pending release also becomes more challenging.

Keeping your systems stable with exceptional uptime should be a priority throughout the whole year, not only during holiday season. The ways of working for the development teams and IT organization should embrace this mindset and expectations of quality. Proper planning can go a long way to reduce risk. It might not make sense to push through a big refactoring of the source code during the busiest time of the year.

A key component for allowing a continuous flow of evolving software throughout the entire year is organizational maturity regarding monitoring, logging and planning. It should be possible to monitor, preferably visualised on big screens in the office, how the systems are behaving and functioning in real time. Both technical and business metrics should be measured, and alarm thresholds configured accordingly.

Deployment strategies are also of importance. When deploying a new version of a system, it should always be cheap to roll back to a previous version. This should be automated and possible to do by clicking just one button. Then, if there is a concern after deploying a new version, discovered by closely monitoring the available metrics, it is easy to revert to a previously known good version. Canary releasing is another strategy with which possible issues can be discovered before rolling out new code changes to all users. Canary releasing allows you to route a specific share of traffic to a specific version of the software, so that you can compare metrics, outcomes and possible errors between the different versions.
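
To illustrate the routing idea behind canary releasing, here is a minimal, hedged sketch in Java. It simply sends a configurable percentage of requests to a canary version and the rest to the stable version; the URLs and the percentage are hypothetical, and a real setup would typically do this in a load balancer or router rather than in application code.

import java.util.concurrent.ThreadLocalRandom;

public class CanaryRouter {

    // Hypothetical endpoints for the stable and canary versions of a service.
    private static final String STABLE_URL = "https://myservice.example.com/v1";
    private static final String CANARY_URL = "https://myservice.example.com/v2-canary";

    // Percentage of traffic routed to the canary version.
    private final int canaryPercent;

    public CanaryRouter(int canaryPercent) {
        this.canaryPercent = canaryPercent;
    }

    // Pick a backend for a single request: canaryPercent% of requests go to the canary.
    public String routeRequest() {
        int roll = ThreadLocalRandom.current().nextInt(100);
        return roll < canaryPercent ? CANARY_URL : STABLE_URL;
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(5); // send 5% of traffic to the canary
        for (int i = 0; i < 10; i++) {
            System.out.println("Request " + i + " routed to " + router.routeRequest());
        }
    }
}

If the metrics for the canary version look healthy, the percentage can be increased gradually until the new version serves all traffic; if not, the canary share is simply set back to zero.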

When the holiday season begins, keep calm and keep breathing.

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

How to release software frequently with quality and confidence

Continuous Delivery is a way of working for reducing the cost, time and risk of releasing software. The idea is that small and frequent code changes carry less risk for bugs and disturbances. Key aspects of Continuous Delivery are to always have your source code in a releasable state and to have a fully automated software delivery process. These automated processes are typically called delivery pipelines.

In many companies, releasing software is a painful process. A process that is not exercised on a frequent basis. Many companies are not even able to release software once a month during a given year. In order to make something less painful you have to practice it. If you’re about to run a marathon for the first time, it might seem painful. But you won’t get better at running if you don’t do it frequently enough. It will still hurt.

As humans, we tend to be bad at repetitive work. It is therefore both time-consuming and error-prone to involve manual steps in the process of deploying software. That is reason enough to automate the whole process. Automation ensures that the process is fast and repeatable, and it provides traceability.

Never let a human do a script’s job.

With automation in place you can start building confidence in your release process and start practicing it more frequently. Releasing software should be practiced as often as possible so that you don’t even think about it happening.

Releasing new code changes should be so natural that you don’t need to think about it. It should be like breathing. Not something as painful as giving birth.

A prerequisite for successfully practicing Continuous Delivery is test automation. Without test automation in place, testing becomes the bottleneck in your delivery process. It is therefore crucial that quality is built into the software. Writing automated unit, integration and acceptance tests should be a natural part of the development process. You cannot ship a piece of code that has no proper test coverage. If you are not testing the code you’re developing, you cannot be certain that it works as intended. If there is a need for regression tests, automate them and make them cheap to run so that they can be run repeatedly for every code change.
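
As a minimal illustration of the kind of cheap, automated test that can run on every code change, here is a hedged JUnit 4 sketch; the class under test is a made-up helper, included inline so the example is self-contained.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical helper under test, inlined to keep the example self-contained.
    static class PriceCalculator {
        // Applies a percentage discount to a price given in whole cents.
        static long applyDiscount(long priceInCents, int discountPercent) {
            return priceInCents - (priceInCents * discountPercent) / 100;
        }
    }

    @Test
    public void appliesTenPercentDiscount() {
        assertEquals(900, PriceCalculator.applyDiscount(1000, 10));
    }

    @Test
    public void zeroDiscountLeavesPriceUntouched() {
        assertEquals(1000, PriceCalculator.applyDiscount(1000, 0));
    }
}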

With Continuous Delivery it becomes evident that a release and a deployment are not the same things. A deployment is an exercise that happens all the time, for each new code change. A release is however a business or marketing decision on when to launch a new feature to your end users.

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

JavaOne Latin America summary

Last week I attended the JavaOne Latin America software development conference in São Paulo, Brazil. This was a joint event with Oracle Open World, a conference with focus on Oracle solutions. I was accepted as a speaker for the Java, DevOps and Methodologies track at JavaOne. This article intends to give a summary of my main takeaways from the event.

The main point Oracle made during the opening keynote of Oracle Open World was their commitment to cloud technology. Oracle CEO Mark Hurd made the prediction that in 10 years, most production workloads will be running in the cloud. Oracle currently engineers all of their solutions for the cloud, which is something their on-premises solutions can also benefit from. Security is a very important aspect in all of their solutions. Quite a few sessions during JavaOne showcased Java applications running on Oracle’s cloud platform.

The JavaOne opening keynote demonstrated Java Flight Recorder, which enables you to profile your applications with near-zero overhead. The aim of the flight recorder is to have as little overhead as possible so that it can be enabled in production systems by default. It has a user interface which provides valuable data in case you want to get an understanding of how your Java applications perform.

The next version of Java, Java 9, will feature long-awaited modularity through project Jigsaw. Today, all public APIs in your source code packages are exposed and can be used by anyone who has access to the classes. With Jigsaw, you as a developer can decide which packages are exported and exposed. This gives you more control over which APIs are internal and which are intended for consumption. This was quickly demonstrated on stage. You will also have the possibility to declare which parts of the JDK your application depends on. For instance, it is quite unlikely that you need UI-related libraries such as Swing, or audio support, if you develop back-end software. I once tried to dissect the Java rt.jar for this particular purpose (just for fun), so it is nice to finally see this becoming a reality. The keynote also mentioned project Valhalla (e.g. value types for classes) and project Panama, but at this date it is still uncertain whether they will be included in Java 9.
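
For a flavour of what a Jigsaw module declaration looks like, here is a minimal sketch of a module-info.java based on the Java 9 syntax; the module and package names are hypothetical.

// module-info.java, placed at the root of the module's source tree
module se.example.backend {
    // Only the exported packages are visible to other modules;
    // everything else in the module stays internal.
    exports se.example.backend.api;

    // Declare exactly which JDK (or third-party) modules the application needs,
    // e.g. java.sql but neither Swing nor audio for a back-end service.
    requires java.sql;
}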

Mark Heckler from Pivotal had an interesting session on the Spring framework and the Spring Cloud projects. Pivotal works closely with Netflix, a well-known open source contributor and one of the industry leaders when it comes to developing and using new technologies. However, since Netflix is committed to running their applications on AWS, Spring Cloud aims to make use of Netflix’s work to create portable solutions for a wider community by introducing appropriate abstraction layers. This enables Spring Cloud users to avoid changing their code if there is a need to change the underlying technology. Spring Cloud has support for Netflix tools such as Eureka, Ribbon, Zuul and Hystrix, among others.
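
As a rough idea of what those abstractions look like in code, here is a hedged sketch of a small Spring Boot service using Spring Cloud Netflix annotations for service discovery and a Hystrix fallback. It assumes the Spring Cloud Netflix starters (Eureka, Hystrix, Ribbon) are on the classpath and a Eureka server is running; the service and class names are hypothetical.

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableDiscoveryClient   // register with Eureka and discover other services by name
@EnableCircuitBreaker    // enable Hystrix circuit breakers
public class GreetingApplication {

    @Bean
    @LoadBalanced  // resolve logical service names via Ribbon client-side load balancing
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }
}

@Service
class GreetingClient {

    @Autowired
    private RestTemplate restTemplate;

    // If the downstream call fails or times out, Hystrix invokes the fallback instead.
    @HystrixCommand(fallbackMethod = "defaultGreeting")
    public String greeting() {
        // "greeting-service" is a logical name looked up in Eureka, not a hostname.
        return restTemplate.getForObject("http://greeting-service/greeting", String.class);
    }

    public String defaultGreeting() {
        return "Hello from the fallback";
    }
}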

Arquillian, considered by many to be the best integration testing framework for Java EE projects, was dedicated a full one-hour session at the event. As a long-time contributor to the project, it was nice to see a presentation about it and how its Cube extension can make use of Docker containers to execute your tests against.

One interesting session was a non-technical one, also given by Mark Heckler, which focused on financial equations and which factors drive business decisions. The purpose was to give an understanding of how businesses calculate paybacks and return on investment, and when and why it makes sense for a company to invest in new technology. For instance, the payback period for an initiative should be as short as possible, preferably less than a year. The presentation also covered net present values and quantifications. Transforming a monolithic architecture to a microservice-style approach was the example used for the calculations. The average release cadence for key monolithic applications in our industry is just one release per year, which in many cases is even optimistic! Compare these numbers with Amazon, who have developed their applications with a microservice architecture since 2011. Their average release cadence is 7448 deployments a day, which means that they perform a production release once every 11.6 seconds! This presentation certainly laid a good foundation for my own session on continuous delivery.
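
To make the two financial measures concrete, here is a small, hedged Java sketch that computes a payback period and a net present value using the standard formulas (payback = investment / annual net cash inflow; NPV discounts each year's cash flow by (1 + r)^t). All figures are made up for illustration and are not taken from the talk.

public class InvestmentMath {

    // Payback period in years: how long until cumulative savings cover the investment.
    static double paybackPeriodYears(double investment, double annualNetCashInflow) {
        return investment / annualNetCashInflow;
    }

    // Net present value of a series of yearly cash flows at a given discount rate.
    static double netPresentValue(double investment, double[] yearlyCashFlows, double discountRate) {
        double npv = -investment;
        for (int year = 1; year <= yearlyCashFlows.length; year++) {
            npv += yearlyCashFlows[year - 1] / Math.pow(1 + discountRate, year);
        }
        return npv;
    }

    public static void main(String[] args) {
        double investment = 100_000;                    // hypothetical cost of the initiative
        double[] cashFlows = {60_000, 80_000, 90_000};  // hypothetical yearly savings
        double discountRate = 0.10;                     // hypothetical 10% discount rate

        System.out.printf("Payback period: %.1f years%n",
                paybackPeriodYears(investment, cashFlows[0]));
        System.out.printf("NPV over %d years: %.0f%n",
                cashFlows.length, netPresentValue(investment, cashFlows, discountRate));
    }
}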

Then it was finally time for my presentation! When I arrived 10 minutes before the talk there was already a long line outside the room, and it filled up nicely. In my presentation, The Road to Continuous Delivery, I talked about how a Swedish company revamped its organization to implement a completely new distributed system running on the Java platform, using continuous delivery ways of working. I started with a quick poll of how many in the room were able to perform production releases every day, and I was apparently the only one. So I’m glad that there was a lot of interest in this topic at the conference! If you are interested in having me present this or another talk on the topic at your company or at an event, please contact me! It was great fun to give this presentation and I got some good questions and interesting discussions afterwards. You can find the slides for my presentation here.

Java Virtual Machine (JVM) architect Mikael Vidstedt had an interesting presentation about JVM insights. It is apparently not uncommon that a large share of the Java heap footprint is taken up by String objects (25-50% is not unusual), and many of them have the same value. JDK 8, however, introduced string deduplication as part of the G1 garbage collector improvement effort to address this memory footprint concern.
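
As a small, hedged illustration of the feature, the sketch below deliberately creates many distinct String instances with identical values; when run on a JDK 8u20+ JVM with the G1 collector and string deduplication enabled, the duplicated character arrays become candidates for deduplication. The flags in the comment are the documented HotSpot options; the rest of the example is made up.

import java.util.ArrayList;
import java.util.List;

public class StringDedupDemo {

    public static void main(String[] args) throws InterruptedException {
        // Run with: java -XX:+UseG1GC -XX:+UseStringDeduplication
        //               -XX:+PrintStringDeduplicationStatistics StringDedupDemo
        List<String> strings = new ArrayList<>();

        // Create lots of distinct String objects that all share the same value.
        for (int i = 0; i < 1_000_000; i++) {
            strings.add(new String("duplicated value"));
        }

        // Give the G1 deduplication thread a chance to run before we exit.
        System.gc();
        Thread.sleep(5_000);

        System.out.println("Held " + strings.size() + " strings with identical values");
    }
}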

I was positively surprised that many sessions were quite non-technical. A few sessions talked about the possibilities of the Raspberry Pi embedded device, but there were also a couple of presentations that touched on software development in a broader sense. I think it is good and important for conferences of this size to have such a good balance in content.

All in all I thought it was a good event. JavaOne and Oracle Open World combined for around 1300 attendees and a big exhibition hall. JavaOne was multi-track, but most sessions were in Portuguese. There was usually one track dedicated to English-speaking sessions, but that obviously limited the available content somewhat for non-Portuguese-speaking attendees. An interesting feature was that the sessions held in English were live-translated into Portuguese for those who borrowed headphones for that purpose.

The best part of the conference was that I got to meet a lot of people from many different countries. It is great fun to meet colleagues from all around the world who share my enthusiasm and passion for software development. The quest to make this industry better continues!

Tommy Tynjä
@tommysdk
LinkedIn profile
http://www.diabol.se

AWS Summit recap

This week, the annual AWS Summit took place in sunny Stockholm. This article aims to provide a recap of my impressions from the event.

It was evident that the event had grown since last year, with approximately 2000 people attending this year’s one-day event at the Waterfront Congress Centre. Only a few sessions were technical, as most of the presentations just gave an overview of the different services and various use cases. I really appreciated the talks from different AWS customers who spoke about their use of AWS technologies, what problems they solved and how. I found it valuable to hear from different companies on how they leverage certain products in their production environments.

The opening keynote was long (2 hours!) and included a lot of sales talk. The main keynote speaker mentioned that 20 percent of the audience had never used any AWS services at all, which explains the thorough walkthrough of the different AWS products. One product which stood out was Amazon Inspector, which can detect and remediate security issues early in your AWS environment. It is not yet available in all regions, but is available in e.g. eu-west-1 (Ireland). It was also interesting to hear about migration of large amounts of data using Snowball, a physical device shipped to your datacenter, which allows you to move your data faster than over the Internet (except for the physical delivery of the device to and from your own datacenter).

It is undeniable that the Internet of Things (IoT) is gaining traction and that the number of connected devices has grown exponentially over the past few years. AWS provides several services for developing and running IoT services. With AWS IoT, your devices can securely communicate with your backend servers. What I found most interesting was the concept of device shadows. A shadow is an interface which allows you to communicate with a device even if it is offline at the moment. In your application, you communicate with the shadow without having to care about whether the device is online or not. If you want to change the state of a device that is currently offline, you update the shadow, and when the device connects again it will pick up the new desired state from the shadow.
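
To make the shadow concept a bit more concrete, here is a minimal, hedged Java sketch that just builds the kind of JSON document a backend would publish to a thing's shadow update topic ($aws/things/<thingName>/shadow/update, as documented by AWS IoT). It only constructs and prints the payload; actually publishing it would go through an MQTT client of your choice, and the thing name and state fields are hypothetical.

public class DeviceShadowUpdate {

    public static void main(String[] args) {
        String thingName = "livingroom-lamp";  // hypothetical device name

        // Reserved AWS IoT topic for updating the shadow of a thing.
        String updateTopic = "$aws/things/" + thingName + "/shadow/update";

        // The shadow document: we only set the desired state; the device reports
        // its actual state under "reported" once it reconnects.
        String payload = "{"
                + "\"state\": {"
                + "  \"desired\": { \"power\": \"on\", \"brightness\": 80 }"
                + "}"
                + "}";

        System.out.println("Publish to topic: " + updateTopic);
        System.out.println("Payload: " + payload);
        // A real application would now publish 'payload' to 'updateTopic' over MQTT (TLS).
    }
}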

At the startup track, we got to hear how Mojang leverages AWS for their Minecraft Realm concept. Instead of letting external parties host their game servers, they decided to go with AWS for Minecraft Realm, to allow for a more flexible infrastructure. An interesting aspect is that they had to develop their own algorithm for scaling out quickly, as in a gaming environment it is not acceptable to wait five minutes for an auto scaling group to spin up new machines to meet the current demand from users. Instead, they have to use quite large instance types and keep new servers on standby to be able to take on new traffic as it arrives. It is not trivial either to terminate instances where people are playing, even if only a few, as that wouldn’t provide a good user experience. Instead, they kindly inform the users that the server will terminate in five minutes, which usually makes them change server. Not ideal, but live migration is too far away at the moment. They still use old EC2 Classic instances and will have to do some heavy lifting to modernise their stack on AWS.

There was also a presentation from QuizUp on how they use infrastructure as code with Terraform to manage their AWS resources. A benefit they get from using Terraform instead of Cloudformation is an execution plan before actually applying changes. A drawback is that it is not possible to query Terraform for the current resources and their state directly from AWS.

In the world of relational databases in AWS (RDS), Aurora is an AWS-developed database designed to maximise reliability, scalability and cost-effectiveness. It delivers up to five times the throughput of standard MySQL running on the same hardware. It is designed to scale and to handle failures. It even provides an SQL extension to simulate failures:
ALTER SYSTEM CRASH [INSTANCE | DISPATCHER | NODE ];
ALTER SYSTEM SIMULATE percentage_of_failure PERCENT
* READ REPLICA FAILURE
* DISK FAILURE
* DISK CONGESTION
FOR INTERVAL quantity [ YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND ]

Probably the most interesting session of the day was about serverless architecture using AWS Lambda. Lambda allows you to upload snippets of code, functions, to AWS, which runs them for you. There is no need to provision servers or think about scalability; AWS does that for you, and you only pay for the time your code executes, in units of 100 ms. The best thing about this talk was the peek under the hood. AWS leverages Linux containers (not Docker) to isolate the resources of the uploaded functions and to be able to run and scale them quickly. It also offers predictive capacity planning. An interesting part is that you can upload the libraries which your code depends on as part of your function, so you could basically run a small microservice just by using Lambda. To deploy your function, you package it in a zip archive and use Cloudformation (specified as type AWS::Lambda::Function). You are able to run your function inside your VPC and thus leverage other resources available within it.
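
For reference, a Java Lambda function is essentially a class implementing the RequestHandler interface from the aws-lambda-java-core library. The sketch below is a minimal, hedged example of such a handler; the handler class name and the shape of the input map are hypothetical, and the zip you deploy would contain this class plus its dependencies.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// A minimal AWS Lambda handler: AWS invokes handleRequest for each event,
// and you only pay for the execution time of that method.
public class GreetingHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        context.getLogger().log("Received input: " + input);
        Object name = input.getOrDefault("name", "world");
        return "Hello, " + name + "!";
    }
}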

All in all I thought this was a great event. If you didn’t attend I really recommend attending the next one – especially if you’re already using AWS.

As we at Diabol are standard partners with Amazon, not only can we assist you with your cloud platform strategy, but we can also tie that together with a full view of your systems development process. Don’t hesitate to contact us!

You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

The power of Jenkins JobDSL

At my current client we define all of our Jenkins jobs with Jenkins Job Builder so that we can have all of our job configurations version controlled. Every push to the version control repository which hosts the configuration files will trigger an update of the jobs in Jenkins so that we can make sure that the state in version control is always reflected in our Jenkins setup.

We have six jobs that are essentially the same for each of our projects but with only a slight change of the input parameters to each job. These jobs are responsible for deploying the application binaries to different environments.

Jenkins Job Builder supports templating, but our experience is that those templates usually become quite hard to maintain in the long run. And since it is just YAML files, you are quite restricted in what you are able to do. JobDSL job configurations, on the other hand, are written in Groovy. It allows you to insert actual code execution into your job configuration scripts, which can be very powerful.

In our case, with JobDSL we can easily create one common job configuration and then just iterate over the job-specific parameters to create the necessary jobs for each environment. We can also use utility methods written in Groovy and invoke them in our job configurations. So instead of having to maintain similar Jenkins Job Builder configurations in YAML, we can do it much more concisely with JobDSL.

Below, an example of a JobDSL configuration file (Groovy code), which generates six jobs according to the parameterized job template:

// Helpers to split a qualifier such as "us-staging" into its upper-cased
// prefix ("US") and its environment type ("staging").
class Utils {
    static String environment(String qualifier) { qualifier.substring(0, qualifier.indexOf('-')).toUpperCase() }
    static String environmentType(String qualifier) { qualifier.substring(qualifier.indexOf('-') + 1)}
}

// One entry per target environment; each entry generates a deployment job below.
[
    [qualifier: "us-staging"],
    [qualifier: "eu-staging"],
    [qualifier: "us-uat"],
    [qualifier: "eu-uat"],
    [qualifier: "us-live"],
    [qualifier: "eu-live"]
].each { Map environment ->

    // A parameterized deployment job for this particular environment.
    job("myproject-deploy-${environment.qualifier}") {
        description "Deploy my project to ${environment.qualifier}"
        parameters {
            stringParam('GIT_SHA', null, null)
        }
        scm {
            git {
                remote {
                    url('ssh://git@my_git_repo/myproject.git')
                    credentials('jenkins-credentials')
                }
                branch('$GIT_SHA')
            }
        }
        deliveryPipelineConfiguration("Deploy to " + Utils.environmentType("${environment.qualifier}"),
                Utils.environment("${environment.qualifier}") + " environment")
        logRotator(-1, 30, -1, -1)
        steps {
            shell("""deployment/azure/deploy.sh ${environment.qualifier}""")
        }
    }
}

If you need help getting your Jenkins configuration into good shape, just contact us and we will be happy to help you! You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

Conference retrospectives

The first week of June was a busy one for us with representation on a couple of conferences taking place in Stockholm, the inaugural CoDe Continuous Delivery & DevOps conference on June 2nd and Agila Sverige (Agile Sweden) on June 3rd and 4th. Here’s a small retrospective on both of those events.

The CoDe conference was quite small (a bit less than 200 people) but had a very good vibe to it. We were silver sponsors of the event and we had our Jenkins Delivery Pipeline Plugin on display in our booth which gained a lot of interest from attendees. It led to many interesting discussions on CI/CD servers, automation, capabilities and visualization. Our feeling was that the attendees had a lot of genuine interest, knowledge and awareness about Continuous Delivery which fueled these interesting discussions.

Our Marcus Philip presented “Continuous Delivery for Databases” at the CoDe conference, where he talked about how to transition a legacy database application with a lot of PL/SQL code into a streamlined process where all changes are version controlled, traceable and automatically deployed. The talk covered the whole spectrum of the problem, starting with why they weren’t doing Continuous Delivery, why they should do it, thoughts on different solutions to the problem and how they did the actual implementation and how it panned out. Many attendees enjoyed the presentation and Marcus also got the opportunity to present the talk again at the Oracle Stockholm Meetup on June 16th. Cool!

Slides from the talk can be found here: https://speakerdeck.com/marcusphi/continuous-delivery-for-databases.

Agila Sverige is a conference which strongly encourages discussions among the conference attendees. All talks at the conference are lightning talks and there is plenty of time allocated for open space discussions and breaks, allowing you to network and share experiences with others. The conference is in Swedish but is probably one of the best when it comes to agile software development practices. Two or three years ago many talks and discussions at this conference focused on “why to practice agile development practices”. This year it was apparent that the industry has matured on all levels, since the majority of talks and open space sessions focused rather on “how to practice agile development practices”.

Tommy Tynjä presented a visionary talk on how tomorrow’s software can be structured and delivered in his presentation “Next Generation Continuous Delivery”. The talk gained a lot of interest, the room was packed for the presentation and Tommy had some interesting discussions on the topic afterwards.

Slides from his talk can be found here: http://www.slideshare.net/tommysdk/next-generation-continuous-delivery
There is also a video recording available at: https://agilasverige.solidtango.com/video/next-generation-continuous-delivery.

We give our thumbs up for both of these conferences and we hope to see you at either of these events next year as well! We always look forward to meeting people to discuss thoughts, experiences, problems and visions regarding Continuous Delivery, DevOps and agile software development methodologies. But if you don’t want to wait until next year, don’t hesitate to contact us!

Next week is conference week!

Next week will be a conference-heavy one in Stockholm! The inaugural CoDe Continuous Delivery & DevOps conference takes place on June 2nd while Agila Sverige (Agile Sweden) occupies June 3rd and 4th. Both conferences target agile software development practices and Diabol is very well represented at these events. We will have one speaker at each conference presenting talks related to Continuous Delivery.

Marcus Philip will present “Continuous Delivery for databases” at the CoDe conference, while Tommy Tynjä will give his talk “Next Generation Continuous Delivery” at Agila Sverige, where he takes a look at how the software of tomorrow can be built and delivered.

So if you’re interested in learning more about Continuous Delivery and what we do, don’t hesitate to come by!

See you on a conference very soon!

AWS Cloudformation introduction

AWS Cloudformation is a concept for modelling and setting up your Amazon Web Services resources in an automated and unified way. You create templates that describe your AWS resources, e.g. EC2 instances, EBS volumes, load balancers etc., and Cloudformation takes care of all provisioning and configuration of those resources. When Cloudformation provisions resources, they are grouped into something called a stack. A stack is typically represented by one template.

The templates are specified in JSON, which allows you to easily version control the templates describing your AWS infrastructure alongside your application code.

Cloudformation fits perfectly into Continuous Delivery ways of working, where the whole AWS infrastructure can be automatically set up and maintained by a delivery pipeline. There is no need for a human to provision instances, set up security groups, DNS record sets etc., as long as the Cloudformation templates are developed and maintained properly.

You use the AWS CLI to execute the Cloudformation operations, e.g. to create or delete a stack. What we’ve done at my current client is to put the AWS CLI calls into dedicated bash scripts, which are version controlled alongside the templates. This saves us from having to remember all the options and arguments necessary to perform the Cloudformation operations. We can also use those scripts both on developer workstations and in the delivery pipeline.
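
This post wraps the AWS CLI in bash scripts; purely as an illustration of the same create/delete stack operations, here is a hedged sketch using the AWS SDK for Java (the aws-java-sdk-cloudformation module) instead. The stack name and template path are hypothetical, and credentials and region are assumed to come from the default provider chain.

import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.DeleteStackRequest;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StackOperations {

    private final AmazonCloudFormation cloudFormation =
            AmazonCloudFormationClientBuilder.defaultClient(); // default credentials/region chain

    // Roughly the equivalent of "aws cloudformation create-stack ..."
    void createStack(String stackName, String templatePath) throws IOException {
        String templateBody = new String(Files.readAllBytes(Paths.get(templatePath)));
        cloudFormation.createStack(new CreateStackRequest()
                .withStackName(stackName)
                .withTemplateBody(templateBody));
    }

    // Roughly the equivalent of "aws cloudformation delete-stack ..."
    void deleteStack(String stackName) {
        cloudFormation.deleteStack(new DeleteStackRequest().withStackName(stackName));
    }

    public static void main(String[] args) throws IOException {
        StackOperations ops = new StackOperations();
        ops.createStack("myproject-staging", "templates/myproject.json"); // hypothetical names
    }
}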

A drawback of Cloudformation is that not all features of AWS are available through that interface, which might force you to create workarounds depending on your needs. For instance Cloudformation does not currently support the creation of private DNS hosted zones within a Virtual Private Cloud. We solved this by using the AWS CLI to create that private DNS hosted zone in the bash script responsible for setting up our DNS configuration, prior to performing the actual Cloudformation operation which makes use of that private DNS hosted zone.

As Cloudformation is a superb way of setting up resources in AWS, in contrast to managing those resources manually, e.g. through the web UI, you can actually enforce restrictions on your account so that resources can only be created through Cloudformation. This is something we currently use at my client for our production environment setup, to assure that the proper ways of working are followed.

I’ve created a basic Cloudformation template example which can be found here.

Tommy Tynjä
@tommysdk

Jfokus 2015 main takeaways

The biggest Java conference in Sweden, Jfokus, wrapped up its three-day conference on Wednesday. This was my fifth consecutive one and I thought it was the best in recent years. In this blog post I’ll summarize my main takeaways.

During the first day, the tutorial day, I truly enjoyed Ken Sipe from Mesosphere and his talk “Docker at Production Scale”. It was supposed to be a tutorial but it wasn’t focused on one subject, which one might expect from the title. The talk was divided into several parts, where he first talked about infrastructure in general and what is problematic in today’s datacenters regarding resource consumption. He then gave a good intro to Docker, where even I, who have been using Docker on a day-to-day basis for the last two months, learned a couple of new things. The last part of the talk focused on Apache Mesos, what problems it solves and how. He also briefly touched on subjects such as Service Discovery. For me, the main takeaway from this talk is that the way we structure our datacenters and our applications is definitely changing, and that there are a lot of exciting technologies taking form in this area right now.

Christian Heilmann held an interesting opening keynote where he talked a lot about mobile apps and their profitability. He pinpointed that many think apps are a sure way to make good money, but in reality that is not the case. Very few of all the apps in the marketplace are actually that profitable. Your company will have a greater chance of making money from apps if you target enterprise customers instead of individuals. Enterprises are constantly looking for ways for their employees to be more efficient, and certain apps could possibly help them achieve that. Individuals are also very conservative nowadays when it comes to installing new apps. Most people just go with the apps they have and install one or no new app per month. He had some interesting figures backing up those statements.

A great talk was “Thinking fast and slow in software development” by Daniel Bryant. His talk was based on the book Thinking, Fast and Slow by Daniel Kahneman, but applied to software development. One good quote from this talk was “look for actual problems instead of solutions”. As developers we tend to start elaborating solutions instead of trying to understand the actual problem. Are we really understanding what the actual problem is? Or do we just think we know what the problem is, when in fact we do not? He also mentioned that some of the most common factors for failure in software projects (source: IEEE) are poor communication among customers, developers and users, the use of immature technology and sloppy development practices. These are things that we definitely could do better at in our industry! There is no reason why we should accept this happening. We should all care more about software craftsmanship.

Jeremy Deane held an interesting presentation on concurrent processing techniques using e.g. plain java.util.concurrent constructs and actors. He had a few good tips on how to decouple a web service which under the hood depends on a slow-responding third-party web service, by making the communication between them asynchronous. All of his examples can be found in his GitHub repo.
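
As a flavour of the plain java.util.concurrent approach, here is a minimal, hedged sketch of decoupling from a slow third-party call: the call runs on a separate thread pool and the caller waits at most a bounded time before falling back. The slow call itself is simulated and the timeout values are made up; this is not taken from the talk’s examples.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AsyncThirdPartyCall {

    // Dedicated pool so slow third-party calls don't block request-handling threads.
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Simulates a slow third-party web service call.
    static String callThirdParty() throws InterruptedException {
        Thread.sleep(2_000);
        return "response from slow service";
    }

    public static void main(String[] args) throws Exception {
        Future<String> future = pool.submit(AsyncThirdPartyCall::callThirdParty);
        try {
            // Wait a bounded amount of time instead of hanging the caller indefinitely.
            String result = future.get(500, TimeUnit.MILLISECONDS);
            System.out.println(result);
        } catch (TimeoutException e) {
            System.out.println("fallback: third-party service too slow, responding without it");
        } finally {
            pool.shutdown();
        }
    }
}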

As usual at Jfokus, Arun Gupta presented what’s to come in the next Java EE version, in this case Java EE 8. Focus will be to improve the new features introduced in Java EE 7 to make them more usable. Support for HTTP 2 and HTML 5 will be added to help those technologies gain traction. Servlet 4.0 will be introduced as well as MVC 1.0. JAX-RS, JMS and JSON APIs will also get facelifts. The Batch processing API will not be tied to Java EE only but will in the future be available in conjunction with Java SE as well. Obviously Java EE 8 will contain improvements for the new language features introduced in Java 8, such as functional interfaces etc.

The second day kicked off with a talk by Jez Humble called “21st Century Software Delivery”. He really put emphasis on how important continuous delivery is, along with continuous experimentation. As an industry, we are bad at experimenting. We try to build this big thing that we think our customers want (but we don’t really know). Instead we should try out the thing we’re building at a small scale first, not only as a prototype but as a fully functional, nice and shiny feature that the customers will appreciate, but with very limited scope. We can then measure how well this experiment turns out by conducting A/B testing and comparing the analytics for the feature. Should we continue to build on top of that experiment or not? You wouldn’t go out to build a very large, complex building without building it at small scale first to assure that techniques and functionality match the expectations. We should do that in software development as well.

Jez Humble’s second talk, on automated acceptance testing, was valuable. For me, being passionate about delivering quality software, there wasn’t much new in his content, but it was still refreshing to get reminded of why we actually want to do testing in a certain way, even if you might do it already. Most people have probably encountered flaky tests. Those tests that for one reason or another go red, with or without an apparent reason, only to go green in a later build containing a totally unrelated code change. Flaky tests lead to less confidence and trust in the tests, which eventually leads to people ignoring them or not even running them at all. One interesting technique to battle flaky tests is to move known flaky tests to a separate suite, which is run separately from (preferably after) your actual suite. Then you can remain confident that your main test suite is always reliable. When a flaky test has become stable again, it should move back into the main suite. It is important to remember that a test suite shouldn’t be static. Tests should come and go and be moved appropriately alongside the development of the codebase. Also, do not be afraid to actually delete tests that no longer bring value. We are horrible at identifying and actually removing those tests.

In between talks I also enjoyed meeting a lot of people I’ve worked with in various projects throughout my career. It is always a real pleasure to catch up with old friends as well as to get to know a few new ones. I also got a brief chat with Jez Humble regarding continuous delivery and strategies for balancing maintenance and development of new features in highly autonomous teams with end-to-end responsibility, a discussion which was very much appreciated. Even though the vast majority of sessions were awesome, I almost wish there had been some more breaks so I could have had the opportunity for even more socializing.

All in all, a great event. Thanks to everyone involved in organizing this event and to all the speakers and attendees. See you in 2016!

Tommy Tynjä
@tommysdk