Tag Archives: aws

AWS Summit recap

This week, the annual AWS Summit took place in sunny Stockholm. This article aims to provide a recap of my impressions from the event.

It was evident that the event had grown since last year, with approximately 2000 people attending this year’s one-day event at the Waterfront Congress Centre. Only a few sessions were technical, as most of the presentations gave an overview of the different services and various use cases. I really appreciated the talks from AWS customers who described how they use AWS technologies, which problems they have solved and how. I found it valuable to hear from different companies how they leverage certain products in their production environments.

The opening keynote was long (2 hours!) and included a lot of sales talk. The main keynote speaker mentioned that 20 percent of the audience had never used any AWS services at all, which explains the thorough walkthrough of the different AWS products. One product that stood out was Amazon Inspector, which helps you detect and remediate security issues early in your AWS environment. It is not yet available in all regions, but it is available in e.g. eu-west-1 (Ireland). It was also interesting to hear about migrating large amounts of data using Snowball, a physical device shipped to your datacenter, which lets you move your data faster than transferring it over the Internet (not counting the time it takes to ship the device to and from your own datacenter).

It is undeniable that the Internet of Things (IoT) is gaining traction and that the number of connected devices has grown exponentially over the past few years. AWS provides several services for developing and running IoT services. With AWS IoT, your devices can securely communicate with your backend servers. What I found most interesting was the concept of device shadows. A shadow is an interface that allows you to communicate with a device even when it is offline. In your application, you communicate with the shadow without having to care whether the device is online or not. If you want to change the state of a device that is currently offline, you update its shadow, and when the device connects again it picks up the new desired state from the shadow.
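As a rough illustration of the concept, the desired state can be written to a shadow with the AWS CLI; a minimal sketch where the thing name livingroom-lamp and the power attribute are hypothetical:

# Write the desired state to the shadow; the device applies it when it reconnects.
aws iot-data update-thing-shadow \
    --thing-name livingroom-lamp \
    --payload '{"state": {"desired": {"power": "on"}}}' \
    update-output.json

# Read back the full shadow document, including the delta between desired and reported state.
aws iot-data get-thing-shadow --thing-name livingroom-lamp shadow.json
cat shadow.json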

At the startup track, we got to hear how Mojang leverages AWS for their Minecraft Realms concept. Instead of letting external parties host their game servers, they decided to go with AWS for Minecraft Realms, to allow for a more flexible infrastructure. An interesting aspect is that they had to develop their own algorithm for scaling out quickly, since in a gaming environment it is not acceptable to wait five minutes for an auto scaling group to spin up new machines to meet the current user demand. Instead, they have to use quite large instance types and keep new servers on standby to be able to take on new traffic as it arrives. Nor is it trivial to terminate instances that still have people playing on them, even if only a few, as that wouldn’t provide a good user experience. Instead, they kindly inform the users that the server will terminate in five minutes, which usually makes them switch servers. Not ideal, but live migration is too far away at the moment. They still run on old EC2-Classic instances and will have to do some heavy lifting to modernise their stack on AWS.

There was also a presentation from QuizUp on how they use infrastructure as code with Terraform to manage their AWS resources. A benefit they get from using Terraform instead of Cloudformation is an execution plan showing what will change before the changes are actually applied. A drawback is that you cannot query Terraform for the current resources and their state directly from AWS; Terraform only knows about the resources recorded in its own state.
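For reference, this is roughly what that workflow looks like (a sketch, assuming a directory that already contains Terraform configuration):

terraform plan -out=changes.tfplan    # show the execution plan without changing anything
terraform apply changes.tfplan        # apply exactly the plan that was reviewed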

In the world of relational databases in AWS (RDS), Aurora is an AWS-developed database engine designed to maximise reliability, scalability and cost-effectiveness. It delivers up to five times the throughput of standard MySQL running on the same hardware. It is designed to scale and to handle failures, and it even provides an SQL extension for simulating failures:
ALTER SYSTEM CRASH [ INSTANCE | DISPATCHER | NODE ];

ALTER SYSTEM SIMULATE percentage_of_failure PERCENT
    [ READ REPLICA FAILURE | DISK FAILURE | DISK CONGESTION ]
    FOR INTERVAL quantity [ YEAR | QUARTER | MONTH | WEEK | DAY | HOUR | MINUTE | SECOND ];

Probably the most interesting session of the day was about serverless architecture using AWS Lambda. Lambda allows you to upload snippets of code, functions, which AWS runs for you. There is no need to provision servers or think about scalability; AWS handles that for you, and you only pay for the time your code executes, in units of 100 ms. The best thing about this talk was the peek under the hood. AWS leverages Linux containers (not Docker) to isolate the resources of the uploaded functions and to be able to run and scale them quickly. It also offers predictive capacity planning. An interesting detail is that you can upload the libraries your code depends on as part of your function, so you could basically run a small microservice just by using Lambda. To deploy your function, you package it in a zip archive and use Cloudformation (specifying the resource type AWS::Lambda::Function). You are able to run your function inside your VPC and thus leverage other resources available within it.
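To make that deployment flow concrete, here is a minimal sketch using the AWS CLI. All names in it (the handler file, the S3 bucket, the IAM role ARN and the stack name) are hypothetical placeholders, and the execution role is assumed to exist already:

# Package the function (a Python handler in this sketch) and upload it to S3.
zip function.zip handler.py
aws s3 cp function.zip s3://my-lambda-bucket/function.zip

# Minimal Cloudformation template declaring the function as AWS::Lambda::Function.
cat > lambda-stack.json <<'EOF'
{
  "Resources": {
    "MyFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Handler": "handler.handle",
        "Runtime": "python2.7",
        "Role": "arn:aws:iam::123456789012:role/my-lambda-execution-role",
        "Code": { "S3Bucket": "my-lambda-bucket", "S3Key": "function.zip" }
      }
    }
  }
}
EOF

# Create the stack, which provisions the function.
aws cloudformation create-stack --stack-name my-lambda-stack \
    --template-body file://lambda-stack.json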

All in all, I thought this was a great event. If you didn’t attend, I really recommend attending the next one – especially if you’re already using AWS.

As Diabol is a standard partner with Amazon, we can not only assist you with your cloud platform strategy but also help you tie it together with the full view of your systems development process. Don’t hesitate to contact us!

You can read more about us at diabol.se.

Tommy Tynjä
@tommysdk

Diabol helps Klarna develop a new platform and become experts in Continuous Delivery

Since its start in 2005, Klarna has seen strong growth and has in a very short time become a company with over 1000 employees. To meet the global market’s demand for its payment services, Klarna needed to make major changes to technology, organisation and processes. Klarna engaged consultants from Diabol to reach its ambitious goals of developing a new service platform and becoming a leader in DevOps and Continuous Delivery.

Challenge

After great success in the Nordic market and several years of strong growth, Klarna needed to develop a new platform for its payment services in order to address the global market. The new platform had to handle millions of transactions daily and be robust and scalable, while at the same time supporting an agile way of working with rapid changes in a growing organisation. The schedule was very challenging, and in addition to developing all the services, both ways of working and infrastructure had to change to meet the challenges of high scalability and short lead times.

Solution

Diabol’s experienced consultants, with expert competence in Java, DevOps and Continuous Delivery, were entrusted with reinforcing the development teams building the new platform, while also automating the release process using, among other things, cloud technology from Amazon AWS. Competence in automation and tooling was also built up in an internal support team, whose purpose was to support the development teams with tools and processes so that they could deliver their services quickly, safely and in an automated fashion, independently of each other. Diabol had a central role in this team and acted as a coach for Continuous Delivery and DevOps broadly across the development and operations organisation.

Results

Klarna was able to go live with the new platform in record time and launch in several large international markets. Autonomous development teams with a strong focus on delivery can today deliver changes and new functionality to production on their own, fully automatically, and when needed several times a day.

Comprehensive automated test suites run continuously on every code change, and test environments in AWS are also provisioned fully automatically. Some teams even practise continuous deployment and deliver code changes to their production environments without any manual intervention at all.

“Diabol has been a key player in reaching our ambitious goals in DevOps and Continuous Delivery.”

– Tobias Palmborg, Manager Engineering Support, Klarna

Diabol migrates Abdona to AWS and introduces an automated delivery process

Abdona provides business travel management services to a number of public sector organisations. In connection with a larger development effort, they also wanted to overhaul the infrastructure for production and test environments in order to reduce costs and be able to reliably guarantee high quality and short delivery times. Diabol was engaged with an end-to-end commitment to modernise the infrastructure, development environment and the test and delivery process.

Challenge

Abdona’s system consists of a classic three-tier architecture in Java Enterprise, and since its launch seven years ago only minor updates have been made. The technology and infrastructure had not been kept up to date and had over time become outdated and hard to manage: manually configured servers, inadequate documentation and traceability, scant version control, no continuous integration or stable build environment, and manual testing and deployment. On top of these structural problems, the cost of the manually provisioned hardware for both test and production environments was unjustifiably high compared to today’s cloud-based alternatives.

Solution

Diabol started by mapping out the problems and, first and foremost, taking control of the code base, which was spread across several version control systems. All code was moved to Atlassian Bitbucket, and a build server running Jenkins was set up to build and test the system in a repeatable way. Nexus was chosen to manage dependencies and archive the artifacts produced by the build server. The infrastructure was migrated to Amazon AWS, both for cost reasons and to be able to take advantage of modern automation tooling and the possibilities of dynamic infrastructure. The application layer was moved to EC2 and the database to RDS. Terraform was chosen to automate the provisioning of AWS resources, and Puppet was introduced for automated configuration management of the servers. A complete delivery pipeline with automated deployment was implemented in Jenkins.

Results

The migration to Amazon AWS has led to drastically reduced operating costs for Abdona. In addition, they now have a scalable, modern infrastructure, full traceability and an automated delivery pipeline that guarantees high quality and short lead times. The entire system can be recreated from the code base, and test environments can be created fully automatically when needed.

Top class Continuous Delivery in AWS

Last week Diabol arranged a workshop in Stockholm where we invited Amazon together with Klarna and Volvo Group Telematics that both practise advanced Continuous Delivery in AWS. These companies are in many ways pioneers in this area as there is little in terms of established practices. We want to encourage and facilitate cross company knowledge sharing and take Continuous Delivery to the next level. The participants have very different businesses, processes and architectures but still struggle with similar challenges when building delivery pipelines for AWS. Below follows a short summary of some of the topics covered in the workshop.

Centralization and standardization vs. fully autonomous teams

One of the most interesting discussions among the participants wasn’t even technical but covered the differences in how they are organized and how that affects their work with Continuous Delivery and AWS. Some come from a traditional functional organisation and have placed their delivery teams somewhere in between the development teams and the operations team. The advantage is that they have been able to standardize the delivery platform to a large extent and reach a very high level of reuse. They have built custom tools and standardized services that all teams are more or less forced to use. This approach depends on being able to stay at least one step ahead of the dev teams and on being able to scale out to many dev teams without increasing headcount. One problem with it is that it is hard to build deep AWS knowledge out in the dev teams, since they feel detached from the technical implementation. Others have a very explicit strategy of team autonomy, where each team basically is in charge of its complete process all the way to production. In this case each team must have quite deep competence both in AWS and in how the delivery pipelines are set up. The production awareness is extremely high and you can, for example, visualize each team’s cost of AWS resources. One problem with this approach is a lower level of reusability and difficulties in sharing knowledge and implementation between teams.

Both of these approaches have pros and cons, but in the end I think fewer silos and more team empowerment wins. If you can manage that and still build a common delivery infrastructure that scales, you are in a very good position.

Infrastructure as code

Another topic that was thoroughly covered was different ways to deploy both applications and infrastructure to AWS. CloudFormation is popular and very powerful but has its shortcomings in some scenarios. One participant felt that CF is too verbose and noisy and has built their own YAML configuration language on top of it. They have been able to do this since they have strong standardization of their micro-service architecture and the deployment structure that follows from it. Other participants had the same problem with CF being too noisy and have broken out a large portion of configuration from the stack templates into Ansible, leaving just the core infrastructure resources in CF. This also allows them to apply different deployment patterns and more advanced orchestration. We also briefly discussed third-party tools, e.g. Terraform, but the general opinion was that they all have a hard time keeping up with new features in AWS. On the other hand, if you have infrastructure outside AWS that needs to be managed in conjunction with what you have in AWS, Terraform might be a compelling option. Both participants expressed that they would like to see some kind of execution plan / dry-run feature in CF, much like Terraform has.

Docker on AWS

Use of Docker is growing quickly right now and was unsurprisingly a hot topic at the workshop. One participant described how they deploy their micro-services in Docker containers, with the obvious advantage of them being portable and lightweight (compared to baking AMIs). This is however done with stand-alone EC2 instances using a shared common base AMI and not on ECS, an approach that adds redundant infrastructure layers to the stack. They have just started exploring ECS and it looks promising, but questions around how to manage centralized logging, monitoring, disk encryption etc. are still open. Docker is a very compelling deployment alternative, but both Docker itself and the surrounding infrastructure need to mature a bit more; e.g. docker push takes an unreasonably long time and easily becomes a bottleneck in your delivery pipelines. Another pain is the need for a private Docker registry, which at this level of continuous delivery needs to be highly available and secure.
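For reference, pushing an image to a self-hosted registry looks roughly like this (the registry host and image name below are hypothetical), and it is this step that tends to become the bottleneck:

# Tag the locally built image for the private registry and push it.
docker tag my-service:1.0 registry.example.com:5000/my-service:1.0
docker push registry.example.com:5000/my-service:1.0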

What’s missing?

The discussions also identified some feature requests for Amazon to bring home. For example, we discussed security quite a lot and got into the technicalities of IAM roles, accounts, security groups etc. It was expressed that there might be a need for explicit compliance checks and controls as a complement to cruder approaches such as penetration testing. You can certainly do this by extracting information from the APIs and processing it according to your specific compliance rules, but it would be nice if there was higher-level support for this from AWS.

We also discussed canary releasing and A/B testing. Since these are becoming more common practice, it would be nice if Amazon could provide more services to support them, e.g. content-based routing and more sophisticated analytics tools.

Next step

All in all, I think the workshop was very successful and the discussions and experience sharing were valuable to all participants. Diabol will continue to push Continuous Delivery maturity in the industry by arranging meetups and workshops and by involving more companies that can contribute to and benefit from this collaboration.

AWS Cloudformation introduction

AWS Cloudformation is a concept for modelling and setting up your Amazon Web Services resources in an automated and unified way. You create templates that describe your AWS resources, e.g. EC2 instances, EBS volumes, load balancers etc., and Cloudformation takes care of all provisioning and configuration of those resources. When Cloudformation provisions resources, they are grouped into something called a stack, which is typically represented by one template.

The templates are specified in JSON, which allows you to easily version control the templates describing your AWS infrastructure alongside your application code.

Cloudformation fits perfectly into a Continuous Delivery way of working, where the whole AWS infrastructure can be automatically set up and maintained by a delivery pipeline. There is no need for a human to provision instances, set up security groups, create DNS record sets etc., as long as the Cloudformation templates are developed and maintained properly.

You use the AWS CLI to execute the Cloudformation operations, e.g. to create or delete a stack. What we’ve done at my current client is to put the AWS CLI calls into bash scripts, which are version controlled alongside the templates. This means we don’t have to remember all the options and arguments necessary to perform the Cloudformation operations, and we can use the same scripts both on developer workstations and in the delivery pipeline.
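As an illustration, such a wrapper script can be as small as the sketch below; the stack name, template path and parameter are hypothetical:

#!/bin/bash
# create-stack.sh - thin wrapper around the AWS CLI so nobody has to remember
# the exact arguments; version controlled next to the Cloudformation templates.
set -e

STACK_NAME=my-app-stack
TEMPLATE=cloudformation/my-app.json

aws cloudformation create-stack \
    --stack-name "$STACK_NAME" \
    --template-body "file://$TEMPLATE" \
    --parameters ParameterKey=Environment,ParameterValue="${1:-test}"

# Print the stack status so the caller (developer or pipeline) gets feedback.
aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
    --query 'Stacks[0].StackStatus' --output text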

A drawback of Cloudformation is that not all AWS features are available through that interface, which might force you to create workarounds depending on your needs. For instance, Cloudformation does not currently support the creation of private DNS hosted zones within a Virtual Private Cloud. We solved this by using the AWS CLI to create the private DNS hosted zone in the bash script responsible for setting up our DNS configuration, prior to performing the actual Cloudformation operation that makes use of that hosted zone.
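That workaround boils down to a single AWS CLI call before the Cloudformation operation runs; a sketch where the zone name, VPC id, region and caller reference are placeholders:

# Create a private hosted zone attached to the VPC, before Cloudformation runs.
aws route53 create-hosted-zone \
    --name internal.example.com \
    --vpc VPCRegion=eu-west-1,VPCId=vpc-0123abcd \
    --caller-reference "internal-zone-$(date +%s)"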

As Cloudformation is a superb way of setting up resources in AWS, in contrast to managing those resources manually, e.g. through the web UI, you can actually enforce restrictions on your account so that resources can only be created through Cloudformation. This is something we currently use at my client for the production environment setup, to ensure that the proper ways of working are followed.

I’ve created a basic Cloudformation template example which can be found here.

Tommy Tynjä
@tommysdk

Get started with AWS Elastic Beanstalk

Amazon Web Services (AWS) launched a beta of their new concept Elastic Beanstalk in January. AWS Elastic Beanstalk allows you to set up, in a few clicks, a new environment where you can deploy your application. Say you have a development team building a web app running on Tomcat and you need a test server where you can test the application. In a few simple steps you can set up a new machine with a fresh installation of Tomcat and deploy your application to it. You can even use the AWS Elastic Beanstalk command line client to deploy your application as simply as:

$ elastic-beanstalk-update-application -a my_app.war -d "Application description"

I had the opportunity to try it out and I would like to share how you get started with the service.

As with other AWS cloud based services, you only pay for the resources your application consumes, and if you’re only interested in a short evaluation, it will probably only cost you a couple of US dollars. This first release of Elastic Beanstalk is targeted at Java developers who are familiar with the Apache Tomcat software stack. The concept is simple: you upload your application (e.g. a war-packaged web application) to an Elastic Beanstalk instance through the AWS web interface, called the Management Console. The Management Console allows you to handle versioning of your applications, monitoring and log viewing straight through the web interface. It also provides load balancing and scaling out of the box. The Elastic Beanstalk service is currently running on a 32-bit Amazon Linux AMI using Apache Tomcat 6.0.29.

But what if you would like to customize the software stack your application is running on? Common tasks you might want to perform are adding jar files to the Tomcat lib directory, configuring connection pooling, editing the Tomcat server.xml or even installing third-party products such as ActiveMQ. Fortunately, all of this is possible! You first have to create your custom AMI (Amazon Machine Image). Go to the EC2 tab in the Management Console, select your default Elastic Beanstalk instance and select Instance Actions > Create Image (EBS AMI). You should then see your custom image under Images / AMIs in the left menu, with a custom AMI ID. Back in the Elastic Beanstalk tab, select Environment Details for the environment you want to customize and select Edit Configuration. Under the Server tab, you can specify a Custom AMI ID for your instance, which should refer to the AMI ID of your newly created custom image. After applying the changes, your environment will “reboot”, running your custom AMI.

Now you might ask yourself, what IP address does my instance actually have? Well, you have to assign an IP address to it first. You can then use either this IP or the DNS name to log in to your instance. AWS uses a concept called Elastic IPs, which means that they provide a pool of IP addresses from which you can get a free IP address to bind to your instance. Releasing the IP address from your instance is just as easy. All of this is done straight from the Management Console in the EC2 tab under Elastic IPs: you select Allocate Address and then bind the Elastic IP to your instance. To prevent users from having unused IP addresses lying around, AWS charges your account for every unbound Elastic IP, which might end up being a costly experience. So make sure you release your Elastic IP address back to the pool if you’re not using it.
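The same allocate/associate/release flow can also be scripted; a rough sketch with the AWS command line tools, where the instance id and IP address are placeholders:

aws ec2 allocate-address                                                       # get a free Elastic IP from the pool
aws ec2 associate-address --instance-id i-12345678 --public-ip 203.0.113.10   # bind it to your instance
aws ec2 release-address --public-ip 203.0.113.10                              # release it when no longer needed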

To be able to log in to your machine, you will have to generate a key pair, which is used as an authentication token. You generate a key pair in the EC2 tab under Networking & Security / Key Pairs. Then go back to the Elastic Beanstalk tab, select Environment Details for your environment and attach your key pair by providing it in the Server > Existing Key Pair field. You then need to download the key file (with a .pem extension by default) to the machine you will actually connect from. You also need to open the firewall for your instance to allow connections from the IP address you are connecting from. Do that by creating a security group under Networking & Security / Security Groups on the EC2 tab. Make sure to allow SSH over tcp on port 22 for the IP address you are connecting from. Then attach the security group to your Elastic Beanstalk environment by going to the Elastic Beanstalk tab and selecting Environment Details > Edit Configuration for your environment. Add the security group in the EC2 Security Group field on the Server tab.

So now you’re currently running a custom image (which is actually a replica of the default one) which has an IP address, a security group and a key pair assigned to it. Now is the time to log in to the machine and do some actual customization! Log in to your machine using ssh and the key you downloaded from your AWS Management Console, e.g.:

$ ssh -i /home/tommy/mykey.pem ec2-user@ec2-MY_ELASTIC_IP.compute-1.amazonaws.com

… where MY_ELASTIC_IP is the IP address of your machine, such as:

$ ssh -i /home/tommy/mykey.pem ec2-user@ec2-184-73-226-161.compute-1.amazonaws.com

Please note the hyphens instead of dots in the IP address! Then, you’re good to go and start customizing your current machine! If you would like to save these configurations onto a new image (AMI), just copy the current AMI through the Management Console. You can then use this image when booting up new environments. You find Tomcat under /usr/share/tomcat6/. See below for a login example:

tommy@linux /host $ ssh -i /home/tommy/mykey.pem ec2-user@ec2-184-73-226-161.compute-1.amazonaws.com
The authenticity of host 'ec2-184-73-226-161.compute-1.amazonaws.com (184.73.226.161)' can't be established.
RSA key fingerprint is 13:7d:aa:31:5c:3b:17:ed:74:6d:87:07:23:ee:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-184-73-226-161.compute-1.amazonaws.com,184.73.226.161' (RSA) to the list of known hosts.
Last login: Thu Feb  3 23:48:38 2011 from 72-21-198-68.amazon.com

 __|  __|_  )  Amazon Linux AMI
 _|  (     /     Beta
 ___|\___|___|

See /usr/share/doc/amzn-ami/image-release-notes for latest release notes.
[ec2-user@ip-10-122-194-97 ~]$  sudo su -
[root@ip-10-122-194-97 /]# ls -al /usr/share/tomcat6/
total 12
drwxrwxr-x  3 root tomcat 4096 Feb  3 23:50 .
drwxr-xr-x 64 root root   4096 Feb  3 23:50 ..
drwxr-xr-x  2 root root   4096 Feb  3 23:50 bin
lrwxrwxrwx  1 root tomcat   12 Feb  3 23:50 conf -> /etc/tomcat6
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 lib -> /usr/share/java/tomcat6
lrwxrwxrwx  1 root tomcat   16 Feb  3 23:50 logs -> /var/log/tomcat6
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 temp -> /var/cache/tomcat6/temp
lrwxrwxrwx  1 root tomcat   24 Feb  3 23:50 webapps -> /var/lib/tomcat6/webapps
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 work -> /var/cache/tomcat6/work
Tommy Tynjä
@tommysdk