What is app oriented architecture?

I have had the opportunity to discuss IT solutions with business people at large governmental institutions and large companies over the last couple of days, and I have learned a new term: “app oriented architecture”. It is not just once or twice; it is more the rule than the exception that these people have been deeply influenced by applications on iPhone and Android.

– “We are looking at designing a central data storage where we have all the business logic and data, and then creating small applications around that for all the functionality we need. Like this (a mandatory finger pointing at a phone, every time): small, simple applications that can be replaced and easily developed by contractors.”

To me it sounds a bit like SOA, but these people don’t share that view:

– “We had a SOA project a couple of years ago, it was a complete failure, we will do it right this time.”

Ok, being a software developer and architect, I had never thought about iPhone applications as something you should base an IT architecture around. But clearly there is something about the iPhone that makes people think in new ways.

So what is app oriented architecture then? Well, when you think about it, the idea is not all bad; in fact it is really, really good. Not in a development/technical way, but in a pedagogical way. Suddenly we have a means to talk about functionality and architecture in a way that the non-geeks can understand, because suddenly they understand that functionality should be lightweight, contained, based on a common platform, and available as a packaged solution on demand. And when I started to think more about that, it became clear that the geeks (developers and architects) can learn from it as well: less integration, clearer interfaces, user-driven deployments, unused apps get outdated, improved rollout of upgrades, etc. The IT department could be organized as an “appstore”, providing in-house and near-house developed functionality packaged as components and deployed on user demand.

But why do the business people think that this is all better than SOA (and why do I agree)? Well, the main problem with SOA is that it is too technical; even the name, Service Oriented Architecture, talks about a technical solution on the IT side of the problem, not the business side. So SOA takes a technical approach to solving a business problem and at the same time limits the IT department to a heavy, rigid platform of ESBs and web services.

The paradigm shift here is that it has suddenly become clear to business people what a “service” really is. It is not a WebService interface on the financial system, or a security service in an expensive product, or some other technical beast. It is a piece of functionality where it is clear to everyone what it does and why – just like “I use this app to look at the weather forecast”. Suddenly business people can talk about business (not weather) and geeks can focus on putting that business into systems. If we have to call it app oriented architecture, then that is fine by me. As long as I’m not forced to use web services or ESBs just because a “SOA provider” needs to make money.

Get started with AWS Elastic Beanstalk

Amazon Web Services (AWS) launched a beta of their new concept Elastic Beanstalk in January. AWS Elastic Beanstalk allows you to set up, in a few clicks, a new environment where you can deploy your application. Say you have a development team building a web app running on Tomcat and you need a test server where you can test your application. In a few simple steps you can set up a new machine with a fresh installation of Tomcat where you can deploy your application. You can even use the AWS Elastic Beanstalk command line client to deploy your application as simply as:

$ elastic-beanstalk-update-application -a my_app.war -d "Application description"

I had the opportunity to try it out and I would like to share how you get started with the service.

As with other AWS cloud based services, you only pay for the resources your application consumes, and if you’re only interested in a short evaluation, it will probably only cost you a couple of US dollars. This first release of Elastic Beanstalk is targeted at Java developers who are familiar with the Apache Tomcat software stack. The concept is simple: you upload your application (e.g. a war-packaged web application) to an Elastic Beanstalk instance through the AWS web interface called the Management Console. The Management Console allows you to handle versioning of your applications, monitoring and log viewing straight through the web interface. It also provides load balancing and scaling out of the box. The Elastic Beanstalk service is currently running on a 32-bit Amazon Linux AMI using Apache Tomcat 6.0.29.
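If your project happens to be built with Maven, producing the war file to upload is typically a single command. A minimal sketch, assuming a standard war packaging setup; the resulting artifact name depends entirely on your own pom.xml:

$ mvn clean package

The war file under target/ is then what you upload through the Management Console, or deploy with the command line client as shown above.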

But what if you would like to customize the software stack your application is running on? Common tasks you might want to perform are adding jar files to the Tomcat lib directory, configuring connection pooling capabilities, editing the Tomcat server.xml or even installing third party products such as ActiveMQ. Fortunately, all of this is possible! You first have to create your custom AMI (Amazon Machine Image). Go to the EC2 tab in the Management Console, select your default Elastic Beanstalk instance and select Instance Actions > Create Image (EBS AMI). You should then see your custom image under Images / AMIs in the left menu, with a custom AMI ID. Back in the Elastic Beanstalk tab, select Environment Details for the environment you want to customize and select Edit Configuration. Under the Server tab, you can specify a Custom AMI ID for your instance, which should refer to the AMI ID of your newly created custom image. After applying the changes, your environment will “reboot”, now running your custom AMI.
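If you prefer the command line over the Management Console, the classic EC2 API tools can create the image as well. This is only a sketch: it assumes the API tools are installed and configured with your AWS credentials, and the instance id, image name and description below are placeholders:

$ ec2-create-image i-1a2b3c4d -n "my-beanstalk-ami" -d "Beanstalk AMI with Tomcat customizations"

The command prints the AMI ID of the new image, which is the value to enter in the Custom AMI ID field.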

Now you might ask yourself, what IP address does my instance actually have? Well, you have to assign an IP address to it first. You can then use either this IP or the DNS name to log in to your instance. AWS uses a concept called Elastic IPs, which means that they provide a pool of IP addresses from which you can get a random free IP address to bind to your current instance. If you want to release the IP address from your instance, that is just as easy. All of this is done straight from the Management Console in the EC2 tab under Elastic IPs. You just select Allocate Address and then bind this Elastic IP to your instance. To prevent users from having unused IP addresses lying around, AWS charges your account for every unbound Elastic IP, which might end up a costly experience. So make sure you release your Elastic IP address back to the pool if you’re not using it.
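The Elastic IP handling can also be scripted with the EC2 API tools if you prefer. Again just a sketch, assuming the tools are installed and configured; the IP address and instance id are placeholders:

$ ec2-allocate-address
$ ec2-associate-address 184.73.226.161 -i i-1a2b3c4d
$ ec2-release-address 184.73.226.161

ec2-allocate-address prints the address you were given from the pool, ec2-associate-address binds it to an instance, and ec2-release-address hands it back so you are no longer charged for it.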

To be able to log in to your machine, you will have to generate a key pair, which is used as an authentication token. You generate a key pair in the EC2 tab under Networking & Security / Key Pairs. Then go back to the Elastic Beanstalk tab, select Environment Details for your environment and attach your key pair by providing it in the Server > Existing Key Pair field. You then need to download the key file (with a .pem extension by default) to the machine you will actually connect from. You also need to open the firewall for your instance to allow connections from the IP address you are connecting from. Do that by creating a security group under Networking & Security / Security Groups on the EC2 tab. Make sure to allow SSH over TCP on port 22 for the IP address you are connecting from. Then attach the security group to your Elastic Beanstalk environment by going to the Elastic Beanstalk tab and selecting Environment Details > Edit Configuration for your environment. Add the security group in the EC2 Security Group field on the Server tab.
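As a command line alternative, the key pair and security group can be created with the EC2 API tools as well. A sketch, with placeholder names and a placeholder source IP address:

$ ec2-add-keypair my-beanstalk-key
$ ec2-add-group my-beanstalk-ssh -d "SSH access to Beanstalk instances"
$ ec2-authorize my-beanstalk-ssh -P tcp -p 22 -s 192.0.2.10/32

ec2-add-keypair prints the key pair, including the private key, on standard output; save the private key part to a .pem file and chmod it to 600 before using it with ssh. The /32 suffix limits SSH access to a single source address, so replace 192.0.2.10 with the address you are connecting from.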

So now you’re running a custom image (which is actually a replica of the default one), which has an IP address, a security group and a key pair assigned to it. Now it is time to log in to the machine and do some actual customization! Log in to your machine using ssh and the key you downloaded from your AWS Management Console, e.g.:

$ ssh -i /home/tommy/mykey.pem ec2-user@ec2-MY_ELASTIC_IP.compute-1.amazonaws.com

… where MY_ELASTIC_IP is the IP address of your machine, such as:

$ ssh -i /home/tommy/mykey.pem ec2-user@ec2-184-73-226-161.compute-1.amazonaws.com

Please note the hyphens instead of dots in the IP address! Then you’re good to go and can start customizing your machine! If you would like to save these customizations to a new image (AMI), just create a new AMI from the current instance through the Management Console, as described above. You can then use this image when booting up new environments. You will find Tomcat under /usr/share/tomcat6/. See below for a login example:

tommy@linux /host $ ssh -i /home/tommy/mykey.pem ec2-user@ec2-184-73-226-161.compute-1.amazonaws.com
The authenticity of host 'ec2-184-73-226-161.compute-1.amazonaws.com (184.73.226.161)' can't be established.
RSA key fingerprint is 13:7d:aa:31:5c:3b:17:ed:74:6d:87:07:23:ee:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-184-73-226-161.compute-1.amazonaws.com,184.73.226.161' (RSA) to the list of known hosts.
Last login: Thu Feb  3 23:48:38 2011 from 72-21-198-68.amazon.com

 __|  __|_  )  Amazon Linux AMI
 _|  (     /     Beta
 ___|\___|___|

See /usr/share/doc/amzn-ami/image-release-notes for latest release notes.
[ec2-user@ip-10-122-194-97 ~]$  sudo su -
[root@ip-10-122-194-97 /]# ls -al /usr/share/tomcat6/
total 12
drwxrwxr-x  3 root tomcat 4096 Feb  3 23:50 .
drwxr-xr-x 64 root root   4096 Feb  3 23:50 ..
drwxr-xr-x  2 root root   4096 Feb  3 23:50 bin
lrwxrwxrwx  1 root tomcat   12 Feb  3 23:50 conf -> /etc/tomcat6
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 lib -> /usr/share/java/tomcat6
lrwxrwxrwx  1 root tomcat   16 Feb  3 23:50 logs -> /var/log/tomcat6
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 temp -> /var/cache/tomcat6/temp
lrwxrwxrwx  1 root tomcat   24 Feb  3 23:50 webapps -> /var/lib/tomcat6/webapps
lrwxrwxrwx  1 root tomcat   23 Feb  3 23:50 work -> /var/cache/tomcat6/work
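Once logged in (and after sudo su - as above), the customization itself is plain file system work. A sketch of two of the tasks mentioned earlier; the jar file name is a hypothetical example, and the paths follow the symlinks listed above:

$ sudo cp /tmp/some-library.jar /usr/share/tomcat6/lib/
$ sudo vi /etc/tomcat6/server.xml
$ sudo service tomcat6 restart

Tomcat appears to be installed as the standard tomcat6 service on this image, so restarting it this way should make it pick up the new jar and configuration. Remember to create a new AMI afterwards if you want to keep the changes for future environments.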
Tommy Tynjä
@tommysdk

Deployment pipelines and Continuous Delivery

A deployment pipeline is, to a large extent, an assembly line for introducing changes into an existing system. Or, if you like, automation of the release process.

What is the benefit of pipelines? There are plenty, but two fundamental mechanisms stand out. The first is that an automated pipeline requires you to actually map out and establish what your release process is. This is where the biggest difficulty/benefit lies. Just modelling the process and agreeing on which steps should exist, and which activities belong to which step, will make the work go incredibly much more smoothly. Plugging in a tool that performs some of the steps automatically is then more or less a bonus, and gives a good overview of which steps the process consists of. Far too many do it the other way around and start with the tool. My experience is that this never turns out well.

The second is that it promotes DevOps collaboration, since everyone gets insight into what happens on the way from development to production. A pipeline runs straight through Dev-QA-Ops, which is essential for getting everyone to work towards the same goal. The dev side gets an “API” towards the release process, which keeps them within the boundaries set by the production environment. Implemented correctly, the QA department gets a button to push to put any version into a test environment. The Ops department’s work also becomes more focused on building automated functions (robots on the assembly line) that support the release process, instead of recurring manual tasks that have to be repeated for every release. You have then gone from being a pure cost to investing in your assembly line.
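To make the assembly line metaphor concrete: a deployment pipeline is essentially a small number of explicit, ordered stages where each stage gates the next. The sketch below is purely conceptual and tool-agnostic; the stage scripts (build.sh, deploy.sh, smoke-test.sh) are hypothetical placeholders:

#!/bin/sh
# Conceptual deployment pipeline: each stage must succeed before the next one runs.
set -e
VERSION=$1                    # the version to promote, e.g. picked by QA

./build.sh "$VERSION"         # commit stage: compile, unit test, package
./deploy.sh test "$VERSION"   # automated deployment to the test environment
./smoke-test.sh test          # automated smoke/acceptance tests
echo "$VERSION is ready for manual approval to production"

In practice, the products listed below model these stages for you and add visualization, traceability and access control on top.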

There is currently a wealth of products that handle pipelines, and new ones appear all the time.

ThoughtWorks has its flagship Go (formerly Cruise), which is built entirely around pipelines.
Jenkins/Hudson requires a plugin to handle the same thing; the advantage is that both Jenkins and the plugin are free to use as open source.
Atlassian’s Bamboo can be combined with Jira and a plugin from SysBliss to create pipelines.
If you want better control over who does what, Nolio is a product that can handle user permissions in connection with pipelines.
AnthillPro is yet another product that is very good at building advanced deployment automation (pipelines).

Pipelines are very useful, but there are a few traps to fall into, so you need to do your homework, stay pragmatic and keep your wits about you.

If you want to read more about this, then besides following this blog, you can read Jez Humble’s book: Continuous Delivery.