Apr 01 2014

Recent blogs about the Delivery Pipeline plugin

Posted by: MarcusPhilip @ 10:45

The Delivery Pipeline plugin from Diabol is getting some traction, now with over 600 installations. Here are some recent blog posts about it.

The first one is from none other than Mr Jenkins himself, Kohsuke Kawaguchi, together with Andrew Phillips, VP of Products at XebiaLabs:

InfoQ: Orchestrating Your Delivery Pipelines with Jenkins

The second is about a first experience with the Jenkins/Hudson Build and Delivery Pipeline plugins:

Oracle SOA / Java blog: The Jenkins Build and Delivery Pipeline plugins

Marcus Philip
@marcus_phi



Feb 14 2014

Test categorization in deployment pipelines

Posted by: TommyTynjä @ 11:32

Have you ever gotten tired of waiting for those long running tests in CI to finish so you can get feedback on your latest code change? Chances are that you have. A common problem is that test suites tend to grow too large, making the feedback loop an enemy instead of a companion. This is a problem when building delivery pipelines for Continuous Delivery, but also for more traditional approaches to software development. A solution to this problem is to divide your test suite into separate categories, or stages, where tests are grouped according to similarity or type. The categories can then be arranged so that the quickest tests, and those most likely to fail, execute first, to enable faster feedback to the developers.

An example of a logical grouping of tests in a deployment pipeline:

Commit stage:
* Unit tests
* Component smoke tests
These tests execute fast and will be executed by the developers before committing changes into version control.

Component tests:
* Component tests
* Integration tests
These tests are to be run in CI and can be further categorized so that e.g. component tests that are most likely to catch failures will execute first, before more thorough testing.

End user tests:
* Functional tests
* User acceptance tests
* Usability/exploratory testing

As development continues, it is important to maintain these test categories so that the feedback loop stays as short as possible. This might involve moving tests between categories, further splitting up test suites or even grouping categories that might be able to run in parallel.

How is this done in practice? You’ve probably encountered code bases where all these different kinds of tests (unit, integration, user acceptance) have been scattered throughout the same test source tree. In the Java world, Maven is a commonly used build tool. Generally, its model supports running unit and integration tests separately out of the box, but it still expects the tests to live in the same structure, differentiated only by a naming convention. This isn’t practical if you have hundreds or thousands of tests for a single component (or Maven module). To have a maintainable test structure and make effective use of test categorization, it is desirable to split the tests into different source trees, for example:

src/test – unit tests
src/test-integration – integration tests
src/test-acceptance – acceptance tests

Gradle is a build tool which makes it easy to take advantage of this kind of test categorization. Changing build tools might not be practically possible for many reasons, but it is fully possible to leverage Gradle’s capabilities from your existing build tool. You want to use the right tool for the job, right? Gradle is an excellent tool for this kind of job.

Gradle makes use of source sets to define which source code tree is production code and which is e.g. test code. You can easily define your own source sets, which is something you can use to categorize your tests.

Defining the test categories in the example above can be done in your build.gradle such as:

sourceSets {
  main {
    java {
      srcDir 'src/main/java'
    }
    resources {
      srcDir 'src/main/resources'
    }
  }
  test {
    java {
      srcDir 'src/test/java'
    }
    resources {
      srcDir 'src/test/resources'
    }
  }
  integrationTest {
    java {
      srcDir 'src/test-integration/java'
    }
    resources {
      srcDir 'src/test-integration/resources'
    }
    compileClasspath += sourceSets.main.runtimeClasspath
  }
  acceptanceTest {
    java {
      srcDir 'src/test-acceptance/java'
    }
    resources {
      srcDir 'src/test-acceptance/resources'
    }
    compileClasspath += sourceSets.main.runtimeClasspath
  }
}

To be able to run the different test suites, set up a Gradle task for each test category as appropriate for your component, such as:

task integrationTest(type: Test) {
  description = "Runs integration tests"
  testClassesDir = sourceSets.integrationTest.output.classesDir
  classpath += sourceSets.test.runtimeClasspath + sourceSets.integrationTest.runtimeClasspath
  useJUnit()
  testLogging {
    events "passed", "skipped", "failed"
  }
}

task acceptanceTest(type: Test) {
  description = "Runs acceptance tests"
  testClassesDir = sourceSets.acceptanceTest.output.classesDir
  classpath += sourceSets.test.runtimeClasspath + sourceSets.acceptanceTest.runtimeClasspath
  useJUnit()
  testLogging {
    events "passed", "skipped", "failed"
  }
}

test {
  useJUnit()
  testLogging {
    events "passed", "skipped", "failed"
  }
}

Unit tests in src/test will be run by default. To run the integration tests located in src/test-integration, invoke the integrationTest task by executing “gradle integrationTest”. To run the acceptance tests located in src/test-acceptance, invoke the acceptanceTest task by executing “gradle acceptanceTest”. These commands can then be used to tailor your test suite execution throughout your deployment pipeline.
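
If you also want these categories to be part of the normal build lifecycle, the tasks can be wired together. The following lines are an optional addition, not part of the original example, and assume Gradle 1.6 or later:

// Optional additions to build.gradle (not in the original example):
// enforce the ordering unit -> integration -> acceptance when several
// categories run in the same build, and let "gradle check" include
// the integration tests.
integrationTest.mustRunAfter test
acceptanceTest.mustRunAfter integrationTest
check.dependsOn integrationTest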

A full build.gradle example file that shows how to setup test categories as described above can be found on GitHub.

The above example shows how tests can be logically grouped to avoid waiting for that one big test suite to run for hours, just to report a test failure on a simple test case that should have been reported instantly during the test execution phase.


Tommy Tynjä
@tommysdk


Dec 05 2013

How to validate your yaml files from command line

Posted by: MarcusPhilip @ 15:15

I like using Hiera with Puppet. In my Puppet pipeline I just added YAML syntax validation of the Hiera files in the compile step. Here’s how:

# ...
GIT_DIFF_CMD="git diff --name-only --diff-filter=ACMR $OLD_REVISION $REVISION"
declare -i RESULT=0
set +e # Don't exit on error. Collect the errors instead.
YAML_PATH_LIST=`$GIT_DIFF_CMD | grep -F 'hieradata/' | grep -F '.yaml'`
echo 'YAML files to check syntax:'; echo "$YAML_PATH_LIST"; echo "";
for YAML_PATH in $YAML_PATH_LIST; do
  ruby -e "require 'yaml'; YAML.load_file('${YAML_PATH}')"
  RESULT+=$?
done
# ...
exit $RESULT

The ruby -e line does the actual validation.
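
The same one-liner also works ad hoc on a single file from the command line (the file name here is just an example):

ruby -e "require 'yaml'; YAML.load_file(ARGV[0])" hieradata/common.yaml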

If you read my previous post you can see that we have managed to migrate to Git. Hurray!



Dec 03 2013

Introducing Delivery Pipeline Plugin for Jenkins

Posted by: PatrikBoström @ 14:33

In Continuous Delivery, visualisation is one of the most important areas. When using Jenkins as a build server, it is now possible, with the Delivery Pipeline Plugin, to visualise one or more delivery pipelines in the same view, even in full screen. Perfect for information radiators.

The plugin uses the upstream/downstream dependencies of jobs to visualize the pipelines.

[Screenshot: the full screen view]

[Screenshot: the work view]

A pipeline consists of several stages; usually one stage corresponds to one job in Jenkins. But for a pipeline that includes build, unit test, packaging and analysis steps, the pipeline can get quite long if every Jenkins job is its own stage. The Delivery Pipeline Plugin therefore makes it possible to group several jobs into the same stage, calling the individual Jenkins jobs tasks instead.

[Screenshot: a stage in the pipeline view]

[Screenshot: a task within a stage]

The version shown in the header is the version/display name of the first Jenkins job in the pipeline, so the first job has to define the version.

The plugin can also show what we call an Aggregated View, which shows the latest execution of every stage and displays the version for that stage.


Dec 02 2013

Is your delivery pipeline an array or a linked list?

Posted by: MarcusPhilip @ 13:07

The fundamental data structure of a delivery pipeline and its implications

A delivery pipeline is a system. A system is something that consists of parts that create a complex whole, where the essence lies largely in the interaction between the parts. In a delivery pipeline we can see the activities in it (build, test, deploy, etc.) as the parts, and their input/output as the interactions. There are two fundamental ways to define interactions in order to organize a set of parts into a whole, a system:

  1. Top-level orchestration, aka array
  2. Parts interact directly with other parts, aka linked list

You could also consider sub-levels of organization. This would form a tree. The sub-level of interaction could be defined in the same way as its parent, or differently.

My question is: Is one approach better than the other for creating delivery pipelines?

I think the number one requirement on a pipeline is maintainability. So better here would mean mainly more maintainable, that is: easier and quicker to create, to reason about, to reuse, to modify, extend and evolve even for a large number of complex pipelines. Let’s review the approaches in the context of delivery pipelines:

1. Top-level orchestration

This means having one config (file) that defines the whole pipeline. It is like an array.

An example config could look like this:

globals:
  scm: commit
  build: number
triggers:
  scm: github org=Diabol repo=delivery-pipeline-plugin.git
stages:
  - name: commit
    tasks:
      - build
      - unit_test
  - name: test
    vars:
      env: test
    tasks:
      - deploy: continue_on_fail=true
      - smoke_test
      - system_test
  - name: prod
    vars:
      env: prod
    tasks:
      - deploy
      - smoke_test

The tasks, like build, are defined (in isolation) elsewhere. Travis, Bamboo and Go do it this way.

2. Parts interact directly

This means that as part of the task definition, you have not only the main task itself, but also what should happen (e.g. triggering other jobs) when the task succeeds or fails. It is like a linked list.

An example task config:

name: build
triggers:
  - scm: github org=Diabol repo=delivery-pipeline-plugin.git
steps:
  - mvn: install
post:
  - email: committer
    when: on_fail
  - trigger: deploy_test
    when: on_success

The default way of creating pipelines in Jenkins seems to be this approach: using upstream/downstream relationships between jobs.

Tagging

There is also a supplementary approach to creating order: tagging parts, aka Inversion of Control. In this case, the system materializes bottom-up. You could say that the system behavior is an emergent property. An example config where the tasks are tagged with a stage:

- name: build
  stage: commit
  steps:
    - mvn: install
    ...

- name: integration_test
  stage: commit
  steps:
    - mvn: verify -PIT
  ...

Unless complemented with something, there is no way to order things in this approach. But it’s useful for adding another layer of organization, e.g. for an alternative view.

Comparisons to other systems

Maybe we can shed some light on the question by comparing with how we organize other complex systems around us.

Example A: (Free-market) Economic Systems, aka getting a shirt

1. Top-level organization

Go to the farmer, buy some cotton, hand it to the weaver, get the fabric back, and hand that to the tailor together with your measurements.

2. Parts interact directly

There are some variants.

  1. The farmer sells the cotton to the weaver, who sells the fabric to the tailor, who sews a lot of shirts and sells one that fits.
  2. Buy the shirt from the tailor, who bought the fabric from the weaver, who bought the cotton from the farmer.
  3. The farmer sells the cotton to a merchant who sells it to the weaver. The weaver sells the fabric to a merchant who sells it to the tailor. The tailor sells the shirts to a store. The store sells the shirts.

The variations are basically about different flows of information, pull or push, and whether or not there are middlemen.

Conclusion

Economic systems tend to be organized the second way. There is an efficient system coordination mechanism through supply and demand, with price as the deliberator; ultimately the system is driven by the self-interest of the actors. It’s questionable whether this is a good metaphor for a delivery pipeline. You can consider deploying the artifact as the interest of a deploy job, but what is the deliberating (price) mechanism? And unless we have a common shared value measurement, such as money, how can we optimize globally?

Example B: Assembly line, aka build a car

Software process has historically suffered a lot from broken metaphors borrowed from factories and construction, but let’s do it anyway.

1. Top-level organization

The chief engineer designs the assembly line using the blueprints. Each worker knows how to do his task, but does not know what’s happening before or after.

2. Parts interact directly

Well, strictly speaking this is more of an old-style workshop than an assembly line. The lathe worker gets some raw material, turns the cylinders and brings them to the engine assembler, who assembles the engine and hands it over to …, etc.

Conclusion

It seems the assembly line approach has won, but not in its Tayloristic form. I might do the wealth of experience and research on this subject injustice by oversimplifying here, but to me it seems that two frameworks for achieving desired quality and cost when using an assembly line have emerged:

  1. The Toyota way: The key to the quality and cost goals is that everybody cares and that everybody counts. Everybody is concerned about global quality and looks out for improvements, and everybody has the right to ‘stop the line’ if there is a concern. The management layer underpins this by focusing on long term goals such as the global quality vision and the learning organization.
  2. Teams: A multi-functional team follows the product from start to finish. This requires a wider range of skills in each worker, so it entails higher labour costs. The benefit is that there is a strong sense of ownership, which leads to higher quality and continuous improvements.

The approaches are not mutually exclusive and in software development we can actually see both combined in various agile techniques:

  • Continuous improvement is part of Scrum and Lean for Software methodologies.
  • It’s every team member’s responsibility if a commit fails in a pipeline step.

Conclusion

For parts interacting directly, it seems that unless we have an automatic deliberation mechanism we will need a ‘planned economy’, and that failed, right? And top-level organization needs to be complemented with grass-roots involvement or quality will suffer.

Summary

My take is that top-level organization is superior, because you need to stress the holistic view. But it needs to be complemented with the possibility for steps to be improved without always having to consider the whole. This is achieved by having the team that uses the pipeline own it, with management supporting them by using modern lean and agile management ideas.

Final note

It should be noted that many desirable general features of a system framework that can ease maintenance if rightly used, such as inheritance, aggregation, templating and cloning, are orthogonal to the organizational principle we talk about here. These features can actually be more important for maintainability. But my experience is that the organizational principle puts a cap on the level of complexity you can manage.

Marcus Philip
@marcus_phi



Oct 02 2013

Gist: Ansible 1.3 Conditional Execution Examples

Posted by: MarcusPhilip @ 13:04

I just published a gist on Ansible 1.3 Conditional Execution

It is a very complete example with comments. I find the conditional expressions to be ridiculously hard to get right in Ansible. I don’t have a good model of what’s going on under the surface (as I don’t know Python) so I often get it wrong.

What makes it even harder is that there have been at least three different variants from version 0.7 to 1.3. Now ‘when’ seems to be the recommended one, but I used to have better luck with the earlier versions.

One thing that makes it hard is that the type of the variable is very important, and it’s not obvious what that is. It seems it may be interpreted as a string even if defined as False. The framework doesn’t really help you. I think a language like this should be able to ‘do what I mean’.
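
To make this concrete, here are two hypothetical task snippets (not from the gist; the cleanup variable and script are made up) written in the 1.3 ‘when’ style:

# Canonical form: compare a fact in a Jinja2 expression.
- name: Install Apache only on RedHat-family hosts
  yum: name=httpd state=present
  when: ansible_os_family == "RedHat"

# Since the type of a variable is easy to get wrong (a value defined as False
# may arrive as the string "False"), an explicit comparison is often less
# surprising than relying on bare truthiness.
- name: Run cleanup only when explicitly enabled
  command: /usr/local/bin/cleanup.sh
  when: do_cleanup is defined and do_cleanup == true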

Here are the official Ansible docs on this.



Sep 25 2013

Puppet change promotion and code base design

Posted by: MarcusPhilip @ 18:03

I have recently introduced puppet at a medium sized development organization. I was new to puppet when I started, but feel like a seasoned and scarred veteran by now. Here’s my solution for puppet code base design and change promotion.

Like for any change applied to a system, we want to have a defined pipeline to production that includes testing. I think the problem is not particular to modern declarative CM tools like puppet; it’s just that they make the problem a lot more explicit compared to manual CM.

Solution Summary

We have a number of environments: CI, QA, PROD, etc. We use a puppet module path with the $environment variable to be able to update these environments independently.

We have built a pipeline in Jenkins that is triggered by commits to the svn repo that contains the Puppet (and Hiera) code. The jobs in the initial commit stage are all automatically triggered as long as the preceding step is OK, but application to QA and PROD is triggered manually.

The steps in the commit stage are:

  1. Compile
    1. Update the code in CI environment on puppet master from svn.
    2. Use the master to parse the manifests and validate the erb templates changed in this commit.
    3. Use the master to compile all nodes in CI env.
  2. Apply to CI environment (with puppet agent --test)
  3. Apply to DEV environment
  4. Apply to Test (ST) environment

The compile sub-step is run even if the parse or validate failed, to gather as much info as possible before failing.

[Screenshot: Jenkins puppet pipeline visualized in Diabol’s new Delivery Pipeline plugin]

The great thing about this is that the compile step will catch most problems with code and config before they have any chance of impacting a system.

Noteworthy is also that we have a noop run for prod before the real thing. Together with the excellent reporting facilities in Foreman, this allows me to see with high fidelity exactly what changes will be applied, line-by-line diffs if needed, and what services will be restarted.

Triggering agent runs

The puppet agents are not daemonized. We didn’t see any important advantage in having them run as daemons, but we did see the serious disadvantage of having no simple way to prevent changes from being applied before they are tested (with parse and compile).

The agent runs are triggered using Ansible. It may seem strange to introduce another CM tool to do this, but Ansible is a really simple and powerful tool to run commands on a large set of nodes. And I like YAML.

Also, Puppet run is deprecated, with the suggestion to use MCollective instead. However, that involves setting up a message queue, i.e. another middleware to manage and monitor. Every link in your tool chain has to carry its own weight (and more), and the weight of Ansible is basically zero, while that of an MQ is greater than zero.

We also use Ansible to install the puppet agents. Funny bootstrapping problem here: you can’t install puppet without puppet… Again, Ansible was the simplest solution for us since we don’t manage the VMs ourselves (and either way, you have to be able to easily update the VMs, which takes machinery of its own if it’s to be done the right way).

External DMZ note

Well, all developers love network security, right? Makes your life simple and safe… Anyway, I guess it’s just a fact of life to accept. Since you typically do not allow inward connections from your external DMZ, and since it’s the puppet agent that pulls, we had to set up an external puppet master in the external DMZ (with the puppet modules and yum repo rsynced from the internal side) that manages the servers in the external DMZ. This is a serious argument for using a push based tool like Ansible instead of puppet. But for me, puppet wins when you have a larger CM code base. Without the support of the strict checking of puppet we would be lost. But I guess I’m biased, coming from statically typed programming languages.
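
The synchronization itself can be as simple as a scheduled rsync job. A minimal sketch, where the user and paths are made up for the example:

# Hypothetical sync step (run from the internal network, e.g. as a cron or
# Jenkins job): push puppet code and the yum repo to the external DMZ master.
rsync -az --delete /etc/puppet/environments/ syncuser@extpuppet.company.com:/etc/puppet/environments/
rsync -az --delete /var/www/yumrepo/ syncuser@extpuppet.company.com:/var/www/yumrepo/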

Code organization

We use the Foreman as an ENC, but the main use of it is to get a GUI for viewing hosts and reports. We have decided to use a puppet design pattern where the nodes are only mapped to one or a few top level role classes in Foreman, and the details are encapsulated inside the role class, using one or more layers of puppet classes. This is inspired by Craig Dunn’s Roles and Profiles pattern.

Then we use Hiera yaml files to hold most of the parameters, making heavy use of automatic parameter lookup.
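
As an illustration of automatic parameter lookup (the class and key names below are made up, not from our code base): a parameterized class automatically picks up values from Hiera keys named class::parameter, without any explicit hiera() calls.

# Hypothetical hieradata entry: with Puppet 3 automatic parameter lookup,
# a class declared as 'class profile::webserver ($port = 80, $admin_users = [])'
# receives these values when it is included.
profile::webserver::port: 8080
profile::webserver::admin_users:
  - alice
  - bob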

This way almost everything is in version control, which makes refactoring and releasing a lot easier.

But beware, you cannot use the future parser of puppet with Foreman as of now, and the future parser is needed for the new puppet lambda functions. This was highly annoying, as it prevents me from designing the hiera data structure in the most logical way and then just slicing it as necessary.

The create_resources function in puppet partly mitigates this, but it’s strict on the parameters, so if the data structure contains a key that doesn’t correspond to a parameter of the class, it fails.

Releasable Units

One of the questions we faced was how, and whether, to split up the puppet codebase into separately releasable components. Since we are used to trunk based development on a shared code base, we decided that it was probably easier to manage everything together.

Local testing

Unless you can quickly test your changes locally before committing, the pipeline is gonna be red most of the time. This is solved in a powerful and elegant way using Vagrant. Strongly recommended. In a few seconds I can test a minor puppet code change, and in a minute I can test the full puppet config for a node type. The box has puppet installed, and the Vagrantfile is really short:

Vagrant.configure("2") do |config|
  config.vm.box = "CentOS-6.4-x86_64_puppet-3_2_4-1"
  config.vm.box_url = "ftp://ftptemp/CentOS-6.4-x86_64_puppet-3_2_4-1.box"

  config.vm.synced_folder "vagrant_puppet", "/home/vagrant/.puppet"
  config.vm.synced_folder "puppet", "/etc/puppet"
  config.vm.synced_folder "hieradata", "/etc/puppet/hieradata"

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
    puppet.module_path = "modules"
  end
end

As you can see it’s easy to map in the hiera stuff that’s needed to be able to test the full solution.

Foot Notes

It’s been suggested in the DevOps community that you should treat servers as cattle, not pets. At the place where I implemented this, we haven’t yet reached that level of maturity. This may somewhat impact the solution, but large parts of it would be the same.

A while ago I posted Puppet change promotion – Good practices? in LinkedIn DevOps group. The solution I described here is what I came up with.

Resources

Environment based DevOps Deployment using Puppet and Mcollective
Advocates master less puppet
The NBN Puppet Journey
De-centralise and Conquer: Masterless Puppet in a Dynamic Environment

Code Examples

Control script

This script is used from several of the Jenkins jobs.

#!/bin/bash
set -e  # Exit on error

function usage {
echo "Usage: $0 -r  (-s|-p|-c|-d)
example:
$0 -pc -r 123
$0 -d -r 156
-r The svn revision to use
-s Add a sleep of 60 secs after svn up to be sure we have rsync:ed the puppet code to external puppet
-p parse the manifests changed in the commit(s) being applied
-c compile all hosts in \$TARGET_ENV
-d Do a puppet dry-run (noop) on \$TARGET_HOSTS

Updates puppet modules from svn in \$TARGET_ENV on puppet master at the
beginning of run, and reverts if any failures.

The puppet master is used for parsing and compiling.

This script relies on environment variables:
* \$TARGET_ENV for svn
* \$TARGET_HOSTS for dry-run
";
}

if [ $# -lt 1 ]; then
usage; exit 1;
fi

# Set options
sleep=false; parse=false; compile=false; dryrun=false;
while getopts "r:spcd" option; do
case $option in
r) REVISION="$OPTARG";;
s) sleep=true;;
p) parse=true;;
c) compile=true;;
d) dryrun=true;;
*) echo "Unknown parameter: $opt $OPTARG"; usage; exit 1;;
esac
done
shift $((OPTIND - 1))

if [ "x$REVISION" = "x" ]; then
usage; exit 1;
fi

# This directory is updated by a Jenkins job
cd /opt/tools/ci-jenkins/jenkins-home/common-tools/scripts/ansible/

# SVN UPDATE ##################################################################
declare -i OLD_SVN_REV
declare -i NEXT_SVN_REV
## Store old svn rev before updating so we can roll back if not OK
OLD_SVN_REV=`ssh -T admin@puppetmaster svn info /etc/puppet/environments/${TARGET_ENV}/modules/| grep -E '^Revision:' | cut -d ' ' -f 2`
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo "Current svn revision in ${TARGET_ENV}: $OLD_SVN_REV"
if [ "$OLD_SVN_REV" != "$REVISION" ]; then
# We could have more than one commit since the last run (even if we use post-commit hooks)
NEXT_SVN_REV=${OLD_SVN_REV}+1
# Update Puppet master
ansible-playbook puppet-master-update.yml -i hosts --extra-vars="target_env=${TARGET_ENV} revision=${REVISION}"
# SLEEP #############################
$sleep && {
echo 'Sleep for a minute to be sure we have rsync:ed the puppet code to external puppet...'
sleep 60
}
else
echo 'Svn was already at required revision. Continuing...'
NEXT_SVN_REV=$REVISION
fi

# Final result ################################################################
declare -i RESULT
RESULT=0
set +e  # Don't exit on error. Collect the errors instead.

# PARSE #######################################################################
$parse && {
# Parse manifests ###################
## Get only the paths to the manifests that was changed (to limit the number of parses).
MANIFEST_PATH_LIST=`svn -q -v --no-auth-cache --username $JENKINS_USR --password $JENKINS_PWD -r $NEXT_SVN_REV:$REVISION log http://scm.company.com/svn/puppet/trunk | grep -F '/puppet/trunk/modules' | grep -F '.pp' |  grep -Fv '   D' | cut -c 28- | sed 's/ .*//g'`
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo $'Manifests to parse:'; echo "$MANIFEST_PATH_LIST"; echo "";
for MANIFEST_PATH in $MANIFEST_PATH_LIST; do
# Parse this manifest on puppet master
ansible-playbook puppet-parser-validate.yml -i hosts --extra-vars="manifest_path=/etc/puppet/environments/${TARGET_ENV}/modules/${MANIFEST_PATH}"
RESULT+=$?
done

# Check template syntax #############
TEMPLATE_PATH_LIST=`svn -q -v --no-auth-cache --username $JENKINS_USR --password $JENKINS_PWD -r $NEXT_SVN_REV:$REVISION log http://scm.company.com/svn/platform/puppet/trunk | grep -F '/puppet/trunk/modules' | grep -F '.erb' |  grep -Fv '   D' | cut -c 28-`
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo $'Templates to check syntax:'; echo "$TEMPLATE_PATH_LIST"; echo "";
for TEMPLATE_PATH in $TEMPLATE_PATH_LIST; do
erb -P -x -T '-' modules/${TEMPLATE_PATH} | ruby -c
RESULT+=$?
done
}

# COMPILE #####################################################################
$compile && {
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo "Compile all manifests in $TARGET_ENV"
ansible-playbook puppet-master-compile-all.yml -i hosts --extra-vars="target_env=${TARGET_ENV} puppet_args=--color=false"
RESULT+=$?
}

# DRY-RUN #####################################################################
$dryrun && {
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo "Run puppet in dry-run (noop) mode on $TARGET_HOSTS"
ansible-playbook puppet-run.yml -i hosts --extra-vars="hosts=${TARGET_HOSTS} puppet_args='--noop --color=false'"
RESULT+=$?
}

set -e  # Back to default: Exit on error

# Revert svn on puppet master if there was a problem ##########################
if [ $RESULT -ne 0 ]; then
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo $'Revert svn on puppet master due to errors above\n'
ansible-playbook puppet-master-revert-modules.yml -i hosts --extra-vars="target_env=${TARGET_ENV} revision=${OLD_SVN_REV}"
fi

exit $RESULT

Ansible playbooks

The ansible playbooks called from bash are simple.

puppet-master-compile-all.yml

---
# usage: ansible-playbook puppet-master-compile-all.yml -i hosts --extra-vars="target_env=ci1 puppet_args='--color=html'"

- name: Compile puppet catalogue for all hosts for a given environment on the puppet master
  hosts: puppetmaster-int
  user: ciadmin
  sudo: yes      # We need to be root
  tasks:
    - name: Compile puppet catalogue for {{ item }} in {{ target_env }}
      command: puppet master {{ puppet_args }} --compile {{ item }} --environment {{ target_env }}
      with_items: groups['ci1']

puppet-run.yml

---
# usage: ansible-playbook puppet-run.yml -i hosts --forks=12 --extra-vars="hosts=xyz-ci puppet_args='--color=false'"

- name: Run puppet agents for {{ hosts }}
  hosts: $hosts
  user: cipuppet
  tasks:
    - name: Trigger puppet agent run with args {{ puppet_args }}
      shell: sudo /usr/bin/puppet agent {{ puppet_args }} --test || if [ $? -eq 2 ]; then echo 'Notice - There were changes'; exit 0; else exit $?; fi;
      register: puppet_agent_result
      changed_when: "'Notice - There were changes' in puppet_agent_result.stdout"

Ansible inventory file (hosts)

The hosts file is what triggers the ansible magic. Here’s an excerpt.

# BUILD SERVERS ###############################################################
[puppetmaster-int]
puppet.company.com

[puppetmaster-ext]
extpuppet.company.com

[puppetmasters:children]
puppetmaster-int
puppetmaster-ext

[puppetmasters:vars]
puppet_args=""

# System XYZ #######################################################################
[xyz-ci]
xyzint6.company.com
xyzext6.company.com

# PROD
[xyz-prod-ext]
xyzext1.company.com

[xyz-prod-ext:vars]
puppet_server=extpuppet.company.com

[xyz-prod-int]
xyzint1.company.com

[xyz-prod:children]
xyz-prod-ext
xyz-prod-int

...

# ENVIRONMENT AGGREGATION #####################################################
[ci:children]
xyz-ci
pqr-ci

[prod:children]
xyz-prod
pqr-prod

[all_envs:children]
dev
ci
st
qa
prod

# Global defaults
[all_envs:vars]
puppet_args=""
puppet_server=puppet.company.com

Marcus Philip
@marcus_phi



May 29 2013

Continuous Delivery testing levels

Posted by: TommyTynjä @ 14:56

This blog post is a summary of thoughts discussed between me, Andreas Rehn (@andreasrehn) and Patrik Boström (@patbos).

A key part of Continuous Delivery is automated testing, and even the simplest delivery pipeline will consist of several different testing stages. There are unit tests, integration tests, user acceptance tests etc. But what defines the different test levels?

We realized that we often mean different things by each testing level, and this was especially true when talking about integration tests. For me, integration tests can be tests that test the integrations within one component, e.g. testing an internal API or the integration between a couple of business objects interacting with each other, a database etc. This is how the Arquillian (an integration testing framework for Java) community refers to integration testing. Another kind of integration test is one that tests an actual integration with e.g. a third party web service. What we’ve been referring to when talking about integration tests in the context of Continuous Delivery is testing a component in a fully integrated environment, from the outside rather than the inside, so called black box testing. These tests are often more functional by nature.

We came to the conclusion that we would like to redefine the terminology for the latter type of integration testing to avoid confusion and fuzziness. Since these kinds of tests are more functional, testing the behavior and flows of the component, we decided to start calling them component tests instead. That leaves us with the following levels of testing in the early stages of a delivery pipeline:

* Unit tests
* Smoke tests
* Component tests
* Integration tests

When should you run the different tests? You want feedback as soon as possible, but you don’t want too big a test suite too early in the pipeline, as this could severely delay the feedback. It’s inefficient to force developers to run a five-plus minute build before each commit. Therefore you should divide your test suite into different phases. The first phase typically includes unit tests and smoke tests. The second phase will run the component tests in a fully integrated, production like environment. The third phase will execute integration tests, e.g. with Arquillian. Certain integration tests will not need to be run in a fully integrated environment, depending on the context, but there are definite benefits to running all of them in such an environment. These tests can also test integrations towards databases, third party dependencies etc.

To be fully confident in the quality of your releases you need to make use of these different tests, as they all fulfill a specific purpose. It is worth considering, though, in which phase certain tests should be placed, as you don’t want to rerun tests in different phases. If you’re validating an algorithm, the unit test phase is probably the most appropriate, while testing your database queries fits well into the integration test phase, and user interface and functional tests fit as component tests. This raises the question: how much should you actually test? As that is a topic of its own, we’ll leave it for another time.

Conclusion:
Unit tests – testing atomic pieces of code on their own. Typically tested with a unit testing framework
Integration tests – putting atomic pieces together into moving parts, testing integration points, internal APIs, database interactions etc. Typically tested with Arquillian and/or with a unit testing framework along with mocks and stubs.
Component tests – functional tests of the component, so called black box testing. Often tested with Selenium, acceptance testing frameworks or through web service calls, depending on the component. Also a subject for testing with Arquillian.

Tommy Tynjä
@tommysdk


May 28 2013

Testing the presence of log messages with java.util.logging

Posted by: TommyTynjä @ 14:41

Sometimes there is value in creating a unit test to assert that a specific log message actually gets printed. It might be for audit logs or for making sure that system misconfigurations get logged properly. A couple of years ago my colleague Daniel blogged about how to create a custom Log4j appender and use it in your unit tests to assert the presence of certain log messages. Read about it here.

Today I was resolving an issue in the Arquillian (the open source integration testing framework for Java) codebase. It involved logging a warning in a certain use case. I obviously wanted to test my code by adding a test case for the different use cases, asserting that the log message got printed correctly. I’ve used the approach of asserting log messages in unit tests many times in the past, but I’ve always used Log4j in those cases. This time around I was forced to solve the problem for plain java.util.logging (JUL), which Arquillian uses. Fun, as I’m always up for a challenge.

What I did was similar to the Log4j approach: add a custom log handler and attach it to the logger in the affected class. I create an OutputStream, which I attach to a StreamHandler. I then attach the StreamHandler to the logger. As long as I have a reference to the output stream, I can get the logged contents and use them in my assertions. Example below using JUnit 4:

private static Logger log = Logger.getLogger(AnnotationDeploymentScenarioGenerator.class.getName()); // matches the logger in the affected class
private static OutputStream logCapturingStream;
private static StreamHandler customLogHandler;

@Before
public void attachLogCapturer()
{
  logCapturingStream = new ByteArrayOutputStream();
  Handler[] handlers = log.getParent().getHandlers();
  customLogHandler = new StreamHandler(logCapturingStream, handlers[0].getFormatter());
  log.addHandler(customLogHandler);
}

public String getTestCapturedLog() throws IOException
{
  customLogHandler.flush();
  return logCapturingStream.toString();
}

… then I can use the above methods in my test case:

@Test
public void shouldLogWarningForMismatchingArchiveTypeAndFileExtension() throws Exception
{
  final String expectedLogPart = "unexpected file extension";

  new AnnotationDeploymentScenarioGenerator().generate(
        new TestClass(DeploymentWithMismatchingTypeAndFileExtension.class));

  String capturedLog = getTestCapturedLog();
  Assert.assertTrue(capturedLog.contains(expectedLogPart));
}
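
One addition worth making, which is not part of the original example, is to detach the custom handler again after each test so that handlers do not accumulate on the shared logger:

@After
public void detachLogCapturer()
{
  log.removeHandler(customLogHandler); // avoid stacking handlers between test methods
}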


Tommy Tynjä
@tommysdk


May 09 2013

Test data – part 1

Posted by: MarcusPhilip @ 12:00

When you run an integration or system test, i.e. a test that spans one or more logical or physical boundaries in the system, you normally need some test data, as most non-trivial operations depend on some persistent state in the system. Even if the test tries to follow the advice of favoring verification of behavior over state, you may still need specific input to even achieve a certain behavior. For example, if you want to test an order flow for a specific type of product, you must know how to add a product of that type to the basket, e.g. knowing a product name.

But, and here is the problem, if you don’t have strict control of that data it may change over time, so suddenly your test will fail.

When unit testing, you’ll want to use mocks or fakes for dependencies (and have well-factored code that lets you easily do that), but here I’m talking about tests where you specifically want to use the real dependency.

Basically, there are only two robust ways to manage test data:

  1. Each test creates the data it needs.
  2. Create a managed set of data that covers all of your test needs.

You can also use a combination of the two.

For the first strategy, either you take an idempotent approach so that you just ensure a certain state, or you create and delete the data for each run. In some cases you can use transactions to be able to safely parallelize your tests and avoid modifying persistent state: just open one at the start of the test and then abort it instead of committing at the end. Obviously you cannot test functionality that depends on transactions this way.
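
A minimal sketch of the transaction approach, using plain JDBC and JUnit 4. The connection URL, credentials and table are made up for the example; the point is just that everything the test creates disappears with the rollback:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class OrderFlowTest {

  private Connection connection;

  @Before
  public void openTransaction() throws SQLException {
    // Hypothetical connection details
    connection = DriverManager.getConnection("jdbc:postgresql://localhost/testdb", "test", "test");
    connection.setAutoCommit(false); // start an explicit transaction
  }

  @Test
  public void orderCanBePlacedForNewProduct() throws SQLException {
    // Create exactly the data this test needs, inside the transaction...
    connection.createStatement().executeUpdate(
        "insert into product (name, type) values ('test-shirt', 'APPAREL')");
    // ...then exercise the order flow under test, using the same connection.
  }

  @After
  public void abortTransaction() throws SQLException {
    connection.rollback(); // abort instead of committing: no persistent state is left behind
    connection.close();
  }
}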

The second strategy is a lot easier if you already have a clear separation between reference data, application data and transactional data.

By reference data I mean data that change with very low frequency and that often is of limited size and has a list or key/value structure. Examples could be a list of supported languages or zip code to address lookup. This should be fairly easy to keep in one authoritative, version controlled location, either in bulk or as deltas.

The term application data is not as established as reference data. It is data that affects the behavior of the application. It is not modified by normal end user actions, but is continuously modified by developers or administrators. Examples could be articles in a CMS or sellable products in an eCommerce website. This data is crucial for tests. It’s typically the data that tests use as input or for assertions.

The challenge here is to keep the production data and the test data set in sync. Ideally there should be a process that makes it impossible (or at least hard) to update the former without updating the latter. However, there are often many complicating factors: the data can be in another system owned by another team and without a good test double, the data can be large, or it can have complex relationships or dependencies that sometimes very few fully grasp. Often it is managed by non-technical people, so their tool set, knowledge and skills are different.

Unit or component tests can often overcome these challenges by using a strategy to mock systems or create arbitrary test data and verify behavior and not exact state, but acceptance tests cannot do that. We sometimes need to verify that a specific product can be ordered, not a fictional one created by the test.

Finally, transactional data is data continuously created by the application. It is typically large, fast growing and of medium complexity. Examples could be orders, article comments and logs.

One challenge here is how to handle old, ‘obsolete’ data. You may have data stored that is impossible to generate in the current application because the business rules (and the corresponding implementation) have changed. For the test data it means you cannot use the application to create the test data, if that was your strategy. Obviously, this can make the application code more complicated, and for the test code, hopefully you have it organized so it’s easy to correlate the acceptance tests to the changed business rule and easy to change them accordingly. The tests may get more complicated because there can now e.g. be different behavior for customers with an ‘old’ contract. This may be hard for new developers on the team who only know the current behavior of the app. You may even have seemingly contradicting assertions.

Another problem can be the sheer size. This can be remedied by having a strategy for aggregating, compacting and/or extracting data. This is normally easy if you plan for it up front, but can be hard when your database is 100 TB. I know that hardware is cheap, but having a 100 TB DB is inconvenient.

The line between application data and transactional data is not always clear cut. For example when an end user performs an action, such as a purchase, he may become eligible for certain functionality or products, thus having altered the behavior of the application. It’s still a good approach though to keep the order rows and the customer status separated.

I hope to soon write more on the tougher problems in automated testing and of managing test data specifically.

Marcus Philip
@marcus_phi


