Sep 25 2013

Puppet change promotion and code base design

I have recently introduced puppet at a medium-sized development organization. I was new to puppet when I started, but feel like a seasoned and scarred veteran by now. Here’s my solution for puppet code base design and change promotion.

Like any change applied to a system, we want a defined pipeline to production that includes testing. I think the problem is not particular to modern declarative CM tools like puppet; it’s just that they make the problem a lot more explicit compared to manual CM.

Solution Summary

We have a number of environments: CI, QA, PROD, etc. We use a puppet module path that includes the $environment variable, so that these environments can be updated independently.
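
A minimal sketch of what this looks like on the master, assuming Puppet 3 style dynamic environments (the exact paths are my assumption, but they match the /etc/puppet/environments/${TARGET_ENV}/modules path used by the control script at the end of this post):

# Hedged sketch: with $environment in the module path, every environment is
# just a separate directory tree on the master, so CI can be updated from
# svn without touching QA or PROD.
grep -A 2 '^\[master\]' /etc/puppet/puppet.conf
# Expected output, roughly:
# [master]
#     modulepath = /etc/puppet/environments/$environment/modules
#     manifest   = /etc/puppet/environments/$environment/manifests/site.pp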

We have built a pipeline in Jenkins that is triggered by commits to the svn repo that contains the Puppet (and Hiera) code. The jobs in the initial commit stage are all automatically triggered as long as the preceding step is OK, but application to QA and PROD is manually triggered.

The steps in the commit stage are:

  1. Compile
    1. Update the code in CI environment on puppet master from svn.
    2. Use the master to parse the manifests and validate the erb templates changed in this commit.
    3. Use the master to compile all nodes in CI env.
  2. Apply to CI environment (with puppet agent --test)
  3. Apply to DEV environment
  4. Apply to Test (ST) environment

The compile sub-step is run even if the parse or validate failed, to gather as much info as possible before failing.
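
Roughly, the checks behind the Compile sub-steps boil down to the following commands on the master. This is just a sketch; the real invocations are in the control script and Ansible playbooks at the end of the post, and the module, template and environment paths here are made-up examples (the node name is one of the CI hosts from the inventory below):

# Parse a manifest changed in the commit.
puppet parser validate /etc/puppet/environments/ci/modules/xyz/manifests/init.pp
# Validate a changed erb template.
erb -P -x -T '-' /etc/puppet/environments/ci/modules/xyz/templates/app.conf.erb | ruby -c
# Compile the catalog for one node in the CI environment.
puppet master --compile xyzint6.company.com --environment ci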

Jenkins puppet pipeline visualized in Diabol’s new Delivery Pipeline plugin

The great thing about this is that the compile step will catch most problems with code and config before they have any chance of impacting a system.

Noteworthy is also that we do a noop run for PROD before the real thing. Together with the excellent reporting facilities in Foreman, this allows me to see with high fidelity exactly what changes will be applied, with a line-by-line diff if needed, and which services will be restarted.
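
The dry-run itself is just an agent run with --noop; the flags are the same ones passed via puppet_args to the puppet-run.yml playbook shown below:

# On a prod node: nothing is changed, but the report in Foreman shows
# exactly what would be.
puppet agent --test --noop --color=false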

Triggering agent runs

The puppet agents are not daemonized. We didn’t see any important advantage in having them run as daemons, but we did see a serious disadvantage: with daemons there is no simple way to prevent changes from being applied before they are tested (with parse and compile).

The agent runs are triggered using Ansible. It may seem strange to introduce another CM tool to do this, but Ansible is a really simple and powerful tool to run commands on a large set of nodes. And I like YAML.
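
To illustrate how little machinery this needs: outside the pipeline, a run can even be triggered ad hoc against an inventory group (the group name comes from the hosts file excerpt further down; in the pipeline we use the puppet-run.yml playbook instead):

# Ad-hoc alternative to the playbook, 2013-era Ansible syntax with --sudo.
ansible xyz-ci -i hosts --sudo -m command -a 'puppet agent --test --color=false'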

Also, Puppet run (puppet kick, the mechanism for remotely triggering agent runs) is deprecated, with the suggestion to use MCollective instead. However, that involves setting up a message queue, i.e. yet another middleware to manage and monitor. Every link in your tool chain has to carry its own weight (and more), and the weight of Ansible is basically zero, while that of a message queue is definitely greater than zero.

We also use Ansible to install the puppet agents. There is a funny bootstrapping problem here: you can’t install puppet without puppet… Again, Ansible was the simplest solution for us since we don’t manage the VMs ourselves (and either way, you have to be able to easily update the VMs, which takes machinery of its own if it’s to be done the right way).
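
The bootstrap itself can be as small as an ad-hoc package install before the first agent run. This is only a sketch, assuming the yum repo is already reachable, and "new-nodes" is a hypothetical inventory group:

# Install the agent on machines that don't have puppet yet ("new-nodes" is
# hypothetical); after that, puppet manages everything else.
ansible new-nodes -i hosts --sudo -m yum -a 'name=puppet state=present'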

External DMZ note

Well, all developers love network security, right? It makes your life simple and safe… Anyway, I guess it’s just a fact of life to accept. Since you typically do not allow inbound connections from your external DMZ, and since it’s the puppet agent that pulls, we had to set up an external puppet master in the external DMZ (with rsync of the puppet modules and yum repo from the internal network) that manages the servers in the external DMZ. This is a serious argument for using a push-based tool like Ansible instead of puppet. But for me, puppet wins when you have a larger CM code base. Without the support of puppet’s strict checking we would be lost. But I guess I’m biased, coming from statically typed programming languages.
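
The sync to the external master is a plain one-way push from the inside, so only outbound connections are needed. A sketch (the host name is from the inventory below; the paths and the scheduling of the sync are assumptions):

# Push the puppet code for all environments to the external master in the DMZ.
rsync -az --delete /etc/puppet/environments/ admin@extpuppet.company.com:/etc/puppet/environments/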

Code organization

We use the Foreman as an ENC, but the main use of it is to get a GUI for viewing hosts and reports. We have decided to use a puppet design pattern where the nodes are only mapped to one or a few top-level role classes in Foreman, and the details are encapsulated inside the role class, using one or more layers of puppet classes. This is inspired by Craig Dunn’s Roles and Profiles pattern.

Then we use Hiera yaml files for most of the parameters, relying heavily on automatic parameter lookup.
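
A quick way to sanity-check that data is to resolve a key with the hiera command line tool on the master, which is essentially what automatic parameter lookup does for a class parameter of the same name. The key, config path and scope value here are made-up examples:

# Resolve the class parameter xyz::app::port from the yaml backend for the
# ci environment (hypothetical key and scope).
hiera -c /etc/puppet/hiera.yaml xyz::app::port environment=ci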

This way almost everything is in version control, which makes refactoring and releasing a lot easier.

But beware: you cannot use puppet’s future parser with Foreman as of now, and the future parser is what the new puppet lambda functions require. This was highly annoying, as it prevents me from designing the hiera data structure in the most logical way and then just slicing it as necessary.

The create_resources function in puppet partly mitigates this, but it is strict about the parameters: if the data structure contains a key that doesn’t correspond to a parameter of the class, it fails.

Releasable Units

One of the questions we faced was how, and whether, to split up the puppet codebase into separately releasable components. Since we are used to trunk-based development on a shared code base, we decided that it was probably easier to manage everything together.

Local testing

Unless you can quickly test your changes locally before committing, the pipeline is gonna be red most of the time. This is solved in a powerful and elegant way using Vagrant. Strongly recommended. In a few seconds I can test a minor puppet code change, and in a minute I can test the full puppet config for a node type. The box already has puppet installed, and the Vagrantfile is really short:

Vagrant.configure("2") do |config|
  config.vm.box = "CentOS-6.4-x86_64_puppet-3_2_4-1"
  config.vm.box_url = "ftp://ftptemp/CentOS-6.4-x86_64_puppet-3_2_4-1.box"

  config.vm.synced_folder "vagrant_puppet", "/home/vagrant/.puppet"
  config.vm.synced_folder "puppet", "/etc/puppet"
  config.vm.synced_folder "hieradata", "/etc/puppet/hieradata"

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
    puppet.module_path = "modules"
  end
end

As you can see it’s easy to map in the hiera stuff that’s needed to be able to test the full solution.
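
The local test loop is then just the standard Vagrant commands (the box comes from the Vagrantfile above; which command you reach for depends on how big the change is):

vagrant up                         # boot the box and run the initial puppet provisioning
vagrant provision                  # re-apply the puppet config after a minor code change
vagrant destroy -f && vagrant up   # full clean test of the node type from scratch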

Foot Notes

It’s been suggested in the DevOps community that you should treat servers as cattle, not pets. At the place where I implemented this, we haven’t yet reached that level of maturity. This may somewhat impact the solution, but large parts of it would be the same.

A while ago I posted Puppet change promotion – Good practices? in LinkedIn DevOps group. The solution I described here is what I came up with.

Resources

Environment based DevOps Deployment using Puppet and Mcollective
Advocates master less puppet
The NBN Puppet Journey
De-centralise and Conquer: Masterless Puppet in a Dynamic Environment

Code Examples

Control script

This script is used from several of the Jenkins jobs.
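
Typical invocations from the Jenkins jobs look something like this. The script name is made up, TARGET_ENV and TARGET_HOSTS are the environment variables documented in the usage text, and the revision typically comes from Jenkins’ SVN_REVISION:

# Commit stage: update CI from svn, parse changed manifests, compile all nodes
# (script name is hypothetical).
TARGET_ENV=ci ./puppet-promote.sh -pc -r "$SVN_REVISION"
# PROD stage: dry-run (noop) on the prod hosts before the real apply.
TARGET_ENV=prod TARGET_HOSTS=prod ./puppet-promote.sh -d -r "$SVN_REVISION"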

#!/bin/bash
set -e  # Exit on error

function usage {
echo "Usage: $0 -r  (-s|-p|-c|-d)
example:
$0 -pc -r 123
$0 -d -r 156
-r The svn revision to use
-s Add a sleep of 60 secs after svn up to be sure we have rsync:ed the puppet code to external puppet
-p parse the manifests changed in the commit(s) being promoted
-c compile all hosts in \$TARGET_ENV
-d Do a puppet dry-run (noop) on \$TARGET_HOSTS

Updates puppet modules from svn in \$TARGET_ENV on puppet master at the
beginning of run, and reverts if any failures.

The puppet master is used for parsing and compiling.

This script relies on environment variables:
* \$TARGET_ENV for svn
* \$TARGET_HOSTS for dry-run
";
}

if [ $# -lt 1 ]; then
usage; exit 1;
fi

# Set options
sleep=false; parse=false; compile=false; dryrun=false;
while getopts "r:spcd" option; do
case $option in
r) REVISION="$OPTARG";;
s) sleep=true;;
p) parse=true;;
c) compile=true;;
d) dryrun=true;;
*) echo "Unknown parameter: $opt $OPTARG"; usage; exit 1;;
esac
done
shift $((OPTIND - 1))

if [ "x$REVISION" = "x" ]; then
usage; exit 1;
fi

# This directory is updated by a Jenkins job
cd /opt/tools/ci-jenkins/jenkins-home/common-tools/scripts/ansible/

# SVN UPDATE ##################################################################
declare -i OLD_SVN_REV
declare -i NEXT_SVN_REV
## Store old svn rev before updating so we can roll back if not OK
OLD_SVN_REV=`ssh -T admin@puppetmaster svn info /etc/puppet/environments/${TARGET_ENV}/modules/| grep -E '^Revision:' | cut -d ' ' -f 2`
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo "Current svn revision in ${TARGET_ENV}: $OLD_SVN_REV"
if [ "$OLD_SVN_REV" != "$REVISION" ]; then
# We could have more than one commit since last run (even if we use post-commit hooks)
NEXT_SVN_REV=${OLD_SVN_REV}+1
# Update Puppet master
ansible-playbook puppet-master-update.yml -i hosts --extra-vars="target_env=${TARGET_ENV} revision=${REVISION}"
# SLEEP #############################
if $sleep; then
echo 'Sleep for a minute to be sure we have rsync:ed the puppet code to external puppet...'
sleep 60
fi
else
echo 'Svn was already at required revision. Continuing...'
NEXT_SVN_REV=$REVISION
fi

# Final result ################################################################
declare -i RESULT
RESULT=0
set +e  # Don't exit on error. Collect the errors instead.

# PARSE #######################################################################
if $parse; then
# Parse manifests ###################
## Get only the paths to the manifests that were changed (to limit the number of parses).
MANIFEST_PATH_LIST=`svn -q -v --no-auth-cache --username $JENKINS_USR --password $JENKINS_PWD -r $NEXT_SVN_REV:$REVISION \
  log http://scm.company.com/svn/puppet/trunk \
  | grep -F '/puppet/trunk/modules' | grep -F '.pp' |  grep -Fv '   D' | cut -c 28- | sed 's/ .*//g'`
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo $'Manifests to parse:'; echo "$MANIFEST_PATH_LIST"; echo "";
for MANIFEST_PATH in $MANIFEST_PATH_LIST; do
# Parse this manifest on puppet master
ansible-playbook puppet-parser-validate.yml -i hosts --extra-vars="manifest_path=/etc/puppet/environments/${TARGET_ENV}/modules/${MANIFEST_PATH}"
RESULT+=$?
done

# Check template syntax #############
TEMPLATE_PATH_LIST=`svn -q -v --no-auth-cache --username $JENKINS_USR --password $JENKINS_PWD -r $NEXT_SVN_REV:$REVISION \
  log http://scm.company.com/svn/platform/puppet/trunk \
  | grep -F '/puppet/trunk/modules' | grep -F '.erb' |  grep -Fv '   D' | cut -c 28-`
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo $'Templates to check syntax:'; echo "$TEMPLATE_PATH_LIST"; echo "";
for TEMPLATE_PATH in $TEMPLATE_PATH_LIST; do
erb -P -x -T '-' modules/${TEMPLATE_PATH} | ruby -c
RESULT+=$?
done
fi

# COMPILE #####################################################################
if $compile; then
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo "Compile all manifests in $TARGET_ENV"
ansible-playbook puppet-master-compile-all.yml -i hosts --extra-vars="target_env=${TARGET_ENV} puppet_args=--color=false"
RESULT+=$?
fi

# DRY-RUN #####################################################################
if $dryrun; then
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo "Run puppet in dry-run (noop) mode on $TARGET_HOSTS"
ansible-playbook puppet-run.yml -i hosts --extra-vars="hosts=${TARGET_HOSTS} puppet_args='--noop --color=false'"
RESULT+=$?
fi

set -e  # Back to default: Exit on error

# Revert svn on puppet master if there was a problem ##########################
if [ $RESULT -ne 0 ]; then
echo $'\n######### ######### ######### ######### ######### ######### ######### #########'
echo $'Revert svn on puppet master due to errors above\n'
ansible-playbook puppet-master-revert-modules.yml -i hosts --extra-vars="target_env=${TARGET_ENV} revision=${OLD_SVN_REV}"
fi

exit $RESULT

Ansible playbooks

The ansible playbooks called from bash are simple.

puppet-master-compile-all.yml

---
# usage: ansible-playbook puppet-master-compile-all.yml -i hosts --extra-vars="target_env=ci1 puppet_args='--color=html'"

- name: Compile puppet catalogue for all hosts for a given environment on the puppet master
  hosts: puppetmaster-int
  user: ciadmin
  sudo: yes      # We need to be root
  tasks:
    - name: Compile puppet catalogue for each host in {{ target_env }}
      command: puppet master {{ puppet_args }} --compile {{ item }} --environment {{ target_env }}
      with_items: groups['ci1']

puppet-run.yml

---
# usage: ansible-playbook puppet-run.yml -i hosts --forks=12 --extra-vars="hosts=xyz-ci puppet_args='--color=false'"

- name: Run puppet agents for {{ hosts }}
  hosts: $hosts
  user: cipuppet
  tasks:
    - name: Trigger puppet agent run with args {{ puppet_args }}
      shell: sudo /usr/bin/puppet agent {{ puppet_args }} --test; rc=$?; if [ $rc -eq 2 ]; then echo 'Notice - There were changes'; exit 0; else exit $rc; fi
      register: puppet_agent_result
      changed_when: "'Notice - There were changes' in puppet_agent_result.stdout"

Ansible inventory file (hosts)

The hosts file is what triggers the ansible magic. Here’s an excerpt.

# BUILD SERVERS ###############################################################
[puppetmaster-int]
puppet.company.com

[puppetmaster-ext]
extpuppet.company.com

[puppetmasters:children]
puppetmaster-int
puppetmaster-ext

[puppetmasters:vars]
puppet_args=""

# System XYZ #######################################################################
[xyz-ci]
xyzint6.company.com
xyzext6.company.com

# PROD
[xyz-prod-ext]
xyzext1.company.com

[xyz-prod-ext:vars]
puppet_server=extpuppet.company.com

[xyz-prod-int]
xyzint1.company.com

[xyz-prod:children]
xyz-prod-ext
xyz-prod-int

...

# ENVIRONMENT AGGREGATION #####################################################
[ci:children]
xyz-ci
pqr-ci

[prod:children]
xyz-prod
pqr-prod

[all_envs:children]
dev
ci
st
qa
prod

# Global defaults
[all_envs:vars]
puppet_args=""
puppet_server=puppet.company.com

Marcus Philip
@marcus_phi


2 Responses to “Puppet change promotion and code base design”

  1. Nathan says:

    Marcus,

    If you don’t mind sharing what you’re trying to accomplish with Hiera + lambdas I might be able to present a work-around. The scenarios I’m able to cook up in my head…seems like create_resources() and Hiera hashes should accomplish most things.

    Let me know.

  2. Marcus Philip says:

    Thank you @Nathan. Good point.

    I am already using create_resources, and yes, it does solve some of my issue. However, if the data structure contains a key that does not correspond to a parameter of the class, it fails. I find it to be more strict than necessary.

    So I can still not have a large data structure with e.g. all my services. The thing is I have to correlate e.g. a port with a service so I cannot have just flat lists.

    What I did in one case, which I strongly dislike, was to add a class parameter THAT IS NOT USED just to be able to reuse the same data structure.

    I updated the blog post to include a note on this as well.