This is the third post in a three-part series on Puppet deployment.
As the last blog post in this series demonstrated, scaling Puppet deployment is hard. As your deployment grows in size and complexity and you have to maintain more modules, the tools for managing all of it start to break down.
If the current tools don’t cut it, then what do you need? What characteristics should a good deployment tool have?
First off, a good deployment tool should be fast. Slow deployments kill productivity, and they cripple your ability to react as things start happening. If you can deploy something very quickly and something goes wrong, then you can turn around and run another deployment to fix it. Basically, reaction time matters, and it matters quite a bit.
In addition, if you’re using a deployment tool as part of your development workflow then speed is absolutely critical. You want to have a very short feedback cycle between making a change and being able to test it, and if you are constantly waiting for your code to deploy then your productivity is going to be trashed.
This is the second post in a three-part series on Puppet deployment.
In my original envisioning of dynamic environments with Puppet, I had a narrow vision to fit my current situation. It was simple enough - you would have one and only one repository, it would contain all of your manifests, and that would basically be it.
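For context, that original one-repo vision mapped git branches to Puppet environments with configuration along these lines (a sketch of the classic pre-directory-environments setup; the paths are examples, not a prescription):

```ini
# puppet.conf on the master: each git branch is checked out to its own
# directory under /etc/puppet/environments, and $environment interpolation
# points the master at the right checkout.
[master]
    modulepath = /etc/puppet/environments/$environment/modules
    manifest   = /etc/puppet/environments/$environment/manifests/site.pp
```

With that in place, `puppet agent --environment=my_topic_branch` tests a branch before it ever reaches production.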
As of the date of this writing there are 850+ Puppet modules on the Puppet Forge, and a few thousand modules on GitHub. On top of those raw counts, the rate of module contribution is increasing and the quality of modules is steadily going up as people figure out how to make truly reusable modules. The adage goes "good coders code, great coders reuse," so it makes sense to publish your good modules and reuse existing work.
So how do you roll existing modules into your deployment?
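One common answer (assuming a tool in the r10k/librarian-puppet family) is a Puppetfile that declares every module your deployment needs, whether it comes from the Forge or from your own git repositories. The module names and versions below are illustrative, not recommendations:

```ruby
# Puppetfile -- a sketch of declaring module sources for r10k or
# librarian-puppet to install into each environment.
forge 'https://forgeapi.puppetlabs.com'

# Forge modules, pinned to released versions:
mod 'puppetlabs/stdlib', '4.1.0'
mod 'puppetlabs/apache', '0.9.0'

# Your own modules can come straight from git (hypothetical repo):
mod 'site_profiles',
  :git => 'git://github.com/example/puppet-site_profiles.git'
```

The deployment tool then fetches each declared module into the environment's modulepath, so upstream code never has to be copied into your own repository.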
This is the first post in a three-part series on Puppet deployment.
My first interaction with Puppet was when I was a junior sysadmin at my university. One of the previous lead Unix sysadmins had dabbled a little with Puppet when Puppet itself was a very new tool, and his work spurred the use of configuration management at the university. More and more of the infrastructure came under Puppet's management, and it became a fundamental part of day-to-day operations, critical to the smooth functioning of the Unix team.
It’s the kind of story you tend to hear all over the place when asking how people came to use Puppet.
Access to the Puppet manifests was fairly tightly controlled. After all, if you had access to the Puppet manifests running on a machine then you basically had root on that machine, so it made sense to lock things down. People had to show up and demonstrate some merit in the organization before they were given read access to the git repository containing the manifests. They were then encouraged to learn about Puppet and make contributions, but a senior sysadmin had to review and merge their code.
Configuration management is hard. Configuring systems properly is a lot of hard work, and trying to manage services and automate system configuration is a serious undertaking.
Even when you’ve managed to get your infrastructure organized in Puppet
manifests or Chef cookbooks, organizing your code can get ugly, fast. All too
often a new tool has to be managed under a short deadline, so any sort of code
written to manage it solves the immediate problem and no more. Quick fixes and
temporary code can build up, and before you know it your configuration
management becomes a tangled mess. Nobody intends for their
configuration management tool to get out of hand, but without guidelines for
development all it takes is a few instances of
git commit -a -m 'Good enough'
for the rot to set in.
Partial templates are a design pattern that is pervasive pretty much everywhere templating is used. The concept is very simple - any data that needs to be reused can be pulled out into a file by itself and applied within other templates. This also means that a particularly complex piece of templated behavior can be isolated and maintained on its own. And of course, if a bugfix needs to be applied to a partial, you make the change in one place.
Partial templates are frequently implemented by having one template call out to the templating engine with a second template, embedding the result inside the first. For instance, Ruby on Rails treats any template whose filename is prefixed with an underscore as a partial. So given the following code fragment:
<%= render "menu" %>
When this trivial template is evaluated, the file
_menu.html.erb will be
rendered and inserted into this template, per Rails conventions.
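The same mechanism is easy to sketch outside Rails with plain ERB from the Ruby standard library. The `TEMPLATES` hash and `render` helper below are hypothetical stand-ins - Rails' real `render` does file lookup, caching, and much more - but they show the core idea of one template embedding another:

```ruby
require "erb"

# Hypothetical in-memory "template files"; in Rails these would live on
# disk, with the partial named _menu.html.erb.
TEMPLATES = {
  "_menu"  => "<ul><li>Home</li><li>About</li></ul>",
  "layout" => "<nav><%= render 'menu' %></nav>",
}

# Mimic the Rails naming convention: `render 'menu'` looks up `_menu`.
def render(name)
  ERB.new(TEMPLATES.fetch("_#{name}")).result(binding)
end

puts ERB.new(TEMPLATES["layout"]).result(binding)
# => <nav><ul><li>Home</li><li>About</li></ul></nav>
```

Because the partial is rendered through the same engine, it can itself contain ERB tags and nest further partials.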