I’ve been involved with the Puppet community for the last 6 or so years. It’s been great to be involved with such a great group of people, and I feel that I’ve been able to contribute a lot to benefit the community as a whole.

But as the years have passed, the work that started as a fun hobby has started to weigh on me. I have been the sole maintainer for many projects, have helped maintain many more, and have frequently been asked to help with yet more. At one point I could expect to see emails for one to two new GitHub issues a day, and conversations on several more. Keeping up with this volume of email alone is exhausting. Actually keeping up with fixing bugs, adding features, merging pull requests, and providing support has proved overwhelming.

I’m tired. I’m probably more than a little burned out. I’m unable to keep up with issues, and even if I could I don’t think it would do much good. It’s very important to me to provide a welcoming environment for people, and I don’t think I’m capable of providing the level of positivity and energy needed to do that.

» Read full post

Opiates and Testing

After more than a year off from blogging, I managed to wander into my blog drafts folder and found a mostly finished post from past ages. Perhaps an old post brought to life will kickstart new ones, so without further ado, here is a tale of painkillers, programming, and pontifications.

My discovery of unit testing and test driven development was a happy accident due to desperation and vicodin.

In my second year of college I took a computer science course on algorithms and data structures. All was going well until I was in a bad rock climbing accident, fell a good 30 feet or so, and broke my right foot in about 10 places. (Approximately; the doctors lost count of all the fractures.) I tried soldiering on in that algorithms class, but I discovered after the fact that vicodin and programming DO NOT MIX. Things got ugly when I tried writing a binary search tree. I would happily bash out code and nested if statements, try to run it, and watch my program spin off into an endless loop. It turns out that in my attempt to write a binary search tree, I had invented a whole new data structure - the binary wreath. It was bad.

» Read full post

The Puppet RAL: Types, Parameters, and Properties

This is the second post in a series on the Puppet RAL.

The RAL needs to be able to model any sort of thing you would want to manage, so it’s broken into a number of classes that together can describe any resource you could need. This blog post discusses the Puppet Type, Parameter, and Property classes and what they do.


The Type

A Puppet Type represents a category of things you can manage. Things like users, services, files, and so forth are examples of common types. Types provide the structure for managing things and define what you can do with them.

service { 'my-service':
  ensure  => running,
  restart => '/usr/local/bin/restart-my-service',
}

An important detail is that Types are (mostly) abstract, and only define what instances of that Type can do. This is one of the killer features of the RAL for a number of reasons.

First off, it’s trivial to extend the core behavior of Puppet. If you want to add support for a new platform it’s not necessary to add extensions to core Puppet; you can implement a provider for that type and use it immediately.
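
The type/provider split can be sketched in plain Ruby. This is a toy illustration only - real Puppet types and providers are written with Puppet’s own DSL, not hand-rolled classes like these:

```ruby
# Toy sketch of the type/provider split: the "type" only declares what can
# be managed, while platform-specific behavior lives in interchangeable
# providers. (Illustrative only; not Puppet's actual API.)
class ServiceType
  PROVIDERS = {}

  def self.register_provider(name, klass)
    PROVIDERS[name] = klass
  end

  attr_reader :name, :provider

  def initialize(name, provider_name)
    @name = name
    @provider = PROVIDERS.fetch(provider_name).new(name)
  end

  # The type defines *what* you can do; the provider defines *how*.
  def start
    provider.start
  end
end

# Supporting a new platform means registering another provider;
# the type itself never changes.
class SystemdProvider
  def initialize(name)
    @name = name
  end

  def start
    "systemctl start #{@name}"
  end
end

ServiceType.register_provider(:systemd, SystemdProvider)

puts ServiceType.new('my-service', :systemd).start
```

The point of the sketch: adding, say, a launchd provider is a new class plus one registration line, with no changes to the type.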

» Read full post

The Puppet RAL: An Introduction

The Puppet RAL is one of the main parts of Puppet that people use on a daily basis. The RAL is how system state is actually inspected and enforced by Puppet, it’s where most extensions to Puppet are added, and it’s one of the parts of Puppet that really set it apart.

Unfortunately, the RAL is a little lacking in documentation and explanation. It provides a lot of powerful functionality, but unless you’ve been digging through the code on a regular basis, it can prove hard to make full use of it.

The state of developer documentation has been improving as of late. Puppet 3.1 added much-improved inline documentation to a lot of the code base, covering a good chunk of the RAL, which is a welcome improvement. In addition, Dan Bode and Nan Liu wrote the excellent Types and Providers book, which is arguably the best reference to the RAL to date.

» Read full post

Rethinking Puppet Deployment

This is the third post in a three-part series on Puppet deployment.

As the last blog post in this series demonstrated, scaling Puppet deployment is hard. As your deployment grows in size and complexity and you have to maintain more modules, the tools to manage this sort of thing start to break down.

If the current tools don’t cut it, then what do you need? What characteristics should a good deployment tool have?


First off, a good deployment should be fast. Slow deployments kill productivity and butcher your ability to react when things start happening. If you’re able to deploy something very quickly and something goes wrong, then you can turn around and run another deployment to fix it. Basically, reaction time matters, and it matters quite a bit.

In addition, if you’re using a deployment tool as part of your development workflow then speed is absolutely critical. You want to have a very short feedback cycle between making a change and being able to test it, and if you are constantly waiting for your code to deploy then your productivity is going to be trashed.

» Read full post

Scaling Puppet Environment Deployment

This is the second post in a three-part series on Puppet deployment.

In my original envisioning of dynamic environments with Puppet, I had a narrow vision that fit my situation at the time. It was simple enough - you would have one and only one repository, it would contain all of your manifests, and that would basically be it.

As of this writing there are 850+ Puppet modules on the Puppet Forge, and a few thousand modules on GitHub. On top of those raw counts, the rate of module contribution is increasing and the quality of modules is steadily going up as people figure out how to make truly reusable modules. The adage goes “good coders code, great coders reuse,” so it makes sense to publish your good modules and reuse existing work.

So how do you roll existing modules into your deployment?

» Read full post

How Dynamic Environments Came to Be

This is the first post in a three part series on Puppet deployment.

My first interaction with Puppet was when I was a junior sysadmin at my University. One of the previous lead Unix sysadmins had dabbled a little with Puppet when Puppet itself was a very new tool, and his work spurred the use of configuration management at the university. More and more of the infrastructure became managed with Puppet and it became a fundamental part of day to day operations and was critical to the smooth functioning of the Unix team.

It’s the kind of story you tend to hear all over the place when asking how people came to use Puppet.

Access to the Puppet manifests was fairly tightly controlled. After all, if you had access to the Puppet manifests running on a machine then you basically had root on the machine, so it made sense to lock things down. People had to show up and show some merit in the organization before they were given read access to the git repository containing the manifests. They were then encouraged to learn about Puppet and make contributions, but a senior sysadmin had to review and merge their code.

» Read full post

Configuration Management as Legos

This was my sysadvent entry for the 2012 series. For anyone that didn’t get a chance to read this year’s sysadvent series, go catch up!

Configuration management is hard. Configuring systems properly is a lot of hard work, and trying to manage services and automate system configuration is a serious undertaking.

Even when you’ve managed to get your infrastructure organized in Puppet manifests or Chef cookbooks, organizing your code can get ugly, fast. All too often a new tool has to be managed under a short deadline, so any sort of code written to manage it solves the immediate problem and no more. Quick fixes and temporary code can build up, and before you know it your configuration management becomes a tangled mess. Nobody intends for their configuration management tool to get out of hand, but without guidelines for development all it takes is a few instances of git commit -a -m 'Good enough' for the rot to set in.

» Read full post

Partial Templates with Puppet

Partial templates are a design pattern that’s pervasive pretty much everywhere templating is used. The concept is very simple - any sort of data that needs to be reused can be pulled out into a file by itself and applied within other templates. This also means that if there’s a particularly complex bit of templated behavior, it can be isolated and maintained by itself. And of course, if a bugfix is applied to a partial, you only have to make the change in one place.

Partial templates are frequently implemented as one template calling out to the templating engine with another template, and embedding the result inside of that first template. For instance, Ruby on Rails considers any template that’s prefixed with an underscore to be a partial template. So given the following code fragment:

<%= render "menu" %>

When this trivial template is evaluated, the file _menu.html.erb will be rendered and inserted into this template, per Rails conventions.
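
Stripped of Rails conventions, the mechanism is just one template invoking the engine on another and embedding the result. Here’s a minimal plain-Ruby sketch using stdlib ERB - the `render_partial` helper is invented for illustration; Rails’ `render` is the real-world equivalent:

```ruby
require 'erb'

# A hand-rolled version of the partial pattern: a helper renders a second
# template, and the outer template embeds the result. (`render_partial` is
# a made-up name for illustration, not a Rails or Puppet API.)
def render_partial(template_source)
  ERB.new(template_source).result(binding)
end

# The "partial" - in Rails this would live in _menu.html.erb.
MENU = "<ul><li>Home</li><li>About</li></ul>"

page = ERB.new('<nav><%= render_partial(MENU) %></nav>')
puts page.result(binding)
```

Running this prints the menu markup nested inside the `<nav>` element, exactly as the Rails `render "menu"` call would.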

» Read full post

The Angry Guide to Puppet 3

Update: An additional list of upgrade issues to be aware of can be found on the Puppet wiki.

A little while back I was asked to make the intrepid foray into the Puppet RCs. I was sent into the wild with only a machete and my wits, to explore the new world that would soon be upon us. I have returned, bloodied, torn, but gloriously victorious, and I am here to share my triumph with you all.

How’s that for a dramatic introduction?

Puppet 3 is a significant milestone for Puppet, and is the biggest such milestone since the jump from 0.25 to 2.6, feature-wise. It brings a lot of new features, tons of bugfixes, and mindblowing speed improvements.

One significant part of this release is that with Telly, Puppet is going to adhere to semantic versioning. Since Telly is a major version increment, it’s allowed to make backwards-incompatible changes, meaning that previously conventional behavior can be SMASHED.

» Read full post

Reading Puppet: The Transaction

So far in this blog series, we’ve talked about the configurer and pluginsync, and how those both use the catalog. However, these don’t go into the nuts and bolts of how the catalog is actually applied. The catalog itself is only really a data structure, and when you call catalog.apply, it hands off all of the work to the Transaction.

The Transaction is the part of Puppet that drives the actual state changes performed on the system. Part of this process involves handling dependencies and ordering for resources (which is harder than you might expect). Another part is taking a single resource, determining what state it’s currently in, and applying changes if they’re required. Yet another role of the Transaction is recording all of these events, like resources being out of sync and how they were synced, and logging them accordingly.
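
The per-resource loop can be sketched like this - simplified pseudologic of my own, not actual Puppet source, which deals with graphs, providers, and real events:

```ruby
# A simplified sketch of what a transaction does: walk resources in
# dependency order, compare current state to desired state, sync anything
# out of sync, and record an event for every change made.
# (Illustrative only; not Puppet's actual Transaction code.)
def apply_transaction(resources, current_state)
  events = []
  resources.each do |res|  # assume resources are already dependency-sorted
    actual = current_state[res[:name]]
    next if actual == res[:ensure]  # already in sync: nothing to do

    current_state[res[:name]] = res[:ensure]  # "sync" the resource
    events << "#{res[:name]}: #{actual} -> #{res[:ensure]}"
  end
  events
end

catalog = [
  { name: 'ntp-package', ensure: :installed },
  { name: 'ntp-service', ensure: :running },
]
state = { 'ntp-package' => :installed, 'ntp-service' => :stopped }
p apply_transaction(catalog, state)  # only the out-of-sync service changes
```

The interesting work in the real thing is everything this sketch waves away: computing that ordering from the dependency graph, and asking each resource’s provider for its actual state.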

» Read full post

Uptime means nothing

At one of the places that I worked, uptime was a pretty big deal. If a machine had less than 100 days of uptime, clearly something was wrong. If you were running Windows you had to reboot for monthly patches, and Windows desktops would be rebooted weekly or daily. By contrast, we had boxes whose uptime stretched to 200, 300, 500, or more days. We were convinced that this meant we were providing excellent service.

The reality is that uptime doesn’t mean a thing. What does matter is availability. It’s easy to confuse the two - if the machine’s up, it’s available, right? However, if you have a machine that hasn’t rebooted in a year, this is what I can tell you about it

Uptime isn’t a badge of pride, and if you are focusing only on uptime then you are ignoring the cost of the inevitable downtimes. If you’re trying to maintain uptime because you don’t have a way to maintain service, you have two options - bite the bullet and take the availability hit, or forego rebooting and take on the risks of vulnerability and a possibly prolonged downtime due to unaddressed issues. The opposite of downtime isn’t uptime, it’s availability.

» Read full post

Puppet, the Catalog, and You

If you’ve been using Puppet for any meaningful amount of time, you’ve heard the term ‘catalog’ thrown around. The official glossary defines the catalog as “a compilation of all the resources that will be applied to a given system and the relationships between those resources.” Well sure, that’s great, apparently the catalog holds a set of resources for a host. What does it look like? Is it a simple array, or perhaps a hash? How is the data stored? Is it a dump truck? Is it a series of tubes?

The catalog is a very important part of Puppet, but it’s a very big topic and has a number of touch points into the rest of the system. This post is going to be a high-level view of the catalog, what it is, and how it works.

From manifests to a catalog

When dealing with Puppet, you’ll find people throw around terms like catalogs and graphs and assume that you follow along, because they’ve been using Puppet long enough that these terms are completely familiar to them. (Alternately, they double majored in Computer Science and Mathematics, with a minor in pain.) But coming from the outside with a non-CS background, these terms can be as clear as mud.

» Read full post

Reading Puppet: Pluginsync

In my previous post on the Configurer, I only made a passing reference to pluginsync. However, pluginsync is rather important, both because it’s the mechanism by which Puppet can be extended, and because it’s a really interesting window into how Puppet configures itself. We’ll be looking at Puppet::Configurer::PluginHandler and Puppet::Configurer::Downloader in this post.

PluginHandler is a module that contains the basic pluginsync logic, and it’s mixed into the Configurer; it’s a convenient way to extract that logic out of the Configurer. The really interesting bit is the Downloader, because it shows how Puppet uses the same mechanisms that configure your system to configure itself.


The PluginHandler is actually rather small, implementing only the following methods:

To kick things off, the #download_plugins method is called in Puppet::Configurer#prepare.

» Read full post

Reading Puppet: the Configurer

Every time I dive into the source of Puppet, I seem to forget everything about as fast as I figure it out. I have the attention span of a small, overstimulated chipmunk, and there’s just a lot of detail to absorb, so contents tend to slip out of my brain. In light of this, I’ve decided that I’m going to try to blog on each module/class that I manage to decipher. It’ll force me to get my thoughts in one place. I’m also hoping that this will help other people who go delving into the source.

Note: All of this is done against 2.7.x. While I would love to start tearing into 3.0.0, it introduces some new behavior that I don’t want to talk about yet.

Also, this is just what I’ve been able to derive while reading the source, so I could be wrong. If you find something erroneous, please find me in #puppet on freenode and let me know. (Yes, I need to add comments to my blog. It’s on the TODO list.)

Getting started: Puppet::Configurer

The Configurer is the heart of the normal Puppet agent. When you think about the different stages of a normal agent run, it’s all kicked off by the Configurer. It handles pluginsync, uploading facts, retrieving a catalog, applying the catalog, and then submitting the report.

» Read full post

Fighting with Thor

I started writing this blog post and realized “holy expletives, this might be the most epic blog post title I’ve ever written!” But no, I’m not duking it out with a norse god.

When I first discovered Thor, I was all sorts of thrilled. Before that, all of my command line programs relied very heavily on optparse, and I had to add a ton of options and logic to handle subcommands. Thor was a whole new world: it allowed you to add a lot of discrete tasks without pain, and it had excellent option parsing that was both (mostly) simple and powerful.
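
For contrast, here’s a minimal sketch of the hand-rolled subcommand dispatch that the optparse era required (command and flag names are made up for illustration):

```ruby
require 'optparse'

# The pre-Thor pattern: dispatch on the first argument yourself, then
# parse the remaining flags separately for each command. Multiply this
# case statement by every subcommand your tool grows.
def run(argv)
  command = argv.shift
  options = {}
  case command
  when 'greet'
    OptionParser.new do |opts|
      opts.on('--name NAME') { |name| options[:name] = name }
    end.parse!(argv)
    "Hello, #{options[:name]}"
  else
    "usage: tool <command> [options]"
  end
end

puts run(['greet', '--name', 'world'])
```

Thor collapses all of this boilerplate into one method per task with declared options, which is exactly why it felt like such a relief.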

Better yet, it wasn’t rake. Rake was built as a replacement for Make, but since it allowed people to write discrete tasks in ruby, it’s been largely co-opted into a scripting framework. But for fuck’s sake, it’s hacky.

Don’t believe me?

task :eatbabies => :environment do
  puts "OM NOM NOM"
end

» Read full post

We Just Wanted to Make Telescopes

(Preface: Edsger Dijkstra isn’t the source of the following quote; Hal Abelson is.)

Every time Edsger Dijkstra is quoted as saying

Computer Science is no more about computers than astronomy is about telescopes.

I want to commit acts of violence, because it’s always used in conjunction with ignoring someone’s concerns.

An article titled Let’s Not Call It “Computer Science” If We Really Mean “Computer Programming” was recently posted to Hacker News, which was promptly jumped on by a number of commentors. One of the common responses to the article was that it was effectively saying “Math is hard, let’s go shopping.”

This is what I like to call “bullshit.”

(This is my blog, I get to swear on it. If you object to me swearing when I find a topic frustrating, please fuck off.)

» Read full post

I don't want to see your backtraces

Seriously. People. If you’re writing a ruby application, at the top level, do some basic exception handling. Please. If I’m not actively debugging your code, don’t let me see your backtrace.


A prime example of this is dealing with interrupts. Let’s say I’m using your tool and I hit ctrl-c, which raises an Interrupt. Interrupt is a descendant of Exception, so you can raise and rescue it like anything else. Perhaps I started running something and then realized I made a mistake. Maybe something’s taking too long. Either way, when I hit ctrl-c I want the application to clean up and exit in an expedient manner. I don’t want to see 40 or so lines of backtrace, none of which I care about.

Vagrant and Veewee are two nasty culprits. Both are really excellent tools, and I can’t live without them. However, I’m very indecisive and wind up killing the application during startup so I can go correct a config file, for example. What happens when I hit ctrl-c?

% vagrant status
^C/Users/adrien/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': Interrupt
        from /Users/adrien/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
        from /Users/adrien/.rvm/gems/ruby-1.9.3-p0@global/gems/net-ssh-2.2.2/lib/net/ssh/prompt.rb:81:in `<module:SSH>'
        from /Users/adrien/.rvm/gems/ruby-1.9.3-p0@global/gems/net-ssh-2.2.2/lib/net/ssh/prompt.rb:1:in `<module:Net>'
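
The fix is cheap: one rescue at the outermost layer of the program. A minimal sketch - the helper name, message, and cleanup are mine, not from any particular tool:

```ruby
# Top-level interrupt handling: rescue Interrupt at the outermost layer,
# clean up, and return a quiet exit status instead of letting the
# backtrace spill out. (Illustrative helper; names are made up.)
def run_guarded
  yield
  0
rescue Interrupt
  warn 'Interrupted - cleaning up.'
  130  # 128 + SIGINT(2), the conventional exit status for ctrl-c
end

# A real tool would call `exit run_guarded { do_the_work }`; here we
# simulate the user hitting ctrl-c by raising Interrupt directly.
puts run_guarded { raise Interrupt }
```

Ten lines at the entry point, and your users get a one-line message instead of a screenful of frames from net-ssh’s internals.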


» Read full post

Git submodules are probably not the answer

2013-05-31 Update: Using Puppet? Check out R10K!

As stated in a random blog post, git and puppet can be the best of friends and make your life a lot easier. However, when managing Puppet modules with git, you’ll run into a couple of cases where things start becoming harder.

Case 1: Using external modules

Puppet modules, when properly written, are extremely easy to reuse in your own environment. For example, why write an NTP module when someone else has already done it? So you have this module, and it happens to be on GitHub - but uh oh, your Puppet code is already in git. How do you compose your current set of modules with the external modules you want to add?

Case 2: Publishing your internal modules

You’ve taken the time to carefully build a set of modules, and they are wondrous pieces of art that you want to share with the world. However, you have a single git repository that contains all of your history, and you need to somehow extract this module from the rest of your code base so that you can throw it up on GitHub. You may also receive fixes and updates from the community, so you need to be able to sync changes back down. What is one to do?

» Read full post

Config files: better without logic

AKA: Why Ruby isn’t, and will never be, a configuration format

Using Ruby as a method for storing configuration data is increasingly common. Instead of having YAML, JSON, XML (oh god pointy) or INI (lol) files storing your lovely little configuration bits, we can start dumping all our data into Ruby, using hashes or DSLs. The author of configatron has astutely noted that if you use Ruby to store configuration, you can have Procs in your configuration files!

So wait, wait, let’s take a step back and ask the question - when did a Proc become configuration data? Do we really need that ability?

Seriously, by the time you need a Proc as config data, I’m pretty sure you’re doing something wrong. It may seem easier in the short run - harness all the power of Ruby in your configuration - but after a while, you start getting little applications in your configuration. Suddenly, these little applications become big applications, and then the gates of Sheol crumble and the undead swarm up to join the living - wait, maybe not that. But still, it’s messy and invites abuse.
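
To make the problem concrete, here’s what it looks like when a Proc sneaks into config - a configatron-style stand-in of my own, not configatron’s actual API:

```ruby
# The slippery slope: once config values can be Procs, "configuration"
# quietly becomes a program. (The hash is a stand-in for illustration.)
config = {
  log_dir:  '/var/log/myapp',  # plain data: easy to read, dump, and diff
  log_file: -> { "myapp-#{Time.now.strftime('%Y%m%d')}.log" },  # logic!
}

# Every consumer now has to know which values are data and which are code:
raw = config[:log_file]
filename = raw.respond_to?(:call) ? raw.call : raw
puts filename
```

Note what was lost: the plain-data version could be serialized to YAML, diffed, or validated; the Proc version can only be executed.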

» Read full post

Stop writing scripts and start writing libraries

A lot of us (especially the sysadmin types) started off programming with scripts. It was probably bash, or perl, or (like me, to my chagrin) Windows batch scripts. It was the simple way of getting started - no fighting with compilers, simple syntax, and bam, shit started getting done. This was awesome, and I’m sure the thrill of creation has drawn many people into coding.

However, a lot of people decided that scripts were good enough and stopped there. Work was getting done, so why fix what’s not broken? And admittedly, this seems like a pretty straightforward approach, and it’s why shell and perl scripts are the underpinnings of a good chunk of the Internet.

Let’s talk about gitweb.

No, really, go take a quick look. Go get a line count and then come back.

I’m guessing that gitweb started out as a small CGI script. It didn’t need to do much, just print out trees and blobs, right? Well, it would be cool if we could view diffs. And logs would be nice. And hey, being able to fork would rock! And you know, we should print better HTML, so have a nice header! And having authorization would be really nice too. Oh wait, we almost forgot syntax highlighting -

» Read full post

Design, information, and the new literacy

Being exposed to the ideas of User Experience and design has been a very eye-opening experience for me. It may sound rather obvious to take user expectations and perspectives into account, but it’s generally not emphasized in college-level Computer Science classes, or in education at large. After all, when you’re working on a project, be it a script, program, document, or website, you’re the producer, and thus you focus on the production, not necessarily the consumption. Taking the opposite perspective of “how do I create the best product for others” means inverting that default ordering.

Working on my website has been a very interesting case study on this topic. This is the first site I’ve created that actually looks somewhat pleasant, as opposed to utilitarian at best and downright ugly at worst. Since I have no art or design background, it took a lot of reading and research to figure out how one actually builds an attractive site. Design is an entire domain of knowledge that I was previously ignorant of, but now that I understand it, it’s hard to overstate its importance.

» Read full post

Obligatory introductions and metabloggery

WARNING: This product contains chemicals known to the State of California to cause cancer and birth defects or other reproductive harm.

Trying to write an introductory blog entry is a bit of a pain. You could go with some sort of banal introduction of yourself and your intentions for the blog, or you could wax eloquent on the nature of writing. I’ve been trying to phrase this for a while, so I think I’m just going to hammer out some words and be done with it.

I had the privilege of writing a couple of posts for the Puppet Labs blog, and I discovered that I greatly enjoyed it. It’s great to be able to share some of your ideas and find out that what you thought was common knowledge is actually somewhat insightful. After those posts, I started storing up a string of topics that I could talk about in the future. However, a lot of those topics weren’t about Puppet, and some were rants or pure opinion pieces that may not belong on a company blog, but could definitely find a home out on the digital ether.

» Read full post