Monday, April 20, 2015

It Ain't the Tool

I'm a motorcyclist.  One thing I learned a few years into riding was that it doesn't really matter what you ride-- it's how you ride it.  On the track, it's not uncommon to think you're going as fast as physics permit, then be passed by someone on a less-capable bike.  Worse, that person isn't even trying hard!

As engineers, we're often led to believe that we are limited by our tools.  If we had a better text editor, a better shell, a better programming language-- then, we could be better engineers or sysadmins or programmers.  I have that thought all the time, and then I think to myself: it ain't the tool.

Take a simple example: the shell.  In my decades of Unix system administration, I have never met anyone who used more than 15% of the shell's capabilities on the command line.  I'm not even talking about shell scripts-- I'm talking about the command line-- the most basic part of administering a system.  We all learn some basic bits and just use them again and again.  The people who work faster (and who avoid RSI) are the ones who incorporate more of the shell's built-in functionality into their skill set.

I am no exception.  I estimate that I use about 5% of the shell's capabilities on a given day.  When I pair with someone, they almost always ask for my bash history-- which, sadly, won't show what I did: the difference between how I work and how they work is that I use readline commands and history substitution.  What boggles the mind is that with my mere 5%, they think I'm fast.

Want an example?  Here's the simplest one I could make up on the spot:

You are logged into a debian system as an unprivileged user and you want to update apt's cache. You run:

apt-get update

and get an error because you need to be the superuser to run that command.  To correct it, what I typically see people do is type twenty keystrokes:

sudo apt-get update (enter)

Or worse, type twenty-one:

(up arrow) (left arrow fourteen times) sudo (space) (enter)

Instead, they could type eight and stay on the home row:

(ctrl P) (ctrl A) sudo (space) (enter)

or even:

sudo (space) !! (enter)

This is a simple example of what I'm talking about.  Learning this one simple task saves about two seconds each time you use it.   Adding a few more history substitutions and readline commands (in emacs mode) will save at least an hour per day if you spend much time on the command line.  
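A few more history substitutions and readline bindings worth learning early (this assumes bash in emacs mode; all of these are standard bash behavior, not anything exotic):

```
!!          previous command
!$          last argument of the previous command
!apt        most recent command starting with "apt"
^foo^bar    previous command with the first "foo" replaced by "bar"
Ctrl-R      incremental reverse search through history
Ctrl-W      delete the word before the cursor
```

Each of these replaces anywhere from five to fifty keystrokes of retyping or arrow-key travel.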

If you want to go fast, you have to learn your tools!  Set aside fifteen minutes a week to learn (and apply) some aspect of the tool you use most.  Fifteen minutes!  You'll be running circles around your coworkers sooner than you expect.

Friday, February 20, 2015

On Simplicity

If you want your infrastructure to be simple to manage, design a simple infrastructure.

Time and time again, I see intelligent engineers design complex systems, then struggle with managing those systems.  Not every system is simple, of course, but complexity will always carry a cost, regardless of the tools you adopt.

Adoption of a tool designed to simplify management of a system does not simplify that system.

I'm a strong proponent of configuration management.  A well-done configuration management system makes your infrastructure easier to manage.  If the infrastructure is already overly complex, though, it will be very difficult to create a simple configuration management system.  I generally recommend simplifying the infrastructure itself rather than writing a complex program (module, manifest, cookbook, recipe, etc.) to recreate a complex setup.

Thursday, January 22, 2015

On Chef Recipes and the Two Phases of Convergence

In my Chef consulting, one error I've seen novices make is not keeping in mind when recipe code is evaluated.  Recipes are evaluated in two passes: the compile phase and the execute phase.  The compile phase prepares all the resources to be converged-- any Ruby code that is not enclosed in a lazy block or a ruby_block will be evaluated at this time.  This includes variables for templates!  Anyway, the bad recipes follow this pattern:

file '/tmp/my_file' do
  content 'hello, world'

my_file_data ='/tmp/my_file')

file '/tmp/other_file' do
  content my_file_data
end

A naive reading of this recipe suggests that three things happen: a new file called "/tmp/my_file" is created (if missing) containing the string "hello, world", that file is read into the my_file_data variable, and another file, "/tmp/other_file", is created with the same content.

That's not what actually happens here.  In fact, this recipe won't converge at all unless /tmp/my_file has been previously created by something else.  Instead, you'll get an Errno::ENOENT.

The call is evaluated in the compile phase, but the file resources are converged in the execute phase, which happens only after the compile phase has completed!  If you need to access something created in the execute phase, the accessing code also needs to run in the execute phase; alternatively, you can push the file resources into the compile phase.  Moving resource convergence into the compile phase isn't ideal, though, so I generally advise the former over the latter.

Here's one way to make this recipe converge:

file '/tmp/my_file' do
  content 'hello, world'

my_file_data = nil

ruby_block 'read_file' do
  block { my_file_data ='/tmp/my_file') }

file '/tmp/other_file' do
  content lazy { my_file_data }
end

Three things have changed here:

  1.  The my_file_data variable is declared in the compile phase so that it is scoped appropriately for the /tmp/other_file resource.
  2.  The reading of the file is moved into a ruby_block resource so that it happens during the execute phase.
  3.  The content of the /tmp/other_file resource is enclosed in a lazy block to delay evaluation of my_file_data until the execute phase.

When you write your recipes, keep in mind when your code is evaluated.

Monday, December 29, 2014

GnuPG and proxy support

Via a long and tortuous route, I discovered an issue today with GnuPG and proxy support.

In short, if you are behind a proxy and you need to import an apt key using apt-key adv from a keyserver, you're out of luck.  There are bug reports for this issue on both GnuPG's bug tracker and Ubuntu's bug tracker.

The good news: since the issue is relatively minor-- the omission of a Host: header-- you can create a simple proxy server to proxy to your proxy!  Here's what I did in Ruby:

#!/usr/bin/env ruby

require 'uri'
require 'webrick'
require 'webrick/httpproxy'

# This is a proxy server which will make sure every GET request
# has a Host: header added which corresponds to the request host.
# It exists to work around the bug at
class GnuPGFixingProxyServer < WEBrick::HTTPProxyServer
  def setup_proxy_header(req, res)
    header = super
    header['Host'] ||=
  end
end

proxy_server = ENV['http_proxy'] ? URI(ENV['http_proxy']) : nil
proxy = '',
                                   Port: 3128,
                                   ProxyURI: proxy_server)

trap 'INT' do


After that, I could tell apt to use http://localhost:3128 as its proxy, and as long as the http_proxy environment variable was set to the real proxy, it would send the requests out with the Host: header added if missing.
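For reference, one way to point apt at the workaround proxy is a snippet like this (the file path and name are a common convention I'd assume here, not something dictated by apt):

```
# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://localhost:3128";
```

With that in place, apt's HTTP traffic-- including the apt-key adv fetch-- flows through the local fixing proxy, which forwards to the real proxy from http_proxy.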

At this time, I run this proxy by hand on the host that needs to import the key, just before importing it.  With a few minor changes, I'll be incorporating it into the provisioning process so all hosts have this out of the gate.