You’re Using Transients Wrong

The Transients API is an incredibly useful API in WordPress, and unfortunately one of the most misunderstood. I’ve seen almost everyone misunderstand what transients are used for and how they work, so I’m here to dispel those myths.

For those not familiar with transients, here’s a quick rundown of how they work. You use a very simple API in WordPress that acts basically as a key-value store, with an expiration. After the expiration time, the entry will be invalidated and the transient will be deleted from the transient store. Transients essentially operate the same as options, but with an additional expiration field.

By default in WordPress, transients are actually powered by the same backend as options. Internally, when you set a transient (say foo), it gets transparently translated to two options: one for the transient data (_transient_foo) and an additional one for the expiration (_transient_timeout_foo). Once requested, the value is then stored in the internal object cache, and subsequent accesses in the same request will reuse it, in much the same way options are cached. One of the most powerful parts of the transient API is that it uses the object cache, allowing a full key-value store to be used as the backend. However, the default implementation, and how the object cache can change this, is where two major incorrect assumptions come from.
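To make the default behaviour concrete, here's a simplified, self-contained model of that translation. This is illustrative only: the function names are invented, and this is not the actual WordPress implementation.

```php
<?php
// Simplified model of the default transient backend. In real WordPress,
// $options would be rows in the options table; these names are invented.
$options = array();

function model_set_transient( $key, $value, $expiration ) {
    global $options;
    $options[ '_transient_' . $key ] = $value;
    if ( $expiration > 0 ) {
        $options[ '_transient_timeout_' . $key ] = time() + $expiration;
    }
}

function model_get_transient( $key ) {
    global $options;
    $timeout_key = '_transient_timeout_' . $key;
    if ( isset( $options[ $timeout_key ] ) && $options[ $timeout_key ] < time() ) {
        // Expired entries are only invalidated (and deleted) on read.
        unset( $options[ '_transient_' . $key ], $options[ $timeout_key ] );
        return false;
    }
    return isset( $options[ '_transient_' . $key ] ) ? $options[ '_transient_' . $key ] : false;
}

model_set_transient( 'foo', 'bar', 60 );
// The single transient 'foo' is now backed by two "options":
// _transient_foo (the data) and _transient_timeout_foo (the expiry).
```

Note that with a persistent object cache backend, this two-option mapping never happens at all, which is exactly the trap discussed below.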

Object Caching and the Database

The first incorrect assumption that developers make is to assume the database will always be the canonical store of transient data. One big issue here is attempting to directly manipulate transient data via the option API; after all, transients are just a special type of option, right?

In the real world however, anything past your basic site will use an object cache backend. Popular choices here include APC (including the new APCu) and Memcache, which both cache objects in memory, not the database. With these backends, using the option API will return invalid or no data, as the data is never stored in the database.1

I’ve seen this used in real world plugins to determine if a transient is about to expire by directly reading _transient_timeout_foo. This will break and cause the transient to always be counted as expired with a non-default cache. Before you think about how to do this in a cross-backend compatible way: you can’t. Some backends simply can’t do this, and until WordPress decides to provide an API for this, you can’t predict internal behaviour of the backends.


The second incorrect assumption that most developers make is that the expiration date is when the transient will expire. In fact, the inline documentation even states that the parameter specifies the “time until expiration in seconds”. This assumption is correct for the built-in data store: WordPress only invalidates transients when attempting to read them (which has led to garbage collection problems in the past). However, this is not guaranteed for other backends.

As I noted previously, transients use the object cache for non-default implementations. The really important part to note here is that the object cache is a cache, and absolutely not a data store. What this means is that the expiration is a maximum age, not a minimum or set point.

One place this can happen easily is with Memcache set in Least Recently Used (LRU) mode. In this mode, Memcache will automatically discard entries that haven’t been accessed recently when it needs room for new entries. This means less frequently accessed data (such as data used by cron jobs) can be discarded before it expires.

What the transient API does guarantee is that the data will not exist past the expiration time. If I set a transient to expire in 24 hours, and then attempt to access it in 25 hours’ time, I know that it will have expired. On the other hand, I could access it 5 seconds later in a different request and find that it has already been evicted.

Real world issues are common with the misunderstanding of expiration times. For WordPress 3.7, it was proposed to wipe all transients on upgrade for performance reasons. Although this was eventually changed to just expired transients, it revealed that many developers expect the data to exist until the expiration. As a concrete example, WooCommerce Subscriptions originally used transients for payment-related locking. Eventually, Brent (the lead developer) found that these locks were being silently dropped and users could in fact be double-billed in some cases. This is not a theoretical issue, but a real-world instance of the maximum-age behaviour. The solution to this particular issue was to swap transients out for options, which are guaranteed not to be dropped.
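To see why the cache semantics matter for locking, here's a minimal, self-contained sketch (all names invented; this is not the actual WooCommerce code) of why a lock belongs in a durable store:

```php
<?php
// Sketch of the locking pitfall: a lock held in a cache can silently
// vanish, while a lock held in durable storage cannot. $store models
// either backend as a plain array.
function acquire_lock( array &$store, $key ) {
    if ( isset( $store[ $key ] ) ) {
        return false; // Someone already holds the lock.
    }
    $store[ $key ] = true;
    return true;
}

$options = array(); // Durable: entries are never silently dropped.
acquire_lock( $options, 'payment_lock_123' );           // first request wins...
$second = acquire_lock( $options, 'payment_lock_123' ); // ...second is refused

// Had the lock been a transient on a cache backend, the entry could have
// been evicted between the two calls, and both requests would have
// "acquired" the lock — the double-billing scenario described above.
```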

When Should I Use Transients?

“This all sounds pretty doom and gloom, Ryan, but surely transients have a valid use?”, you say. Correct, astute reader, they’re a powerful tool in the right circumstances and a much simpler API than others.

Transients are perfect for caching any sort of data that should be persisted across requests. By default, WordPress’ built-in object cache uses global state in the request to cache any data, making it useless for caching persistent data. Transients fill the gap here, by using the object cache if available and falling back to database storage if you have a non-persistent cache.

One application of this persistent caching that fits perfectly is fragment caching. Fragment caching applies full page caching techniques (like object caching) to individual components of your page, such as a sidebar or a specific post’s content. Mark Jaquith’s popular implementation previously eschewed transients due to the lack of garbage collection combined with key-based caching; however, this is not a concern with the upcoming WordPress 3.7.

Another useful application of transient storage is for caching long-running tasks. Tasks like update checking involve remote server calls, which can be costly both in terms of time and bandwidth, so caching these makes sense. WordPress internally caches the result from update checking, ensuring that excess calls to the internal update check procedures don’t cause excessive load on the server. While the object caching API would work here, the default implementation would never cache the result persistently.
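The shape of this caching is a read-through pattern: return the cached value if it's present and fresh, otherwise regenerate and cache it. Here's a generic, self-contained sketch, with cached() as an invented helper; in WordPress, the array-backed store would be get_transient() and set_transient() calls instead.

```php
<?php
// Read-through cache sketch. $cache stands in for the transient store,
// $ttl for the expiration in seconds, $generate for the slow task.
function cached( array &$cache, $key, $ttl, callable $generate ) {
    if ( isset( $cache[ $key ] ) && $cache[ $key ]['expires'] > time() ) {
        return $cache[ $key ]['value']; // Fresh hit.
    }
    // Miss, expired, or evicted: always be ready to regenerate.
    $value = $generate();
    $cache[ $key ] = array( 'value' => $value, 'expires' => time() + $ttl );
    return $value;
}

$cache  = array();
$result = cached( $cache, 'update_check', 12 * 3600, function () {
    return 'result of a slow remote call'; // e.g. an HTTP update check
} );
```

Because the generator always runs on a miss, this pattern is safe under every guarantee transients actually make: it doesn't matter whether the entry expired or was evicted early.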


Transients are awesome, but there are some important things to watch out for:

  • Transients are a type of cache, not data storage
  • Transients aren’t always stored in the database, nor as options
  • Transients have a maximum age, not a guaranteed expiration
  • Transients can disappear at any time, and you cannot predict when this will occur

Now, go out and start caching your transient data!

  1. The reason I say invalid or no data here is because it’s possible for a transient to be stored in the database before enabling an object cache, so that would be read directly.

Requests for PHP: Version 1.6

It’s been a while since I released Requests 1.5 two years ago (!), and I’m trying to get back on top of managing all my projects. The code in Requests has been sitting there working perfectly for a long time, so it’s about time to release a new version.

Announcing Requests 1.6! This release brings a chunk of changes, including:

  • Multiple request support – Send multiple HTTP requests with both fsockopen and cURL, transparently falling back to synchronous when not supported. Simply call Requests::request_multiple(), and servers with cURL installed will automatically upgrade to parallel requests.

  • Proxy support – HTTP proxies are now natively supported via a high-level API. Major props to Ozh for his fantastic work on this.

  • Verify host name for SSL requests – Requests is now the first and only PHP standalone HTTP library to fully verify SSL hostnames even with socket connections. This includes both SNI support and common name checking.

  • Cookie and session support – Adds built-in support for cookies (built entirely as a high-level API). To complement cookies, sessions can be created with a base URL and default options, plus a shared cookie jar.

  • Opt-in exceptions on errors – You can now call $response->throw_for_status() and a Requests_Exception_HTTP exception will be thrown for non-2xx status codes.

  • PUT, DELETE, and PATCH requests are now all fully supported and fully tested.

  • Composer support – You can now install Requests via the rmccue/requests package on Composer.

So, how do you upgrade? If you’re using Composer, you can bump your minimum version to 1.6 and then update. (Note that you should remove minimum-stability if you previously set it for Requests.) Otherwise, you can drop the new version in over the top and it will work out of the box. (Version 1.6 is completely backwards compatible with 1.5.)

What about installing for the first time? Just add this to your composer.json:

    "require": {
        "rmccue/requests": ">=1.6"
    }

Alternatively, you can now install via PEAR:

$ pear channel-discover
$ pear install rmccue/Requests

Alternatively, head along to the release page and download a zip or tarball directly.

Along with 1.6, I’ve also created a fancy new site, now powered by Jekyll. This is hopefully a nicer place to read the documentation than on GitHub itself, and should be especially handy to new users.

This release includes a lot of new changes, as is expected for such a long release cycle (although hopefully a little shorter next time). One of the big ones here is the significantly improved SSL support, which should guarantee completely secure connections on all platforms. This involved a lot of learning about how the internals of SSL certificates work, along with working with OpenSSL. Getting them working in a compatible manner was also not particularly easy; I spent about an hour tracking back through PHP’s source code to ensure that stream_socket_client had the same level of availability as fsockopen (it does) all the way back to PHP 5.2.0 (it did).

In all, 19 fantastic third-party contributors helped out with this release, and I’d like to acknowledge those people here:

Feedback on this release would be much appreciated, as always. I look forward to hearing from you and working with you to improve Requests even further!

The Next Stage of WP API

As you may have seen, my Summer of Code project is now over with the release of version 0.6. It’s been a fun time developing it, and an exceptionally stressful time as I tried to balance it with uni work, but worth it nonetheless. Fear not however, as I plan to continue working on the project moving forward. A team has been assembled and we’re about to start work on the API, now in the form of a Feature as a Plugin. To power the team discussions, we’ve also been given access to Automattic’s new o2 project (which I believe is the first public installation).

Throughout the project, I’ve been trying to break new ground in terms of development processes. Although MP6 was the first Feature as a Plugin (and the inspiration for the API’s development style), the API is the first major piece of functionality developed this way and both MP6 and the API will shape how we consider and develop Features as Plugins in the future. However, while MP6 has been developed using the FP model, the development process itself has been less than open, with a more dictatorial style of project management. This works for a design project where a tight level of control needs to be kept, but is less than ideal for larger development projects.

I’ve been critical, both publicly and privately, of some of WordPress’ development processes in the past; in particular, the original form of team-based development was in my opinion completely broken. Joining an existing team was near impossible, starting new discussion within the team was hard, and meetings were inevitably tailored to the team lead’s timezone. The Make blogs also fill an important role as a place for teams to organise, but are more focused towards summarising team efforts and planning future efforts than the discussion itself.

At the other end of the spectrum is Trac, which is mainly for discussing specifics. For larger, more conceptual discussions, developers are told to avoid Trac and use a more appropriate medium. This usually comes in the form of “there’s a P2 post coming soon, comment there”, which is not a great way to hold discussion; it means waiting for a core developer to start the discussion, and your topic might not be at the top of their mind. In addition, Make blogs aren’t really for facilitating discussion, but are more of an announcement blog with incidental discussion.

Since the first iteration of teams, we’ve gotten better at organisation, but I think there’s more we can do.

This is where our team o2 installation comes in. I’ve been very careful to not refer to it as a blog, because it’s not intended as such. Instead, the aim is to bring discussions back from live IRC chats to a semi-live discussion area. Think of it as a middle ground between live dialogue on IRC and weekly updates on a make blog. The idea is that we’ll make frequent posts for both planning and specifics of the project, and hold discussion there rather than on IRC. It’s intended to be a relatively fast-moving site, unlike the existing Make blogs. In addition, o2 should be able to streamline the discussion thanks to the live comment reloading and fluid interface.

Understandably for an experiment like this, there are many questions about how it will work. Some of the questions that have been asked are:

  • Why is this necessary? As I mentioned above, I believe this fits a middle ground between live discussion and weekly updates. The hope is for this to make it easier for everyone to participate.
  • Why isn’t this a Make blog? The Make blogs are great for longer news about projects, but not really for the discussion itself. They’re relatively low traffic blogs for long term planning and discussion rather than places where specifics can be discussed.
  • Why is it hosted on rather than Two main reasons: I wanted to try o2 for this form of discussion; and there’s a certain level of bureaucracy to deal with for Make, whereas setting up a new blog on was basically instant. The plan is to migrate this to Make if the experiment works, of course.
  • If you want to increase participation, why is discussion closed to the team only? Having closed discussion is a temporary measure while the team is getting up to speed and we work out exactly how this experiment will work. Comments will be opened to all after this initial period.

Fingers crossed, this works. We’re off to somewhat of a slow start at the moment, which is to be expected when starting up a large team from scratch on what is essentially an existing project. There’s a lot of work to do here, and we’ve got to keep cracking at the project to keep the momentum going. With any luck, we can start building up steam and forge a new form of organisation for future projects.

A Vagrant and the Puppet Master: Part 2

Having a development environment setup with a proper provisioning tool is crucial to improving your workflow. Once you’ve got your virtual machine set up and ready to go, you need to have some way of ensuring that it’s set up with the software you need.

(If you’d like, you can go and clone the companion repository and play along as we go.)

For this, my tool of choice is Puppet. Puppet is a bit different from other provisioning systems in that it’s declarative rather than imperative. What do I mean by that?

Declarative vs Imperative

Let’s say you’re writing your own provisioning tool from scratch. Most likely, you’re going to be installing packages such as nginx. With your own provisioning tool, you might just run apt-get (or your package manager of choice) to install it:

apt-get install nginx

But wait, you don’t want to run this if you’ve already got it set up, so you’re going to need to check that it’s not already installed, and upgrade it instead if so.

if ! which nginx > /dev/null; then
    apt-get install nginx
else
    apt-get install --only-upgrade nginx
fi

This is relatively easy for basic things like this, but for more complicated tools, you may have to work this all out yourself.

This is an example of an imperative tool. You say what you want done, and the tool goes and does it for you. There is a problem though: to be thorough, you also need to check that it has actually been done.

However, with a declarative tool like Puppet, you simply say how you want your system to look, and Puppet will work out what to do, and how to transition between states. This means that you can avoid a lot of boilerplate and checking, and instead Puppet can work it all out for you.

For the above example, we’d instead have something like the following:

package {'nginx':
    ensure => latest
}

This says to Puppet: make sure the nginx package is installed and up-to-date. It knows how to handle any transitions between states rather than requiring you to work this out. I personally prefer Puppet because it makes sense to me to describe how your system should look rather than writing separate installation/upgrading/etc routines.

(To WordPress plugin developers, this is also the same approach that WordPress takes internally with database schema changes. It specifies what the database should look like, and dbDelta() takes care of transitions.)

Getting It Working

So, now that we know what Puppet is going to give us, how do we get it set up? Usually, you’d have to go and ensure that you install Puppet on your machine, but thankfully, Vagrant makes it easy for us. Simply set your provisioning tool to Puppet and point it at your main manifest file:

config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
    puppet.module_path    = "modules"
    # puppet.options      = '--verbose --debug'
end

What exactly is a manifest? A manifest is a file that tells Puppet what you’d like your system to look like. Puppet also has a feature called modules that add functionality for your manifests to use, and I’ll touch on that in a bit, but just trust this configuration for now.

I’m going to assume you’re using WordPress with nginx and PHP-FPM. These concepts are applicable to everyone, so if you’re not, just follow along for now.

First off, we need to install the nginx and php5-fpm packages. The following should be placed into manifests/site.pp:

package {'nginx':
    ensure => latest
}
package {'php5-fpm':
    ensure => latest
}

Each of these declarations is called a resource. Resources are the basic building block of everything in Puppet, and they declare the state of a certain object. In this case, we’ve declared that we want the state of the nginx and php5-fpm packages to be ‘latest’ (that is, installed and up-to-date).

The part before the braces is called the “type”. There are a huge number of built-in types in Puppet and we’ll also add some of our own later. The first part inside the braces is called the namevar and must be unique within the type; that is, you can only have one package {'nginx': } in your entire project. The parts after the colon are the attributes of the resource.

Next up, let’s set up your MySQL database. Setting up MySQL is a slightly more complicated task, since it involves many steps (installing, setting configuration, importing schemas, etc), so we’ll want to use a module instead.

Modules are reusable pieces for manifests. They’re more powerful than normal manifests, as they can include custom Ruby code that interacts with Puppet, as well as powerful templates. These can be complicated to create, but they’re super simple to use.

Puppet Labs (the people behind Puppet itself) publish the canonical MySQL module, which is what we’ll be working with here. We’ll want to clone this into our modules directory, which we set previously in our Vagrantfile.

$ mkdir modules
$ cd modules
$ git clone mysql

Now, to use the module, we can go ahead and use the class. I personally don’t care about the client, so we’ll just install the server:

class { 'mysql::server':
    config_hash => { 'root_password' => 'password' }
}

(You’ll obviously want to change ‘password’ here to something slightly more secure.)

MySQL isn’t much use to us without the PHP extensions, so we’ll go ahead and get those as well.

class { 'mysql::php':
    require => Package['php5-fpm'],
}

Notice there’s a new parameter we’re using here, called require. This tells Puppet that we’re going to need PHP installed first. Why do we need to do this?

Rearranging Puppets

Puppet is a big fan of being as efficient as possible, and it doesn’t promise to apply your resources in the order you declared them. For example, while MySQL is still being installed, Puppet could happily go and start setting up our nginx configuration.

To control ordering, Puppet has the concept of dependencies. If any step depends on a previous one, you have to specify this dependency explicitly1. Puppet splits a run into two parts: first it compiles the resources to work out your dependencies, then it executes the resources in an order that satisfies them.

There are two ways of doing this in Puppet: you can specify require or before on individual resources, or you can specify the dependencies all at once.

# Individual style
class { 'mysql::php':
    require => Package['php5-fpm'],
}

# Waterfall style
Package['php5-fpm'] -> Class['mysql::php']

I personally find that the require style is nicer to maintain, since you can see at a glance what each resource depends on. I avoid before for the same reason, but these are stylistic choices and it’s entirely up to you as to which you use.

You may have noticed a small subtlety here: references to other resources use a capitalised version of the type, with the namevar in square brackets. For example, if I declare package {'nginx': }, I refer to it later as Package['nginx']. This is somewhat strange at first, but you’ll quickly get used to it.

(We’ll get to namespaced resources soon such as mysql::db {'mydb': }, and the same rule applies here to each part of the name, so this would become Mysql::Db['mydb'].)

Important note: It’s important not to declare your resources with capitals, as this actually sets the default attributes. Avoid this unless you’re sure you know what you’re doing.

Setting Up Our Configuration

We’ve now got nginx, PHP, MySQL and the MySQL extensions installed, so we’re ready to start configuring everything to our liking. Now would be a great time to try vagrant up and watch Puppet run for the first time!

Let’s now go and set up both our server directories and the nginx configuration for them. We’ll use the file type for both of these.

file { '/var/www/vagrant.local':
    ensure => directory
}
file { '/etc/nginx/sites-available/vagrant.local':
    source => "file:///vagrant/vagrant.local.nginx.conf"
}
file { '/etc/nginx/sites-enabled/vagrant.local':
    ensure => link,
    target => '/etc/nginx/sites-available/vagrant.local'
}

And the nginx configuration for reference, which should be saved to vagrant.local.nginx.conf next to your Vagrantfile:

server {
    listen 80;
    server_name vagrant.local;
    root /var/www/vagrant.local;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}
(This is not the best way to do this in Puppet, but we’ll come back to that.)

Next up, let’s configure MySQL. There’s a mysql::db type provided by the MySQL module we’re using, so we’ll use that. This works the same way as the file and package types that we’ve already used, but obviously takes some different parameters:

mysql::db {'wordpress':
    user     => 'root',
    password => 'password',
    host     => 'localhost',
    grant    => ['all'],
    require  => Class['mysql::server']
}

Let’s Talk About Types, Baby

You’ll notice that we’ve used two different syntaxes above for the MySQL parts:

class {'mysql::php': }
mysql::db {'wordpress': }

The differences here are in how these are defined in the module: mysql::php is defined as a class, whereas mysql::db is a type. These reflect fundamental differences in what you’re dealing with behind the resource. Things that you have one of, like system-wide packages, are defined as classes. There’s only one of these per system; you can only really install MySQL’s PHP bindings once.2

On the other hand, types can be reused for many resources. You can have more than one database, so this is set up as a reusable type. The same is true for nginx sites, WordPress installations, and so on.

You’ll use both classes and types all the time, so understanding when each is used is key.

Moving to Modules

nginx and MySQL are both set up with our settings now, but it’s not really in a very reusable pattern yet. Our nginx configuration is completely hardcoded for the site, which means we can’t duplicate this if we want to set up another site (for example, a staging subdomain).

We’ve used the MySQL module already, but all of our resources are in our manifests directory at the moment. The manifests directory is more for the specific machine you’re working on, whereas the modules directory is where our reusable components should live.

So how do we create a module? First up, we’ll need the right structure. Modules are essentially self-contained reusable parts, so there’s a certain structure we use:

  • modules/<name>/ – The module’s full directory
    • modules/<name>/manifests/ – Manifests for the module, basically the same as your normal manifests directory
    • modules/<name>/templates/ – Templates for the module, written in Erb
    • modules/<name>/lib/ – Ruby code to provide functionality for your manifests

(I’m going to use ‘myproject’ as the module’s name here, but replace that with your own!)
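Assuming the ‘myproject’ name, the skeleton above can be created in one go (the two manifest files are the ones we’ll fill in below):

```shell
# Create the standard module skeleton for 'myproject'.
mkdir -p modules/myproject/manifests modules/myproject/templates modules/myproject/lib

# The manifests we'll write in this guide:
touch modules/myproject/manifests/init.pp   # will hold 'class myproject'
touch modules/myproject/manifests/site.pp   # will hold 'myproject::site'
```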

First up, we’ll create our first module manifest. For this first one, we’ll use the special filename init.pp in the manifests directory. Before, we used the names mysql::php and mysql::db, but the MySQL module also supplies a top-level mysql class. Puppet maps a::b to modules/a/manifests/b.pp, but a class or type called a maps to modules/a/manifests/init.pp.

Here’s what our init.pp should look like:

class myproject {
    if ! defined(Package['nginx']) {
        package {'nginx':
            ensure => latest
        }
    }

    if ! defined(Package['php5-fpm']) {
        package {'php5-fpm':
            ensure => latest
        }
    }
}

(We’ve wrapped these in defined() checks. It’s important to note that while Puppet is declarative, defined() is a compile-time check. If you’re making redistributable modules, you’ll always want to use it, as the same resource can’t be declared twice and users should be able to declare these in their own manifests.)

Next, we want to set up a reusable type for our site-specific resources. To do this in a reusable way, we also need to take in some parameters. There’s one special variable passed in automatically, the $title variable, which represents the namevar. Try to avoid using this directly, but you can use this as a default for your other variables.

Declaring the type looks the same as defining a function in most languages. We’ll also update some of our definitions from before.3

define myproject::site (
    $location,
    $name = $title,
    $database = 'wordpress',
    $database_user = 'root',
    $database_password = 'password',
    $database_host = 'localhost'
) {
    file { $location:
        ensure => directory
    }
    file { "/etc/nginx/sites-available/$name":
        source => "file:///vagrant/vagrant.local.nginx.conf"
    }
    file { "/etc/nginx/sites-enabled/$name":
        ensure => link,
        target => "/etc/nginx/sites-available/$name"
    }

    mysql::db {$database:
        user     => $database_user,
        password => $database_password,
        host     => $database_host,
        grant    => ['all'],
    }
}
(This should live in modules/myproject/manifests/site.pp)

Now that we have the module set up, let’s go back to our manifest for Vagrant (manifests/site.pp). We’re going to completely replace this now with the following:

# Although this is declared in myproject, we can declare it here as well for
# clarity with dependencies
package {'php5-fpm':
    ensure => latest
}
class { 'mysql::php':
    require => [ Class['mysql::server'], Package['php5-fpm'] ],
}
class { 'mysql::server':
    config_hash => { 'root_password' => 'password' }
}

class {'myproject': }
myproject::site {'vagrant.local':
    location => '/var/www/vagrant.local',
    require  => [ Class['mysql::server'], Package['php5-fpm'], Class['mysql::php'] ]
}

Note that we still have the MySQL server setup in the Vagrant manifest, as we might want to split the database off onto a separate server. It’s up to you to decide how modular you want to be about this.

There’s one problem still in our site definition: we still have a hardcoded source for our nginx configuration. Puppet offers a great solution to this in the form of templates. Instead of pointing the file to a source, we can bring in a template and substitute variables.

Puppet gives us the template() function to do just that, and automatically supplies all the variables in scope to be replaced. There’s a great guide and tutorial that explain this further, but most of it is self-evident. The main thing to note is that template() function’s template location is in the form <module>/<filename>, which maps to modules/<module>/templates/<filename>.

Our file resource should now look like this instead:

file { "/etc/nginx/sites-available/$name":
    content => template('myproject/site.nginx.conf.erb')
}

Now, we’ll create our template. Note the lack of hardcoded pieces.

server {
    listen 80;
    server_name <%= name %>;
    root <%= location %>;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}
(This should be saved to modules/myproject/templates/site.nginx.conf.erb)

Our configuration will now be automatically generated, and the name and location will be imported from the parameters to the typedef.

If you’d really like to go crazy with this, you can basically parameterise everything you want to change. Here’s an example from one of mine:

server {
    listen <%= listen %>;
    server_name <% real_server_name.each do |s_n| -%><%= s_n %> <% end -%>;
    access_log <%= real_access_log %>;
    root <%= root %>;

<% if listen == '443' %>
    ssl on;
    ssl_certificate <%= real_ssl_certificate %>;
    ssl_certificate_key <%= real_ssl_certificate_key %>;

    ssl_session_timeout <%= ssl_session_timeout %>;

    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
<% end -%>

<% if front_controller %>
    location / {
        fastcgi_param SCRIPT_FILENAME $document_root/<%= front_controller %>;
<% else %>
    location / {
        try_files $uri $uri/ /index.php?$args;
        index <%= index %>;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
<% end -%>
        fastcgi_pass <%= fastcgi_pass %>;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

<% if builds %>
    location /static/builds/ {
        alias <%= root %>/data/builds/;
    }
<% end -%>

<% if include != '' %>
    <% include.each do |inc| %>
        include <%= inc %>;
    <% end -%>
<% end -%>
}

There’s one small problem with our nginx setup. At the moment, our sites won’t be loaded in by nginx until the next manual restart/reload. Instead, what we need is a way to tell nginx that we need to reload when the files are updated.

To do this, we’ll first define the nginx service in our init.pp manifest.

service { 'nginx':
    ensure     => running,
    enable     => true,
    hasrestart => true,
    restart    => '/etc/init.d/nginx reload',
    require    => Package['nginx'],
}

Now, we’ll tell our site type to send a notification to the service when we should reload. We use the notify metaparameter here, and we’ve already set the service up above to recognise that as a “reload” command.

file { "/etc/nginx/sites-available/$name":
    content => template('myproject/site.nginx.conf.erb'),
    notify  => Service['nginx'],
}

file { "/etc/nginx/sites-enabled/$name":
    ensure => link,
    target => "/etc/nginx/sites-available/$name",
    notify => Service['nginx'],
}

nginx will now be notified that it needs to reload both when we create or update the config and when we actually enable the site.

(We need it on the config proper in case we update the configuration in the future, since the symlink won’t change in that case. The notification relates specifically to the resource, even if said resource is the link itself.)

You should now have a full installation set up and ready to serve from your Vagrant install. If you haven’t already, boot up your virtual machine:

$ vagrant up

If you change your Puppet manifests, you should reprovision:

$ vagrant provision

Machine vs Application Deployment

There can be a bit of a misunderstanding as to what should be in your Puppet manifests. This is something that can be a bit confusing, and I must admit that I was originally confused as well.

Puppet’s main job is to control machine deployment. This includes things like installing software, setting up configuration, etc. There’s also the separate issue of application deployment. Application deployment is all about deploying new versions of your code.

The part where these two can get conflated is installing your application and configuring it. For WordPress, you usually want to ensure that WordPress itself is installed. This is something that is probably outside of your application, since it’s fairly standard, and it only happens once. You should use Puppet here for the database configuration, since it knows about the system-wide configuration which is specific to the machine, not the application.

You probably also want to ensure that certain plugins and themes are enabled. This should not be handled in Puppet, since it’s part of your application’s configuration. Instead, create a must-use plugin that ensures these are set up correctly. That way, if your app is updated and rolled out, you don’t have to use Puppet to reprovision your server.
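As a sketch, such a must-use plugin can be tiny: WordPress loads everything in wp-content/mu-plugins/ unconditionally, so you can filter the active plugins list there. The plugin paths below are hypothetical examples:

```php
<?php
/**
 * Plugin Name: Required Plugins
 * Lives in wp-content/mu-plugins/, so it always runs.
 */

// Force our required plugins to always be active.
// (These paths are made-up examples; use your own.)
add_filter( 'option_active_plugins', function ( $plugins ) {
	$required = array(
		'my-seo-plugin/my-seo-plugin.php',
		'my-cache-plugin/my-cache-plugin.php',
	);

	return array_values( array_unique( array_merge( (array) $plugins, $required ) ) );
} );
```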

(If you do push this into your Puppet configuration, bear in mind that updating your application will now involve both deploying the code and reprovisioning the server.)

Wrapping Up

If you’d like, you can now go and clone the companion repository and try running it to test it out.

Hopefully by now you should have a good understanding both of Vagrant and Puppet. It’s time to start applying these tools to your workflow and adjusting them to how you want to use them. Keep in mind that rules are made to be broken, so you don’t have to follow my advice to the letter. Experiment, and have fun!

  1. There are a few cases where this doesn’t apply, but you should be explicit anyway. For example, files will autodepend on their directory’s resource if it exists.
  2. Yes, I realise you can do per-user installation, but a) that’s an insane setup; and b) you’ll need to handle package management yourself this way.
  3. This previously used hardcoded database credentials. Thanks to James Collins for catching this!

Introducing WP API

As many of you are aware, I was accepted into Google’s Summer of Code program this year to work on a JSON REST API for WordPress. WordPress already has internal APIs for manipulating data via the admin-ajax.php handler in addition to the XML-RPC API. However, XML can be a huge pain to both safely create and parse, and the existing admin API is locked down to authenticated users and is also tailored to the admin interface. The goal of this project is to create a general data API that speaks the common language of the web and uses easily parsable data.

I’d now like to introduce the official repository and issue tracker. There’s also the SVN repository which is kept in sync.

For the next few months, I’ll be busy implementing the API. Each week from now through the final submission has an individual plan, presented below.

May 27: Acceptance of Project, ensure up-to-speed on existing code
June 3: Work on design documents (response types/collections) and ensure agreement with mentors and interested parties (#264)
June 10: Complete core post type serialisation/deserialisation (basic reading/writing of raw data complete) (#265)
June 17: Work on collection pagination and metadata infrastructure (the full collection of posts can now be accessed and is correctly paginated, allowing for browsing via the API) (#266)
June 24: Creation of main collection views (main post archive, per date, search) (#267)
July 1: Further work on indexes and browsability (#268)
July 8: Create (independent) REST API unit tests for all endpoints covered so far (#269)
July 15: Creation of a Backbone.js example client for testing (#270)
July 22: Spare week to act as a buffer, since some tasks may take longer than expected
July 29: Midterm evaluation!
August 5: Creation/porting of existing generic post type API with page-specific data (#271)
August 12: Creation of attachment-related API (uploading and management) (#272)
August 19: Creation of revision API, and extending the post API to expose revisions (#273, #274)
August 26: Creation of term and taxonomy API (#275)
September 2: Finalisation of term and taxonomy API, and updating of test clients (#276)
September 9: Final testing with example clients (especially with various proxies and in live environments) and security review (#277)
September 15: Spare week for buffer
September 22: Final checking for bugs and preparation for final submission

At the end of each week throughout development, I’ll post a weekly update and tag a new release version, in a manner similar to the release process of MP6. The first release of the API will be posted shortly.

For those looking to keep track of development, I’ll be posting about the API here, which you can follow via the feed. A GSoC P2 is on its way and will be the official place to post comments and feedback (I’ll be crossposting back to here once that’s up). In the meantime, I’ll be posting on this blog and accepting comments here, which is a great way to ask questions and post feedback.

A Vagrant and the Puppet Master: Part 1

In my development workflow, my tools are the thing I deliberate over most. As anyone who follows me on Twitter can attest, I’m a huge fan of Git and Sublime Text, and conversely I hate Subversion and PhpStorm. I genuinely believe that my tools can make or break how I work, so I’m always looking to improve them and constantly searching for new ones.

Far and away, the tool that has changed how I work the most in the past year is Vagrant, paired with the Puppet configuration tool. For those who don’t know, Vagrant is a tool to create and manage virtual machines, while Puppet is a tool to configure and manage server configuration. The two work extremely well together for developing in a clean, reproducible environment, and they also provide an easy way to replicate server configuration between your development and production servers.

So, how does it work? Let’s walk through how to set up a development server, plus using that configuration in production!

The first step to getting started is to work out what operating system you want to use. Personally, I’m a fan of Ubuntu Server (12.04 LTS, Precise Pangolin, to be specific), so I’ll be using that in the examples, but you can use whatever you like. Official boxes are available for Ubuntu, and community-built boxes are available elsewhere (you can also build your own, but until you’re familiar with Vagrant, I’d recommend using a premade box).

To start off with, you’ll want to download your chosen box to avoid having to redownload it every time you recreate your box.

$ vagrant box add precise32 http://files.vagrantup.com/precise32.box

Next up, you want to create your Vagrant configuration and get ready to boot it up. This will create a Vagrantfile in your current directory, so set yourself up a new directory where all your Vagrant-related stuff will live.

$ mkdir example-site/
$ cd example-site/
$ vagrant init precise32
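At its core, the generated Vagrantfile just records which base box to use (most of the file is commented-out examples). Stripped down, it looks something like this:

```ruby
# Vagrantfile generated by `vagrant init precise32`
Vagrant.configure("2") do |config|
  # The base box we downloaded earlier
  config.vm.box = "precise32"
end
```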

Now, let’s boot up your new virtual machine and make sure it works. The up command will create a virtual machine from your base box if you don’t already have one, boot it, then provision it for usage. We’ll come back to that part in a bit.

$ vagrant up

To get access to your new (running) virtual machine, you’ll want SSH access. If you’re on Linux/Mac, vagrant ssh will work perfectly, but it’s a little harder on Windows. Vagrant tries to detect if your system supports command-line SSH, but doesn’t detect Cygwin environments. For cross-platform parity, I set up an alias called vsh that points to vagrant ssh on my Mac, and the SSH command on Windows, which looks something like:

# Mac/Linux
$ alias vsh='vagrant ssh'

# Windows
$ alias vsh='ssh -p 2222 -i ~/.vagrant.d/insecure_private_key vagrant@localhost'

We’re done testing our basic setup, so we can shut our VM down and destroy it, since we’ll want to boot from scratch next time.

$ vagrant destroy

We’ve now verified that the virtual machine works nicely, so let’s bust open your Vagrantfile and start tweaking it. Networking is the first thing we’ll need to get set up, so that we can access our server. Vagrant automatically forwards port 22 from the VM to port 2222 locally so that we can connect, but we also need port 80 for nginx, and we might need more later. We can either set up separate forwarded ports, or enable private networking (my preferred option). Uncomment the private networking line to enable it:

config.vm.network :private_network, ip: "192.168.33.10"

This IP address can be whatever you want, but you need to make sure it’s not covered by your existing network’s subnet. This is usually fine unless you have a custom subnet, in which case 10.x.x.x might be a better choice.

You’ll also want to set up a hostname for this. In your /etc/hosts file (on Windows, C:\Windows\System32\drivers\etc\hosts), point vagrant.local to this IP, along with any subdomains you may want. (Note: I’ve seen people use other names here; keep in mind that these may end up being actual domain names some day with ICANN’s new TLD policy, whereas .local is reserved for exactly this use.)
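For example, assuming you picked 192.168.33.10 as your private network address, the hosts entries might look like this (the subdomain is just an example):

```
192.168.33.10    vagrant.local
192.168.33.10    www.vagrant.local
```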

We’ve now got a working Vagrant setup. In part 2, we’ll take a look at setting up Puppet to get your software working automatically.

Editing Commits With Git

As developers who use GitHub, we use and love pull requests. But what happens when you only want part of a pull request?

Let’s say, for example, that we have a pull request with three commits. Unfortunately, the pull request’s author accidentally used spaces instead of tabs in the second commit, so we want to fix this up before we pull it in.

The main tool we’re going to use here is interactive rebasing. Rebasing is a way to rewrite your git repository’s history, so you should never use it on code that you’ve pushed up. However, since we’re bringing in new code, it’s perfectly fine to do this. The PR’s author may need to reset their repository, but their changes should be on separate branches to avoid this.

So, let’s get started. First, follow GitHub’s first two instructions to merge (click the little i next to the Merge Pull Request button):

git checkout -b otheruser-master master
git pull https://github.com/otheruser/repo.git master

Now that we have our repository up-to-date with theirs, it’s time to rewrite history. We want to rebase (rewrite) the last three commits, so we specify the range as HEAD~3:

git rebase -i HEAD~3

This puts us into the editor to change the rebase history. It should look like this:

pick c6ffde3 Point out how obvious it is
pick 9686795 We're proud of our little file!
pick a712c2c Add another file.

There are a number of commands we can pick here. For us, we want to leave the first and last unchanged, so we’ll keep those as pick. However, we want to edit the second commit, so we’ll change pick to edit there. Saving the file then spits us back out to the terminal. It’ll also tell us that when we’re done, we should run git commit --amend and git rebase --continue.
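After that change, the todo list from above would look like this:

```
pick c6ffde3 Point out how obvious it is
edit 9686795 We're proud of our little file!
pick a712c2c Add another file.
```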

Internally, what this does is replay all the commits up until the one with edit. After it replays that commit, it pauses and waits for us to continue it.

Now, we can go and edit the file. Make the changes you need here, then as it told us, it’s time to make amends:

git commit --amend

This folds the changes you just made into the previous commit (the one with the broken whitespace), creating a new commit. Once we’ve done that, we need to continue the rebase, as all the commits after this one have to be rewritten to point to our new one in the history.

git rebase --continue

Our history rewriting is done! It’s now time to merge it back into master, push up our changes and close the pull request. First we’ll need to switch back to master, then we can merge and push.

git checkout master
git merge otheruser-master
git push

Congratulations, you just (re)wrote history!

For those of you who want to try this, I’ve made a test repository for you to work on. Try it out and see if you can fix it yourself!

Thanks to Japh for asking the question that inspired this post!