On Contribution

Every day, I open my computer to Slack. The first thing I see, every day, is the WordPress Slack icon reporting a problem.

Every day, I click the icon and log in through WordPress.org, to be told that my Slack account is still disabled.


Although I can no longer access Slack, I am still contributing as best as I can. I’m still working on GitHub issues, Trac tickets, and comments on the Make P2s, and chatting directly with other contributors.

I am trying as best as I can not to let it affect my work. Contributing without using Slack is difficult – back-and-forth over an issue that could be a fluid dialogue becomes long paragraphs over days instead. Attending meetings is impossible – I am reduced to reading the notes – and helping others is much more difficult than it should be.1

But no matter what I do, it’s intensely demotivating to be reminded, every day, that I have been declared persona non grata.

I’ve still received no communication from anyone in the project as to why I can no longer access Slack or why I am blocked on Twitter.

I’ve spent more than 20 years – more than two-thirds of my life – contributing to WordPress. I have poured much of my life into this project, spent sleepless nights worrying about it, and dealt with the stress and burnout caused by the politics, personalities, and personal attacks.

Although I’ve been exiled by the project officially, I don’t feel like I’m “on the outside” because of the community. Many, many people have reached out, and I thank them for that.

Automattic’s response to the injunction this week says that they’re “continuing to protect the open source ecosystem”, but one thing is clear: WP Engine did not block me.

I like to think I can shrug all of this off, that I don’t really care that much, but I can’t. Because I do care. Because I believe – I still believe – that this thing matters.


I am trying the best that I can.


But, it’s hard to imagine wanting to continue to work on WordPress after this.

Edit: A few hours since publishing this, I have been blocked from WordPress.org, and hence from contributing on Trac as well.

  1. I am lucky. My job does not depend on contributing.

A Stronger Foundation for the Ecosystem

The feud between Automattic and WP Engine has continued, with WordPress.org blocking access by WP Engine’s servers.

In WP Engine Must Win, I wrote about my thoughts on the legal argument on this battle, and why it is important that WP Engine win the trademark case in order to protect the ecosystem. I also touched on the moral argument:

The case that companies should contribute certain amounts (for example, 5% of time or resources) is one that reasonable people can argue over and disagree about – and we see other cases of this across the open source community. Raising whether certain companies are meaningfully contributing is the sort of advocacy befitting the Foundation, whether you agree with the specifics or not.

However, we should not confuse this worthwhile advocacy with the stunning claims that Matt and Automattic are making, and the impact this would cause upon the industry and the project itself.

The confusion between these arguments has clouded much of this discussion, and I’ve had both public and private responses to my post which have expressed gratitude for helping to clarify these.1 Richard Best’s excellent WP and Legal Stuff has also broken these arguments out, and I strongly encourage reading everything he has written on this topic.

Setting aside the legal argument, I want to address the broader points that Matt has made about the sustainability of the ecosystem, and about what companies should contribute.

I agree with Matt that for the strength of the ecosystem, we must defend the ideals of the project.

Protecting open source

Freedom Zero is the freedom to run the program for any purpose, and it is a foundational ideal of free software. The GPL license is clear that there is no obligation for users to contribute back to the software.

For a long time, this was enough. However, I have personally changed my view on these obligations over the years, and this latest case brings it further into focus.

It’s clear that we live in a world where open source and free software won. The challenges these licenses were created to face have been defeated. However, these licenses are ill-equipped to deal with the tragedy of the commons that is modern exploitation of open source.

Matt is right when he speaks about the ethos of open source being what makes it work, and we need to step up to reinforce this. There is a strong case to be made that bad actors are plundering the plentiful fields of open source and exploiting the spirit of the ecosystems, and if we do not act, the commons may crumble.

It’s also clear that we don’t have the right tools to deal with these problems right now. The WordPress trademark is being used in this legal battle because it is one of the only tools available, but in doing so, it has created negative consequences we must now all live with.

We need something better than this.

Empowering the WordPress Foundation

In order to meet these challenges, we need new tools.

The best way we can do this is to empower the WordPress Foundation, whose mission is:

To ensure free access, in perpetuity, to the software projects we support. People and businesses come and go, so it is important to ensure that the source code for these projects will survive beyond the current contributor base, that we may create a stable platform for web publishing for generations to come.

Currently, the Foundation is underequipped to achieve this mission, and does so only indirectly.

WordPress needs a strong Foundation to ensure its longevity into the future, one which is capable of fighting for the spirit of the ecosystem.

We can take inspiration from other open source ecosystems about this, including from the Drupal Association, the Python Software Foundation, and the Linux Foundation. We can follow their models by empowering the WordPress Foundation in three key areas.

The WordPress Foundation must be enabled and responsible

The Foundation needs to be stronger than it is today, and enabled to achieve its goals. It must also be trusted by the ecosystem in its role.

Currently, the Foundation plays a minor role in the operations and financial backing of the project. Its primary roles traditionally have been stewardship of the trademark, and operation of WordCamps – the latter of which is now run by a subsidiary public benefit corporation.

The largest costs for community services – employing contributors and running WordPress.org – are borne primarily by generous contributions directly from Matt and Automattic, along with contributions from many other companies to the project. Consequently, there’s little perceived benefit to direct donations to the Foundation, and the burden continues to fall on Matt and Automattic. These donations are truly commendable2, but we need to build a system that does not rely on this alone.

By acting transparently and being more active, the Foundation could build trust and earn the ability to solicit more support.

A clear start to achieve this is to empower the Foundation with a steering committee or board comprised of active community members, which can join Matt in actively driving the mission. At a minimum, this committee should be responsible for the Foundation’s use of trademarks and its policies – it may also make sense for it to have a say in the project’s direction and roadmap, as in Joost’s proposal.3

The Foundation could take a further step towards ensuring the continuity of the project by directly employing key contributors, following the model of the Linux Foundation, which employs key contributors like Linus Torvalds, Greg Kroah-Hartman, and Shuah Khan.4

With trust built in the Foundation, it could solicit memberships/sponsorships more strongly from companies, following the model of many other successful foundations, letting those companies benefit from the goodwill it creates (as Automattic does with their donations). In doing so, it could become financially independent, enabling WordPress to truly survive in perpetuity.

This ensures both that the Foundation can meet its goals of ensuring access even as people and businesses come and go, and also ensures the Foundation itself survives any changes – creating a virtuous cycle.

The WordPress Foundation must be clear

To ensure it can be trusted by the community, the Foundation needs to be clear on its policies, especially on the trademark policy and on community involvement.

Until the talk at WordCamp US, it was widely understood that the Five for the Future program was a suggested, voluntary program, encouraging companies to contribute 5% of their time or resources to the project. However, it is clear from Automattic’s actions that some level of contribution is now a requirement in the community.

The Foundation (not Automattic) should publish clear guidance about the expectations on the ecosystem. These expectations should be published in a contribution agreement, which should be enforced (more on that in a moment) via contractual obligations – rather than by tenuous trademark enforcement.

If policies change, or a suggestion becomes a requirement, this must be clearly and openly communicated with appropriate timelines, and not applied retrospectively. This ensures the Foundation can be trusted, and allows the ecosystem to act confidently, avoiding the chilling effects of uncertainty.

The WordPress Foundation must have teeth

In order to place pressure on the ecosystem to act well, the Foundation must have teeth. It must be prepared and equipped to vigorously defend the community and ecosystem.

WordCamps and community events are a vital part of the ecosystem, and many companies derive value from both sponsorships and attendance. In the same way that a code of conduct for individuals is enforced, the Foundation should be unafraid to require the contribution agreement for participation.

The central services provided by WordPress.org should also be part of the Foundation’s tools. Whilst the implementation and communication around the block on WP Engine left much to be desired, the underlying sentiment – that companies should not exploit a free service – is right, and the Foundation should be equipped to use it. This includes limiting the use of WordPress.org’s APIs as well as listing in the WordPress.org directory.

In order to enable the Foundation to use it as a tool, WordPress.org must be under the Foundation’s direct control.

In addition to these tools, the Foundation also controls the trademark. While I believe the specific case against WP Engine is overextended and dangerous to the community, the Foundation should defend its trademark in legitimate cases involving market confusion.

This includes enforcement of any licensed usage of the trademark. It is clear that the primary confusion is between Automattic’s WordPress.com product and the WordPress open source project – so much so that Automattic itself has to clarify to consumers. The Foundation can continue to act as a guard against intentional confusion and check that its licensees correct and clarify these cases.5

Moving forward

Putting the Foundation at the heart of defending the ethos and freedoms gives the whole community the ability to work together.

Matt put it best:

I believe that software, and in fact entire companies, should be run in a way that assumes that the sum of the talent of people outside your walls is greater than the sum of the few you have inside. None of us are as smart as all of us. Given the right environment — one that leverages the marginal cost of distributing software and ideas — independent actors can work toward something that benefits them, while also increasing the capability of the entire community.

Matt and Automattic have given immense amounts to the project, and a stronger Foundation gives us all the capability to share the burden. It provides a path forward towards a community that is sustainable in the long term, which encourages and creates good actors, and guides others to the right path.

This fight is bigger than any two companies going head to head, and this specific legal battle obscures the big picture of what we need to achieve.

WP Engine must win the trademark battle, but the open source ecosystem must win this war.

  1. To be clear, I have not and will not be speaking privately to anyone directly involved in the legal dispute. Apologies if I have not replied to your messages, but I welcome public replies to anything you disagree with.
  2. I have worked on or with WordPress for more than 20 years, and not once have I doubted Matt’s belief in and commitment to open source.
  3. This steering committee need not be the Foundation’s board directly. Joost would also make an excellent member of this committee.
  4. Historically, Audrey fulfilled a similar role for WordPress; however, this role has been absorbed into Automattic.
  5. While I don’t think Matt or Automattic leadership wilfully confuse these, it is important that an independent group keeps this in check, especially as Automattic continues to grow.

WP Engine Must Win

On stage at WordCamp US last week, Matt Mullenweg gave a keynote presentation which made a wide range of points about contribution, the ethics of open source, and the commitments various companies make to contributing. In particular, he called out WP Engine in what was a fairly clear direction to the community to stop using them. This was then followed with a post on WordPress.org.

Since then, further details have emerged about the conversations happening behind the scenes, as a result of WP Engine’s cease and desist, and Matt’s live Twitter Spaces (thanks to Courtney Robertson for her notes). It has also emerged that the WordPress Foundation has filed trademarks for “managed WordPress” and “hosted WordPress”.

In particular, the following details from WP Engine’s letter stand out:

Automattic CFO Mark Davies told a WP Engine board member that Automattic would “go to war” if WP Engine did not agree to pay its competitor Automattic a significant percentage of its gross revenues – tens of millions of dollars in fact – on an ongoing basis. Mr. Davies suggested the payment ostensibly would be for a “license” to use certain trademarks like WordPress, even though WP Engine needs no such license. WP Engine’s uses of those marks to describe its services – as all companies in this space do – are fair uses under settled trademark law and consistent with WordPress’ own guidelines.

These have been confirmed by Automattic’s counter letter, which also states Automattic is asking WP Engine to pay 8% of their revenue.

Until yesterday, the stated policy of the WordPress Foundation was:

All other WordPress-related businesses or projects can use the WordPress name and logo to refer to and explain their services

And:

The abbreviation “WP” is not covered by the WordPress trademarks and you are free to use it in any way you see fit.

(As of writing, the page was last updated at 2024-09-24T16:45:36; the prior version recorded by the Internet Archive was active at 2024-09-24T02:45:55.)

As WP Engine’s filing notes, it is long established trademark case law that trademarks may be used descriptively under fair use. In the phrases “hosted WordPress”, “headless WordPress”, “WordPress platform” (etc), the term “WordPress” is clearly being used descriptively – it is website hosting for the WordPress open source software.

There are no other terms that can substitute, and a reasonable person who understands that WordPress is an open source, installable project can clearly make this distinction. In the same manner, seeing other hosts offering “Apache & PHP hosting” is clearly descriptive, and not an attempt to pass off as officially licensed products of the respective trademark holders. (“Managed WordPress”, a term the Foundation has now filed trademarks for, has been used by the community since at least 2010, when it was popularised by Pagely.)

The first statement may have been the WordPress Foundation’s policy, but it is also clearly explaining cases of fair use. The second statement is a matter of fact: “WP” is not trademarked.

The trademark policy now states (2024-09-24 22:21):

The abbreviation “WP” is not covered by the WordPress trademarks, but please don’t use it in a way that confuses people. For example, many people think WP Engine is “WordPress Engine” and officially associated with WordPress, which it’s not. They have never once even donated to the WordPress Foundation, despite making billions of revenue on top of WordPress.

If you would like to use the WordPress trademark commercially, please contact Automattic, they have the exclusive license. Their only sub-licensee is Newfold.

(This policy page was authored by Matt, who is also CEO of Automattic. Automattic and Newfold also have a business relationship beyond trademark licensing, with Bluehost Cloud using Automattic’s WP Cloud infrastructure-as-a-service product. Newfold is also an investor in Automattic.)

Across these conversations, there is a clear letter of intent from Matt, Automattic, and the WordPress Foundation: if you use the term “WordPress” commercially in any way, Automattic may dictate the terms under which you may use it.

If Automattic were to win this legal argument, this would mean it is no longer possible for “WordPress agencies” to use the term, nor for hosts to offer “WordPress hosting”, nor for “WordPress plugins” to be commercially available. These would, under their argument, not be fair use, but rather an attempt to pass off your products as officially sanctioned by the WordPress Foundation and Automattic.

The only way any of these businesses would be able to operate is under the terms that Automattic chooses – in WP Engine’s case, that was 8% of their revenue. Any company could be subject to a shakedown for an arbitrary amount, or face ruinous legal action and intimidation in the public space.

If Automattic had the right to dictate any use of the trademark, this would be a severe net-negative for the WordPress project, the WordPress Foundation, and for open source projects in general. It would severely encumber any company merely seeking to describe the products and services they offer.

It would also have a chilling effect upon any commercial activity using WordPress, as any business could be targeted by Automattic for licensing fees, even those using the trademark descriptively, fairly, and in good faith.

This would directly work against the WordPress Foundation’s non-profit goal of serving the public good.

The case that companies should contribute certain amounts (for example, 5% of time or resources) is one that reasonable people can argue over and disagree about – and we see other cases of this across the open source community. Raising whether certain companies are meaningfully contributing is the sort of advocacy befitting the Foundation, whether you agree with the specifics or not.

However, we should not confuse this worthwhile advocacy with the stunning claims that Matt and Automattic are making, and the impact this would cause upon the industry and the project itself.

WP Engine must win this legal battle for the continued health and vibrancy of the WordPress project.

How WordPress Knows What Page You’re On

In the spirit of Dan Abramov’s Overreacted blog, where he deep-dives into React on his personal blog, I thought I’d do the same for WordPress. If there’s something you’d like to see, let me know!

Since WordPress 1.0, WordPress has supported “pretty permalinks”; that is, human-readable permalinks. This system is built for a lot of flexibility, and allows users to customise the format to their liking, using “rewrite tags”.

Screenshot of the WordPress permalinks screen, showing the presets as well as custom input with "available tags" buttons

Pretty permalinks are implemented through the Rewrite system, but how that works can be a bit obscure, even if you’re familiar with it.

“Rewrites”, for those who aren’t familiar, are how WordPress maps a pretty permalink to something it can use internally. This internal representation is called a “query” (which is a bit of an overloaded term), and is eventually used to build a MySQL query which fetches the requested posts from the database.

This “query” is not exactly the same as what you might think of as a query in WordPress. It’s a mix of parameters used in WP_Query to get posts (called “query arguments” or “query args”) as well as information about the request itself (called “query variables” or “query vars”). Query vars are typically only useful for the main query, and include routing information like whether a 404 error occurred. This will hopefully be clearer later.
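As a rough illustration (mine, not from the original post), the distinction looks something like this in code: the routed query vars live on the global WP object, while a WP_Query you build yourself only ever takes query arguments.

<?php
// After routing, the main request's query vars (including routing-only ones like
// 'error' or 'pagename') are stored on the global $wp object.
global $wp;
$routed_vars = $wp->query_vars; // e.g. array( 'pagename' => 'about' )

// A WP_Query constructed directly only takes query arguments; routing-only vars
// have no meaning here.
$recent = new WP_Query( array( 'post_type' => 'post', 'posts_per_page' => 5 ) );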

Let’s step through, chronologically, how WordPress turns your request into this query.

Aside: WP_Rewrite?

If you’re a seasoned WordPress developer, you might know Rewrites through the WP_Rewrite class. But perhaps surprisingly (or not, if you know how WordPress has evolved), rewrites are actually handled in the little-known WP class instead. Additionally, some (in fact, many) URLs and patterns are routed outside of regular rewrites.

We’re going to take a look at the whole process from where it starts, not just WP_Rewrite. The rewrite process really begins as soon as WordPress starts handling the request.

Bootstrapping

Before WordPress can get started with anything, it needs to first bootstrap everything. How this general process works is a topic for a different day, so I’ll just talk about the relevant bits here.

The key steps in the bootstrap process are:

Already during the bootstrap process, there are a few places where redirects or full requests can be served back. The most common case with full-page caching enabled is that the cache will serve back a request using its own routing. The other cases are mostly error cases, with the exception of multisite, which I’ll cover later.

Note that all of these cases happen before the Rewrite system is started, so it’s not possible to use rewrites to handle favicons, multisite routing, or caching. This is all by design, as these checks have to run early either for performance or to check for basic bootstrapping errors.

You can however use the various hooks provided in the bootstrapping process to handle these requests, if you register your callbacks before wp-settings.php is loaded. You can also handle it in your wp-config.php; don’t forget that’s just PHP, so you can run whatever code you want there.
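For example, here is a minimal sketch (my own, with a hypothetical /healthz path) of answering a request straight from wp-config.php, before the rest of WordPress loads:

<?php
// Hypothetical health-check endpoint handled before WordPress bootstraps any further.
if ( isset( $_SERVER['REQUEST_URI'] ) && '/healthz' === parse_url( $_SERVER['REQUEST_URI'], PHP_URL_PATH ) ) {
    header( 'Content-Type: text/plain' );
    echo "OK\n";
    exit;
}
// ...the usual wp-config.php constants continue below...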

Initialising the Routing

After the basic bootstrapping in WordPress is done, we get into the actual routing instantiation. Firstly, WordPress instantiates the critical routing classes (WP_Rewrite and WP).

Instantiating WP_Rewrite fires off rewrite initialisation. This loads in all the various settings and sets properties that can later be used for rewrite generation. This also includes setting the “verbose page rules” flag, which is used when your permalink structure contains one of a few specific tags: those which start with slugs, and would potentially cause pages and posts to have conflicting permalinks. Verbose rules change how routing happens later, causing WordPress to “double-check” the URL during routing.

Before WordPress 3.3 (specifically, #16687), verbose page rules caused one-rule-per-page to be generated, which (needless to say) wasn’t great for performance on large sites. This was changed to instead check only when necessary.

Once this is done, our oft-forgotten friend wp-blog-header.php kicks off the actual routing. This runs WP::parse_request which is where the actual routing in WordPress is (generally) done. Essentially the first thing this does is load in the “rewrite rules”.

Generating the Rules

Before we can start doing any routing, we need to convert the user settings to something we can actually work with. Specifically, we need to generate the rewrite rules.

Rewrite rules are essentially a gigantic map of regular expression to “query”. For example, the basic rule for categories looks like:

'category/(.+?)/?$' => 'index.php?category_name=$matches[1]'

If you’ve ever used any other routing in pretty much any web framework, you might wonder what the hell the thing on the right is. This is a WordPress “query string” (which is not the same thing as WP_Query). Essentially, all “pretty” permalinks in WordPress map to this intermediate “ugly” form first, before then being mapped into a WordPress query. This ensures compatibility with sites that don’t support pretty permalinks, but means that WordPress doesn’t directly support “rich” routing (such as callbacks, complex queries, etc).
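To make this concrete, here’s a small sketch (my example, with a hypothetical “review” post type) of registering such a rule yourself with add_rewrite_rule(), mapping a pretty URL to the intermediate “ugly” form:

<?php
add_action( 'init', function () {
    // Map the "pretty" form to the intermediate "ugly" query string.
    add_rewrite_rule(
        '^review/([^/]+)/?$',
        'index.php?post_type=review&name=$matches[1]',
        'top'
    );
} );
// The new rule only takes effect once the cached rules are regenerated (flushed).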

To generate these rules, we go back to the WP_Rewrite class, which attempts to load cached rewrites from the rewrite_rules option, and generates it if it is not available.

Building a Set of Rules

There are many sets of rewrite rules that are generated, and each is generated from a “permastruct” (for “permalink structure”) and an “endpoint mask”. The permastruct specifies the general format of the set of rules to generate, and the “endpoint mask” controls which suffixes (“endpoints”) are added to the permastruct.

A permastruct is a string with static parts and “rewrite tags”. Rewrite tags look like %post_id% and represent a dynamic part of the rewrite rule. WordPress contains a few built-in permastructs: “date”, “year”, and “month” for date archives; “category” and “tag” for the built-in terms; “author” for author archives; “search” for search results pages; “page” for static pages; and “feed” and “comment feed” for RSS/Atom feeds. It also has the main permastruct for single post pages, and “extra” permastructs as registered by plugins or themes.

The permastruct is combined with an endpoint mask, which is a bitmask specifying which additional rules to add to the main endpoint. WordPress includes 13 endpoint masks, plus 3 helper masks (EP_NONE, EP_ALL, and EP_ALL_ARCHIVES). These can be combined with bitwise operators (|, &, ~) to activate multiple endpoint masks at once.

Endpoint masks are very confusing for those unfamiliar with bitwise operations, so you typically don’t see them used much outside of WordPress core’s routes. Also, they’re not very extensible, as custom endpoint masks will conflict with each other. Avoid doing anything special with these, and generally follow existing guides on how to use them. Jon Cave’s post on Make/Plugins is the best way to understand them if you really want to get into it.
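In practice, the usual way to consume them is add_rewrite_endpoint(), which takes one of the built-in masks for you. A short sketch (mine, with a hypothetical ‘json’ endpoint) of adding a suffix to single post permalinks only:

<?php
add_action( 'init', function () {
    // Adds a /json(/...)? suffix to permalinks covered by EP_PERMALINK, and
    // registers a 'json' query var.
    add_rewrite_endpoint( 'json', EP_PERMALINK );
} );

add_action( 'template_redirect', function () {
    // The query var is set (possibly to an empty string) whenever the suffix was used.
    if ( false !== get_query_var( 'json', false ) ) {
        wp_send_json( array( 'id' => get_queried_object_id() ) );
    }
} );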

The permastruct and endpoint mask are passed to WP_Rewrite::generate_rewrite_rules(), which replaces the rewrite tags with their regular expression equivalents. It does additional parsing to then generate additional rules based on which rewrite tags were used, and using the endpoint mask. I won’t go into the specifics of this, as this is optimised code with lots of weirdness, but suffice to say it converts the parameters into an array of rules.

For example, the main post rewrite rules are generated using the user-set permastruct with the EP_PERMALINK endpoint mask. This takes the permalink_structure setting as the permastruct (which looks like /%post_id%/%postname%/). generate_rewrite_rules() turns this into rewrite rules to match things like post attachments, feeds, pages (as in, paged posts), comment pages, embeds, and the combination of all of these.

Collecting all the Sets

WordPress repeats the rewrite generation for each set of permastructs it knows about (plus the “extra” permastructs added by plugins or themes), and it then combines them into a single set of rules. It also adds in some additional static rules (for things like deprecated feeds and robots.txt). It runs a couple of filters to allow plugins and themes to add other static rules as well.

Extra permastructs are typically generated by core helpers like register_post_type() or register_taxonomy(). Plugins don’t typically add permastructs manually, as the generation makes a lot of assumptions about things you want.

Once all of this is done, WordPress saves the rules into the rewrite_rules option to avoid having to regenerate them on the next request. However, if a plugin has flushed the rewrite rules before wp_loaded, this saving is deferred to wp_loaded to ensure plugins don’t break the whole system.
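Putting those pieces together, a typical plugin never touches permastructs or the option directly; it registers a post type (which adds the extra permastruct for it) and flushes the cached rules once on activation. A rough sketch, with hypothetical names:

<?php
// Registering a post type with 'rewrite' args adds an "extra" permastruct for it.
function myplugin_register_book() {
    register_post_type( 'book', array(
        'public'      => true,
        'has_archive' => true,
        'rewrite'     => array( 'slug' => 'books' ),
    ) );
}
add_action( 'init', 'myplugin_register_book' );

// Regenerate and re-save the rules once, on activation, rather than on every request.
register_activation_hook( __FILE__, function () {
    myplugin_register_book();
    flush_rewrite_rules();
} );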

Now that we know we have rewrite rules (whether loaded from the option or generated fresh), we can finally get around to routing our requests.

Matching the Rules

Back in WP::parse_request(), we now have the full rewrite rule array ready to use. First, we set up and normalise the incoming request on top of the stuff already done during bootstrapping. This includes removing any path prefixes if WordPress is installed in a subdirectory (or if we’re on a subdirectory site in multisite).

Root requests (i.e. for /) are normalised to the empty string (''), and matched directly to the '$' rule, which improves performance for one of the most commonly-requested pages on the site. (As '$' is also (typically) the last rule in the rewrite array, this also saves us running potentially hundreds of regular expression checks that will never match.)

All other requests go into the main matching loop. This loop takes every rewrite rule and attempts to match the regular expression against the requested path (twice, in case the URL needs decoding). If the rewrite rule matches, the “query” for the rule is stored, and the loop breaks (as only one rule can match). If no matches are found, $wp->matched_rule remains unset.

If verbose page rules are set and the “query” contains the pagename query var, the loop first checks to see if the URL actually matches a real post. (It also checks that the post is public to ensure drafts aren’t accidentally exposed via their URL.) This check allows multiple post types to have overlapping rewrite rules, and means that potentially multiple rules can match a single request.

If a match is found, WordPress then parses the URL using the “query” string from the rule. This transforms a URL like /42/my-post/ into an array of query vars like [ 'p' => 42, 'name' => 'my-post' ]. This transformation is done using regular expressions which understand how to turn $matches[1] into the first item of the rule’s regular expression result.

This parser is used to maintain backwards compatibility with the older “parser”, which simply used eval() to parse the “query” into query vars.

WordPress also checks if the current request is for wp-admin/ or for a direct PHP file, and resets the query vars if so.

At this point, we’ve converted the requested URL into query vars, so the main part of the routing is done. All that’s left is to check that the query vars are allowed to be used, combine in $_GET (query string) and $_POST (data from the request body) variables, and apply some light sanitisation. Further permission checks and cleanup are also done to ensure everything is fairly normal. If any errors occurred, the error query var is also set to enable it to be handled later.
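If you need to adjust the result of this parsing, the ‘request’ filter runs right at the end of WP::parse_request(), after the rule has matched but before the query runs. A small sketch (my example, with a hypothetical tweak):

<?php
add_filter( 'request', function ( $query_vars ) {
    // Hypothetical: treat an unrecognised 'json' feed as the built-in rss2 feed.
    if ( isset( $query_vars['feed'] ) && 'json' === $query_vars['feed'] ) {
        $query_vars['feed'] = 'rss2';
    }
    return $query_vars;
} );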

Using the Query Vars

With the query vars all set and established, WordPress now starts using them. It does error handling based on the error query var as part of sending headers, and bails from the request if specific errors were hit (403, 500, 502, or 503 errors). It turns off browser and proxy caching for logged-in users, sends various caching headers for feeds, and sends the HTML Content-Type for everything else.

All the other query vars are passed as query arguments to WP_Query, and this sets the “main” query. After this is done, 404 requests are sent if WP_Query didn’t manage to find anything (with some conditions on that). If a 404 occurred during routing, WordPress checks this when parsing the query vars, and sets the internal 404 flag.
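If you want to change what that main query fetches, the usual place is pre_get_posts, which fires just before WP_Query runs with the routed query vars. A minimal sketch (mine):

<?php
add_action( 'pre_get_posts', function ( WP_Query $query ) {
    // Only touch the front-end main query, not admin screens or secondary queries.
    if ( ! is_admin() && $query->is_main_query() && $query->is_home() ) {
        $query->set( 'posts_per_page', 5 ); // hypothetical tweak
    }
} );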

The specifics of how the query is run and the results are rendered are out of scope for this explanation, and have been explained to death elsewhere, as you’ll actually need to interact with this in plugins and themes.

Special Cases

Multisite

While rewrite rules handle matching requests inside a site, a different system is used for matching requests to sites first. This is for a few different reasons: rewrite rules can be changed by plugins, which are site-specific; site data needs to be loaded first for rewrite settings; and multisite routing uses both the domain and the path.

Multisite routing is kicked off when ms-settings.php is loaded in wp-settings.php. The routing first loads sunrise.php, which traditionally handled “domain mapping”; that is, routing external domains to sites. WordPress 3.9 enabled doing this internally in WordPress by simply setting the site’s URL to the external domain, but plugins are still required for multiple domains. (The sunrise file can also be used for many other purposes, but routing remains one of its main purposes.)

If the sunrise process did not handle the routing, WordPress normalises the host and path, then uses this information (along with the SUBDOMAIN_INSTALL flag) to try and find the current site. The mechanisms by which it does this are fairly readable, so I’ll leave it as an exercise to the reader to look into this: simply read and follow the source of ms_load_current_site_and_network().

Once the site has been routed, the site’s details are loaded into relevant global variables. This includes the site’s URL (home_url()), which is later stripped during normalisation in WP::parse_request() (see “Matching the Rules”). This ensures that any path for the multisite install is not used when matching rewrite rules.

REST API

The REST API uses its own routing and endpoints for a few reasons. Unlike regular WordPress requests, the REST API does not always generate a “main” query, so it does not need the query var mapping system. Additionally, REST API “endpoints” (no relation to “endpoint masks”) are matched using both the HTTP method (typically GET, POST, PUT, or DELETE) and the path, unlike regular WordPress rewrites, which are method-agnostic.

The routing inside the REST API is much more similar to traditional routing in non-WordPress contexts, and it matches the pair of HTTP method and path to a callback rather than a query.

To bootstrap the process, the REST API registers rewrite rules which match /wp-json/(.*) to a custom query var called rest_route. After the rewrite system has matched the request URL to this rewrite rule (on parse_request), the REST API checks this query var. If it’s set, it initialises WP_REST_Server, and handles the routing inside WP_REST_Server::serve_request().

The API first sends some setup and security headers, then does some further setup for JSONP responses. It then initialises a WP_REST_Request object. This object contains all the data about the request, and allows the API to be re-entrant: that is, you can run multiple REST requests in one WordPress request, because all the “global” information is contained in this object. The API then checks that no errors occurred during authentication, and if everything is good, it then “dispatches” the request.

WP_REST_Server::dispatch() runs a similar routing loop to WP::parse_request(), but without special cases for verbose rules. Unlike rewrite rules, each route can have multiple “endpoints” (i.e. callbacks). If the route matches, the API loops over each endpoint (called “handler” in the code) and checks whether the method for the endpoint also matches.

If it matches, the callback is then called, with some other stuff around it. Exactly how these requests work is a topic for a different post, as the API does a lot of special handling around this.

Once the callback has been run, the end result is a WP_REST_Response object. This object contains the response data as well as any headers or status code to send back to the client. Headers are then sent back to the client before encoding the response data as JSON and finally echoing it to the client. Back in rest_api_loaded(), the WordPress request is now finished off, ensuring that further routing/handling in the WP class is skipped.
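For completeness, registering one of these routes looks something like the following sketch (my example; the namespace and path are hypothetical), with the method and callback declared together:

<?php
add_action( 'rest_api_init', function () {
    register_rest_route( 'myplugin/v1', '/things/(?P<id>\d+)', array(
        'methods'             => 'GET',
        'callback'            => function ( WP_REST_Request $request ) {
            // Named capture groups from the path become request parameters.
            return array( 'id' => (int) $request['id'] );
        },
        'permission_callback' => '__return_true', // public endpoint, for this sketch only
    ) );
} );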

Limitations

The design of Rewrites is classic WordPress: it maintains wide compatibility, both forward and backward, through clever and careful design. There’s much to like about this system, but the core feature of mapping “pretty” permalinks back to “ugly” permalinks is very smart. This makes compatibility between the two inherent, and it ensures new code is automatically compatible.

The biggest problem is that Rewrites is inherently tied to post querying. To be clear, this is not a problem with Rewrites, but rather with the overall design of the frontend system in WordPress. This makes routes not tied to posts much more difficult to design and implement. While this worked well for the original, blog-focussed nature of WordPress (where essentially everything was simply a filtered post archive), it has been stretched to its limits as a modern CMS.

This is evident in the REST API, where posts are no longer the main content type, and anything (users, themes, the current theme) in WordPress is addressable via a URL. When I designed the REST API’s routing, it was with these limitations in mind, which is why it uses a completely custom router. This router also works by “skipping” the main query, which it actually does by exiting before queries and templates are loaded. This is workable for a separated system like the API, but isn’t a good idea if you want to instead design user-facing pages which actually use templates (say, for a checkout page).

Understanding Rewrites can also be tough if you don’t know where to start, which is why a lot of people miss key parts or don’t quite understand the flow. A significant part of this is the organic way in which the WP and WP_Rewrite classes have grown, which means that understanding the flow requires a lot of flicking back and forth. I’d wager that quite a lot of WordPress developers don’t even know the WP class exists and acts as the main engine of the request; I didn’t until I really dug into Rewrites while working on core.

So Much More

There’s a lot more that happens that I didn’t cover here, so let me know if you want to see any more detail on anything specific. Just knowing where to start can be challenging sometimes, particularly with these systems that have organically grown.

Also, if there’s anything else you’d like to see a breakdown of, let me know! I’d like to demystify more of WordPress if you found this useful.

A Future API

I published a post today over on Post Status about the future of the WordPress REST API.

The year is 2020. WordPress powers over 35% of the web now. The REST API has been in WordPress core for a few years; the year after the REST API was merged into core, WordPress gained nearly 5% marketshare.

Many people all across the web are using the API. Here are some of their stories.

It’s quite a long post, but I’d love you to read it. 🙂

By the way, if you’re not a member of the Post Status Club, I’d recommend signing up. Fantastic value for money; Brian’s daily Notes email is the primary way I read WordPress news these days.

Beginnings

I announced a little while ago that I was making a change in my life. Over the past month-and-a-bit, I’ve been talking with many people and deciding where I want to spend the next stage of my career.

I’m delighted to announce that I’ve accepted a position working at Human Made. I’ve been hearing great things about Human Made for a long time, and, after talking to Tom, Joe and Noel, decided they’d be a fantastic fit.

In my day-to-day work at Human Made, I’ll be working on both client work as well as products, such as happytables. In fact, I’ve already begun shipping code, and had my first deploy last week (along with my first broken deploy, and my first scramble-to-fix-the-fatal-errors). I also shipped a cool little timezone widget that shows exactly what time of day it is for the humans that compose Human Made:

Timezone widget screenshot, showing avatars with their associated current time

I’m looking forward to seeing where this change takes me. If the first week is any indication, I definitely made the right choice.

See also: my post on the Human Made blog.


In other news, I can’t resist linking to a great piece of music by a famous French duo, that seems at least somewhat relevant:

Using Custom Authentication Tokens in WordPress

Much has been written about the ability in WordPress to replace the authentication handlers. Essentially, this involves replacing WordPress’ built-in system of username and password combinations with a custom handler, such as Facebook Connect or LDAP.

However, basically nothing appears to have been written on the other side of authentication: replacing WordPress’ cookie-based authentication tokens. The process of authentication in WordPress is simple and looks something like this:

  1. Check the client’s cookies – If we have valid cookies, skip to step 6
  2. Redirect the user to the login page
  3. Show the user a login form
  4. Check the submitted data against the database
  5. Issue cookies for the now-authenticated user
  6. Proceed to the admin panel

The existing authenticate hook allows users to swap out step 4 reasonably easily, and existing hooks allow replacing steps 2 and 3. The problem, however, is swapping out cookies in steps 1 and 5.
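As an aside, swapping out step 4 with the authenticate filter looks roughly like this (a sketch only; my_plugin_check_ldap() is a hypothetical helper):

<?php
add_filter( 'authenticate', function ( $user, $username, $password ) {
    if ( $user instanceof WP_User ) {
        return $user; // something earlier in the chain already authenticated this request
    }
    $user_id = my_plugin_check_ldap( $username, $password ); // hypothetical LDAP check
    return $user_id ? get_user_by( 'id', $user_id ) : $user;
}, 30, 3 );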

There are a few reasons you might want to swap out the existing cookie handling: you’re passing data over something that’s not HTTP (e.g. a CLI interface); you’re using a custom authentication method (e.g. OAuth); or, as with anything in WordPress plugins, some far-out idea that I can’t even fathom. Any of these require swapping out cookies for your custom system; however, there’s not really any good way to do so.

The existing solution to this is to hook into something like plugins_loaded and check there, however this will occur on every request, even if you don’t actually need to be authenticated. This makes it hard to issue error responses (such as HTTP 401/403 codes) without also denying access to non-authenticated requests.1

The correct way to do this really would be to use a late-check system the same way WordPress itself does. All WordPress core functions eventually filter down to get_currentuserinfo()2, which in turn calls wp_validate_auth_cookie(). It’s worth mentioning at this point that all of is_user_logged_in(), wp_get_current_user() and get_currentuserinfo() contain a total of zero hooks. We get our first respite in wp_validate_auth_cookie() with the auth_cookie_malformed action, however setting a user here is then overridden straight afterwards by wp_set_current_user( 0 ).

*sigh*

So, here’s the workaround solution. Hopefully this helps someone else out.

(This is also filed as ticket #26706.)
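On current WordPress, the cleanest form of such a late check is the determine_current_user filter, which core now provides. A rough sketch using it, not necessarily the original workaround; my_plugin_validate_token() is a hypothetical helper:

<?php
// Runs when the current user is first resolved, so the token is only checked when
// authentication is actually needed.
add_filter( 'determine_current_user', function ( $user_id ) {
    if ( ! empty( $user_id ) ) {
        return $user_id; // already authenticated, e.g. via cookies
    }
    $token_user_id = my_plugin_validate_token(); // hypothetical: read and validate a token header
    return $token_user_id ? (int) $token_user_id : $user_id;
}, 30 );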

  1. This is less of an issue if you can detect whether a client is passing authentication, such as checking for the existence of a header, but some naive clients send authentication headers with every request anyway. This happens to be the scenario I find myself in.
  2. wp_get_current_user(), for example, calls it; is_user_logged_in() calls wp_get_current_user(); and so on.

Change

It’s time for a change.

Fifteen years ago, I started school. Seven years ago, I finished primary school and started high school. Three years ago, I finished school. And two years ago, I started university.

For me, attending university was a natural course of action. The question was always one of what I would study at university, not whether I would. In my last years at school, I stressed over this question, as most graduating high-schoolers do.

My choices became clearer to me as I came closer to the end of my degree. I decided that engineering was where my talents were.

In the meantime, I was approaching having spent nine years of my life writing software. Programming had always been something that I’d enjoyed, and I’d become relatively good at it. It was natural that I could take my talent and apply it to a career.

However, I didn’t.

I was afraid that doing something I loved as a career would cause me to eventually become sick of it, and that wasn’t something I wanted.

So I chose electrical engineering, in the hope that it’d be similar enough to what I’d done and enjoyed previously, but different enough to prove a challenge.


I’ve just completed the second year of my five-year degree. This year, I failed four subjects: three math subjects and an electrical engineering subject.

It’s not that I found the subjects particularly hard. They were challenging, certainly, but it wasn’t impossible to overcome that.

No, instead, it’s that I stopped caring. I stopped caring about my grades. I stopped caring about what I was learning. I just didn’t care.


For the most part, I’ve found my subjects to be quite similar to subjects at school. You put in enough effort, and you do well. The material you learn is sometimes interesting, but mostly you just learn it to learn it.

But I have noticed one important thing. The subjects that I care about the most, the subjects that I enjoy, and consequently, the subjects that I do best in are the computing systems subjects.

For me, the logical puzzles and strange syntax just click. When given problems to solve, it’s intuitive for me to look at them and immediately have the outline of a solution in my head. I can see the solution to problems before other people have worked out where to start.

I look at a problem and my immediate thought is to work out how to solve it. I love the challenge presented, and I love making things that solve it.

And yet, I continue studying the other subjects in the vain hope that I’ll learn to enjoy them just as much. Someday, I think to myself, it will start being enjoyable.


I’ve changed immensely in the past two years.

After leaving home, mainly for practical reasons, I’ve become a different person entirely. Although I love my parents immensely, I could never really become an adult until I’d moved out. I didn’t know this until after the fact, of course.

However, there was one significant part of me that didn’t change: my plan in life. Up until I left home, I’d stayed the course. I’d moved from school into university without a second thought, just because it never really occurred to me to do otherwise. I hadn’t really considered my choice, if you could even call it that.

But as I grew as a person, I realised that I needed to reevaluate. While continuing on the path had a familiarity to it, I couldn’t ignore the other possibilities staring me in the face.


I’ve always said to myself that I’d rather do something I loved and earn a pittance than do something I hated and be rich.

However, I don’t think I’d ever really thought about just what that means. It was a set of empty words to me, not something I truly lived my life by.

I think it’s time that I stopped repeating hollow phrases to myself and actually did something about it.


I’m dropping out of university to follow my passion.

It’s a decision that I should have made a long time ago, and I regret not making it earlier.

I still have concerns that I’ll end up hating what I do, but change is something I have to accept and deal with if it happens.

Maybe this is change for the worse, and I end up deciding this isn’t what I want to do. I’m okay with that now, because at least I will have tried.

But maybe, just maybe, this is the best decision I’ll make in a long time.


I’m now taking serious offers for full-time work. If you’re hiring a WordPress developer, or know someone who is, contact me at r@rotorised.com

The Next Stage of WP API

As you may have seen, my Summer of Code project is now over with the release of version 0.6. It’s been a fun time developing it, and an exceptionally stressful time as I tried to balance it with uni work, but worth it nonetheless. Fear not however, as I plan to continue working on the project moving forward. A team has been assembled and we’re about to start work on the API, now in the form of a Feature as a Plugin. To power the team discussions, we’ve also been given access to Automattic’s new o2 project (which I believe is the first public installation).

Throughout the project, I’ve been trying to break new ground in terms of development processes. Although MP6 was the first Feature as a Plugin (and the inspiration for the API’s development style), the API is the first major piece of functionality developed this way and both MP6 and the API will shape how we consider and develop Features as Plugins in the future. However, while MP6 has been developed using the FP model, the development process itself has been less than open, with a more dictatorial style of project management. This works for a design project where a tight level of control needs to be kept, but is less than ideal for larger development projects.

I’ve been critical, both publicly and privately, of some of WordPress’ development processes in the past; in particular, the original form of team-based development was, in my opinion, completely broken. Joining an existing team was near impossible, starting new discussion within the team was hard, and meetings were inevitably tailored to the team lead’s timezone. The Make blogs also fill an important role as a place for teams to organise, but are more focused towards summarising team efforts and planning future efforts than for the discussion itself.

At the other end of the spectrum is Trac, which is mainly for discussing specifics. For larger, more conceptual discussions, developers are told to avoid Trac and use a more appropriate medium. This usually comes in the form of “there’s a P2 post coming soon, comment there”, which is not a great way to hold discussion; it means waiting for a core developer to start the discussion, and your topic might not be at the top of their mind. In addition, Make blogs aren’t really for facilitating discussion, but are more of an announcement blog with incidental discussion.

Since the first iteration of teams, we’ve gotten better at organisation, but I think there’s more we can do.

This is where our team o2 installation comes in. I’ve been very careful to not refer to it as a blog, because it’s not intended as such. Instead, the aim is to bring discussions back from live IRC chats to a semi-live discussion area. Think of it as a middle ground between live dialogue on IRC and weekly updates on a make blog. The idea is that we’ll make frequent posts for both planning and specifics of the project, and hold discussion there rather than on IRC. It’s intended to be a relatively fast-moving site, unlike the existing Make blogs. In addition, o2 should be able to streamline the discussion thanks to the live comment reloading and fluid interface.

Understandably for an experiment like this, there are many questions about how it will work. Some of the questions that have been asked are:

  • Why is this necessary? As I mentioned above, I believe this fits a middle ground between live discussion and weekly updates. The hope is for this to make it easier for everyone to participate.
  • Why isn’t this a Make blog? The Make blogs are great for longer news about projects, but not really for the discussion itself. They’re relatively low traffic blogs for long term planning and discussion rather than places where specifics can be discussed.
  • Why is it hosted on WordPress.com rather than Make.WordPress.org? Two main reasons: I wanted to try o2 for this form of discussion; and there’s a certain level of bureaucracy to deal with for Make, whereas setting up a new blog on WP.com was basically instant. The plan is to migrate this to Make if the experiment works, of course.
  • If you want to increase participation, why is discussion closed to the team only? Having closed discussion is a temporary measure while the team is getting up to speed and we work out exactly how this experiment will work. Comments will be opened to all after this initial period.

Fingers crossed, this works. We’re off to somewhat of a slow start at the moment, which is to be expected with starting up a large team from scratch on what is essentially an existing project. There’s a lot of work to do here, and we’ve got to keep cracking at the project to keep the momentum going. Fingers crossed, we can start building up steam and forge a new form of organisation for the projects.

A Vagrant and the Puppet Master: Part 2

Having a development environment setup with a proper provisioning tool is crucial to improving your workflow. Once you’ve got your virtual machine set up and ready to go, you need to have some way of ensuring that it’s set up with the software you need.

(If you’d like, you can go and clone the companion repository and play along as we go.)

For this, my tool of choice is Puppet. Puppet is a bit different from other provisioning systems in that it’s declarative rather than imperative. What do I mean by that?

Declarative vs Imperative

Let’s say you’re writing your own provisioning tool from scratch. Most likely, you’re going to be installing packages such as nginx. With your own provisioning tool, you might just run apt-get (or your package manager of choice) to install it:

apt-get install nginx

But wait, you don’t want to run this if you’ve already got it set up, so you’re going to need to check that it’s not already installed, and upgrade it instead if so.

if ! command -v nginx > /dev/null; then
    # Not installed yet, so install it
    apt-get install nginx
else
    # Already installed, so upgrade it to the latest version
    apt-get install --only-upgrade nginx
fi

This is relatively easy for basic things like this, but for more complicated tools, you may have to work this all out yourself.

This is an example of an imperative tool. You say what you want done, and the tool goes and does it for you. There is a problem though: to be thorough, you also need to check that it has actually been done.

However, with a declarative tool like Puppet, you simply say how you want your system to look, and Puppet will work out what to do, and how to transition between states. This means that you can avoid a lot of boilerplate and checking, and instead Puppet can work it all out for you.

For the above example, we’d instead have something like the following:

package {'nginx':
    ensure => latest
}

This says to Puppet: make sure the nginx package is installed and up-to-date. It knows how to handle any transitions between states rather than requiring you to work this out. I personally prefer Puppet because it makes sense to me to describe how your system should look rather than writing separate installation/upgrading/etc routines.

(To WordPress plugin developers, this is also the same approach that WordPress takes internally with database schema changes. It specifies what the database should look like, and dbDelta() takes care of transitions.)

Getting It Working

So, now that we know what Puppet is going to give us, how do we get it set up? Usually, you’d have to go and ensure that you install Puppet on your machine, but thankfully, Vagrant makes it easy for us. Simply set your provisioning tool to Puppet and point it at your main manifest file:

# Tell Vagrant to provision with Puppet, pointing it at our manifests and modules
config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
    puppet.module_path    = "modules"
    # puppet.options      = "--verbose --debug"
end

What exactly is a manifest? A manifest is a file that tells Puppet what you’d like your system to look like. Puppet also has a feature called modules that add functionality for your manifests to use, and I’ll touch on that in a bit, but just trust this configuration for now.

I’m going to assume you’re using WordPress with nginx and PHP-FPM. These concepts are applicable to everyone, so if you’re not, just follow along for now.

First off, we need to install the nginx and php5-fpm packages. The following should be placed into manifests/site.pp:

package {'nginx':
    ensure => latest
}
package {'php5-fpm':
    ensure => latest
}

Each of these declarations is called a resource. Resources are the basic building block of everything in Puppet, and they declare the state of a certain object. In this case, we’ve declared that we want the state of the nginx and php5-fpm packages to be ‘latest’ (that is, installed and up-to-date).

The part before the braces is called the “type”. There are a huge number of built-in types in Puppet and we’ll also add some of our own later. The first part inside the braces is called the namevar and must be unique within the type; that is, you can only have one package {'nginx': } in your entire project. The part after the colon is called the attributes of the resource.

Next up, let’s set up your MySQL database. Setting up MySQL is a slightly more
complicated task, since it involves many steps (installing, setting
configuration, importing schemas, etc), so we’ll want to use a module instead.

Modules are reusable pieces for manifests. They’re more powerful than normal
manifests, as they can include custom Ruby code that interacts with Puppet, as
well as powerful templates. These can be complicated to create, but they’re
super simple to use.

Puppet Labs (the people behind Puppet itself) publish the canonical MySQL module, which is what we’ll be working with here. We’ll want to clone this into our modules directory, which we set previously in our Vagrantfile.

$ mkdir modules
$ cd modules
$ git clone git@github.com:puppetlabs/puppetlabs-mysql.git mysql

Now, to use the module, we can go ahead and declare the class. I personally don’t care about the client, so we’ll just install the server:

class { 'mysql::server':
    config_hash => { 'root_password' => 'password' }
}

(You’ll obviously want to change ‘password’ here to something slightly
more secure.)

MySQL isn’t much use to us without the PHP extensions, so we’ll go ahead and get
those as well.

class { 'mysql::php':
    require => Package['php5-fpm'],
}

Notice there’s a new parameter we’re using here, called require. This tells
Puppet that we’re going to need PHP installed first. Why do we need to do this?

Rearranging Puppets

Puppet is a big fan of being as efficient as possible, and it doesn’t guarantee to apply your resources in the order you wrote them. For example, while it’s still installing MySQL, it may already be setting up our nginx configuration.

To handle this, Puppet has the concept of dependencies. If any step depends on a previous one, you have to specify this dependency explicitly1. Puppet splits a run into two parts: first, it compiles the resources and works out your dependencies, then it applies the resources in an order that satisfies them.

There are two ways of doing this in Puppet: you can specify require or
before on individual resources, or you can specify the dependencies all
at once.

# Individual style
class { 'mysql::php':
    require => Package['php5-fpm'],
}

# Waterfall style
Package['php5-fpm'] -> Class['mysql::php']

I personally find that the require style is nicer to maintain, since you can
see at a glance what each resource depends on. I avoid before for the same
reason, but these are stylistic choices and it’s entirely up to you as to which
you use.
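
(For reference, the same relationship expressed with before, declared on the other side, would look something like this:)

package {'php5-fpm':
    ensure => latest,
    before => Class['mysql::php'],
}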

You may have noticed a small subtlety here: the dependencies use a different
cased version of the original, with the namevar in square brackets. For example,
if I declare package {'nginx': }, I refer to this later as Package['nginx'].
This can seem strange when you’re starting out, but you’ll quickly get used to it.

(We’ll get to namespaced resources soon such as mysql::db {'mydb': }, and the
same rule applies here to each part of the name, so this would become
Mysql::Db['mydb'].)

Important note: don’t declare your resources with capitals, as declaring a capitalised type actually sets the default attributes for that type rather than creating a resource. Avoid this unless you’re sure you know what you’re doing.
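
To make the distinction concrete, compare a reference (capitalised, with the title in brackets) to a resource default (capitalised, with no title); the file path here is purely illustrative:

# A reference to a resource declared elsewhere:
file { '/etc/nginx/conf.d/example.conf':
    ensure  => file,
    require => Package['nginx'],
}

# A resource default: every package in this scope now defaults to 'latest'.
Package {
    ensure => latest,
}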

Setting Up Our Configuration

We’ve now got nginx, PHP, MySQL and the MySQL extensions installed, so we’re ready to start configuring them to our liking. Now would be a great time to try vagrant up and watch Puppet run for the first time!

Let’s now go and set up both our server directories and the nginx configuration
for them. We’ll use the file type for both of these.

file { '/var/www/vagrant.local':
    ensure => directory
}
file { '/etc/nginx/sites-available/vagrant.local':
    source => "file:///vagrant/vagrant.local.nginx.conf"
}
file { '/etc/nginx/sites-enabled/vagrant.local':
    ensure => link,
    target => '/etc/nginx/sites-available/vagrant.local'
}

And the nginx configuration for reference, which should be saved to
vagrant.local.nginx.conf next to your Vagrantfile:

server {
    listen 80;
    server_name vagrant.local;
    root /var/www/vagrant.local;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}

(This is not the best way to do this in Puppet, but we’ll come back to that.)

Next up, let’s configure MySQL. There’s a mysql::db type provided by the MySQL
module we’re using, so we’ll use that. This works the same way as the file and
package types that we’ve already used, but obviously takes some different
parameters:

mysql::db {'wordpress':
    user     => 'root',
    password => 'password',
    host     => 'localhost',
    grant    => ['all'],
    require  => Class['mysql::server']
}

Let’s Talk About Types, Baby

You’ll notice that we’ve used two different syntaxes above for the MySQL parts:

class {'mysql::php': }
mysql::db {'wordpress': }

The differences here are in how these are defined in the module: mysql::php is
defined as a class, whereas mysql::db is a type. These reflect fundamental
differences in what you’re dealing with behind the resource. Things that you
have one of, like system-wide packages, are defined as classes. There’s only one
of these per-system; you can only really install MySQL’s PHP bindings once.2

On the other hand, types can be reused for many resources. You can have more
than one database, so this is set up as a reusable type. The same is true for
nginx sites, WordPress installations, and so on.

You’ll use both classes and types all the time, so understanding when each is
used is key.
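
As a quick illustration of the difference (the database names here are hypothetical): a class can only be declared once per system, whereas a type can be declared as many times as you like with different titles.

# A class: declared once for the whole system.
class { 'mysql::server':
    config_hash => { 'root_password' => 'password' }
}

# A type: declare as many as you need.
mysql::db {'myblog':         user => 'root', password => 'password', grant => ['all'] }
mysql::db {'myblog_staging': user => 'root', password => 'password', grant => ['all'] }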

Moving to Modules

nginx and MySQL are both set up with our settings now, but they’re not in a very reusable pattern yet. Our nginx configuration is completely hardcoded for the site, which means we can’t duplicate it if we want to set up another site (for example, a staging subdomain).
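
(To preview where we’re heading: once the module below is in place, standing up that staging copy should be as simple as declaring another resource, along these lines:)

myproject::site {'staging.vagrant.local':
    location => '/var/www/staging.vagrant.local',
    database => 'wordpress_staging',
}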

We’ve used the MySQL module already, but all of our resources are in our
manifests directory at the moment. The manifests directory is more for the
specific machine you’re working on, whereas the modules directory is where our
reusable components should live.

So how do we create a module? First up, we’ll need the right structure. Modules
are essentially self-contained reusable parts, so there’s a certain structure
we use:

  • modules/<name>/ – The module’s full directory
    • modules/<name>/manifests/ – Manifests for the module, basically the same
      as your normal manifests directory
    • modules/<name>/templates/ – Templates for the module, written in Erb
    • modules/<name>/lib/ – Ruby code to provide functionality for your
      manifests

(I’m going to use ‘myproject’ as the module’s name here, but replace that with
your own!)

First up, we’ll create our first module manifest. For this one, we’ll use the special filename init.pp in the manifests directory. Before, we used the names mysql::php and mysql::db, but the MySQL module also supplies a top-level mysql class. Puppet maps a::b to modules/a/manifests/b.pp, but a class called a maps to modules/a/manifests/init.pp.
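
In other words, for the names we’re using here (assuming the module is called myproject), the mapping works out roughly like this:

# class myproject             -> modules/myproject/manifests/init.pp
# myproject::site (a type)    -> modules/myproject/manifests/site.pp
# mysql::db (from the module) -> modules/mysql/manifests/db.pp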

Here’s what our init.pp should look like:

class myproject {
    if ! defined(Package['nginx']) {
        package {'nginx':
            ensure => latest
        }
    }
    if ! defined(Package['php5-fpm']) {
        package {'php5-fpm':
            ensure => latest
        }
    }
}

(We’ve wrapped these in defined() calls. It’s important to note that while Puppet is declarative, this is a compile-time check. If you’re making redistributable modules, you’ll always want to use this, as you can’t declare the same resource twice, and users should be able to declare these in their own manifests.)

Next, we want to set up a reusable type for our site-specific resources. To do
this in a reusable way, we also need to take in some parameters. There’s one
special variable passed in automatically, the $title variable, which
represents the namevar. Try to avoid using this directly, but you can use this
as a default for your other variables.

Defining the type looks much like defining a function in most languages. We’ll also update some of our definitions from before.3

define myproject::site (
    $name = $title,
    $location,
    $database = 'wordpress',
    $database_user = 'root',
    $database_password = 'password',
    $database_host = 'localhost'
) {
    file { $location:
        ensure => directory
    }
    file { "/etc/nginx/sites-available/$name":
        source => "file:///vagrant/vagrant.local.nginx.conf"
    }
    file { "/etc/nginx/sites-enabled/$name":
        ensure => link,
        target => "/etc/nginx/sites-available/$name"
    }

    mysql::db {$database:
        user     => $database_user,
        password => $database_password,
        host     => $database_host,
        grant    => ['all'],
    }
}

(This should live in modules/myproject/manifests/site.pp)

Now that we have the module set up, let’s go back to our manifest for Vagrant
(manifests/site.pp). We’re going to completely replace this now with
the following:

# Although this is declared in myproject, we can declare it here as well for
# clarity with dependencies
package {'php5-fpm':
    ensure => latest
}
class { 'mysql::php':
    require => [ Class['mysql::server'], Package['php5-fpm'] ],
}
class { 'mysql::server':
    config_hash => { 'root_password' => 'password' }
}

class {'myproject': }
myproject::site {'vagrant.local':
    location => '/var/www/vagrant.local',
    require  => [ Class['mysql::server'], Package['php5-fpm'], Class['mysql::php'] ]
}

Note that we still have the MySQL server setup in the Vagrant manifest, as we
might want to split the database off onto a separate server. It’s up to you to
decide how modular you want to be about this.
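
(If you did eventually split things, one option is Puppet’s node definitions, which assign resources to machines by hostname. This is just a rough sketch with made-up hostnames, and it glosses over the fact that the site’s mysql::db resource would need rearranging too:)

node 'web.example.local' {
    class {'myproject': }
}

node 'db.example.local' {
    class { 'mysql::server':
        config_hash => { 'root_password' => 'password' }
    }
}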

There’s one problem still in our site definition: we still have a hardcoded
source for our nginx configuration. Puppet offers a great solution to this in
the form of templates. Instead of pointing the file to a source, we can bring
in a template and substitute variables.

Puppet gives us the template() function to do just that, and it automatically supplies all the variables in scope to be replaced. There’s a great guide and tutorial that explain this further, but most of it is self-evident. The main thing to note is that the template() function’s template location is in the form <module>/<filename>, which maps to modules/<module>/templates/<filename>.

Our file resource should now look like this instead:

file { "/etc/nginx/sites-available/$name":
    content => template('myproject/site.nginx.conf.erb')
}

Now, we’ll create our template. Note the lack of hardcoded pieces.

server {
    listen 80;
    server_name <%= name %>;
    root <%= location %>;

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}

(This should be saved to modules/myproject/templates/site.nginx.conf.erb)

Our configuration will now be automatically generated, and the name and location will be filled in from the parameters to the type definition.

If you’d really like to go crazy with this, you can basically parameterise
everything you want to change. Here’s an example from one of mine:

server {
    listen <%= listen %>;
    server_name <% real_server_name.each do |s_n| -%><%= s_n %> <% end -%>;
    access_log <%= real_access_log %>;
    root <%= root %>;

<% if listen == '443' %>
    ssl on;
    ssl_certificate <%= real_ssl_certificate %>;
    ssl_certificate_key <%= real_ssl_certificate_key %>;

    ssl_session_timeout <%= ssl_session_timeout %>;

    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
<% end -%>

<% if front_controller %>
    location / {
        fastcgi_param SCRIPT_FILENAME $document_root/<%= front_controller %>;
<% else %>
    location / {
        try_files $uri $uri/ /index.php?$args;
        index <%= index %>;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
<% end -%>
        fastcgi_pass <%= fastcgi_pass %>;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    location ~ /.ht {
        deny all;
    }

<% if builds %>
    location /static/builds/ {
        internal;
        alias <%= root %>/data/builds/;
    }
<% end -%>

<% if include != '' %>
    <% include.each do |inc| %>
        include <%= inc %>;
    <% end -%>
<% end -%>
}

Notifications!

There’s one small problem with our nginx setup. At the moment, our sites won’t be loaded in by nginx until the next manual restart/reload. Instead, we need a way to tell nginx to reload when the files are updated.

To do this, we’ll first define the nginx service in our init.pp manifest.

service { 'nginx':
    ensure     => running,
    enable     => true,
    hasrestart => true,
    restart    => '/etc/init.d/nginx reload',
    require    => Package['nginx']
}

Now, we’ll tell our site type to send a notification to the service when we should reload. We use the notify metaparameter here; because we set the restart command on the service above, that notification triggers a reload rather than a full restart.

file { "/etc/nginx/sites-available/$name":
    content => template('myproject/site.nginx.conf.erb'),
    notify => Service['nginx']
}
file { "/etc/nginx/sites-enabled/$name":
    ensure => link,
    target => "/etc/nginx/sites-available/$name",
    notify => Service['nginx']
}

nginx will now be notified that it needs to reload both when we create or update the config and when we actually enable it.

(We need it on the config proper in case we update the configuration in the
future, since the symlink won’t change in that case. The notification relates
specifically to the resource, even if said resource is the link itself.)
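
(Incidentally, notify has a mirror image: the subscribe metaparameter, declared on the receiving resource instead. It doesn’t fit our layout, since the service lives in init.pp and the files live in the site type, but in a manifest where both sit together it would look something like this:)

service { 'nginx':
    ensure    => running,
    subscribe => File['/etc/nginx/sites-available/vagrant.local'],
}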

You should now have a full installation set up and ready to serve from your Vagrant install. If you haven’t already, boot up your virtual machine:

$ vagrant up

If you change your Puppet manifests, you should reprovision:

$ vagrant provision

Machine vs Application Deployment

There’s often a misunderstanding about what should go in your Puppet manifests. It’s an easy thing to get wrong, and I must admit that I was originally confused about it as well.

Puppet’s main job is to control machine deployment. This includes things like
installing software, setting up configuration, etc. There’s also the separate
issue of application deployment. Application deployment is all about deploying
new versions of your code.

The part where these two can get conflated is installing and configuring your application. For WordPress, you usually want to ensure that WordPress itself is installed. This is probably outside of your application, since it’s fairly standard and only happens once. You should use Puppet here for the database configuration, since Puppet knows about the system-wide configuration, which is specific to the machine, not the application.
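
For example, if you added it to the myproject::site type, the database side of that might look something like this sketch (the template name is purely illustrative):

# Hypothetical: render the machine-specific database settings into wp-config.php.
file { "${location}/wp-config.php":
    content => template('myproject/wp-config.php.erb'),
    require => Mysql::Db[$database],
}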

You probably also want to ensure that certain plugins and themes are enabled.
This is something that should not be handled in Puppet, since it’s part of
your application’s configuration. Instead, you should create a must-use plugin
that ensures these are set up correctly. This ensures that if your app is
updated and rolled out, you don’t have to use Puppet to reprovision your server.

(If you do push this into your Puppet configuration, bear in mind that updating
your application will now involve both deploying the code and reprovisioning the
server.)

Wrapping Up

If you’d like, you can now go and clone the companion repository and try running it to test it out.

Hopefully by now you have a good understanding of both Vagrant and Puppet. It’s time to start applying these tools to your workflow and adjusting them to how you want to use them. Keep in mind that rules are made to be broken, so you don’t have to follow my advice to the letter. Experiment, and have fun!

  1. There are a few cases where this doesn’t apply, but you should be explicit anyway. For example, files will autodepend on their directory’s resource if it exists.
  2. Yes, I realise you can do per-user installation, but a) that’s an insane setup; and b) you’ll need to handle package management yourself this way.
  3. This previously used hardcoded database credentials. Thanks to James Collins for catching this!