Patch WordPress via GitHub

A few days ago, I started tweeting about the Stack Overflow Developer Survey, where 74% of developers surveyed said they dread working with WordPress. I received a tonne of replies that I’m still working through, and I’ll post about that soon.

One reply that did come up a few times was contributing via GitHub. Matt announced in the State of the Word that you’d soon be able to contribute to WP via pull requests; however, that hasn’t happened so far. I had a few discussions with some of the core team about this, but alas it never got anywhere.

However, after this discussion, I realised I could do something about it right now as a proof-of-concept. Trac exposes an XML-RPC interface, and GitHub exposes a REST API, so hooking the two up only requires a minimal amount of code.

So, introducing GitHub-to-Patch, a tiny utility to allow submitting PRs to WordPress.

Here’s how you submit a pull request for WordPress using this:

  1. Find the ticket on Trac you want to upload a patch to.
  2. Submit a pull request to the WordPress/WordPress repo, then close it to keep GitHub clean. (You can still continue to update it.)
  3. Head to the GitHub-to-Patch page.
  4. Select your pull request.
  5. Enter the ticket number.
  6. Enter your Trac/WordPress.org username and password.
  7. Preview the patch you’re about to submit and verify the details.
  8. Done! You should also leave a comment about the patch you just added. :)

If you update your PR and want to upload your changes, simply repeat the same process; Trac will automatically name the patches correctly to avoid overwriting previous ones.

Internally, the utility uses GitHub’s API to get a patch format of the pull request, then uses Trac’s XML-RPC API to upload. This requires your WordPress.org credentials, and because of cross-origin policy, also requires an intermediary server. :( I hope to fix this in the future, either by integrating the tool into Trac itself, or by using OAuth with WordPress.org. In the meantime, if you don’t trust my server, you can install and run the tool from GitHub with minimal effort.
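
As a rough sketch of that flow (this is not the utility’s actual code; the Trac endpoint, the ticket.putAttachment method from Trac’s XmlRpcPlugin, the example numbers, and the use of PHP’s xmlrpc extension are all assumptions on my part):

// Rough sketch only: grab the pull request in patch format, then attach it to
// a Trac ticket via the XmlRpcPlugin's ticket.putAttachment method. The URLs
// and ticket/PR numbers here are placeholders.
$patch = file_get_contents( 'https://github.com/WordPress/WordPress/pull/123.patch' );

xmlrpc_set_type( $patch, 'base64' );
$request = xmlrpc_encode_request( 'ticket.putAttachment', array(
    12345,                       // Trac ticket number
    '12345.diff',                // attachment filename
    'Patch from GitHub PR #123', // description
    $patch,                      // patch contents, sent as base64
    false,                       // don't overwrite existing attachments
) );

$ch = curl_init( 'https://core.trac.wordpress.org/login/xmlrpc' );
curl_setopt( $ch, CURLOPT_USERPWD, 'username:password' ); // WordPress.org credentials
curl_setopt( $ch, CURLOPT_POST, true );
curl_setopt( $ch, CURLOPT_POSTFIELDS, $request );
curl_setopt( $ch, CURLOPT_HTTPHEADER, array( 'Content-Type: text/xml' ) );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
$result = curl_exec( $ch );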

In the future, I’ll likely create a PR bot to automatically close PRs and point users to the tool, and to note when people have uploaded their PR as a patch.

Thanks to Eric Andrew Lewis for his pull request to the grunt-patch-wordpress repo that made me realise I could do this. :)

The (Complex) State of Meta in the WordPress REST API

One of the other discussion points in our recent API meeting was the state of meta in the REST API. We recently made the somewhat-controversial decision to remove generic meta handling from the API. As we didn’t have time to get into the specifics in the meeting, I wanted to expand on exactly what we’re doing here, and our future plans.1

WordPress has four different types of meta: post meta, comment meta, term meta, and user meta. These broadly act the same, so for simplicity’s sake, I’ll be grouping them together as just “meta”.

Meta also falls into two broad groups: plugin data, and user input. The distinction here is that plugin meta is set by a plugin programmatically, whereas user input is set via the Custom Fields metabox. These are broad categorisations, but the general difference is that plugin meta tends to be “protected” (typically prefixed with an underscore), whereas user input meta is any sort of freeform name (and occasionally no name at all).

Solution for Plugin Data

Right now, there is a viable solution for plugins to handle meta through the REST API: register_rest_field(). This function allows registering extra fields on a resource (like a post) and handling them in your own code.

For example, let’s say we have a plugin that adds “featured emoji” to a post, which saves a string of emoji characters for a post. We already have a metabox for this in the admin, and now we want to expose it via the API. This is super easy:

register_rest_field( 'post', 'featured_emoji', array(
    'get_callback' => function ( $data ) {
        return get_post_meta( $data['id'], '_featured_emoji', true );
    },
    'update_callback' => function ( $value, $post ) {
        // TODO: sanitize and validate this field better
        $value = sanitize_text_field( $value );

        update_post_meta( $post->ID, '_featured_emoji', wp_slash( $value ) );
    },
    'schema' => array(
        'description' => __( 'Featured emoji for the post to add a little flavour.', 'femoji' ),
        'type' => 'string',
        'context' => array( 'view', 'edit' ),
    ),
));

Solution for Custom Fields

User input meta is also handled, using the generic meta API. This is the /wp/v2/posts/{id}/meta route in the API, which is the route that was recently pulled out of the API plugin itself.

This route is practically only useful for replicating the Custom Fields metabox in the post editor and is not generally useful for plugins and themes. In fact, the endpoints have feature parity with the Custom Fields metabox and the same rules around visibility: if it appears in the metabox, it appears in the API (and vice versa).

Why Separate Solutions?

You may be wondering why we can’t use the same solution for both groups of meta. There are a number of complex issues here, but the key issue is that we cannot reliably separate the two groups. Unlike custom types (post types, taxonomies), meta doesn’t have to be registered before use. This is super handy most of the time, but also means that meta is a bit of a minefield. This leads to surprising behaviour for API users: plugin meta is (mostly) not available via the /meta endpoint.

Protected Meta

The _ prefix is used throughout WordPress to indicate that a field is “protected”. Unfortunately, exactly what “protected” means is usually undefined, but the one thing it reliably indicates is that the key shouldn’t be exposed through the Custom Fields metabox. As the /meta endpoint is designed to mirror the metabox, we don’t expose protected meta via the endpoints. This means that this endpoint isn’t useful for many plugins.

You can, however, whitelist individual keys by filtering is_protected_meta. This allows exposing plugin data via this standard meta API; for example, to expose WooCommerce’s _price field:

add_filter( 'is_protected_meta', function ( $protected, $key, $type ) {
    if ( $type === 'post' && $key === '_price' ) {
        // Mark `_price` as not protected so it can be exposed
        return false;
    }
    return $protected;
}, 10, 3 );

This can be somewhat confusing though, because the meta still won’t be exposed if it falls into one of a few other categories (covered below). In addition, it will now appear in the Custom Fields metabox as well.

Complex Values & Serialized Data

One of the categories of meta we can’t expose is serialized data. This applies regardless of whether the meta field is marked as protected or not. This is potentially surprising to plugin authors who explicitly whitelist their meta field for the API, only to find it still isn’t exposed. The key reason for this is that accepting serialized data is either lossy or unsafe.

To understand why serialized data is unsafe, we need to look at what serialized data actually is. At its core, serialization is a way to pack complex data into simple data, in this case a string. Enough information has to be included to reverse the process losslessly. The PHP serialization format encodes the two pieces of data that a variable contains: the type, and the value. For simple scalar values, the scalar type itself is encoded: integers become i:value;, such as i:42;; strings become s:length:"value";, such as s:3:"foo";, etc. Arrays are encoded in a more complex way, as they need to encode the type (array), size, keys, and values: this is encoded as a:size:{key;value} where key is a serialized scalar value and value is any serialized value. For example, array('foo' => 42) serializes to a:1:{s:3:"foo";i:42;}.

Objects are slightly more complex, because the “type” itself is complex and includes the class. The format is very similar to arrays (as objects are essentially just the property array), but with the a type replaced by O:classname-length:"classname" as the type. This gives a value like O:16:"WP_HTTP_Response":3:{s:4:"data";N;s:7:"headers";a:0:{}s:6:"status";i:200;}.2

The object type is where the problems with serialized meta arise. When a serialized value is unserialized, these classes are instantiated, and the __wakeup() method on the class is executed if it exists. Because of this, allowing serialized data to be saved allows remote code execution by the client saving the data. For example, if an attacker finds a class (and you only need one) with a __wakeup method, they can execute that code by submitting serialized data. Alternatively, if a class assumes that one of its properties is safe to run eval on, or to pass into the database directly, this can be exploited too.
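
As a contrived sketch of the problem (the class here is hypothetical; real attacks chain together classes that already exist in core or plugins):

// Hypothetical vulnerable class that happens to be loaded on the site.
class Hypothetical_Template_Cache {
    public $cache_file;

    public function __wakeup() {
        // Runs automatically during unserialize(); the attacker controls $cache_file.
        include $this->cache_file;
    }
}

// An attacker would craft the serialized string by hand; built here for brevity.
$evil = new Hypothetical_Template_Cache();
$evil->cache_file = '/path/of/attackers/choosing.php';
$payload = serialize( $evil );

// If serialized meta were accepted, the stored value would eventually be
// unserialized by WordPress, triggering __wakeup() and the include.
unserialize( $payload );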

This may sound a bit daft, but this is not a theoretical bug. YAML supports deserializing data into Ruby objects with --- !ruby/hash:classname. This wasn’t generally seen as an issue until it was discovered that a specific ActionDispatch object in Ruby on Rails was running eval on one of its properties. As a result, every Rails site was vulnerable to arbitrary code execution, which is one of the worst classes of bugs.

Serialized objects are not inherently dangerous, but they massively increase the attack surface of the API. Exposing serialized objects as read-only is also a potential privacy issue, as it leaks internal implementation details (class names). For these reasons, we made a calculated decision not to allow serialized data.

One potential solution to allowing complex data is to convert it to JSON-native data. The issue with this is that JSON-encoding data is lossy. PHP objects are converted down to generic JSON objects, losing their class, and once decoded, object data and associative arrays can no longer be distinguished. Additionally, PHP uses a single array type for both numerically-indexed arrays (JSON lists) and associative arrays (JSON objects), so the distinction can’t be reliably round-tripped. These issues mean that simply sending back the object you received will cause data loss.
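
A quick illustration of that loss (plain PHP, nothing WordPress-specific):

// Round-tripping PHP values through JSON loses type information.
$original = array(
    'point' => (object) array( 'x' => 1, 'y' => 2 ), // an object
    'empty' => array(),                              // was this a list or a map?
);

$decoded = json_decode( json_encode( $original ), true );

// 'point' comes back as a plain associative array (the fact it was an object
// is gone), and 'empty' could equally have been {} or [] -- we can't tell.
var_dump( $decoded );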

For these reasons, we can’t support serialized data in the API via any endpoints, including meta and a future options endpoint.3

Permissions

As a result of most meta not being registered, the permissions area is a bit sketchy. While the add_post_meta, edit_post_meta and delete_post_meta meta-capabilities exist, there’s no similar meta-capability for reading post meta. This is the key reason meta is only available while authenticated, as we need to instead fall back to edit_post.

This again is a result of user input meta and plugin data not being clearly defined. In a very early version of the API, user input meta was exposed by default, until it was noted that users often use these fields for internal notes and workflow. (Despite this, the_meta() template tag exists to output these fields on the frontend.)

In addition, plugins adding meta have no fine-grained controls over meta field access. While write capabilities can be controlled precisely, whether someone can read the meta fields depends on how they’re used and can be inconsistent.

Making It All Better

So, how do we fix all of this? A while ago we talked about loosening the rules, but it turned out this wasn’t viable without core changes to WordPress. During the hackday for A Day of REST a few weeks ago, one of the groups took on this issue and came up with a plan. Key to this plan is changing core to support better meta registration.4

These changes to core should improve meta usage not just in the REST API, but also across the board for the rest of core and plugins. This also helps to lay some of the groundwork and low-level infrastructure for the fields API in a future version of WordPress.5 Expanding this out allows better tooling around meta as well; for example, we may be able to clean up metadata for deactivated plugins if meta is registered consistently.
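
The rough shape of that registration might look something like this (the argument names here are illustrative of the plan, not a finalised core API):

// Illustrative only -- the argument names are not a finalised core API.
register_meta( 'post', 'featured_emoji', array(
    'type'              => 'string',
    'single'            => true,
    'sanitize_callback' => 'sanitize_text_field',
    'auth_callback'     => function () {
        return current_user_can( 'edit_posts' );
    },
    'show_in_rest'      => true, // opt in to exposure via the API
) );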

As tooling and infrastructure develops around meta fields (including the fields API), this may allow us to solve the complex data issues as well. Being able to explicitly say that a field contains a list of strings (e.g.) would allow us to safely expose the values, and avoid potential data loss from JSON serialisation.

These changes will take time to finalise and execute, and it will be a while until the ecosystem fully adopts these changes. In the meantime though, we’d like to ship a REST API. Without these changes, we don’t have the ability to automatically expose plugin meta; however, plugins can already register their fields manually, and future changes would simply provide better tools for developers.

We believe it’s in the WordPress project’s best interest to ship what we have and continue to iterate as we make these changes. Holding back the rest of the API for completion’s sake benefits nobody.


Thank you to the Meta Team at A Day of REST for volunteering to tackle this complex issue, and for their comprehensive discussion and planning. Thanks also to Brian Krogsgard for proofreading, and to Daniel Bachhuber, Joe Hoyle, and Rachel Baker for being generally awesome.

  1. We also had to gloss over exactly how progressive enhancement works, so I fleshed this out in a recent post if you missed it. []
  2. Objects implementing the Serializable interface use C: instead of O:, but these are not supported by WordPress for historical reasons. []
  3. “What about XML-RPC?” you may ask. The XML-RPC API only allows reading serialized meta, which is a minor privacy issue as it may expose internal implementation but is not a security issue. However, since serialized meta can’t be saved via the XML-RPC API, attempting to write the data you just read for a field will cause it to be saved double-encoded, which means it’s lossy. []
  4. Did you know there’s a system in core to register meta? I didn’t before we tackled this problem in the API, and that’s a key part of the problem. []
  5. This “groundwork” consists of expanding the scope of register_meta to take arbitrary parameters similar to register_post_type, plus promoting register_meta and making sure people know it actually exists. []

Progressive Enhancement With the WordPress REST API

In a REST API meeting today, we discussed the future of the project. Something I touched upon briefly in that meeting is the concept of progressive enhancement with the REST API. Since this topic hasn’t been brought up much previously, I want to elaborate on how progressive enhancement works.

Progressive enhancement is our key solution to a couple of related problems: forward-compatibility with future features and versions of WordPress, and robust handling of data types in WordPress. Progressive enhancement also unblocks the REST API project and ensures there’s no need to wait until the REST API has parity with every feature of the WordPress admin.

For instance, custom post types can do basically whatever they want with their data, so we wanted a robust system for indicating feature support via the REST API. For example, post types which don’t have the editor support flag won’t have content registered, similar to how the admin doesn’t show the content editor for those post types. In addition, plugins can do even crazier stuff like conditionally changing post types. The system in the REST API can handle these cases with ease, providing clients the ability to adapt on-the-fly to the format of the data they’re editing or displaying.

We also recognise that the REST API needs the ability to adapt to future versions of WordPress, and we want to avoid as many breaking changes as possible. Building in the ability for feature detection enables forwards-compatibility via progressive enhancement, and gives clients a reliable paradigm to safely check whether a WordPress site supports a feature before trying to use it.

The progressive enhancement concept builds heavily on the model already used by browsers for this purpose. If you want to build a site that uses geolocation (e.g.), you can easily detect support for it and build the feature now, polyfilling it while you wait for browser support. Feature detection with the REST API allows the same technique, and allows polyfilling while waiting for the long-tail of sites to update.

The interplay with the complexity of custom post types is almost a bonus here. If I’m building a replica of the post editor, using feature detection to select which “metaboxes” to show is basically a necessity. In the case of meta, clients need to be robust enough to do this already, as plugins can remove custom-fields support from the built-in post types, and clients need to respond to this.

Progressive enhancement exists in the REST API already, and is easily usable and accessible by clients that want to ensure robustness.

Building With Progressive Enhancement Today

As an example, let’s say I’m building a simple editor today that uses the REST API. Imagine essentially a slimmed down version of Calypso or MarsEdit.

My editor allows me to write posts, save them as drafts, edit them again later, and publish them when I’m ready. After the post is published, I can update and save, and that affects the live post. I can’t do post previews, as there’s no autosave support built in.1

For now, I build my client without the autosave support, and instead bake autosave features into the editor itself. The WordPress admin already does this with localStorage saving for offline connections, and this system doesn’t require server-side support.

Progressively Enhancing In A Future Release

In a future release, we have the autosaving process nailed down, so we mark our extra feature plugin as done and merge it into core. The autosave endpoint then gets rolled out in the next WordPress major release.

In my client, I want to add the extra server-side autosave support on top of my local autosaving. To do this, I look to see if the feature is supported on the site. In this case, the “feature” I want is the POST endpoint on the /wp/v2/posts/{id}/autosave route, so I check the index to see if the route is there and supports the method.

I see that the site supports the method, so my client transparently starts using server-side autosaves.

This feature detection already exists in the REST API today. For instance, compare the results of http://demo.wp-api.org/wp-json/ and https://wordpress.org/wp-json/. You can easily see which supports creating posts by checking for /wp/v2/posts and seeing that it supports POST.
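
A minimal sketch of that check from a PHP client (the site URL is a placeholder; a JavaScript client would do the same thing against the index):

// Sketch: fetch the API index and check the posts route accepts POST.
$response = wp_remote_get( 'https://example.com/wp-json/' );
$index    = json_decode( wp_remote_retrieve_body( $response ), true );

$posts_route = isset( $index['routes']['/wp/v2/posts'] ) ? $index['routes']['/wp/v2/posts'] : null;

if ( $posts_route && in_array( 'POST', $posts_route['methods'], true ) ) {
    // The site supports creating posts, so enable the "new post" UI.
}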

Plugin Detection

REST API clients can also easily detect which plugins are available on a site. REST API endpoints are registered with two parts to their name: the namespace and the route. The namespace typically looks like name/v1 with a unique slug combined with the version; for the core plugin, this is wp/v2. This system works similarly to function namespacing in plugins currently, and we expect (and strongly recommend) that plugins and themes treat this as their unique slice of the API space.

Let’s say I want to check if WooCommerce is installed on a site. I simply fetch the index route and check the namespaces key to see if woocommerce/v3 is registered.

Again, plugin detection already exists in the REST API. Compare again http://demo.wp-api.org/wp-json/ and https://wordpress.org/wp-json/. The demo site supports the core endpoints, as it has wp/v2 registered, whereas wordpress.org only has the oEmbed endpoints.
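
The namespace check is just as small (again a sketch, with a placeholder URL):

// Sketch: check the index's namespaces for a plugin's slice of the API.
$index = json_decode( wp_remote_retrieve_body( wp_remote_get( 'https://example.com/wp-json/' ) ), true );

if ( in_array( 'woocommerce/v3', $index['namespaces'], true ) ) {
    // WooCommerce's endpoints are available on this site.
}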

More Granular Detection

We can be far more granular with detection too. Each route supplies information about the schema that the response follows (and request data when creating or updating resources). This is available either via an OPTIONS request to the route, or by fetching the index with ?context=help.

To detect fields, we simply need to check the schema for that field. If generic meta support is pushed back to a future release, enabling clients to interact with this would be easy. For argument’s sake, let’s say it’s added as the custom_fields property on the post resource.

To detect feature support, we simply need to do an OPTIONS request on /wp/v2/posts/42 (for post 42), then check that $.schema.properties.custom_fields exists, and matches the format we’re expecting. We can then display a “custom fields” metabox-style interface in the editor for this.

Again, this level of feature detection already exists in the REST API today, and even more than that, we already recommend using this process for existing endpoints. When interacting with custom post types, you can detect whether the post type is hierarchical by checking for $.schema.properties.parent. You can detect whether a post supports reordering by checking for $.schema.properties.menu_order.

This applies even when not working with custom post types: you can detect whether a post supports featured images and whether the site/theme supports them by checking that $.schema.properties.featured_media exists. This isn’t a theoretical concern: robust editors already need to do this, as themes have differing support for WordPress features, and these changes need to flow through clients. In addition, plugins have essentially unlimited flexibility, and clients need to recognise this and support it in order to maximise compatibility across the long-tail of WordPress installs and configurations.
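
As a sketch, that featured-image check from a PHP client looks something like this (placeholder URL; $.schema.properties is simply JSONPath notation for the schema in the OPTIONS response):

// Sketch: ask the route to describe itself, then check the schema for a field.
$response = wp_remote_request( 'https://example.com/wp-json/wp/v2/posts/42', array(
    'method' => 'OPTIONS',
) );
$description = json_decode( wp_remote_retrieve_body( $response ), true );

if ( isset( $description['schema']['properties']['featured_media'] ) ) {
    // The site/theme supports featured images; show the featured image picker.
}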

Meta with register_rest_field

One thing that was glossed over is that despite us pulling support for generic meta, we still have opt-in support for meta handling at a lower level.

If I’m a plugin author that wants to add my own data to a post response, I can simply use code like:

register_rest_field( 'post', 'rm_data', array(
    'get_callback' => function ( $data ) {
        return get_post_meta( $data['id'], '_rm_custom_field', true );
    },
    'update_callback' => function ( $value, $post ) {
        update_post_meta( $post->ID, '_rm_custom_field', sanitize_text_field( $value ) );
    },
    'schema' => array(
        'description' => 'My custom field!',
        'type' => 'string',
        'context' => array( 'view', 'edit' ),
    ),
));

Since I’ve registered the schema data, this is automatically added to the schema for me. Clients can then detect my feature automatically by checking for $.schema.properties.rm_data. The API here gives me feature detection for free. The proposal to enhance register_meta in core (https://core.trac.wordpress.org/ticket/35658) will enable even easier integration with the API.

Moving Forward

Right now, the REST API team, and the WordPress community, needs a clear path forward to get from the feature-plugin as it exists today to a sustainable long-term project. Being able to ship and move on is a key part of this, as well as providing the room to expand in the future.

We believe that the progressive enhancement approach is the best approach for continuing API development. Progressive enhancement is a paradigm the REST API project must adopt, if it’s an API we want to add to (without breaking backwards compatibility) over the next 10 years.

  1. I’m choosing autosave support here, however it’s very possible this will be completed and merged in the very near future. It’s a convenient example to use though. []

A Future API

I published a post today over on Post Status about the future of the WordPress REST API.

The year is 2020. WordPress powers over 35% of the web now. The REST API has been in WordPress core for a few years; the year after the REST API was merged into core, WordPress gained nearly 5% marketshare.

Many people all across the web are using the API. Here are some of their stories.

It’s quite a long post, but I’d love you to read it. :)

By the way, if you’re not a member of the Post Status Club, I’d recommend signing up. Fantastic value for money; Brian’s daily Notes email is the primary way I read WordPress news these days.

How I Would Solve Plugin Dependencies

One of the longest standing issues with the plugin system in WordPress is how to solve the issue of dependencies. Plugins and themes want to bring in libraries, other plugins, or parent themes, but right now, the solutions are somewhat terrible. I thought it was time to get my thoughts down on (virtual) paper.

What’s the problem?

Software is never built in isolation (“no man is an island”), so developers are naturally drawn to using external libraries. Extending an existing system is also extremely useful; we can see that from the plugin ecosystem in WordPress itself.

However, right now, there’s no good way to do these in a way that interoperates with other plugins and sites. There are various third-party solutions, but often these require code duplication or offer a substandard user experience.

The Jetpack Problem

This lack of proper dependencies is one of the key reasons behind the system of ever-growing codebases, and is exactly why Jetpack is a gigantic plugin rather than being split out. In an ecosystem with a proper dependency system, Jetpack would simply be the “core” of other plugins, being depended on for core functionality, and offering UI to tie it all together.

One of my personal key problems with Jetpack is that it duplicates the plugin functionality in WordPress (poorly, at times), and hence doesn’t work with standard tooling. Real dependencies would help to solve this. A future Jetpack with a plugin dependency system shouldn’t look any different to the current UI, but would use real plugins internally. This would ensure that the Jetpack core stays lightweight while still offering all the functionality.

Changing this to use a real dependency system would have benefits both for developers and users. The install process of Jetpack could be improved by allowing the core of the plugin to be downloaded first, letting the user set up and configure Jetpack while the rest of the plugin downloads in the background. Users and developers concerned about the size of the plugin could install only the parts they need, reducing file size and potential attack surface across the plugin.

User Experience

In the wider ecosystem, we can see other plugins running into the same issue. The largest plugins, including WooCommerce, EDD and Yoast SEO, have some form of an extension list to attempt to solve this, but invariably end up offering a poorer user experience, sending users off to other sites.

Without creating a full library to handle this for a plugin, invariably we end up with terrible UX. I’ve seen plugins do everything from pop up a message on install saying “search for X, and install it”, to straight up installing plugins and breaking a site completely. This run-time verification also breaks workflow for version-controlled sites, as plugin installation and upgrading is typically done independently of the site itself.

Products vs Services

On a more selfish note, plugins like the REST API would see increased adoption from plugin and theme developers if they could use a unified, simple system to require it. For developers who actually care about user experience, giving terrible messages to users or including a complex library just for dependencies isn’t something they want to handle.

This has partially stymied adoption of the API, as “product” developers (theme and plugin developers) don’t want to offer a substandard experience. Worse, it has skewed our development pattern towards “service” developers (agencies doing work for clients, and teams running SaaS platforms), who have the ability to run anything they like without running into these issues. This means that very real issues that we need to tackle in order to scale to the long-tail may be deprioritised in favour of those affecting services.

How do we solve it?

This is one of those ideas that I’ve had floating around in my head for a while, basically fully-formed, but with no time to execute. I’m writing this as a guide to how I see the problem being solved, with the hopes that someone has the ability to execute this the way it should be done. Imagine this as a blueprint for a successful project, albeit not the final design.

(Note that whenever I say a plugin, I actually mean plugins or themes, as behaviour should be the same for both.)

Internal Workings, ft. Composer

Any PHP developer who has worked outside of WordPress recently will know Composer. Composer, for those who aren’t aware, is a command-line tool for managing dependencies in PHP. Composer is also not a good solution to the dependency problem for WordPress plugins: it requires CLI access and knowledge, it has a somewhat clunky interface and user experience (edit a JSON file, then generate a lock file and a vendor directory, then maybe commit one or more of those), and it also requires PHP 5.3+ (a non-starter for core integration, currently).

However, one of the key parts of Composer is the dependency solver, which is a port of the libzypp solver. This is a “SAT solver”: it takes note of what’s available and of what something requires, then it works out whether it can install the software (it solves the satisfiability problem). This solver is the key to working out the dependency chain for openSUSE packages (where libzypp is originally from), and the same system is used by Composer. This system would be a fantastic base for a plugin dependency system.

Developer User Experience (DUX)

The experience for developers needs to be a familiar one. Plugin headers are a great place to start, but they quickly become untenable in their current state, as they’re not built for complexity (check any theme with more than a few tags to see what I mean). It’s possible that with some tweaking they could be used, but this may be hard to achieve.

Ideally, we’d want the dependencies to be declarative, since this would help out a bunch of automated tooling. However, we can’t solve every problem at once. For bootstrapping this project off the ground, procedural code will work just fine.

I have a semi-working proof of concept that looks something like this:

The top three lines of code are all that’s required to check if your dependencies exist. We can automatically detect which plugin called the function, and parsing it out is relatively simple; we just then need to pass it to WP.org to see if we can get it working.
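
To illustrate just the caller-detection idea (this is not the prototype’s code; the function name and approach here are hypothetical):

// Hypothetical sketch (not the actual prototype): work out which plugin file
// called the dependency checker, then read its headers.
function hypothetical_detect_calling_plugin() {
    $backtrace = debug_backtrace( DEBUG_BACKTRACE_IGNORE_ARGS, 2 );
    $caller    = $backtrace[1]['file']; // assume the caller is the plugin's main file

    require_once ABSPATH . 'wp-admin/includes/plugin.php';
    return get_plugin_data( $caller ); // standard plugin headers: Name, Version, etc.
}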

I’ve also written up some more complex usage patterns for the system for developers doing more advanced usage. (Note that the documents linked here relate to an early prototype I was working on, so not everything there matches this document; notably, allowing Composer dependencies isn’t something I’d suggest for right now.)

End-User Experience (EUX)

The end-user experience is key to gaining adoption. You need to offer an experience that users are familiar with, and that doesn’t require a bunch of manual steps. We are working on computers, after all, which are meant to automate the dumb tasks for us.

The EUX starts before the user even installs a plugin or theme. The information screen needs to show them what the plugin needs (the full dependency tree, not just direct ones), as well as any potential conflicts with existing plugins. Installing that plugin should then ensure that the dependencies are also installed, failing if any of the dependencies fails to install correctly. All of this needs to occur before the plugin is actually run, ensuring that the plugin doesn’t have to worry about double-checking everything before it can run any code. (This tends to overcomplicate a codebase with no gain.)

Once a plugin and its dependencies are installed, they then need to be maintained. Plugins should receive regular updates as usual, but the end user needs to at least be warned if an update will break compatibility with another plugin. To accommodate urgent, breaking changes, users must be allowed to update plugins even if it would cause incompatibility, and the dependency system should ensure that the other plugins are disabled as needed. (If autoupdates for plugins are added to core, this would still be a manual process.) Trust the user to do the right thing, but ensure they cannot break their own system.

On the other end, uninstalling a plugin should correspondingly offer to remove anything it depends on if not being used by anything else. This again should always be the user’s choice, as depended-on plugins may have use apart from just being a dependency.

Distribution

Getting these plugin dependencies available is the hardest part of the equation. Developers need to be able to depend on (ha ha) the system being available to them, otherwise it’s not going to get adoption regardless of how great it is. This is true for any core feature (like a REST API), but especially so for something that needs to essentially be hidden from the user.

The end goal here is core integration. If the solution doesn’t end up in core at the end, the project has failed, as it’s not ubiquitous. If this happens, throw out what you need and try again, but it must be in core to be a viable solution for many users.

The best alternative, and best way to bootstrap in the meantime, is to aim for integration into Jetpack. Jetpack is one of the most widely used plugins, giving you a huge userbase straight out of the gate. This solution would also be incredibly valuable to Jetpack in making it more modular, and allowing it to shed some of the weight it currently has. Obviously, no one except the Jetpack team has a say over this, but it’s a good way to get your foot in the door. (Plus, it gets the Jetpack team potential extra lock-in benefits, as everyone would need to require Jetpack, albeit temporarily.)

There’s precedent in WordPress’ past for this too. Sidebar widgets were originally developed as a plugin by Automattic, then eventually integrated into WordPress core. Widgets used WordPress.com to bootstrap their development process, and in a modern WordPress, would likely piggy-back on Jetpack as well.

Potential Issues

One key potential issue I see is dependency versions. By allowing plugins to require certain versions, it’s possible to end up in situations where unrelated plugins cannot both be installed due to a mutual incompatibility with a library. This could be caused by a plugin requiring too specific a version (“only version 1.2.5, please!”) or an actual incompatibility between major branches. In order to balance these concerns, it may be wise to only allow requiring major versions, with the responsibility on plugin developers to stick to this system.

We also need to be careful to avoid situations like DLL Hell, where mutual incompatibilities between plugins cause installs and upgrades to be impossible without breaking something else. Encouraging plugins to maintain full compatibility is a top priority, which removing the ability to depend on specific versions may help with.

Distribution will be the biggest issue. It may be tempting to bundle with another large plugin (Yoast SEO, WooCommerce, etc), but you risk fragmentation by allowing bundling with more than one plugin, and no one’s going to want to be left without it if it’s that good. We can already see this problem with some of the libraries out there now, where mutually incompatible versions are used by different plugins.

Finally

I’m desperately hoping this post serves as inspiration for someone to create a proper solution to this. I don’t care if it gets solved the way I’ve thought of; there are plenty of other ways to skin this particular cat, and none of them is the “right” way.

(I started on a solution, but truly don’t have the time to dedicate to this. However, I’m willing to offer every piece of code I wrote for the prototype right now to kickstart this.)

What we need is something better than the current solutions. And not just better, but radically better.

Will you be the one to create it?

Beginnings

I announced a little while ago that I was making a change in my life. Over the past month-and-a-bit, I’ve been talking with many people and deciding where I want to spend the next stage of my career.

I’m delighted to announce that I’ve accepted a position working at Human Made. I’ve been hearing great things about Human Made for a long time, and, after talking to Tom, Joe and Noel, decided they’d be a fantastic fit.

In my day-to-day work at Human Made, I’ll be working on both client work and products, such as happytables. In fact, I’ve already begun shipping code, and had my first deploy last week (along with my first broken deploy, and my first scramble-to-fix-the-fatal-errors). I also shipped a cool little timezone widget that shows exactly what time of day it is for the humans that compose Human Made:

Timezone widget screenshot, showing avatars with their associated current time

I’m looking forward to seeing where this change takes me. If the first week is any indication, I definitely made the right choice.

See also: my post on the Human Made blog.


In other news, I can’t resist linking to a great piece of music by a famous French duo, that seems at least somewhat relevant:

Using Custom Authentication Tokens in WordPress

Much has been written about the ability in WordPress to replace the authentication handlers. Essentially, this involves replacing WordPress’ built-in system of username and password combinations with a custom handler, such as Facebook Connect or LDAP.

However, basically nothing appears to have been written on the other side of authentication: replacing WordPress’ cookie-based authentication tokens. The process of authentication in WordPress is simple and looks something like this:

  1. Check the client’s cookies – If we have valid cookies, skip to step 6
  2. Redirect the user to the login page
  3. Show the user a login form
  4. Check the submitted data against the database
  5. Issue cookies for the now-authenticated user
  6. Proceed to the admin panel

The existing authenticate hook allows users to swap out step 4 reasonably easily, and existing hooks allow replacing steps 2 and 3. The problem, however, is swapping out cookies in steps 1 and 5.

There’s a few reasons you might want to swap out the existing cookie handling: you’re passing data over something that’s not HTTP (CLI interface, e.g.); you’re using a custom authentication method (OAuth, e.g.); or, as with anything in WordPress plugins, some far-out idea that I can’t even fathom. Any of these require swapping out cookies for your custom system, however there’s not quite any good way to do so.

The existing solution to this is to hook into something like plugins_loaded and check there, however this will occur on every request, even if you don’t actually need to be authenticated. This makes it hard to issue error responses (such as HTTP 401/403 codes) without also denying access to non-authenticated requests.1

The correct way to do this really would be to use a late-check system the same way WordPress itself does. All WordPress core functions eventually filter down to get_currentuserinfo()2, which in turn calls wp_validate_auth_cookie(). It’s worth mentioning at this point that all of is_user_logged_in(), wp_get_current_user() and get_currentuserinfo() contain a total of zero hooks. We get our first respite in wp_validate_auth_cookie() with the auth_cookie_malformed action, however setting a user here is then overridden straight afterwards by wp_set_current_user( 0 ).

*sigh*

So, here’s the workaround solution. Hopefully this helps someone else out.

(This is also filed as ticket #26706.)
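
For what it’s worth, later versions of WordPress gained the determine_current_user filter, which provides a supported late-check point for exactly this; a minimal sketch, with the token lookup left as a hypothetical helper:

// Not the original workaround: later WordPress versions provide the
// determine_current_user filter as a supported late-check point.
add_filter( 'determine_current_user', function ( $user_id ) {
    if ( ! empty( $user_id ) ) {
        return $user_id; // already authenticated via cookies
    }

    // Hypothetical token check, e.g. reading a custom header.
    $token = isset( $_SERVER['HTTP_X_EXAMPLE_TOKEN'] ) ? $_SERVER['HTTP_X_EXAMPLE_TOKEN'] : '';

    return $token ? example_lookup_user_id_by_token( $token ) : $user_id; // hypothetical helper
}, 20 );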

  1. This is less of an issue if you can detect whether a client is passing authentication, such as checking for existence of a header, but some naive clients send authentication headers with every request anyway. This happens to be the scenario I find myself in. []
  2. wp_get_current_user(), for example, calls it; is_user_logged_in() calls wp_get_current_user(); and so on. []

Change

It’s time for a change.

Fifteen years ago, I started school. Seven years ago, I finished primary school and started high school. Three years ago, I finished school. And two years ago, I started university.

For me, attending university was a natural course of action. The question was always one of what I would study at university, not whether I would. In my last years at school, I stressed over this question, as most graduating high-schoolers do.

My choices became clearer to me as I came closer to the end of my schooling. I decided that engineering was where my talents were.

In the meantime, I was approaching nine years of my life spent writing software. Programming had always been something that I’d enjoyed, and I’d become relatively good at it. It was natural that I could take my talent and apply it to a career.

However, I didn’t.

I was afraid that doing something I loved as a career would cause me to eventually become sick of it, and that wasn’t something I wanted.

So I chose electrical engineering, in the hope that it’d be similar enough to what I’d done and enjoyed previously, but different enough to prove a challenge.


I’ve just completed the second year of my five-year degree. This year, I failed four subjects: three math subjects and an electrical engineering subject.

It’s not that I found the subjects particularly hard. They were challenging, certainly, but it wasn’t impossible to overcome that.

No, instead, it’s that I stopped caring. I stopped caring about my grades. I stopped caring about what I was learning. I just didn’t care.


For the most part, I’ve found my subjects to be quite similar to subjects at school. You put in enough effort, and you do well. The material you learn is sometimes interesting, but mostly you just learn it to learn it.

But I have noticed one important thing. The subjects that I care about the most, the subjects that I enjoy, and consequently, the subjects that I do best in are the computing systems subjects.

For me, the logical puzzles and strange syntax just click. When given problems to solve, it’s intuitive for me to look at them and immediately have the outline of a solution in my head. I can see the solution to problems before other people have worked out where to start.

I look at a problem and my immediate thought is to work out how to solve it. I love the challenge presented, and I love making things that solve it.

And yet, I continue studying the other subjects in the vain hope that I’ll learn to enjoy them just as much. Someday, I think to myself, it will start being enjoyable.


I’ve changed immensely in the past two years.

After leaving home, mainly for practical reasons, I’ve become a different person entirely. Although I love my parents immensely, I could never really become an adult until I’d moved out. I didn’t know this until after the fact, of course.

However, there was one significant part of me that didn’t change: my plan in life. Up until I left home, I’d stayed the course. I’d moved from school into university without a second thought, just because it never really occurred to me to do otherwise. I hadn’t really considered my choice, if you could even call it that.

But as I grew as a person, I realised that I needed to reevaluate. While continuing on the path had a familiarity to it, I couldn’t ignore the other possibilities staring me in the face.


I’ve always said to myself that I’d rather do something I loved and earn a pittance than do something I hated and be rich.

However, I don’t think I’d ever really thought about just what that means. It was a set of empty words to me, not something I truly lived my life by.

I think it’s time that I stopped repeating hollow phrases to myself and actually did something about it.


I’m dropping out of university to follow my passion.

It’s a decision that I should have made a long time ago, and I regret not making it earlier.

I still have concerns that I’ll end up hating what I do, but change is something I have to accept and deal with if it happens.

Maybe this is change for the worse, and I end up deciding this isn’t what I want to do. I’m okay with that now, because at least I will have tried.

But maybe, just maybe, this is the best decision I’ll make in a long time.


I’m now taking serious offers for full-time work. If you’re hiring a WordPress developer, or know someone who is, contact me at r@rotorised.com

You’re Using Transients Wrong

The Transients API is an incredibly useful API in WordPress, and unfortunately one of the most misunderstood. I’ve seen almost everyone misunderstand what transients are used for and how they work, so I’m here to dispel those myths.

For those not familiar with transients, here’s a quick rundown of how they work. You use a very simple API in WordPress that acts basically as a key-value store, with an expiration. After the expiration time, the entry will be invalidated and the transient will be deleted from the transient store. Transients essentially operate the same as options, but with an additional expiration field.

By default in WordPress, transients are actually powered by the same backend as options. Internally, when you set a transient (say foo), it gets transparently translated to two options: one for the transient data (_transient_foo) and an additional one for the expiration (_transient_timeout_foo). Once requested, this will then be stored in the internal object cache and subsequent accesses in the same request will reuse the value, in much the same way options are cached. One of the most powerful parts of the transient API is that it uses the object cache, allowing a full key-value store to be used in the backend. However the default implementation, and how the object cache can change this, is where two major incorrect assumptions come from.
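
To make that mapping concrete:

// With the default database-backed implementation...
set_transient( 'foo', 'bar', HOUR_IN_SECONDS );

// ...is roughly equivalent to creating two options:
//   _transient_foo         => 'bar'
//   _transient_timeout_foo => time() + HOUR_IN_SECONDS
//
// With a persistent object cache enabled, neither option is written; the value
// lives only in the cache backend.
$value = get_transient( 'foo' ); // 'bar', or false once expired (or evicted)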

Object Caching and the Database

The first incorrect assumption that developers make is to assume the database will always be the canonical store of transient data. One big issue here is attempting to directly manipulate transient data via the option API; after all, transients are just a special type of option, right?

In the real world however, anything past your basic site will use an object cache backend. Popular choices here include APC (including the new APCu) and Memcache, which both cache objects in memory, not the database. With these backends, using the option API will return invalid or no data, as the data is never stored in the database.1

I’ve seen this used in real world plugins to determine if a transient is about to expire by directly reading _transient_timeout_foo. This will break and cause the transient to always be counted as expired with a non-default cache. Before you think about how to do this in a cross-backend compatible way: you can’t. Some backends simply can’t do this, and until WordPress decides to provide an API for this, you can’t predict internal behaviour of the backends.

Expiration

The second incorrect assumption that most developers make is that the expiration date is when the transient will expire. In fact, the inline documentation even states that the parameter specifies the “time until expiration in seconds”. This assumption is correct for the built-in data store: WordPress only invalidates transients when attempting to read them (which has led to garbage collection problems in the past). However, this is not guaranteed for other backends.

As I noted previously, transients use the object cache for non-default implementations. The really important part to note here is that the object cache is a cache, and absolutely not a data store. What this means is that the expiration is a maximum age, not a minimum or set point.

One place this can happen easily is with Memcache set in Least Recently Used (LRU) mode. In this mode, Memcache will automatically discard entries that haven’t been accessed recently when it needs room for new entries. This means less frequently accessed data (such as data used by cron tasks) can be discarded before it expires.

What the transient API does guarantee is that the data will not exist past the expiration time. If I set a transient to expire in 24 hours, and then attempt to access it in 25 hours time, I know that it will have expired. On the other hand, I could access it in 5 seconds in a different request and find that it has already expired.

Real world issues are common with the misunderstanding of expiration times. For WordPress 3.7, it was proposed to wipe all transients on upgrade for performance reasons. Although this eventually was changed to just expired transients, it revealed that many developers expect that data will exist until the expiration. As a concrete example of this, WooCommerce Subscriptions originally used transients for payment-related locking. Eventually, Brent (the lead developer) found that these locks were being silently dropped and users could in fact be double-billed in some cases. This is not a theoretical issue, but a real-world instance of the expiration age issue. The solution to this particular issue was to swap it out for options, which are guaranteed to not be dropped.

When Should I Use Transients?

“This all sounds pretty doom and gloom, Ryan, but surely transients have a valid use?”, you say. Correct, astute reader, they’re a powerful tool in the right circumstances and a much simpler API than others.

Transients are perfect for caching any sort of data that should be persisted across requests. By default, WordPress’ built-in object cache uses global state in the request to cache any data, making it useless for caching persistent data. Transients fill the gap here, by using a persistent object cache if one is available and falling back to database storage if the cache is non-persistent.

One application of this persistence caching that fits perfectly is fragment caching. Fragment caching applies full page caching techniques (like object caching) to individual components of your page, such as a sidebar or a specific post’s content. Mark Jaquith’s popular implementation previously eschewed transients due to the lack of garbage collection combined with key-based caching; however, this is not a concern with the upcoming WordPress 3.7.

Another useful application of transient storage is for caching long-running tasks. Tasks like update checking involve remote server calls, which can be costly both in terms of time and bandwidth, so caching these makes sense. WordPress internally caches the result from update checking, ensuring that excess calls to the internal update check procedures don’t cause excessive load on the WordPress.org server. While the object caching API would work here, the default implementation would never cache the result persistently.
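
A typical shape for this pattern (the function name and endpoint are hypothetical):

// Cache a slow remote call for up to an hour; recompute whenever the cache
// has dropped it, which may be earlier than that.
function example_get_remote_data() {
    $data = get_transient( 'example_remote_data' );

    if ( false === $data ) {
        $response = wp_remote_get( 'https://api.example.com/data' );
        $data     = wp_remote_retrieve_body( $response );

        set_transient( 'example_remote_data', $data, HOUR_IN_SECONDS );
    }

    return $data;
}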

Summary

Transients are awesome, but there are some important things to watch out for:

  • Transients are a type of cache, not data storage
  • Transients aren’t always stored in the database, nor as options
  • Transients have a maximum age, not a guaranteed expiration
  • Transients can disappear at any time, and you cannot predict when this will occur

Now, go out and start caching your transient data!

  1. The reason I say invalid or no data here is because it’s possible for a transient to be stored in the database before enabling an object cache, so that would be read directly. []

Requests for PHP: Version 1.6

It’s been a while since I released Requests 1.5 two years ago (!), and I’m trying to get back on top of managing all my projects. The code in Requests has been sitting there working perfectly for a long time, so it’s about time to release a new version.

Announcing Requests 1.6! This release brings a chunk of changes, including:

  • Multiple request support – Send multiple HTTP requests with both fsockopen and cURL, transparently falling back to synchronous requests when parallel support isn’t available. Simply call Requests::request_multiple(), and servers with cURL installed will automatically upgrade to parallel requests (sketched after this list).

  • Proxy support – HTTP proxies are now natively supported via a high-level API. Major props to Ozh for his fantastic work on this.

  • Verify host name for SSL requests – Requests is now the first and only PHP standalone HTTP library to fully verify SSL hostnames even with socket connections. This includes both SNI support and common name checking.

  • Cookie and session support – Adds built-in support for cookies (built entirely as a high-level API). To complement cookies, sessions can be created with a base URL and default options, plus a shared cookie jar.

  • Opt-in exceptions on errors: You can now call $response->throw_for_status() and a Requests_Exception_HTTP exception will be thrown for non-2xx status codes.

  • PUT, DELETE, and PATCH requests are now all fully supported and fully tested.

  • Add Composer support – You can now install Requests via the rmccue/requests package on Composer.
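
To give a flavour of a couple of these items, here’s a rough sketch (the URLs are placeholders):

// Placeholder URLs; assumes Requests 1.6 loaded via its bundled autoloader.
require_once 'library/Requests.php';
Requests::register_autoloader();

// Opt-in exceptions: throws Requests_Exception_HTTP for non-2xx responses.
$response = Requests::get( 'https://example.com/api/item' );
$response->throw_for_status();

// Multiple requests, run in parallel where cURL is available.
$responses = Requests::request_multiple( array(
    array( 'url' => 'https://example.com/api/a' ),
    array( 'url' => 'https://example.com/api/b' ),
) );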

So, how do you upgrade? If you’re using Composer, you can bump your minimum version to 1.6 and then update. (Note that you should remove minimum-stability if you previously set it for Requests.) Otherwise, you can drop the new version in over the top and it will work out of the box. (Version 1.6 is completely backwards compatible with 1.5.)

What about installing for the first time? Just add this to your composer.json:

{
    "require": {
        "rmccue/requests": ">=1.6"
    }
}

Alternatively, you can now install via PEAR:

$ pear channel-discover pear.ryanmccue.info
$ pear install rmccue/Requests

Or, head along to the release page and download a zip or tarball directly.

Along with 1.6, I’ve also created a fancy new site, now powered by Jekyll. This is hopefully a nicer place to read the documentation than on GitHub itself, and should be especially handy to new users.

This release includes a lot of new changes, as is expected for such a long release cycle (although hopefully a little shorter next time). One of the big ones here is the significantly improved SSL support, which should guarantee completely secure connections on all platforms. This involved a lot of learning about how the internals of SSL certificates work, along with working with OpenSSL. Getting them working in a compatible manner was also not particularly easy; I spent about an hour tracking back through PHP’s source code to ensure that stream_socket_client had the same level of availability as fsockopen (it does) all the way back to PHP 5.2.0 (it did).

In all, 19 fantastic third-party contributors helped out with this release, and I’d like to acknowledge those people here:

Feedback on this release would be much appreciated, as always. I look forward to hearing from you and working with you to improve Requests even further!