The Magic Sauce: CloudFront IAM Policies for Use with W3 Total Cache

If you want to improve the performance of your WordPress installation, W3 Total Cache is one of the best options out there for optimizing page delivery and script minification. But if you want to take your performance web-scale, you’ll need to consider a commercial-grade Content Delivery Network (CDN). Fortunately, W3 Total Cache offers multiple options, and (in my humble opinion) Amazon Web Services CloudFront running on top of an S3 bucket is the best CDN option available. While the W3 Total Cache FAQ has a decent overview of how to configure a basic CloudFront/S3 CDN, AWS architectures that implement the higher level of security offered by assigning specific IAM accounts to CloudFront distributions and S3 buckets can present a problem.

Even though Frederick Townes deserves a Nobel Prize for creating W3 Total Cache, he didn’t quite nail it when it comes to parsing the messages generated by the AWS API when the plugin tries to connect to CloudFront.

Here is the most common error message I ran into when trying to configure W3 Total Cache to work with CloudFront within a PCI 3-compliant AWS architecture:

Error: Unable to list buckets (S3::listBuckets(): [AccessDenied] Access Denied).

W3 Total Cache tries to create a dropdown list of the S3 buckets to which the IAM account has access. While ListBucket is a valid IAM action for listing the contents of an S3 bucket, listBuckets isn’t valid. The correct action for listing the buckets themselves is ListAllMyBuckets.

The correct IAM policy should look like this (“your-bucket-name” is just a placeholder; substitute the ARN of your own bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}

But this only covers the policy to list the S3 buckets. If you’re using W3 Total Cache with CloudFront, the IAM account also needs to be able to list the CloudFront distributions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudfront:ListDistributions",
      "Resource": "*"
    }
  ]
}

From a security perspective, it’s not ideal that the IAM account has to be able to list all the distributions and buckets. A better solution would be to provide a field in which to manually enter the bucket or distribution name.

Hack Away – Musings on When to Decouple Application Components

First, I’m going to warn you that this isn’t a “How-To” post, or a “Take a look at this nifty new tech” post. This is just me brainstorming on how to choose when to decouple a specific component from where it normally lives inside a “closed-box” web application.

If an application might one day move to the cloud, its components should be designed from the start so they can be easily decoupled. The best strategy to support future decoupling is to practice good Object-Oriented Programming (OOP) and organize your code into discrete functions and classes, perhaps even adopting a Model-View-Controller (MVC) design pattern. Writing thousands of lines of procedural (spaghetti) code means a hard time down the road, but even that mess can usually be split into templates that can be referenced and reused via “includes” (I work a lot in PHP and ColdFusion, and there are a million and one ways to include a code template).

Once the code is separated into templates or class libraries, it’s not too difficult to write web service interfaces into that functionality: building REST APIs that pop out XML and JSON is fun, but sometimes all that is required of a fledgling web service is to dump out a comma-delimited list.

Unless these baby web services are behind some sort of firewall and not publicly accessible to the world, some sort of security mechanism to authorize requests is necessary, whether it’s a full-blown OAuth provider or just a super-secret revolving hashed key (wait, I already said OAuth).

I have a pretty good handle on HOW to decouple application services (I’ve been building these things since 1997), but I’ve never really tried to codify WHEN I should decouple them.

Thinking about my prototypical Web application, I’m always trying to solve the problem of one or two components dragging the whole system down. They don’t do it all the time, so throwing more resources at the whole system to solve an intermittent problem seems extremely wasteful.

I keep coming back to some measure of latency as the solution: how long does it take the application components to talk to each other within a closed box, versus how much latency do we have trying to transfer that data over the network?

I hate mathematics, but I’m going to try to formulate how to determine if decoupling makes sense.

Request Latency + Component A Processing Time + Response Latency = Component Latency
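For the code-minded, that formula is trivial to express. Here’s a sketch in JavaScript (the function and parameter names are mine, purely for illustration):

```javascript
// Component Latency = Request Latency + Component Processing Time + Response Latency
function componentLatency(requestMs, processingMs, responseMs) {
  return requestMs + processingMs + responseMs;
}

// e.g., 20 ms request + 100 ms processing + 30 ms response
componentLatency(20, 100, 30); // 150 ms total
```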

Pretty simple, huh?

In the closed-box application, Request Latency and Response Latency should be effectively zero, since components can speak to each other directly. The only ways to improve the performance are to optimize the code or throw more horsepower at the system. Code optimization is always a good idea, but vertical scaling hits hardware limitations pretty quickly, and is often a huge waste when trying to solve variable performance issues for a single component.

In this imaginary distributed application, I’m yanking out that problem component and giving it its own set of resources. The rest of the application can run just as it always did, but instead of throwing massive resources at the whole black box, I can throw modest resources at the distributed component. There’s definitely a hit on Request Latency and Response Latency (both of which are tied to network latency, Web server latency, and possibly lunar tidal effects).

Measuring the Request Latency and Response Latency will help point to whether or not it makes sense to decouple. First, some debugging capability is required inside the black-box application component to track how long a component processes from the time it receives a request to the time it sends out the result (I’m sure we all do this for all of our code already, right?). To facilitate debugging, I typically set a timestamp variable at the start of the component, and another at the end of the component, and if debugging is enabled, I pass the difference of the two to whatever I’m using for debugging (often just an HTML comment). This is the time to beat.
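Here’s a rough sketch of that timing habit, in JavaScript rather than PHP or ColdFusion (`timeComponent` and its arguments are names I made up for illustration, not part of any library):

```javascript
// Sketch: wrap a component call with start and end timestamps so its
// processing time can be reported when debugging is enabled.
function timeComponent(label, componentFn, debug) {
  const start = Date.now();           // timestamp at the start of the component
  const result = componentFn();       // the component does its work
  const elapsed = Date.now() - start; // difference = Component Processing Time
  if (debug) {
    // My old habit: pass the timing along as an HTML comment
    console.log(`<!-- ${label}: ${elapsed} ms -->`);
  }
  return { result, elapsed };
}
```

That `elapsed` value is the time to beat.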

For the distributed component, one way to capture this latency is to spin up a simple “Hello World” application on the newly provisioned distributed application server (although to get a more realistic picture I’d probably send the same volume of data, or a static result dump from the original component, instead of the “Hello World” text). From the primary application server, I can run a cURL request (PHP), or CFHTTP (ColdFusion), or whatever flavor of HTTP request I prefer for that application. Before and after the request, the component should output a timestamp. The difference in the timestamps will give a good idea of the combined Request and Response latency. For even better insight, I could add a start timestamp and stop timestamp to the “Hello World” and pass that back to the application.
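The arithmetic behind that measurement might look like this (again a JavaScript sketch with made-up names; if the “Hello World” endpoint reports its own processing time, subtract it from the round trip to isolate the network portion):

```javascript
// Sketch: estimate the combined Request + Response latency for the
// distributed "Hello World" component from two local timestamps.
// remoteProcessingMs is optional: the remote component's own start/stop
// timestamp difference, if it passes one back.
function networkLatency(sentAt, receivedAt, remoteProcessingMs = 0) {
  const roundTrip = receivedAt - sentAt;  // total round-trip time in ms
  return roundTrip - remoteProcessingMs;  // Request + Response latency only
}
```

With a runtime that has a built-in HTTP client (e.g., fetch in Node 18+), `sentAt` would be `Date.now()` just before the request and `receivedAt` just after the response body arrives.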

If the “Hello World” distributed application Request/Response Latency is longer than the black box Component Processing Latency, it’s time to scale up the “hardware” until there is a significant performance difference. In a cloud environment, this type of experimental scalability is possible, but for anyone using physical servers, this tactic may be cost prohibitive (and a clear indicator you should start using cloud resources right away).

Once the performance target is identified, it’s time to decide if the cost of the “infrastructure” is worth the gains from reworking code to support a distributed application architecture.

If there is no real difference between the black-box component and the distributed component, then it may be necessary to look for performance bottlenecks elsewhere in the overall application architecture (database performance, client connections to Web servers, under-provisioned network connections, rogue processes, etc.).

So, to boil it all down to a theory, if the Request/Response latency in a distributed application network is lower than the processing time for a black-box component, that component may be a good candidate for repackaging as a distributed application.
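Boiled all the way down to code, the theory is a one-liner (a sketch with my own names; in practice the threshold would probably want some margin rather than a strict comparison):

```javascript
// Sketch of the boiled-down rule: a component is a decoupling candidate
// when its in-box processing time exceeds the measured Request/Response
// latency of the distributed version.
function isDecouplingCandidate(processingMs, requestResponseLatencyMs) {
  return processingMs > requestResponseLatencyMs;
}

isDecouplingCandidate(500, 120); // true  — slow component, cheap network hop
isDecouplingCandidate(80, 120);  // false — the network hop costs more than the work
```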

I have several projects on my plate this month that will let me test out this metric. It may be an oversimplified approach to a complex problem, but when working with a few million lines of spaghetti code, you’ve got to start somewhere.

Not There Yet: Hacking Twitter Bootstrap Responsive Navigation Bar


For the last few weeks, I’ve been building a new responsive theme in WordPress using the Twitter Bootstrap framework. Halfway through the project, I realized that the responsive drop-down navigation bar built into Bootstrap kind of sucks. At first glance, it works smoothly, resizing and realigning itself without a hitch; moving through breakpoints without a jitter or a stutter; popping up a cute, tasty little navburger just when my menus are getting too tight.

And then when I try to actually visit one of the main sections of the site, I end up just opening and closing a drop-down in “desktop mode”, or playing the accordion in “mobile mode”.

I recorded it for posterity on Twitter:

Those top-level sections of the site are pretty important (even if Team Bootstrap thinks otherwise), so finding a solution was a priority.

I scoured Google for every possible iteration of responsive drop-down navigation that doesn’t suck, and while I did find some that were pretty cool, they still sucked in some important, obnoxiously obvious way.

And I really want my tasty navburger.

Digging into Bootstrap, it does some pretty clever manipulation of link attributes via CSS and JavaScript, particularly with its use of the custom attribute “data-toggle”, which has mysterious and magical properties in Bootstrap, not unlike the fabled unicorn.

From many years of experience developing dynamic UX components in JavaScript (well, hacking away at and cobbling together the work of people who actually know what they’re doing), I know the best way to deal with a unicorn is to kill it. And then eat it.

It makes for a tasty navburger.

To kill that “data-toggle” unicorn, I decided to use some jQuery to rearrange classes and snip out the “unnecessary” attribute.

Here’s what I came up with:


// JavaScript Document
// Toggle the Bootstrap Navigation link class so it actually links
;(function ($) {
    $(function() {
        var links = $('a.dropdown-toggle').click(function() {
            $(this).mouseup(function() {
                // Restore the magic attribute on every toggle...
                $('.dropdown-toggle').attr("data-toggle", "dropdown");
                // ...then snip it off the one just clicked so it acts as a plain link
                $(this).attr("data-toggle", "");
            });
        });
    });
})(jQuery);
This worked pretty well when actually using a device equipped with a real mouse, but my iPad Mini had no idea what a “click” is, and simply ignored my efforts to get to those top level pages in the navigation bar and navburger.

I had one of those “DUH” moments when I remembered that mobile devices are always a bit touchy (did you catch what I did with that pun?).

Again, it was time to show Bootstrap who was in charge here, so I Google’d (yes, Google is who is in charge here) “touch click jQuery“, and after browsing a bit, decided against jQuery Mobile (of which I’m actually a fan, but I didn’t want the unnecessary bloat) in favor of jQuery.Tap, a lovely jQuery extension that seemed perfect for my little project.

I added jquery.tap.js to my /js folder and referenced it in the footer (because that’s the way I roll these days), then added the following code to my toggleLink.js:

;(function ($) {
    $(function() {
        var links = $('a.dropdown-toggle').on('click tap', function() {
            $(this).on('click tap', function() {
                // Restore the magic attribute on every toggle...
                $('.dropdown-toggle').attr("data-toggle", "dropdown");
                // ...then snip it off the one just clicked/tapped
                $(this).attr("data-toggle", "");
            });
        });
    });
})(jQuery);

This solution still isn’t perfect, but it at least makes the Bootstrap responsive drop-down navigation functional. The biggest issue is that the current code takes three clicks to make the top navigation items link to their respective pages.

I’ll continue hammering away at this over the next week or so, so make sure to check back. And if you have THE solution, please share!

QUORA: How should I structure PHP files to output nicely formatted HTML?

Curtis Oden added this answer.

Start by putting <?php on the first line and ?> on the last line. You’ll now have to either explicitly ‘echo’ the HTML to the screen, or close PHP ( ?> ) before your HTML, and then reopen it after.

Here’s a rough example:

<?php
//This is my PHP application
?>

<?php
//The whitespace above won’t go to the browser
//START awesome conditional code.

$helloWorld = 'Hello World';

//END awesome conditional code.
?>

Hello Wild World

Sometimes I just like to shout: “<?php echo ($helloWorld); ?>.”

And sometimes I like to whisper: “<?php
echo ($helloWorld);
?>”

And sometimes ….
<?php echo ('I feel fancy and feel like singing "' . $helloWorld . '."'); ?>

<?php //This is the end… ?>

And there are a dozen other ways to do this. Just remember: what happens in PHP, stays in PHP.

See question on Quora


QUORA: How can you inspire programmers to work longer work weeks voluntarily?

Curtis Oden voted up this answer.

Contrary to popular belief, most of us have family, friends, children, and social lives. We don’t need “motivation” to work long hours; we need you people in management to give us clear, concise, and comprehensive specs (yes, you can do all three), ask us for time estimates based on those specs (because no, you do NOT have the expertise to know how long a particular bit of code should take to write), and then harbor sane expectations.

“This is going to take about 120 man hours” does not mean “we’d like to have 100 man hours, but if you ‘negotiate’ hard we can ‘finish’ in 80 and then spend another 40 in unpaid, salaried overtime because we have nothing better to do”. Programming time is not something you can haggle over, and we are not bored kids who can’t think of anything better to do with our time than work long hours for free.

No, this isn’t an angry, frustrated rant; I work in a company that does what I described above, and the result is that we finish projects on time and on budget and we bill clients what the… Read more…