Sivel.net  Throwing Hot Coals


VMware ESXi Upgrade Hell

This past weekend, I had a little free time before my family woke up, and I decided I’d bring my free standalone ESXi server up to patch level. How mistaken I was to think this would be accomplished within the 30 minutes I had available. The next 5 hours would be telling…

I logged into the VMware website and downloaded all of the product patches released since install, and proceeded to install them via esxcli software vib install -v.

After rebooting, I checked the “PublicSwitch0” vSwitch to see if its uplink was missing, which seems to happen on every reboot, due to initialization order I imagine (this is on unsupported hardware with 3rd party drivers). It was of course missing, and easily remediated with esxcli network vswitch standard uplink add --uplink-name=vmnic32 --vswitch-name=PublicSwitch0. As a side note, for whatever reason I cannot add the uplink back via the GUI; it must be done via the CLI.

After adding the uplink back to the vSwitch, I still couldn’t access the VM via the interface linked with PublicSwitch0. I tried all sorts of random things, as I’m not an ESXi pro, and after fumbling around for at least an hour, nothing had worked. I had resigned myself to the fact that it was broken, and deployed some critical services to a public cloud for temporary hosting (yay for finally creating Ansible playbooks for my personal stuff). Without any better option, I decided that I was going to re-install ESXi. To do this originally, I had to create a custom ISO with the NIC drivers I needed inside of a Windows VM, since I have no Windows machines, and then follow some complex instructions to create a bootable flash drive. I immediately went to grab the flash drive I had used, only to find it missing. 30 minutes of searching and I never found it.

I decided that I should go buy hardware that was supported, and after about 30 minutes of searching and realizing I wasn’t about to spend thousands on hardware with dual NICs that could hold at least 4 drives, I curled into a ball and cried…Well, not really.

I went back to it, and after thinking things through, I decided to see if I could find a way to downgrade the updated packages. Of course this wasn’t as easy as I wanted. I investigated recovery mode, which was of no help. Then I found the --allow-downgrades flag for esxcli software profile update. Things finally started to make sense; I found the version I had been running originally and executed:

esxcli software profile update -p ESXi-6.5.0-20170701001s-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

And was immediately met with an error of:

[InstallationError]
[Errno 28] No space left on device
       vibs = VMware_locker_tools-light_6.5.0-0.23.5969300

A bit of Google searching and experimentation and I was able to do the following:

cd /tmp
wget http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_6.5.0-0.23.5969300.vib
esxcli software vib install -f -v /tmp/VMware_locker_tools-light_6.5.0-0.23.5969300.vib
esxcli software profile update -p ESXi-6.5.0-20170701001s-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

I excitedly rebooted, to find that the NIC still didn’t seem to be working. I did some uninstall/install dances with the drivers, rebooted like 8 times, still nothing. I’d really screwed myself.

Finally I decided to delete PublicSwitch0, first changing the Network for the VMs using that vSwitch to vSwitch0, then deleting PublicSwitch0. After which, I recreated PublicSwitch0, created a Public Network port group, reassigned the Network for the affected VMs, powered them on, and…it worked!

I decided to update again, and used esxcli software profile update to get to ESXi-6.5.0-20171204001-standard. Again, after rebooting, nothing worked, but at least killing PublicSwitch0 and recreating it resolved the problem.

I’m working on creating a script to delete PublicSwitch0 and recreate it. I’ll then have an easily repeatable fix I can use each time I upgrade.
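
For now, a rough sketch of what that script will look like, using the same vSwitch, port group, and uplink names from above (reassigning the VM Networks and powering the VMs back on still has to happen separately):

# Remove the broken vSwitch (this also drops its port groups), then rebuild it
esxcli network vswitch standard remove --vswitch-name=PublicSwitch0
esxcli network vswitch standard add --vswitch-name=PublicSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic32 --vswitch-name=PublicSwitch0
esxcli network vswitch standard portgroup add --portgroup-name="Public Network" --vswitch-name=PublicSwitch0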

Anyway, maybe this helps someone out there.

VMware

About

IF YOU REALLY want to hear about it, the first thing you’ll probably want to know is where I was born, and what my lousy childhood was like, and how my parents were occupied and all before they had me, and all that David Copperfield kind of crap, but I don’t feel like going into it, if you want to know the truth. (In case that was too subtle for you I am a big fan of The Catcher in the Rye by J.D. Salinger)

In any case I am employed as a Senior Principal Software Engineer at Ansible by Red Hat, working in San Antonio, TX.

I spend the majority of my time writing Python, contributing to Ansible and drinking booze, although maybe not in that order.

With that out of the way enjoy this site.


Manually Install Silverlight On Mac

A few weeks ago Aaron Brazell mentioned on Twitter that he had been unable to run Silverlight on his brand new MacBook Air.

This intrigued me, as many random things do. I love attempting to resolve obscure issues, and after watching him struggle for a few days I decided to help out. I spent about an hour, and learned some really cool things about the installation process for Mac apps packaged as ‘.pkg’ files, and how to go about installing them manually.

I had a hard time finding the information anywhere, and figured that, while this is somewhat specific to Silverlight, it may be useful to others.

Although I use a Mac, and love the beauty of its UI, I spend most of my time on the command line. I am a Linux Systems/DevOps Engineer by trade, so of course I handle most of my daily tasks from the command line.

I needed to download a copy of the Silverlight.dmg file, but quickly found that if you hit the Silverlight site and already have Silverlight installed, you can’t get to the download. Fortunately they link you to an uninstall page on their site, so I just deleted the paths specified there:

rm -rf /Library/Internet\ Plug-Ins/Silverlight.plugin /Library/Receipts/Silverlight.pkg /Library/Receipts/Silverlight_W2_MIX.pkg /Library/Internet\ Plug-Ins/WPFe.plugin /Library/Receipts/WPFe.pkg

I restarted my browser, hit the Silverlight site again, and downloaded the Silverlight.dmg file. I did take this opportunity to inspect my HTTP requests from my browser, and determined the actual URL where the file lives for future reference.

After downloading and double clicking to mount, you can just navigate directly into /Volumes/Silverlight/Silverlight.pkg from the command line. On a Mac, ‘.app’ and ‘.pkg’ bundles, as well as many other items that appear to be files, are actually just specially named directories. Mac styles them to look like files. If you really want, you can right click on such an item and select ‘Show Package Contents’.
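
For example, with the Silverlight DMG mounted, you can poke around inside the package straight from a shell (assuming the default /Volumes/Silverlight mount point):

cd /Volumes/Silverlight/Silverlight.pkg
ls Contents            # Archive.pax.gz, Resources, ...
ls Contents/Resources  # InstallationCheck, preflight, postflight, ...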

Once inside, I took a look around and quickly noticed, based on its size, that the Contents/Archive.pax.gz file was where the majority of the data was located. Looking in the Contents/Resources directory, I found some simple shell and Perl scripts.

There is an InstallationCheck Perl script that is used to validate that your system meets the requirements. After looking into it, I couldn’t determine why it would fail, and neither could Aaron. Attempting to modify this file and install resulted in the installer reporting a generic error, caused by the signature of the InstallationCheck file differing from the stored value. With that option gone, I took a look at the other files.

I found that preflight was a shell script version of the uninstall instructions on the site, and that postflight cleaned some things up and generated CPU-specific optimized libraries for Silverlight to use, as opposed to relying on just-in-time compilation.

Back to Archive.pax.gz

I quickly recognized the ‘.gz’ extension, as that is a standard gzip file extension. I did not, however, recognize the ‘.pax’ file extension, although after reading a little about it, I am a little surprised I didn’t.

In any case, after gunzipping and unarchiving using pax, you basically get a directory hierarchy that can be dropped into the root (/) partition on your Mac. So to keep from wasting any more of your time, let’s get on to the actual steps to get it working:

Note: I wouldn’t try just copy/pasting the whole block below. Run each command separately to avoid potential issues.

cd ~/Downloads
curl -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/536.26.17 (KHTML, like Gecko) Version/6.0.2 Safari/536.26.17" \
    -Lo Silverlight.dmg http://www.microsoft.com/getsilverlight/handlers/getsilverlight.ashx
hdiutil attach Silverlight.dmg
cp -r /Volumes/Silverlight/Silverlight.pkg ~/Downloads/Silverlight.pkg
hdiutil detach /Volumes/Silverlight
cd ~/Downloads/Silverlight.pkg/Contents/
sudo ./Resources/preflight
gunzip Archive.pax.gz
pax -r -f Archive.pax
sudo cp -r Library/ /Library/
cd Resources/
sed -i '.bak' -e 's/rm\ -rf\ coregen.*)/)/' postflight
sudo PACKAGE_PATH=~/Downloads/Silverlight.pkg ./postflight

Close your web browser(s), then reopen and visit the following URL to test Silverlight: http://www.microsoft.com/getsilverlight/default.aspx

At this point you should have Silverlight working on your Mac, or at least it was working for Aaron.

That GitHub Gist still exists, and contains the same steps as outlined above.

Most of those instructions are pretty self-explanatory; the one that is probably not is the sed command. Basically, postflight kicks off a number of commands into the background that utilize a binary called coregen_i386, and it also deletes the coregen_i386 binary. In my testing I found that it often deleted the binary before all of the coregen_i386 commands had finished executing, causing some of them to fail. So the sed command does an in-place edit of the postflight file to remove the rm -rf commands that delete the coregen_i386 and coregen_x86_64 binaries.

Anyway, hopefully this helps someone else. Enjoy!

HowTo Mac Questions Technology

Rocky Roads Ahead

Over the next 2 months (or thereabout), expect things to not always work exactly as they may now on my site. Some things unrelated to the actual install of WordPress on my site may be unavailable during this time. I don’t think anyone other than myself uses them, so I think everything should mostly go unnoticed.

I’m going to be performing some shuffling, and hope to keep it as minimally impactful as possible.

If you spot something strange, give me a shout!

Fun News WordPress

Shadowbox JS Plugin Pulled from the WordPress.org Repository

Update: The Shadowbox JS plugin is back in the repo!

I was notified on December 28, 2011 that a complaint was made to the WordPress.org plugins repository team that the Shadowbox JS plugin contained non-GPL code (shadowbox.js). This is correct, and I have known about it for some time; although I had permission from the author to include it in the plugin download, that still doesn’t make it GPL. Due to this, the plugin has been pulled from the WordPress.org Plugins Repository.

It will likely be some time until I can get it back into the repo, as it is going to require some pretty large updates to the plugin to make use of the WP_Filesystem class to ensure that I can have the plugin reliably download the shadowbox files and put them into place so that they can be used.

I could always strip shadowbox out of the plugin, and require users to download and upload manually, but the user experience of doing such would be horrid, and likely cause a lot of users to stop using the plugin.

In the mean time the plugin can be downloaded from http://dl.sivel.net/wordpress/plugin/shadowbox-js.3.0.3.9.zip

If you wish to test out the upcoming version that is addressing the issue of including shadowbox.js to get it back into the repository, you can download it from http://dl.sivel.net/wordpress/plugin/shadowbox-js.3.0.3.10a.zip

News Plugins Shadowbox WordPress

WordPress Caching Comparisons Part 2

This post has been on my mind for quite some time now, ever since I wrote Part 1 over 1 year ago.

Part 1 only really addressed opcode [1] and object caching [2] and didn’t really touch page caching [3]. In this post I have revisited all tests and added in comparisons of using both the APC Object Cache + Batcache plugins as well as using the W3 Total Cache plugin.

Tests

  • No opcode, no caching
  • APC opcode, no caching
  • APC opcode, APC object caching plugin
  • APC opcode, W3 Total Cache APC object caching
  • APC opcode, APC object caching plugin, Batcache page caching
  • APC opcode, W3 Total Cache APC object and page caching

Comparison Stats

  • PHP generation time [4]
  • Number of include/include_once/require/require_once calls [5]
  • Number of stat() calls per dtruss/strace [6]
  • cURL time to start transfer [7]
  • Apache Bench (ab) tests for concurrency [8] and requests per second

For the stats gathering above, with PHP generation time and cURL time to start transfer, 102 samples were collected; the first 2 were dropped due to cache priming, and the remaining 100 were averaged. For the Apache Bench tests, 12 sets were used, dropping the highest and lowest values and averaging the remaining 10. Include and stat() counts were gathered over 5 sets and did not require averaging, as they were the same between runs.
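
As a sketch of how the cURL samples can be gathered, a loop along these lines (the URL and output file are placeholders) collects the time to start transfer, drops the 2 priming samples, and averages the rest:

for i in $(seq 1 102); do
    curl -s -o /dev/null -w '%{time_starttransfer}\n' http://wordpress.example/ >> ttfb.txt
done
# Drop the 2 cache priming samples and average the remaining 100
tail -n +3 ttfb.txt | awk '{ sum += $1 } END { printf "%.5f\n", sum / NR }'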

To find the optimal concurrency and req/s for Apache Bench, I performed manual testing, visually inspecting the results until I reached what I classified as a “sweet spot”. Using the “sweet spot” stats, I performed additional sets to gather the averages for requests per second.
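
If you want to hunt for your own sweet spot, re-running ab at increasing concurrency levels and watching where requests per second peaks gets you there; something like the following (hostname and request count are placeholders):

for c in 15 30 60 120 260 500 600; do
    echo "concurrency: $c"
    ab -n 1000 -c "$c" http://wordpress.example/ | grep 'Requests per second'
done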

The Setup

  • 256MB Rackspace Cloud Server
  • Ubuntu 11.04 amd64
  • Apache 2.2.17 - Default Ubuntu Install, no modifications, default document root located at /var/www
  • PHP 5.3.5 (mod_php) - Default Ubuntu Install, no modifications
  • PHP APC 3.1.3p1 - Default Ubuntu Install, no modifications
  • MySQL 5.1.54 - Default Ubuntu Install, no modifications
  • WordPress 3.3-beta4-r19470 - Default Install, requests made to the “home” page
  • APC Object Cache trunk version
  • Batcache trunk version
  • W3 Total Cache 0.9.2.4

I have not compared static file caching yet and hope to compare W3 Total Cache and WP Super Cache in the future. In this comparison I am mainly focusing on opcode, object caching and page caching.

I am going to try to keep this comparison about the stats only, and not make this a critique or review of the plugin, although in some cases this will not be possible.

Test Data

No opcode and no caching:
PHP Generation Time: 0.13787 seconds
Number of includes: 80
Number of stat calls: 266
cURL time to start transfer: 0.15463 seconds
Apache Bench Concurrency: 15
Apache Bench Requests Per Second: 19.1483 req/s

APC opcode and no caching:
PHP Generation Time: 0.05088 seconds
Number of includes: 80
Number of stat calls: 148
cURL time to start transfer: 0.05673 seconds
Apache Bench Concurrency: 60
Apache Bench Requests Per Second: 68.2636 req/s

APC opcode and APC Object caching:
PHP Generation Time: 0.03407 seconds
Number of includes: 81
Number of stat calls: 148
cURL time to start transfer: 0.03975 seconds
Apache Bench Concurrency: 260
Apache Bench Requests Per Second: 77.7214 req/s

APC opcode and W3TC APC Object caching:
PHP Generation Time: 0.03993 seconds
Number of includes: 102
Number of stat calls: 285
cURL time to start transfer: 0.04591 seconds
Apache Bench Concurrency: 200
Apache Bench Requests Per Second: 67.581 req/s

APC opcode and APC Object and Page caching with Batcache:
PHP Generation Time: N/A
Number of includes: Unable to collect
Number of stat calls: 41
cURL time to start transfer: 0.00316 seconds
Apache Bench Concurrency: 600
Apache Bench Requests Per Second: 147.2156 req/s

APC opcode and W3TC APC Object and Page caching:
PHP Generation Time: N/A
Number of includes: Unable to collect
Number of stat calls: 87
cURL time to start transfer: 0.00625 seconds
Apache Bench Concurrency: 500
Apache Bench Requests Per Second: 147.8425 req/s

Conclusions

I can state the following about just enabling APC in PHP; if you do nothing else, you should at least do this:

  1. 170% PHP generation time improvement by enabling APC opcode caching
  2. 172% Time to start transfer improvement by enabling APC opcode caching
  3. 300% concurrency improvement by enabling APC opcode caching
  4. 256% requests per second improvement by enabling APC opcode caching

I see performance improvements using both APC+Batcache and W3 Total Cache. However, in all tests, APC+Batcache seems to outperform W3 Total Cache, in PHP generation time, number of includes, number of filesystem stat() calls, time to start transfer, number of concurrent requests and requests per second with relation to concurrency.

I was able to push APC+Batcache to 700 concurrent requests, but req/s dropped. W3TC capped out at 500 concurrent requests and would go no further; however, 500 concurrent requests provided the highest req/s for W3TC.

W3TC does provide a lot of additional functionality to help reduce load on the server, such as tweaking client-side caching and using a CDN, whereas APC+Batcache does not, although there are small unitasking plugins that can add the missing functionality for you.

APC+Batcache consists of adding 3 new files, and no new directories. The W3TC download consists of 60 new directories and 351 files. The W3TC directory tree, which goes 5 levels deep past the plugin’s own directory, causes a significant increase in filesystem stat() calls.
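
If you want to see the stat() overhead for yourself, attaching strace to an Apache worker while making a single request gives a quick summary (the PID below is a placeholder; dtruss fills the same role on a Mac):

# Attach to one Apache worker, make a request, then Ctrl-C for the call summary
strace -c -f -e trace=stat,lstat,open -p 12345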

Most shared hosting providers as well as many multiserver environments will often host their web roots on NFS, and the more filesystem stat() calls, the worse performance you will see, especially under higher load.

Something else to note is that a lot can be done on the server itself to improve performance. You can also use caching applications that logically sit in front of the webserver, instead of caching plugins, which will also improve performance. There are probably eleventy billion ways to improve performance, so if in doubt, consult an expert to help.

Notes:

  1. opcode: A technique of optimizing the PHP code and caching the bytecode compiled version of the code, to reduce the compilation time incurred during PHP code execution
  2. Object Caching: An in memory key-value storage for arbitrary data, to reduce processing, and storage of external calls to speed up retrieval and display of information
  3. Page Caching: Full caching of HTML output for web pages
  4. PHP generation time: The amount of time taken to compile and execute the PHP code into the resulting HTML
  5. Include/Require Count: The number of calls to the PHP include, include_once, require and require_once functions, which are used to load a separate file
  6. stat() call count: The number of unix system calls that return information about files, directories and other filesystem related objects
  7. Start Transfer Time: The amount of time between the request from the client to the server, and when the server begins returning data to the client
  8. Concurrency: The number of concurrent client requests to the server

Code CoolStuff HowTo PHP Technology Uncategorized WordPress

WordPress Caching Comparisons Part 1

For some time now I have been wanting to write an up to date XCache object cache plugin for WordPress. Around 4 years ago I did an opcode caching comparison between APC, XCache and eAccelerator. My results had shown that, at the time, XCache was the fastest of the 3. Unfortunately I didn’t think to keep that data around. As a result of these tests I had standardized the environment I was working on with XCache, and have never thought twice about it. Since I use XCache for opcode caching everywhere, it seemed like writing such an object cache plugin would be beneficial. After writing the plugin I figured it best to test performance, comparing it to the Memcached object cache and the APC object cache. I tweeted a lot during my initial testing, and got an overwhelming response to write up a post, and here we are…

I’ll try to make this comparison comprehensive, but it can be a little difficult to always cover everything.

The test environment:

  • Toshiba T135-S1310
  • Intel SU4100 64bit Dual-Core 1.3GHz
  • 4GB DDR3 Memory
  • Ubuntu 10.10 64bit
  • Apache 2.2.16
  • PHP 5.3.3
  • PHP XCache 1.3.0
  • PHP APC 3.1.3p1
  • Memcached 1.4.5
  • Pecl Memcached 3.0.4
  • MySQL 5.1.49 No caching configured
  • cURL 7.21.0
  • WordPress 3.1-alpha (r16527) Default install with Twenty Ten and no plugins other than the one I mention below

The times are based on the standard timer_stop() code often found in the footer.php of themes, in this case added via the wp_footer action through a mu (must use) plugin:

<?php
add_action('wp_footer', 'print_queries', 1000);
function print_queries() {
?>
<!-- <?php echo get_num_queries(); ?> queries. <?php timer_stop(1); ?> seconds. -->
<?php
}

cURL was used to make the HTTP requests and grab the value from the comment created by the above code:

for (( c=1; c<=101; c++ )); do curl -s http://wordpress.trunk/ | grep '</body>' -B 1 | head -1 | awk -F"queries. " '{print $2}' | awk -F" seconds" '{print $1}'; done;

In each data set I gather 101 results and omit result 1 so that we only have results after the initial cache is generated. The tests are only performed on the home page.

The tests:

  1. No Object or Opcode Cache
  2. Memcached Object Cache with no Opcode Cache
  3. Memcached Object Cache with APC Opcode Cache
  4. Memcached Object Cache with XCache Opcode Cache
  5. APC Object and Opcode Cache
  6. APC Opcode Cache with no Object Cache
  7. XCache Object and Opcode Cache
  8. XCache Opcode Cache with no Object Cache

I didn’t evaluate eAccelerator due to the fact that it isn’t available in the Ubuntu repositories and I did not feel like compiling it…

The results (in seconds):

For a larger view of the spreadsheet above or if you cannot see it, take a look here.

These results are quite interesting and actually shocked me a little bit. The first thing that I found when developing an up to date XCache Object Cache plugin was that it can’t handle objects! So the plugin has to serialize all data when setting, and unserialize when retrieving. This of course is going to add overhead to every operation.

When I first tested the Memcached Object Cache I was surprised at how little it improved speed. It took me about an hour to realize that the comparison of just using Memcached was unfair as it didn’t include any Opcode caching, adding an Opcode cache brings it more in line with what I would expect.

Using an opcode cache improves performance by over 200% on a stock WordPress install without using any object caching. While APC and XCache provided similar results, my tests still show XCache to be ever so slightly faster as an opcode cache.

Where we see the biggest difference between the 3 of these caches is when using APC for both opcode and object caching.

Assuming we are using both Opcode and Object caching here are the results from best to worst:

  1. APC
  2. Memcached (With either APC or XCache)
  3. XCache

At this point the single largest failure of XCache is its inability to store objects, so I am pretty much planning on dropping XCache on my servers in favor of APC, which will be included with PHP as of PHP 6. I would likely still see marginal speed improvements using XCache on sites where I am not using XCache for an object cache, but on those where I am, I’ll get much improved performance out of APC or Memcached.

Now why would I want to use APC over Memcached or vice versa? Well, the one thing that Memcached provides that APC doesn’t is the ability to share the cache between servers. In a load balanced multi web server environment, using APC you would be duplicating the cache on all of the servers, as APC provides no way to share this data or allow for remote connections. Memcached, however, being a PHP-independent daemon, can be used for pooling resources and allowing remote connections. You also get more bang for your buck with Memcached in a load balanced multi server environment because of its pooling capability. The pooling capability allows you to dedicate, say, 128MB of RAM to each memcached instance, and when pooled together that will give you 128MB x N, where N is the number of servers in the pool. Anyway, I digress…
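
As a concrete example of that pooling math, each of the N web servers runs its own small memcached instance (the listen address, port, and size below are placeholders), and the object cache is simply pointed at all of them:

# Run on each web server; 4 servers at 128MB each yields a 512MB pooled cache
memcached -d -u memcache -m 128 -p 11211 -l 10.0.0.11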

In the end, if you have WordPress hosted on a single web server, APC is the way to go. If you are in a multi web server environment, Memcached is the way to go, but remember to install an Opcode cache as well. If you are crazy and just want to use more CPU cycles, XCache is the way to go.

Some of you may be thinking “why would I need an object cache in addition to opcode caching, if the results are similar?” Well, under higher load an object cache will respond better than MySQL, even with MySQL caching. In addition, other factors with MySQL can come into play, such as connectivity to the MySQL server. It may be on another server, with not enough memory, slow disks, or an overloaded network, which decreases performance. Any time that an update query is run, MySQL will flush the whole cache. Another benefit is that we are rarely, if ever, going to use the data exactly as it is given to us from the MySQL query. In the end we are going to process the data before displaying it; an object cache allows you to store the processed data rather than the raw data from the query, saving the CPU cycles required for the processing. Individually these items may not consume much time, but added together, and in a more efficient delivery system, this can make a huge difference.

Now, for any of you who run out and install Memcached: if you install version 1.4.x, make sure you get at least pecl memcached 2.2.6 or 3.0.4. Memcached made a change that breaks deletes with earlier pecl memcached versions, which adversely affects WordPress.

A few additional things that I have been asked to talk about are using caching with a WordPress Network, output caching with Batcache and query counts. I promise to get to those, but I just wanted to get this out sooner rather than later.

Yo Dawg! We heard you like caching so we put a cache in your cache, so you can optimize while you optimize…Sorry couldn’t resist.

Code CoolStuff HowTo PHP Technology WordPress

Slides from my WordCamp NYC Talk

This past weekend I spoke at WordCamp NYC about Building a High Performance WordPress Environment in a panel presentation with Scott Taylor.

The slides for my portion of the presentation can be found at SlideShare and…well…right here:

https://www.slideshare.net/mattmartz/wordcamp-nyc

Locations New York New York City Talks US WordCamp WordPress

Headed to WordCamp New York City

It’s that time of the year again and my favorite WordCamp is about to begin. Last year I made the trip to WordCamp NYC and spoke in two sessions. I had an immense amount of fun last year, and even though I have moved from the East Coast to Texas I couldn’t pass up the opportunity to attend again.

The company that I now work for, Rackspace, which is a big supporter of WordPress, will be sponsoring the event as well as sending myself and Rob Taylor.

As with last year I will be speaking, this time about Performance and Optimization. The Performance and Optimization session will be a panel with myself and Scott Taylor. I will be focusing on “Building a High Performance WordPress Environment” and Scott will be focusing on “Front End Optimization Tools”. If we are able to get a third person on board they will be covering “CDNs and Offloading”. We will be keeping the presentations short to allow time for a question and answer session as well as some discussions between the panel members about certain aspects of performance and optimization in WordPress.

WordCamp NYC will also be holding a Genius Bar as does every other WordCamp, but the one thing most other WordCamps don’t have is me. One other special event happening at WordCamp NYC is the Hacker Room, where myself, Andrew Nacin, Daryl Koopersmith and other WordPress core contributors will be spending time over the 2 day event to write patches for WordPress 3.1 which is just about to hit feature freeze. If you are interested in helping out and getting started with WordPress core development please stop by.

Locations New York New York City Talks US WordCamp WordPress

WordPress One Liner to Customize Author Permalink Redeux

Nearly 2 years ago I wrote a one liner for someone in the WordPress IRC channel to change the author permalink structure. At the time I had not taken the time to really understand WP_Rewrite and as such didn’t understand the implications of flushing the rewrite rules on each page load.

It is sufficient to say that since then, I have taken the time to understand it better and I am fully aware of the negative implications of performing flushes every time someone hits your site. For one thing, the default behavior of flush_rules() is to update the .htaccess file as well as the serialized array in your wp_options table that contains the internal WordPress rewrites. Assuming you are using a nasty permalink structure that starts with something like %category% or %postname%, that serialized array can grow exponentially with the number of pages you have [1].

To make a long story short, I have known that I should have changed the code for almost as long as the post has been published, but I was too lazy to do anything about it. It took a few carefully placed pokes and prods to get me moving, and as such I have updated the post to reflect the removal of using flush_rules().

Notes:

  1. Otto explained this a bit more in depth at http://ottopress.com/2010/category-in-permalinks-considered-harmful/
Code One Liner PHP Snippet WordPress