A while back I wrote a blog post about loading my homepage in 1 HTTP request. As I said back then, this was only an experiment, but as promised I have done some testing to see whether it was of any use at all.
First, let me explain how I did the tests. I ran them locally (so network lag should not be an issue) with PhantomJS, a headless WebKit browser. That means the timings below are for a page fully loaded in a browser, not just for the request completing. I also did 1000 runs on each setting, just to have a lot of numbers. I never tried going back to normal images instead of base64-encoded ones, though. I should probably do that too at some point. Anyway, here are the results:
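My actual harness isn't in the post, so here is a minimal sketch of what driving 1000 timed runs and averaging them amounts to. The `loadspeed.js` script name and its print-one-number-in-milliseconds contract are assumptions for illustration, not my real setup:

```python
import statistics
import subprocess

def average_ms(samples):
    """Mean of a list of per-run load times, in milliseconds."""
    return statistics.mean(samples)

def run_benchmark(url, runs=1000, script="loadspeed.js"):
    """Invoke PhantomJS `runs` times and average the reported timings.

    Assumes a hypothetical `loadspeed.js` that loads `url` and prints
    the elapsed milliseconds for a fully rendered page, one per run.
    """
    samples = []
    for _ in range(runs):
        out = subprocess.run(
            ["phantomjs", script, url],
            capture_output=True, text=True, check=True,
        ).stdout
        samples.append(float(out.strip()))
    return average_ms(samples)
```

Running against localhost, as I did, keeps network variance out of the samples, so the mean is a reasonable summary.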
First up: no caching, no aggregation, just how the front page of this blog is in this theme: an average of 1525 ms. Not very impressive. But then, the optimization effort is not very impressive either.
Second: cache pages for anonymous users, since the test requests are not logged in, and none of my visitors are logged in either. 131 ms. That is an improvement. This is localhost, of course, but that is also the point.
Then: turn on the Boost module, serving plain HTML pages with a couple of aggregated CSS and JS files: 85 ms. Pretty darn fast. Let's try to get rid of the remaining HTTP requests.
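Aggregation itself is conceptually simple: serve one combined file instead of many small ones. A hypothetical sketch (the file names are made up) of what combining stylesheets boils down to:

```python
from pathlib import Path

def aggregate(paths, out_path):
    """Concatenate several text assets (CSS or JS) into one file,
    so the browser fetches a single resource instead of many.
    Order matters: later files can override earlier rules."""
    combined = "\n".join(Path(p).read_text() for p in paths)
    Path(out_path).write_text(combined)
    return combined
```

Drupal's built-in aggregation and Boost do more than this (cache busting, per-page bundles), but the HTTP-request savings come from exactly this concatenation.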
The current version, how I now serve my homepage: 1 HTTP request, plain HTML from Boost. 92 ms. Darn, that is actually slower again. The browser cache definitely plays a role in rendering fast. Can I do some more tweaking to this?
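For context, the 1-request version works by inlining every image as a base64 data URI. A minimal sketch of that encoding step (not my actual build process) shows the trade-off:

```python
import base64

def to_data_uri(data: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URI, usable directly in
    an <img src="..."> attribute or a CSS url(). This removes one HTTP
    request per image, but costs roughly 33% more bytes and loses the
    separate browser caching of the image file."""
    encoded = base64.b64encode(data).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

That lost per-asset caching is likely why the single-request page came out slower here than the 85 ms Boost setup.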
On to the last test: get rid of all the whitespace I can find in the CSS and JS, and try again with 1 HTTP request. Down to 89 ms!
OK, conclusions: if you have a feeling your visitors will visit more than one page on your site, there is no need to go all the way down to 1 HTTP request. And a couple of disclaimers: the numbers will probably vary depending on the connection, how fast your computer is (for the browser cache), and where in the world you are accessing my site from. But with some variables out of the way, it seems that eliminating all requests is no performance gain compared to lowering the number of HTTP requests and minifying. Also, isn't it interesting that getting rid of whitespace alone saved more than 3% in load time? Another reason why it's probably best to minify, gzip and aggregate. Caching methods other than Boost are probably more efficient, but I keep this blog at a cheap host, so Varnish, memcache and so on are not an option. And frankly, this post cannot cover both front-end and back-end performance, right? For my next test: does anyone have any good tools for doing the same tests on a “headless mobile device”?
Dave • Sunday, Jul 8th 2012
It would be interesting to see how things compare in more of a real-world test, using a service such as webpagetest.org.