Updating the example from my previous post: note that the JSON code path is unchanged, as that format doesn't use a layout or view, so it doesn't need them as part of the ETag calculation.
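The difference between the two paths can be sketched in plain Ruby. This is a hypothetical illustration, not the gem's actual code: the key and digest values are made up, standing in for the record's cache key and the template digests.

```ruby
require "digest/md5"

# The HTML ETag must reflect the layout and view templates, because a
# change to either changes the rendered page. The JSON path renders
# neither, so its ETag can come from the record alone. All values here
# are invented for the sketch.
record_key = "posts/1-20130101120000"
layout_sha = "abc123" # stand-in digest of the layout template
view_sha   = "def456" # stand-in digest of the view template

html_etag = Digest::MD5.hexdigest([record_key, layout_sha, view_sha].join("/"))
json_etag = Digest::MD5.hexdigest(record_key) # no layout or view involved
```

Editing the layout changes layout_sha, and therefore html_etag, while json_etag stays the same, which is exactly the behavior described above.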
The gem can be installed by adding gem "etag_for" to your Gemfile.
The Rails fragment cache is the easiest way to make your view template code run fast. A fragment is a piece of generated output, usually HTML, that forms part of the total output, usually the HTML page, returned to a user. The documentation on caching with Rails explains the basics, but doesn't go into much depth, especially for new use cases and techniques that have been developed and expanded over the last year or so.
Here are six techniques and tools that can make your fragment caching easy to manage yet provide the benefits of turbo-charging your view rendering.
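Underneath all of these techniques is one simple pattern: render a fragment only if the store doesn't already hold it. Here's a minimal plain-Ruby sketch of that fetch pattern, with a Hash standing in for the real cache store (the keys and HTML are made up):

```ruby
# A Hash stands in for the cache store; `renders` counts how many times
# the expensive render block actually runs.
store   = {}
renders = 0

# Run the block only on a cache miss, then remember the result under key.
fetch = ->(key, &render) { store[key] ||= render.call }

html1 = fetch.call("posts/1") { renders += 1; "<article>Post 1</article>" }
html2 = fetch.call("posts/1") { renders += 1; "<article>Post 1</article>" }

# The second call is a cache hit: same HTML back, but the block never ran again.
```

The view helpers described below are conveniences layered on this idea, adding key generation, expiry, and store configuration on top.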
HTTP/1.1 added a new header called an entity-tag, shortened to ETag. The entity-tag allows a client browser to ask a server application whether the content it already has is the currently available version, and allows the server to quickly determine this and respond with either a simple 'Yes', or a 'No, here's the latest content' response.
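Here's a hypothetical sketch of that exchange in plain Ruby: the server tags each response body with an ETag, and a repeat request that carries the tag back (in the If-None-Match header) gets a 304 with no body instead of the full page. The respond helper and the body string are invented for the example.

```ruby
require "digest/md5"

# Build a response hash, comparing the client's If-None-Match value
# against the ETag computed from the current body.
def respond(body, if_none_match = nil)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  if if_none_match == etag
    { status: 304, etag: etag, body: "" }    # "Yes, what you have is current"
  else
    { status: 200, etag: etag, body: body }  # "No, here's the latest content"
  end
end

first  = respond("<h1>Hello</h1>")               # full 200 response with a body
second = respond("<h1>Hello</h1>", first[:etag]) # 304, body omitted entirely
```

The bandwidth win is the empty body on the second response; the server still does the work of computing the tag, which is why the digest needs to be fast.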
The Wikipedia article on ETag goes into some of the technical details, but doesn't really give any hint as to how to use it when building a website of our own.
A good ETag value is one where it is unlikely that different content produces the same value. Since ETags are specific to each URL of your website, this process doesn’t need to use the highest security mechanisms available to you, but it must be pretty good. It also needs to be fast. There’s no point having a performance feature that is slower than not using it at all.
For this use, the MD5 hash works really well. Although MD5 hashing can be broken, and different content can be created to have the same hash value, in our application, for ETag generation, these deficiencies aren’t important. MD5 digests can be generated very quickly, so add very little overhead.
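Generating a digest is a one-liner with the standard library; the body string here is made up for the example.

```ruby
require "digest/md5"

# Hash a rendered body into a short, fixed-length tag.
body = "<html><body>Hello, reader</body></html>"
etag = Digest::MD5.hexdigest(body)

# The digest is always 32 hex characters, however large the body is,
# and the same content always produces the same tag.
```

Determinism is the property that makes this usable as an ETag: as long as the content hasn't changed, every request computes the identical tag, so the comparison against If-None-Match works without storing anything.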
I'm building some code that may become a Gem that I release soon. I've extracted my code to an ActiveSupport::Concern, but I need to define a combination of instance methods and static methods (or class methods) on the parent object. This is complicated because I need to be able to name these methods based on a parameter, and I want to be able to apply this code multiple times to any particular model. This last requirement precludes me from using the easy example code that is all over Google search. Here's the equivalent code before I extracted it to an ActiveSupport::Concern module:
My Concern module looks like this:
The magic here is the define_singleton_method that is new in Ruby 1.9.1 - it works the same as define_method, but for class methods instead of instance methods.
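To make the distinction concrete, here's a plain-Ruby sketch of the same idea, without ActiveSupport. All of the names (Countable, acts_as_countable, the generated method names) are hypothetical, invented for the example:

```ruby
module Countable
  # Plain-Ruby stand-in for the Concern's included/class_methods hooks.
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Defines one class method and one instance method whose names are
    # built from +name+, so the module can be applied more than once
    # to the same class with different names.
    def acts_as_countable(name)
      # define_singleton_method creates a class method on the includer
      define_singleton_method("default_#{name}_count") { 0 }

      # define_method creates an ordinary instance method
      define_method("describe_#{name}") { "#{self.class} counts #{name}" }
    end
  end
end

class Widget
  include Countable
  acts_as_countable :sprockets
  acts_as_countable :cogs
end

Widget.default_sprockets_count  # => 0 (class method, defined dynamically)
Widget.new.describe_cogs        # => "Widget counts cogs" (instance method)
```

The two calls to acts_as_countable show why the parameterized names matter: each application generates its own pair of methods instead of colliding on fixed names.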
It took me three hours to find this on the Googles; hopefully it will be easier to find next time.
I'd like to provide an overview of my process for improving the performance of a Ruby on Rails application, leaving the specific changes to other, more detailed posts.
The first thing to do is to measure the size of each of these pieces for the pages you care about. I usually start with the page users are expected to enter the site at: either the public landing page, or the page right after login for private sites. Modern browsers have Developer Tools built in, including a Network panel that shows all of the information you need at this stage.
Happy New Year everyone.
In the spirit of doing something new, I’m starting a monthly newsletter dedicated to improving the performance of your Ruby on Rails applications. The Faster Rails newsletter will take a subject every month and offer some insight into how you can use a particular feature of Ruby on Rails to make your application faster.
In upcoming issues, I’ll cover in depth how to use the asset pipeline, the upcoming Ruby on Rails version 4, and how to make sure your database access is optimal. All of these newsletters will show you how to use Rails features you already have available to make your applications as fast as possible to attract more users and retain them for longer.
In addition to the main content, I’ll include some links to other resources that will be useful to make your Rails apps faster.
After running the code update from my previous blog post about delaying the load of external scripts, I realized that something was very broken.
The code seemed to work for me, on my browser, but my Google Analytics count went almost all the way down to zero. I’m guessing that the script code just wasn’t running at all using the window.onload technique. Hooking in these events on all browsers is clearly not as easy as it looks!
I've bitten the bullet and added jQuery to this blog and changed the way the external scripts are implemented.
Here's my application.js:
I was trying something out with this blog to speed up the loading of the pages. What I was measuring didn’t really make a large difference in the page load time, but I did notice a huge amount of time spent loading all of the social pieces of the page - Disqus, Twitter and Google Plus.
Since I'm not using jQuery on this blog, I don't have an easy $(document).ready() way to run all these pieces after the page has loaded, yet just putting the script into the page causes it to run inline and delay the page being ready for interaction.
WebPageTest reported that the page was taking between 3.5 and 4 seconds to get to the ‘loaded’ state, with 2 of those seconds being the social pieces.
I've recently added ETag support to this blog. There's a post coming soon on how to use the Rails cache helpers to short-circuit the generation of dynamic content that your users' browsers already have. While I was monitoring the situation with the Heroku logs command, I saw something a little strange in the requests being processed. Here is a request from Pingdom for the home page of the blog (with the times removed to save space):
The main thing to notice here is that my application code returned a 304 Not Modified response, but Heroku returned a 200 OK response to the client. Heroku doesn’t cache responses for Cedar apps, so what’s going on here?
The big clue was this data: Dalli: 18.2ms (Dalli is a memcache client library). For such a small request, why is so much time spent getting data from memcache? Perhaps it’s because the entire response actually came from memcache somewhere in the middleware stack that doesn’t log its output. Which other middleware uses memcache?
If you look at the Network debug tab in most modern browsers, the waterfall display clearly shows that the browser doesn’t do any work to start rendering your page until the initial response is received from the server. The faster you can return the browser response, the sooner the user will see the content you want to display.
When I’m speeding up a Rails application, I like to work through a six step process that gets to the ultimate performance for a particular feature. Not all requests can get to the sixth level, but the more you can move your Rails application through these steps, the faster and more responsive your application will be.
This is the worst case scenario, and sadly, it’s the default scenario for a lot of web applications. Every time the browser requests a page, the Rails application looks up some data, renders the results and sends the content back to the browser. This is hard to fix for the very first request of a dynamic application, but we can do much better. Rails 3.x applications automatically use the next level, so are always a little better than the worst case.