If you look at the Network tab in most modern browsers' debugging tools, the waterfall display shows that the browser does no work to render your page until the initial response arrives from the server. The faster you can return that response, the sooner the user sees the content you want to display.
When I'm speeding up a Rails application, I like to work through a six-step process that moves a particular feature toward its ultimate performance. Not every request can reach the sixth level, but the further you can move your Rails application through these steps, the faster and more responsive it will be.
1. Server Creates Entire Response and Sends It
This is the worst-case scenario, and sadly it's the default for a lot of web applications. Every time the browser requests a page, the Rails application looks up some data, renders the results and sends the content back to the browser. This is hard to avoid for the very first request to a dynamic application, but we can do much better after that. Rails 3.x applications automatically use the next level, so they are always a little better than the worst case.
2. Server Creates Entire Response but Sends a 'Nothing Changed' Response
In Rails 3.x, the default middleware stack includes Rack::ETag, which signals to the browser when the content it already holds hasn't changed since it was last requested. This reduces the bandwidth used by your application, but doesn't reduce the amount of work done by your application server process, because the full response is still generated in order to compute the digest. The good news is that you get this improvement for free with Rails 3.x - you do not need to make any code changes to take advantage of this feature.
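The mechanism can be sketched in plain Ruby. This is a simplified model of what Rack::ETag-style middleware does, not the real implementation; the actual middleware also skips streaming bodies and responses that already carry an ETag:

```ruby
require "digest"

# Simplified sketch of ETag middleware behavior: hash the rendered body,
# and answer 304 Not Modified when the browser already has that version.
def respond(body, if_none_match: nil)
  etag = %("#{Digest::MD5.hexdigest(body)}")
  if if_none_match == etag
    { status: 304, body: "", etag: etag }   # browser reuses its cached copy
  else
    { status: 200, body: body, etag: etag }
  end
end

first  = respond("<html>hello</html>")
second = respond("<html>hello</html>", if_none_match: first[:etag])
# first is a full 200 response; second is a 304 with an empty body --
# the server still rendered the page, but sent almost no bytes.
```

Note that the body is rendered on every request either way; the saving here is purely bandwidth, which is why this is only the second level.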
3. Server Creates Response from Cached Fragments and Sends It
The server loads just enough data to compute the cache keys for the fragments of the page that are cached, and uses those fragments to build the response rather than running all of the rendering code. Depending on where you choose to cache, many pieces of the page can be shared between users and cached for a very long time. The longer the fragments stay cached, the lower the total cost of rendering over the lifetime of your application.
If you have enough fragments on a page, retrieving them one at a time may become the slowest part of the request. A recently released gem allows many fragments to be fetched from the cache store in a single round trip, increasing the overall speed of these pages.
In addition, the Rack::ETag middleware will still reduce bandwidth if the user has seen this exact page before.
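The idea behind fragment caching, including the batched fetch, can be sketched conceptually. Here a plain Hash stands in for the real cache store; Rails' cache stores expose `fetch` and `read_multi` with a similar shape, and real keys would usually be built from a model's `cache_key`:

```ruby
# Toy fragment cache: STORE stands in for the cache store, and RENDERS
# records each time actual rendering work happens.
STORE = {}
RENDERS = []

def fetch_fragment(key)
  STORE[key] ||= begin
    RENDERS << key                       # a miss: run the rendering code
    yield
  end
end

def fetch_fragments(keys)
  # One batched lookup instead of a round trip per fragment -- the
  # optimization provided by the multi-get gem mentioned above.
  missing = keys.reject { |k| STORE.key?(k) }
  missing.each { |k| fetch_fragment(k) { "<li>#{k}</li>" } }
  keys.map { |k| STORE[k] }
end

fetch_fragment("header") { "<h1>Shop</h1>" }   # miss: block runs
fetch_fragment("header") { "<h1>Shop</h1>" }   # hit: cached HTML returned
fetch_fragments(["item/1", "item/2"])
# RENDERS is ["header", "item/1", "item/2"]: each fragment rendered once
```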
4. Server Sends Response but Does No Work
In Rails, this is done with Action Caching. The entire response is saved in a cache store and used to answer future requests without doing any work to generate the output. Action Caching does run all the before_filter code for an action, so authentication can still be performed (which could be construed as doing work, but it's often a step you can't skip).

Action Caching can be hard to achieve. There are often pieces of the page that are customized for a particular user, and caching every possible version of a page is usually the wrong way to improve performance. Expiring complex pages can also be very error-prone.
The Rack::ETag middleware mechanism will be used to reduce bandwidth if the user has seen this exact page before.
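The control flow is worth seeing in miniature. In a real app you would declare `caches_action` in the controller; this plain-Ruby sketch just illustrates the key property that filters run but rendering does not:

```ruby
# Sketch of Action Caching's control flow: the "before_filter" always
# runs, but the rendered response is stored and reused. A Hash stands in
# for the cache store; $render_count proves rendering is skipped on hits.
RESPONSE_CACHE = {}
$render_count = 0

def authenticate!(user)           # a before_filter: runs on every request
  raise "401 Unauthorized" unless user
end

def show(path, user)
  authenticate!(user)
  RESPONSE_CACHE[path] ||= begin
    $render_count += 1            # expensive templating happens only once
    "<html>page for #{path}</html>"
  end
end

show("/products/1", "alice")      # miss: renders and caches
show("/products/1", "bob")        # hit: cached body, but auth still ran
# $render_count == 1
```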
5. Server Does Nothing When Asked
Rails 3.x added support for Conditional GET inside actions. This lets you write specific code that detects when the browser already has the current version of a page and short-circuits the page-generation process.
If you can use the ETag information to avoid performing some or all of the database lookups needed to render the page, then you will increase the speed of displaying the page to your user.
This mechanism doesn't help the first time a user views a page, but for frequently visited pages it is almost the fastest way to serve every view after the first.
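In a Rails controller you would typically use the `fresh_when` or `stale?` helpers with a record's `updated_at`; this plain-Ruby sketch shows the idea using `Last-Modified` - a cheap timestamp comparison decides whether the expensive render can be skipped entirely:

```ruby
require "time"

# Conditional GET sketch: if the client's cached copy is still current,
# answer 304 without touching the expensive rendering path.
$full_renders = 0

def show(updated_at, if_modified_since: nil)
  if if_modified_since && updated_at <= if_modified_since
    { status: 304 }                     # no rendering, no body sent
  else
    $full_renders += 1                  # the expensive path: queries + views
    { status: 200, body: "<html>...</html>", last_modified: updated_at }
  end
end

stamp  = Time.parse("2013-06-01 12:00:00 UTC")
first  = show(stamp)                                            # full render
second = show(stamp, if_modified_since: first[:last_modified])  # 304, no work
```

The saving comes from needing only the record's timestamp (or a cheap ETag) up front, rather than all the data required to render the page.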
6. Server Does Nothing
The final level is to avoid sending requests to your application at all. Image sprites are one example: by combining many small images into a single file, they reduce the number of requests the browser makes and improve your application's performance.
For all of these static files, with the Rails Asset Pipeline set to generate and use digests, you can also set HTTP cache headers that allow browsers to avoid requesting the files again for a long time, while still being assured that changes are picked up when you deploy new code, because the digested filenames change.
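A sketch of the relevant production settings, assuming a Rails 3.1+ asset pipeline; in many deployments the front-end web server (nginx or Apache) serves these files and sets the headers instead, so treat this as illustrative:

```ruby
# config/environments/production.rb (excerpt) -- a sketch, not a full config.

# Fingerprint asset filenames, e.g. application-1a2b3c4d.css, so a deploy
# that changes an asset also changes its URL.
config.assets.digest = true

# A far-future expiry is safe because any change produces a new filename;
# browsers can serve the old file from their cache without asking.
config.static_cache_control = "public, max-age=31536000"
```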
As an extreme example, I recently worked with an application that had a mouseover effect that performed a server request to fetch some data and display it in a small box. The page in question already displayed all of that data, so those requests were not needed at all. I made a small change to pre-render the data into the page and updated the mouseover code to use the local data instead.
For this page, with appropriate caching, the initial display may be 50ms slower, but each mouseover effect is 300-500ms faster than before, and no 'waiting' graphic is needed while the user waits for the popup to appear.
To get the ultimate performance from your Rails application, you have to think clearly about what data you need to get to your user and how best to do that in an efficient way. With the above techniques, you can improve upon what Rails 3 gives you out of the box and make your users happier. Happier customers are better customers!