How do I make my Rails app faster?

January 8, 2013

Request Timeline

I’d like to provide an overview of my process for improving the performance of a Ruby on Rails application, leaving the specific changes to other, more detailed posts.

The timeline at the top shows a basic view of what happens when a user clicks a link to load a page from your site. There are three main pieces. The first is the connection: DNS lookup, setting up a connection to the server, and SSL negotiation. The next is your Ruby on Rails code, which processes the request and generates the response. The final part is the browser, which processes the HTML, applies the CSS, runs the JavaScript, and downloads images and other page assets.

The first thing to do is to measure the size of each of these pieces for the pages you care about. I usually start with the page users are expected to enter the site at: either the public landing page, or the page right after login for private sites. Modern browsers have Developer Tools built in that include a Network chart showing all of the information you need at this stage.

The very first item in the Network chart is your main web page. For a Ruby on Rails app, this is generated entirely by your Rails code on the server. The longer this piece takes to start, then finish, the longer your user waits before anything else can begin.

If your Rails code is taking more than a second to get HTML back to the browser, this is a good place to start looking to improve performance. Looking at the Rails log files will give you a good idea of what parts of the code are taking a long time to execute.

These are the first techniques I use to make my Rails code faster.

Eager loading of associated data

If you’re loading an object in the controller, and then accessing associated records later in the view code, you should change the object loading code to include those associated records. This reduces the number of queries that Active Record makes to your database, which is typically on a different machine.

Here’s an example of a Blog Post with multiple Comments. The first version causes a query for the post, then one query for every comment:

# controller
def show
  @post = BlogPost.find(params[:id])
end

# view
<% @post.comments.each do |comment| %>
  <%= render :partial => 'comment', :object => comment %>
<% end %>

Making a small change to the controller code reduces the number of queries to 2; ActiveRecord does some magic to merge the query results together.

~~~
# controller
def show
  @post = BlogPost.includes(:comments).find(params[:id])
end
~~~
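If the comment partial also touches records hanging off each comment - say each comment’s author (an illustrative association, not part of the example above) - includes accepts nested associations too:

```ruby
# controller - eager load comments and each comment's author in one pass
# (the :author association is hypothetical, for illustration)
def show
  @post = BlogPost.includes(:comments => :author).find(params[:id])
end
```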

Cache models to avoid database calls

Often, it is possible to cache items using the Rails.cache object. Usually you want this backed by a memcache instance for best performance. You also need to be able to expire these objects in some way, but I won’t cover that here, as that is a much bigger topic.
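As a sketch of what Rails.cache.fetch is doing under the hood, here is the cache-aside pattern in plain Ruby, with a Hash standing in for memcache (FakeCache and the names here are illustrative, not a real Rails API):

```ruby
# A stand-in for Rails.cache using a plain Hash, to illustrate
# the cache-aside pattern that Rails.cache.fetch implements.
class FakeCache
  def initialize
    @store = {}
  end

  # Return the cached value if present; otherwise run the block,
  # store its result under the key, and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end

  def delete(key)
    @store.delete(key)
  end
end

cache = FakeCache.new
db_calls = 0

# First lookup misses the cache, so the block runs and hits the "database".
post = cache.fetch("blog_post:1") do
  db_calls += 1
  { id: 1, title: "Hello" }
end

# Second lookup with the same key is served from the cache; no extra call.
post = cache.fetch("blog_post:1") do
  db_calls += 1
  { id: 1, title: "Hello" }
end

puts db_calls  # prints 1: the block only ran on the first, missing lookup
```

The point is that the block, and therefore the database call, only runs on a cache miss; every later lookup with the same key is served from memory.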

Using the same BlogPost model:

# controller
def show
  @post = BlogPost.lookup(params[:id])
end

# model
class BlogPost < ActiveRecord::Base
  def self.lookup(id)
    Rails.cache.fetch "blog_post:#{id}" do
      find(id)
    end
  end
end

Fragment Cache

The Rails Fragment Cache is a mechanism to allow you to cache the parts of your view that don’t change frequently. After a fragment is generated, it is stored in the Rails cache to be used later. A useful way of avoiding the problem of expiring specific items in the cache is to use a value that changes any time the cached object changes. Typically, Rails objects have an updated_at value that is changed every time that object is saved. This value can be used as part of the fragment cache key to use the cached fragment if the object hasn’t changed, and re-create the fragment if the object is different. For ActiveRecord objects, Rails builds this key automatically!

This technique requires a cache store like memcache that can automatically evict older items that are no longer used; otherwise your cache will grow uncontrollably.

Here’s an example from our blog software - the cache key will be something like “blog_posts/{id}-{updated_at}”

# view code
<% cache @post do %>
  <% # Do stuff to render the object %>
<% end %>
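Because the key includes updated_at, anything that should refresh the fragment needs to bump the post’s timestamp. For associated records like comments, ActiveRecord’s :touch option on belongs_to does this automatically (a sketch, assuming a Comment model like the one in the earlier example):

```ruby
# model - saving or destroying a comment updates the parent post's
# updated_at, which rolls the post's fragment cache key forward
class Comment < ActiveRecord::Base
  belongs_to :blog_post, :touch => true
end
```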

Rails 4 will add an exciting new dimension to this feature that I’ll be covering in a later post.

Use The HTTP ETag helpers

Rails recently added helper methods, fresh_when and stale?, to make it easy to handle HTTP ETags in your controller code. These add an ETag header to the response and, when the client already has a matching copy, let you skip response processing completely.

Here’s an example for our blog post rendering:

def show
  @post = BlogPost.lookup(params[:id])
  fresh_when etag: @post, last_modified: @post.updated_at, public: true
end
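When you need more control over how the response is built, the stale? helper mentioned above performs the same freshness check but leaves rendering to you (a sketch using the same lookup):

```ruby
def show
  @post = BlogPost.lookup(params[:id])
  if stale?(etag: @post, last_modified: @post.updated_at, public: true)
    # only build the full response when the client's copy is out of date
    render :show
  end
end
```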

Note how we combined two techniques here: for clients that have visited before, this action does a single memcache lookup and then returns the HTTP Not Modified response code.

In addition, for public content, the Rack::Cache middleware can help improve performance for users who haven’t seen this content before too. Double Win!
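As a sketch of wiring that up in a Rails 3 app (the store URLs here are illustrative - adjust them to your own memcache setup):

```ruby
# Gemfile
gem 'rack-cache'

# config/environments/production.rb
config.middleware.use Rack::Cache,
  :verbose     => true,
  :metastore   => 'memcached://localhost:11211/meta',
  :entitystore => 'memcached://localhost:11211/body'
```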


In many real-world Ruby on Rails applications, the interactions are more complicated than those shown here, but the basic principles remain. With some very simple changes, the performance and scalability of your Rails application can be improved quite a lot, and this may be all you need to get your application to acceptable performance.