Faster Page Speed with the Multi Fetch Fragments gem

December 14, 2012

Rails fragment caching is one of the best ways to increase the performance of your application. The render code hunts for the pieces it needs and gathers them, typically from a memcached server, to build your page piece by piece. It would be much faster if you could batch these requests in some way, to reduce the overhead of gathering data from memcached a single piece at a time.

A few months ago, I wrote some code that used Rails fragment caching heavily to increase page rendering speed. At the time, I wrote some quick and dirty code that batched all the fragment cache calls together and used the Rails cache read_multi method to get them all in a single call to my memcache server. The code worked, and it was fast, but I sure didn’t want to show it to anyone else.
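The core of that quick and dirty version was roughly the following sketch (the controller, model, partial name, and cache key format are placeholders, not the actual code):

    class ProductsController < ApplicationController
      def index
        @products = Product.order(:name).limit(500)

        # Build the cache key for every fragment on the page up front...
        keys = @products.map { |p| "views/products/#{p.id}-#{p.updated_at.to_i}" }

        # ...then pull them all from memcached in one round trip with read_multi,
        # instead of one read per fragment.
        @fragments = Rails.cache.read_multi(*keys)

        @products.zip(keys).each do |product, key|
          # Render (and write back) only the fragments that missed the cache.
          @fragments[key] ||= Rails.cache.fetch(key) do
            render_to_string(partial: 'products/product', locals: { product: product })
          end
        end
      end
    end

The view can then emit the pre-rendered fragments directly instead of rendering each partial again.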

A few days ago, Nathan Kontny released a new Rails gem, multi_fetch_fragments, that takes what I was doing and packages it up in a really clean implementation, which he describes in a blog post about faster partial rendering and caching in Rails. The source code is on GitHub.
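As a rough sketch of the call site the gem targets, caching a rendered collection becomes a single option on render (the partial and collection names are made up; check the gem’s README for the exact option):

    <%= render partial: 'products/product', collection: @products, cache: true %>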

Improve Page Speed with Pagination in Rails

December 13, 2012

When you start a new web application, there’s likely to be very little data in your system. Hopefully, over time your users will find your application useful and add data. At some point, loading index pages will become slower and slower due to this extra data. It’s time for pagination.

The math is pretty simple: if you have 500 items on a page and you reduce that to, say, 50 items, rendering that list is likely to be about 10 times faster. There are some constants here, since the header, footer, and sidebar pieces are the same no matter how many items you have, but the difference between 500 and 50 items will be very obvious to your users.

There are two popular pagination gems available for Rails. I’ve been using will_paginate for several years, but there is a new kid on the block called kaminari. Which one you use may just be a matter of taste. Try one and switch if you don’t like it!
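With will_paginate, for example, the change is small (the model name and page size here are just placeholders):

    # app/controllers/products_controller.rb
    # Load a single page of 50 records instead of the whole table.
    @products = Product.paginate(page: params[:page], per_page: 50)

In the index view, <%= will_paginate @products %> then renders the page navigation links.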

Many Sites in a single Heroku application

December 12, 2012

I recently wrote about using the same codebase to run multiple Heroku apps. There are some situations where it may be better to run multiple applications in a single Heroku app. The whole CRAZ8 domain is running as a single Heroku application, frankly because there’s not much at the root portion, and the Blog code isn’t very taxing with the number of users I have.

Rails 3.1 added some routing goodness to make this fairly easy to do. The typical use of sub-domains in a Rails application is to provide different users or accounts with a different url. For a low-traffic site like this, we can use this feature to host completely different systems under sub-domains, but all under a single codebase.

I have a root domain, craz8.com, a blog subdomain, and, if anyone uses ‘www.craz8.com’, I want to redirect them back to the root domain.

Here is a summary of my routes.rb file:
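(The application class and controller names below are placeholders, and the real file has a few extra routes.)

    Craz8::Application.routes.draw do
      # Bounce www.craz8.com back to the root domain, keeping the path.
      constraints subdomain: 'www' do
        match '/' => redirect('http://craz8.com')
        match '/*path' => redirect { |params, req| "http://craz8.com/#{params[:path]}" }
      end

      # The blog lives entirely under blog.craz8.com.
      constraints subdomain: 'blog' do
        scope module: 'blog' do
          match '/' => 'posts#index', as: 'blog_root'
          resources :posts, only: [:index, :show]
        end
      end

      # Everything else is the root craz8.com site.
      root to: 'home#index'
    end

The subdomain constraints are checked top to bottom, so the unconstrained routes at the bottom only see requests that didn’t match one of the subdomains.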

3 Free Rails Performance Features

December 11, 2012

Just by building your application with Ruby on Rails 3.1+ and following the Rails conventions, you get a number of performance features. These features make your application faster for your users, and faster applications have been shown to have happier users.

And you get them all for Free!

Rails adds Rack middleware for ETag (Rack::ETag) and conditional GET (Rack::ConditionalGet) that, together, help reduce the amount of data sent to the client without you having to do anything (although you can add code to make this work even more to your advantage, which is something I’ll write about later).

Every action that is executed that returns content, and is not marked as ‘no-cache’, will have an ETag generated for it. If a browser requests the same action, and the same content is generated - making the ETags match - then the ConditionalGet middleware will throw away the response you generated and send a Not Modified response instead, as the browser already has that content.
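If you do want to help these middlewares along, the standard fresh_when helper lets you set a cheap ETag yourself before rendering; a minimal sketch (the controller and model names are made up):

    class PostsController < ApplicationController
      def show
        @post = Post.find(params[:id])
        # Sets ETag / Last-Modified on the response; if the browser's
        # If-None-Match / If-Modified-Since headers already match, Rails
        # skips the render entirely and returns 304 Not Modified.
        fresh_when etag: @post, last_modified: @post.updated_at, public: true
      end
    end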

Amazon CloudFront for a Rails application

December 10, 2012

If you want to provide the best experience to the users of your Rails application, you should serve all of your static content using a Content Delivery Network, or CDN. There are many CDN providers, but one of the easiest to sign up for and configure is Amazon’s CloudFront service.

Here’s how I configured my Rails blog to use CloudFront to serve all the static assets.

In the AWS Console

Now you have to edit a configuration option in your Rails code. In the production.rb configuration file, make this change to use the host name you just created:
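(A sketch: the setting is the asset host, and the host name below is a placeholder for your own CloudFront domain.)

    # config/environments/production.rb
    # Serve all asset URLs from the CloudFront distribution instead of the app.
    config.action_controller.asset_host = "d1234abcd5678.cloudfront.net"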

Heroku Configuration for Performance Rails Apps

December 8, 2012

The Heroku Platform as a Service is a great way to deploy your Rails apps quickly, but there are a few differences in running a Rails app on their platform.

In a typical Rails application, you want to have your web server, using configuration in Apache or Nginx, deliver static files to the browser. This avoids running any Rails code for the simple case of sending bytes to the network. Heroku doesn’t work this way. As your Rails app is deployed, Heroku injects the Rack::Static middleware into your middleware stack to serve static files from your /public directory. This includes your pre-compiled assets.

This is equivalent to the following entry in your production.rb file:
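(Approximately this setting, in Rails 3 terms; Heroku’s injected middleware means you don’t have to set it yourself.)

    # config/environments/production.rb
    # Serve files out of /public from the Rails stack itself; this is normally
    # disabled in production because Apache/Nginx would handle it.
    config.serve_static_assets = true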

Since you are using the config.assets.digest = true option for your pre-compiled assets, you can afford to set these static assets to be cached by browsers for as long as you need. Add this to your production.rb file:
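(The one-year value below is just the number I use; pick whatever suits your release cadence.)

    # config/environments/production.rb
    # Digest-stamped filenames change whenever their content changes,
    # so browsers can safely cache these files for a full year.
    config.static_cache_control = "public, max-age=31536000"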

Run Admin code in a separate Heroku application

November 11, 2012

I am currently working on a Ruby on Rails application that has a very lightweight user-facing component and a much heavier Admin system that involves more UI and more complex processing to display system statistics.

To improve the scalability of the user-facing site, I decided to split the Admin interface out and run it as a separate web application. To avoid actually creating two different Rails applications, and to re-use my models between them, I run both applications from the same codebase, configured at runtime to be either the User or the Admin app.

Here’s how I did this.

Create an initializer at config/initializers/admin_app.rb with the following contents:
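(A sketch: the environment variable name and the constant are placeholders; set the variable per app with something like heroku config:set ADMIN_APP=true --app myapp-admin.)

    # config/initializers/admin_app.rb
    # Each Heroku app sets ADMIN_APP differently, so the same codebase
    # boots as either the Admin app or the User app.
    ADMIN_APP = ENV['ADMIN_APP'].present?

Routes and controllers can then check ADMIN_APP to decide which interface to expose.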

Mac OS X Boot Drive Selection Gotcha

Some time ago, I used this Lifehacker article to add an Intel SSD to my Macbook Pro. The end of that article explains how to change the startup disk so the SSD becomes your boot drive.

What this article doesn’t mention is that this selection is just for that specific boot time. The next time you boot up, the original disk will be used.

For the last 4 months, I’ve been booting off my original disk and barely using my screaming fast SSD. Since I rarely reboot this machine and I have 8GB of RAM, the slowness of this setup hasn’t been obvious to me. Today I installed Parallels, which insists on being on the boot disk, and that’s when I noticed I was booting from the wrong drive.

With a bit of poking around on Google, I found a document in the Apple Knowledge Base that explains how to choose the boot disk at startup, and its very last paragraph has the useful detail: holding down the key at boot only selects the disk for that one startup; to change the default permanently, use the Startup Disk pane in System Preferences.

Watch out for MongoDB slow queries

September 26, 2009

I’m adding a feature to a production application, using MongoDB to implement it. I’ve rolled this feature out to our production servers in a mirror mode so I can see how it performs, but no production users are affected by the new code.

The first thing I notice is that my code to update a specific record is running in 400 to 4000 milliseconds! I need this to run in 10-100ms for it to be a useful feature. What’s going on here?

I check the production MongoDB, and the mongo server is running at 95-100% CPU and is writing a bunch of slow lookups to its log.

I add some more processing units for extra capacity, but this doesn’t help at all! In fact, everything is now taking 1000-8000 milliseconds to process. It looks like MongoDB is single-threaded on a simple query. Since everyone is raving about how great MongoDB is, how is it that my code is making it look really, really bad?

My new iPhone's here, but it's still a brick

Received our new iPhone 3G[S] phones via UPS today. They arrived before 10am on the day of availability!

We hook them up to iTunes to activate, run through the entire process, and get to the point where it says ‘Activation may take 48 hours’

WTF?

We’re now 12 hours in, and our phones are still just expensive iPod Touch devices!