After a short delay, I’ve upgraded the software for this blog and am planning on getting back into blogging regularly.
How long have I been away? Well, upgrading from Rails 3 to Rails 6 is not something I recommend doing in a single step. Since this blog is actually a very simple Rails app, it was easier to create a new Rails application and move the relevant code into it than to try to upgrade the code in place. Skipping 2 major versions of Rails is quite a gap.
At the same time as upgrading, I wanted to simplify the template and remove as much cruft as possible, but not go as far as using a static site generator. I’m using the Pure CSS system, with some minor customization for my own content. The site also uses Turbolinks to reduce load time between pages.
I’m building a site, and have added a sitemap to help the Googles find the pages and give them an idea of when the pages need to be rescanned. I use the excellent sitemap_generator gem, and it works really well.
My typical workflow during development is to run the sitemap generator on my development machine, with the Sitemap host set to the production server, and then to check the generated file in and push to the production server.
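The generator is driven by config/sitemap.rb; a minimal version supporting that workflow looks something like this - the host name and the Post model are placeholders for the real ones:

```ruby
# config/sitemap.rb
# The default_host must point at the production site so the generated
# URLs are correct even when the generator runs on a development machine.
SitemapGenerator::Sitemap.default_host = "https://www.example.com"

SitemapGenerator::Sitemap.create do
  # lastmod feeds the "when to rescan" hint mentioned above.
  Post.find_each do |post|
    add post_path(post), lastmod: post.updated_at
  end
end
```

Running `rake sitemap:refresh` then writes the sitemap files ready to be checked in.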
This works great to get something up and running, but very quickly your production server data will not match your development server data, particularly where the updated_at values are used, and this technique starts to break down.
The next step can be to run your code locally, but connect to the production database to get the correct data. In the latest Rails versions - 4.1 and above - the database.yml file can include a URL to the database server. This makes it easy to configure your production database locally to allow this task to run.
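In database.yml that looks something like the fragment below - the URL is best supplied through an environment variable rather than checked into the repository:

```yaml
# config/database.yml
production:
  # A single URL replaces the individual adapter/host/username/password keys.
  url: <%= ENV["DATABASE_URL"] %>
```

With that in place, exporting DATABASE_URL locally lets the sitemap task read the real production data.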
I recently read a pretty good e-book on parallel and concurrent programming in Ruby, Working with Ruby Threads by Jesse Storimer. I decided to try running something with Rails and threading enabled, so I converted this blog to run with Puma and Rubinius on Heroku. Here’s how I did it:
Add Puma to the Gemfile and disable Unicorn
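In the Gemfile, that swap is just:

```ruby
# Gemfile
# gem "unicorn"   # disabled in favor of Puma
gem "puma"
```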
Also change the Heroku Procfile to use Puma with a minimum of 5 threads and a maximum of 16 - the defaults are 0 and 16
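Using Puma’s -t min:max flag, the Procfile line becomes something like:

```
web: bundle exec puma -t 5:16 -p $PORT -e $RACK_ENV
```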
Update the Gemfile to tell Heroku to use Rubinius:
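The ruby directive in the Gemfile selects the engine - the version numbers below are examples from the Rubinius 2.x era, not a recommendation:

```ruby
# Gemfile
ruby "2.0.0", engine: "rbx", engine_version: "2.1.1"
```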
I recently had an application where I needed to find the location for an incoming IP address. I used the MaxMind GeoLite City database to provide the raw data, and I wanted this data inside my own database so I could link to the data records as needed.
The GeoLite data has two tables: a Location table with the city, latitude and longitude, and a Blocks table with a start/end IP address and a location_id referencing the Location table.
My initial migration looked like this, with a start_ip and end_ip as a bigint (to avoid problems with signed/unsigned conversions):
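A migration along these lines matches that description - the table and column names are my assumptions, and limit: 8 is what makes an integer column a bigint in PostgreSQL:

```ruby
class CreateGeoliteTables < ActiveRecord::Migration
  def change
    create_table :locations do |t|
      t.string  :city
      t.decimal :latitude,  precision: 9, scale: 6
      t.decimal :longitude, precision: 9, scale: 6
    end

    create_table :blocks do |t|
      t.integer :start_ip, limit: 8   # bigint: avoids signed 32-bit overflow
      t.integer :end_ip,   limit: 8
      t.integer :location_id
    end
  end
end
```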
This worked well enough to find a location based on IP address, but on my MacBook Pro with PostgreSQL, with 1M block records, each lookup was taking over 230ms.
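The lookup converts the dotted address to the same integer form the table stores. A sketch, with Block as an assumed model name for the blocks table:

```ruby
require "ipaddr"

# Dotted-quad IPv4 address to the bigint form stored in start_ip/end_ip.
def ip_to_i(address)
  IPAddr.new(address).to_i
end

ip_to_i("1.2.3.4")  # => 16909060

# The range scan that was taking 230ms per lookup:
# Block.where("start_ip <= :ip AND end_ip >= :ip", ip: ip_to_i(request.remote_ip)).first
```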
Last week, I spent a few days at MicroConf in Las Vegas. There have been a small number of blog posts about how useful this conference is, and Christoph has a very good set of posts summarizing each of the MicroConf speakers.
This was my second year, and I would recommend you all skip it next year to give me a better chance of getting a ticket!
A lot of directly actionable items came out of the conference, and Rob Walling asked at the start that we each make sure to come away with at least 3. This is a great way to ensure your customers get value: ask them to look for it as they go, so they don’t get to the end and go ‘ummmm’.
In a recent project, there was an API call to an external service to retrieve some data. The code was written in the easiest way possible, which is to make the service call when the data is needed. For this application, the data is needed to populate a drop-down in a form, so the user is waiting for this call to complete before they can see the page they requested.
This particular service takes at least 1 second, and usually 1.5 seconds to return the data needed. Every single time the user visits this page. This clearly needs to change.
If you have service calls where the parameters to the call are likely to be re-used, and the results for a given combination of parameters are the same, then you can easily cache the service results and re-use them across requests.
In my project, the service call was performed inside a Rails view helper method during rendering of the content. The easiest way to cache this call is to use the Rails fragment cache view helper to avoid the call:
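In outline, it looks like this - service_options stands in for the real helper method, and the daily cache key is an assumption about how often the data changes:

```erb
<% cache ["service-options", Date.current] do %>
  <%= select_tag :choice, options_for_select(service_options) %>
<% end %>
```

On a cache hit the block is never rendered, so the slow service call is made only once per day per key rather than on every page view.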
Ruby on Rails has the ability to easily save the results of expensive operations to memcache for later use without the overhead of that expensive operation. Some things are easier to cache than others, and today I’m going to talk about one of the easiest things to cache in a Rails application - a single object, typically for a show action.
The Rails documentation lists all the different cache stores that can be used in your application. For any non-trivial application, one that will run on more than a single web server, then you’ll need to use something like memcache to store your data and share it between servers.
The current best way of accessing a memcache store is the Dalli gem. This will be the default store in Rails 4.
Here’s my dalli configuration from production.rb for this blog on Heroku
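With Heroku’s Memcachier add-on (an assumption about this setup - other memcache add-ons use different variable names), the configuration looks something like:

```ruby
# config/environments/production.rb
config.cache_store = :dalli_store, ENV["MEMCACHIER_SERVERS"],
                     { username: ENV["MEMCACHIER_USERNAME"],
                       password: ENV["MEMCACHIER_PASSWORD"] }
```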
Updating the example in my previous post - note that the JSON code path is unchanged, as this format doesn’t use a layout or view, so doesn’t need them as part of the ETag calculation
The gem can be installed by adding ‘gem “etag_for”’ to your Gemfile
The Rails Fragment Cache is the easiest way of making your view template code run fast. A fragment is a piece of generated output, usually HTML, that is a part of the total output, usually the HTML page, that is returned to a user. The documentation on caching with Rails explains the basics, but doesn’t go into any depth, especially for new use cases and techniques that have been developed and expanded over the last year or so.
Here are six techniques and tools that can make your fragment caching easy to manage yet provide the benefits of turbo-charging your view rendering.
HTTP 1.1 added a new header called an entity tag, shortened to ETag. The entity tag allows a client browser to ask a server application whether the content it already has is the currently available version, and allows the server to quickly determine this and respond with either a simple ‘Yes’ or a ‘No, here’s the latest content’ response.
The Wikipedia Article for ETag goes into some of the technical details, but doesn’t really give any hint as to how to use this when building a web site of our own.
A good ETag value is one where it is unlikely that different content produces the same value. Since ETags are specific to each URL of your website, this process doesn’t need to use the highest security mechanisms available to you, but it must be pretty good. It also needs to be fast. There’s no point having a performance feature that is slower than not using it at all.
For this use, the MD5 hash works really well. Although MD5 hashing can be broken, and different content can be crafted to have the same hash value, these deficiencies aren’t important for generating ETags in our application. MD5 digests can be generated very quickly, so they add very little overhead.
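In Ruby, generating an ETag value this way is a one-liner with the standard library:

```ruby
require "digest/md5"

body = "<html><body>Hello, world</body></html>"
etag = Digest::MD5.hexdigest(body)
# etag is a 32-character hex string; any change to body produces a different one.
```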