I’m adding a feature to a production application that’s backed by MongoDB. I’ve rolled the feature out to our production servers in mirror mode so I can watch it perform, but no production users are affected by the new code.
The first thing I notice is that my code to update a specific record is taking 400 to 4000 milliseconds! I need it to run in 10-100ms for this to be a useful feature. What’s going on here?
I check the production MongoDB, and the mongo server is running at 95-100% CPU and is logging a stream of slow queries.
I add more processing units for extra capacity, but this doesn’t help at all! In fact, everything now takes 1000-8000 milliseconds to process. It looks like MongoDB is single-threaded on a simple query. Since everyone is raving about how great MongoDB is, how is my code making it look really, really bad?
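Why doesn’t extra capacity help? A rough sketch of the effect (illustrative only, not MongoDB’s internals): if every query holds one global lock while it runs, adding more workers can’t reduce latency, because the queries still execute one at a time.

```ruby
# Illustrative: queries that serialize on a single global lock gain nothing
# from extra workers. All names here are made up for the sketch.
lock       = Mutex.new
active     = 0
max_active = 0

threads = 4.times.map do
  Thread.new do
    3.times do
      lock.synchronize do
        active += 1
        max_active = [max_active, active].max
        10_000.times { }        # stand-in for a slow collection scan
        active -= 1
      end
    end
  end
end
threads.each(&:join)

puts "max concurrent queries under the lock: #{max_active}"  # => 1
```

Four workers, but at most one query ever runs at a time, so each request still waits its turn behind every slow scan.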
My app is a Rails application using MongoMapper, and a common pattern in my Rails apps is to fetch a single record with find(:first) and a conditions hash. That form returns nil if the record isn’t found. The other form, find(id), throws an exception if the record isn’t found.
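The original snippets didn’t survive, but the two contracts can be sketched with a tiny in-memory stand-in (this is not MongoMapper’s implementation; the class and names are illustrative):

```ruby
# Stand-in for the two lookup styles: first(conditions) returns nil on a miss,
# find(id) raises. Illustrative only -- not MongoMapper's actual code.
class DocumentNotFound < StandardError; end

class TinyCollection
  def initialize(docs)
    @docs = docs  # array of hashes, each with an :_id key
  end

  # Scans every document and returns the first match, or nil.
  def first(conditions)
    @docs.find { |doc| conditions.all? { |k, v| doc[k] == v } }
  end

  # Looks up by id and raises if the document is absent.
  def find(id)
    first(:_id => id) or raise DocumentNotFound, "document #{id} not found"
  end
end

people = TinyCollection.new([{ :_id => 1, :name => "Ada" }])
people.first(:_id => 99)   # => nil
people.find(1)[:name]      # => "Ada"
# people.find(99)          # raises DocumentNotFound
```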
In MySQL, the difference in query speed is essentially zero!
In MongoDB – not so much.
The difference in performance on my data set between the two queries is staggering:
The find(:first) method takes about 200ms, locks the collection for the duration, and eats CPU. The find(id) method takes about 1ms and, even if it locks the collection, returns fast enough not to matter in my case. CPU usage is now reasonable again.
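That gap is what you’d expect from a full collection scan versus a lookup on the _id index. A minimal Ruby simulation of the two access patterns (an array scan standing in for the unindexed query, a hash standing in for the _id index; the sizes and names are made up):

```ruby
require 'benchmark'

# Build a fake "collection" of n documents plus an index on _id.
n     = 200_000
docs  = (1..n).map { |i| { :_id => i, :name => "user#{i}" } }
index = docs.each_with_object({}) { |doc, h| h[doc[:_id]] = doc }

target = n  # worst case: the document we want is last

scan_time = Benchmark.realtime do
  docs.find { |doc| doc[:_id] == target }  # O(n): roughly what find(:first) was doing
end
index_time = Benchmark.realtime do
  index[target]                            # O(1): roughly what find(id) does via _id
end

puts format("scan: %.4fs  index: %.6fs", scan_time, index_time)
```

Both paths return the same document; only the cost differs, and the scan cost grows with the collection while the indexed lookup stays flat.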
A quick update to my code, and now my operations take 6-400ms (there’s some other work in there too, which accounts for the variation in timing).