Recently, we’ve had reports on our apps that users are getting 400 Bad Request errors from our servers. When we checked our log files, there was nothing to see! All looked well. Yet the reports kept coming. What’s going on?
We’re using nginx 0.6.34, and the error pages users are seeing mention this version. These errors are definitely coming from our servers, but there’s still nothing in the server’s error logs.
After a lot of head scratching and diagnosing, it seems that a number of large cookies are being set, and the users who receive these large cookies are the ones having problems. Let’s check the interwebs for some help here. Here’s a relevant post on the Ruby Forum.
It seems that we need to increase the buffer space nginx uses to process request headers. The directive for this is large_client_header_buffers, and its size parameter needs to be big enough to hold the entire Cookie header being sent.
We set this to 8K and tried again. Still getting errors!
Putting a little test app together to play with cookie sizes, I quickly found the documentation for large_client_header_buffers, which says it works at the http or the server level. Well, at the server level it doesn’t work at all, and that’s exactly where we had it! The same setting at the http level works great, though.
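As a minimal sketch, here’s what the working configuration looks like with the directive moved up to the http level (the buffer count and 16k size are illustrative values, not a recommendation for any particular site):

```nginx
http {
    # One buffer must be able to hold the entire Cookie header line,
    # so size this for your largest expected cookies. The default is
    # "4 8k"; 16k here is just an example.
    large_client_header_buffers 4 16k;

    server {
        listen 80;
        server_name example.com;
        # Putting large_client_header_buffers here, inside the server
        # block, is where it silently failed for us on nginx 0.6.34.
    }
}
```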
Now we have a fix!
Put the setting for large_client_header_buffers at the http level and make it big enough to handle large cookies! How big can that be? The cookie specs define a maximum number and size of cookies, right? Let’s go see what that is!
Oh dear. Modern browsers have given up on these size restrictions! You can now expect to have to handle 40 cookies of up to 4K each, or Safari could decide to send you 100 of those. On every request. Nice job, browser makers! Just what I need: a 50K request to fetch my 1K icons or 10K images.
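To get a feel for why our first 8K buffer wasn’t enough, here’s a rough back-of-envelope calculation using the browser limits mentioned above (the 40-cookie and 4K figures are observed browser behaviour, not guarantees from any spec):

```python
# Worst-case size of a single Cookie: header line, assuming the
# hypothetical limits above: 40 cookies per domain, ~4K bytes each.
cookies_per_domain = 40
max_cookie_bytes = 4096

# All cookies for a domain travel in one Cookie header, joined by "; ".
worst_case_header = (cookies_per_domain * max_cookie_bytes
                     + (cookies_per_domain - 1) * len("; "))

print(worst_case_header)  # roughly 160K, far beyond an 8K buffer
```

So the buffer size you choose should reflect the cookies your own app actually sets, not just a round number that sounds big.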
So, watch out for large cookies and configure your nginx correctly before your users get those Bad Requests!