Friday, May 25, 2012

MongoDB pain points

Recently I was contacted by 10gen's account manager soliciting feedback on our use of MongoDB at the company I work for. I wrote a lengthy reply on what we do with MongoDB and the problems we see with it, and never heard back. It seemed a shame to let all that feedback go to waste, so I decided to repost it in my blog with minor edits. So here it comes...

We have been using MongoDB at IPONWEB for quite a long time - around two years already - for a number of high-load projects. Our company specializes in creating products for display advertising, and we mostly use MongoDB to keep track of user data in our adservers. The main reason we use MongoDB is raw performance. We use it mostly as a dumb NoSQL key-value database where we try to keep the data fully cached in RAM. With rare exceptions we don't use any fancy features like complex queries, map-reduce and so on, but rather limit ourselves to queries by primary key. We do use sharding because, as mentioned above, we have to fit the whole database in RAM, so we often have to split a database across multiple servers. Generally we are very sensitive about the cost of an installation, so we are always looking at reducing hardware costs for our databases. Given this background, the following limitations in the MongoDB implementation cause us the most grief:
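To give a flavor of that access pattern, here is a minimal sketch using Python and pymongo (host, database and field names are placeholders, not our actual schema):

    from pymongo import MongoClient

    # Connect through a mongos router; host and names are placeholders.
    client = MongoClient('mongos-host', 27017)
    profiles = client.userdata.profiles

    # Every read is a lookup by _id - no secondary indexes, no complex queries.
    doc = profiles.find_one({'_id': 'some-user-id'})

    # Writes are simple upserts keyed by the same _id.
    profiles.update_one({'_id': 'some-user-id'},
                        {'$set': {'segments': [1, 2, 3]}},
                        upsert=True)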

a) Lack of online database defragmentation in MongoDB. Currently the only way to compact a MongoDB database is to stop it and run compact or repair. On our datasets this process runs for a considerable time. We have to defragment the database fairly often to keep RAM usage under control: a fragmented database can easily be twice the size of a non-fragmented one, which in our case means twice the hardware cost.
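For reference, this is roughly what that offline maintenance looks like from a driver; a sketch with pymongo, with the node already taken out of rotation (host, database and collection names are placeholders):

    from pymongo import MongoClient

    # Connect directly to the mongod that has been taken out of rotation.
    node = MongoClient('shard1-host', 27017)
    db = node.userdata

    # compact rewrites and defragments one collection; the node cannot
    # serve our traffic while this runs, which can take a long time.
    db.command('compact', 'profiles')

    # Alternatively, repair rebuilds the whole database, but needs roughly
    # the size of the database in extra free disk space to do it:
    # db.command('repairDatabase')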

b) Realistically, for our use case, we can do MongoDB resharding only offline. Adserving is extremely sensitive to any latencies, and if we add more shards to an existing cluster we are more or less forced to take the application offline until resharding finishes.
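To make the pain concrete: adding the shard itself is a one-liner, it is the chunk migrations that follow which hurt latency. A sketch with pymongo (host names and replica set names are placeholders):

    from pymongo import MongoClient

    # Talk to a mongos router (placeholder host).
    mongos = MongoClient('mongos-host', 27017)
    admin = mongos.admin

    # Registering the new shard is quick...
    admin.command('addShard', 'rs3/shard3-host:27018')

    # ...but from this point the balancer migrates chunks onto it in the
    # background, and that migration traffic is what ruins our latencies,
    # so in practice we keep the application offline until it settles.
    print(admin.command('listShards'))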

c) Lack of good SSD support. The way MongoDB works now, switching from more RAM with HDD as backing storage to less RAM with SSD as backing storage doesn't seem to be cost effective. Priced per GB, SSD is roughly two times cheaper than RAM, but if we place our data on SSD we have to reserve at least two times more space on the SSD to be able to run repair on the data (because running repair requires roughly twice the space), so the savings largely evaporate. The other reason we considered SSD instead of HDD as backing storage was write performance, which was a limitation in some applications. But in our limited benchmarking we found only a small performance difference, because the single-threaded write lock in MongoDB appears to become the bottleneck rather than the underlying storage.
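The back-of-envelope math behind the cost conclusion, as a sketch (the per-GB prices and data size are illustrative placeholders; only the roughly 2:1 price ratio from above matters):

    # Illustrative numbers, not real quotes; the point is the 2:1 ratio.
    price_ram_per_gb = 10.0   # placeholder
    price_ssd_per_gb = 5.0    # placeholder, ~2x cheaper than RAM
    data_gb = 100.0           # placeholder working-set size

    # Keeping data in RAM: provision the working set once.
    cost_ram = data_gb * price_ram_per_gb

    # Keeping data on SSD: repair needs roughly the same space again,
    # so we have to provision at least 2x the data size.
    cost_ssd = data_gb * 2 * price_ssd_per_gb

    print(cost_ram, cost_ssd)  # the two come out about the same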

d) A minor point: the underlying network protocol could be more efficient with some optimizations. If you send many small queries and get small documents as results, MongoDB creates a separate TCP packet for each request/response. Under high load, especially on virtualized hardware (i.e. EC2), this introduces significant additional overhead. We have our own client driver which tries to pack multiple requests into a single TCP packet, and it makes a noticeable difference in performance on EC2. But this is only a partial solution, because responses from MongoDB, and the communication between mongos and mongod, are still inefficient.
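The idea behind our driver's batching, reduced to a sketch. build_op_query below is a hypothetical helper that serializes one wire-protocol query message; a real driver has that logic internally:

    import socket

    def send_batched(sock, requests):
        # Coalesce several already-serialized wire-protocol messages into
        # one buffer so they leave in as few TCP packets as possible,
        # instead of one packet per tiny request.
        sock.sendall(b''.join(requests))

    # Hypothetical usage; build_op_query is not a real pymongo function.
    # requests = [build_op_query('userdata.profiles', {'_id': uid})
    #             for uid in pending_user_ids]
    # sock = socket.create_connection(('mongod-host', 27017))
    # send_batched(sock, requests)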

e) Another minor point: the BSON format is very wasteful in terms of memory usage. Given that we try to minimize our database sizes to reduce hardware costs, the recent trend in our use of MongoDB is, instead of representing data as a BSON document, to serialize it into a more compact format and store it as one big binary blob (i.e., simplified, our documents look like { _id: '....', data: '... serialized data ...' }).
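A sketch of what that looks like in practice; the field layout and names are made up for the example, any compact fixed-width encoding works:

    import struct
    from bson.binary import Binary
    from pymongo import MongoClient

    def pack_profile(age, segment_ids):
        # Fixed-width binary layout instead of a BSON sub-document:
        # 1 byte age, 1 byte segment count, then 4 bytes per segment id.
        return struct.pack('<BB%dI' % len(segment_ids),
                           age, len(segment_ids), *segment_ids)

    client = MongoClient('mongos-host', 27017)   # placeholder host
    profiles = client.userdata.profiles

    # The document carries only the key plus one opaque blob, so BSON's
    # per-field overhead (field names, type tags) is paid only once.
    profiles.update_one(
        {'_id': 'some-user-id'},
        {'$set': {'data': Binary(pack_profile(33, [17, 42, 108]))}},
        upsert=True)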

By the way, at some point we evaluated switching to CitrusLeaf. This product supposedly addresses some of the above issues (mostly a, b and c), but it seems that the expected savings in hardware costs would be offset by licensing costs, so at least for now we are not going to switch.