In this blog post, we’ll look at some common MongoDB topologies used in database deployments.
The question of the best architecture for MongoDB will arise in conversations between developers and architects. In this blog, we want to go over the main sharded and unsharded designs, with their pros and cons.
We will first look at “Replica Sets.” Replica sets are the most basic form of high availability (HA) in MongoDB, and the building blocks for sharding. From there, we will cover sharding approaches and whether you need to go that route.
From the MongoDB manual:
A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability, and are the basis for all production deployments.
Short of sharding, this is the ideal way to run MongoDB. High availability, failover, and recovery are automated, typically with no action needed. If you expect large growth, or more than 200 GB of data, you should consider combining replica sets with sharding to reduce your mean time to recovery when restoring from backup.
Pros:
- Elections happen automatically and go unnoticed by the application when it is set up with retries
- Rebuilding a node, or adding an additional read-only node, is as easy as rs.add('hostname')
- Secondary members can skip building some indexes to improve write speed
- Can have members:
  - hidden in other geographic locations
  - with delayed replication
  - dedicated to analytics via tagging

Cons:
- Depending on the size of the oplog used, you can use 10-100+% more space to hold the change data for replication
- You must scale up, not out, meaning more expensive hardware
- Recovery using a sharded approach is faster than having it all on a single node (parallelism)
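To make this concrete, here is a minimal sketch (hostnames and replica set name are hypothetical) of how a driver-style connection string for a three-member replica set might be assembled; `replicaSet` and `readPreference` are standard MongoDB URI options, and the driver uses the seed list only for discovery:

```python
def build_replset_uri(hosts, replica_set, read_preference="secondaryPreferred"):
    """Assemble a MongoDB connection URI for a replica set.

    The driver treats the host list as seeds: it discovers the
    current primary and secondaries on its own after connecting.
    """
    host_part = ",".join(hosts)
    return (
        f"mongodb://{host_part}/"
        f"?replicaSet={replica_set}&readPreference={read_preference}"
    )

# Hypothetical three-member replica set:
uri = build_replset_uri(
    ["db1.example.com:27017", "db2.example.com:27017", "db3.example.com:27017"],
    replica_set="rs0",
)
print(uri)
```

Because discovery is automatic, adding a fourth member later needs no connection string change on the application side.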
Flat Mongos (not load balanced)
This is one of MongoDB’s most commonly suggested deployment designs. To understand why, we should talk about the driver and the fact that it supports a comma-separated list of mongos hosts for failover.
You can’t distribute writes in a single replica set. Instead, they all need to go to the primary node. You can distribute reads to the secondaries using Read Preferences. The driver keeps track of what is a primary and what is a secondary and routes queries appropriately.
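The routing the driver performs can be sketched as a toy model (not actual driver code; member names are hypothetical): writes always go to the primary, while reads follow the configured read preference.

```python
# Toy model of driver-side routing in a replica set.
members = {
    "primary": "db1.example.com:27017",
    "secondaries": ["db2.example.com:27017", "db3.example.com:27017"],
}

def route(operation, read_preference="primary"):
    """Return the member an operation would be sent to."""
    if operation == "write":
        return members["primary"]          # writes cannot be distributed
    if read_preference in ("secondary", "secondaryPreferred") and members["secondaries"]:
        return members["secondaries"][0]   # real drivers also weigh latency
    return members["primary"]

print(route("write"))                       # always the primary
print(route("read", "secondaryPreferred"))  # a secondary, when available
```

Real drivers refresh this member map continuously via the replica set's own state, which is how failover stays invisible to the application.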
Conceptually, the driver should have connections bucketed by the mongos they go to. This allows the 3.0+ drivers to be semi-stateless and to ask any connection to a specific mongos to perform a getMore against that mongos. In theory, this allows slightly more concurrency. Realistically, you only use one mongos at a time, since this is only a failover system.
Pros:
- Mongos is on its own gear, so it will not run the application out of memory
- If a mongos doesn’t respond, the driver “fails over” to the next in the list
- Can be put closer to the database or the application, depending on your network and sorting needs

Cons:
- Most drivers cannot spread load evenly across a list of mongos hosts, so the list is only good for failover, not evenness. Check your specific driver’s documentation for support, and test thoroughly
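The failover behavior described above can be sketched as follows (a simplification with a stand-in health check; real drivers track server health continuously rather than probing per request):

```python
def pick_mongos(seed_list, is_up):
    """Return the first reachable mongos from a comma-separated seed list.

    `is_up` stands in for the driver's health check. Note the first
    healthy host always wins, which is why this scheme gives failover
    but not even load distribution.
    """
    for host in seed_list.split(","):
        if is_up(host):
            return host
    raise ConnectionError("no mongos reachable")

# Hypothetical hosts; pretend the first mongos is down:
seed_list = "mongos1.example.com:27017,mongos2.example.com:27017"
chosen = pick_mongos(seed_list, is_up=lambda h: not h.startswith("mongos1"))
print(chosen)  # fails over to mongos2
```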
Load Balanced (preferred if possible)
According to the Mongo docs:
You may also deploy a group of mongos instances and use a proxy/load balancer between the application and the mongos. In these deployments, you must configure the load balancer for client affinity so that every connection from a single client reaches the same mongos.
This is the model used by platforms such as ObjectRocket. In this pattern, you move the mongos nodes to their own tier and put them behind a load balancer. This design lets you even out mongos usage with a least-connection scheme. The challenge, however, is that newer drivers have issues with getMore operations: the driver may issue the getMore over a new, randomly selected connection, and the load balancer cannot tell which mongos holds the cursor. Without client affinity, the request has only a one-in-N chance (N being the number of mongos) of reaching the right one; otherwise you get a “Cursor Not Found” error.
Pros:
- Ability to make even use of the mongos layer
- Mongos are separated from each other and from the applications, preventing memory and CPU contention
- You can easily add or remove mongos to scale the layer without code changes
- High availability at every level (multiple mongos, multiple config servers, replica sets for HA, and even multiple application servers for app failures)

Cons:
- If batching is used, then unless you switch to an IP-pinning algorithm (which loses evenness), you can get “Cursor Not Found” errors when getMore and bulk-operation requests reach the wrong mongos
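Both the least-connection balancing and the getMore pitfall fit in a few lines of pure Python (a toy model; mongos names and connection counts are invented for illustration):

```python
import random

def least_connections(conn_counts):
    """Pick the mongos with the fewest active connections."""
    return min(conn_counts, key=conn_counts.get)

conn_counts = {"mongos1": 12, "mongos2": 7, "mongos3": 9}
print(least_connections(conn_counts))  # mongos2 has the fewest connections

# Without client affinity, a getMore lands on a random mongos, so the
# chance of finding the cursor is only 1/N (here, 1 in 3):
cursor_owner = "mongos1"
target = random.choice(list(conn_counts))
if target != cursor_owner:
    print("Cursor Not Found")
```

Configuring client affinity on the balancer, as the manual quoted above requires, removes the random hop at the cost of slightly less even balancing.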
App-Centric Mongos
By and large, this is one of the most typical deployment designs for MongoDB sharding. In it, each application host talks to a mongos on its local network interface. This ensures there is very little latency between the application and the mongos.
Additionally, this means that if a mongos fails, at most its own host is affected, rather than the wider range of all application hosts.
Pros:
- A local mongos on the loopback interface means low to no latency
- Limited scope of outage if that mongos fails
- Can be geographically farther from the data storage in cases where you have a DR site

Cons:
- Mongos is a memory hog; it can steal memory from your application to run here
- Made worse by large batches, many connections, and sorting
- Mongos is single-threaded and can become a bottleneck for your application
- It is possible for a slow network to cause bad decision-making, including duplicate databases on different shards. The functional result is data intermittently written to two locations, which a DBA must remediate at some point (think MMM VIP ping-pong issues)
- All sorting and limits are applied on the application host. When the sort uses an index this is OK, but if not, the entire result set must be held in memory by mongos and sorted, and only then are the limited number of results returned to the client. This is the typical cause of mongos OOM errors, due to the memory pressure just described
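The indexed versus non-indexed sort difference can be illustrated with a toy model in pure Python (shard result sets are just lists here): with an index, each shard already streams its results in order, so mongos only needs a lazy k-way merge; without one, everything must be buffered before sorting.

```python
import heapq
from itertools import islice

# Indexed case: each shard returns already-sorted data, and mongos
# merge-sorts the streams lazily -- roughly constant memory per cursor.
shard_a = [1, 4, 7]
shard_b = [2, 5, 8]
indexed_result = list(islice(heapq.merge(shard_a, shard_b), 4))  # limit 4

# Non-indexed case: mongos must hold every document in memory, sort,
# and only then apply the limit -- this is what drives mongos OOMs.
unsorted_a, unsorted_b = [7, 1, 4], [8, 2, 5]
non_indexed_result = sorted(unsorted_a + unsorted_b)[:4]

print(indexed_result)      # [1, 2, 4, 5]
print(non_indexed_result)  # [1, 2, 4, 5]
```

Both paths return the same answer; the difference is that the indexed path never materializes more than the limit plus one document per shard.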
The topologies above cover many of the deployment needs for MongoDB environments. I hope this helps; please leave any questions in the comments below.