Friday, March 25, 2016


Sharding and Replica Sets Illustrated

This post assumes you know what replica sets and sharding are.
Step 1: Don’t use sharding
Seriously. Almost no one needs it. If you were at the point where you needed to partition your MySQL database, you’ve probably got a long ways to go before you’ll need to partition MongoDB (we scoff at billions of rows).
Run MongoDB as a replica set. When you really need the extra capacity then, and only then, start sharding. Why?
  1. You have to choose a shard key. If you know the characteristics of your system before you choose a shard key, you can save yourself a world of pain.
  2. Sharding adds complexity: you have to keep track of more machines and processes.
  3. Premature optimization is the root of all evil. If your application isn’t running fast, is it CPU-bound or network-bound? Do you have too many indexes? Too few? Are they being hit by your queries? Check (at least) all of these causes first; the sketch below shows one way to start.
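For example, here’s one way to check index usage from the mongo shell (the collection and field names below are made up for illustration):

    // See how a query would be executed: an index scan means an
    // index is being used, a collection scan means it is not.
    db.users.find({email: "joe@example.com"}).explain()

    // List the indexes that exist on the collection.
    db.users.getIndexes()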
Using Sharding
A shard is defined as one or more servers with one master. Thus, a shard could be a single mongod (bad idea), a master-slave setup (better idea), or a replica set (best idea).
Let’s say we have three shards and each one is a replica set. For three shards, you’ll want a minimum of 3 servers (the general rule is: a minimum of N servers for N shards). We’ll do the bare minimum on replica sets, too: a master, a slave, and an arbiter for each set.
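As a rough sketch, here’s how one of these sets might be initialized from the mongo shell (the set name “teal” and the host names are placeholders):

    // Run on the mongod that should become the master: initialize
    // the "teal" set with two data-bearing members and an arbiter
    // (the arbiter votes in elections but holds no data).
    rs.initiate({
        _id: "teal",
        members: [
            {_id: 0, host: "server1:27017"},
            {_id: 1, host: "server2:27017"},
            {_id: 2, host: "server3:27017", arbiterOnly: true}
        ]
    })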
Mugs are MongoDB processes. So, we have three replica sets:
"teal", "green", and "blue" replica sets
“M” stands for “master,” “S” stands for “slave,” and “A” stands for “arbiter.” We also have config servers:
[Image: a config server]
…and mongos processes:
[Image: a mongos process]
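For reference, here’s roughly how each kind of process gets started (data paths and ports are placeholders, and the --configdb list names all three config servers, in the style current when this was written):

    # A shard member: a normal mongod that belongs to a replica set.
    mongod --shardsvr --replSet teal --dbpath /data/teal --port 27017

    # A config server, which holds the cluster's metadata.
    mongod --configsvr --dbpath /data/config --port 27019

    # A mongos router, pointed at the config servers.
    mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019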
Now, let’s stick these processes on servers (serving trays are servers). Each master has a lot to do, so let’s give each master its own server.

Now we can put a slave and arbiter on each box, too.

Note how we mix things up: no replica set is housed on a single server, so that if a server goes down, the set can fail over to a different server and be fine.
Now we can add the three config servers and two mongos processes. mongos processes are usually put on the app server, but they’re pretty lightweight, so we’ll stick a couple on here.

A bit crowded, but possible!
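Once everything is running, you’d connect a mongo shell to one of the mongos processes and tell it about the shards. A minimal sketch, reusing the placeholder set and host names from above:

    // Add each replica set as a shard, named "setName/seedHost".
    sh.addShard("teal/server1:27017")
    sh.addShard("green/server2:27017")
    sh.addShard("blue/server3:27017")

    // Then shard something: enable sharding on a database and pick
    // a shard key for a collection (names here are hypothetical).
    sh.enableSharding("mydb")
    sh.shardCollection("mydb.users", {user_id: 1})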
In case of emergency…
Let’s say we drop a tray. CRASH! With this setup, your data is safe (as long as you were using w, the write-concern parameter that makes a write wait until it has replicated) and the cluster loses no functionality (in terms of reads and writes).
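As a sketch, a write that waits for two members looks like this in the mongo shell (the collection is made up; older shells expressed the same thing through getLastError):

    // Don't acknowledge the write until it has replicated to at
    // least two members of the set: the master plus one slave.
    db.orders.insert({sku: "abc", qty: 2}, {writeConcern: {w: 2}})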
Chunks will not be able to split or migrate (the cluster metadata can’t change while a config server is down), so a shard may become bloated if a config server is down for a very long time.
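You can keep an eye on the chunk distribution from any mongo shell connected to a mongos:

    // Prints the shards, the databases, and how many chunks live
    // on each shard, so a lopsided distribution is easy to spot.
    sh.status()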
Network partitions and losing two servers are bigger problems, so you should have more than three servers if you actually want great availability.
Let’s start and configure all 14 processes at once!
Or not. I was going to go through the commands to set this whole thing up but… they’re really long and finicky and I’ve already covered them in other posts. So, if you’re interested, check out my posts on setting up replica sets and sharding.
Combining the two is left as an exercise for the reader.
