How to upgrade your MongoDB deployment to version 3.0

It is time to upgrade our MongoDB installation, but do we know how to do it? In this post we will explain how to do it for every kind of deployment: a standalone node, a Replica Set or a Sharded Cluster. We will cover upgrading to version 3.0.

Release and Upgrade Notes

First of all, we must always read the Release Notes and the Upgrade Notes, especially between major releases. These are the official documentation URLs:

Binary downloads

Regardless of what kind of installation we have, we will always need the binary files of the new version. We can download them from this URL:

Downloads – MongoDB

We will obtain a compressed archive that we will need to decompress in order to access the binaries.
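For example, on Linux the archive could be extracted like this (the file name below is illustrative; it depends on the exact version and platform downloaded):

```shell
# Extract the downloaded archive (file name is illustrative)
tar -xzf mongodb-linux-x86_64-3.0.0.tgz

# The binaries (mongod, mongos, mongo, ...) are in the bin/ subdirectory
ls mongodb-linux-x86_64-3.0.0/bin
```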

As always, we will choose the most appropriate version for our platform: the operating system (Linux, Windows, Mac OS X, Solaris) and the architecture (64-bit or 32-bit).


It is necessary to be running version 2.6 in order to upgrade to 3.0.

Once 3.0 is installed, we will not be able to downgrade to any version earlier than 2.6.5.

Package upgrades

If we installed MongoDB from the apt, yum or zypper repositories, we should upgrade using the package manager. We can read the instructions at this URL: installation instructions.
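As a sketch, on a Debian/Ubuntu system that already has MongoDB's official repository configured, the upgrade would look like this (mongodb-org is the meta-package used by the official packages; use the yum or zypper equivalents on other distributions):

```shell
# Refresh the package lists and upgrade the MongoDB packages
sudo apt-get update
sudo apt-get install mongodb-org
```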

Upgrade of a standalone node installation

These are the steps we must follow:

  1. Shut down the MongoDB instance.
  2. Replace the existing binary with the 3.0 mongod binary.
  3. Restart mongod.

The exact commands depend on how the instance was installed and how mongod is started in our environment.
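The three steps above can be sketched as follows, assuming a Linux tarball installation with the data in /data/db and a configuration file at /etc/mongod.conf (both paths are illustrative):

```shell
# 1. Shut down the running instance cleanly
mongod --shutdown --dbpath /data/db
# (alternatively, from the mongo shell: db.getSiblingDB("admin").shutdownServer())

# 2. Replace the existing binary with the 3.0 mongod binary
sudo cp mongodb-linux-x86_64-3.0.0/bin/mongod /usr/local/bin/mongod

# 3. Restart mongod with the same options it had
mongod --config /etc/mongod.conf
```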

Upgrading a Replica Set


When carrying out maintenance work (such as a version upgrade) on the nodes of a Replica Set, we have two goals:

  1. Run no risk of losing data.
  2. Keep the service available at all times.


  1. The Replica Set must have a minimum of three nodes, with two full secondaries, so that we run no risk and keep two copies of the data at all times. This would not be possible with, for example, one primary, one secondary and an arbiter, because while upgrading we would have only one copy of the data (the primary’s).
  2. We do not want to lose any operation while we are upgrading, so the oplog window must be large enough. The oplog is a capped collection in which MongoDB records all the write activity on our data. We will use this log to catch up the node that has just been upgraded once it rejoins the Replica Set. We will not lose any operation as long as the time needed to upgrade a node is shorter than the time span the oplog can hold. We can check the size of our oplog window from the mongo shell.
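From a mongo shell connected to a Replica Set member, db.printReplicationInfo() reports the configured oplog size and the “log length start to end”, which is our oplog window:

```shell
# Print the oplog size and the time window it covers
mongo --eval "db.printReplicationInfo()"
```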

Steps to follow with secondary nodes

Logically, in order to keep two copies of our data at all times, the Replica Set’s secondaries must be upgraded one by one.

  1. Shut down the mongod instance. The Replica Set keeps working with the primary and one secondary (both of them will keep pinging the downed node to check its state).
  2. Replace the 2.6 binary with the 3.0 binary.
  3. Start the instance with the same options it had.
  4. Wait for the upgraded node to catch up before moving on to the next secondary. By looking at the optimeDate value returned by the rs.status() command we can tell when replication has caught up (it must be equal for all the Replica Set members).
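A quick way to compare the optimeDate of every member is to print it for each entry returned by rs.status(); when all the values match, replication has caught up:

```shell
# Print state and optimeDate for every Replica Set member
mongo --eval "rs.status().members.forEach(function (m) {
    print(m.name + '  ' + m.stateStr + '  ' + m.optimeDate);
})"
```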

Steps to follow with the primary node

  1. We close the existing driver connections, convert our primary into a secondary and force an election. We achieve all of this with the rs.stepDown() command (it steps down the primary and forces the set to fail over). The drivers will automatically establish connections with the new primary node with little time penalty. Remember that a Replica Set failover is not instantaneous and, until it completes, writes are not accepted.
  2. Now, all we need to do is apply the four steps we used for the secondaries.
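Step 1 can be run from a mongo shell connected to the primary (the 120-second wait is an illustrative value; the shell may report a dropped connection when the primary steps down, which is expected):

```shell
# Step down the primary and force an election
mongo --eval "rs.stepDown(120)"
```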

Upgrading a Sharded Cluster

All members of a cluster must be running version 2.6 in order to upgrade it to version 3.0.

A Sharded Cluster is made up of Replica Sets but, in addition, it has:

  • Config Servers (they keep the database that tells us in which shard our data is stored)
  • mongos processes (they route client requests to the appropriate shard, based on the config servers’ metadata)


  • Please make sure you have a backup of the ‘config’ database before upgrading the cluster.


  1. While we are upgrading the cluster, we must be sure that no client is updating the metadata (the config database).
  2. First we will upgrade the cluster’s metadata, then the mongos processes and, finally, the mongod processes.

Steps to follow

  1. Disable the balancer (if there are migrations in progress, MongoDB will wait until they have finished).
  2. Upgrade the cluster’s metadata.
    1. Upgrade one mongos to version 3.0.
    2. Start this mongos with the same options it had plus the new --upgrade option (the process will exit when the upgrade is over). MongoDB will perform no splits or chunk moves while this is in progress. mongos will log a message confirming that the metadata upgrade finished successfully.
    3. Upgrade the remaining mongos instances to version 3.0.
    4. Restart all mongos instances without the --upgrade option.
    5. Upgrade the three config servers, one by one, as if they were standalone mongod nodes (shut down, upgrade, start). Remember that in a production environment it is recommended to have three config servers. They keep, in a dedicated database, key information about the data contained in each shard; therefore, making a backup before upgrading them is highly recommended. All of them store the same information, so you only need to stop one of them and take the backup from it. This is possible because when one of them is down, the metadata automatically becomes read-only.
    6. Upgrade all shard secondary nodes. The secondaries of a shard must be upgraded one by one, but secondaries belonging to different shards can be upgraded at the same time.
    7. Upgrade, one by one, all shard primary nodes. It is not recommended to upgrade them all at once, because the mongos processes would route all writes to the new primaries and the system would be busy establishing new connections. Remember that a stepDown on a primary destroys all existing connections and new ones must be established.
    8. Re-enable the balancer.
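The balancer and metadata steps can be sketched like this (host names and ports are illustrative; the balancer commands are run through a mongos):

```shell
# Step 1: disable the balancer and verify its state
mongo --eval "sh.stopBalancer(); print(sh.getBalancerState())"

# Step 2.2: run one upgraded 3.0 mongos with --upgrade to update the metadata
mongos --configdb cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019 --upgrade

# Step 2.8: once everything is upgraded, re-enable the balancer
mongo --eval "sh.setBalancerState(true)"
```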

If we try to upgrade the ‘config’ database before stopping the balancer, MongoDB will return an error.

I disclaim all responsibility for any problems that may arise when upgrading your MongoDB deployments. This article is written for informational and educational purposes only. Naturally, you must always read the official MongoDB documentation.
