From e3c135a0f0fe81c75fd1906ff81b0d9fe2645f27 Mon Sep 17 00:00:00 2001
From: Bob Grabar
Date: Wed, 26 Jun 2013 14:17:10 -0400
Subject: [PATCH] BEFORE

file: build/master/json/core/.json
source: source/core/replica-set-architectures.txt
stats:
  coleman-liau: 14.255014005602241
  flesch-ease: 29.54934629665823
  flesch-level: 13.262844765813306
  foggy:
    count: 81
    factor: 0.1134453781512605
    threshold: 3
  sentence-count: 43
  sentence-len-avg: 16
  smog-index: 10.969770756190233
  word-count: 714

AFTER

file: build/master/json/core/replica-set-architectures.json
source: source/core/replica-set-architectures.txt
stats:
  coleman-liau: 12.615140186915884
  flesch-ease: 40.02162305295951
  flesch-level: 11.221352024922119
  foggy:
    count: 52
    factor: 0.08099688473520249
    threshold: 3
  sentence-count: 45
  sentence-len-avg: 14
  smog-index: 9.27011772238663
  word-count: 642
---
 source/core/replica-set-architectures.txt | 156 ++++++++++------------
 1 file changed, 73 insertions(+), 83 deletions(-)

diff --git a/source/core/replica-set-architectures.txt b/source/core/replica-set-architectures.txt
index d2e3e6c5fdf..0df5dd24e69 100644
--- a/source/core/replica-set-architectures.txt
+++ b/source/core/replica-set-architectures.txt
@@ -7,58 +7,40 @@ Replica Set Deployment Architectures
 
 .. default-domain:: mongodb
 
-The architecture and design of the :term:`replica set` deployment can
-have a great impact on the set's capacity and capability. This section
-provides an overview of the architectural possibilities for
-replica set deployments. However, for most production deployments, a
-conventional 3-member replica set with
-:data:`~local.system.replset.members[n].priority` values of ``1`` is
-sufficient.
+The architecture of a :term:`replica set <replica set>` affects the
+set's operations. This section provides strategies for replica-set
+deployments and describes common architectures.
 
-It always makes sense to let the application requirements dictate the
-architecture of the MongoDB deployment. Avoid adding unnecessary
-complexity to your deployment.
+The standard deployment for a production system is a three-member
+replica set in which any member can become :term:`primary`. When
+deploying a replica set, let your application requirements dictate the
+architecture you choose. Avoid unnecessary complexity.
 
-Plan a Replica Set Deployment
------------------------------
+Determine the Number of Members
+-------------------------------
 
-When developing an architecture for your replica set, consider the
-following factors:
+Add members to a replica set according to these strategies.
 
 Run an Odd Number of Members to Ensure Successful Elections
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Ensure that the members of the replica set will always be able to elect
-a :term:`primary`. Run an odd number of members or run an
-:term:`arbiter` on one of your application servers if you have an even
-number of members.
-
-Distribute the Replica Set Geographically
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Consider keeping one or two members of the set in an off-site data
-center, but make sure to configure the member
-:data:`~local.system.replset.members[n].priority` to prevent it from
-becoming primary.
-
-Ensure One Location in a Geographically Distributed System has a Quorum
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-With geographically distributed members, know where the "quorum" of
-members will be in the case of any network partitions. Attempt to ensure
-that the set can elect a primary among the members in the primary data
-center.
+An odd number of members ensures that the replica set is always able to
+elect a primary. If you have an even number of members, you can create
+an odd number without increasing storage needs by running an
+:term:`arbiter` on an application server.
 
 .. _replica-set-architectures-consider-fault-tolerance:
 
-Consider Fault Tolerance
-~~~~~~~~~~~~~~~~~~~~~~~~
+Use Fault Tolerance to Help Decide How Many Members
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The "fault tolerance" level is the number of members that can be offline
+without blocking the set's ability to elect a primary. Fault tolerance
+is a function of replica-set size, as shown in the following table.
 
-When determining how many members to deploy in a replica set, consider
-the relationship between the size of a replica set and fault
-tolerance, or the number of set members that can become unavailable
-without affecting the availability or the ability of the set to elect
-a :term:`primary`. The following table illustrates this relationship:
+Adding a member to the replica set does not *always* increase the fault
+tolerance. In such cases, however, having an additional member can
+provide support for dedicated functions, such as backups or reporting.
 
 .. list-table::
    :header-rows: 1
 
@@ -66,7 +48,7 @@ a :term:`primary`. The following table illustrates this relationship:
 
    * - Number of Members
 
-     - Majority Required to Elect New Primary
+     - Majority Required to Elect a New Primary
 
      - Fault Tolerance
 
@@ -94,54 +76,64 @@ a :term:`primary`. The following table illustrates this relationship:
 
      - 2
 
-Adding a member to the replica set does not *always* increase the level
-of tolerance for service interruptions. However, although the fault
-tolerance may not always increase, having additional members provide
-support for dedicated functionality, such as dedicated backups and
-reporting.
-
-Run Hidden and Delayed Members for Dedicated Functions
+Add Hidden and Delayed Members for Dedicated Functions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Consider including a :ref:`hidden <replica-set-hidden-members>` or
-:ref:`delayed member <replica-set-delayed-members>` in your replica set
-to support dedicated functionality, like backups, reporting, and
-testing.
-
-Use Tags to Ensure Write Operations Propagate Efficiently
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Add :ref:`hidden <replica-set-hidden-members>` or :ref:`delayed
+<replica-set-delayed-members>` members to support dedicated functions,
+such as backup, reporting, or testing.
 
-Create custom write concerns with :ref:`replica set tags
-<replica-set-configuration-tag-sets>` to ensure that applications can
-control the threshold for a successful write operation. Use these write
-concerns to ensure that operations propagate to specific data centers or
-to machines of different functions before returning successfully.
+Add Members to Load Balance on Read-Heavy Deployments
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Add Additional Members to Load Balance on Read-Heavy Deployments
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In a deployment with high read traffic, you can improve read throughput
+by distributing reads to secondary members. As your deployment grows,
+add or move members to secondary data centers to improve redundancy and
+availability.
 
-For those deployments that rely heavily on distributing reads to
-secondary members, add additional members as the load increases. As
-your deployment grows, consider adding or moving replica set members to
-secondary data centers or to geographically distinct locations for
-additional redundancy and availability. While many architectures are
-possible, always ensure that the quorum of members required to elect a
-primary remains in your main facility.
+Always ensure that your main facility contains the quorum of members
+needed to elect a primary.
 
 Add New Members Ahead of Demand
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The process of establishing a new replica set member can be resource
-intensive on existing members. As a result, deploy new members to
-existing replica sets significantly before the current demand saturates
-the existing members.
+Add new members to existing replica sets well before the current demand
+saturates the existing members.
+
+Determine the Distribution of Members
+-------------------------------------
+
+Distribute members in a replica set according to these strategies.
+
+Geographically Distribute Members to Provide Data Recovery
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To provide data recovery if your data center fails, keep at least one
+member in an off-site data center. Set the member's
+:data:`~local.system.replset.members[n].priority` to 0 to prevent it
+from becoming primary.
+
+Keep a Majority of Members in One Location to Ensure Successful Elections
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When a replica set is distributed over different locations, network
+partitions can prevent members in one data center from seeing those in
+another. In an election, members must see each other to form a
+majority. To ensure that the members can form a majority and elect a
+primary, keep a majority of the set's members in one location.
+
+Use Tags to Ensure Write Operations Propagate Efficiently
+---------------------------------------------------------
+
+Use :ref:`replica set tags <replica-set-configuration-tag-sets>` to
+ensure that operations propagate to specific data centers or to machines
+with specific functions.
 
 Use Journaling to Protect Against Power Failures
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------------------------------
 
-Journaling is particularly useful for protection against power failures,
-especially if your replica set resides in a single data center or power
-circuit.
+Enable journaling as protection against power failures, especially if
+your replica set resides in a single data center or power circuit.
 
 64-bit versions of MongoDB after version 2.0 have journaling enabled by
 default.
 
@@ -149,11 +141,9 @@ default.
 
 Architectures
 -------------
 
-There is no single ideal :term:`replica set` architecture for every
-deployment or environment. Indeed the flexibility of replica sets might
-be their greatest strength. The following deployment patterns are
-necessarily not mutually exclusive, and you can combine features of
-each architecture in your own deployment.
+The following are common deployment patterns for replica sets. These are
+neither mutually exclusive nor exhaustive. You can combine features of
+each architecture in your own deployment.
 
 .. include:: /includes/dfn-list-replica-set-architectures.rst
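
The tag-based write concern that the rewritten "Use Tags" section describes can
be sketched in the ``mongo`` shell. The following example is illustrative only:
the member indexes, the ``dc`` tag name, its ``east``/``west`` values, and the
``multiDC`` mode name are assumptions, not part of the patch above.

.. code-block:: javascript

   // Fetch the current replica set configuration from the primary.
   cfg = rs.conf()

   // Tag each member with its data center. Member indexes and the
   // "dc" tag name are hypothetical.
   cfg.members[0].tags = { "dc": "east" }
   cfg.members[1].tags = { "dc": "east" }
   cfg.members[2].tags = { "dc": "west" }

   // Keep the off-site member from becoming primary, per the
   // "Geographically Distribute Members" strategy above.
   cfg.members[2].priority = 0

   // Define a custom write concern mode, "multiDC": a write is
   // acknowledged only after it reaches members holding two
   // distinct values of the "dc" tag.
   cfg.settings = cfg.settings || {}   // settings may be absent
   cfg.settings.getLastErrorModes = { "multiDC": { "dc": 2 } }

   // Apply the new configuration; this may trigger an election.
   rs.reconfig(cfg)

An application can then name the mode in its write concern, for example
``db.runCommand( { getLastError: 1, w: "multiDC" } )`` in the 2.4-era shell, so
that a write does not return successfully until it has propagated to both data
centers.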