Deploying Across Multiple Zones
Cloud providers generally allow their virtual networks to span multiple zones within a given region. For some deployments, you may want to take advantage of this by deploying different nodes in different zones. For example, if you deploy a mirrored sharded cluster in which each data node includes a failover pair and a DR async (see Mirrored Configuration Requirements), you can accomplish the cloud equivalent of putting physical DR asyncs in remote data centers by deploying the failover pair and the DR async in two different zones.

To specify multiple zones when deploying on AWS, GCP, Azure, or Tencent, populate the Zone field in the defaults file with a comma-separated list of zones. Here is an example for AWS:

{
    "Provider": "AWS",
    ...
    "Region": "us-west-1",
    "Zone": "us-west-1b,us-west-1c"
}

For GCP:


{
    "Provider": "GCP",
    ...
    "Region": "us-east1",
    "Zone": "us-east1-b,us-east1-c"
}

For Azure:

{
    "Provider": "Azure",
    ...
    "Region": "Central US",
    "Zone": "1,2"
}

For Tencent:

{
    "Provider": "Tencent",
    ...
    "Region": "na-siliconvalley",
    "Zone": "na-siliconvalley-1,na-siliconvalley-2"
}

The specified zones are assigned to nodes in round-robin fashion. For example, if you use the AWS example above and provision four nonmirrored DATA nodes, the first and third are provisioned in us-west-1b and the second and fourth in us-west-1c.
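ICM performs this assignment internally, but the round-robin rule can be sketched in a few lines of Python (illustrative only, not ICM code; node names such as "DATA-1" are hypothetical):

```python
# Round-robin assignment of a comma-separated Zone list to nodes
# (illustrative sketch; node names are hypothetical, not ICM's).
zones = "us-west-1b,us-west-1c".split(",")

# Four nonmirrored DATA nodes, numbered 1 through 4.
assignment = {f"DATA-{n}": zones[(n - 1) % len(zones)] for n in range(1, 5)}

print(assignment)
# {'DATA-1': 'us-west-1b', 'DATA-2': 'us-west-1c',
#  'DATA-3': 'us-west-1b', 'DATA-4': 'us-west-1c'}
```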

For mirrored configurations, however, round-robin distribution may lead to undesirable results. For example, the preceding Zone specifications would place the primary and backup members of mirrored DATA, DM, or DS nodes in different zones, which might not be appropriate for your application due to the higher network latency between the members (see Network Latency Considerations in the High Availability Guide). To choose which nodes go in which zones, add the ZoneMap field to a node definition in the definitions.json file; each entry in its comma-separated value is a zero-based index into the Zone field, specifying a single zone for one node or a placement pattern for multiple nodes. This is shown in the following specifications for a distributed cache cluster with a mirrored data server:

defaults.json

"Mirror": "True",
"Region": "us-west-1",
"Zone": "us-west-1a,us-west-1b,us-west-1c"

definitions.json

"Role": "DM",
"Count": "4",
"MirrorMap": "primary,backup,async,async",
"ZoneMap": "0,0,1,2",
...
"Role": "AM",
"Count": "3",
"MirrorMap": "primary,backup,async,async",
"ZoneMap": "0,1,2",
...
"Role": "AR",
...

This places the primary and backup mirror members in us-west-1a and one application server in each zone, while the asyncs are in different zones from the failover pair to maximize their availability if needed: the first in us-west-1b and the second in us-west-1c. The arbiter node does not need a ZoneMap field to be placed in us-west-1a with the failover pair; round-robin distribution takes care of that.
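The ZoneMap mechanics for the DM definition above can be checked with a short sketch (illustrative Python, not ICM code): each ZoneMap entry is an index into the Zone list, paired member-by-member with the MirrorMap.

```python
# Resolve the DM definition's ZoneMap against the Zone list in
# defaults.json (illustrative sketch, not ICM code).
zones = "us-west-1a,us-west-1b,us-west-1c".split(",")
mirror_map = "primary,backup,async,async".split(",")
zone_map = [int(i) for i in "0,0,1,2".split(",")]

placement = [(member, zones[z]) for member, z in zip(mirror_map, zone_map)]
print(placement)
# [('primary', 'us-west-1a'), ('backup', 'us-west-1a'),
#  ('async', 'us-west-1b'), ('async', 'us-west-1c')]
```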

You could also use this approach with a mirrored sharded cluster in which each data node mirror contains a failover pair and a DR async, as follows:

defaults.json

"Mirror": "True",
"Region": "us-west-1",
"Zone": "us-west-1a,us-west-1b,us-west-1c"

definitions.json

"Role": "DATA",
"Count": "12",
"MirrorMap": "primary,backup,async",
"ZoneMap": "0,0,1",
...
"Role": "COMPUTE",
"Count": "8",
"ZoneMap": "0",
...
"Role": "AR",
"ZoneMap": "2",
...

This would place the failover pair of each of the four data node mirrors and the eight compute nodes in us-west-1a, the DR async of each data node mirror in us-west-1b, and the arbiter in us-west-1c.
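Assuming (consistently with the node counts above) that Count 12 with a three-member MirrorMap yields four mirrors whose members cycle through the MirrorMap and ZoneMap in step, the DATA placement can be sketched as follows (illustrative Python, not ICM code; the mirror labels are hypothetical):

```python
# Sketch of DATA node placement for the sharded cluster example
# (illustrative only; "mirror-N" labels are hypothetical, not ICM's).
zones = "us-west-1a,us-west-1b,us-west-1c".split(",")
mirror_map = "primary,backup,async".split(",")
zone_map = [int(i) for i in "0,0,1".split(",")]

placement = []
for n in range(12):                      # twelve DATA nodes
    mirror = n // len(mirror_map) + 1    # four mirrors of three members
    member = mirror_map[n % len(mirror_map)]
    placement.append((f"mirror-{mirror}", member, zones[zone_map[n % len(zone_map)]]))

# Each mirror's failover pair lands in us-west-1a, its DR async in us-west-1b.
print(placement[:3])
```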