
ICM Cluster Topology and Mirroring

ICM validates the node definitions in the definitions file to ensure they meet certain requirements; there are additional rules for mirrored configurations. Bear in mind that this validation does not prevent configurations that are functionally suboptimal, for example a single AM node, a single WS node, five DATA nodes with just one COMPUTE node (or vice versa), and so on.

In both nonmirrored and mirrored configurations:

  • In a sharded cluster, COMPUTE nodes are assigned to DATA nodes (and QS nodes to DS nodes) in round-robin fashion.

  • If both AM and WS nodes are included, AM nodes are bound to the DM node and WS nodes to the AM nodes; if only AM nodes or only WS nodes are included, they are all bound to the DM node.

This section contains the following subsections:

  • Rules for Mirroring

  • Nonmirrored Configuration Requirements

  • Mirrored Configuration Requirements

Rules for Mirroring

All data nodes in a sharded cluster must be mirrored, or all must be unmirrored. This requirement is reflected in the following ICM topology validation rules.

When the Mirror field is set to false in the defaults file (the default), mirroring is never configured, and provisioning fails if more than one DM node is specified in the definitions file.

When the Mirror field is set to true, mirroring is configured where possible, and the mirror roles of the DATA, DS, or DM nodes (primary, backup, or DR async) are determined by the value of the MirrorMap field (see General Parameters) in the node definition, as follows:

  • If MirrorMap is not included in the relevant node definition, the nodes are configured as mirror failover pairs using the default MirrorMap value, primary,backup (see the sample entry following this list):

    • If an even number of DATA or DS nodes is defined, they are all configured as failover pairs; for example, specifying six DATA nodes deploys three data node mirrors containing failover pairs and no DR asyncs. If an odd number of DATA or DS nodes is defined, provisioning fails.

    • If two DM nodes are defined, they are configured as a failover pair; if any other number is defined, provisioning fails.

  • If MirrorMap is included in the node definition, the nodes are configured according to its value, as follows:

    • The number of DATA or DS nodes must be a multiple of the number of roles specified in the MirrorMap value, or fewer than that number (but at least two). For example, suppose the MirrorMap value is primary,backup,async, as shown:

      "Role": "DATA",
      "Count": "",
      "MirrorMap": "primary,backup,async"
      
      

      In this case, DATA or DS nodes would be configured as follows:

      Value of Count                                Result
      3, or a multiple of 3                         One or more mirrors, each containing a failover pair and a DR async
      2                                             A single mirror containing a failover pair
      1, or 4 or more but not a multiple of 3       Provisioning fails
    • The number of DM nodes must equal the number of roles specified in the MirrorMap value, or be fewer (but at least two); if a single DM node is specified, provisioning fails.

  • If more than one AR (arbiter) node is specified, provisioning fails. (While a best practice, use of an arbiter is optional, so an AR node need not be included in a mirrored configuration.)
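
For example, a minimal definitions file entry like the following sketch (provider-specific fields omitted, values illustrative) deploys three data node mirrors, each a failover pair, because no MirrorMap is specified and the default primary,backup applies:

    "Role": "DATA",
    "Count": "6"

With "Count": "5" instead, provisioning would fail, because an odd number of DATA nodes cannot all be configured as failover pairs.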

All asyncs deployed by ICM are DR asyncs; reporting asyncs are not supported. Up to 14 asyncs can be included in a mirror. For information on mirror members and possible configurations, see Mirror Components in the High Availability Guide.

There is no relationship between the order in which DATA, DS, or DM nodes are provisioned or configured and their roles in a mirror. Following provisioning, you can determine which member of each pair is the intended primary failover member and which is the backup using the icm inventory command. To see the mirror member status of each node in a deployed configuration when mirroring is enabled, use the icm ps command.

Nonmirrored Configuration Requirements

A nonmirrored cluster consists of the following (a sample definitions file sketch appears after the list):

  • One or more DATA (data nodes in a sharded cluster).

  • If DATA nodes are included, zero or more COMPUTE (compute nodes in a sharded cluster); best practice is at least as many COMPUTE nodes as DATA nodes, with the same number of COMPUTE nodes for each DATA node.

  • If no DATA nodes are included:

    • Exactly one DM (distributed cache cluster data server, standalone InterSystems IRIS instance, shard master data server in namespace-level sharded cluster).

    • Zero or more AM (distributed cache cluster application servers).

    • Zero or more DS (shard data servers in namespace-level sharded cluster).

    • Zero or more QS (shard query servers in namespace-level sharded cluster; cannot be deployed without corresponding DS nodes).

  • Zero or more WS (web servers).

  • Zero or more LB (load balancers).

  • Zero or more VM (virtual machine nodes).

  • Zero or more CN (container nodes).

  • Zero or one BH (bastion host).

  • Zero AR (arbiter node is for mirrored configurations only).
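
For example, a basic nonmirrored sharded cluster with three data nodes and three compute nodes could be described by a definitions file sketch like the following (the definitions file is a JSON array of node definitions; provider-specific fields such as instance type are omitted, and values are illustrative):

    [
        {
            "Role": "DATA",
            "Count": "3"
        },
        {
            "Role": "COMPUTE",
            "Count": "3"
        }
    ]

This satisfies the best practice of at least one COMPUTE node per DATA node; ICM assigns the COMPUTE nodes to the DATA nodes in round-robin fashion.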

The relationships between some of these node types are pictured in the following examples.

ICM Nonmirrored Topologies

Mirrored Configuration Requirements

A mirrored cluster consists of the following (a sample definitions file sketch appears after the list):

  • If DATA nodes (data nodes in a node-level sharded cluster) are included:

    • A number of DATA nodes consistent with the MirrorMap value, default or explicit, as described in Rules for Mirroring.

    • Zero or more COMPUTE (compute nodes in a node-level sharded cluster); best practice is at least one COMPUTE node per DATA node mirror, with the same number of COMPUTE nodes for each mirror.

  • If no DATA nodes are included:

    • Two DM nodes (as a mirrored data server in a distributed cache cluster, a mirrored standalone InterSystems IRIS instance, or a mirrored shard master data server in a namespace-level sharded cluster), or more than two if DR asyncs are specified by the MirrorMap field, as described in Rules for Mirroring.

    • If a namespace-level sharded cluster:

      • A number of DS (shard data servers) consistent with the MirrorMap value, default or explicit, as described in Rules for Mirroring.

      • Zero or more QS (shard query servers); the best practices described above for COMPUTE nodes apply.

    • Zero or more AM as application servers in a distributed cache cluster.

  • Zero or one AR (arbiter node is optional but recommended for mirrored configurations).

  • Zero or more WS (web servers).

  • Zero or more LB (load balancers).

  • Zero or more VM (virtual machine nodes).

  • Zero or more CN (container nodes).

  • Zero or one BH (bastion host).
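
For example, with "Mirror": "true" in the defaults file, the following hypothetical definitions sketch (provider-specific fields omitted) deploys two data node mirrors, each a failover pair under the default MirrorMap, plus four compute nodes and an arbiter:

    [
        {
            "Role": "DATA",
            "Count": "4"
        },
        {
            "Role": "COMPUTE",
            "Count": "4"
        },
        {
            "Role": "AR",
            "Count": "1"
        }
    ]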

The following fields are required for mirroring:

  • Mirroring is enabled by setting the Mirror key in your defaults.json file to true:

    "Mirror": "true"
    
  • To include DR asyncs in DATA, DS, or DM mirrors, you must include the MirrorMap field in your definitions file to specify that those beyond the first two are DR async members. The value of MirrorMap must always begin with primary,backup, for example:

    "Role": "DM",
    "Count": "5”,
    "MirrorMap": "primary,backup,async,async,async",
    ...
    

    For information on the relationship between the MirrorMap value and the number of DATA, DS, or DM nodes defined, see Rules for Mirroring. MirrorMap can be used in conjunction with the Zone and ZoneMap fields to deploy async instances across zones; see Deploying Across Multiple Zones.
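
    For instance, a hypothetical node definition like the following (zone names and index values are illustrative) places the primary, backup, and DR async of a DM mirror in three different zones, assuming the defaults file lists three zones, such as "Zone": "us-east-1a,us-east-1b,us-east-1c", with each ZoneMap entry an index into that list:

      "Role": "DM",
      "Count": "3",
      "MirrorMap": "primary,backup,async",
      "ZoneMap": "0,1,2"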

Automatic LB deployment (see Role LB: Load Balancer) is supported for the AWS, GCP, Azure, and Tencent providers; when creating your own load balancer, the pool of IP addresses to include comprises those of the DATA, COMPUTE, AM, or WS nodes, as called for by your configuration and application.

Note:

A mirrored DM node that is deployed without AM or WS nodes or a load balancer (LB node) must have some appropriate mechanism for redirecting application connections following failover; see Redirecting Application Connections Following Failover or Disaster Recovery in the “Mirroring” chapter of the High Availability Guide for more information.

The relationships between some of these node types are pictured in the following examples.

ICM Mirrored Topologies