Mirror Data Nodes for High Availability

An InterSystems IRIS mirror is a logical grouping of physically independent InterSystems IRIS instances simultaneously maintaining exact copies of production databases, so that if the instance providing access to the databases becomes unavailable, another can automatically and quickly take over. This automatic failover capability provides high availability for the InterSystems IRIS databases in the mirror. The High Availability Guide contains detailed information about InterSystems IRIS mirroring.

The data nodes in a sharded cluster can be mirrored to give them a failover capability, making them highly available. Each mirrored data node in a sharded cluster includes at least a failover pair of instances, one of which operates as the primary at all times, while the other operates as the backup, ready to take over as primary should its failover partner become unavailable. Data node mirrors in a sharded cluster can also include one or more DR (disaster recovery) async members, which can be promoted to failover member to replace a disabled failover partner or provide disaster recovery if both failover partners become unavailable. For typical configurations, it is strongly recommended that DR asyncs be located in separate data centers or cloud availability zones from their failover pairs to minimize the chances of all of them being affected by the same failure or outage.

A sharded cluster must be either all mirrored or all nonmirrored; that is, mirrored and nonmirrored data nodes cannot be mixed in the same cluster.

The manual procedures for configuring mirrored data nodes (using either the Management Portal or the %SYSTEM.Cluster API) recognize existing mirror configurations. This means you can configure a sharded cluster of mirrored data nodes from either nonmirrored or mirrored instances, as follows, depending on your existing circumstances and requirements:

  • When you configure nonmirrored instances as a mirrored sharded cluster, each intended primary you add is automatically configured as a mirror primary before it is attached to the cluster. You can then add another nonmirrored instance as its backup, which is automatically configured as such before it is attached to the cluster.

  • When you configure existing mirrors as a sharded cluster, each primary you add is attached to the cluster as a data node primary without any changes to its existing mirror configuration. You can then add its failover partner to the cluster by identifying it as the backup of that primary. (If the mirrored instance you added as a data node primary does not have a failover partner, you can identify a nonmirrored instance as the backup; it is automatically configured as such before it is attached to the cluster.)

  • Similarly, you can add a nonmirrored instance as a DR async member of a data node mirror to have it automatically configured as such before being attached to the cluster, or add a DR async member of an existing mirror whose failover members have already been attached to the cluster by identifying it as such.

  • In all cases, the globals databases of the cluster and master namespaces (IRISCLUSTER and IRISDM by default) are added to the mirror (the former on all data nodes and the latter on the node 1 mirror); when you configure existing mirror members, any mirrored databases remain mirrored following sharded cluster configuration.

Generally speaking, the best practice is to either begin with all unmirrored instances and configure the mirrors as part of sharded cluster deployment, or configure all of the data nodes from existing mirrors.

You can deploy a mirrored sharded cluster using any of the methods described in Deploying the Sharded Cluster. The automatic deployment methods described there all include mirroring as an option. This section provides mirrored cluster instructions to replace the final step of the manual procedures included in that section. First, execute the steps described in that section, then use one of the procedures described here to complete deployment, as follows:

  1. Plan the data nodes

  2. Estimate the database cache and database sizes

  3. Provision or identify the infrastructure

    Note:

    Remember that you must provide two hosts for each mirrored data node planned in the first step. For example, if your plan calls for eight data nodes, your cluster requires 16 hosts. As is recommended for all data nodes in a sharded cluster, the two hosts comprising a mirror should have identical or at least closely comparable specifications and resources.

  4. Deploy InterSystems IRIS on the data node hosts

  5. Configure the mirrored cluster nodes using either the Management Portal or the %SYSTEM.Cluster API, as described in the two procedures that follow.

However you deploy the cluster, the recommended best practices are as follows:

  • Load balance application connections across all of the mirrored data nodes in the cluster.

  • Deploy and fully configure the mirrored sharded cluster, with all members of all mirrored data nodes (failover pair plus any DR asyncs) attached to the cluster, before any data is stored on the cluster. However, you can also convert an existing nonmirrored cluster to a mirrored cluster, regardless of whether there is data on it.

  • To enable transparent query execution after data node mirror failover, include compute nodes in the mirrored cluster.

Note:

The Management Portal mirroring pages and the SYS.Mirror API allow you to specify more mirror settings than the %SYSTEM.Cluster API and Management Portal sharding pages described here; for details, see the “Configuring Mirroring” chapter of the High Availability Guide. Even if you plan to create a cluster from nonmirrored instances and let the Management Portal or API automatically configure the mirrors, it is a good idea to review the procedures and settings in Creating a Mirror in the “Configuring Mirroring” chapter before you do so.

You cannot use the procedures in this section to deploy a mirrored namespace-level cluster. You can, however, deploy a nonmirrored namespace-level cluster as described in Deploying the Namespace-level Architecture and then convert it to a mirrored cluster as described in Convert a Nonmirrored Cluster to a Mirrored Cluster.

Mirrored Cluster Considerations

When deploying a mirrored sharded cluster, bear in mind the important points described in the following sections.

Including Compute Nodes in Mirrored Clusters for Transparent Query Execution Across Failover

Because they do not store persistent data, compute nodes are not themselves mirrored, but including them in a mirrored cluster can be advantageous even when the workload involved does not match the advanced use cases described in Deploy Compute Nodes for Workload Separation and Increased Query Throughput (which provides detailed information about compute nodes and procedures for deploying them).

If a mirrored sharded cluster is in asynchronous query mode (the default) and a data node mirror fails over while a sharded query is executing, an error is returned and the application must retry the query. There are two ways to address this problem — that is, to enable sharded queries to execute transparently across failover — as follows:

  • Set the cluster to synchronous query mode. This has drawbacks, however; in synchronous mode, sharded queries cannot be canceled, and they make greater use of the IRISTEMP database, increasing the risk that it will expand to consume all of its available storage space, interrupting the operation of the cluster.

  • Include compute nodes in the cluster. Because each compute node has a mirror connection to the mirrored data node it is assigned to, compute nodes enable transparent query execution across failover in asynchronous mode.

In view of the options, if transparent query execution across failover is important for your workload, InterSystems recommends including compute nodes in your mirrored sharded cluster (there must be at least as many as there are mirrored data nodes). If your circumstances preclude including compute nodes, you can use the RunQueriesAsync option of the $SYSTEM.Sharding.SetOption() API call (see %SYSTEM.Sharding API) to change the cluster to synchronous mode, but you should do so only if transparent query execution across failover is more important to you than the ability to cancel sharded queries and manage the size of IRISTEMP.
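
For reference, the following Terminal sketch changes a cluster to synchronous query mode using this option. It is a minimal sketch, assuming that IRISCLUSTER is the cluster namespace and that an option value of 0 selects synchronous mode; check the %SYSTEM.Sharding class reference before relying on the exact argument values.

    // Assumptions: "IRISCLUSTER" is the cluster namespace, and a value of 0
    // (false) for RunQueriesAsync selects synchronous query mode.
    set status = $SYSTEM.Sharding.SetOption("IRISCLUSTER","RunQueriesAsync",0)
    zw status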

Creating Cluster and Master Namespaces Before Deploying

A sharded cluster is initialized when you configure the first data node, which is referred to as data node 1, or simply node 1. By default, this includes creating the cluster and master namespaces, named IRISCLUSTER and IRISDM, respectively, as well as their default globals databases and the needed mappings. However, to control the names of the cluster and master namespaces and/or the characteristics of their globals databases, you can create one or both namespaces and their default databases before configuring the cluster, then specify them during the procedure. If you plan to do this when deploying a mirrored sharded cluster, you cannot begin with unmirrored instances, but instead must take the following steps in the order shown:

  1. Configure a mirror for each prospective data node, including two failover members in each.

  2. Create the intended cluster namespace (using the default name IRISCLUSTER, or optionally another name) on each mirror primary, and the intended master namespace (default name IRISDM) on the primary of the mirror you will attach as data node 1. When creating each namespace, at the Select an existing database for Globals prompt select Create New Database, and at the bottom of the second page of the Database Wizard, set Mirrored database? to Yes to create the namespace’s default globals database as a mirrored database.

  3. Configure the mirrors as data nodes as described in the procedures in this section, specifying the namespaces you created as the cluster and master namespaces.
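
If you use the %SYSTEM.Cluster API for the last step, the namespaces you created are passed as the first two arguments to InitializeMirrored() (as noted later in this section, these are the same as for Initialize()). A minimal sketch, assuming hypothetical namespace names MYCLUSTER and MYMASTER created as described above:

    // Run on the node 1 primary; MYCLUSTER and MYMASTER are hypothetical
    // names for the pre-created cluster and master namespaces.
    set status = $SYSTEM.Cluster.InitializeMirrored("MYCLUSTER","MYMASTER")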

Enabling IP Address Override on All Mirror Members

In some cases the hostname known to the InterSystems IRIS instance on an intended cluster node does not resolve to an appropriate address, or no hostname is available. If for this or any other reason you want other cluster nodes to communicate with a node using its IP address instead, you can enable this by providing the node’s IP address, at the Override hostname with IP address prompt in the Management Portal or as an argument to the $SYSTEM.Cluster.InitializeMirrored() and $SYSTEM.Cluster.AttachAsMirroredNode() calls. When you do this for one member of a mirror, you must do it for all mirror members, as follows:

  • When configuring unmirrored instances as mirrored data nodes, ensure that you enable IP address override for all mirror members using the prompt or calls cited above.

  • When configuring an existing mirror as a data node, however, you must enable IP address override on all members of the mirror before adding it to the cluster. Regardless of whether you are using the Management Portal or API procedure, do this on each node by opening an InterSystems Terminal window and (in any namespace) calling $SYSTEM.Sharding.SetNodeIPAddress() (see %SYSTEM.Sharding API), for example:

    set status = $SYSTEM.Sharding.SetNodeIPAddress("00.53.183.209")
    

    Once you have used this call on a node, you must use the IP address you specified, rather than the hostname, to refer to the node in other API calls, for example, when attaching a mirror backup to the cluster using the $SYSTEM.Cluster.AttachAsMirroredNode() call, in which you must identify the attached primary.
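
    For example, a sketch of attaching this node’s backup after the SetNodeIPAddress() call above, reusing the example address and the AttachAsMirroredNode() pattern described later in this section (the port shown assumes the default superserver port):

    set status = $SYSTEM.Cluster.AttachAsMirroredNode("IRIS://00.53.183.209:1972","backup")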

Updating a Mirrored Cluster’s Metadata

If you make mirroring changes to the mirrored data nodes of a cluster outside of the %SYSTEM.Cluster API and Management Portal sharding pages, for example using the Management Portal mirroring pages or the SYS.Mirror API, you must update the mirrored cluster’s metadata after doing so; for more information, see Updating the Cluster Metadata for Mirroring Changes.

Configure the Mirrored Cluster Using the Management Portal

For information about opening the Management Portal in your browser, see the instructions for an instance deployed in a container or one installed from a kit in the InterSystems IRIS Connection Information section of InterSystems IRIS Basics: Connecting an IDE.

To configure the mirrored cluster using the Management Portal, follow these steps:

  1. On both the intended node 1 primary and the intended node 1 backup, open the Management Portal for the instance, select System Administration > Configuration > System Configuration > Sharding > Enable Sharding, and on the dialog that displays, click OK (there is no need to change the Maximum Number of ECP Connections setting). Then restart the instance as indicated (there is no need to close the browser window or tab containing the Management Portal; you can simply reload it after the instance has fully restarted).

  2. On the intended primary, navigate to the Configure Node-Level page (System Administration > Configuration > System Configuration > Sharding > Configure Node-Level) and click the Configure button.

  3. On the CONFIGURE NODE-LEVEL CLUSTER dialog, select Initialize a new sharded cluster on this instance and respond to the prompts that display as follows:

    1. Select a Cluster namespace and Master namespace from the respective drop-downs, which both include

      • The default names (IRISCLUSTER and IRISDM) of new namespaces to be created, along with their default globals databases.

      • All eligible existing namespaces.

      Initializing a sharded cluster creates by default cluster and master namespaces named IRISCLUSTER and IRISDM, respectively, as well as their default globals databases and the needed mappings. However, to control the names of the cluster and master namespaces and the characteristics of their globals databases, you can create one or both namespaces and their default databases before configuring the cluster, then specify them during the procedure. For example, given the considerations discussed in Globals Database Sizes, you may want to do this to control the characteristics of the default globals database of the cluster namespace, or shard database, which is replicated on all data nodes in the cluster.

      Note:

      If you want to use existing namespaces when configuring existing mirrors as a sharded cluster, follow the procedure in Creating Cluster and Master Namespaces Before Deploying.

      If the default globals database of an existing namespace you specify as the cluster namespace contains any globals or routines, initialization will fail with an error.

    2. In some cases, the hostname known to InterSystems IRIS does not resolve to an appropriate address, or no hostname is available. If for this or any other reason, you want other cluster nodes to communicate with this node using its IP address instead, enter the IP address at the Override hostname prompt.

    3. Select Enable Mirroring, and add the arbiter’s location and port if you intend to configure one (which is a highly recommended best practice).

  4. Click OK to return to the Configure Node-Level page, which now includes two tabs, Shards and Sharded Tables. Node 1 is listed under Shards, including its cluster address (which you will need in the next procedure), so you may want to leave the Management Portal for node 1 open on the Configure Node-Level page, for reference. The mirror name is not yet displayed because the backup has not yet been added.

  5. On the intended node 1 backup, navigate to the Configure Node-Level page as for the primary, and click the Configure button.

  6. On the CONFIGURE NODE-LEVEL CLUSTER dialog, select Add this instance to an existing sharded cluster and respond to the prompts that display as follows:

    1. Enter the address displayed for the node 1 primary on the Shards tab of the Configure Node-Level page (as described in an earlier step) as the Cluster URL.

    2. Select data at the Role prompt to configure the instance as a data node.

    3. In some cases, the hostname known to InterSystems IRIS does not resolve to an appropriate address, or no hostname is available. If for this or any other reason, you want other cluster nodes to communicate with this node using its IP address instead, enter the IP address at the Override hostname prompt.

    4. Select Mirrored cluster and do the following:

      • Select backup failover from the Mirror role drop-down.

      • If you configured an arbiter when initializing node 1 in a previous step, add the same arbiter location and port as you did there.

  7. Click OK to return to the Configure Node-Level page, which now includes two tabs, Shards and Sharded Tables. The node 1 primary and backup you have configured so far are listed under Shards in the node 1 position, with the assigned mirror name included.

  8. For each remaining mirrored data node, repeat the previous steps, beginning with the Enable Sharding option and a restart for both instances. When you add the primary to the cluster, enter the node 1 primary’s address as the Cluster URL, as described in the preceding, but when you add the backup, enter the address of the primary you just added as the Cluster URL (not the address of the node 1 primary).

  9. After each mirrored data node is added, on one of the primaries in the cluster, navigate to the Configure Node-Level page and click Verify Shards to verify that the new mirrored node is correctly configured and can communicate with the others. You can also wait until you have added all the mirrored data nodes to do this, or you can make the verification operation automatic by clicking the Advanced Settings button and selecting Automatically verify shards on assignment on the ADVANCED SETTINGS dialog. (Other settings in this dialog should be left at the defaults when you deploy a sharded cluster.)

Configure the Mirrored Cluster Nodes Using the %SYSTEM.Cluster API

To configure the mirrored cluster using the API, do the following:

  1. On the intended node 1 primary, open the InterSystems Terminal for the instance and call the $SYSTEM.Cluster.InitializeMirrored() method, for example:

    set status = $SYSTEM.Cluster.InitializeMirrored()
    
    Note:

    To see the return value (for example, 1 for success) for each API call detailed in these instructions, enter:

    zw status
    

    If a call does not succeed, display the user-friendly error message by entering:

    do $SYSTEM.Status.DisplayError(status) 
    

    This call initializes the cluster on the node in the same way as $SYSTEM.Cluster.Initialize(), described in Configure Node 1 in “Deploy the Cluster Using the %SYSTEM.Cluster API”; review that section for explanations of the first four arguments (none required) to InitializeMirrored(), which are the same as for Initialize(). If the instance is not already a mirror primary, you can use the next five arguments to configure it as one; if it is already a primary, these are ignored. The mirror arguments are as follows:

    • Arbiter host

    • Arbiter port

    • Directory containing the Certificate Authority certificate, local certificate, and private key file required to secure the mirror with TLS, if desired. The call expects the files to be named CAFile.pem, CertificateFile.pem, and PrivateKeyFile.pem, respectively.

    • Name of the mirror.

    • Name of this mirror member.

    Note:

    The InitializeMirrored() call returns an error if

    • The current InterSystems IRIS instance is already a node of a sharded cluster.

    • The current instance is already a mirror member, but not the primary.

    • You specify (in the first two arguments) a cluster namespace or master namespace that already exists, and its globals database is not mirrored.
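
    Putting the arguments together, the following is a minimal sketch of initializing node 1 and configuring it as the primary of a new mirror in a single call. The first four arguments are omitted to accept the defaults; the arbiter host, mirror name, and member name are hypothetical placeholders, the arbiter port shown assumes the default ISCAgent port, and the empty TLS directory assumes the mirror is not secured with TLS:

    set status = $SYSTEM.Cluster.InitializeMirrored(,,,,"arbiterhost",2188,"","MIRROR1","node1primary")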

  2. On the intended node 1 backup, open the Terminal for the InterSystems IRIS instance and call $SYSTEM.Cluster.AttachAsMirroredNode(), specifying the host and superserver port of the node 1 primary as the cluster URL in the first argument, and the mirror role backup in the second, for example:

    set status = $SYSTEM.Cluster.AttachAsMirroredNode("IRIS://node1prim:1972","backup")
    

    If you supplied an IP address as the fourth argument to InitializeMirrored() when initializing the node 1 primary, use the IP address instead of the hostname to identify node 1 in the first argument, for example:

    set status = $SYSTEM.Cluster.AttachAsMirroredNode("IRIS://100.00.0.01:1972","backup")
    
    Note:

    The default superserver port number of a noncontainerized InterSystems IRIS instance that is the only such on its host is 1972. To see or set the instance’s superserver port number, select System Administration > Configuration > System Configuration > Memory and Startup in the instance’s Management Portal. (For information about opening the Management Portal for the instance and determining the superserver port, see the instructions for an instance deployed in a container or one installed from a kit in InterSystems IRIS Connection Information in InterSystems IRIS Basics: Connecting an IDE.)

    This call attaches the node as a data node in the same way as $SYSTEM.Cluster.AttachAsDataNode(), as described in Configure the Remaining Data Nodes in “Deploy the Cluster Using the %SYSTEM.Cluster API”, and ensures that it is the backup member of the node 1 mirror. If the node is backup to the node 1 primary before you issue the call — that is, you are initializing an existing mirror as node 1 — the mirror configuration is unchanged; if it is not a mirror member, it is added to the node 1 primary’s mirror as backup. Either way, the namespace, database, and mappings configuration of the node 1 primary are replicated on this node. (The third argument to AttachAsMirroredNode is the same as the second for AttachAsDataNode, that is, the IP address of the host, included if you want the other cluster members to use it in communicating with this node.)

    If there are any intended DR async members of the node 1 mirror, use AttachAsMirroredNode() to attach them, with the substitution of drasync for backup as the second argument, for example:

    set status = $SYSTEM.Cluster.AttachAsMirroredNode("IRIS://node1prim:1972","drasync")
    

    As with attaching a backup, if you are attaching an existing member of the mirror, its mirror configuration is unchanged; otherwise, the needed mirror configuration is added. Either way, the namespace, database, and mappings configuration of the node 1 primary are replicated on the new node.

    Note:

    Attempting to attach an instance that is a member of a different mirror from that of the node 1 primary causes an error.

  3. To configure mirrored data nodes other than node 1, use $SYSTEM.Cluster.AttachAsMirroredNode() to attach both the failover pair and any DR asyncs to the cluster, as follows:

    1. When adding a primary, specify any existing primary in the cluster URL and primary as the second argument. If the instance is not already the primary in a mirror, use the fourth argument and the four that follow to configure it as the first member of a new mirror; the arguments are as listed for the InitializeMirrored() call in the preceding. If the instance is already a mirror primary, the mirror arguments are ignored if provided.

    2. When adding a backup, specify its intended primary in the cluster URL and backup as the second argument. If the instance is already configured as backup in the mirror in which the node you specify is primary, its mirror configuration is unchanged; if it is not yet a mirror member, it is configured as the second failover member.

    3. When adding a DR async, specify its intended primary in the cluster URL and drasync as the second argument. If the instance is already configured as a DR async in the mirror in which the node you specify is primary, its mirror configuration is unchanged; if it is not yet a mirror member, it is configured as a DR async.
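
    For example, a sketch of attaching a new primary and configuring it as the first member of a new mirror; the cluster URL follows the earlier examples, while the arbiter host, mirror name, and member name are hypothetical placeholders (the omitted third argument is the IP address override, and the arbiter port shown assumes the default ISCAgent port):

    set status = $SYSTEM.Cluster.AttachAsMirroredNode("IRIS://node1prim:1972","primary",,"arbiterhost",2188,"","MIRROR2","node2primary")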

    Note:

    The AttachAsMirroredNode() call returns an error if

    • The current InterSystems IRIS instance is already a node in a sharded cluster.

    • The role primary is specified and the cluster node specified in the cluster URL (first argument) is not a mirror primary, or the current instance belongs to a mirror in a role other than primary.

    • The role backup is specified and the cluster node specified in the first argument is not a mirror primary, or is primary in a mirror that already has a backup failover member.

    • The role drasync is specified and the cluster node specified in the first argument is not a mirror primary.

    • The role backup or drasync is specified and the instance being added already belongs to a mirror other than the one whose primary you specified.

    • The cluster namespace (or master namespace, when adding the node 1 backup) already exists on the current instance and its globals database is not mirrored.

  4. When you have configured all of the data nodes, you can call the $SYSTEM.Cluster.ListNodes() method to list them. When a cluster is mirrored, the list indicates the mirror name and role for each member of a mirrored data node, for example:

    set status = $system.Cluster.ListNodes()
    NodeId  NodeType    Host          Port   Mirror  Role
    1       Data        node1prim     1972   MIRROR1 Primary
    1       Data        node1back     1972   MIRROR1 Backup
    1       Data        node1dr       1972   MIRROR1 DRasync
    2       Data        node2prim     1972   MIRROR2 Primary
    2       Data        node2back     1972   MIRROR2 Backup
    2       Data        node2dr       1972   MIRROR2 DRasync
    

Convert a Nonmirrored Cluster to a Mirrored Cluster

This section provides a procedure for converting an existing nonmirrored sharded cluster to a mirrored cluster. The following is an overview of the tasks involved:

  • Provision and prepare at least enough new nodes to provide a backup for each existing data node in the cluster.

  • Create a mirror on each existing data node and then call $SYSTEM.Sharding.AddDatabasesToMirrors() on node 1 to automatically convert the cluster to a mirrored configuration.

  • Create a coordinated backup of the now-mirrored master and shard databases on the existing data nodes (the first failover member in each mirror) as described in Coordinated Backup and Restore of Sharded Clusters.

  • For each intended second failover member (new node), select the first failover member (existing data node) to be joined, then create databases on the new node corresponding to the mirrored databases on the first failover member, add the new node to the mirror as second failover member, and restore the databases from the backup made on the first failover member to automatically add them to the mirror.

  • To add a DR async to the failover pair you have created in a data node mirror, create databases on the new node corresponding to the mirrored databases on the first failover member, add the new node to the mirror as a DR async, and restore the databases from the backup made on the first failover member to automatically add them to the mirror.

  • Call $SYSTEM.Sharding.VerifyShards() on any of the mirror primaries (original data nodes) to validate information about the backups and add it to the sharding metadata.

You can perform the entire procedure within a single maintenance window (that is, a scheduled period of time during which the application is offline and there is no user activity on the cluster), or you can split it between two maintenance windows, as noted in the instructions.

The detailed steps are provided in the following. If you are not already familiar with it, review the section Deploy the Cluster Using the %SYSTEM.Cluster API before continuing. Familiarity with mirror configuration procedures, as described in the Configuring Mirroring chapter of the High Availability Guide, is also helpful but not required; the steps in this procedure provide links to that chapter where appropriate.

Important:

When a node-level nonmirrored cluster is converted to mirrored using this procedure, it becomes a namespace-level cluster, and can be managed and modified using only the %SYSTEM.Sharding API and the namespace-level pages in the Management Portal.

  1. To use this procedure, you must know the names of the cluster’s cluster namespace and master namespace, which were determined when you deployed the cluster. For example, in the procedure provided in “Configure the Cluster Using the Management Portal”, step 4 in the task Configure Data Node 1 discusses selecting the cluster and master namespaces; similarly, in “Configure the Cluster Using the %SYSTEM.Cluster API”, the discussion of the initial API call in Configure Node 1 includes determining the cluster and master namespaces.

  2. Prepare the nodes that are to be added to the cluster as backup failover members according to the instructions in the first two steps of “Deploy the Cluster Using the %SYSTEM.Cluster API”, Provision or identify the infrastructure and Deploy InterSystems IRIS on the Data Nodes. The host characteristics and InterSystems IRIS configuration of the prospective backups should be the same as the existing data nodes in all respects (see Mirror Configuration Guidelines in the High Availability Guide).

    Note:

    It may be helpful to make a record, by hostnames or IP addresses, of the intended first failover member (existing data node) and second failover member (newly added node) of each failover pair.

  3. Begin a maintenance window for the sharded cluster.

  4. On each current data node, start the ISCAgent, then create a mirror and configure the first failover member.

  5. To convert the cluster to a mirrored configuration — that is, to incorporate the mirrors you created in the previous step into the cluster’s configuration and metadata — open the InterSystems Terminal for the instance on node 1 and in the master namespace call the $SYSTEM.Sharding.AddDatabasesToMirrors() method (see %SYSTEM.Sharding API) as follows:

    set status = $SYSTEM.Sharding.AddDatabasesToMirrors()
    
    Note:

    To see the return value (for example, 1 for success) for each API call detailed in these instructions, enter:

    zw status
    

    Reviewing status after each call is a good general practice, as a call might fail silently under some circumstances. If a call does not succeed (status is not 1), display the user-friendly error message by entering:

    do $SYSTEM.Status.DisplayError(status) 
    

    The AddDatabasesToMirrors() call does the following:

    • Adds the master and shard databases on node 1 (see Initialize node 1 in “Deploy the Cluster Using the %SYSTEM.Cluster API”) and the shard databases on the other data nodes to their respective mirrors.

    • Reconfigures all ECP connections between nodes as mirror connections, including those between compute nodes (if any) and their associated data nodes.

    • Reconfigures remote databases on all data nodes and adjusts all related mappings accordingly.

    • Updates the sharding metadata to reflect the reconfigured connections, databases, and mappings.

    When the call has successfully completed, the sharded cluster is in a fully usable state (although failover is not yet possible because the backup failover members have not yet been added).

  6. Perform a coordinated backup of the data nodes (that is, one in which all nodes are backed up at the same logical point in time). Specifically, on each of the first failover members (the existing data nodes), back up the shard database, and on node 1, also back up the master database. Before the backup, confirm that you have identified the right databases by examining the instance’s configuration parameter file (CPF), as follows:

    • Identify the shard database by finding the [Map.clusternamespace] section of the CPF — for example, if the cluster namespace is CLUSTERNAMESPACE, the section would be [Map.CLUSTERNAMESPACE] — and locating the IRIS.SM.Shard and IS.* global prefix mappings, the target of which is the shard database. Additional global prefixes may be mapped to the shard database, as shown in the following, which identifies the shard database as SHARDDB.

      [Map.CLUSTERNAMESPACE]
      Global_IRIS.SM.Shard=SHARDDB
      Global_IRIS.Shard.*=SHARDDB
      Global_IS.*=SHARDDB
      Package_IRIS.Federated=SHARDDB
      
    • On node 1, also locate the [Namespaces] section, where the master database is shown after the master namespace as its default globals database. For example, the following shows MASTERDB, the master database, as the default globals database of MASTERNAMESPACE.

      [Namespaces]
      %SYS=IRISSYS
      CLUSTERNAMESPACE=SHARDDB
      MASTERNAMESPACE=MASTERDB
      USER=USER
      
  7. Optionally, end the current maintenance window and allow application activity while you prepare the prospective second failover members and DR async mirror members (if any) in the following steps.

  8. On each node to be added to the cluster as a second failover member or DR async:

    1. Start the ISCAgent.

    2. Create a namespace with the same name as the cluster namespace on the intended first failover member (CLUSTERNAMESPACE in the examples above), configuring as its default globals database a local database with the same name as the shard database on the first failover member (SHARDDB in the examples).

    3. On the intended second failover member or DR async to be added to the node 1 mirror, also create a namespace with the same name as the master namespace on the intended first failover member, configuring as its default globals database a local database with the same name as the master database. Using the examples above, you would create a MASTERNAMESPACE namespace with a database called MASTERDB as its default globals database.
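
    If you prefer to script these namespace and database creation steps, the following is a minimal ObjectScript sketch, run in the %SYS namespace on the new node. It assumes the names from the examples above and a hypothetical database directory, and uses the SYS.Database and Config.* classes; verify the method signatures against the class reference for your version:

      // Create the local database file (hypothetical directory).
      set status = ##class(SYS.Database).CreateDatabase("/iris/db/sharddb/")
      // Register the database with the instance configuration.
      set dbprops("Directory") = "/iris/db/sharddb/"
      set status = ##class(Config.Databases).Create("SHARDDB",.dbprops)
      // Create the namespace with SHARDDB as its default globals database.
      set nsprops("Globals") = "SHARDDB"
      set status = ##class(Config.Namespaces).Create("CLUSTERNAMESPACE",.nsprops)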

  9. If not in a maintenance window, start a new one.

  10. On each new node, perform the tasks required to add any nonmirrored instance as the second failover member or a DR async member of an existing mirror that includes mirrored databases containing data: add the node to the mirror in the intended role, then restore the databases backed up on the first failover member into the corresponding local databases created in the previous step, which automatically adds them to the mirror (as outlined in the overview at the beginning of this section).

    Note:

    Sharding automatically updates the cluster namespace definition, creates all the needed mappings, ECP server definitions, and remote database definitions, and propagates to the shards any user-defined mappings in the master namespace. Therefore, the only mappings that must be manually created during this process are any user-defined mappings in the master namespace, which must be created only in the master namespace on the node 1 second failover member. ECP server definitions and remote database definitions need not be manually copied.

  11. Open the InterSystems Terminal for the instance on any of the primaries (original data nodes) and in the cluster namespace (or the master namespace on node 1) call the $SYSTEM.Sharding.VerifyShards() method (see %SYSTEM.Sharding API) as follows:

    set status = $SYSTEM.Sharding.VerifyShards()
    

    This call automatically adds the necessary information about the second failover members of the mirrors to the sharding metadata.

    Note:

    All of the original cluster nodes must be the current primary of their mirrors when this call is made. Therefore, if any mirror has failed over since the second failover member was added, arrange a planned failover back to the original failover member before performing this step. (For one procedure for planned failover, see Maintenance of Primary Failover Member; for information about using the iris stop command for the graceful shutdown referred to in that procedure, see Controlling InterSystems IRIS Instances.)

Important:

With the completion of the last step above, the maintenance window can be terminated. However, InterSystems strongly recommends testing each mirror by executing a planned failover (see above) before the cluster goes into production.

Updating the Cluster Metadata for Mirroring Changes

When you make changes to the mirror configuration of one or more data nodes in a mirrored cluster using any means other than the API or Management Portal procedures described here — that is, using the Mirroring pages of the Management Portal, the ^MIRROR routine, or the SYS.Mirror API — you must update the cluster’s metadata by either calling $SYSTEM.Sharding.VerifyShards() (see %SYSTEM.Sharding API) in the cluster namespace or using the Verify Shards button on the Configure Node-Level page of the Management Portal (see Configure the Mirrored Cluster Using the Management Portal) on any current primary failover member in the cluster. For example, if you perform a planned failover, add a DR async, demote a backup member to DR async, or promote a DR async to failover member, verifying the shards updates the metadata to reflect the change. Updating the cluster metadata is an important element in maintaining and utilizing the disaster recovery capability you have established by including DR asyncs in your data node mirrors; for more information, see Disaster Recovery of Mirrored Sharded Clusters.

A cluster’s shards can be verified after every mirroring configuration operation, or just once after a sequence of operations, but if operations are performed while the cluster is online, it is advisable to verify the shards immediately after any operation which adds or removes a failover member.
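
For example, after promoting a DR async to failover member, you could update the metadata from the Terminal on the current primary, in the cluster namespace, with the same call shown earlier in this document:

    set status = $SYSTEM.Sharding.VerifyShards()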
