ICM’s distributed management mode uses the Consul service discovery tool from HashiCorp to give multiple users in any networked location management access to a single ICM deployment. It does this by running multiple ICM containers, each of which includes a Consul client clustered with one or more Consul servers storing the needed state files.
Distributed Management Mode Overview
The initial ICM container, used to provision the infrastructure, is called the primary ICM container (or just the “primary ICM”). During the provisioning phase, the primary ICM does the following:
Deploys 1, 3, or 5 Consul servers on CN nodes, which are clustered together with its Consul client.
At the conclusion of the icm provision command, pushes the deployment’s state files to the Consul cluster and displays the docker run command for creating secondary ICM containers.
When a user executes the provided docker run command, a secondary ICM container (or “secondary ICM”) is created, and an interactive container session is started in the provider-appropriate directory (for example, /Samples/GCP). The secondary ICM automatically pulls the deployment’s state files from the Consul cluster at the start of every ICM command, so it always has the latest information. This creates a container that for all intents and purposes is a duplicate of the primary ICM container, with the one exception that it cannot provision or unprovision infrastructure. All other ICM commands are valid.
Configuring Distributed Management Mode
To create the primary ICM container and the Consul cluster, do the following:
Add the ConsulServers field to the defaults.json file to specify the number of Consul servers:
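For example, a minimal defaults.json entry might look like the following (the Provider value is taken from the GCP example used elsewhere in this section; any other fields are omitted):

```json
{
    "Provider": "GCP",
    "ConsulServers": "3"
}
```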
Possible values are 1, 3, and 5. A single Consul server represents a single point of failure and thus is not recommended. A five-server cluster is more reliable than a three-server cluster, but incurs greater cost.
Include a CN node definition in the definitions.json file specifying at least as many CN nodes as the value of the ConsulServers field, for example:
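A sketch of such a definitions.json entry, consistent with the node names shown in the icm ps output later in this section (one DM node plus three CN nodes; any other fields a real deployment needs are omitted here):

```json
[
    {
        "Role": "DM",
        "Count": "1"
    },
    {
        "Role": "CN",
        "Count": "3"
    }
]
```

With ConsulServers set to 3 in defaults.json, the CN Count must be at least 3, as above.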
Add the consul.sh script, which is included in the ICM container, to the end of the docker run command for the primary ICM, as follows:
docker run --name primary-icm --init -d -it --cap-add SYS_TIME intersystems/icm:latest-em consul.sh
When you issue the icm provision command on the primary ICM command line, a Consul server is deployed on each CN node as it is provisioned until the specified number of servers is reached. When the command concludes successfully, the primary ICM pushes the state files to the Consul cluster, and its output includes the secondary ICM creation command. When you subsequently issue any command in the primary ICM that might alter the instances file, such as icm run or icm upgrade, the primary ICM pushes the new file to the Consul cluster. When you use the icm unprovision command in the primary ICM to unprovision the deployment, its state files are removed from the Consul cluster.
The docker run command for the secondary ICM, provided in the output of icm provision, includes an encryption key (16 bytes, Base64-encoded) that allows the new ICM container to join the Consul cluster, for example:
docker run --name icm --init -d -it --cap-add SYS_TIME intersystems/icm:latest-em
You can use the secondary ICM creation command as many times as you wish, in any location that has network access to the deployment.
In both primary and secondary ICM containers, the consul members command can be used to display information about the Consul cluster, for example:
/Samples/GCP # consul members
Node                                  Address              Status  Type    Build  Protocol  DC   Segment
consul-Acme-CN-TEST-0002.weave.local  22.214.171.124:8301  failed  server  1.1.0  2         dc1  <all>
consul-Acme-CN-TEST-0003.weave.local  126.96.36.199:8301   alive   server  1.1.0  2         dc1  <all>
consul-Acme-CN-TEST-0004.weave.local  188.8.131.52:8301    alive   server  1.1.0  2         dc1  <all>
3be7366b4495                          172.17.0.4:8301      alive   client  1.1.0  2         dc1  <default>
e0e87449a610                          172.17.0.3:8301      alive   client  1.1.0  2         dc1  <default>
Consul containers are also included in the output of the icm ps command, as shown in the following:
/Samples/GCP # icm ps
Pulling from consul cluster...
...pulled from consul cluster
Machine            IP Address       Container           Status   Health  Image
-----------------  ---------------  ------------------  -------  ------  ---------------------------
Acme-DM-TEST-0001  184.108.40.206   weave               Up               weaveworks/weave:2.3.0
Acme-DM-TEST-0001  220.127.116.11   weavevolumes-2.3.0  Created          weaveworks/weaveexec:2.3.0
Acme-DM-TEST-0001  18.104.22.168    weavedb             Created          weaveworks/weavedb:2.3.0
Acme-CN-TEST-0004  22.214.171.124   consul              Up               consul:1.1.0
Acme-CN-TEST-0004  126.96.36.199    weave               Up               weaveworks/weave:2.3.0
Acme-CN-TEST-0004  188.8.131.52     weavevolumes-2.3.0  Created          weaveworks/weaveexec:2.3.0
Acme-CN-TEST-0004  184.108.40.206   weavedb             Created          weaveworks/weavedb:2.3.0
Acme-CN-TEST-0002  220.127.116.11   consul              Up               consul:1.1.0
Acme-CN-TEST-0002  18.104.22.168    weave               Up               weaveworks/weave:2.3.0
Acme-CN-TEST-0002  22.214.171.124   weavevolumes-2.3.0  Created          weaveworks/weaveexec:2.3.0
Acme-CN-TEST-0002  126.96.36.199    weavedb             Created          weaveworks/weavedb:2.3.0
Acme-CN-TEST-0003  188.8.131.52     consul              Up               consul:1.1.0
Acme-CN-TEST-0003  184.108.40.206   weave               Up               weaveworks/weave:2.3.0
Acme-CN-TEST-0003  220.127.116.11   weavevolumes-2.3.0  Created          weaveworks/weaveexec:2.3.0
Acme-CN-TEST-0003  18.104.22.168    weavedb             Created          weaveworks/weavedb:2.3.0
Because no concurrency control is applied to ICM commands, simultaneous conflicting commands issued in different ICM containers cannot all succeed; the results are based on timing and may include errors. For example, suppose two users in different containers simultaneously issue the command icm rm -machine Acme-DM-TEST-0001. One user will see this:
Removing container iris on Acme-DM-TEST-0001...
...removed container iris on Acme-DM-TEST-0001
while the other will see the following:
Removing container iris on Acme-DM-TEST-0001...
Error: No such container: iris
However, when no conflict exists, the same command can be run simultaneously without errors, for example icm rm -machine Acme-DM-TEST-0001 and icm rm -container customsensors -machine Acme-DM-TEST-0001.
Upgrading ICM Using Distributed Management Mode
Because distributed management mode stores a deployment’s state files on the Consul cluster, as described in Distributed Management Mode Overview, it provides an easy way to upgrade an ICM container without losing these files.
Beyond the benefits of having the latest version, upgrading ICM is necessary when you upgrade your InterSystems containers, because the major versions of the image from which you launch ICM and the InterSystems images you deploy must match. For example, you cannot deploy a 2022.2 version of InterSystems IRIS using a 2022.1 version of ICM. Therefore you must upgrade ICM before upgrading your InterSystems containers.
To upgrade an ICM container in distributed management mode, use these steps:
Use the secondary ICM icm run command provided at the end of provisioning by the primary ICM container, as described in Configuring Distributed Management Mode, to create a secondary ICM container from the ICM image you want to upgrade to. (Primary and secondary ICM containers created from different ICM images are compatible.)
In the primary ICM container, issue the command consul.sh show-master-token to get the value of the Consul token.
In the upgraded secondary ICM container, issue the command consul.sh convert-to-thick Consul_token to promote it to be the new primary ICM container.
Use docker stop and docker rm to stop and remove the old primary ICM container.
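Taken together, steps 2 through 4 amount to the following command sequence (the container name primary-icm and the token placeholder are illustrative; the consul.sh commands run inside the respective ICM containers, while docker stop and docker rm run on the host):

```
# In the old primary ICM container: display the Consul master token
consul.sh show-master-token

# In the upgraded secondary ICM container: promote it to primary,
# supplying the token obtained in the previous step (placeholder value)
consul.sh convert-to-thick <Consul_token>

# On the Docker host: stop and remove the old primary ICM container
docker stop primary-icm
docker rm primary-icm
```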
Because this is the recommended way to upgrade an ICM container that is managing a current deployment, you may want to create a primary ICM container every time you use ICM, whether or not you intend to use distributed management, so that this upgrade option remains available.