
Sharing ICM Deployments

There are a number of situations in which different users on different systems might want to use ICM to manage or interact with the same deployment. For example, one user may be responsible for provisioning the infrastructure, while another in a different location is responsible for application deployment and upgrades.

However, an ICM deployment is defined by its input files, and deployment results in the generation of several output files. Without access to these state files in the ICM container from which the deployment was made, it is difficult for anyone to manage or monitor the deployment (including the original deployer, should those files be lost).

To aid in this task, ICM can be run in distributed management mode, in which it stores the deployment’s state files on a Consul cluster for access by additional ICM containers. If distributed management mode is not used, state files can also be shared manually.

Sharing Deployments in Distributed Management Mode

ICM’s distributed management mode uses the Consul service discovery tool from Hashicorp to give multiple users in any networked locations management access to a single ICM deployment. This is done through the use of multiple ICM containers, each of which includes a Consul client clustered with one or more Consul servers storing the needed state files.

Distributed Management Mode Overview

The initial ICM container, used to provision the infrastructure, is called the primary ICM container (or just the “primary ICM”). During the provisioning phase, the primary ICM does the following:

  • Deploys 1, 3, or 5 Consul servers on CN nodes, which are clustered together with its Consul client.

  • At the conclusion of the icm provision command,

    • Pushes the deployment’s state files (see State Files for specifics) to the Consul cluster.

    • Outputs a docker run command for creating subsequent ICM containers for the deployment.

When a user executes the provided docker run command, a secondary ICM container (or “secondary ICM”) is created, and an interactive container session is started in the provider-appropriate directory (for example, /Samples/GCP). The secondary ICM automatically pulls the deployment’s state files from the Consul cluster at the start of every ICM command, so it always has the latest information. This creates a container that for all intents and purposes is a duplicate of the primary ICM container, with the one exception that it cannot provision or unprovision infrastructure. All other ICM commands are valid.

Configuring Distributed Management Mode

To create the primary ICM container and the Consul cluster, do the following:

  1. Add the ConsulServers field to the defaults.json file to specify the number of Consul servers:

    "ConsulServers": "3"
    

    Possible values are 1, 3, and 5. A single Consul server represents a single point of failure and thus is not recommended. A five-server cluster is more reliable than a three-server cluster, but incurs greater cost.

  2. Include a CN node definition in the definitions.json file specifying at least as many CN nodes as the value of the ConsulServers field, for example:

    {
         "Role": "CN",
         "Count": "3",
         "StartCount": "7",
         "InstanceType": "t2.small"
    }
    
    
  3. Add the consul.sh script in the ICM container to the docker run command for the primary ICM, as follows:

    docker run --name primaryICM -it --cap-add SYS_TIME intersystems/icm:stable consul.sh
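
Putting steps 1 and 2 together, the two configuration files might contain fragments like the following. This is an illustrative sketch only: the Provider value, the DM node definition, and the instance types are placeholders, and your actual files will contain additional fields.

```
defaults.json:
{
    "Provider": "AWS",
    "ConsulServers": "3",
    ...
}

definitions.json:
[
    {
        "Role": "DM",
        "Count": "1",
        "InstanceType": "m4.large"
    },
    {
        "Role": "CN",
        "Count": "3",
        "StartCount": "2",
        "InstanceType": "t2.small"
    }
]
```

Here StartCount continues the node numbering after the nodes defined earlier in the file, so the CN nodes in this sketch would be numbered starting at 0002.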
    

When you issue the icm provision command on the primary ICM command line, a Consul server is deployed on each CN node as it is provisioned until the specified number of servers is reached. When the command concludes successfully, the primary ICM pushes the state files to the Consul cluster, and its output includes the secondary ICM creation command. When you subsequently issue any command in the primary ICM that might alter the instances.json file, such as icm run or icm upgrade, the primary ICM pushes the new file to the Consul cluster. When you use the icm unprovision command in the primary ICM to unprovision the deployment, its state files are removed from the Consul cluster.

The docker run command for creating a secondary ICM container, provided in the output of icm provision, includes an encryption key (16 bytes, Base64-encoded) that allows the new ICM container to join the Consul cluster, for example:

docker run -it --name ICM --cap-add SYS_TIME acme/icm:stable 
  consul.sh qQ6MPKCH1YzTb0j9Yst33w==

You can use the secondary ICM creation command as many times as you wish, in any location that has network access to the deployment.

In both primary and secondary ICM containers, the consul members command can be used to display information about the Consul cluster, for example:

/Samples/GCP # consul members
Node                                  Address               Status  Type    Build  Protocol DC  Segment
consul-Acme-CN-TEST-0002.weave.local  104.196.151.243:8301  failed  server  1.1.0  2        dc1 <all>
consul-Acme-CN-TEST-0003.weave.local  35.196.254.13:8301    alive   server  1.1.0  2        dc1 <all>
consul-Acme-CN-TEST-0004.weave.local  35.196.128.118:8301   alive   server  1.1.0  2        dc1 <all>
3be7366b4495                          172.17.0.4:8301       alive   client  1.1.0  2        dc1 <default>
e0e87449a610                          172.17.0.3:8301       alive   client  1.1.0  2        dc1 <default>

Consul containers are also included in the output of the icm ps command, as shown in the following:

/Samples/GCP # icm ps
Pulling from consul cluster...
CurrentWorkingDirectory: /Samples/GCP
...pulled from consul cluster
Machine            IP Address       Container           Status   Health   Image
-------            ----------       ---------           ------   ------   -----
Acme-DM-TEST-0001  35.227.32.29     weave               Up                weaveworks/weave:2.3.0
Acme-DM-TEST-0001  35.227.32.29     weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
Acme-DM-TEST-0001  35.227.32.29     weavedb             Created           weaveworks/weavedb:latest
Acme-CN-TEST-0004  35.196.128.118   consul              Up                consul:1.1.0
Acme-CN-TEST-0004  35.196.128.118   weave               Up                weaveworks/weave:2.3.0
Acme-CN-TEST-0004  35.196.128.118   weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
Acme-CN-TEST-0004  35.196.128.118   weavedb             Created           weaveworks/weavedb:latest
Acme-CN-TEST-0002  104.196.151.243  consul              Up                consul:1.1.0
Acme-CN-TEST-0002  104.196.151.243  weave               Up                weaveworks/weave:2.3.0
Acme-CN-TEST-0002  104.196.151.243  weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
Acme-CN-TEST-0002  104.196.151.243  weavedb             Created           weaveworks/weavedb:latest
Acme-CN-TEST-0003  35.196.254.13    consul              Up                consul:1.1.0
Acme-CN-TEST-0003  35.196.254.13    weave               Up                weaveworks/weave:2.3.0
Acme-CN-TEST-0003  35.196.254.13    weavevolumes-2.3.0  Created           weaveworks/weaveexec:2.3.0
Acme-CN-TEST-0003  35.196.254.13    weavedb             Created           weaveworks/weavedb:latest
Note:

Because no concurrency control is applied to ICM commands, simultaneous conflicting commands issued in different ICM containers cannot all succeed; the results are based on timing and may include errors. For example, suppose two users in different containers simultaneously issue the command icm rm -machine Acme-DM-TEST-0001. One user will see this:

Removing container iris on Acme-DM-TEST-0001...
...removed container iris on Acme-DM-TEST-0001

while the other will see the following:

Removing container iris on Acme-DM-TEST-0001...
Error: No such container: iris

However, when no conflict exists, the same command can be run simultaneously without errors, for example icm rm -machine Acme-DM-TEST-0001 and icm rm -container customsensors -machine Acme-DM-TEST-0001.

Upgrading ICM Using Distributed Management Mode

Because distributed management mode stores a deployment’s state files on the Consul cluster, as described in Distributed Management Mode Overview, it provides an easy way to upgrade an ICM container without losing these files.

Beyond the benefits of having the latest version, upgrading ICM is necessary when you upgrade your InterSystems containers, because the major versions of the image from which you launch ICM and the InterSystems images you deploy must match. For example, you cannot deploy a 2019.4 version of InterSystems IRIS using a 2019.3 version of ICM. Therefore you must upgrade ICM before upgrading your InterSystems containers.

To upgrade an ICM container in distributed management mode, use these steps:

  1. Use the docker run command for creating a secondary ICM container, provided at the end of provisioning by the primary ICM container as described in Configuring Distributed Management Mode, to create a secondary ICM container from the ICM image you want to upgrade to. (Primary and secondary ICM containers created from different ICM images are compatible.)

  2. In the primary ICM container, issue the command consul.sh show-master-token to get the value of the Consul token.

  3. In the upgraded secondary ICM container, issue the command consul.sh convert-to-thick Consul_token to convert it to the primary ICM container.

  4. Use docker stop and docker rm to stop and remove the old primary ICM container. 
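
Using the container name from the earlier primary ICM example, the sequence for steps 2 through 4 might look like the following; the token placeholder stands for the value displayed by show-master-token:

```
# Step 2, in the old primary ICM container:
consul.sh show-master-token

# Step 3, in the new (upgraded) secondary ICM container,
# substituting the token displayed in step 2:
consul.sh convert-to-thick Consul_token

# Step 4, on the host running the old primary ICM container:
docker stop primaryICM
docker rm primaryICM
```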

Because this is the recommended way to upgrade an ICM container that is managing a current deployment, you may want to create a primary ICM container every time you use ICM, whether you intend to use distributed management or not, so that this option is available.

Note:

For information about upgrading a pre-2019.3 ICM container to release 2019.4, see the Release Notes and Upgrade Checklist.

Sharing Deployments Manually

This section explains how to share ICM deployments manually, describing which state files are required to share a deployment, methods for accessing them from outside the container, and how to persist those files so an ICM-driven deployment can be shared with other users or accessed from another location.

State Files

The state files are read from and written to the current working directory, though all of them can be overridden to use a custom name and location. Input files are as follows:

  • defaults.json — Override with -defaults filepath

  • definitions.json — Override with -definitions filepath

Any security keys, InterSystems IRIS licenses, or other files referenced from within these configuration files should be considered input as well.
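
For example, a defaults.json file might reference key files and license and TLS directories like the following (field names as used in ICM sample configurations; the paths shown are the ICM container's sample locations and are illustrative). Any such files must accompany defaults.json and definitions.json when the deployment is shared:

```
{
    "SSHPublicKey": "/Samples/ssh/insecure-ssh2.pub",
    "SSHPrivateKey": "/Samples/ssh/insecure",
    "TLSKeyDir": "/Samples/tls/",
    "LicenseDir": "/Samples/license/"
}
```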

Output files are as follows:

  • instances.json — Override with -instances filepath

  • state/ — Override with -stateDir path

The layout of the files under state/ is as follows:

definition 0/
definition 1/
...
definition N/

Under each definition directory are the following files:

  • terraform.tfvars — Terraform inputs

  • terraform.tfstate — Terraform state

A variety of log files, temporary files, and other files appear in this hierarchy as well, but they are not required for sharing a deployment.

Note:

For provider PreExisting, no Terraform files are generated.

Maintaining Immutability

InterSystems recommends that you avoid generating state files local to the ICM container, for the following reasons:

  • Immutability is violated.

  • Data can be lost if the container is removed, updated, or replaced.

  • The ability to edit configuration files within the ICM container is limited.

  • State files must be copied out of the container, which is tedious and error-prone.

A better practice is to mount a directory from the host within the ICM container to use as your working directory; that way all changes within the container are always available on the host. This can be accomplished using the Docker --volume option when the ICM container is first created, as follows:

$ docker run -it --cap-add SYS_TIME --volume <host_path>:<container_path> <image>

Overall, you would take these steps:

  1. Stage input files on the host in host_path.

  2. Create, start, and attach to ICM container.

  3. Navigate to container_path.

  4. Issue ICM commands.

  5. Exit or detach from ICM container.

The state files (both input and output) are then present in host_path. See the sample script in Launch ICM for an example of this approach.

Persisting State Files

Methods of preserving and sharing state files with others include:

  • Make a tar/gzip

    The resulting archive can be emailed, put on an FTP site, a USB stick, and so on.

  • Make backups to a location from which others can restore

    Register the path to the state files on the host with a backup service.

  • Mount a disk volume accessible by others in your organization

    The path to the state files could be a Samba mount, for example.

  • Specify a disk location backed up to the cloud

    You might use services such as Dropbox, Google Drive, OneDrive, and so on.

  • Store in a document database

    This could be cloud-based or on-premises.

The advantage of the latter three methods is that they allow others to modify the deployment. Note however that ICM does not support simultaneous operations issued from more than one ICM container at a time, so a policy ensuring exclusive read-write access would need to be enforced.
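
The first method can be as simple as archiving the working directory's state files, using the file names listed in State Files. In this sketch, the dummy files created by mkdir and touch stand in for the files a real deployment would produce:

```shell
# Create a stand-in working directory (a real one is produced by ICM)
mkdir -p /tmp/icm-share/state/definition0
cd /tmp/icm-share
touch defaults.json definitions.json instances.json
touch state/definition0/terraform.tfvars state/definition0/terraform.tfstate

# Archive the input and output state files for sharing
tar -czf icm-state.tar.gz defaults.json definitions.json instances.json state/

# List the archive contents to verify
tar -tzf icm-state.tar.gz
```

The resulting icm-state.tar.gz can then be emailed, placed on an FTP site, or distributed however is convenient; unpacking it in a mounted working directory gives another ICM container everything it needs to manage the deployment.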