
Using ICM

This chapter explains how to use ICM to deploy an InterSystems IRIS configuration in a public cloud. The sections that follow walk through the steps involved in using ICM to deploy a sample InterSystems IRIS configuration on AWS, as follows:

  • Review deployments based on two sample use cases that are used as examples throughout the chapter.

  • Use the docker run command with the ICM image provided by InterSystems to start the ICM container and open a command line.

  • Obtain the cloud provider and SSL/TLS credentials needed for secure communications by ICM.

  • Decide how many of each node type you want to provision and make other needed choices, then create the defaults.json and definitions.json configuration files needed for the deployment.

  • Use the icm provision command to provision the infrastructure and explore ICM’s infrastructure management commands. You can reprovision at any time to modify existing infrastructure, including scaling out or in.

  • Run the icm run command to deploy containers, and explore ICM’s container and service management commands. You can redeploy when the infrastructure has been reprovisioned or when you want to add, remove, or upgrade containers.

  • Run the icm unprovision command to destroy the deployment.

For comprehensive lists of the ICM commands and command-line options covered in detail in the following sections, see ICM Commands and Options.

ICM Use Cases

This chapter is focused on two typical ICM use cases, deploying the following two InterSystems IRIS configurations:

  • A distributed cache cluster: two data servers (DM) configured as a mirror, three load-balanced application servers (AM), and an arbiter (AR) node.

  • A sharded cluster: six data nodes (DATA) configured as three load-balanced mirrored pairs, plus an arbiter (AR) node.

Most of the steps in the deployment process are the same for both configurations. The primary difference lies in the definitions files; see Define the Deployment for detailed contents. Output shown for the provisioning phase (The icm provision Command) is for the distributed cache cluster; output shown for the deployment phase (Deploy and Manage Services) is for the sharded cluster.

Launch ICM

ICM is provided as a Docker image. Everything required by ICM to carry out its provisioning, deployment, and management tasks — for example Terraform, the Docker client, and templates for the configuration files — is included in the ICM container. Therefore the only requirement for the Linux, macOS or Microsoft Windows system on which you launch ICM is that Docker is installed.

Important:

ICM is supported on Docker Enterprise Edition and Community Edition version 18.09 and later; only Enterprise Edition is supported for production environments.

Note:

Multiple ICM containers can be used to manage a single deployment, for example to make it possible for different people to execute different phases of the deployment process; for detailed information, see the appendix “Sharing ICM Deployments.”

Identifying the Repository and Image

To download and run the ICM image, and to deploy the InterSystems IRIS image in the cloud using ICM, you need to identify the Docker repository in which these images are located and the credentials you need to log into that repository. The repository must be accessible to the cloud provider you use (that is, not behind a firewall) so it can download the ICM and InterSystems IRIS images, and for security must require ICM to authenticate using credentials distributed by your organization.

InterSystems images are distributed as Docker tar archive files, available in the InterSystems Worldwide Response Center (WRC) download area. Your enterprise may already have added these images to its Docker repository; in this case, you should get the location of the repository and the needed credentials from the appropriate IT administrator. If your enterprise has a Docker repository but has not yet added the InterSystems images, get the location of the repository and the needed credentials, obtain the tar archive files containing the ICM and InterSystems IRIS images from the WRC, and add each of them to the repository using the following steps on the command line:

  1. docker load -i archive_file

  2. docker tag docker.intersystems.com/intersystems/image_name:version your_repository/image_name:version

    For example:

    docker tag docker.intersystems.com/intersystems/icm:latest acme/icm:latest
  3. docker login your_repository

    For example:

    docker login docker.acme.com
    Username: gsanchez@acme.com
    Password: **********
  4. docker push your_repository/image_name:version

    For example:

    docker push acme/icm:latest 

If your organization does not have an internet-accessible Docker repository, you can use the free (or extremely low-cost) Docker Hub for your testing. Docker Hub allows you to create organizations and add users to them to provide secure access to your images.

Running the ICM Container

To launch ICM from the command line on a system on which Docker is installed, use the docker run command, which combines three separate Docker commands to do the following:

  • Download the ICM image from the repository if it is not already present locally (can be done separately with the docker pull command).

  • Create a container from the ICM image (docker create command).

  • Start the ICM container (docker start command).

For example:

docker run --name icm -it --cap-add SYS_TIME intersystems/icm:latest

The -i option makes the command interactive and the -t option opens a pseudo-TTY, giving you command line access to the container. From this point on, you can interact with ICM by invoking ICM commands on the pseudo-TTY command line. The --cap-add SYS_TIME option allows the container to interact with the clock on the host system, avoiding clock skew that may cause the cloud service provider to reject API commands.

The ICM container includes a /Samples directory that provides you with samples of the elements required by ICM for provisioning, configuration, and deployment. The /Samples directory makes it easy for you to provision and deploy using ICM out of the box. Eventually, you can use locations outside the container to store these elements and InterSystems IRIS licenses, and either mount those locations as external volumes when you launch ICM (see Manage data in Docker in the Docker documentation) or copy files into the ICM container using the docker cp command.
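For example, if you have staged an InterSystems IRIS license key on the host, you can copy it into the running ICM container with docker cp; this is a sketch in which the file name is hypothetical and the container name icm matches the docker run example above:

docker cp ./iris.key icm:/Samples/license/iris.key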

Of course, the ICM image can also be run by custom tools and scripts, and this can help you accomplish goals such as making these external locations available within the container, and saving your configuration files and your state directory (which is required to remove the infrastructure and services you provision) to persistent storage outside the container as well. A script, for example, could do the latter by capturing the current working directory in a variable and using it to mount that directory as a storage volume when running the ICM container, as follows:


#!/bin/bash
clear

# extract the basename of the full pwd path
MOUNT=$(basename "$(pwd)")

# mount the current working directory at /$MOUNT inside the container;
# the container-side path must be absolute
docker run --name icm -it --volume "$PWD:/$MOUNT" --cap-add SYS_TIME intersystems/icm:stable
printf "\nExited icm container\n"
printf "\nRemoving icm container...\nContainer removed:  "
docker rm icm

You can mount multiple external storage volumes when running the ICM container (or any other). When deploying InterSystems IRIS containers, ICM automatically formats, partitions, and mounts several storage volumes; for more information, see Storage Volumes Mounted by ICM in the “ICM Reference” chapter.

Note:

On a Windows host, you must enable the local drive on which the directory you want to mount as a volume is located using the Shared Drives option on the Docker Settings ... menu; see Using InterSystems IRIS Containers with Docker for Windows on InterSystems Developer Community for additional requirements and general information about Docker for Windows.

Important:

When an error occurs during an ICM operation, ICM displays a message directing you to the log file in which information about the error can be found. Before beginning an ICM deployment, familiarize yourself with the log files and their locations as described in Log Files and Other ICM Files.

Upgrading an ICM Container

Distributed management mode, which allows different users on different systems to use ICM to manage the same deployment, provides a means of upgrading an ICM container while preserving the needed state files of the deployment it is managing (see The State Directory and State Files). Because this is the recommended way to upgrade an ICM container that is managing a deployment, you may want to configure distributed management mode each time you use ICM, whether you intend to use distributed management or not, so that this option is available. For information about upgrading ICM in distributed management mode, see Upgrading ICM Using Distributed Management Mode.

Obtain Security-Related Files

ICM communicates securely with the cloud provider on which it provisions the infrastructure, with the operating system of each provisioned node, and with Docker and several InterSystems IRIS services following container deployment. Before defining your deployment, you must obtain the credentials and other files needed to enable secure communication.

Cloud Provider Credentials

To use ICM with one of the public cloud platforms, you must create an account and download administrative credentials. To do this, follow the instructions provided by the cloud provider; you can also find information about how to download your credentials once your account exists in the Provider-Specific Parameters section of the “ICM Reference” chapter. In the ICM configuration files, you identify the location of these credentials using the parameter(s) specific to the provider; for AWS, this is the Credentials parameter.

When using ICM with a vSphere private cloud, you can use an existing account with the needed privileges, or create a new one. You specify these using the Username and Password fields.

SSH and SSL/TLS Keys

ICM uses SSH to provide secure access to the operating system of provisioned nodes, and SSL/TLS to establish secure connections to Docker, InterSystems Web Gateway, and JDBC, and between nodes in InterSystems IRIS mirrors, distributed cache clusters, and sharded clusters. The locations of the files needed to enable this secure communication are specified using several ICM parameters, including:

  • SSHPublicKey

    Public key of SSH public/private key pair used to enable secure connections to provisioned host nodes; in SSH2 format for AWS and OpenSSH format for other providers.

  • SSHPrivateKey

    Private key of SSH public/private key pair.

  • TLSKeyDir

    Directory containing TLS keys used to establish secure connections to Docker, InterSystems Web Gateway, JDBC, and mirrored InterSystems IRIS databases.

You can create these files for use with ICM, or review them to understand which files are needed, using two scripts provided with ICM, located in the directory /ICM/bin in the ICM container. The keygenSSH.sh script creates the needed SSH files and places them in the directory /Samples/ssh in the ICM container. The keygenTLS.sh script creates the needed SSL/TLS files and places them in /Samples/tls. You can then specify these locations when defining your deployment, or obtain your own files based on the contents of these directories.
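For example, running the two scripts from within the ICM container generates the sample files in the locations noted above:

/ICM/bin/keygenSSH.sh
/ICM/bin/keygenTLS.sh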

For more information about the security files required by ICM and generated by the keygen* scripts, see ICM Security and Security-Related Parameters in the “ICM Reference” chapter.

Important:

The keys generated by these scripts, as well as your cloud provider credentials, must be fully secured, as they provide full access to any ICM deployments in which they are used.

The keys generated by the keygen* scripts are intended as a convenience for your use in your initial test deployments. (Some have strings specific to InterSystems Corporation.) In production, the needed keys should be generated or obtained in keeping with your company's security policies.

Define the Deployment

To provide the needed parameters to ICM, you must select values for a number of fields, based on your goal and circumstances, and then incorporate these into the defaults and definitions files to be used for your deployment. You can begin with the template defaults.json and definitions.json files provided in the /Samples directory tree in the ICM container, for example /Samples/AWS.

As noted in Configuration, State and Log Files, defaults.json is often used to provide shared settings for multiple deployments in a particular category, for example those that are provisioned on the same platform, while separate definitions.json files define the node types that must be provisioned and configured for each deployment. For example, the separate definitions files illustrated here define the two target deployments described at the start of this chapter: the distributed cache cluster includes two DM nodes as a mirrored data server, three load-balanced AM nodes as application servers, and an arbiter (AR) node, while the sharded cluster includes six DATA nodes configured as three load-balanced mirrored data nodes, plus an arbiter node. At the same time, the deployments can share a defaults.json file because they have a number of characteristics in common; for example, they are both on AWS, use the same credentials, provision in the same region and availability zone, and deploy the same InterSystems IRIS image.

While some fields (such as Provider) must appear in defaults.json and some (such as Role) in definitions.json, others can be used in either depending on your needs. In this case, for example, the InstanceType field appears in the shared defaults file and both definitions files, because the DM, AM, DATA, and AR nodes all require different compute resources; for this reason a single defaults.json setting, while establishing a default instance type, is not sufficient.

The following sections explain how you can customize the configuration of the InterSystems IRIS instances you deploy and review the contents of both the shared defaults file and the separate definitions files. Each field/value pair is shown as it would appear in the configuration file.

Bear in mind that ICM allows you to modify the definitions file of an existing deployment and then reprovision and/or redeploy to add or remove nodes or to alter existing nodes. For more information, see Reprovisioning the Infrastructure and Redeploying Services.

Important:

Both field names and values are case-sensitive; for example, to select AWS as the cloud provider you must include "Provider":"AWS" in the defaults file, not "provider":"AWS", "Provider":"aws", and so on.

Note:

The fields included here represent a subset of the potentially applicable fields; see ICM Configuration Parameters for comprehensive lists of all required and optional fields, both general and provider-specific.

Shared Defaults File

The field/value pairs shown in the table in this section represent the contents of a defaults.json file that can be used for both the distributed cache cluster deployment and the sharded cluster deployment. As described at the start of this section, this file can be created by making a few modifications to the /Samples/AWS/defaults.json file, which is illustrated in the following:

{
    "Provider": "AWS",
    "Label": "Sample",
    "Tag": "TEST",
    "DataVolumeSize": "10",
    "SSHUser": "ubuntu",
    "SSHPublicKey": "/Samples/ssh/insecure-ssh2.pub",
    "SSHPrivateKey": "/Samples/ssh/insecure",
    "DockerImage": "intersystems/iris:stable",
    "DockerUsername": "xxxxxxxxxxxx",
    "DockerPassword": "xxxxxxxxxxxx",
    "TLSKeyDir": "/Samples/tls/",
    "LicenseDir": "/Samples/license/",
    "Region": "us-west-1",
    "Zone": "us-west-1c",
    "DockerVersion": "ce-18.09.8.ce",
    "AMI": "ami-c509eda6",
    "InstanceType": "m4.large",
    "Credentials": "/Samples/AWS/sample.credentials",
    "ISCPassword": "",
    "Mirror": "false",
    "UserCPF": "/Samples/cpf/iris.cpf"
}

The order of the fields in the table matches the order of this sample defaults file.

In the defaults file for a different provider, some of the fields have different provider-specific values, while others are replaced by different provider-specific fields. For example, in the Tencent defaults file, the value for InstanceType is S2.MEDIUM4, a Tencent-specific instance type that would be invalid on AWS, while the AWS AMI field is replaced by the equivalent Tencent field, ImageID. You can review these differences by examining the varying defaults.json files in the /Samples directory tree and referring to the General Parameters and Provider-Specific Parameters tables in the “ICM Reference” chapter.
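For example, the Tencent counterparts of the AWS-specific fields shown above might look like the following fragment; this is a sketch, and the ImageID value is a hypothetical placeholder:

{
    "Provider": "Tencent",
    "InstanceType": "S2.MEDIUM4",
    "ImageID": "img-xxxxxxxx"
}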

Note:

The pathnames provided in the fields specifying security files in this sample defaults file assume you have placed your AWS credentials in the /Samples/AWS directory and used the keygen*.sh scripts to generate the keys as described in Obtain Security-Related Files. If you have generated or obtained your own keys, these may be replaced by internal paths to external storage volumes mounted when the ICM container is run, as described in Launch ICM. For additional information about these files, see ICM Security and Security-Related Parameters in the “ICM Reference” chapter.

Each entry in the table below gives a shared characteristic, its default in /Samples/AWS/defaults.json, a customization example, and an explanation of the customization.

  • Provider: platform to provision infrastructure on, in this case Amazon Web Services; see Provisioning Platforms in the “Essential ICM Elements” chapter.
    Default: "Provider": "AWS"
    Customization: n/a. If the value is changed to GCP, Azure, Tencent, vSphere, or PreExisting, different fields and values from those shown here are required.

  • Label and Tag: the naming pattern for provisioned nodes is Label-Role-Tag-NNNN, where Role is the value of the Role field in the definitions file, for example ANDY-DATA-TEST-0001. Modify these so that node names indicate ownership and purpose.
    Default: "Label": "Sample"; customization: "Label": "ANDY" (update to identify owner).
    Default: "Tag": "TEST"; customization: n/a (existing value identifies purpose).

  • DataVolumeSize: size (in GB) of the persistent data volume to create for InterSystems IRIS containers; see Storage Volumes Mounted by ICM in the “ICM Reference” chapter. Can be overridden for specific node types in the definitions file.
    Default: "DataVolumeSize": "10"; customization: "DataVolumeSize": "250". If all deployments using the defaults file consist of sharded cluster (DATA) nodes only, enlarging the default size of the data volume is recommended.

  • SSHUser: nonroot account with sudo access, used by ICM for SSH access to provisioned nodes. On AWS, the required value depends on the AMI; see Security-Related Parameters in the “ICM Reference” chapter.
    Default: "SSHUser": "ubuntu"; customization: "SSHUser": "ec2-user". Change to this account if specifying a Red Hat Enterprise Linux AMI rather than Ubuntu.

  • SSHPublicKey, SSHPrivateKey, TLSKeyDir: locations of needed security key files; see Obtain Security-Related Files and Security-Related Parameters. Because the provider is AWS, the SSH2-format public key in /Samples/ssh/ is specified. If you stage your keys on a mounted external volume, update the paths to reflect this.
    Default: "SSHPublicKey": "/Samples/ssh/insecure-ssh2.pub"; customization: "SSHPublicKey": "/mydir/keys/mykey.pub"
    Default: "SSHPrivateKey": "/Samples/ssh/insecure"; customization: "SSHPrivateKey": "/mydir/keys/mykey.ppk"
    Default: "TLSKeyDir": "/Samples/tls/"; customization: "TLSKeyDir": "/mydir/keys/tls/"

  • DockerImage, DockerUsername, DockerPassword, DockerVersion: Docker image to deploy, credentials to log into the Docker repository, and version of Docker to install on provisioned nodes; see Identifying the Repository and Image and General Parameters in the “ICM Reference” chapter.
    Default: "DockerImage": "intersystems/iris:stable"; customization: "DockerImage": "acme/iris:latest". If you pushed the InterSystems IRIS image to your organization’s repository, update the image spec. (Note: InterSystems IRIS images for standard platforms are named iris; those for ARM platforms are named iris-arm64.)
    Default: "DockerUsername": "..." and "DockerPassword": "..."; customization: "DockerUsername": "AndyB" and "DockerPassword": "password". Update to use your Docker credentials.
    Default: "DockerVersion": "ce-18.09.8.ce"; customization: "DockerVersion": "ce-18.09.9.ce". The version in each /Samples/.../defaults.json is generally correct for the platform; however, if your organization uses a different version of Docker, you may want that version installed on the cloud nodes instead.

  • LicenseDir: location of InterSystems IRIS license keys staged in the ICM container and individually specified by the LicenseKey fields in the definitions file; see InterSystems IRIS Licensing for ICM in the “ICM Reference” chapter.
    Default: "LicenseDir": "/Samples/license/"; customization: "LicenseDir": "/mydir/licenses/". If you stage your licenses on a mounted external volume, update the path to reflect this.

  • Region: geographical region of the provider’s compute resources in which infrastructure is to be provisioned; see General Parameters.
    Default: "Region": "us-west-1"; customization: "Region": "us-east-2".

  • Zone: availability zone within the specified region in which to locate provisioned nodes; see General Parameters.
    Default: "Zone": "us-west-1c"; customization: "Zone": "us-east-2a". If you want to provision in another valid combination of region and availability zone, update both values to reflect this.

  • AMI: AMI to use as the platform and OS template for nodes to be provisioned; see Amazon Web Services (AWS) Parameters in the “ICM Reference” chapter.
    Default: "AMI": "ami-c509eda6"; customization: "AMI": "ami-e24b7d9d".

  • InstanceType: instance type to use as the compute resources template for nodes to be provisioned; see Amazon Web Services (AWS) Parameters.
    Default: "InstanceType": "m4.large"; customization: "InstanceType": "m5ad.large". If you want to provision from another valid combination of AMI and instance type, update both values to reflect this.

  • Credentials: credentials for the AWS account; see Amazon Web Services (AWS) Parameters.
    Default: "Credentials": "/Samples/AWS/sample.credentials"; customization: "Credentials": "/mydir/aws-credentials". If you stage your credentials on a mounted external volume, update the path to reflect this.

  • ISCPassword: password for deployed InterSystems IRIS instances. The recommended approach is to specify the password on the deployment command line (see Deploy and Manage Services) to avoid displaying it in a configuration file.
    Default: "ISCPassword": ""; customization: delete this field in favor of specifying the password with the -iscPassword option of the icm run command.

  • Mirror: whether specific node types (including DM and DATA) defined in even numbers are deployed as mirrors (see Rules for Mirroring).
    Default: "Mirror": "false"; customization: "Mirror": "true". Both sample deployments are mirrored.

  • UserCPF: the CPF merge file to be used to override initial CPF settings for deployed InterSystems IRIS instances (see Deploying with Customized InterSystems IRIS Configurations in the “ICM Reference” chapter).
    Default: "UserCPF": "/Samples/cpf/iris.cpf"; customization: delete this field unless you are familiar with the CPF merge feature and CPF settings (see the Configuration Parameter File Reference).
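Applying the example customizations above yields a defaults file like the following sketch; the paths, image name, and Docker credentials are illustrative:

{
    "Provider": "AWS",
    "Label": "ANDY",
    "Tag": "TEST",
    "DataVolumeSize": "10",
    "SSHUser": "ubuntu",
    "SSHPublicKey": "/mydir/keys/mykey.pub",
    "SSHPrivateKey": "/mydir/keys/mykey.ppk",
    "DockerImage": "acme/iris:latest",
    "DockerUsername": "AndyB",
    "DockerPassword": "password",
    "TLSKeyDir": "/mydir/keys/tls/",
    "LicenseDir": "/mydir/licenses/",
    "Region": "us-east-2",
    "Zone": "us-east-2a",
    "DockerVersion": "ce-18.09.9.ce",
    "AMI": "ami-e24b7d9d",
    "InstanceType": "m5ad.large",
    "Credentials": "/mydir/aws-credentials",
    "Mirror": "true"
}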
Important:

The major versions of the image from which you launched ICM and the InterSystems IRIS image you specify using the DockerImage field must match; for example, you cannot deploy a 2019.4 version of InterSystems IRIS using a 2019.3 version of ICM. For information about upgrading ICM before you upgrade your InterSystems containers, see Upgrading ICM Using Distributed Management Mode in the appendix “Sharing Deployments in Distributed Management Mode”.

Distributed Cache Cluster Definitions File

The definitions.json file for the distributed cache cluster must define the following nodes:

  • Two data servers (role DM), configured as a mirror

  • Three application servers (role AM)

  • Load balancer for application servers

  • Arbiter node for data server mirror

This configuration is illustrated in the following:

[Figure: Distributed Cache Cluster to be Deployed by ICM]

The following lists the field/value pairs that are required for this configuration.

  • Two data servers (DM) using a standard InterSystems IRIS license, configured as a mirror because "Mirror": "true" in the shared defaults file. Instance type, OS volume size, and data volume size override settings in the defaults file to meet data server resource requirements.

    "Role": "DM",
    "Count": "2",
    "LicenseKey": "standard-iris.key",
    "InstanceType": "m4.xlarge",
    "OSVolumeSize": "32",
    "DataVolumeSize": "150",

  • Three application servers (AM) using a standard InterSystems IRIS license. Numbering in node names starts at 0003 to follow DM nodes 0001 and 0002; a load balancer for the application servers is automatically provisioned.

    "Role": "AM",
    "Count": "3",
    "LicenseKey": "standard-iris.key",
    "StartCount": "3",
    "LoadBalancer": "true",

  • One arbiter (AR) for the data server mirror, numbered 0006. No license is required, use of the arbiter image overrides the InterSystems IRIS image specified in the defaults file, and the instance type overrides the defaults file because the arbiter requires only limited resources.

    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:stable",
    "StartCount": "6",
    "InstanceType": "t2.small"

A definitions.json file incorporating the settings in the preceding table would look like this:

[
    {
        "Role": "DM",
        "Count": "2",
        "LicenseKey": "standard-iris.key”,
        "InstanceType": "m4.xlarge",
        "OSVolumeSize": "32",
        "DataVolumeSize": "150"
    },
    {
        "Role": "AM",
        "Count": "3",
        "LicenseKey": "standard-iris.key”,
        "StartCount": "3",
        "LoadBalancer": "true"
    },
    {
        "Role": "AR",
        "Count": "1",
        "DockerImage": "intersystems/arbiter:stable",
        "StartCount": "6",
        "InstanceType": "t2.small"
    }
]

Sharded Cluster Definitions File

The definitions.json file for the sharded cluster configuration must define six DATA nodes, deployed as three load-balanced mirrored pairs, plus an arbiter (AR) node. This is illustrated in the following:

[Figure: Sharded Cluster to be Deployed by ICM]

The following lists the field/value pairs that are required for this configuration.

  • Six data nodes (DATA) using an InterSystems IRIS sharding license, configured as three mirrors because "Mirror": "true" in the shared defaults file. Instance type and data volume size override settings in the defaults file to meet data node resource requirements; a load balancer for the data nodes is automatically provisioned.

    "Role": "DATA",
    "Count": "6",
    "LicenseKey": "sharding-iris.key",
    "InstanceType": "m4.4xlarge",
    "DataVolumeSize": "250",
    "LoadBalancer": "true",

  • One arbiter (AR) for the data node mirrors, numbered 0007. No license is required, use of the arbiter image overrides the InterSystems IRIS image specified in the defaults file, and the instance type overrides the defaults file because the arbiter requires only limited resources.

    "Role": "AR",
    "Count": "1",
    "DockerImage": "intersystems/arbiter:stable",
    "StartCount": "7",
    "InstanceType": "t2.small"

A definitions.json file incorporating the settings in the preceding table would look like this:

[
    {
        "Role": "DATA",
        "Count": "6",
        "LicenseKey": "sharding-iris.key”,
        "InstanceType": "m4.xlarge",
        "DataVolumeSize": "250",
        "LoadBalancer": "true"
    },
    {
        "Role": "AR",
        "Count": "1",
        "DockerImage": "intersystems/arbiter:stable",
        "StartCount": "7",
        "InstanceType": "t2.small"
    }
]
Note:

The DATA and COMPUTE node types were added to ICM in Release 2019.3 to support the node-level sharding architecture. Previous versions of this document described the namespace-level sharding architecture, which involves a different, larger set of node types. The namespace-level architecture remains in place as the transparent foundation of the node-level architecture and is fully compatible with it, and the node types used to deploy it are still available in ICM. For information about all available node types, see ICM Node Types.

For more detailed information about the specifics of deploying a sharded cluster, such as database cache size and data volume size requirements, see Deploying the Sharded Cluster in the “Horizontally Scaling for Data Volume with Sharding” chapter of the Scalability Guide.

The recommended best practice is to load-balance application connections across all of the data nodes in a cluster.

Customizing InterSystems IRIS Configurations

Every InterSystems IRIS instance, including those running in the containers deployed by ICM, is installed with a predetermined set of configuration settings, recorded in its configuration parameters file (CPF). You can use the UserCPF field in your defaults file (as illustrated in the /Samples/AWS/defaults.json file in Shared Defaults File) to specify a CPF merge file, allowing you to override one or more of these configuration settings for all of the InterSystems IRIS instances you deploy, or in your definitions file to override different settings for different node types, such as the DM and AM nodes in a distributed cache cluster or the DATA and COMPUTE nodes in a sharded cluster. For example, as described in Planning an InterSystems IRIS Sharded Cluster in the “Sharding” chapter of the Scalability Guide, you might want to adjust the size of the database caches on the data nodes in a sharded cluster, which you could do by overriding the value of the [config]/globals CPF setting for the DATA definitions only. For information about using a merge file to override initial CPF settings, see Deploying with Customized InterSystems IRIS Configurations in the “ICM Reference” chapter.
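For example, a minimal CPF merge file that overrides the size of the 8 KB database cache via [config]/globals might contain only the following; this is a sketch, and the 4096 MB value is purely illustrative, not a recommendation:

[config]
globals=0,0,4096,0,0,0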

A simple CPF merge file is provided in the /Samples/cpf directory in the ICM container, and the sample defaults files in all of the /Samples provider subdirectories include the UserCPF field, pointing to this file. Remove UserCPF from your defaults file unless you are sure you want to merge its contents into the default CPFs of deployed InterSystems IRIS instances.

Information about InterSystems IRIS configuration settings, their effects, and their installed defaults is provided in the Installation Guide, the System Administration Guide, and the Configuration Parameter File Reference.

Provision the Infrastructure

ICM provisions cloud infrastructure using the HashiCorp Terraform tool.

Note:

ICM can deploy containers on existing cloud, virtual or physical infrastructure; see Deploying on a Preexisting Cluster for more information.

The icm provision Command

The icm provision command allocates and configures host nodes, using the field values provided in the definitions.json and defaults.json files (as well as default values for unspecified parameters where applicable). By default, the input files in the current working directory are used, and during provisioning, the state directory, instances.json file, and log files are created under it (for details on these files see Configuration, State and Log Files in the chapter “Essential ICM Elements”).

Because the state directory and the generated instances.json file (which serves as input to subsequent reprovisioning, deployment, and management commands) are unique to a deployment, setting up a directory for each deployment and using these default locations within it is generally the simplest effective approach. For example, in the scenario described here, in which two different deployments share a defaults file, the simplest approach might be to set up one deployment in a directory such as /Samples/AWS, as shown in Shared Defaults File, then copy that entire directory (for example to /Samples/AWS2) and replace the first definitions.json file with the second.

Alternatively, you can use the -definitions or -defaults options to specify a nondefault location for one or both of these configuration files, but if you do, you must also include these options for all subsequent ICM commands you run for this deployment. For example, if you execute the command icm provision -defaults ./config_files, you must add -defaults ./config_files to all subsequent commands you issue for that deployment. The same applies to the use of the -stateDir option to specify a nondefault location for the state directory and the -instances option to do the same for the instances.json file.
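For example, a deployment whose configuration files live in a nondefault ./config_files directory (a hypothetical path) must carry the override through its entire life cycle:

icm provision -defaults ./config_files
icm run -defaults ./config_files
icm unprovision -defaults ./config_files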

While the provisioning operation is ongoing, ICM provides status messages regarding the plan phase (the Terraform phase that validates the desired infrastructure and generates state files) and the apply phase (the Terraform phase that accesses the cloud provider, carries out allocation of the machines, and updates state files). Because ICM runs Terraform in multiple threads, the order in which machines are provisioned and in which additional actions are applied to them is not deterministic. This is illustrated in the sample output that follows.

At completion, ICM also provides a summary of the host nodes and associated components that have been provisioned, and outputs a command line which can be used to unprovision (delete) the infrastructure at a later date.

The following example is excerpted from the output of provisioning of the distributed cache cluster described in Define the Deployment.

$ icm provision 
Starting init of ANDY-TEST...
...completed init of ANDY-TEST
Starting plan of ANDY-DM-TEST...
...
Starting refresh of ANDY-TEST...

...
Starting apply of ANDY-DM-TEST...
...
Copying files to ANDY-DM-TEST-0002...
...
Configuring ANDY-AM-TEST-0003...
...
Mounting volumes on ANDY-AM-TEST-0004...
...
Installing Docker on ANDY-AM-TEST-0003...
...
Installing Weave Net on ANDY-DM-TEST-0001...
...
Collecting Weave info for ANDY-AR-TEST-0006...
...
...collected Weave info for ANDY-AM-TEST-0005
...installed Weave Net on ANDY-AM-TEST-0004

Machine            IP Address     DNS Name                                           Region    Zone
-------            ----------     --------                                           ------    ----
ANDY-DM-TEST-0001+ 00.53.183.209  ec2-00-53-183-209.us-west-1.compute.amazonaws.com  us-west-1 c
ANDY-DM-TEST-0002- 00.53.183.185  ec2-00-53-183-185.us-west-1.compute.amazonaws.com  us-west-1 c
ANDY-AM-TEST-0003  00.56.59.42    ec2-00-56-59-42.us-west-1.compute.amazonaws.com    us-west-1 c
ANDY-AM-TEST-0005  00.67.1.11     ec2-00-67-1-11.us-west-1.compute.amazonaws.com     us-west-1 c
ANDY-AM-TEST-0004  00.193.117.217 ec2-00-193-117-217.us-west-1.compute.amazonaws.com us-west-1 c
ANDY-LB-TEST-0002  (virtual AM)   ANDY-AM-TEST-1546467861.amazonaws.com              us-west-1 c
ANDY-AR-TEST-0006  00.53.201.194  ec2-00-53-201-194.us-west-1.compute.amazonaws.com  us-west-1 c
To destroy: icm unprovision [-cleanUp] [-force]

Important:

Unprovisioning public cloud host nodes in a timely manner avoids unnecessary expense. The icm unprovision command line in the icm provision output reflects your use of the -definitions, -defaults, -stateDir, or -instances options, if any; for example, if you use the command icm provision -stateDir /Samples/state/statedir06June2020, the icm unprovision command line in the output includes this option. Because such override options are required for the icm unprovision command to be successful, it is a useful practice to immediately copy and store this icm unprovision command line, so you can easily replicate it when unprovisioning. This output also appears in the icm.log file.

Interactions with cloud providers sometimes involve high latency leading to timeouts and internal errors on the provider side, and errors in the configuration files can also cause provisioning to fail. Because the icm provision command is fully reentrant, it can be issued multiple times until ICM completes all the required tasks for all the specified nodes without error. For more information, see the next section, Reprovisioning the Infrastructure.

Reprovisioning the Infrastructure

To make the provisioning process as flexible and resilient as possible, the icm provision command is fully reentrant — it can be issued multiple times for the same deployment. There are two primary reasons for reprovisioning infrastructure by executing the icm provision command more than once, as follows:

  • Overcoming provisioning errors

    Interactions with cloud providers sometimes involve high latency leading to timeouts and internal errors on the provider side. If errors are encountered during provisioning, the command can be issued multiple times until ICM completes all the required tasks for all the specified nodes without error.

  • Modifying provisioned infrastructure

    When your needs change, you can modify infrastructure that has already been provisioned, including configurations on which services have been deployed, at any time by changing the characteristics of existing nodes, adding nodes, or removing nodes.

When you repeat the icm provision command following an error, the instances.json file does not yet exist, so you must use the -stateDir option to identify the incomplete infrastructure you want to continue provisioning if the state directory is not in the current working directory (and you must repeat any location override options, such as -definitions or -defaults, that you used with the original command). When you repeat the command to modify successfully provisioned infrastructure, however, you do not need to do so; as long as you are working in the directory containing the instances.json file, it is automatically used to identify the infrastructure you are reprovisioning. (If working elsewhere, you can use the -instances option to indicate the location of the file.) This is shown in the sections that follow.

Overcoming Provisioning Errors

When you issue the icm provision command and errors prevent successful provisioning, the state directory is created, but the instances.json file is not. Simply issue the icm provision command again, using the -stateDir option to specify the state directory's location if it is not in the current working directory; the state directory indicates that provisioning is incomplete and provides the needed information about what has been done and what hasn't. For example, suppose you encounter the problem in the following:

$ icm provision
Starting plan of ANDY-DM-TEST...
...completed plan of ANDY-DM-TEST
Starting apply of ANDY-AM-TEST...
Error: Thread exited with value 1
See /Samples/AWS/state/ANDY-AM-TEST/terraform.err

Review the indicated errors, fix as needed, then run icm provision again in the same directory:

$ icm provision 
Starting plan of ANDY-DM-TEST...
...completed plan of ANDY-DM-TEST
Starting apply of ANDY-DM-TEST...
...completed apply of ANDY-DM-TEST
[...]
To destroy: icm unprovision [-cleanUp] [-force]

Modifying Provisioned Infrastructure

At any time following successful provisioning — including after successful services deployment using the icm run command — you can alter the provisioned infrastructure or configuration by modifying your definitions.json file and executing the icm provision command again. (If you are not issuing the command in the directory containing the definitions file, instances file, or state directory, you must include the needed override options — -definitions, -instances, or -stateDir — to specify their locations.) If changing a deployed configuration, you would then execute the icm run command again, as described in Redeploying Services.

You can modify existing infrastructure or a deployed configuration in the following ways.

  • To change the characteristics of one or more nodes, change settings within the node definitions in the definitions file. You might want to do this to vertically scale the nodes; for example, in the following definition, you could change the DataVolumeSize setting (see General Parameters) to increase the sizes of the DM nodes’ data volumes:

    {
        "Role": "DM",
        "Count": "2",
        "LicenseKey": "standard-iris.key”,
        "InstanceType": "m4.xlarge",
        "OSVolumeSize": "32",
        "DataVolumeSize": "25"
     },
    
    Caution:

    Modifying attributes of existing nodes such as changing disk sizes, adding CPUs, and so on may cause those nodes (including their persistent storage) to be recreated. This behavior is highly specific to each cloud provider, and caution should be used to avoid the possibility of corrupting or losing data.

    Important:

    Changes to the Label and Tag fields in the definitions.json file are not supported when reprovisioning.

  • To add nodes, modify the definitions.json file in one or more of the following ways:

    • Add a new node type by adding a definition. For example, if you have deployed a sharded cluster with data nodes only, you can add compute nodes by adding an appropriate COMPUTE node definition to the definitions file.

    • Add more of an existing node type by increasing the Count specification in its definition. For example, to add two more application servers to a distributed cache cluster that already has two, you would modify the AM definition by changing “Count”: “2” to “Count”: “4”. When you add nodes to existing infrastructure or a deployed configuration, existing nodes are not restarted or modified, and their persistent storage remains intact.

      Note:

      When you add data nodes to a deployed sharded cluster after it has been loaded with data, you can automatically redistribute the sharded data across the new servers (although this must be done with the cluster offline); for more information, see Add Shard Data Servers and Rebalance Data in the “Horizontally Scaling InterSystems IRIS for Data Volume with Sharding” chapter of the Scalability Guide.

      Generally, there are many application-specific attributes that cannot be modified by ICM and must be modified manually after adding nodes.

    • Add a load balancer by adding "LoadBalancer": "true" to DATA, COMPUTE, AM, or WS node definitions.

  • To remove nodes, decrease the Count specification in the node type definition. To remove all nodes of a given type, reduce the count to 0.

    Caution:

    Do not remove a definition entirely; Terraform will not detect the change and your infrastructure or deployed configuration will include orphaned nodes that ICM is no longer tracking.

    Important:

    When removing one or more nodes, you cannot choose which to remove; rather nodes are unprovisioned on a last in, first out basis, so the most recently created node is the first one to be removed. This is especially significant when you have added one or more nodes in a previous reprovisioning operation, as these will be removed before the originally provisioned nodes.

    You can also remove load balancers by removing "LoadBalancer": "true" from a node definition, or changing the value to false.

There are some limitations to modifying an existing configuration through reprovisioning, as follows:

  • You cannot remove nodes that store data — DATA, DM, and DS.

  • You cannot change a configuration from nonmirrored to mirrored, or the reverse.

  • You can add DATA nodes to a mirrored sharded cluster only in an even number (to be configured as failover pairs). You can add DM nodes to a nonsharded mirrored configuration only as async members of the existing DM mirror, assuming the definition includes the appropriate MirrorMap setting, as described in Rules for Mirroring.

  • You can add an arbiter (AR node) to a mirrored configuration, but it must be manually configured as arbiter, using the management portal or ^MIRROR routine, for each mirror in the configuration.

By default, when issuing the icm provision command to modify existing infrastructure, ICM prompts you to confirm; you can avoid this, for example when using a script, by using the -force option.

Remember that after reprovisioning a deployed configuration, you must issue the icm run command again to redeploy.

Infrastructure Management Commands

The commands in this section are used to manage the infrastructure you have provisioned using ICM.

Many ICM command options can be used with more than one command. For example, the -role option can be used with a number of commands to specify the role of the nodes for which the command should be run — for example, icm inventory -role AM lists only the nodes in the deployment that are of type AM — and the -image option, which specifies an image from which to deploy containers, can be used with both the icm run and icm upgrade commands. For complete lists of ICM commands and their options, see ICM Commands and Options in the “ICM Reference” chapter.

icm inventory

The icm inventory command lists the provisioned nodes, as shown at the end of the provisioning output, based on the information in the instances.json file (see The Instances File in the chapter “Essential ICM Elements”). For example:

$ icm inventory
Machine            IP Address     DNS Name                                           Region    Zone
-------            ----------     --------                                           ------    ----
ANDY-DM-TEST-0001+ 00.53.183.209  ec2-00-53-183-209.us-west-1.compute.amazonaws.com  us-west-1 c
ANDY-DM-TEST-0002- 00.53.183.185  ec2-00-53-183-185.us-west-1.compute.amazonaws.com  us-west-1 c
ANDY-AM-TEST-0003  00.56.59.42    ec2-00-56-59-42.us-west-1.compute.amazonaws.com    us-west-1 c
ANDY-AM-TEST-0005  00.67.1.11     ec2-00-67-1-11.us-west-1.compute.amazonaws.com     us-west-1 c
ANDY-AM-TEST-0004  00.193.117.217 ec2-00-193-117-217.us-west-1.compute.amazonaws.com us-west-1 c
ANDY-LB-TEST-0002  (virtual AM)   ANDY-AM-TEST-1546467861.amazonaws.com              us-west-1 c
ANDY-AR-TEST-0006  00.53.201.194  ec2-00-53-201-194.us-west-1.compute.amazonaws.com  us-west-1 c
Note:

When mirrored nodes are part of a configuration, initial mirror failover assignments are indicated by a + (plus) following the machine name of each intended primary and a - (minus) following the machine name of each intended backup, as shown in the preceding example. These assignments can change, however; following deployment, use the icm ps command to display the mirror member status of the deployed nodes.

You can also use the -machine or -role options to filter by node name or role, for example, with the same cluster as in the preceding example:

$ icm inventory -role AM
Machine           IP Address     DNS Name                                           Region    Zone
-------           ----------     --------                                           ------    ----
ANDY-AM-TEST-0003 00.56.59.42    ec2-00-56-59-42.us-west-1.compute.amazonaws.com    us-west-1 c
ANDY-AM-TEST-0005 00.67.1.11     ec2-00-67-1-11.us-west-1.compute.amazonaws.com     us-west-1 c
ANDY-AM-TEST-0004 00.193.117.217 ec2-00-193-117-217.us-west-1.compute.amazonaws.com us-west-1 c

If the fully qualified DNS names from the cloud provider are too wide for a readable display, you can use the -options option with the wide argument to make the output wider, for example:

icm inventory -options wide

For more information on the -options option, see Using ICM with Custom and Third-Party Containers.

icm ssh

The icm ssh command runs an arbitrary command on the specified host nodes. Because mixing output from multiple commands would be hard to interpret, the output is written to files and a list of output files provided, for example:

$ icm ssh -command "ping -c 5 intersystems.com" -role DM
Executing command 'ping -c 5 intersystems.com' on ANDY-DM-TEST-0001...
Executing command 'ping -c 5 intersystems.com' on ANDY-DM-TEST-0002...
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0001/ssh.out
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0002/ssh.out

However, when the -machine and/or -role options are used to specify exactly one node, as in the following, or there is only one node, the output is also written to the console:

$ icm ssh -command "df -k" -machine ANDY-DM-TEST-0001
Executing command 'df -k' on ANDY-DM-TEST-0001...
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0001/ssh.out 

Filesystem     1K-blocks    Used Available Use% Mounted on
rootfs          10474496 2205468   8269028  22% /
tmpfs            3874116       0   3874116   0% /dev
tmpfs            3874116       0   3874116   0% /sys/fs/cgroup
/dev/xvda2      33542124 3766604  29775520  12% /host
/dev/xvdb       10190100   36888   9612540   1% /irissys/data
/dev/xvdc       10190100   36888   9612540   1% /irissys/wij
/dev/xvdd       10190100   36888   9612540   1% /irissys/journal1
/dev/xvde       10190100   36888   9612540   1% /irissys/journal2
shm                65536     492     65044   1% /dev/shm

The icm ssh command can also be used in interactive mode to execute long-running, blocking, or interactive commands on a host node. Unless the command is run on a single-node deployment, the -interactive flag must be accompanied by a -role or -machine option restricting the command to a single node. If the -command option is not used, the destination user's default shell (for example bash) is launched.
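For example, the following sketch opens the default shell on a single specified node; the machine name is taken from the sample deployment above:

icm ssh -interactive -machine ANDY-DM-TEST-0001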

See icm exec for an example of running a command interactively.

Note:

Two commands described in Service Management Commands, icm exec (which runs an arbitrary command on the specified containers) and icm session (which opens an interactive session for the InterSystems IRIS instance on a specified node) can be grouped with icm ssh as a set of powerful tools for interacting with your ICM deployment. The icm scp command, which securely copies a file or directory from the local ICM container to the host OS of the specified node or nodes, is frequently used with icm ssh.

icm scp

The icm scp command securely copies a file or directory from the local ICM container to the host OS of the specified node or nodes. The command syntax is as follows:

icm scp -localPath local-path [-remotePath remote-path]

Both localPath and remotePath can be either files or directories. If remotePath is a directory, it must contain a trailing forward slash (/), or it will be assumed to be a file. If both are directories, the contents of the local directory are recursively copied; if you want the directory itself to be copied, remove the trailing slash (/) from localPath.

The default for the optional remote-path argument is /home/ssh-user. The root directory of this path, /home, is the default home directory; to change it, specify a different root directory using the Home field. The user specified by the SSHUser field must have the needed permissions for remotePath.
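For example, the following sketch copies a hypothetical local file to the default remote path on all provisioned nodes:

icm scp -localPath /Samples/mydata.csv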

Note:

See also the icm cp command, which copies a local file or directory on the specified node into the specified container.

Deploy and Manage Services

ICM carries out deployment of software services using Docker images, which it runs as containers by making calls to Docker. Containerized deployment using images supports ease of use and DevOps adaptation while avoiding the risks of manual upgrade. In addition to Docker, ICM also carries out some InterSystems IRIS-specific configuration over JDBC.

There are many container management and orchestration tools available, and these can be used to extend ICM’s deployment and management capabilities.

The icm run Command

The icm run command pulls, creates, and starts a container from the specified image on each of the provisioned nodes. By default, the image specified by the DockerImage field in the configuration files is used, and the name of the deployed container is iris. This name is reserved for and should be used only for containers created from the following InterSystems images (or images based on these images):

  • iris — Contains an instance of InterSystems IRIS.

    The InterSystems IRIS images distributed by InterSystems, and how to use one as a base for a custom image that includes your InterSystems IRIS-based application, are described in detail in Running InterSystems Products in Containers. In addition to the iris image for standard platforms, InterSystems IRIS images for ARM platforms are named iris-arm64.

    When deploying the iris image or the spark image (below), you can override one or more InterSystems IRIS configuration settings for all of the iris or spark containers you deploy, or override different settings for containers deployed on different node types; for more information, see Deploying with Customized InterSystems IRIS Configurations in the “ICM Reference” chapter.

  • spark — Contains an instance of InterSystems IRIS plus Apache Spark.

    The spark image allows you to conveniently add Apache Spark capabilities to the data nodes of an ICM-deployed sharded cluster. The deployed spark-based containers create a Spark framework corresponding to the deployment by starting a Spark master on data node 1 and a Spark worker on all the data nodes, preconfigured to connect to the InterSystems IRIS instances running in the containers. For more information on using Spark with InterSystems IRIS, see Using the InterSystems IRIS Spark Connector.

    Note:

    To start Spark in the deployed containers when using this image, you must include "spark": "true" in the defaults file.

  • arbiter — Contains an ISCAgent instance to act as mirror arbiter.

    The arbiter image is deployed on an AR node, which is configured as the arbiter host in a mirrored deployment. For more information on mirrored deployment and topology, see ICM Cluster Topology.

  • webgateway — Contains an InterSystems Web Gateway installation along with an Apache web server.

    The webgateway image is deployed on a WS node, which is configured as a web server for DATA or DATA and COMPUTE nodes in a node-level sharded cluster, AM or DM nodes in other configurations.

By including the DockerImage field in each node definition in the definitions.json file, you can run different InterSystems IRIS images on different node types. For example, you must do this to run the arbiter image on the AR node and the webgateway image on WS nodes while running the iris image on the other nodes. For a list of node types and corresponding InterSystems images, see ICM Node Types in the “ICM Reference” chapter.

Important:

If the wrong InterSystems image is specified for a node by the DockerImage field or the -image option of the icm run command — for example, if the iris image is specified for an AR (arbiter) node, or any InterSystems image for a CN node — deployment fails, with an appropriate message from ICM. Therefore, when the DockerImage field specifies the iris or spark image in the defaults.json file and you include an AR or WS definition in the definitions.json file, you must include the DockerImage field in the AR or WS definition to override the default and specify the appropriate image (arbiter or webgateway, respectively).

The major versions of the image from which you launched ICM and the InterSystems images you specify using the DockerImage field must match; for example, you cannot deploy a 2019.4 version of InterSystems IRIS using a 2019.3 version of ICM. For information about upgrading ICM before you upgrade your InterSystems containers, see Upgrading ICM Using Distributed Management Mode in the appendix “Sharing Deployments in Distributed Management Mode”.

For information about using InterSystems images other than iris (such as the arbiter image) outside of ICM, contact the InterSystems Worldwide Response Center (WRC).

Note:

Container images from InterSystems comply with the Open Container Initiative (OCI) specification, and are built using the Docker Enterprise Edition engine, which fully supports the OCI standard and allows for the images to be certified and featured in the Docker Hub registry.

InterSystems images are built and tested using the widely popular Ubuntu container operating system, and are therefore supported on any OCI-compliant runtime engine on Linux-based operating systems, both on premises and in public clouds.

For a brief introduction to the use of InterSystems IRIS in Docker containers, including a hands-on experience, see First Look: InterSystems IRIS in Docker Containers; for detailed information about deploying InterSystems IRIS and InterSystems IRIS-based applications in Docker containers using methods other than ICM, see Running InterSystems IRIS in Containers.

You can also use the -image and -container command-line options with icm run to specify a different image and container name. This allows you to deploy multiple containers created from multiple images on each provisioned node by using the icm run command multiple times — the first time to run the images specified by the DockerImage fields in the node definitions and deploy the iris container (of which there can be only one) on each node, as described in the foregoing paragraphs, and one or more subsequent times with the -image and -container options to run a custom image on all of the nodes or some of the nodes. Each container running on a given node must have a unique name. The -machine and -role options can also be used to restrict container deployment to a particular node, or to nodes of a particular type, for example, when deploying your own custom container on a specific provisioned node.
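For example, after an initial icm run has deployed the iris container on every node, a second invocation like the following sketch (the image and container names are hypothetical) deploys an additional custom container on the AM nodes only:

icm run -image acme/mytools:latest -container mytools -role AM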

Another frequently used option, -iscPassword, specifies the InterSystems IRIS password to set for all deployed InterSystems IRIS containers; this value could be included in the configuration files, but the command line option avoids committing a password to a plain-text record. If the InterSystems IRIS password is not provided by either method, ICM prompts for it (with typing masked).

Note:

For security, ICM does not transmit the InterSystems IRIS password (however specified) in plain text, but instead uses a cryptographic hash function to generate a hashed password and salt locally, then sends these using SSH to the deployed InterSystems IRIS containers on the host nodes.

Given all of the preceding, consider the following three examples of container deployment using the icm run command. (These do not present complete procedures, but are limited to the procedural elements relating to the deployment of particular containers on particular nodes.)

  • To deploy a distributed cache cluster with one DM node and several AM nodes:

    1. When creating the defaults.json file, as described in Configuration, State, and Log Files and Define the Deployment, include the following to specify the default image from which to create the iris containers:

      "DockerImage": "intersystems/iris:stable"
    2. Execute the following command on the ICM command line:

      icm run -iscPassword "<password>"

      A container named iris containing an InterSystems IRIS instance with its initial password set as specified is deployed on each of the nodes; ICM performs the needed ECP configuration following container deployment.

  • To deploy a basic sharded cluster with Spark, mirrored DATA nodes, and an AR (arbiter) node:

    1. When creating the defaults.json file, as described in Configuration, State, and Log Files and Define the Deployment, include the following to specify the default image from which to create the iris containers, to enable ICM to start Spark in the containers when deployed, and to enable mirroring (as described in Rules for Mirroring):

      "DockerImage": "intersystems/spark:stable",
      "Spark": "true",
      "Mirror": "true"
    2. When creating the definitions.json file, override the DockerImage field in the defaults file for the AR node only by specifying the arbiter image in the AR node definition, for example:

        {
             "Role": "AR",
             "Count": "1",
             "DockerImage": "intersystems/arbiter:stable"
         }
      
    3. Execute the following command on the ICM command line:

      icm run -iscPassword "<password>"

      A container named iris containing both an InterSystems IRIS instance with its initial password set as specified and Apache Spark is deployed on each of the DATA nodes; a container named iris containing an ISCAgent to act as mirror arbiter is deployed on the AR node; ICM performs the needed sharding, Spark, and mirroring configuration following container deployment.

  • To deploy a DM node with a stand-alone InterSystems IRIS instance in the iris container and an additional container created from a custom image, plus several WS nodes connected to the DM:

    1. When creating the definitions.json file, as described in Configuration, State, and Log Files and Define the Deployment, specify the iris image for the DM node and the webgateway image for the WS nodes, for example:

        {
             "Role": "DM",
             "Count": "1",
             "DockerImage": "intersystems/iris-arm64:stable"
         },
         {
             "Role": "WS",
             "Count": "3",
             "DockerImage": "intersystems/webgateway:stable"
         }
      
      
    2. Execute the following command on the ICM command line:

      icm run 

      ICM prompts for the initial InterSystems IRIS password (with typing masked). A container named iris containing an InterSystems IRIS instance is deployed on the DM node, a container named iris containing an InterSystems Web Gateway installation and an Apache web server is deployed on each of the WS nodes, and ICM performs the needed web server configuration following container deployment.

    3. Execute another icm run command to deploy the custom container on the DM node, for example either of the following:

      icm run -container customsensors -image myrepo/sensors:stable -role DM
      
      icm run -container customsensors -image myrepo/sensors:stable -machine ANDY-DM-TEST-0001

      A container named customsensors created from the image sensors in your repository is deployed on the DM node.

Bear in mind the following further considerations:

  • The container name iris remains the default for all ICM container and service management commands (as described in the following sections), so when you execute a command involving an additional container you have deployed using another name, you must refer to that name explicitly using the -container option. For example, to remove the custom container in the last example from the DM node, you would issue the following command:

    icm rm -container customsensors -machine ANDY-DM-TEST-0001

    Without -container customsensors, this command would remove the iris container by default.

  • The DockerRegistry, DockerUsername, and DockerPassword fields are required to specify and log into (if it is private) the Docker repository in which the specified image is located; for details see Docker Repositories. A sketch of these fields follows this list.

  • If you use the -namespace command line option with the icm run command to override the namespace specified in the defaults file (or the default of IRISCLUSTER if not specified in defaults), the value of the Namespace field in the instances.json file (see The Instances File in the chapter “Essential ICM Elements”) is updated with the name you specified, and this becomes the default namespace when using the icm session and icm sql commands.
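
To illustrate the preceding considerations, the registry fields might appear in the defaults.json file as follows (the registry URL and credentials are hypothetical placeholders):

"DockerRegistry": "https://registry.example.com",
"DockerUsername": "sample_user",
"DockerPassword": "sample_password"

Similarly, a hypothetical namespace override on the icm run command line might look like this:

icm run -namespace ACCOUNTING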

Additional Docker options, such as --volume, can be specified on the icm run command line using the -options option, for example:

icm run -options "--volume /shared:/host" -image intersystems/iris:stable

For more information on the -options option, see Using ICM with Custom and Third-Party Containers.

The -command option can be used with icm run to provide arguments to (or in place of) the Docker entry point; for more information, see Overriding Default Commands.

Because ICM issues Docker commands in multiple threads, the order in which containers are deployed on nodes is not deterministic. This is illustrated in the example that follows, which represents output from deployment of the sharded cluster configuration described in Define the Deployment. Repetitive lines are omitted for brevity.

$ icm run -definitions definitions_cluster.json
Executing command 'docker login' on ANDY-DATA-TEST-0001...
...output in /Samples/AWS/state/ANDY-DATA-TEST/ANDY-DATA-TEST-0001/docker.out
...
Pulling image intersystems/iris:stable on ANDY-DATA-TEST-0001...
...pulled ANDY-DATA-TEST-0001 image intersystems/iris:stable
...
Creating container iris on ANDY-DATA-TEST-0002...
...
Copying license directory /Samples/license/ to ANDY-DATA-TEST-0003...
...
Starting container iris on ANDY-DATA-TEST-0004...
...
Waiting for InterSystems IRIS to start on ANDY-DATA-TEST-0002...
...
Configuring SSL on ANDY-DATA-TEST-0001...
...
Enabling ECP on ANDY-DATA-TEST-0003...
...
Setting System Mode on ANDY-DATA-TEST-0002...
...
Acquiring license on ANDY-DATA-TEST-0002...
...
Enabling shard service on ANDY-DATA-TEST-0001...
...
Assigning shards on ANDY-DATA-TEST-0001...
...
Configuring application server on ANDY-DATA-TEST-0003...
...
Management Portal available at: http://ec2-00-56-140-23.us-west-1.compute.amazonaws.com:52773/csp/sys/UtilHome.csp 

At completion, ICM outputs a link to the Management Portal of the appropriate InterSystems IRIS instance. When multiple data nodes are deployed, the provided Management Portal link is for the lowest-numbered among them, in this case the instance running in the iris container on ANDY-DATA-TEST-0001.

Redeploying Services

To make the deployment process as flexible and resilient as possible, the icm run command is fully reentrant — it can be issued multiple times for the same deployment. When an icm run command is repeated, ICM stops and removes the affected containers (the equivalent of icm stop and icm rm), then creates and starts them from the applicable images again, while preserving the storage volumes for InterSystems IRIS instance-specific data that it created and mounted as part of the initial deployment pass (see Storage Volumes Mounted by ICM in the “ICM Reference” chapter).

There are four primary reasons for redeploying services by executing an icm run command more than once, as follows:

  • Redeploying the existing containers with their existing storage volumes.

    To replace deployed containers with new versions while preserving the instance-specific storage volumes of the affected InterSystems IRIS containers, thereby redeploying the existing instances, simply repeat the original icm run command that first deployed the containers. You might do this if you have made a change in the definitions files that requires redeployment, for example you have updated the licenses in the directory specified by the LicenseDir field.

  • Redeploying the InterSystems IRIS containers without the existing storage volumes.

    To replace the InterSystems IRIS containers in the deployment without preserving their instance-specific storage volumes, you can delete that data for those instances before redeploying using the following command:

    icm ssh -command "sudo rm -rf /<mount_dir>/*/*"

    where mount_dir is the directory (or directories) under which the InterSystems IRIS data, WIJ, and journal directories are mounted (which is /irissys/ by default, or as configured by the DataMountPoint, WIJMountPoint, Journal1MountPoint, and Journal2MountPoint fields; for more information, see Storage Volumes Mounted by ICM in the “ICM Reference” chapter). You can use the -role or -machine options to limit this command to specific nodes, if you wish (see the sketch following this list). When you then repeat the icm run command that originally deployed the InterSystems IRIS containers, those that still have instance-specific volumes are redeployed as the same instances, while those for which you deleted the volumes are redeployed as new instances.

  • Deploying services on nodes you have added to the infrastructure, as described in Reprovisioning the Infrastructure.

    When you repeat an icm run command after adding nodes to the infrastructure, containers on the existing nodes are redeployed as described in the preceding (with their storage volumes, or without if you have deleted them) while new containers are deployed on the new nodes. This allows the existing nodes to be reconfigured for the new deployment topology, if necessary.

  • Overcoming deployment errors.

    If the icm run command fails on one or more nodes due to factors outside ICM’s control, such as network latency and disconnects or interruptions in cloud provider service (as indicated by error log messages), you can issue the command again; in most cases, deployment will succeed on repeated tries. If the error persists, however, and requires manual intervention — for example, if it is caused by an error in one of the configuration files — you may need to delete the storage volumes on the node or nodes affected, as described in the preceding, before reissuing icm run after fixing the problem. This is because ICM recognizes a node without instance-specific data as a new node, and marks the storage volumes of an InterSystems IRIS container as fully deployed only when all configuration is successfully completed; if configuration begins but fails short of success and the volumes are not marked, ICM cannot redeploy on that node. In a new deployment, you may find it easiest to issue the command icm ssh -command "sudo rm -rf /irissys/*/*" without -role or -machine constraints to roll back all nodes on which InterSystems IRIS is to be redeployed.
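
For example, a sketch of redeploying the InterSystems IRIS containers on the DATA nodes as new instances, assuming the default /irissys/ mount point and the definitions file used elsewhere in this chapter:

icm ssh -command "sudo rm -rf /irissys/*/*" -role DATA
icm run -definitions definitions_cluster.json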

Container Management Commands

The commands in this section are used to manage the containers you have deployed on your provisioned infrastructure.

Many ICM command options can be used with more than one command. For example, the -role option can be used with a number of commands to specify the type of node for which the command should be run (for example, icm inventory -role AM lists only the nodes in the deployment that are of type AM), and the -image option specifies an image from which to deploy containers for both the icm run and icm upgrade commands. For complete lists of ICM commands and their options, see ICM Commands and Options in the “ICM Reference” chapter.

icm ps

When deployment is complete, the icm ps command shows you the run state of containers running on the nodes, for example:

$ icm ps -container iris
Machine             IP Address    Container Status Health  Image
-------             ----------    --------- ------ ------  -----
ANDY-DATA-TEST-0001 00.56.140.23  iris      Up     healthy intersystems/iris:stable
ANDY-DATA-TEST-0002 00.53.190.37  iris      Up     healthy intersystems/iris:stable
ANDY-DATA-TEST-0003 00.67.116.202 iris      Up     healthy intersystems/iris:stable
ANDY-DATA-TEST-0004 00.153.49.109 iris      Up     healthy intersystems/iris:stable

If the -container restriction is omitted, all containers running on the nodes are listed. This includes both other containers deployed by ICM (for example, Weave network containers, or any custom or third-party containers you deployed using the icm run command) and any deployed by other means after completion of the ICM deployment.

Beyond node name, IP address, container name, and the image the container was created from, the icm ps command includes the following columns:

  • Status — One of the following status values generated by Docker: created, restarting, running (displayed as Up), removing, paused, exited, or dead.

  • Health — For iris, arbiter, and webgateway containers, one of the values starting, healthy, or unhealthy; for other containers none (or blank). When Status is exited, Health may display the exit value (where 0 means success).

    For iris containers the Health value reflects the health state of the InterSystems IRIS instance in the container. (For information about the InterSystems IRIS health state, see System Monitor Health State in the “Using the System Monitor” chapter of the Monitoring Guide). For arbiter containers it reflects the status of the ISCAgent, and for webgateway containers the status of the InterSystems Web Gateway web server. Bear in mind that unhealthy may be temporary, as it can result from a warning that is subsequently cleared.

  • Mirror — When mirroring is enabled (see Rules for Mirroring), the mirror member status (for example PRIMARY, BACKUP, SYNCHRONIZING) returned by the %SYSTEM.Mirror.GetMemberStatus() mirroring API call. For example:

    $ icm ps -container iris
    Machine             IP Address    Container Status Health  Mirror  Image
    -------             ----------    --------- ------ ------  ------  -----
    ANDY-DATA-TEST-0001 00.56.140.23  iris      Up     healthy PRIMARY intersystems/iris:stable
    ANDY-DATA-TEST-0002 00.53.190.37  iris      Up     healthy BACKUP  intersystems/iris:stable
    ANDY-DATA-TEST-0003 00.67.116.202 iris      Up     healthy PRIMARY intersystems/iris:stable
    ANDY-DATA-TEST-0004 00.153.49.109 iris      Up     healthy BACKUP  intersystems/iris:stable
    

    For an explanation of the meaning of each status, see Mirror Member Journal Transfer and Dejournaling Status in the “Mirroring” chapter of the High Availability Guide.

Additional deployment and management phase commands are listed in the following. For complete information about these commands, see ICM Reference.

icm stop

The icm stop command stops the specified containers (or iris by default) on the specified nodes, or on all nodes if no machine or role constraints are provided. For example, to stop the InterSystems IRIS containers on the data nodes in the sharded cluster configuration:

$ icm stop -container iris -role DATA

Stopping container iris on ANDY-DATA-TEST-0001...
Stopping container iris on ANDY-DATA-TEST-0002...
Stopping container iris on ANDY-DATA-TEST-0004...
Stopping container iris on ANDY-DATA-TEST-0003...
...completed stop of container iris on ANDY-DATA-TEST-0004
...completed stop of container iris on ANDY-DATA-TEST-0001
...completed stop of container iris on ANDY-DATA-TEST-0002
...completed stop of container iris on ANDY-DATA-TEST-0003

icm start

The icm start command starts the specified containers (or iris by default) on the specified nodes, or on all nodes if no machine or role constraints are provided. For example, to restart one of the stopped InterSystems IRIS containers:

$ icm start -container iris -machine ANDY-DATA-TEST-0002
Starting container iris on ANDY-DATA-TEST-0002...
...completed start of container iris on ANDY-DATA-TEST-0002

icm pull

The icm pull command downloads the specified image to the specified machines. For example, to add an image to the data nodes in the sharded cluster (output for additional nodes omitted):

$ icm pull -image intersystems/webgateway:stable -role DATA
Pulling ANDY-DATA-TEST-0001 image intersystems/webgateway:stable...
...pulled ANDY-DATA-TEST-0001 image intersystems/webgateway:stable


Note that the -image option is not required if the image you want to pull is the one specified by the DockerImage field in the definitions file, for example:

"DockerImage": "intersystems/iris-arm64:stable",

Although the icm run command automatically pulls any images not already present on the host, an explicit icm pull might be desirable for testing, staging, or other purposes.

icm rm

The icm rm command deletes the specified container (or iris by default), but not the image from which it was started, from the specified nodes, or from all nodes if no machine or role is specified. Only a stopped container can be deleted.
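
For example, a sketch of stopping and then removing the custom container deployed in the earlier example:

icm stop -container customsensors -machine ANDY-DM-TEST-0001
icm rm -container customsensors -machine ANDY-DM-TEST-0001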

icm upgrade

The icm upgrade command replaces the specified container on the specified machines. ICM orchestrates the following sequence of events to carry out an upgrade:

  1. Pull the new image

  2. Create the new container

  3. Stop the existing container

  4. Remove the existing container

  5. Start the new container

By staging the new image in steps 1 and 2, the downtime required between steps 3-5 is kept relatively short.

For example, to upgrade the InterSystems IRIS container on an application server:

$ icm upgrade -image intersystems/iris:latest -machine ANDY-AM-TEST-0003
Pulling ANDY-AM-TEST-0003 image intersystems/iris:latest...
...pulled ANDY-AM-TEST-0003 image intersystems/iris:latest
Stopping container ANDY-AM-TEST-0003...
...completed stop of container ANDY-AM-TEST-0003
Removing container ANDY-AM-TEST-0003...
...removed container ANDY-AM-TEST-0003
Running image intersystems/iris:latest in container ANDY-AM-TEST-0003...
...running image intersystems/iris:latest in container ANDY-AM-TEST-0003

The -image option is required for the icm upgrade command. When the upgrade is complete, the value of the DockerImage field in the instances.json file (see The Instances File in the chapter “Essential ICM Elements”) is updated with the image you specified.

Note:

The major versions of the image from which you launch ICM and the InterSystems images you deploy must match. For example, you cannot deploy a 2019.4 version of InterSystems IRIS using a 2019.3 version of ICM.

If you are upgrading a container other than iris, you must use the -container option to specify the container name.

For important information about upgrading InterSystems IRIS containers, see Upgrading InterSystems IRIS Containers in Running InterSystems Products in Containers.

Service Management Commands

These commands let you interact with the services running in your deployed containers, including InterSystems IRIS.

Many ICM command options can be used with more than one command. For example, the -role option can be used with a number of commands to specify the type of node for which the command should be run (for example, icm exec -role AM runs the specified command on only the nodes in the deployment that are of type AM), and the -image option specifies an image from which to deploy containers for both the icm run and icm upgrade commands. For complete lists of ICM commands and their options, see ICM Commands and Options in the “ICM Reference” chapter.

A significant feature of ICM is the ability it provides to interact with the nodes of your deployment on several levels — with the node itself, with the container deployed on it, and with the running InterSystems IRIS instance inside the container. The icm ssh command (described in Infrastructure Management Commands), which lets you run a command on the specified host nodes, can be grouped with the first two commands described in this section, icm exec (run a command in the specified containers) and icm session (open an interactive session for the InterSystems IRIS instance on a specified node), as a set of powerful tools for interacting with your ICM deployment. These multiple levels of interaction are shown in the following illustration.

Interactive ICM Commands
[figure: interacting at the host, container, and InterSystems IRIS instance levels]
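
For example, the same node can be addressed at all three levels (the machine name and the commands shown are illustrative):

icm ssh -command "uptime" -machine ANDY-DM-TEST-0001
icm exec -command "iris list" -machine ANDY-DM-TEST-0001
icm session -interactive -machine ANDY-DM-TEST-0001

The first command runs on the host OS, the second runs inside the iris container, and the third opens a session in the InterSystems IRIS instance itself.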

icm exec

The icm exec command runs an arbitrary command in the specified containers, for example:


$ icm exec -command "df -k" -machine ANDY-DM-TEST-0001
Executing command in container iris on ANDY-DM-TEST-0001
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0001/docker.out

Filesystem     1K-blocks    Used Available Use% Mounted on
rootfs          10474496 2205468   8269028  22% /
tmpfs            3874116       0   3874116   0% /dev
tmpfs            3874116       0   3874116   0% /sys/fs/cgroup
/dev/xvda2      33542124 3766604  29775520  12% /host
/dev/xvdb       10190100   36888   9612540   1% /irissys/data
/dev/xvdc       10190100   36888   9612540   1% /irissys/wij
/dev/xvdd       10190100   36888   9612540   1% /irissys/journal1
/dev/xvde       10190100   36888   9612540   1% /irissys/journal2
shm                65536     492     65044   1% /dev/shm

Because mixing output from multiple commands would be hard to interpret, when the command is executed on more than one node, the output is written to files and a list of output files is provided.

Additional Docker options, such as --env, can be specified on the icm exec command line using the -options option; for more information on the -options option, see Using ICM with Custom and Third-Party Containers.
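
For example, a hypothetical invocation that passes an environment variable into the container and reads it back (the variable name and machine are illustrative):

icm exec -command "printenv MYVAR" -options "--env MYVAR=test" -machine ANDY-DM-TEST-0001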

Because executing long-running, blocking, or interactive commands within a container can cause ICM to time out waiting for the command to complete or for user input, the icm exec command can also be used in interactive mode. Unless the command is run on a single-node deployment, the -interactive flag must be accompanied by a -role or -machine option restricting the command to a single node. A good example is running a shell in the container:

$ icm exec -command bash -machine ANDY-AM-TEST-0004 -interactive
Executing command 'bash' in container iris on ANDY-AM-TEST-0004...
[root@localhost /] $ whoami
root
[root@localhost /] $ hostname
iris-ANDY-AM-TEST-0004
[root@localhost /] $ exit

Another example of a command to execute interactively within a container is an InterSystems IRIS command that prompts for user input, for example iris stop, which asks whether to broadcast a message before shutting down the InterSystems IRIS instance.

The icm cp command, which copies a local file or directory on the specified node into the specified container, is useful with icm exec.
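
For example, a hypothetical script already present on the host node could be copied into the iris container (using the default remote path of /tmp) and then executed; the script name is illustrative, and the -machine option is assumed to restrict both commands to a single node:

icm cp -localPath /tmp/loaddata.sh -machine ANDY-DM-TEST-0001
icm exec -command "sh /tmp/loaddata.sh" -machine ANDY-DM-TEST-0001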

icm session

When used with the -interactive option, the icm session command opens an interactive session for the InterSystems IRIS instance on the node you specify. The -namespace option can be used to specify the namespace in which the session starts; the default is the ICM-created namespace (IRISCLUSTER by default). For example:

$ icm session -interactive -machine ANDY-AM-TEST-0003 -namespace %SYS

Node: iris-ANDY-AM-TEST-0003, Instance: IRIS

%SYS> 

You can also use the -command option to provide a routine to be run in the InterSystems IRIS session, for example:

icm session -interactive -machine ANDY-AM-TEST-0003 -namespace %SYS -command ^MIRROR

Additional Docker options, such as --env, can be specified on the icm session command line using the -options option; for more information on the -options option, see Using ICM with Custom and Third-Party Containers.

Without the -interactive option, the icm session command runs the InterSystems IRIS ObjectScript snippet specified by the -command option on the specified node or nodes. The -namespace option can be used to specify the namespace in which the snippet runs. Because mixing output from multiple commands would be hard to interpret, when the command is executed on more than one node, the output is written to files and a list of output files is provided. For example:

$ icm session -command 'Write ##class(%File).Exists("test.txt")' -role AM
Executing command in container iris on ANDY-AM-TEST-0003...
Executing command in container iris on ANDY-AM-TEST-0004...
Executing command in container iris on ANDY-AM-TEST-0005...
...output in ./state/ANDY-AM-TEST/ANDY-AM-TEST-0003/ssh.out
...output in ./state/ANDY-AM-TEST/ANDY-AM-TEST-0004/ssh.out
...output in ./state/ANDY-AM-TEST/ANDY-AM-TEST-0005/ssh.out

When the -machine or -role option limits the command to a single node, output is also written to the console, for example:

$ icm session -command 'Write ##class(%File).Exists("test.txt")' -role DM
Executing command in container iris on ANDY-DM-TEST-0001
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0001/docker.out

0

The icm sql command, which runs an arbitrary SQL command against the containerized InterSystems IRIS instance on the specified node (or all nodes), is similar to icm session.

icm cp

The icm cp command copies a local file or directory on the specified node into the specified container. The command syntax is as follows:

icm cp -localPath local-path [-remotePath remote-path]

Both localPath and remotePath can be either files or directories. If both are directories, the contents of the local directory are recursively copied; if you want the directory itself to be copied, include it in remotePath.

The remotePath argument is optional and if omitted defaults to /tmp; if remotePath is a directory, it must contain a trailing forward slash (/), or it will be assumed to be a file. You can use the -container option to copy to a container other than the default iris.
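
For example, the following sketch copies the contents of the hypothetical host directory /tmp/myscripts into /tmp/myscripts/ in the iris container; note the trailing slash identifying remotePath as a directory:

icm cp -localPath /tmp/myscripts -remotePath /tmp/myscripts/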

Note:

See also the icm scp command, which securely copies a file or directory from the local ICM container to the specified host OS.

icm sql

The icm sql command runs an arbitrary SQL command against the containerized InterSystems IRIS instance on the specified node (or all nodes), for example:

$ icm sql -command "SELECT Name,SMSGateway FROM %SYS.PhoneProviders" -role DM
Executing command in container iris on ANDY-DM-TEST-0001...
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0001/jdbc.out

Name,SMSGateway
AT&T Wireless,txt.att.net
Alltel,message.alltel.com
Cellular One,mobile.celloneusa.com
Nextel,messaging.nextel.com
Sprint PCS,messaging.sprintpcs.com
T-Mobile,tmomail.net
Verizon,vtext.com

The -namespace option can be used to specify the namespace in which the SQL command runs; the default is the ICM-created namespace (IRISCLUSTER by default).
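
For example, a sketch of running a query in a hypothetical USER namespace on the data nodes (the table name is illustrative):

icm sql -command "SELECT Name FROM Sample.Person" -namespace USER -role DATA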

Because mixing output from multiple commands would be hard to interpret, when the command is executed on more than one node, the output is written to files and a list of output files is provided.

The icm sql command can also be used interactively on a single node, opening an InterSystems IRIS SQL Shell (see the “Using the SQL Shell Interface” chapter of Using InterSystems SQL). Unless the command is run on a single-node deployment, the -interactive flag must be accompanied by a -role or -machine option restricting the command to a single node. For example:

$ icm sql -interactive -machine ANDY-QS-TEST-0002
SQL Command Line Shell
----------------------------------------------------
The command prefix is currently set to: <<nothing>>.
Enter <command>, 'q' to quit, '?' for help.

As with the noninteractive command, you can use the -namespace option interactively to specify the namespace in which the SQL shell runs; the default is the ICM-created namespace (IRISCLUSTER by default).

icm docker

The icm docker command runs a Docker command on the specified node (or all nodes), for example:

$ icm docker -command "stats --no-stream" -machine ANDY-DM-TEST-0002
Executing command 'stats --no-stream' on ANDY-DM-TEST-0002...
...output in ./state/ANDY-DM-TEST/ANDY-DM-TEST-0002/docker.out

CONTAINER     CPU %  MEM USAGE / LIMIT  MEM %  NET I/O     BLOCK I/O        PIDS
3e94c3b20340  0.01%  606.9MiB/7.389GiB  8.02%  5.6B/3.5kB  464.5MB/21.79MB  0
1952342e3b6b  0.10%  22.17MiB/7.389GiB  0.29%  0B/0B       13.72MB/0B       0
d3bb3f9a756c  0.10%  40.54MiB/7.389GiB  0.54%  0B/0B       38.43MB/0B       0
46b263cb3799  0.14%  56.61MiB/7.389GiB  0.75%  0B/0B       19.32MB/231.9kB  0

The Docker command should not be long-running (or blocking); otherwise, control does not return to ICM. For example, if the --no-stream option in the preceding example is removed, the call does not return until a timeout expires.

Unprovision the Infrastructure

Because public cloud platform instances continually generate charges and unused instances in private clouds consume resources to no purpose, it is important to unprovision infrastructure in a timely manner.

The icm unprovision command deallocates the provisioned infrastructure based on the state files created during provisioning. If the state subdirectory is not in the current working directory, the -stateDir option is required to specify its location. As described in Provision the Infrastructure, destroy refers to the Terraform phase that deallocates the infrastructure. One line is created for each entry in the definitions file, regardless of how many nodes of that type were provisioned. Because ICM runs Terraform in multiple threads, the order in which machines are unprovisioned is not deterministic.

$ icm unprovision -cleanUp
Type "yes" to confirm: yes
Starting destroy of ANDY-DM-TEST...
Starting destroy of ANDY-AM-TEST...
Starting destroy of ANDY-AR-TEST...
...completed destroy of ANDY-AR-TEST
...completed destroy of ANDY-AM-TEST
...completed destroy of ANDY-DM-TEST
Starting destroy of ANDY-TEST...
...completed destroy of ANDY-TEST

The -cleanUp option deletes the state directory after unprovisioning; by default, the state directory is preserved. The icm unprovision command prompts you to confirm unprovisioning by default; you can use the -force option to avoid this, for example when using a script.
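
For example, a noninteractive sketch suitable for use in a script, assuming the state directory shown earlier in this chapter:

icm unprovision -stateDir /Samples/AWS/state -force -cleanUp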