
ICM Reference

The following topics provide detailed information about various aspects of ICM and its use:

ICM Commands and Options

The first table that follows lists the commands that can be executed on the ICM command line; the second table lists the options that can be included with them. Both tables include links to relevant text.
Each of the commands is covered in detail in the “Using ICM” chapter. Command-line options can be used either to provide required or optional arguments to commands (for example, icm exec -interactive) or to set field values, overriding ICM defaults or settings in the configuration files.
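For example (a hedged sketch; the alternate file name and node role shown are placeholders for your own values):
$ icm exec -interactive -command "bash" -role DATA
$ icm provision -definitions alt-definitions.json -defaults defaults.json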
Note:
The command table does not list every option that can be used with each command, and the option table does not list every command that can include each option.
ICM Commands
Command Description Important Options
provision Provisions host nodes -definitions, -defaults, -instances
inventory Lists provisioned host nodes -machine, -role, -json, -options
unprovision Destroys host nodes -stateDir, -cleanUp, -force
merge Merges infrastructure provisioned in separate regions into a new definitions file for multiregion deployment -options, -localPath
ssh Executes an operating system command on one or more host nodes -command, -machine, -role
scp Copies a local file to one or more host nodes -localPath, -remotePath, -machine, -role
run Deploys a container on host nodes -image, -container, -namespace, -options, -iscPassword, -command, -machine, -role
ps Displays run states of containers deployed on host nodes -container, -json
stop Stops containers on one or more host nodes -container, -machine, -role
start Starts containers on one or more host nodes -container, -machine, -role
pull Downloads an image to one or more host nodes -image, -container, -machine, -role
rm Deletes containers from one or more host nodes -container, -machine, -role
upgrade Replaces containers on one or more host nodes -image, -container, -machine, -role
exec Executes an operating system command in one or more containers -container, -command, -interactive, -options, -machine, -role
session Opens an interactive session for an InterSystems IRIS instance in a container or executes an InterSystems IRIS ObjectScript snippet on one or more instances -namespace, -command, -interactive, -options, -machine, -role
cp Copies a local file to one or more containers -localPath, -remotePath, -machine, -role
sql Executes a SQL statement on the InterSystems IRIS instance -namespace, -command, -machine, -role
install Installs InterSystems IRIS instances from a kit in containerless mode -machine, -role
uninstall Uninstalls InterSystems IRIS instances installed from a kit in containerless mode -machine, -role
docker Executes a Docker command on one or more host nodes -container, -machine, -role
ICM Command-Line Options
Option Meaning Default Described in
-help Display command usage information and ICM version   ---
-version Display ICM version   ---
-verbose Show execution detail false (can be used with any command)
-definitions filepath Host node definitions file ./definitions.json Configuration, State and Log Files
-defaults filepath Host node defaults file ./defaults.json
-instances filepath Host node instances file ./instances.json
-stateDir dir Machine state directory OS-specific The State Directory and State Files
-force Don't confirm before reprovisioning or unprovisioning false
-cleanUp Delete state directory after unprovisioning false
-machine regexp Target machine name pattern match (all) icm inventory, icm exec
-role role Role of the InterSystems IRIS instance or instances for which a command is run, for example DATA or AM (all) icm inventory
-namespace namespace Namespace to create on deployed InterSystems IRIS instances and set as default execution namespace for the session and sql commands IRISCLUSTER The Definitions File, icm session
-image image Docker image to deploy; must include repository name DockerImage value in definitions file icm run
-options options Additional Docker options none icm inventory, Using ICM with Custom and Third-Party Containers
-container name Name of the container icm ps command: (all); other commands: iris icm run
-command cmd Command or query to execute none icm ssh, icm exec
-interactive Redirect input/output to console for the exec and ssh commands false icm ssh
-localPath path Local file or directory none icm scp
-remotePath path Remote file or directory /home/SSHUser (value of SSHUser field)
-iscPassword password Password for deployed InterSystems IRIS instances iscPassword value in configuration file icm run
-json Enable JSON response mode false Using JSON Mode
Important:
Use of the -verbose option, which is intended for debugging purposes only, may expose the value of iscPassword and other sensitive information, such as DockerPassword. When you use this option, you must either use the -force option as well or confirm that you want to use verbose mode before continuing.
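For example, the following hypothetical invocation combines the two options so that no confirmation prompt is issued; remember that the verbose output may include sensitive values:
$ icm inventory -verbose -force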

ICM Node Types

This section describes the types of nodes that can be provisioned and deployed by ICM and their possible roles in the deployed InterSystems IRIS configuration. A provisioned node’s type is determined by the Role field.
The following table summarizes the detailed node type descriptions that follow.
ICM Node Types
Node Type Configuration Role(s) InterSystems Image to Deploy
DATA Sharded cluster data node iris (InterSystems IRIS instance) OR spark (InterSystems IRIS instance plus Apache Spark)
COMPUTE Sharded cluster compute node iris (InterSystems IRIS instance)
DM Distributed cache cluster data server; stand-alone InterSystems IRIS instance; [namespace-level architecture: shard master data server] iris (InterSystems IRIS instance) OR spark (InterSystems IRIS instance plus Apache Spark)
DS [namespace-level architecture: shard data server] iris (InterSystems IRIS instance)
QS [namespace-level architecture: shard query server] iris (InterSystems IRIS instance)
AM Distributed cache cluster application server; [namespace-level architecture: shard master application server] iris (InterSystems IRIS instance)
AR Mirror arbiter arbiter (InterSystems IRIS mirror arbiter)
WS Web server webgateway (InterSystems Web Gateway)
LB Load balancer ---
VM Virtual machine ---
CN Custom and third-party container node ---
BH Bastion host ---
Important:
The above table includes sharded cluster roles for the namespace-level sharding architecture, as documented in previous versions of this guide. These roles (DM, DS, QS) remain available for use in ICM but cannot be combined with DATA or COMPUTE nodes in the same deployment.
The InterSystems images shown in the preceding table are required on the corresponding node types, and cannot be deployed on nodes to which they do not correspond. If the wrong InterSystems image is specified for a node by the DockerImage field or the -image option of the icm run command — for example, if the iris image is specified for an AR (arbiter) node, or any InterSystems image for a CN node — deployment fails, with an appropriate message from ICM. For a detailed discussion of the deployment of InterSystems images, see The icm run Command in the “Using ICM” chapter.

Role DATA: Sharded Cluster Data Node

In Release 2019.3 of InterSystems IRIS, the node-level architecture (as described in Overview of InterSystems IRIS Sharding and other sections of the “Horizontally Scaling for Data Volume with Sharding” chapter of the Scalability Guide) becomes the primary sharding architecture, with the data node as its basic element. A typical sharded cluster consists only of data nodes, across which the sharded data is partitioned, and therefore requires only a DATA node definition in the definitions.json file; if DATA nodes are defined, the deployment must be a sharded cluster. Because all sharded and nonsharded data, metadata, and code is visible from any node in the cluster, application connections can be load balanced across all of the nodes to take greatest advantage of parallel query processing and partitioned caching.
DATA nodes can be mirrored if provisioned in an even number; ICM requires that all be mirrored or none, which is the recommended best practice for sharded clusters. Async members of DATA node mirrors are not supported.
A load balancer may be assigned to DATA nodes; see Role LB: Load Balancer.
The only distinction between data nodes in a sharded cluster is that the first node configured (known as node 1) stores all of the nonsharded data, metadata, and code for the cluster in addition to its share of the sharded data. The difference in storage requirements, however, is typically very small.
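As an illustrative sketch only (the license key file name is a sample), the definitions.json entry for a basic four-node sharded cluster might look like the following; with Mirror set to true in the defaults file, the same entry would instead deploy two mirrored data node pairs:
[
    {
        "Role": "DATA",
        "Count": "4",
        "LicenseKey": "ubuntu-sharding-iris.key"
    }
]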

Role COMPUTE: Sharded Cluster Compute Node

For advanced use cases in which extremely low query latencies are required, potentially at odds with a constant influx of data, compute nodes can be added to a sharded cluster to provide a transparent caching layer for servicing queries, separating the query and data ingestion workloads and improving the performance of both. (For more information see Deploy Compute Nodes for Workload Separation and Increased Query Throughput in the Scalability Guide.)
Adding compute nodes yields significant performance improvement only when there is at least one compute node per data node. The COMPUTE nodes defined in your definitions file are therefore distributed as evenly as possible across the data nodes, and you should define at least as many COMPUTE nodes as DATA nodes. (If the number of DATA nodes in the definitions file is greater than the number of COMPUTE nodes, ICM issues a warning.) Configuring multiple compute nodes per data node can further improve the cluster’s query throughput; the recommended best practice is to define the same number of COMPUTE nodes for each DATA node (for example, eight compute nodes for four data nodes).
Because COMPUTE nodes support query execution only and do not store any data, their instance type and other settings can be tailored to suit those needs, for example by emphasizing memory and CPU and keeping storage to the bare minimum. Because they do not store data, COMPUTE nodes cannot be mirrored.
A load balancer may be assigned to COMPUTE nodes; see Role LB: Load Balancer.
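For example, a hedged definitions.json sketch following the best practice described above, with two COMPUTE nodes per DATA node (the instance type shown is illustrative):
[
    {
        "Role": "DATA",
        "Count": "4"
    },
    {
        "Role": "COMPUTE",
        "Count": "8",
        "StartCount": "5",
        "InstanceType": "c5.2xlarge"
    }
]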

Role DM: Distributed Cache Cluster Data Server, Standalone Instance, Shard Master Data Server

If multiple nodes of role AM and a DM node (nonmirrored or mirrored) are specified without any nodes of role DS, they are deployed as an InterSystems IRIS distributed cache cluster, with the former serving as application servers and the latter as the data server.
A node of role DM (nonmirrored or mirrored) deployed by itself becomes a standalone InterSystems IRIS instance.
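For instance, a hedged definitions.json sketch for a small distributed cache cluster (one data server, two application servers); omitting the AM entry would deploy the DM node as a standalone instance instead (the license key file name is a sample):
[
    {
        "Role": "DM",
        "Count": "1",
        "LicenseKey": "standard-iris.key"
    },
    {
        "Role": "AM",
        "Count": "2",
        "StartCount": "2",
        "LicenseKey": "standard-iris.key"
    }
]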
Note:
In an InterSystems IRIS sharded cluster with DS nodes (shard data servers) under the namespace-level architecture (see Namespace-level Sharding Architecture in the chapter “Horizontally Scaling InterSystems IRIS for Data Volume with Sharding” in the Scalability Guide), a single DM node serves as the shard master data server, providing application access to the shard data servers (DS nodes) on which the sharded data is stored and hosting nonsharded tables. (If shard master application servers [AM nodes] are included in the cluster, they provide application access instead.) A shard master data server can be mirrored by deploying two nodes of role DM and specifying mirroring.

Role DS: Shard Data Server

Under the namespace-level architecture, a data shard stores one horizontal partition of each sharded table loaded into a sharded cluster. A node hosting a data shard is called a shard data server. A cluster can have anywhere from two to over 200 shard data servers. Shard data servers can be mirrored by deploying an even number and specifying mirroring.

Role QS: Shard Query Server

Under the namespace-level architecture, shard query servers provide query access to the data shards to which they are assigned, minimizing interference between query and data ingestion workloads and increasing the bandwidth of a sharded cluster for high volume multiuser query workloads. If shard query servers are deployed, they are assigned round-robin to the deployed shard data servers. Shard query servers automatically redirect application connections when a mirrored shard data server fails over.

Role AM: Distributed Cache Cluster Application Server, Shard Master Application Server

If multiple nodes of role AM and a DM node are specified without any nodes of role DS, they are deployed as an InterSystems IRIS distributed cache cluster, with the former serving as application servers and the latter as a data server. When the data server is mirrored, application connection redirection following failover is automatic.
A load balancer may be assigned to AM nodes; see Role LB: Load Balancer.
Note:
When included in a sharded cluster with DS nodes (shard data servers) under the namespace-level architecture, shard master application servers provide application access to the sharded data, distributing the user load across multiple nodes just as application servers in a distributed cache cluster do. If the shard master data server is mirrored, two or more shard master application servers must be included.

Role AR: Mirror Arbiter

When DATA nodes (sharded cluster DATA nodes), a DM node (distributed cache cluster data server, stand-alone InterSystems IRIS instance, or namespace-level shard master data server), or DS nodes (namespace-level shard data servers) are mirrored, deployment of an arbiter node to facilitate automatic failover is highly recommended. One arbiter node is sufficient for all of the mirrors in a cluster; multiple arbiters are not supported and are ignored by ICM, as are arbiter nodes in a nonmirrored cluster.
The AR node does not contain an InterSystems IRIS instance, using a different image to run an ISCAgent container. This arbiter image must be specified using the DockerImage field in the definitions file entry for the AR node; for more information, see The icm run Command.
For more information about the arbiter, see the “Mirroring” chapter of the High Availability Guide.

Role WS: Web Server

A deployment may contain any number of web servers. Each web server node contains an InterSystems Web Gateway installation along with an Apache web server. ICM populates the remote server list in the InterSystems Web Gateway with all of the available AM nodes (distributed cache cluster application servers or namespace-level shard master application servers). If no AM nodes are available, it instead uses the DM node (distributed cache cluster data server or namespace-level shard master data server); if mirrored, a mirror-aware connection is created. Communication between the web server and remote servers is configured to run in SSL/TLS mode.
Note:
In this release, WS nodes are not compatible with a node-level sharded cluster and should not be included in such a deployment.
A load balancer may be assigned to WS nodes; see Role LB: Load Balancer.
The WS node does not contain an InterSystems IRIS instance, using a different image to run a Web Gateway container. As described in The icm run Command, the webgateway image can be specified by including the DockerImage field in the WS node definition in the definitions.json file, for example:
{
    "Role": "WS",
    "Count": "3",
    "DockerImage": "intersystems/webgateway:stable",
    "ApplicationPath": "/acme",
    "AlternativeServers": "LoadBalancing"
}

If the ApplicationPath field is provided, its value is used to create an application path for each instance of the Web Gateway. The default server for this application path is assigned round-robin across Web Gateway instances, with the remaining remote servers making up the alternative server pool. For example, if the preceding sample WS node definition were part of a deployment with three AM nodes, the assignments would be like the following:
Instance Default Server Alternative Servers
ACME-WS-TEST-0001 ACME-AM-TEST-0001 ACME-AM-TEST-0002, ACME-AM-TEST-0003
ACME-WS-TEST-0002 ACME-AM-TEST-0002 ACME-AM-TEST-0001, ACME-AM-TEST-0003
ACME-WS-TEST-0003 ACME-AM-TEST-0003 ACME-AM-TEST-0001, ACME-AM-TEST-0002
The AlternativeServers field determines how the Web Gateway distributes requests to its target server pool. Valid values are LoadBalancing (the default) and FailOver. This field has no effect if the ApplicationPath field is not specified.
For information about using the InterSystems Web Gateway, see the Web Gateway Configuration Guide.

Role LB: Load Balancer

ICM automatically provisions a load balancer node when the provisioning platform is AWS, GCP, or Azure, and the definition of nodes of type DATA, COMPUTE, AM, or WS in the definitions file sets LoadBalancer to true. For a generic load balancer for VM or CN nodes, additional parameters must be provided.

Predefined Load Balancer for DATA, COMPUTE, AM, and WS Nodes

For nodes of role LB, ICM configures the ports and protocols to be forwarded as well as the corresponding health checks. Queries can be executed against the deployed load balancer the same way one would against a data node in a sharded cluster or a distributed cache cluster application server.
To add a load balancer to the definition of DATA, COMPUTE, AM, or WS nodes, add the LoadBalancer field, for example:
{
    "Role": "AM",
    "Count": "2",
    "LoadBalancer": "true"
}

The following example illustrates the nodes that would be created and deployed given this definition:
$ icm inventory
Machine           IP Address    DNS Name                              Region   Zone
-------           ----------    --------                              ------   ----
ACME-AM-TEST-0001 54.214.230.24 ec2-54-214-230-24.amazonaws.com       us-west1 c
ACME-AM-TEST-0002 54.214.230.25 ec2-54-214-230-25.amazonaws.com       us-west1 c
ACME-LB-TEST-0000 (virtual AM)  ACME-AM-TEST-1546467861.amazonaws.com us-west1 c

Queries against this cluster can be executed against the load balancer the same way they would be against the AM nodes.
The LoadBalancer field can be added to more than one definition in a deployment; for example, a distributed cache cluster can contain both AM nodes receiving load-balanced connections and a WS tier receiving load-balanced application connections. Currently, a single automatically provisioned load balancer cannot serve multiple node types (for example, both DATA and COMPUTE nodes), so each requires its own load balancer. This does not preclude the user from manually deploying a custom or third-party load balancer to serve the desired roles.
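A hedged sketch of such a deployment, with the LoadBalancer field included in both the AM and WS definitions so that each tier receives its own automatically provisioned load balancer:
[
    {
        "Role": "DM",
        "Count": "1"
    },
    {
        "Role": "AM",
        "Count": "2",
        "StartCount": "2",
        "LoadBalancer": "true"
    },
    {
        "Role": "WS",
        "Count": "2",
        "StartCount": "4",
        "LoadBalancer": "true"
    }
]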

Generic Load Balancer

A load balancer can be added to VM (virtual machine) and CN (container) nodes by providing the following additional keys:
  • ForwardProtocol
  • ForwardPort
  • HealthCheckProtocol
  • HealthCheckPath
  • HealthCheckPort
The following is an example:
{
    "Role": "VM",
    "Count": "2",
    "LoadBalancer": "true",
    "ForwardProtocol": "tcp",
    "ForwardPort": "443",
    "HealthCheckProtocol": "http",
    "HealthCheckPath": "/csp/status.cxw",
    "HealthCheckPort": "8080"
}

More information about these keys can be found in ICM Configuration Parameters.
Note:
A load balancer does not require (or allow) an explicit entry in the definitions file.
Some cloud providers create a DNS name for the load balancer that resolves to multiple IP addresses; for this reason, the value displayed by the provider interface as DNS Name should be used. If a numeric IP address appears in the DNS Name column, it simply means that the given cloud provider assigns a unique IP address to their load balancer, but doesn't give it a DNS name.
Because the DNS name may not indicate to which resources a given load balancer applies, the values displayed under IP Address are used for this purpose.
Load balancers on different cloud providers may behave differently; be sure to acquaint yourself with load balancer details on platforms you provision on.
For providers VMware and PreExisting, you may wish to deploy a custom or third-party load balancer.

Role VM: Virtual Machine Node

A cluster may contain any number of virtual machine nodes. A virtual machine node provides a means of allocating host nodes which do not have a predefined role within an InterSystems IRIS cluster. Docker is not installed on these nodes, though users are free to deploy whatever custom or third-party software (including Docker) they wish.
The following commands are supported on the virtual machine node:
A load balancer may be assigned to VM nodes; see Role LB: Load Balancer.

Role CN: Container Node

A cluster may contain any number of container nodes. A container node is a general purpose node with Docker installed. You can deploy any custom and third-party containers you wish on a CN node, except InterSystems IRIS containers, which will not be deployed if specified. All ICM commands are supported for container nodes, but most will be filtered out unless they use the -container option to specify a container other than iris, or either the -role or -machine option is used to limit the command to CN nodes (see ICM Commands and Options).
A load balancer may be assigned to CN nodes; see Role LB: Load Balancer.
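As an illustration only (the container name and image shown are hypothetical; see Using ICM with Custom and Third-Party Containers for details), a CN definition and an icm run command targeting only those nodes might look like this:
{
    "Role": "CN",
    "Count": "2"
}

$ icm run -container nginx -image nginx:latest -role CN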

Role BH: Bastion Host

You may want to deploy a configuration that offers no public network access. If you have an existing private network, you can launch ICM on a node on that network and deploy within it. If you do not have such a network, you can have ICM configure a private subnet and deploy your configuration on it. Since ICM is not running within that private subnet, however, it needs a means of access to provision, deploy, and manage the configuration. The BH node serves this purpose.
A bastion host is a host node that belongs to both the private subnet configured by ICM and the public network, and can broker communication between them. To use one, you define a single BH node in your definitions file and set PrivateSubnet to true in your defaults file. For more information, see Deploying on a Private Network.

ICM Cluster Topology and Mirroring

ICM validates the node definitions in the definitions file to ensure they meet certain requirements; there are additional rules for mirrored configurations. Bear in mind that this validation does not include preventing configurations that are not functionally optimal, for example a single AM node, a single WS node, five DATA nodes with just one COMPUTE node or vice-versa, and so on.
In both nonmirrored and mirrored configurations,
  • COMPUTE nodes are assigned to DATA nodes (and QS nodes to DS nodes) in round-robin fashion.
  • If both AM and WS nodes are included, AM nodes are bound to the DM and WS nodes to the AM nodes; if just AM nodes or just WS nodes are included, they are all bound to the DM.
    Note:
    In this release, WS nodes are not compatible with a node-level sharded cluster and should not be included in such a deployment.
This section contains the following subsections:

Rules for Mirroring

The recommended general best practice for sharded clusters is that either all DATA nodes are mirrored or none are. This is reflected in the following ICM topology validation rules.
When the Mirror field is set to false in the defaults file (the default), mirroring is never configured, and provisioning fails if more than one DM node or an odd number of DATA or DS nodes is specified in the definitions file.
When the Mirror field is set to true, mirroring is configured where possible, as follows:
  • All DATA nodes and DS nodes are configured as mirror failover pairs, for example specifying six DATA nodes deploys three mirrored data nodes; if an odd number of DATA or DS nodes is defined, provisioning fails. Mirrors in sharded configurations cannot contain async members.
  • If two DM nodes are specified in the definitions file, they are configured as a mirror failover pair using the default MirrorMap value, primary,backup. If one DM is specified, provisioning fails.
  • If more than two DMs are specified, and the MirrorMap field in the node definition matches the number of nodes specified and indicates that those beyond the failover pair are disaster recovery (DR) asyncs, they are configured as a failover pair and the specified number of DR asyncs. For example, the following definition creates a mirror consisting of a failover pair and two DR asyncs:
    "Role": "DM",
    "Count": "4",
    "MirrorMap": "primary,backup,async,async"
    
    
    The number of DM nodes can also be less than the number of elements in MirrorMap; in the example above, changing Count to 2 would deploy a primary and backup, while making it 3 would deploy the failover pair and one async.
    All asyncs deployed by ICM are DR asyncs; reporting asyncs are not supported. Up to 14 asyncs can be included in a mirror. For information on mirror members and possible configurations, see Mirror Components in the “Mirroring” chapter of the High Availability Guide.
    DM node mirrors in sharded configurations cannot contain async members.
  • If more than one AR (arbiter) node is specified, provisioning fails.
To see the mirror member status of each node in a configuration when mirroring is enabled, use the icm ps command.
Note:
There is no relationship between the order in which DATA, DM, or DS nodes are provisioned or configured and their roles in a mirror. You can determine which member of each pair is the primary failover member and which the backup using the icm inventory command, the output of which indicates each primary with a + (plus) and each backup with a - (minus).

Nonmirrored Configuration Requirements

A nonmirrored cluster consists of the following:
  • One or more DATA (data nodes in a sharded cluster).
  • If DATA nodes are included, zero or more COMPUTE (compute nodes in a sharded cluster); best practices are at least as many COMPUTE nodes as DATA nodes and the same number of COMPUTE nodes for each DATA node.
  • If no DATA nodes are included:
    • Exactly one DM (distributed cache cluster data server, standalone InterSystems IRIS instance, shard master data server in namespace-level sharded cluster).
    • Zero or more AM (distributed cache cluster application server, shard master application server in namespace-level sharded cluster).
    • Zero or more DS (shard data servers in namespace-level sharded cluster).
    • Zero or more QS (shard query servers in namespace-level sharded cluster).
  • Zero or more WS (web servers).
  • Zero or more LB (load balancers).
  • Zero or more VM (virtual machine nodes).
  • Zero or more CN (container nodes).
  • Zero or one BH (bastion host).
  • Zero AR (arbiter node is for mirrored configurations only).
The relationships between some of these nodes types are pictured in the following examples.
ICM Nonmirrored Topologies
images/gicm_nonmirrored_topo.png
images/gicm_nonmirrored_topo_data.png

Mirrored Configuration Requirements

A mirrored cluster consists of:
  • An even number of DATA (data nodes in a sharded cluster).
  • If DATA nodes are included, zero or more COMPUTE (compute nodes in a sharded cluster); best practices are at least as many COMPUTE nodes as DATA nodes and the same number of COMPUTE nodes for each DATA node.
  • If no DATA nodes are included:
    • Two DM (distributed cache cluster data server, standalone InterSystems IRIS instance) or more, if DR asyncs are specified by the MirrorMap field; exactly two DM (shard master data server in namespace-level sharded cluster).
    • Zero or more AM (distributed cache cluster application server, shard master application server in namespace-level sharded cluster).
    • Even number of DS (shard data servers in namespace-level sharded cluster).
    • Zero or more QS (shard query servers in namespace-level sharded cluster).
  • Zero or one AR (arbiter node is optional but recommended for mirrored configurations).
  • Zero or more WS (web servers).
  • Zero or more LB (load balancers).
  • Zero or more VM (virtual machine nodes).
  • Zero or more CN (container nodes).
  • Zero or one BH (bastion host).
Note:
A mirrored DM node that is deployed in the cloud without AM nodes must have some appropriate mechanism for redirecting application connections; see Redirecting Application Connections Following Failover or Disaster Recovery in the “Mirroring” chapter of the High Availability Guide for more information.
The following fields are required for mirroring:
  • Mirroring is enabled by setting key Mirror in your defaults.json file to true.
    "Mirror": "true"
    
  • To deploy more than two DM nodes, you must include the MirrorMap field in your definitions file to specify that those beyond the first two are DR async members, as follows:
    "MirrorMap": "primary,backup,async,..."
    
    The value of MirrorMap must always begin with primary,backup, and the number of elements in the field value must match the number of DM nodes, for example:
    "Role": "DM",
    "Count": "5”,
    "MirrorMap": "primary,backup,async,async,async",
    ...
    
    MirrorMap can be used in conjunction with the Zone and ZoneMap fields to deploy async instances across zones; see Deploying Across Multiple Zones.
    Note:
    Async mirror members are not currently supported in node-level or namespace-level sharded clusters.
Automatic LB deployment (see Role LB: Load Balancer) is supported for providers Amazon AWS, Microsoft Azure, and Google Cloud Platform; when creating your own load balancer, the pool of IP addresses to include are those of DATA, COMPUTE, AM, or WS nodes, as called for by your configuration and application.
The relationships between some of these nodes types are pictured in the following examples.
ICM Mirrored Topologies
images/gicm_mirrored_topo.png
images/gicm_mirrored_topo_data.png

Storage Volumes Mounted by ICM

On each node on which it deploys an InterSystems IRIS container, ICM formats, partitions, and mounts four volumes for persistent data storage by InterSystems IRIS using the durable %SYS feature (see Durable %SYS for Persistent Instance Data in Running InterSystems IRIS in Containers). The volumes are mounted as separate devices under /dev/ on the host node, with names determined by the fields DataDeviceName, WIJDeviceName, Journal1DeviceName, and Journal2DeviceName. Another volume for use by Docker is mounted in the same way, with DockerDeviceName field specifying the name of the device. The sizes of these volumes can be specified using the DataVolumeSize, WIJVolumeSize, Journal1VolumeSize, Journal2VolumeSize, and DockerVolumeSize parameters (see General Parameters).
For all providers other than type PreExisting, ICM attempts to assign reasonable defaults for the device names, as shown in the following table. The values are highly platform and OS-specific, however, and may need to be overridden in your defaults.json file. (For PreExisting deployments, see Storage Volumes in the “Deploying on a Preexisting Cluster” appendix.)
Parameter AWS GCP vSphere Azure
DockerDeviceName xvdb sdb sdb sdc
DataDeviceName xvdc sdc sdc sdd
WIJDeviceName xvdd sdd sdd sde
Journal1DeviceName xvde sde sde sdf
Journal2DeviceName xvdf sdf sdf sdg
Note:
For restrictions on DockerDeviceName, see the DockerStorageDriver parameter in General Parameters.
This arrangement allows you to easily follow the recommended best practice of supporting performance and recoverability by using separate file systems for storage by InterSystems IRIS, as described in Separating File Systems for Containerized InterSystems IRIS in Running InterSystems Products in Containers.
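For example, a hedged defaults.json fragment overriding the defaults shown above (the device names must match what your platform actually presents, and the sizes shown are illustrative):
"DockerDeviceName": "xvdb",
"DataDeviceName": "xvdc",
"WIJDeviceName": "xvdd",
"Journal1DeviceName": "xvde",
"Journal2DeviceName": "xvdf",
"DataVolumeSize": "60",
"Journal1VolumeSize": "20",
"Journal2VolumeSize": "20",
...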
Within the InterSystems IRIS container, ICM mounts the devices according to the fields shown in the following table:
Parameter Default
DataMountPoint /irissys/data
WIJMountPoint /irissys/wij
Journal1MountPoint /irissys/journal1
Journal2MountPoint /irissys/journal2

InterSystems IRIS Licensing for ICM

InterSystems IRIS instances deployed in containers require licenses just as do noncontainerized instances. General InterSystems IRIS license elements and procedures are discussed in the “Licensing” chapter of the System Administration Guide.
License keys cannot be included in InterSystems IRIS container images, but must be added after the container is created and started. ICM addresses this as follows:
  • The needed license keys are staged in a directory within the ICM container, or on a mounted volume, that is specified by the LicenseDir field in the defaults.json file, for example /Samples/License.
  • One of the license keys in the staging directory is specified by the LicenseKey field in each definition of node types DATA, COMPUTE, DM, AM, DS, and QS in the definitions.json file, for example:
    "Role": "DM",
    "LicenseKey": "standard-iris.key”,
    "InstanceType": "m4.xlarge",
    ...
    
  • ICM configures a license server on DATA node 1 or the DM node, which serves the specified licenses to the InterSystems IRIS nodes (including itself) during deployment.
All nodes in a sharded cluster require a sharding license. When deployed in nonsharded configurations, a standard license is sufficient for DM and AM nodes. No license is required for AR, LB, WS, VM, and CN nodes; if included in the definition for one of these, the LicenseKey field is ignored.
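A hedged sketch combining the two fields (the directory and key file names are samples):
# defaults.json
"LicenseDir": "/Samples/License",
...

# definitions.json
{
    "Role": "DATA",
    "Count": "4",
    "LicenseKey": "ubuntu-sharding-iris.key"
}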

ICM Security

The security measures included in ICM are described in the following sections:
For information about the ICM fields used to specify the files needed for the security described here, see Security-Related Parameters.

Host Node Communication

A host node is the host machine on which containers are deployed. It may be virtual or physical, running in the cloud or on-premises.
ICM uses SSH to log into host nodes and remotely execute commands on them, and SCP to copy files between the ICM container and a host node. To enable this secure communication, you must provide an SSH public/private key pair and specify these keys in the defaults.json file as SSHPublicKey and SSHPrivateKey. During the configuration phase, ICM disables password login on each host node, copies the private key to the node, and opens port 22, enabling clients with the corresponding public key to use SSH and SCP to connect to the node.
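For example, a hedged defaults.json fragment (the key pair paths are placeholders within the ICM container):
"SSHPublicKey": "/Samples/ssh/insecure.pub",
"SSHPrivateKey": "/Samples/ssh/insecure",
...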
Other ports opened on the host machine are covered in the sections that follow.

Docker

During provisioning, ICM downloads and installs a specific version of Docker from the official Docker web site using a GPG fingerprint. ICM then copies the TLS certificates you provide (located in the directory specified by the TLSKeyDir field in the defaults file) to the host machine, starts the Docker daemon with TLS enabled, and opens port 2376. At this point clients with the corresponding certificates can issue Docker commands to the host machine.

Weave Net

During provisioning, ICM launches Weave Net with options to encrypt traffic and require a password (provided by the user) from each machine joining the Weave network.
Note:
You can disable encryption of Weave Net traffic by setting the WeavePassword to the literal "null" in the defaults.json file. (By default, this parameter is generated by ICM and is set to the Weave Net password provided by the user.)

Weave Scope, Rancher

ICM does not install these monitoring products by default. When deployed, they have authentication enabled; credentials must be provided in the defaults.json file. For more information, see Monitoring in ICM.

InterSystems IRIS

For detailed and comprehensive information about InterSystems IRIS security, see the InterSystems IRIS Security Administration Guide.

Security Level

ICM expects that the InterSystems IRIS image was installed with Normal security (as opposed to Minimal or Locked Down).

Predefined Account Password

To secure the InterSystems IRIS instance, the default password for predefined accounts must be changed by ICM. The first time ICM runs the InterSystems IRIS container, passwords on all enabled accounts with non-null roles are changed to a password provided by the user. If you don’t want the InterSystems IRIS password to appear in the definitions files, or in your command-line history using the -iscPassword option, you can omit both; ICM interactively prompts for the password, masking your typing. Because passwords are persisted, they are not changed when the InterSystems IRIS container is restarted or upgraded.

JDBC

ICM opens JDBC connections to InterSystems IRIS in SSL/TLS mode (as required by InterSystems IRIS), using the files located in the directory specified by the TLSKeyDir field in the defaults file.

Mirroring

ICM creates mirrors with SSL/TLS enabled (see the “Mirroring” chapter of the High Availability Guide), using the files located in the directory specified by the TLSKeyDir field in the defaults file. Failover members can join a mirror only if SSL/TLS is enabled.

InterSystems Web Gateway

ICM configures WS nodes to communicate with DM and AM nodes using SSL/TLS, using the files located in the directory specified by the TLSKeyDir field in the defaults file.

Centralized Security

InterSystems recommends the use of an LDAP server to implement centralized security across the nodes of a sharded cluster or other ICM deployment. For information about using LDAP with InterSystems IRIS, see the “Using LDAP” chapter of the Security Administration Guide.

Private Networks

ICM can deploy on an existing private network (not accessible from the Internet) if you configure the access it requires. ICM can also create a private network on which to deploy and configure its own access through a bastion host. For more information on using private networks, see Deploying on a Private Network.

Deploying Across Multiple Zones

Some cloud providers allow their virtual networks to span multiple zones within a given region. For some deployments, you may want to take advantage of this to deploy different nodes in different zones. For example, if you deploy a DM mirror that includes a failover pair and two DR asyncs (see Mirrored Configuration Requirements), you can accomplish the cloud equivalent of putting physical DR asyncs in remote data centers by deploying the failover pair, the first async, and the second async in three different zones.
To specify multiple zones when deploying on AWS, GCP, and Azure, populate the Zone field in the defaults file with a comma-separated list of zones. Here is an example for AWS:
{
    "Provider": "AWS",
    . . .
    "Region": "us-west-1",
    "Zone": "us-west-1b,us-west-1c"
}

For GCP:

    "Provider": "GCP",
    . . .
    "Region": "us-east1",
    "Zone": "us-east1-b,us-east1-c"
}

For Azure:
    "Provider": "Azure",
    . . .
    "Region": "Central US",
    "Zone": "1,2"

The specified zones are assigned to nodes in round-robin fashion. For example, if you use the first example and provision four DS nodes, the first and third will be provisioned in us-west-1b, the second and fourth in us-west-1c.
Round-robin distribution may lead to undesirable results, however; the preceding Zone specifications would place the primary and backup members of mirrored DM or DS nodes in different zones, for example, which might not be appropriate for your application due to higher latency between the members (see Network Latency Considerations in the “Mirroring” chapter of the High Availability Guide). To choose which nodes go in which zones, you can add the ZoneMap field to a node definition in the definitions.json file, placing a single node in a particular zone from the Zone field or specifying a pattern of zone placement for multiple nodes. This is shown in the following specifications for a distributed cache cluster with a mirrored data server:
  • defaults.json
    "Region": "us-west-1",
    "Zone": "us-west-1a,us-west-1b,us-west-1c"
    
    
  • definitions.json
    "Role": "DM",
    "Count": "4”,
    "MirrorMap": "primary,backup,async,async",
    "ZoneMap": "0,0,1,2",
    ...
    "Role": "AM",
    "Count": "3”,
    "MirrorMap": "primary,backup,async,async",
    "ZoneMap": "0,1,2",
    ...
    "Role": "AR",
    ...
    
This places the primary and backup mirror members in us-west-1a and one application server in each zone, while the asyncs are in different zones from the failover pair to maximize their availability if needed — the first in us-west-1b and the second in us-west-1c. The arbiter node does not need a ZoneMap field to be placed in us-west-1a with the failover pair; round-robin distribution will take care of that.

Deploying Across Multiple Regions

For a number of reasons, deploying across multiple cloud provider regions requires additional steps in the deployment process. These are summarized in the following:
  1. Merge the multiregion infrastructure using the icm merge command.
  2. Review the merged definitions.json file to reorder and update as needed.
  3. Reprovision the merged infrastructure using the icm provision command.
  4. Deploy services on the merged infrastructure as a Preexisting deployment using the icm run command.
  5. When unprovisioning the infrastructure, issue the icm unprovision command separately in the original session directories.
Important:
Although the failover members of a mirror can be deployed in different regions, this is not recommended due to the problems in mirror operation caused by the typically high network latency between regions. For more information on latency considerations for mirrors, see Network Latency Considerations in the “Mirroring” chapter of the High Availability Guide.
Note:
Deployment across regions and deployment on a private network, as described in Deploying on a Private Network, are not compatible in this release.

Provision the Infrastructure

The separate sessions for provisioning infrastructure in each region (specified by the Region field for AWS and GCP and by the Location field for Azure) should be conducted in separate working directories within the same ICM container. For example, you could begin by copying the provided /Samples/GCP directory (see Define the Deployment in the “Using ICM” chapter) to /Samples/GCP/us-east1 and /Samples/GCP/us-west1. In the definitions and defaults files for each, specify the desired region, and define nodes and features to match the eventual multiregion deployment. For example, if you want to deploy a mirror failover pair in one region and a DR async member of the mirror in another, include the appropriate region and zone and "Mirror": "true" in both defaults files, and define two DMs (for the failover pair) in one region in its definitions file, a third DM (for the async) in the other, and a single AR (arbiter) node in one or the other. Each defaults file should have a unique Label and/or Tag to prevent conflicts. This example is shown in the following:
# defaults.json us-east1:

"Provider": "GCP",
"Label": "Acme",
"Tag": "EAST",
"Region": "us-east1",
"Zone": "us-east1b",
"Mirror": "true",

# defaults.json us-west1:

"Provider": "GCP",
"Label": "Acme",
"Tag": "WEST",
"Region": "us-west1",
"Zone": "us-west1a",
"Mirror": "true",

# definitions.json us-east1:

"Role": "DM",
"Count": "2",
...
"Role": "AR"
"StartCount": "3",

# definitions.json us-west1:

"Role": "DM",
"Count": "1",

If a given definition doesn't satisfy topology requirements for a single-region deployment, for example a single DM node defined when Mirror is set to true, disable topology validation by including "SkipTopologyValidation": "true" in the defaults file.
Use the icm provision command in each working directory to provision the infrastructure in each region. The output of icm provision, and of the icm inventory command, executed in each directory, shows you the infrastructure you are working with, for example:
$ icm provision
...
Machine            IP Address     DNS Name                   Region    Zone
-------            ----------     --------                   ------    ----
Acme-DM-EAST-0001+ 104.196.97.112 112.97.196.104.google.com  us-east1  b
Acme-DM-EAST-0002- 104.196.97.113 113.97.196.104.google.com  us-east1  b
Acme-AR-EAST-0003  104.196.97.114 114.97.196.104.google.com  us-east1  b

$ icm provision
...
Machine            IP Address     DNS Name                   Region    Zone
-------            ----------     --------                   ------    ----
Acme-DM-WEST-0001+ 104.196.91.134 134.91.196.104.google.com  us-west1  a

Merge the Provisioned Infrastructure

The icm merge command scans the configuration files in the current working directory and the additional directories specified to create merged configuration files that can be used for a Preexisting deployment in a specified new directory. For example, to merge the definitions and defaults files in /Samples/GCP/us-east1 and /Samples/GCP/us-west1 into a new set in /Samples/GCP/merge, you would issue the following commands:
$ cd /Samples/GCP/us-east1
$ mkdir ../merge
$ icm merge -options ../us-west1/instances.json -localPath /Samples/GCP/merge

Review the Merged Definitions File

When you examine the new configuration files, you will see that Provider has been changed to PreExisting in the defaults file. (The previous Provider field and others have been moved into the definitions file; they are displayed by the icm inventory command, but otherwise have no effect.) The Label and/or Tag can be modified if desired.
The definitions in the merged definitions file have been converted for use with provider PreExisting. As described in Definitions File in the appendix “Deploying on a Preexisting Cluster”, the definitions.json file for a Preexisting deployment contains exactly one entry per node (rather than one entry per role with a Count field to specify the number of nodes of that role). Each node is identified by its IP address or fully-qualified domain name. Either the IPAddress or DNSName field must be included in each definition, as well as the SSHUser field. (The latter specifies a nonroot user with passwordless sudo access, as described in SSH in “Deploying on a Preexisting Cluster”.) In the merged file, the definitions have been grouped by region; they should be reordered to reflect desired placement of mirror members, if necessary, and a suitable mirror map defined (see Mirrored Configuration Requirements and Deploying Across Multiple Zones). After review, the definitions file for our example would look like this:
[
    {
        "Role":"DM",
        "IPAddress":"104.196.97.112",
        "LicenseKey": "ubuntu-sharding-iris.key",
        "SSHUser": "icmuser",
        "MirrorMap": "primary,backup,async"
    },
    {
        "Role":"DM",
        "IPAddress":"104.196.97.113",
        "LicenseKey": "ubuntu-sharding-iris.key",
        "SSHUser": "icmuser",
        "MirrorMap": "primary,backup,async"
    },
    {
        "Role":"DM",
        "IPAddress":"104.196.91.134",
        "LicenseKey": "ubuntu-sharding-iris.key",
        "SSHUser": "icmuser",
        "MirrorMap": "primary,backup,async"
    },
    {
        "Role":"AR",
        "IPAddress":"104.196.97.114",
        "SSHUser": "icmuser",
        "StartCount": "4"
    }

]

Reprovision the Merged Infrastructure

Reprovision the merged infrastructure by issuing the icm provision command in the new directory (/Samples/GCP/merge in the example). The output shows the merged infrastructure in one list:
$ icm provision
...
Machine             IP Address     DNS Name                   Region    Zone
-------             ----------     --------                   ------    ----
Acme-DM-MERGE-0001+ 104.196.97.112 112.97.196.104.google.com  us-east1  b
Acme-DM-MERGE-0002- 104.196.97.113 113.97.196.104.google.com  us-east1  b
Acme-DM-MERGE-0003  104.196.91.134 134.91.196.104.google.com  us-west1  a
Acme-AR-MERGE-0004  104.196.97.114 114.97.196.104.google.com  us-east1  b

Deploy Services on the Merged Infrastructure

Use the icm run command to deploy services on your merged infrastructure, as you would for any deployment, for example:
$ icm run
. . .
-> Management Portal available at: http://112.97.196.104.google.com:52773/csp/sys/UtilHome.csp
$ icm ps
Machine            IP Address     Container  Status Health  Mirror    Image
-------            ----------     ---------  ------ ------  ------    -----
Acme-DM-MERGE-0001 104.196.97.112 iris       Up     healthy PRIMARY   isc/iris:stable
Acme-DM-MERGE-0002 104.196.97.113 iris       Up     healthy BACKUP    isc/iris:stable
Acme-DM-MERGE-0003 104.196.91.134 iris       Up     healthy CONNECTED isc/iris:stable
Acme-AR-MERGE-0004 104.196.97.114 arbiter    Up     healthy           isc/arbiter:stable

Unprovision the Merged Infrastructure

When the time comes to unprovision the multiregion deployment, return to the original working directories to issue the icm unprovision command, and then delete the merged working directory. In our example, you would do the following:
$ cd /Samples/GCP/us-east1
$ icm unprovision -force -cleanUp
...
...completed destroy of Acme-EAST
$ cd /Samples/GCP/us-west1
$ icm unprovision -force -cleanUp
...
...completed destroy of Acme-WEST
$ rm -rf /Samples/GCP/merge

Deploying on a Private Network

ICM configures the firewall on each host node to expose only the ports and protocols required for its intended role. For example, the ISCAgent port is exposed only if mirroring is enabled and the role is one of AR, DATA, DM, or DS.
However, you may not want your configuration accessible from the public Internet at all. When this is the case, you can use ICM to deploy a configuration on a private network, so that it offers no direct public access. If ICM itself is deployed on that network, it is able to provision and deploy in the normal manner, but if it is not, you must provision a node, called a bastion host, that belongs to both the private network and the public network and gives ICM access to the former. Given these factors, there are three approaches to using a private network:
  • Install and run ICM within an existing private network, which you describe to ICM using several fields, some of which vary by provider.
  • Have ICM provision a bastion host to give it access to the private network, and provision and deploy the configuration on either:
    • A private network created by ICM.
    • An existing private network, which you describe using the appropriate fields.

Deploy Within an Existing Private Network

If you deploy ICM on an existing private network and want to provision and deploy on that network, as shown in the following illustration, you need to add fields to the defaults and definitions files for the configuration you want to deploy.
ICM Deployed within Private Subnet
images/gicm_private.png
To deploy on an existing private network, follow these steps:
  1. Obtain access to a node that resides within the private network. This may require use of a VPN or intermediate host.
  2. Install Docker and ICM on the node as described in Launch ICM in the “Using ICM” chapter.
  3. Add the following fields to the defaults.json file:
    "PrivateSubnet": "true",
    "net_vpc_cidr": "10.0.0.0/16",
    "net_subnet_cidr": "10.0.2.0/24"
    
    The net_vpc_cidr and net_subnet_cidr fields (shown with sample values) specify the CIDRs of the private network and the node’s subnet within that network, respectively.
  4. Add the appropriate provider-specific fields to the defaults.json file, as follows:
    Provider Key Description
    all PrivateSubnet Must be set to true
    all net_vpc_cidr CIDR of the private network (see Note)
    all net_subnet_cidr CIDR of the ICM node’s subnet within the private network (see Note)
    GCP Network Google VPC
    GCP Subnet Google subnetwork
    Azure ResourceGroupName AzureRM resource group
    Azure VirtualNetworkName AzureRM virtual network
    Azure SubnetId AzureRM subnet ID (see Note)
    AWS (see Note) VPCId AWS VPC ID
    AWS (see Note) SubnetIds Comma-separated list of AWS subnet IDs, one for each element specified by the Zone field
    Note:
    When provisioning on Azure, a unique SubnetId and corresponding net_subnet_cidr must be provided for every entry in the definitions file (but ResourceGroupName and VirtualNetworkName remain in the defaults file). This includes the BH definition when deploying a bastion host, as described in the following section.
    To deploy IRIS within an existing private VPC on AWS, you must create a node within that VPC on which you can deploy and use ICM. If you want to reach this ICM host from outside the VPC, you can specify a route table and Internet gateway for ICM to use instead of creating its own. To do this, add the RouteTableId and InternetGatewayId fields to your defaults.json file, for example:
    "RouteTableID": "rtb-00bef388a03747469",
    "InternetGatewayId": "igw-027ad2d2b769344a3"
    
    When provisioning on GCP, the net_subnet_cidr field is descriptive, not prescriptive; it should be an address space that includes the node’s subnet, as well as any other subnets within the network that should have access to the deployed configuration.
  5. Use icm provision and icm run to provision and deploy your configuration.
Bear the following in mind when deploying on a private network.
  • Viewing web pages on any node within the private network, for example the Management Portal, requires a browser that also resides within the private network, or for which a proxy or VPN has been configured.
  • Any DNS name shown in the output of ICM commands is just a copy of the local IP address.
  • Private network deployment across regions is currently not supported.

Deploy on a Private Network Through a Bastion Host

If you set the PrivateSubnet field to true in the defaults file but don't include the fields required to use an existing network, ICM creates a private network for you. You cannot complete the provisioning phase in this situation, however, because ICM is unable to configure or otherwise interact with the machines it just allocated. To enable its interaction with nodes on the private network it creates, ICM can optionally create a bastion host, a host node that belongs to both the private subnet and the public network and can broker communication between them.
ICM Deployed Outside a Private Network with a Bastion Host
images/gicm_bastion.png
To create a private network and a bastion host providing ICM with access to that network, add a definition for a single node of type BH to the definitions.json file, for example:
   {
       "Role": "DATA",
       "Count": "3"
   },
   {
       "Role": "BH",
       "Count": "1",
       "StartCount: 4"
   }

To deploy and use a bastion host with an existing private network, add a BH definition to the definitions file, as above, and include the fields necessary to specify the network in the defaults file (as described in the previous section). ICM automatically sets the PrivateSubnet field to true when a BH node definition is included in definitions.json.
The bastion host can be accessed using SSH, allowing users to tunnel SSH commands to the private network. Using this technique, ICM is able to allocate and configure compute instances within the private network from outside, allowing provisioning to succeed, for example:
$ icm inventory
Machine             IP Address     DNS Name                     Region   Zone
-------             ----------     --------                     ------   ----
Acme-BH-TEST-0004   35.237.125.218 218.125.237.35.bc.google.com us-east1 b
Acme-DATA-TEST-0001 10.0.0.2       10.0.0.2                     us-east1 b
Acme-DATA-TEST-0002 10.0.0.3       10.0.0.3                     us-east1 b
Acme-DATA-TEST-0003 10.0.0.4       10.0.0.4                     us-east1 b

Once the configuration is deployed, it is possible to run the ssh command against any node, for example:
# icm ssh -role DATA -interactive
ubuntu@ip-10.0.0.2:~$

If you examine the command being run, however, you can see that it is routed through the bastion host:
$ icm ssh -role DATA -interactive -verbose
ssh -A -t -i /Samples/ssh/insecure -p 3022 ubuntu@35.237.125.218
ubuntu@ip-10.0.0.2:~$

Other commands, however, require access to ports and protocols besides SSH. To provide this access, ICM configures tunnels between the bastion host and nodes within the cluster for Docker, JDBC, and HTTP. This allows commands such as icm run, icm exec, and icm sql to succeed.
Bear the following in mind when deploying a bastion host:
  • The address of the configuration’s Management Portal is that of the bastion host.
  • For security reasons, no private keys are stored on the bastion host.
  • Any DNS name shown in the output of ICM commands is just a copy of the local IP address.
  • Provisioning of load balancers in a deployment that includes a bastion host is not supported.
  • Use of a bastion host with multiregion deployments (see Deploying Across Multiple Regions) and in distributed management mode (see the appendix “Sharing ICM Deployments”) is currently not supported.
    Note:
    When you create a custom VPC in the Google portal, you are required to create a default subnet. If you are provisioning with a bastion host and will use the subnet created by ICM, you should delete this default subnet before provisioning (or give it an address space that won't collide with the default address space 10.0.0.0/16).

Monitoring in ICM

ICM offers two basic monitoring facilities, Weave Scope and Rancher, described in the sections that follow.
Neither is deployed by default; you must specify the one you want in your defaults file using the Monitor field.
Important:
For security reasons, these monitoring facilities are not recommended for production use.

Weave Scope

Weave Scope, a product of Weaveworks, runs as a distributed system; there is no preferred node, and its web server is available on all host nodes (at port 4040). Because Weave Scope runs on top of Weave Net, it can be deployed only for Overlay Networks of type "weave". Weave Scope can be deployed by including the following in your defaults.json file:
"Monitor": "scope"
Because the free version of Weave Scope does not offer authentication or HTTPS for its web interface, the ProxyImage parameter lets you specify an additional Docker image that ICM uses as a (reverse) proxy to the native Weave Scope interface, for example:
"ProxyImage": "intersystems/https-proxy-auth:stable"
The proxy configures HTTPS to use the SSL keys located in the directory specified by the TLSKeyDir parameter and carries out authentication using the MonitorUsername and MonitorPassword parameters.
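Putting these fields together, a defaults.json fragment enabling authenticated, HTTPS-proxied Weave Scope might look like the following (a sketch; the key directory and credentials are placeholders for values you provide):
"Monitor": "scope",
"ProxyImage": "intersystems/https-proxy-auth:stable",
"TLSKeyDir": "/path/to/tls/keys",
"MonitorUsername": "<username>",
"MonitorPassword": "<password>"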
When provisioning is complete, the URL for Weave Scope is displayed, for example:
Weave Scope available at https://54.191.23.24:4041
The Weave Scope UI is available at all node IP addresses, not just the one listed.
Note:
Fully-qualified domain names may not work with unsigned certificates, in which case use the IP address instead.

Rancher

ICM installs Rancher version 1.6, a product of Rancher Labs. Rancher consists of a Rancher Server and several Rancher Agents, and can be deployed using the following key in your defaults.json file:
"Monitor": "rancher"
ICM also requires the following keys to configure local authentication using Rancher's REST API:
"MonitorUsername": "<username>",
"MonitorPassword": "<password>"
When provisioning is complete, the URL for the Rancher Server is displayed, for example:
Rancher Server available at: http://00.186.24.14:8080/env/1a5/infra/hosts
For information about alternative authentication methods available for Rancher, see http://docs.rancher.com/rancher/latest/en/configuration/access-control/.

ICM Troubleshooting

When an error occurs during an ICM operation, ICM displays a message directing you to the log file in which information about the error can be found. Before beginning an ICM deployment, familiarize yourself with the log files and their locations as described in Log Files and Other ICM Files.
In addition to the topics that follow, please see Additional Docker/InterSystems IRIS Considerations in Running InterSystems IRIS in Containers for information about important considerations when creating and running InterSystems IRIS container images.

Host Node Restart and Recovery

When a cloud host node is shut down and restarted due to an unplanned outage or to planned action by the cloud provider (for example, for preventive maintenance) or user (for example, to reduce costs), its IP address and domain name may change, causing problems for both ICM and deployed applications (including InterSystems IRIS).
This behavior differs by cloud provider. GCP and Azure preserve the IP address and domain name across host node restart by default, whereas on AWS this feature is optional (see AWS Elastic IP Feature).
Reasons a host node might be shut down include the following:
  • Unplanned outage
    • Power outage
    • Kernel panic
  • Preventive maintenance initiated by provider
  • Cost reduction strategy initiated by user
Methods for intentionally shutting down host nodes include:
  • Using the cloud provider user interface
  • Using ICM:
    icm ssh -command 'sudo shutdown'
    

AWS Elastic IP Feature

The AWS Elastic IP feature preserves IP addresses and domain names across host node restarts. ICM disables this feature by default because it incurs additional charges for stopped machines (but not for running ones), and because AWS allows only five Elastic IP addresses per region (or VPC) unless a request is made to increase this limit. To enable this feature, set the ElasticIP field to true in your defaults.json file. For more information on this feature, see Elastic IP Addresses in the AWS documentation.
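For example, in defaults.json:
"ElasticIP": "true"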

Recovery and Restart Procedure

If the IP address and domain name of a host node change, ICM can no longer communicate with the node and a manual update is therefore required, followed by an update to the cluster. The Weave network deployed by ICM includes a decentralized discovery service, which means that if at least one host node has kept its original IP address, the other host nodes will be able to reach it and reestablish all of their connections with one another. However, if the IP address of every host node in the cluster has changed, an additional step is needed to connect all the nodes in the Weave network to a valid IP address.
The manual update procedure is as follows:
  1. Go to the web console of the cloud provider and locate your instances there. Record the IP address and domain name of each, for example:
    Node                 IP Address      Domain Name
    ACME-DATA-TEST-0001  54.191.233.2    ec2-54-191-233-2.amazonaws.com
    ACME-DATA-TEST-0002  54.202.223.57   ec2-54-202-223-57.amazonaws.com
    ACME-DATA-TEST-0003  54.202.223.58   ec2-54-202-223-58.amazonaws.com
  2. Edit the instances.json file (see The Instances File in the chapter “Essential ICM Elements”) and update the IPAddress and DNSName fields for each instance, for example:
    "Label" : "SHARDING",
    "Role" : "DATA",
    "Tag" : "TEST",
    "MachineName" : "ACME-DATA-TEST-0001",
    "IPAddress" : "54.191.233.2",
    "DNSName" : "ec2-54-191-233-2.amazonaws.com",
    
  3. Verify that the values are correct using the icm inventory command:
    $ icm inventory
    Machine                 IP Address    DNS Name                        Region   Zone
    -------                 ----------    --------                        ------   ----
    ACME-DATA-TEST-0001 54.191.233.2  ec2-54-191-233-2.amazonaws.com  us-east1 b
    ACME-DATA-TEST-0002 54.202.223.57 ec2-54-202-223-57.amazonaws.com us-east1 b
    ACME-DATA-TEST-0003 54.202.223.58 ec2-54-202-223-58.amazonaws.com us-east1 b
    
  4. Use the icm ps command to verify that the host nodes are reachable:
    
    $ icm ps -container weave
    Machine                   IP Address      Container   Status   Health    Image
    -------                   ----------      ---------   ------   ------    -----
    ACME-DATA-TEST-0001   54.191.233.2    weave       Up                 weaveworks/weave:2.0.4
    ACME-DATA-TEST-0002   54.202.223.57   weave       Up                 weaveworks/weave:2.0.4
    ACME-DATA-TEST-0003   54.202.223.58   weave       Up                 weaveworks/weave:2.0.4
    
    
  5. If all of the IP addresses have changed, select one of the new addresses, such as 54.191.233.2 in our example. Then connect each node to this IP address using the icm ssh command, as follows:
    $ icm ssh -command "weave connect --replace 54.191.233.2"
    Executing command 'weave connect 54.191.233.2' on host ACME-DATA-TEST-0001...
    Executing command 'weave connect 54.191.233.2' on host ACME-DATA-TEST-0002...
    Executing command 'weave connect 54.191.233.2' on host ACME-DATA-TEST-0003...
    ...executed on ACME-DATA-TEST-0001
    ...executed on ACME-DATA-TEST-0002
    ...executed on ACME-DATA-TEST-0003
    

Correcting Time Skew

If the system time within the ICM containers differs from Standard Time by more than a few minutes, the various cloud providers may reject requests from ICM. This can happen if the container is unable to reach an NTP server on startup (initial or after being stopped or paused). The error appears in the terraform.err file as some variation on the following:
Error refreshing state: 1 error(s) occurred:

    # icm provision
    Error: Thread exited with value 1
    Signature expired: 20170504T170025Z is now earlier than 20170504T171441Z (20170504T172941Z - 15 min.)
    status code: 403, request id: 41f1c4c3-30ef-11e7-afcb-3d4015da6526

The solution is to manually run NTP, for example:
ntpd -nqp pool.ntp.org
and verify that the time is now correct. (See also the discussion of the --cap-add option in Launch ICM.)
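A minimal sketch of applying and verifying the fix from inside the ICM container, assuming the container was granted the SYS_TIME capability when launched (see the --cap-add discussion in Launch ICM):
# resynchronize the clock once against a public NTP pool
ntpd -nqp pool.ntp.org
# confirm that the container clock now shows the correct UTC time
date -u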

Timeouts Under ICM

When the target system is under extreme load, various operations in ICM may time out. Many of these timeouts are not under direct ICM control (for example, from cloud providers); other operations are retried several times, for example SSH and JDBC connections.
SSH timeouts are sometimes not identified as such. For instance, in the following example, an SSH timeout manifests as a generic exception from the underlying library:
# icm cp -localPath foo.txt -remotePath /tmp/
2017-03-28 18:40:19 ERROR Docker:324 - Error: 
java.io.IOException: com.jcraft.jsch.JSchException: channel is not opened. 
2017-03-28 18:40:19 ERROR Docker:24 - java.lang.Exception: Errors occurred during execution; aborting operation 
        at com.intersystems.tbd.provision.SSH.sshCommand(SSH.java:419) 
        at com.intersystems.tbd.provision.Provision.execute(Provision.java:173) 
        at com.intersystems.tbd.provision.Main.main(Main.java:22)

In this case the recommended course of action is to retry the operation (after identifying and resolving its proximate cause).
Note that for security reasons ICM sets the default SSH timeout for idle sessions at ten minutes (60 seconds x 10 retries). These values can be changed by modifying the following fields in the /etc/ssh/sshd_config file:
ClientAliveInterval 60
ClientAliveCountMax 10
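For example, to lengthen the idle timeout on all host nodes after deployment, you might push the change with icm ssh (a sketch under the assumption that the ClientAliveInterval field is already present in sshd_config; the value chosen is at your discretion):
icm ssh -command "sudo sed -i 's/^ClientAliveInterval.*/ClientAliveInterval 120/' /etc/ssh/sshd_config && sudo systemctl restart sshd"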

Docker Bridge Network IP Address Range Conflict

For container networking, Docker uses a bridge network (see Use bridge networks in the Docker documentation) on subnet 172.17.0.0/16 by default. If this subnet is already in use on your network, collisions may occur that prevent Docker from starting up or prevent you from being able to reach your deployed host nodes. This problem can arise on the machine hosting your ICM container, your InterSystems IRIS cluster nodes, or both.
To resolve this, you can edit the bridge network’s IP configuration in the Docker configuration file to reassign the subnet to a range that is not in conflict with your own IP addresses (your IT department can help you determine this value). To make this change, add a line like the following to the Docker daemon configuration file:
"bip": "192.168.0.1/24"
If the problem arises with the ICM container, edit the file /etc/docker/daemon.json on the container’s host. If the problem arises with the host nodes in a deployed configuration, edit the file /ICM/etc/toHost/daemon.json in the ICM container; by default this file contains the value in the preceding example, which is likely to avoid problems with any deployment type except PreExisting.
Detailed information about the contents of the daemon.json file can be found in Daemon configuration file in the Docker documentation; see also Configure and troubleshoot the Docker daemon.
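For example, a minimal daemon.json that reassigns the bridge network might contain only the following (the address range shown is illustrative; choose one that does not conflict with your network):
{
    "bip": "192.168.0.1/24"
}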

Weave Network IP Address Range Conflict

By default, the Weave network uses IP address range 10.32.0.0/12. If this conflicts with an existing network, you may see an error such as the following in log file installWeave.log:
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
ERROR: Default --ipalloc-range 10.32.0.0/12 overlaps with existing route on host.
You must pick another range and set it on all hosts.

This is most likely to occur with provider PreExisting if the machines provided have undergone custom network configuration to support other software or local policies. If disabling or moving the other network is not an option, you can change the Weave configuration instead, using the following procedure:
  1. Edit the following file local to the ICM container:
    /ICM/etc/toHost/installWeave.sh
    
  2. Find the line containing the string weave launch. If you're confident there is no danger of overlap between Weave and the existing network, you can force Weave to continue using the default range by adding the --ipalloc-range option shown in the following:
    sudo /usr/local/bin/weave launch --ipalloc-range 10.32.0.0/12 --password $2 
    
    You can also simply move Weave to another private network, as follows:
    sudo /usr/local/bin/weave launch --ipalloc-range 172.30.0.0/16 --password $2
    
  3. Save the file.
  4. Reprovision the cluster.

Huge Pages

On certain architectures you may see an error similar to the following in the InterSystems IRIS messages log:
0 Automatically configuring buffers 
1 Insufficient privileges to allocate Huge Pages; non-root instance requires CAP_IPC_LOCK capability for Huge Pages. 
2 Failed to allocate 1316MB shared memory using Huge Pages. Startup will retry with standard pages. If huge pages 
  are needed for performance, check the OS settings and consider marking them as required with the InterSystems IRIS 
  'memlock' configuration parameter.

This can be remedied by providing the following option to the icm run command:
-options "--cap-add IPC_LOCK"
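For example (a fragment; any other options you normally pass to icm run are unchanged):
icm run -options "--cap-add IPC_LOCK"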

ICM Configuration Parameters

These tables describe the fields you can include in the configuration files (see Configuration, State and Log Files) to provide ICM with the information it needs to execute provisioning and deployment tasks and management commands.

General Parameters

Parameter Meaning
LicenseDir Location of InterSystems IRIS license keys staged in the ICM container and individually specified by the LicenseKey field (below); see InterSystems IRIS Licensing for ICM.
LicenseKey License key for the InterSystems IRIS instance on one or more provisioned DATA, COMPUTE, DM, AM, DS, or QS nodes, staged within the ICM container in the location specified by the LicenseDir field (above).
ISCPassword Password that will be set for the _SYSTEM, Admin, and SuperUser accounts on the InterSystems IRIS instances on one or more provisioned nodes. Corresponding command-line option: -iscPassword.
DockerImage Docker image to be used for icm run and icm pull commands. Must include the repository name. Corresponding command-line option: -image.
DockerRegistry DNS name of the server hosting the Docker repository storing the image specified by DockerImage. If not included, ICM uses Docker’s public registry located at registry-1.docker.io.
DockerUsername Username to use for Docker login to the repository specified in DockerImage on the registry specified by DockerRegistry. Not required for public repositories. If not included and the repository specified by DockerImage is private, login fails.
DockerPassword Password to use for Docker login, along with DockerUsername. Not required for public repositories. If this field is not included and the repository specified by DockerImage is private, ICM prompts you (with masked input) for a password. (If the value of this field contains special characters such as $, |, (, and ), they must be escaped with two \ characters; for example, the password abc$def must be specified as abc\\$def.)
DockerVersion Version of Docker installed on provisioned nodes. Default is ce-18.09.1.ce.
Important:
The Docker images from InterSystems optionally deployed by ICM comply with the OCI support specification, and are supported on Enterprise Edition and Community Edition 18.03 and later. Only Docker Enterprise Edition is supported for production environments.
Not all combinations of platform and Docker version are supported by Docker; for detailed information from Docker on compatibility, see the Compatibility Matrix and About Docker CE.
DockerURL
URL of the Docker Enterprise Edition repository associated with your subscription or trial; when provided, triggers installation of Docker Enterprise Edition on provisioned nodes, instead of Docker Community Edition.
DockerStorageDriver Determines the storage driver used by Docker. Values include overlay2 and devicemapper (the default).
Note:
  • If DockerStorageDriver is set to overlay2, the FileSystem parameter must be set to xfs.
  • If DockerStorageDriver is set to overlay2, the DockerDeviceName parameter (see Device Name Parameters) must be set to null.
Home Root of the home directory on a provisioned node. Default: /home.
Mirror If true, InterSystems IRIS instances are deployed with mirroring enabled; see Mirrored Configuration Requirements. Default: false.
MirrorMap Determines mirror member types; see Rules for Mirroring. Valid values are primary, backup, async. Default: primary,backup.
Spark
If true, and the spark image is specified by the DockerImage field, Spark is deployed with InterSystems IRIS in the iris container (see The icm run Command for more information). Default: false.
LoadBalancer Set to true in definitions of node type AM, WS, VM, or CN for automatic provisioning of load balancer on providers AWS, GCP, and Azure. Default: false.
Namespace Namespace to be created during deployment; can also be specified or overridden by the command-line option -namespace. This namespace is also the default namespace for the icm session and icm sql commands. Default: IRISCLUSTER.
Provider Platform to provision infrastructure on; see Provisioning Platforms. Default: none.
Count Number of nodes to provision from a given entry in the definitions file. Default: 1.
Label Name shared by all nodes in this deployment, for example ACME; cannot contain dashes.
Role Role of the node or nodes to be provisioned by a given entry in the definitions file, for example DM or DATA; see ICM Node Types.
Tag Additional name used to differentiate between deployments, for example TEST; cannot contain dashes.
StartCount Numbering start for a particular node definition in the definitions file. For example, if the DS node definition includes "StartCount": "3", the first DS node provisioned is named Label-DS-Tag-0003.
Zone Zone in which to locate a node or nodes to be provisioned. Example: us-east1-b. (See also definitions in the Provider-Specific Parameters tables.) For information on using this field with multiple zones, see Deploying Across Multiple Zones.
ZoneMap
When multiple zones are specified, specifies which nodes are deployed in which zones. Default: 0,1,2,...,255. For information on using this field with multiple zones, see Deploying Across Multiple Zones.
FileSystem
Type of file system to use for persistent volumes. Valid values are ext2, ext3, ext4, xfs, and btrfs. Default: xfs.
Note:
See the DockerStorageDriver parameter for restrictions.
DataMountPoint
The location on the host node where the persistent volume described by DataDeviceName (see Device Name Parameters) will be mounted. Default: /irissys/data.
WIJMountPoint *
The location on the host node where the persistent volume described by WIJDeviceName (see Device Name Parameters) will be mounted. Default: /irissys/wij.
Journal1MountPoint *
The location on the host node where the persistent volume described by Journal1DeviceName (see Device Name Parameters) will be mounted. Default: /irissys/journal1j.
Journal2MountPoint *
The location on the host node where the persistent volume described by Journal2DeviceName (see Device Name Parameters) will be mounted. Default: /irissys/journal2j.
OSVolumeSize Size (in GB) of the OS volume to create for deployments other than type PreExisting. Default: 32.
Note:
In some cases, this setting must be greater than or equal to a value specific to the OS image template or snapshot; for more information, see Creating a Virtual Machine from a Template in the Terraform documentation.
DockerVolumeSize
Size (in GB) of the block storage device used for the Docker thin pool for providers other than PreExisting. This volume corresponds to the DockerDeviceName parameter (see Device Name Parameters). Default: 10.
DataVolumeSize Size (in GB) of the persistent data volume to create for deployments other than type PreExisting. This volume corresponds to the DataDeviceName parameter (see Device Name Parameters) and will be mounted at DataMountPoint. Default: 10.
WIJVolumeSize Size (in GB) of the persistent WIJ volume to create for deployments other than PreExisting. This volume corresponds to the WIJDeviceName parameter (see Device Name Parameters) and will be mounted at WIJMountPoint. Default: 10.
Journal1VolumeSize Size (in GB) of the persistent Journal volume to create for deployments other than type PreExisting. This volume corresponds to the Journal1DeviceName parameter (see Device Name Parameters) and will be mounted at Journal1MountPoint. Default: 10.
Journal2VolumeSize Size (in GB) of the alternate persistent Journal volume to create for deployments other than type PreExisting. This volume corresponds to the Journal2DeviceName parameter (see Device Name Parameters) and will be mounted at Journal2MountPoint. Default: 10.
SystemMode String to be shown in the Management Portal masthead. Certain values (LIVE, TEST, FAILOVER, DEVELOPMENT) trigger additional changes in appearance. Default: blank.
ISCglobals *
Database cache allocation from system memory. See globals in the Configuration Parameter File Reference and Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide. Default: 0,0,0,0,0,0 (automatic allocation).
ISCroutines *
Routine cache allocation from system memory. See routines in the Parameter File Reference and Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide. Default: 0 (automatic allocation).
ISCgmheap *
Size of the generic memory heap (in KB). See gmheap in the Configuration Parameter File Reference. Default: 37568.
ISClocksiz *
Maximum size of shared memory for locks (in bytes). See locksiz in the Configuration Parameter File Reference. Default: 16777216.
ISCbbsiz *
Maximum memory per process (KB). See bbsiz in the Configuration Parameter File Reference. Default: 262144.
ISCmemlock *
Enable/disable locking shared memory or the text segment into memory. See memlock in the Configuration Parameter File Reference. Default: 0.
ApplicationPath
Application path to create for definitions of type WS. Do not include a trailing slash. Default: none.
AlternativeServers
Remote server selection algorithm for definitions of type WS. Valid values are LoadBalancing and FailOver. Default: LoadBalancing.
Overlay Determines the Docker overlay network type; normally "weave", but may be set to "host" for development or debug purposes, or when deploying on a preexisting cluster. Default: weave (host when deploying on a preexisting cluster).
Monitor Deploy Weave Scope for basic monitoring by specifying the value scope, or Rancher by specifying the value rancher. Default: none.
MonitorUsername
Username to use in authenticating to Weave Scope. Default: none.
MonitorPassword
Password to use in authenticating to Weave Scope. Default: none.
ProxyImage
Docker image used to provide authentication and HTTPS for Weave Scope monitoring.
ForwardPort Port to be forwarded by a given load balancer (both 'from' and 'to'). Defaults:
  • AM: SuperServerPort
  • WS: 80
  • VM/CN: (user provided)
ForwardProtocol Protocol to be forwarded by a given load balancer. Defaults:
  • AM: tcp
  • WS: http
  • VM/CN: (user provided)
HealthCheckPort Port used to verify health of instances in the target pool. Defaults:
  • AM: SuperServerPort
  • WS: 80
  • VM/CN: (user provided)
HealthCheckProtocol Protocol used to verify health of instances in the target pool. Defaults:
  • AM: tcp
  • WS: http
  • VM/CN: (user provided)
HealthCheckPath Path used to verify health of instances in the target pool. Defaults:
  • AM: /csp/user/isc_status.cxw
  • WS: N/A (path not used for TCP health checks)
  • VM/CN: (user provided for HTTP health checks)
ISCAgentPort Port used by InterSystems IRIS ISC Agent. Default: 2188.
JDBCGatewayPort * Port used by InterSystems IRIS JDBC Gateway. Default: 62972.
SuperServerPort * Port used by InterSystems IRIS Superserver. Default: 51773.
WebServerPort * Port used by InterSystems IRIS Web Server (Management Portal). Default: 52773.
LicenseServerPort
Port used by InterSystems IRIS License Server. Default: 4002.
SparkMasterPort Port used by Spark Master. Default: 7077.
SparkWorkerPort Port used by Spark Worker. Default: 7000.
SparkMasterWebUIPort Port used by Spark Master Web UI. Default: 8080.
SparkWorkerWebUIPort Port used by Spark Worker Web UI. Default: 8081.
SparkRESTPort Port used for Spark REST API. Default: 6066.
SparkDriverPort
Port used for Spark Driver. Default: 7001.
SparkBlockManagerPort
Port used for Spark Block Manager. Default: 7005.
* The following parameters in the preceding table map directly to parameters in the iris.cpf file of the InterSystems IRIS instance on nodes of type DATA, COMPUTE, DM, AM, DS, and QS:
ICM Name             iris.cpf Name
SuperServerPort      DefaultPort
WebServerPort        WebServerPort
JDBCGatewayPort      JDBCGatewayPort
WIJMountPoint        wijdir
Journal1MountPoint   CurrentDirectory
Journal2MountPoint   AlternateDirectory
ISCglobals           globals
ISCroutines          routines
ISCgmheap            gmheap
ISCbbsiz             bbsiz
ISClocksiz           locksiz
ISCmemlock           memlock
The LicenseServerPort field appears in the [LicenseServers] block of the iris.cpf file, bound to the name of the configured license server (see InterSystems IRIS Licensing for ICM).
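For illustration, a minimal defaults.json fragment drawing on the general parameters above might look like the following (a sketch; all values shown are placeholders):
{
    "Label": "ACME",
    "Tag": "TEST",
    "DockerImage": "<repository>/<image>:<tag>",
    "Namespace": "IRISCLUSTER",
    "Mirror": "true",
    "ISCPassword": "<password>",
    "LicenseDir": "/path/to/license/keys/"
}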

Security-Related Parameters

The parameters in the following table are used to identify files and information required for ICM to communicate securely with the provisioned nodes and deployed containers.
Parameter Meaning
SSHUser Nonroot account with sudo access used by ICM for access to provisioned nodes. Root of SSHUser’s home directory can be specified using the Home field. Required value is provider-specific, as follows:
  • AWS — As per AMI specification (usually "ec2-user" for Red Hat Enterprise Linux instances)
  • vSphere — As per VM template
  • Azure — At user's discretion
  • GCP — At user's discretion
SSHPassword Initial password for the user specified by SSHUser. Required for marketplace Docker images and deployments of type vSphere, Azure, and PreExisting. This is used only during provisioning, at the conclusion of which password logins are disabled.
SSHOnly If true, ICM does not attempt SSH password logins during provisioning (providers PreExisting and vSphere only). Because this prevents ICM from logging in using a password, it requires that you stage your public SSH key (as specified by the SSHPublicKey field) on each node. Default: false.
SSHPublicKey Public key of SSH public/private key pair; required for all deployments.
For provider AWS, must be in SSH2 format, for example:
---- BEGIN SSH2 PUBLIC KEY ----
AAAAB3NzaC1yc2EAAAABJQAAAQEAoa0
---- END SSH2 PUBLIC KEY ----
For other providers, must be in OpenSSH format, for example:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAoa0
SSHPrivateKey Private key of the SSH public/private key pair, required, in RSA format, for example:
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAoa0ex+JKzC2Nka1
-----END RSA PRIVATE KEY-----
TLSKeyDir Directory containing TLS keys used to establish secure connections to Docker, InterSystems Web Gateway, JDBC, and mirrored InterSystems IRIS databases, as follows:
  • ca.pem
  • cert.pem
  • key.pem
  • keycert.pem
  • server-cert.pem
  • server-key.pem
  • keystore.p12
  • truststore.jks
  • SSLConfig.properties
SSLConfig Path to an SSL/TLS configuration file used to establish secure JDBC connections. Default: If this parameter is not provided, ICM looks for a file named SSLConfig.properties in the directory specified by TLSKeyDir (see previous entry).
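For illustration, a defaults.json fragment using these fields might look like the following (a sketch; the paths are placeholders for keys you have staged in the ICM container):
"SSHUser": "ec2-user",
"SSHPublicKey": "/path/to/ssh/mykey.pub",
"SSHPrivateKey": "/path/to/ssh/mykey",
"TLSKeyDir": "/path/to/tls/keys/"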

Provider-Specific Parameters

The tables in this section list parameters used by ICM that are specific to each provider.
Note:
Some of the parameters listed are used with more than one provider.

Selecting Machine Images

Cloud providers operate data centers in various regions of the world, so one of the important things to customize for your deployment is the region in which your cluster will be deployed. Another choice is which virtual machine images to use for the host nodes in your cluster. Although the sample configuration files define valid regions and machine images for all cloud providers, you will generally want to change the region to match your own location. Because machine images are often specific to a region, both must be selected.
At this release, ICM supports provisioning of and deployment on host nodes running Red Hat Enterprise Linux, version 7.4 or 7.5, so the machine images you select must run this operating system.

Amazon Web Services (AWS) Parameters

Parameter Meaning
Credentials Path to a file containing Amazon AWS credentials in the following format. Download from https://console.aws.amazon.com/iam/home?#/users.
[default]
aws_access_key_id = access_key_id
aws_secret_access_key = secret_access_key
aws_session_token = session_token
aws_security_token = security_token
SSHUser Nonroot account with sudo access used by ICM for access to provisioned nodes (see Security-Related Parameters). Root of SSHUser’s home directory can be specified using the Home field. Required value is determined by the selected AMI; for Red Hat Enterprise Linux images, the required value of SSHUser is usually ec2-user.
AMI AMI to use for a node or nodes to be provisioned; see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html. Example: ami-a540a5e1.
Region Region to use for a node or nodes to be provisioned; see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html. Example: us-west-1. For information on deploying a single configuration in more than one region, see Deploying Across Multiple Regions.
Zone Availability zone to use for a node or nodes to be provisioned; see link in previous entry. Example: us-west-1c. For information on deploying a single configuration in more than one zone, see Deploying Across Multiple Zones.
ElasticIP Enables the Elastic IP feature to preserve IP address and domain name across host node restart; for more information, see AWS Elastic IP Feature. Default: false.
InstanceType Instance Type to use for a node or nodes to be provisioned; see https://aws.amazon.com/ec2/instance-types/. Example: m4.large.
VPCId
Existing Virtual Private Cloud (VPC) to be used in the deployment, instead of allocating a new one; the specified VPC is not deallocated during unprovision. If not specified, a new VPC is allocated for the deployment and deallocated during unprovision.
Note:
Internal parameter net_subnet_cidr must be provided if the VPC is not created in the default address space 10.0.0.0/16; for example, for a VPC in the range 172.17.0.0/16, you would need to specify net_subnet_cidr as 172.17.%d.0/24.
OSVolumeType Determines maximum OSVolumeSize. See http://docs.aws.amazon.com/cli/latest/reference/ec2/create-volume.html. Default: standard.
OSVolumeIOPS
IOPS count for OSVolume. Must be nonzero for volumes of type iops. Default: 0.
DockerVolumeType
Determines maximum DockerVolumeSize (see OSVolumeType). Default: standard.
DockerVolumeIOPS
IOPS count for DockerVolume. Must be nonzero for volumes of type iops. Default: 0.
DataVolumeType Determines maximum DataVolumeSize (see OSVolumeType). Default: standard.
DataVolumeIOPS
IOPS count for DataVolume. Must be nonzero for volumes of type iops. Default: 0.
WIJVolumeType Determines maximum WIJVolumeSize (see OSVolumeType). Default: standard.
WIJVolumeIOPS
IOPS count for WIJVolume. Must be nonzero for volumes of type iops. Default: 0.
Journal1VolumeType Determines maximum Journal1VolumeSize (see OSVolumeType). Default: standard.
Journal1VolumeIOPS
IOPS count for Journal1Volume. Must be nonzero for volumes of type iops. Default: 0.
Journal2VolumeType Determines maximum Journal2VolumeSize (see OSVolumeType). Default: standard.
Journal2VolumeIOPS
IOPS count for Journal2Volume. Must be nonzero for volumes of type iops. Default: 0.
RouteTableId Existing route table to use for access to ICM; if provided, ICM uses it instead of allocating its own (and does not deallocate it during unprovision). No default.
InternetGatewayID Existing Internet gateway to use for access to ICM; if provided, ICM uses it instead of allocating its own (and does not deallocate it during unprovision). No default.
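Drawing on the examples in this table, an AWS defaults.json fragment might look like the following (a sketch; the credentials path is a placeholder for a file you have staged in the ICM container):
{
    "Provider": "AWS",
    "Region": "us-west-1",
    "Zone": "us-west-1c",
    "AMI": "ami-a540a5e1",
    "InstanceType": "m4.large",
    "Credentials": "/path/to/aws/credentials",
    "SSHUser": "ec2-user",
    "ElasticIP": "true"
}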

Google Cloud Platform (GCP) Parameters

Parameter Meaning
Credentials JSON file containing account credentials. Download from https://console.developers.google.com/
Project Google project ID.
MachineType Machine type resource to use for a node or nodes to be provisioned. See https://cloud.google.com/compute/docs/machine-types. Example: n1-standard-1.
Region Region to use for a node or nodes to be provisioned; see https://cloud.google.com/compute/docs/regions-zones/regions-zones. Example: us-east1. For information on deploying a single configuration in more than one region, see Deploying Across Multiple Regions.
Zone Zone in which to locate a node or nodes to be provisioned. Example: us-east1-b. For information on deploying a single configuration in more than one zone, see Deploying Across Multiple Zones.
Image The source image from which to create this disk. See https://cloud.google.com/compute/docs/images. Example: centos-cloud/centos-7-v20160803.
OSVolumeType Determines disk type for the OS volume. See https://cloud.google.com/compute/docs/reference/beta/instances/attachDisk. Default: pd-standard.
DockerVolumeType
Determines disk type for the Docker block storage device (see OSVolumeType). Default: pd-standard.
DataVolumeType Determines disk type for the persistent Data volume (see OSVolumeType). Default: pd-standard.
WIJVolumeType Determines disk type for the persistent WIJ volume (see OSVolumeType). Default: pd-standard.
Journal1VolumeType Determines disk type for the persistent Journal1 volume (see OSVolumeType). Default: pd-standard.
Journal2VolumeType Determines disk type for the persistent Journal2 volume (see OSVolumeType). Default: pd-standard.
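Drawing on the examples in this table, a GCP defaults.json fragment might look like the following (a sketch; the project ID and credentials path are placeholders):
{
    "Provider": "GCP",
    "Project": "<project id>",
    "MachineType": "n1-standard-1",
    "Region": "us-east1",
    "Zone": "us-east1-b",
    "Image": "centos-cloud/centos-7-v20160803",
    "Credentials": "/path/to/gcp/credentials.json"
}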

Microsoft Azure (Azure) Parameters

Parameter Meaning
Size The size of a node or nodes to be provisioned; see https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes. Example: Standard_DS1.
Location Location in which to provision a node or nodes; see https://azure.microsoft.com/en-us/regions/. Example: Central US. For information on deploying a single configuration in more than one region, see Deploying Across Multiple Regions.
Zone Zone in which to locate a node or nodes to be provisioned. Possible values are 1, 2, and 3. For information on deploying a single configuration in more than one zone, see Deploying Across Multiple Zones.
SubscriptionId Credentials which uniquely identify the Microsoft Azure subscription.
ClientId Azure application identifier.
ClientSecret Provides access to an Azure application.
TenantId Azure Active Directory tenant identifier.
PublisherName Entity providing a given Azure image. Example: OpenLogic.
Offer Operating system of a given Azure image. Example: Centos.
Sku Major version of the operating system of a given Azure image. Example: 7.2.
Version Build version of a given Azure image. Example: 7.2.20170105.
AccountTier
Account tier, either HDD (Standard) or SSD (Premium).
AccountReplicationType
Account storage type: locally-redundant storage (LRS), geo-redundant storage (GRS), zone-redundant storage (ZRS), or read access geo-redundant storage (RAGRS).
ResourceGroupName
Existing Resource Group to be used in the deployment, instead of allocating a new one; the specified group is not deallocated during unprovision. If not specified, a new Resource Group is allocated for the deployment and deallocated during unprovision.
VirtualNetworkName
Existing Virtual Network to be used in the deployment, instead of allocating a new one; the specified network is not deallocated during unprovision. If not specified, a new Virtual Network is allocated for the deployment and deallocated during unprovision.
Note:
Internal parameter net_subnet_cidr must be provided if the network is not created in the default address space 10.0.%d.0/24.
SubnetId
Existing Subnet to be used in the deployment, instead of allocating a new one; the specified subnet is not deallocated during unprovision. If not specified, a new Subnet is allocated for the deployment and deallocated during unprovision. Value is an Azure URI of the form:
/subscriptions/subscription/resourceGroups/resource_group/providers /Microsoft.Network/virtualNetworks/virtual_network/subnets/subnet_name
UseMSI
If true, authenticates using a Managed Service Identity in place of ClientId and ClientSecret. Requires that ICM be run from a machine in Azure.
Default: false
CustomImage
Image to be used to create the OS disk, in place of the marketplace image described by the PublisherName, Offer, Sku, and Version fields. Value is an Azure URI of the form:
/subscriptions/subscription/resourceGroups/resource_group/providers /Microsoft.Compute/images/image_name
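Drawing on the examples in this table, an Azure defaults.json fragment might look like the following (a sketch; the subscription and application identifiers are placeholders):
{
    "Provider": "Azure",
    "Location": "Central US",
    "Size": "Standard_DS1",
    "SubscriptionId": "<subscription id>",
    "ClientId": "<client id>",
    "ClientSecret": "<client secret>",
    "TenantId": "<tenant id>",
    "PublisherName": "OpenLogic",
    "Offer": "Centos",
    "Sku": "7.2",
    "Version": "7.2.20170105"
}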

VMware vSphere (vSphere) Parameters

Parameter Meaning
Server Name of the vCenter server. Example: tbdvcenter.iscinternal.com.
Datacenter Name of the datacenter.
VSphereUser Username for vSphere operations.
VSpherePassword Password for vSphere operations.
VCPU Number of CPUs in a node or nodes to be provisioned. Example: 2.
Memory Amount of memory (in MB) in a node or nodes to be provisioned. Example: 4096.
DatastoreCluster
Collection of datastores where virtual machine files will be stored. Example: DatastoreCluster1.
DNSServers List of DNS servers for the virtual network. Example: 172.16.96.1,172.17.15.53
DNSSuffixes List of name resolution suffixes for the virtual network adapter. Example: iscinternal.com
Domain FQDN for a node to be provisioned. Example: iscinternal.com
NetworkInterface Label to assign to a network interface. Example: VM Network
Template Virtual machine master copy. Example: centos-7
GuestID
Guest ID for the operating system type. See Enum - VirtualMachineGuestOsIdentifier on the VMware support website. Default: centos64Guest.
WaitForGuestNetTimeout
Time (in minutes) to wait for an available IP address on a virtual machine. Default: 5.
ShutdownWaitTimeout
Time (in minutes) to wait for graceful guest shutdown when making necessary updates to a virtual machine. Default: 3.
MigrateWaitTimeout
Time (in minutes) to wait for virtual machine migration to complete. Default: 10.
CloneTimeout
Time (in minutes) to wait for virtual machine cloning to complete. Default: 30.
CustomizeTimeout
Time (in minutes) that Terraform waits for customization to complete. Default: 10.
DiskPolicy
Disk provisioning policy for the deployment (see About Virtual Disk Provisioning Policies in the VMware documentation). Values are:
  • thin — Thin Provision
  • lazy — Thick Provision Lazy Zeroed
  • eagerZeroedThick — Thick Provision Eager Zeroed
Default: lazy.
ResourcePool
Name of a vSphere resource pool. Example: ResourcePool1.
SDRSEnabled
If specified, determines whether Storage DRS is enabled for a virtual machine; otherwise, use current datastore cluster settings. Default: Current datastore cluster settings.
SDRSAutomationLevel
If specified, determines the Storage DRS automation level for a virtual machine; otherwise, the current datastore cluster settings are used. Values are automated or manual. Default: Current datastore cluster settings.
SDRSIntraVMAffinity
If provided, determines Intra-VM affinity setting for a virtual machine; otherwise, use current datastore cluster settings. Values include:
  • true — All disks for this virtual machine will be kept on the same datastore.
  • false — Storage DRS may locate individual disks on different datastores if it helps satisfy cluster requirements.
Default: Current datastore cluster settings.
SCSIControllerCount
Number of SCSI controllers for a given host node; must be between 1 and 4. The OS volume is always placed on the first SCSI controller. vSphere may not be able to create more SCSI controllers than were present in the template specified by the Template field.
Default: 1
DockerVolumeSCSIController
SCSI controller on which to place the Docker volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.
Default: 1
DataVolumeSCSIController
SCSI controller on which to place the Data volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.
Default: 1
WIJVolumeSCSIController
SCSI controller on which to place the WIJ volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.
Default: 1
Journal1VolumeSCSIController
SCSI controller on which to place the Journal1 volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.
Default: 1
Journal2VolumeSCSIController
SCSI controller on which to place the Journal2 volume. Must be between 1 and 4 and may not exceed SCSIControllerCount.
Default: 1
Note:
The requirements for the VMware vSphere template are similar to those described in Host Node Requirements for preexisting clusters (for example, passwordless sudo access).
To address the needs of the many users who rely on VMware vSphere, it is supported by this release of ICM. Depending on your particular vSphere configuration and underlying hardware platform, the use of ICM to provision virtual machines may entail additional extensions and adjustments not covered in this guide, especially for larger and more complex deployments, and may not be suitable for production use. Full support is expected in a later release.
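Drawing on the examples in this table, a vSphere defaults.json fragment might look like the following (a sketch; server, datacenter, and credential values are placeholders):
{
    "Provider": "vSphere",
    "Server": "<vCenter server>",
    "Datacenter": "<datacenter name>",
    "VSphereUser": "<username>",
    "VSpherePassword": "<password>",
    "VCPU": "2",
    "Memory": "4096",
    "Template": "centos-7",
    "NetworkInterface": "VM Network",
    "DNSServers": "172.16.96.1,172.17.15.53",
    "DNSSuffixes": "iscinternal.com",
    "Domain": "iscinternal.com"
}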

PreExisting Cluster (PreExisting) Parameters

Parameter Meaning
IPAddress IP address of the host node. This is a required field (in the definitions file) for provider PreExisting and a generated field for all other providers.
DNSName FQDN of the host node, or its IP Address if unavailable. Deployments of type PreExisting can optionally populate this field (in the definitions file) to provide names for display by the icm inventory command; if you do this, first verify that the name you are providing is resolvable from the ICM container. This is a generated field for all other providers.
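For illustration, a definitions.json entry for a PreExisting node might look like the following (a sketch; the address and name are placeholders, and full requirements are covered in the “Deploying on a Preexisting Cluster” appendix):
{
    "Role": "DATA",
    "Count": "1",
    "IPAddress": "172.16.110.10",
    "DNSName": "data1.acme.com"
}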

Device Name Parameters

The parameters in the following table specify the devices (under /dev) on which persistent volumes appear (see Storage Volumes Mounted by ICM). Defaults are available for all providers other than PreExisting, but these values are highly platform and OS-specific and may need to be overridden in your defaults.json file. For PreExisting deployments, see Storage Volumes in the “Deploying on a Preexisting Cluster” appendix.
Parameter            AWS    GCP   vSphere   Azure
DockerDeviceName     xvdb   sdb   sdb       sdc
DataDeviceName       xvdc   sdc   sdc       sdd
WIJDeviceName        xvdd   sdd   sdd       sde
Journal1DeviceName   xvde   sde   sde       sdf
Journal2DeviceName   xvdf   sdf   sdf       sdg
Note:
For restrictions on DockerDeviceName, see the DockerStorageDriver parameter under General Parameters.
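For example, to set these device names explicitly in your defaults.json file (shown here with the AWS defaults from the table, purely to illustrate the override syntax):
"DockerDeviceName": "xvdb",
"DataDeviceName": "xvdc"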

Generated Parameters

These parameters are generated by ICM during provisioning, configuration, and deployment. They should generally be treated as read-only and are included here for information purposes only.
Parameter Meaning
Member In initial configuration of a mirrored pair, set to primary, backup, or async. (Actual role of each failover member is determined by mirror operations.)
MachineName Generated from Label-Role-Tag-####.
WeaveArgs
Generated by executing weave dns-args on the host node.
WeavePeers
List of IP addresses of every host node except the current one.
WeavePassword
Password used to encrypt traffic over Weave Net; disable encryption by setting this field to the literal "null" in the defaults.json file.
StateDir
Location on the ICM client where temporary, state, and log files will be written. Default: ICM-nnnnnnnnn. Command-line option: -stateDir.
DefinitionIndex
Assigns an index to each object in the definitions.json file; this is used to uniquely number load balancer instances (which would otherwise have the same names).
TargetRole
The role associated with the resources being managed by a given load balancer.
MirrorSetName
Name assigned to a failover mirror.
InstanceCount
Total number of instances in this deployment.