InterSystems Cloud Manager Guide
ICM Reference


The following topics provide detailed information about various aspects of ICM and its use:
ICM Commands and Options
The first table that follows lists the commands that can be executed on the ICM command line; the second table lists the options that can be included with them. Both tables include links to relevant text.
Each of the commands is covered in detail in the Using ICM chapter. Command-line options can be used either to provide required or optional arguments to commands (for example, icm exec -interactive) or to set field values, overriding ICM defaults or settings in the configuration files (for example, icm run -namespace "MIRROR1").
Note:
The command table does not list every option that can be used with each command, and the option table does not list every command that can include each option.
ICM Commands

icm provision
    Provisions compute nodes
    Important options: -definitions, -defaults, -instances
icm inventory
    Lists provisioned compute nodes
    Important options: -json
icm unprovision
    Destroys compute nodes
    Important options: -stateDir, -cleanUp, -force
icm ssh
    Executes an operating system command on one or more compute nodes
    Important options: -command, -machine, -role
icm scp
    Copies a local file to one or more compute nodes
    Important options: -localPath, -remotePath, -machine, -role
icm run
    Deploys a container on compute nodes
    Important options: -image, -container, -namespace, -options, -iscPassword, -command, -machine, -role
icm ps
    Displays run states of containers deployed on compute nodes
    Important options: -container, -json
icm stop
    Stops containers on one or more compute nodes
    Important options: -container, -machine, -role
icm start
    Starts containers on one or more compute nodes
    Important options: -container, -machine, -role
icm pull
    Downloads an image to one or more compute nodes
    Important options: -image, -container, -machine, -role
icm rm
    Deletes containers from one or more compute nodes
    Important options: -container, -machine, -role
icm upgrade
    Replaces containers on one or more compute nodes
    Important options: -image, -container, -machine, -role
icm exec
    Executes an operating system command in one or more containers
    Important options: -container, -command, -interactive, -options, -machine, -role
icm session
    Opens an interactive session for an InterSystems IRIS instance in a container or executes an InterSystems IRIS ObjectScript snippet on one or more instances
    Important options: -namespace, -command, -interactive, -machine, -role
icm cp
    Copies a local file to one or more containers
    Important options: -localPath, -remotePath, -machine, -role
icm sql
    Executes a SQL statement on the InterSystems IRIS instance
    Important options: -namespace, -command, -machine, -role
icm docker
    Executes a Docker command on one or more compute nodes
    Important options: -container, -machine, -role
ICM Command-Line Options
Option Meaning Default Described in
-help Display command usage information   ---
-verbose Show execution detail False (can be used with any command)
-definitions filepath Compute node definitions file ./definitions.json Configuration, State and Log Files
-defaults filepath Compute node defaults file ./defaults.json
-instances filepath Compute node instances file ./instances.json
-stateDir dir Machine state directory OS-specific The State Directory and State Files
-force Don't confirm before reprovisioning or unprovisioning False
-cleanUp Delete state directory after unprovisioning False
-machine regexp Target machine name pattern match (all) icm ssh, icm exec
-role role Role of the InterSystems IRIS instance or instances for which a command is run, for example DM or QS (all) icm ssh
-namespace namespace Namespace to create on deployed InterSystems IRIS instances and set as default execution namespace for the session and sql commands USER The Definitions File, icm session
-image image Docker image to deploy; must include repository name. DockerImage value in definitions file icm run
-options options Additional Docker options none Using ICM with Custom and Third-Party Containers
-container name Name of the container icm ps command: (all); other commands: iris icm run
-command cmd Command or query to execute none icm ssh, icm exec
-interactive Redirect input/output to console for the exec and ssh commands False icm ssh
-localPath path Local file or directory none icm scp
-remotePath path Remote file or directory /home/SSHUser (value of SSHUser field)
-iscPassword password Password for deployed InterSystems IRIS instances iscPassword value in configuration file icm run
-json Enable JSON response mode False Using JSON Mode
Important:
Use of the -verbose option, which is intended for debugging purposes only, may expose the value of iscPassword and other sensitive information, such as DockerPassword. When you use this option, you must either use the -force option as well or confirm that you want to use verbose mode before continuing.
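For example, a provisioning run that enables verbose output and accepts this confirmation in advance might look like the following (illustrative):

icm provision -verbose -force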
ICM Node Types
This section describes the types of nodes that can be provisioned and deployed by ICM and their possible roles in the deployed InterSystems IRIS configuration. A provisioned node’s type is determined by the Role field.
The following table summarizes the detailed node type descriptions that follow it.
Node Type   Configuration Role(s)
DM          Shard master data server, distributed cache cluster data server, stand-alone InterSystems IRIS instance
AM          Shard master application server, distributed cache cluster application server
DS          Shard data server
QS          Shard query server
AR          Mirror arbiter
LB          Load balancer
WS          Web server
VM          Virtual machine
Role DM: Shard Master Data Server, Distributed Cache Cluster Data Server, Standalone Instance
In an InterSystems IRIS sharded cluster (see the chapter Horizontally Scaling InterSystems IRIS for Data Volume with Sharding in the Scalability Guide), the shard master provides application access to the data shards on which the sharded data is stored and hosts nonsharded tables. (If shard master application servers [role AM] are included in the cluster, they provide application access instead.)
The node hosting the shard master is called the shard master data server. A shard master data server can be mirrored by deploying two nodes of role DM and specifying mirroring. The InterSystems IRIS Management Portal is typically accessed on the shard master data server; ICM provides a link to the portal on this node at the end of the deployment phase.
If multiple nodes of role AM and a DM node (nonmirrored or mirrored) are specified without any nodes of role DS (shard data server), they are deployed as an InterSystems IRIS distributed cache cluster, with the former serving as application servers and the latter as the data server.
Finally, a node of role DM (nonmirrored or mirrored) deployed by itself becomes a standalone InterSystems IRIS instance.
Role AM: Shard Master Application Server, Distributed Cache Cluster Application Server
When included in a sharded cluster, shard master application servers provide application access to the sharded data, distributing the user load across multiple nodes just as application servers in a distributed cache cluster do. If the shard master data server is mirrored, two or more shard master application servers must be included.
SQL and InterSystems IRIS ObjectScript commands that can be issued against the shard master data server (using the icm sql and icm session commands) can be issued against any of the shard master application servers. Shard master application servers automatically redirect application connections when a mirrored shard master data server fails over.
If multiple nodes of role AM and a DM node are specified without any nodes of role DS (shard data server), they are deployed as an InterSystems IRIS distributed cache cluster, with the former serving as application servers and the latter as the data server. When the data server is mirrored, application connection redirection following failover is automatic.
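As an illustration, a definitions file sketch for a small distributed cache cluster might contain entries like the following (Label, Tag, and Count values are illustrative):

{
    "Label": "ISC",
    "Role": "DM",
    "Tag": "TEST",
    "Count": "1"
},
{
    "Label": "ISC",
    "Role": "AM",
    "Tag": "TEST",
    "Count": "3"
}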
Role DS: Shard Data Server
A data shard stores one horizontal partition of each sharded table loaded into a sharded cluster. A node hosting a data shard is called a shard data server. A cluster can contain anywhere from two to over 200 shard data servers.
Shard data servers can be mirrored by deploying an even number and specifying mirroring.
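For example, with mirroring specified in the defaults file, a definition like the following sketch (Count value illustrative) would be deployed as two mirrored pairs of shard data servers:

{
    "Role": "DS",
    "Count": "4"
}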
Role QS: Shard Query Server
Query shards provide query access to the data shards to which they are assigned, minimizing interference between query and data ingestion workloads and increasing the bandwidth of a sharded cluster for high volume multiuser query workloads. A node hosting a query shard is called a shard query server. If shard query servers are deployed, they are assigned round-robin to the deployed shard data servers.
Shard query servers automatically redirect application connections when a mirrored shard data server fails over.
Role AR: Mirror Arbiter
When a shard master data server, a distributed cache cluster data server, a stand-alone InterSystems IRIS instance, or shard data servers are mirrored, deployment of an arbiter node to facilitate automatic failover is highly recommended. One arbiter node is sufficient for all of the mirrors in a cluster; multiple arbiters are not supported and are ignored by ICM, as are arbiter nodes in a nonmirrored cluster.
The AR node does not contain an InterSystems IRIS instance; it uses a different image to run an ISCAgent container. This arbiter image must be specified using the DockerImage field in the definitions file entry for the AR node; for more information, see The icm run Command.
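A definitions file entry for the AR node might therefore look like the following sketch, in which the image name and tag are hypothetical placeholders for the arbiter image in your repository:

{
    "Role": "AR",
    "Count": "1",
    "DockerImage": "acme/iscagent:stable"
}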
For more information about the arbiter, see the Mirroring chapter of the High Availability Guide.
Role LB: Load Balancer
ICM automatically provisions a load balancer node when the provisioning platform is AWS, GCP, or Azure, and the definition of nodes of type AM or WS in the definitions file contains the following parameter:
"LoadBalancer": "true"
For a custom load balancer, additional parameters must be provided.
Predefined Load Balancer
For nodes of role LB, ICM configures the ports and protocols to be forwarded as well as the corresponding health checks. Queries can be executed against the deployed load balancer the same way one would against a shard master application server or distributed cache cluster application server.
To add a load balancer to the definition of AM or WS nodes, add the LoadBalancer field, for example:
{
    "Label": "ISC",
    "Role": "AM",
    "Tag": "TEST",
    "Count": "2",
    "LoadBalancer": "true"
}
The following example illustrates the nodes that would be created and deployed given this definition:
$ icm inventory
Machine              IP Address       DNS Name
-------              ----------       --------
ISC-AM-TEST-0001     54.214.230.24    ec2-54-214-230-24.amazonaws.com
ISC-AM-TEST-0002     54.214.230.25    ec2-54-214-230-25.amazonaws.com
ISC-LB-TEST-0000     (virtual AM)     ISC-AM-TEST-1546467861.amazonaws.com
Queries against this cluster can be executed against the load balancer the same way they would be against the AM nodes.
Currently, a single automatically provisioned load balancer cannot serve multiple node types (for example, both web servers and application servers), so each requires its own load balancer. This does not preclude the user from manually creating a custom load balancer for the desired roles.
Virtual Machine Load Balancer
A load balancer can be added to VM (virtual machine) nodes by providing additional keys specifying the forwarding and health check behavior (ForwardProtocol, ForwardPort, HealthCheckProtocol, HealthCheckPath, HealthCheckPort), as in the following example:
{
    "Label": "ISC",
    "Role": "VM",
    "Tag": "TEST",
    "Count": "2",
    "LoadBalancer": "true",
    "ForwardProtocol": "tcp",
    "ForwardPort": "443",
    "HealthCheckProtocol": "http",
    "HealthCheckPath": "/csp/status.cxw",
    "HealthCheckPort": "8080"
}
More information about these keys can be found in ICM Configuration Parameters.
A load balancer does not require (or allow) an explicit entry in the definitions file.
Some cloud providers create a DNS name for the load balancer that resolves to multiple IP addresses; for this reason, the value displayed by the provider interface as DNS Name should be used. If a numeric IP address appears in the DNS Name column, it simply means that the given cloud provider assigns a unique IP address to their load balancer, but doesn't give it a DNS name.
Because the DNS name may not indicate to which resources a given load balancer applies, the values displayed under IP Address are used for this purpose.
For providers VMware and PreExisting, you may wish to deploy a custom or third-party load balancer.
Role WS: Web Server
A cluster may contain any number of web servers. Each web server node contains an InterSystems Web Gateway installation along with an Apache web server. ICM populates the remote server list in the InterSystems Web Gateway with all of the available AM nodes (shard master application servers or distributed cache cluster application servers). If no AM nodes are available, it instead uses the DM node (shard master data server or distributed cache cluster data server); if mirrored, a mirror-aware connection is created. Finally, communication between the web server and remote servers is configured to run in SSL/TLS mode.
The WS node does not contain an InterSystems IRIS instance; it uses a different image to run a Web Gateway container. This webgateway image can be specified by including the DockerImage field in the WS node definitions in the definitions.json file; for more information, see The icm run Command.
Role VM: Virtual Machine Node
A cluster may contain any number of virtual machine nodes. A virtual machine node provides a means of allocating compute instances which do not have a predefined role within an InterSystems IRIS cluster. Docker is not installed on these nodes, though users are free to deploy whatever custom or third-party software (including Docker) they wish.
Because Docker is not installed on virtual machine nodes, only node-level commands such as icm ssh and icm scp are supported on them.
A load balancer may be assigned to a virtual machine node.
ICM Cluster Topology
ICM validates the node definitions in the definitions file to ensure they meet certain requirements; there are additional rules for mirrored configurations. Bear in mind that this validation does not reject configurations that are functional but not optimal, for example a single AM node, a single WS node, or 10 DS nodes with only one QS node (or vice versa).
In both nonmirrored and mirrored configurations, QS nodes are assigned to DS nodes in round-robin fashion. If AM and WS nodes are not both included, they are all bound to the DM; if they are both included, AM nodes are bound to the DM and WS nodes to the AM nodes.
Rules for Mirroring
ICM topology validation enforces the following rules for configuring mirroring:
When the Mirror field is set to False in the defaults file (the default), mirroring is never configured, and provisioning fails if more than one DM node is specified in the definitions file.
When the Mirror field is set to True, mirroring is configured where possible: two DM nodes are deployed as a mirror failover pair, DS nodes (which must be specified in even numbers) are deployed as mirror failover pairs, and a single AR node, if specified, serves as arbiter for all of the mirrors.
Note:
The recommended general best practice for sharded clusters is that either the shard master data server (DM node) and the shard data servers (DS nodes) are all mirrored, or that none are mirrored.
There is no relationship between the order in which DM or DS nodes are provisioned or configured and their roles in a mirror. You can determine which member of each pair is the primary failover member and which the backup using the icm inventory command, the output of which indicates each primary with a + (plus) and each backup with a - (minus).
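For example, icm inventory output for a mirrored DM pair might look like the following sketch (machine names, addresses, and the exact placement of the indicators are illustrative):

$ icm inventory
Machine              IP Address       DNS Name
-------              ----------       --------
ISC-DM-TEST-0001+    54.191.233.2     ec2-54-191-233-2.amazonaws.com
ISC-DM-TEST-0002-    54.202.223.57    ec2-54-202-223-57.amazonaws.com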
Nonmirrored Configuration Requirements
A nonmirrored cluster consists of a single (nonmirrored) DM node together with optional nodes of the other types described in ICM Node Types.
The relationships between these nodes are pictured in the following illustration.
ICM Nonmirrored Topology
Note:
The sharding manager, which handles communication between the shard master data server and the shard data servers in a basic cluster, enables direct connections between the shard master application servers and the shard query servers in this more complex configuration. For detailed information about connections within a sharded cluster, see the chapter Horizontally Scaling InterSystems IRIS for Data Volume with Sharding in the Scalability Guide.
Mirrored Configuration Requirements
A mirrored cluster consists of a mirrored DM pair and an even number of DS nodes deployed as mirrored pairs, optionally accompanied by an AR node and nodes of the other types described in ICM Node Types.
Note:
A mirrored DM node that is deployed in the cloud without AM nodes must have some appropriate mechanism for redirecting application connections; see Redirecting Application Connections Following Failover or Disaster Recovery in the “Mirroring” chapter of the High Availability Guide for more information.
Mirroring is enabled by setting key Mirror in your defaults.json file to True.
"Mirror": "True"
Mirroring requires the choice of a namespace other than the default of USER. The namespace can be specified using the -namespace option with the icm run command, by including the Namespace field in your defaults file, or by including it in the DM and DS definitions in your definitions file, as follows:
"Namespace": "MIRROR1"
Remote database bindings are assigned in the same fashion as in a nonmirrored cluster; however, the targets are mirrored pairs rather than individual servers.
Automatic LB deployment is supported for providers Amazon AWS, Microsoft Azure, and Google Cloud Platform; when creating your own load balancer, the pool of IP addresses to include are those of all AM and WS nodes.
The relationships between these nodes are pictured in the following illustration.
ICM Mirrored Topology
InterSystems IRIS Licensing for ICM
InterSystems IRIS instances deployed in containers require licenses on most node types. Unless otherwise specified, each InterSystems IRIS instance requires a distinct license; for example, four AM nodes require four different licenses. Licenses are specified in the configuration files using the ISCLicense field, which specifies the absolute path to one or more InterSystems IRIS licenses. License requirements for each role are described in the following sections.
Licenses for Roles AM, QS
Nodes with role AM and QS require a standard InterSystems IRIS license key. There are no special requirements for this key, though if it is a platform-specific key it must be compatible with the operating system of the image specified by the DockerImage field. When multiple nodes of the same role are specified in a single definition, a comma-separated list of licenses is required, for example:
"Role": "AM"
"Count": "3"
"ISCLicense": "/License/Standard/iris1.key,/License/Standard/iris2.key,/License/Standard/iris3.key"
Licenses for Role DM
The license specified for a DM node must be a sharding license. When the DM node is the master of a sharded cluster, this license is used to generate licenses for use by the shard data servers (role DS). Regardless of whether the DM node is part of a sharded cluster and whether it is mirrored (two DM nodes), a single shard master license is required, as follows:
"Role": "DM"
"Count": "2"
"InstanceType": "m4.xlarge"
"ISCLicense": "/License/shardmaster/iris.key"
Licenses for Role DS
For nodes with role DS, a license is not required (and should not be provided); at runtime, the shard data server acquires a license from the shard master data server (role DM). Because this license is not persisted, licenses are reacquired from the shard master data server each time the shard data server starts.
Licenses for Roles WS, AR, LB
For nodes with role WS, AR, and LB, a license is not required (and will not be used if provided).
Licenses for Preexisting Cluster Deployments
When deploying on a preexisting cluster (see Deploying on a Preexisting Cluster), each definition describes a single instance, and therefore requires a single license:
"Role": "AM",
"IPAddress": "172.16.110.135",
"ISCLicense": "/Cloud/license/Standard/iris.key"
ICM Security
The security measures included in ICM are described in the following sections:
For information about the ICM fields used to specify the files needed for the security described here, see Security-Related Parameters.
Compute Node Communication
A compute node is the host machine on which containers are deployed. It may be virtual or physical, running in the cloud or on premises.
ICM uses SSH to log into compute nodes and remotely execute commands on them, and SCP to copy files between the ICM container and a compute node. To enable this secure communication, you must provide an SSH public/private key pair and specify these keys in the defaults.json file as SSHPublicKey and SSHPrivateKey. During the configuration phase, ICM disables password login on each compute node, copies the public key to the node, and opens port 22, enabling clients with the corresponding private key to use SSH and SCP to connect to the node.
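For example, the key pair might be specified in defaults.json as follows (paths within the ICM container are illustrative):

"SSHPublicKey": "/mykeys/icm_rsa.pub",
"SSHPrivateKey": "/mykeys/icm_rsa"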
Other ports opened on the host machine are covered in the sections that follow.
Docker
During provisioning, ICM downloads and installs a specific version of Docker from the official Docker web site using a GPG fingerprint. ICM then copies the TLS certificates you provide (located in the directory specified by the TLSKeyDir field in the defaults file) to the host machine, starts the Docker daemon with TLS enabled, and opens port 2376. At this point clients with the corresponding certificates can issue Docker commands to the host machine.
Weave Net
During provisioning, ICM launches Weave Net with options to encrypt traffic and require a password (provided by the user) from each machine joining the Weave network.
Rancher, Weave Scope
ICM does not configure access control for these tools, and for this reason they are not installed by default. For more information see Monitoring in ICM.
InterSystems IRIS
For detailed and comprehensive information about InterSystems IRIS security, see the InterSystems IRIS Security Administration Guide.
Security Level
ICM expects that the InterSystems IRIS image was installed with Normal security (as opposed to Minimal or Locked Down).
Predefined Account Password
To secure the InterSystems IRIS instance, the password for predefined accounts is set to a value that is undiscoverable and unrecoverable, but can be changed by ICM. The first time ICM runs the InterSystems IRIS container, passwords on all enabled accounts with non-null roles are changed to a password provided by the user. If you don’t want the InterSystems IRIS password to appear in the definitions files, or in your command-line history using the -iscPassword option, you can omit both; ICM interactively prompts for the password, masking your typing. Because passwords are persisted, they are not changed when the InterSystems IRIS container is restarted or upgraded.
JDBC
ICM opens JDBC connections to InterSystems IRIS in SSL/TLS mode (as required by InterSystems IRIS), using the files located in the directory specified by the TLSKeyDir field in the defaults file.
Mirroring
ICM creates mirrors (both DM pairs and DS pairs) with SSL/TLS enabled (see the “Mirroring” chapter of the High Availability Guide), using the files located in the directory specified by the TLSKeyDir field in the defaults file. Failover members can join a mirror only if SSL/TLS is enabled.
InterSystems Web Gateway
ICM configures WS nodes to communicate with DM and AM nodes using SSL/TLS, using the files located in the directory specified by the TLSKeyDir field in the defaults file.
Centralized Security
InterSystems recommends the use of an LDAP server to implement centralized security across the nodes of a sharded cluster or other ICM deployment. For information about using LDAP with InterSystems IRIS, see the Using LDAP chapter of the Security Administration Guide.
Monitoring in ICM
ICM offers two basic monitoring facilities, Rancher and Weave Scope.
Neither is deployed by default; you must specify the one you want in your defaults file using the Monitor field.
Rancher
Rancher is a product of Rancher Labs and consists of a Rancher Server and several Rancher Agents. Rancher can be deployed using the following key in your defaults.json file:
"Monitor": "rancher"
When provisioning is complete, the URL for the Rancher Server is displayed, for example:
Rancher Server available at: http://00.186.24.14:8080/env/1a5/infra/hosts
ICM does not secure Rancher; users provisioning with Rancher are advised to enable access control using one of the methods documented at http://docs.rancher.com/rancher/latest/en/configuration/access-control/. Once you've selected an access control method, go to the Rancher Server URL displayed at the completion of provisioning and use the Admin menu to configure access control.
Weave Scope
Weave Scope is a product of Weaveworks. Weave Scope runs as a combined client/server; there is no preferred node, and its web server is available on all compute instances (at port 4040). Because Weave Scope runs on top of Weave Net, it can be deployed only for overlay networks of type "weave". Weave Scope can be deployed by including the following in your defaults.json file:
"Monitor": "scope"
When provisioning is complete, the port for Weave Scope is displayed, for example:
Weave Scope available on port 4040 (all compute instances)
ICM Troubleshooting
When an error occurs during an ICM operation, ICM displays a message directing you to the log file in which information about the error can be found. Before beginning an ICM deployment, familiarize yourself with the log files and their locations as described in Log Files and Other ICM Files.
In addition to the topics that follow, please see Additional Docker/InterSystems IRIS Considerations in Running InterSystems IRIS in Containers for information about important considerations when creating and running InterSystems IRIS container images.
Compute Node Restart and Recovery
When a cloud compute node is shut down and restarted due to an unplanned outage or to planned action by the cloud provider (for example, for preventive maintenance) or user (for example, to reduce costs), its IP address and domain name may change, causing problems for both ICM and deployed applications (including InterSystems IRIS).
This behavior differs by cloud provider. By default, GCP and Azure preserve IP address and domain name across compute node restart, whereas AWS does not.
If the IP address and domain name of compute nodes change, ICM can no longer communicate with the node and a manual update to ICM is therefore required, followed by an update to the cluster. The procedure is as follows:
  1. Go to the web console of the cloud provider and locate your instances there. Record the IP address and domain name of each, for example:

    Node               IP Address      Domain Name
    ISC-DM-TEST-0001   54.191.233.2    ec2-54-191-233-2.amazonaws.com
    ISC-DS-TEST-0002   54.202.223.57   ec2-54-202-223-57.amazonaws.com
    ISC-DS-TEST-0003   54.202.223.58   ec2-54-202-223-58.amazonaws.com
  2. Edit the instances.json file and update the IPAddress and DNSName fields for each instance, for example:
    "Label" : "ISC",
    "Role" : "DM",
    "Tag" : "TEST",
    "MachineName" : "ISC-DM-TEST-0001",
    "IPAddress" : "54.191.233.2",
    "DNSName" : "ec2-54-191-233-2.amazonaws.com",
  3. Verify that the values are correct using the icm inventory command:
    $ icm inventory
    Machine            IP Address       DNS Name                      
    -------            ----------       --------                      
    ISC-DM-TEST-0001   54.191.233.2     ec2-54-191-233-2.amazonaws.com
    ISC-DS-TEST-0002   54.202.223.57    ec2-54-202-223-57.amazonaws.com
    ISC-DS-TEST-0003   54.202.223.58    ec2-54-202-223-58.amazonaws.com
  4. Use the icm ps command to verify that the compute instances are reachable:
    $ icm ps -container weave
    Machine            IP Address      Container   Status   Image
    -------            ----------      ---------   ------   -----
    ISC-DM-TEST-0001   54.191.233.2    weave       Up       weaveworks/weave:2.0.4
    ISC-DS-TEST-0002   54.202.223.57   weave       Up       weaveworks/weave:2.0.4
    ISC-DS-TEST-0003   54.202.223.58   weave       Up       weaveworks/weave:2.0.4
    
  5. The Weave network deployed by ICM includes a decentralized discovery service, which means that if at least one compute instance has kept its original IP address, the other compute instances will be able to reach it and reestablish all of their connections with one another. However, if the IP address of every compute instance in the cluster has changed, an additional step is needed to connect all the nodes in the Weave network to a valid IP address. Select one of the new IP addresses, such as 54.191.233.2 in our example. Then connect each node to this IP address using the icm ssh command, as follows:
    $ icm ssh -command "weave connect --replace 54.191.233.2"
    Executing command 'weave connect 54.191.233.2' on host ISC-DM-TEST-0001...
    Executing command 'weave connect 54.191.233.2' on host ISC-DS-TEST-0002...
    Executing command 'weave connect 54.191.233.2' on host ISC-DS-TEST-0003...
    ...executed on ISC-DM-TEST-0001
    ...executed on ISC-DS-TEST-0002
    ...executed on ISC-DS-TEST-0003
Correcting Time Skew
If the system time within the ICM containers differs from standard time by more than a few minutes, the various cloud providers may reject requests from ICM. This can happen when the container is unable to reach an NTP server on startup (initial, or after being stopped or paused) or doesn't run for a period of time. The error appears in the terraform.err file as some variation on the following:

    # icm provision
    Error: Thread exited with value 1
    Error refreshing state: 1 error(s) occurred:
    Signature expired: 20170504T170025Z is now earlier than 20170504T171441Z (20170504T172941Z - 15 min.)
    status code: 403, request id: 41f1c4c3-30ef-11e7-afcb-3d4015da6526
The solution is to manually run NTP, for example:
ntpd -nqp pool.ntp.org
and verify that the time is now correct. (See also the discussion of the --cap-add option in Launch ICM.)
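If the ICM container itself lacks the privilege to set the system clock, it can be granted at launch; the following docker run sketch shows only the relevant option, with a hypothetical image name:

docker run -it --cap-add SYS_TIME acme/icm:stable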
Timeouts Under ICM
When the target system is under extreme load, various operations in ICM may time out. Many of these timeouts are not under direct ICM control (for example, from cloud providers); other operations are retried several times, for example SSH and JDBC connections.
SSH timeouts are sometimes not identified as such. For instance, in the following example, an SSH timeout manifests as a generic exception from the underlying library:
# icm cp -localPath foo.txt -remotePath /tmp/
2017-03-28 18:40:19 ERROR Docker:324 - Error: 
java.io.IOException: com.jcraft.jsch.JSchException: channel is not opened. 
2017-03-28 18:40:19 ERROR Docker:24 - java.lang.Exception: Errors occurred during execution; aborting operation 
        at com.intersystems.tbd.provision.SSH.sshCommand(SSH.java:419) 
        at com.intersystems.tbd.provision.Provision.execute(Provision.java:173) 
        at com.intersystems.tbd.provision.Main.main(Main.java:22)
In this case the recommended course of action is to retry the operation (after identifying and resolving its proximate cause).
Note that for security reasons ICM sets the default SSH timeout for idle sessions at ten minutes (60 seconds x 10 retries). These values can be changed by modifying the following fields in the /etc/ssh/sshd_config file:
ClientAliveInterval 60
ClientAliveCountMax 10
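Because these settings live on each compute node, one way to change them across a deployment is the icm ssh command; the following is a sketch that assumes the default values shown above and that sshd is managed by systemd:

icm ssh -command "sudo sed -i 's/ClientAliveCountMax 10/ClientAliveCountMax 30/' /etc/ssh/sshd_config && sudo systemctl reload sshd"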
IP Addresses Blocked by Docker Bridge Network
The Docker bridge network uses 172.17.0.0/16 as a subnet by default. This prevents containers from reaching IP addresses in the 172.17.N.N range. As a result, ICM may not be able to communicate with preexisting compute instances whose IP addresses fall in this range.
To resolve this, you can edit the bridge network’s IP configuration in the Docker configuration file to reassign the subnet to a range that doesn’t conflict with your own IP address range(s). To make this change, edit the Docker daemon configuration file (see Configure and troubleshoot the Docker daemon in the Docker documentation) and add the following line:
"bip": "172.19.0.1/24"
A different IP specification can be used, but the one provided is known to be effective.
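For example, if the daemon configuration file (typically /etc/docker/daemon.json) is otherwise empty, the complete file would read:

{
    "bip": "172.19.0.1/24"
}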
Weave Network IP Address Range Conflict
By default, the Weave network uses IP address range 10.32.0.0/12. If this conflicts with an existing network, you may see an error such as the following in log file installWeave.log:
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
ERROR: Default --ipalloc-range 10.32.0.0/12 overlaps with existing route on host.
You must pick another range and set it on all hosts.
This is most likely to occur with provider PreExisting if the machines provided have undergone custom network configuration to support other software or local policies. If disabling or moving the other network is not an option, you can change the Weave configuration instead, using the following procedure:
  1. Edit the following file local to the ICM container:
    /ICM/etc/toHost/installWeave.sh
  2. Find the line containing the string weave launch. If you're confident there is no danger of overlap between Weave and the existing network, you can force Weave to continue using the default range by adding the --ipalloc-range option to that line, as follows:
    sudo /usr/local/bin/weave launch --ipalloc-range 10.32.0.0/12 --password $2 
    You can also simply move Weave to another private network, as follows:
    sudo /usr/local/bin/weave launch --ipalloc-range 172.30.0.0/16 --password $2
  3. Save the file.
  4. Reprovision the cluster.
Huge Pages
On certain architectures you may see an error similar to the following in the InterSystems IRIS messages log:
0 Automatically configuring buffers 
1 Insufficient privileges to allocate Huge Pages; non-root instance requires CAP_IPC_LOCK capability for Huge Pages. 
2 Failed to allocate 1316MB shared memory using Huge Pages. Startup will retry with standard pages. If huge pages 
  are needed for performance, check the OS settings and consider marking them as required with the InterSystems IRIS 
  'memlock' configuration parameter.
This can be remedied by providing the following option to the icm run command:
-options "--cap-add IPC_LOCK"
ICM Configuration Parameters
These tables describe the fields you can include in the configuration files (see Configuration, State and Log Files) to provide ICM with the information it needs to execute provisioning and deployment tasks and management commands.
General Parameters
Parameter Meaning
ISCLicense License for the InterSystems IRIS instance on one or more provisioned nodes; see InterSystems IRIS Licensing for ICM.
ISCPassword Password that will be set for the _SYSTEM, Admin, and SuperUser accounts on the InterSystems IRIS instances on one or more provisioned nodes. Corresponding command-line option: -iscPassword.
DockerImage Docker image to be used for icm run and icm pull commands. Must include the repository name. Corresponding command-line option: -image.
DockerRegistry DNS name of the server hosting the Docker repository storing the image specified by DockerImage. If not included, ICM uses Docker’s public registry located at registry-1.docker.io.
DockerUsername Username to use for Docker login to the repository specified in DockerImage on the registry specified by DockerRegistry. Not required for public repositories. If not included and the repository specified by DockerImage is private, login fails.
DockerPassword Password to use for Docker login, along with DockerUsername. Not required for public repositories. If this field is not included and the repository specified by DockerImage is private, ICM prompts you (with masked input) for a password. (If the value of this field contains special characters such as $, |, (, and ), they must be escaped with two \ characters; for example, the password abc$def must be specified as abc\\$def.)
Home Root of the home directory on a provisioned node. Default: /home.
Mirror If True, InterSystems IRIS instances are deployed with mirroring enabled; see Mirrored Configuration Requirements. Default: False.
Spark If True, and the spark image is specified by the DockerImage field, Spark is deployed with InterSystems IRIS in the iris container (see The icm run Command for more information). Default: False.
LoadBalancer Set to True in definitions of node type AM, WS, or VM for automatic provisioning of a load balancer on providers AWS, GCP, and Azure. Default: False.
Namespace Namespace to be created during deployment. The namespace specified is also set as the default namespace for the icm session and icm sql commands. The predefined namespace USER may be used in non-mirrored deployments only. For more information, see The Definitions File. Default: USER. Command-line option: -namespace.
ISCglobals * Database cache allocation from system memory. See globals in the Parameter File Reference and Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide. Default: 0,0,0,0,0,0 (automatic allocation).
ISCroutines * Routine cache allocation from system memory. See routines in the Parameter File Reference and Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide. Default: 0 (automatic allocation).
ISCgmheap * Size of the generic memory heap (in KB). See gmheap in the Parameter File Reference and gmheap in the Advanced Memory Settings section of the Additional Configuration Settings Reference. Default: 37568.
ISClocksiz * Maximum size of shared memory for locks (in bytes). See locksiz in the Parameter File Reference. Default: 16777216.
ISCbbsiz * Maximum memory per process (in KB). See bbsiz in the Parameter File Reference. Default: 262144.
ISCmemlock * Enable/disable locking shared memory or the text segment into memory. See memlock in the Parameter File Reference. Default: 0.
Overlay Determines the Docker overlay network type; normally "weave", but may be set to "host" for development or debug purposes, or when deploying on a preexisting cluster. Default: weave (host when deploying on a preexisting cluster).
ISCAgentPort Port used by InterSystems IRIS ISC Agent. Default: 2188.
JDBCGatewayPort * Port used by InterSystems IRIS JDBC Gateway. Default: 62972.
SuperServerPort * Port used by InterSystems IRIS Superserver. Default: 51773.
WebServerPort * Port used by InterSystems IRIS Web Server (Management Portal). Default: 52773.
SparkMasterPort Port used by Spark Master. Default: 7077.
SparkWorkerPort Port used by Spark Worker. Default: 7000.
SparkMasterWebUIPort Port used by Spark Master Web UI. Default: 8080.
SparkWorkerWebUIPort Port used by Spark Worker Web UI. Default: 8081.
SparkRESTPort Port used for Spark REST API. Default: 6066.
SparkDriverPort Port used for Spark Driver. Default: 7001.
SparkBlockManagerPort Port used for Spark Block Manager. Default: 7005.
Count Number of nodes to provision from a given entry in the definitions file. Default: 1.
Label Name shared by all nodes in this deployment, for example AcmeCorp; cannot contain dashes.
Role Role of the node or nodes to be provisioned by a given entry in the definitions file, for example DM or DS; see ICM Node Types.
Tag Additional name used to differentiate between deployments, for example TEST; cannot contain dashes.
StartCount Numbering start for a particular node definition in the definitions file. For example, if the DS node definition includes "StartCount": "3", the first DS node provisioned is named Label-DS-Tag-0003.
DataMountPoint The location on the compute instance where the persistent volume described by DataDeviceName (see Device Name Parameters) will be mounted. Default: /irissys/data.
WIJMountPoint * The location on the compute instance where the persistent volume described by WIJDeviceName (see Device Name Parameters) will be mounted. Default: /irissys/wij.
Journal1MountPoint * The location on the compute instance where the persistent volume described by Journal1DeviceName (see Device Name Parameters) will be mounted. Default: /irissys/journal1j.
Journal2MountPoint * The location on the compute instance where the persistent volume described by Journal2DeviceName (see Device Name Parameters) will be mounted. Default: /irissys/journal2j.
OSVolumeSize Size (in GB) of the OS volume to create for deployments other than type PreExisting. Because this setting must be greater than or equal to the volume size of the original template or snapshot, it should never be set to less than the default. Default: 10.
DataVolumeSize Size (in GB) of the persistent data volume to create for deployments other than type PreExisting. This volume corresponds to the DataDeviceName parameter (see Device Name Parameters) and will be mounted at DataMountPoint. Default: 10.
WIJVolumeSize Size (in GB) of the persistent WIJ volume to create for deployments other than PreExisting. This volume corresponds to the WIJDeviceName parameter (see Device Name Parameters) and will be mounted at WIJMountPoint. Default: 10.
Journal1VolumeSize Size (in GB) of the persistent Journal volume to create for deployments other than type PreExisting. This volume corresponds to the Journal1DeviceName parameter (see Device Name Parameters) and will be mounted at Journal1MountPoint. Default: 10.
Journal2VolumeSize Size (in GB) of the alternate persistent Journal volume to create for deployments other than type PreExisting. This volume corresponds to the Journal2DeviceName parameter (see Device Name Parameters) and will be mounted at Journal2MountPoint. Default: 10.
SystemMode String to be shown in the Management Portal masthead. Certain values (LIVE, TEST, FAILOVER, DEVELOPMENT) trigger additional changes in appearance. Default: blank.
Provider Platform to provision infrastructure on; see Provisioning Platforms. Default: none.
Monitor Deploy basic monitoring: rancher (Rancher) or scope (Weave Scope). Default: none.
ForwardPort Port to be forwarded by a given load balancer (both 'from' and 'to'). Defaults:
ForwardProtocol Protocol to be forwarded by a given load balancer. Defaults:
  • AM: tcp
  • WS: http
  • VM: (user provided)
HealthCheckPort Port used to verify health of instances in the target pool. Defaults:
HealthCheckProtocol Protocol used to verify health of instances in the target pool. Defaults:
  • AM: tcp
  • WS: http
  • VM: (user provided)
HealthCheckPath Path used to verify health of instances in the target pool. Defaults:
* The parameters marked with an asterisk in the preceding table map directly to parameters in the iris.cpf file of the InterSystems IRIS instance on nodes of type DM, AM, DS, and QS:
ICM Name             iris.cpf Name
SuperServerPort      DefaultPort
WebServerPort        WebServerPort
JDBCGatewayPort      JDBCGatewayPort
WIJMountPoint        wijdir
Journal1MountPoint   CurrentDirectory
Journal2MountPoint   AlternateDirectory
ISCglobals           globals
ISCroutines          routines
ISCgmheap            gmheap
ISCbbsiz             bbsiz
ISClocksiz           locksiz
ISCmemlock           memlock
Security-Related Parameters
The parameters in the following table are used to identify files and information required for ICM to communicate securely with the provisioned nodes and deployed containers.
Parameter Meaning
SSHUser Nonroot account with sudo access used by ICM for access to provisioned nodes. Root of SSHUser’s home directory can be specified using the Home field. Required value is provider-specific, as follows:
  • AWS — As per AMI specification (usually "ec2-user" for Red Hat Enterprise Linux instances)
  • vSphere — As per VM template
  • Azure — At user's discretion
  • GCP — At user's discretion
SSHPassword Initial password for the user specified by SSHUser. Required for marketplace Docker images and deployments of type vSphere, Azure, and PreExisting. This is used only during provisioning, at the conclusion of which password logins are disabled.
SSHOnly If True, ICM does not attempt SSH password logins during provisioning (providers PreExisting and vSphere only). Default: False.
SSHPublicKey Public key of SSH public/private key pair; required for all deployments.
For provider AWS, must be in SSH2 format, for example:
---- BEGIN SSH2 PUBLIC KEY ----
AAAAB3NzaC1yc2EAAAABJQAAAQEAoa0
---- END SSH2 PUBLIC KEY ----
For other providers, must be in OpenSSH format, for example:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAoa0
SSHPrivateKey Private key of the SSH public/private key pair; required, in RSA format, for example:
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAoa0ex+JKzC2Nka1
-----END RSA PRIVATE KEY-----
TLSKeyDir Directory containing TLS keys used to establish secure connections to Docker, InterSystems Web Gateway, JDBC, and mirrored InterSystems IRIS databases.
SSLConfig Path to an SSL/TLS configuration file used to establish secure JDBC connections. Default: If this parameter is not provided, ICM looks for a configuration file in /TLSKeyDir/SSLConfig.Properties (see previous entry).
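The key formats shown in the preceding table can be generated with standard OpenSSH tooling; the following sketch uses illustrative file names and relies on ssh-keygen -e to convert an OpenSSH public key to the SSH2 format required for provider AWS:

ssh-keygen -t rsa -f ~/.ssh/icm_rsa
ssh-keygen -e -f ~/.ssh/icm_rsa.pub > icm_rsa_ssh2.pub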
Provider-Specific Parameters
The tables in this section list the parameters used by ICM that are specific to each provider.
Note:
Some of the parameters listed are used with more than one provider.
Selecting Machine Images
Cloud providers operate data centers in various regions of the world, so one of the important things to customize for your deployment is the region in which your cluster will be deployed. Another choice is which virtual machine images to use for the compute nodes in your cluster. Although the sample configuration files define valid regions and machine images for all cloud providers, you will generally want to change the region to match your own location. Because machine images are often specific to a region, the two must be selected together.
At this release, ICM supports provisioning of and deployment on compute nodes running Red Hat Enterprise Linux, version 7.2 or later, so the machine images you select must run this operating system.
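For example, an AWS defaults file might select the region and machine image with fields like the following (values are samples drawn from the AWS table below, not recommendations):

"Provider": "AWS",
"Region": "us-west-1",
"Zone": "us-west-1c",
"AMI": "ami-a540a5e1",
"InstanceType": "m4.large"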
Amazon Web Services (AWS) Parameters
Parameter Meaning
Credentials Path to a file containing Amazon AWS credentials in the following format:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYY
Download from https://console.aws.amazon.com/iam/home?#/users.
SSHUser Nonroot account with sudo access used by ICM for access to provisioned nodes (see Security-Related Parameters). Root of SSHUser’s home directory can be specified using the Home field. Required value is determined by the selected AMI; for Red Hat Enterprise Linux images, the required value of SSHUser is usually ec2-user.
AMI AMI to use for a node or nodes to be provisioned; see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html. Example: ami-a540a5e1.
Region Region to use for a node or nodes to be provisioned; see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html. Example: us-west-1.
Zone Availability zone to use for a node or nodes to be provisioned; see link in previous entry. Example: us-west-1c.
InstanceType Instance Type to use for a node or nodes to be provisioned; see https://aws.amazon.com/ec2/instance-types/. Example: m4.large.
OSVolumeType Determines maximum OSVolumeSize. See http://docs.aws.amazon.com/cli/latest/reference/ec2/create-volume.html. Default: standard.
DataVolumeType Determines maximum DataVolumeSize (see OSVolumeType). Default: standard.
WIJVolumeType Determines maximum WIJVolumeSize (see OSVolumeType). Default: standard.
Journal1VolumeType Determines maximum Journal1VolumeSize (see OSVolumeType). Default: standard.
Journal2VolumeType Determines maximum Journal2VolumeSize (see OSVolumeType). Default: standard.
Google Cloud Platform (GCP) Parameters
Parameter Meaning
Credentials JSON file containing account credentials. Download from https://console.developers.google.com/
Project Google project ID.
MachineType Machine type resource to use for a node or nodes to be provisioned. See https://cloud.google.com/compute/docs/machine-types. Example: n1-standard-1.
Region Region to use for a node or nodes to be provisioned; see https://cloud.google.com/compute/docs/regions-zones/regions-zones. Example: us-east1.
Zone Zone in which to locate a node or nodes to be provisioned. Example: us-east1-b.
Image The source image from which to create this disk. See https://cloud.google.com/compute/docs/images. Example: centos-cloud/centos-7-v20160803.
OSVolumeType Determines disk type for the OS volume. See https://cloud.google.com/compute/docs/reference/beta/instances/attachDisk. Default: pd-standard.
DataVolumeType Determines disk type for the persistent Data volume (see OSVolumeType). Default: pd-standard.
WIJVolumeType Determines disk type for the persistent WIJ volume (see OSVolumeType). Default: pd-standard.
Journal1VolumeType Determines disk type for the persistent Journal1 volume (see OSVolumeType). Default: pd-standard.
Journal2VolumeType Determines disk type for the persistent Journal2 volume (see OSVolumeType). Default: pd-standard.
Microsoft Azure (Azure) Parameters
Parameter Meaning
Image The name of an existing VM or OS image to use for a node or nodes to be provisioned; see https://azure.microsoft.com/en-us/marketplace/virtual-machines/all/. Example: OpenLogic 7.2.
Size The size of a node or nodes to be provisioned; see https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes. Example: Standard_DS1.
Location Location in which to provision a node or nodes; see https://azure.microsoft.com/en-us/regions/. Example: Central US.
TimeZone Time zone in which to provision a node or nodes. Example: America/New_York.
SubscriptionId Credentials which uniquely identify the Microsoft Azure subscription.
ClientId Azure application identifier.
ClientSecret Provides access to an Azure application.
TenantId Azure Active Directory tenant identifier.
PublisherName Entity providing a given Azure image. Example: OpenLogic.
Offer Operating system of a given Azure image. Example: Centos.
Sku Major version of the operating system of a given Azure image. Example: 7.2.
Version Build version of a given Azure image. Example: 7.2.20170105.
AccountType Account tier and storage type. Example: Standard_LRS.
VMware vSphere (vSphere) Parameters
Parameter Meaning
Server Name of the vCenter server. Example: tbdvcenter.iscinternal.com.
Datacenter Name of the datacenter.
VsphereUser Username for vSphere operations.
VspherePassword Password for vSphere operations.
VCPU Number of CPUs in a node or nodes to be provisioned. Example: 2.
Memory Amount of memory (in MB) in a node or nodes to be provisioned. Example: 4096.
Datastore Storage location for virtual machine files. Example: LocalStore1
DNSServers List of DNS servers for the virtual network. Example: 172.16.96.1,172.17.15.53
DNSSuffixes List of name resolution suffixes for the virtual network adapter. Example: iscinternal.com
Gateway IPv4 gateway IP address to use. Example: 172.16.111.254
Domain FQDN for a node to be provisioned. Example: iscinternal.com
NetworkInterface Label to assign to a network interface. Example: VM Network
IPV4PrefixLength Prefix length to use when statically assigning an IPv4 address. Example: 21
Template Virtual machine master copy. Example: centos-7
Note:
To address the needs of the many users who rely on VMware vSphere, this release of ICM supports vSphere (through the Terraform vSphere plugin version 0.3) as an early adopter option. Depending on your particular vSphere configuration and underlying hardware platform, the use of ICM to provision virtual machines may entail additional extensions and adjustments not covered in this guide, especially for larger and more complex deployments, and may not be suitable for production use. Full support is expected in a later release.
PreExisting Cluster (PreExisting) Parameters
Parameter Meaning
IPAddress This is a required field (in the definitions file) for provider PreExisting and is a generated field for all other providers.
DNSName FQDN of the compute instance, or its IP Address if unavailable. Deployments of type PreExisting may populate this field (in the definitions file) to provide names for display by the icm inventory command. This is a generated field for all other providers.
Device Name Parameters
The four parameters in the following table specify the devices (under /dev) on which persistent volumes appear. Defaults are available for all providers other than PreExisting, but these values are highly platform and OS-specific and may need to be overridden in your defaults.json file.
Parameter            AWS    GCP   vSphere   Azure
DataDeviceName       xvdb   sdb   sdb       sdc
WIJDeviceName        xvdc   sdc   sdc       sdd
Journal1DeviceName   xvdd   sdd   sdd       sde
Journal2DeviceName   xvde   sde   sde       sdf
Generated Parameters
These parameters are generated by ICM during provisioning, configuration, and deployment. They should generally be treated as read-only and are included here for information purposes only.
Parameter Meaning
Member In initial configuration of a mirrored pair, either primary or backup (actual role of each failover member is determined by mirror operations).
MachineName Generated from Label-Role-Tag-####.
WeaveArgs Generated by executing weave dns-args on the compute instance.
WeavePeers List of IP addresses of every compute instance except the current one.
StateDir Location on the ICM client where temporary, state, and log files will be written. Default: ICM-nnnnnnnnn. Command-line option: -stateDir.
RancherServer Compute instance designated as the Rancher server. Default: Randomly selected node with role QS.
DefinitionIndex Assigns an index to each object in the definitions.json file; this is used to uniquely number load balancer instances (which would otherwise have the same names).
TargetRole The role associated with the resources being managed by a given load balancer.
MirrorSetName Name assigned to a failover mirror.
InstanceCount Total number of instances in this deployment.