InterSystems Cloud Manager Guide
Essential ICM Elements
This chapter describes the essential elements involved in using ICM, including the following:
ICM Image
ICM is provided as a Docker image, which you run to start the ICM container. Everything required by ICM to carry out its provisioning, deployment, and management tasks — for example, Terraform, the Docker client, and templates for the configuration files — is included in this container. See Launch ICM for more information about the ICM container.
The system on which you launch ICM must be supported by Docker as a platform for hosting Docker containers, have Docker installed, and have adequate connectivity to the cloud platform on which you intend to provision infrastructure and deploy containers.
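For example, launching the ICM container might look like the following (a sketch only; the image name and tag shown, intersystems/icm:stable, are placeholders for the ICM image you actually obtain from InterSystems):

docker run --name icm -it intersystems/icm:stable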
Provisioning Platforms
ICM can provision virtual compute nodes and associated resources on the following platforms:
  Amazon Web Services (AWS)
  Google Cloud Platform (GCP)
  Microsoft Azure
  VMware vSphere
Before using ICM with one of these platforms, you should be generally familiar with the platform. You will also need account credentials; see Obtain Security Credentials for more information.
ICM can also configure existing virtual or physical clusters (provider PreExisting) as needed and deploy containers on them, just as with the nodes it provisions itself.
Deployment Platforms
ICM supports deployment of containers on nodes running enterprise-level operating system platforms supported for the purpose by InterSystems, currently Red Hat Enterprise Linux, version 7.2 or later.
Docker images created by InterSystems are supported on the Supported Server Platforms listed in InterSystems Supported Platforms, and on Red Hat Enterprise Linux, SUSE Enterprise Linux, Ubuntu, and CentOS for development.
Node Types and InterSystems IRIS Architecture
ICM deploys one InterSystems IRIS™ instance per provisioned node, and the role that each instance plays in an InterSystems IRIS configuration is determined by the Role field value under which the node and the instance on it are provisioned, deployed, and configured.
Defining Nodes in the Deployment
In preparing to use ICM, you must define your target configuration (see Define the Deployment) by selecting the types and numbers of nodes you want to include. If you want to deploy an InterSystems IRIS sharded cluster, for example, you must decide beforehand how many shard data servers (role DS), shard query servers (role QS), if any, and shard master application servers (role AM), if any, will be included in the cluster, and whether the shard master data server (role DM) and shard data servers are to be mirrored. The specifications for the desired nodes are then entered in the definitions file (see Configuration, State, and Log Files) to provide the needed input to ICM. This is shown in the following simplified example defining a shard master data server, six shard data servers, and three shard master application servers:
[
    {
        "Role": "DM",
        "Count": "1"
    },
    {
        "Role": "DS",
        "Count": "6"
    },
    {
        "Role": "AM",
        "Count": "3"
    }
]
The Mirror field, which determines whether mirroring is configured, appears in the defaults file; it is covered at length in ICM Cluster Topology.
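Enabling mirroring is therefore done in the defaults file rather than per node type; a minimal sketch follows (a real defaults file contains many other fields, as shown in The Defaults File):

{
    "Mirror": "true"
}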
Defining Sharded or Traditional InterSystems IRIS Architecture
ICM gives you the choice of deploying an InterSystems IRIS sharded cluster, an InterSystems IRIS distributed cache cluster, or a stand-alone InterSystems IRIS instance. This is possible because two node types, DM and AM, can represent different roles in the configuration depending on what is deployed with them, as follows:
  A DM node serves as the shard master data server in a sharded cluster, as the data server in a distributed cache cluster, or as a stand-alone InterSystems IRIS instance.
  An AM node serves as a shard master application server in a sharded cluster or as an application server in a distributed cache cluster.
These configurations are shown in the following illustration:
Stand-alone, Distributed Cache, and Sharded Configurations as Deployed by ICM
ICM also provisions and deploys web servers and load balancers as needed, and arbiter nodes as part of mirrored configurations.
For more information about ICM node types and the ways in which they can be configured together, see ICM Node Types and ICM Cluster Topology. For detailed information about InterSystems IRIS sharded and distributed caching architectures, see the Scalability Guide.
Field Values
The provisioning, deployment, and configuration work done by ICM is determined by a number of field values that you provide. For example, the Provider field specifies the provisioning platform to use; if its value is AWS, the specified nodes are provisioned on AWS.
There are two ways to specify field values:
  As options on the ICM command line
  As entries in the configuration files, definitions.json and defaults.json
There are also defaults for most ICM fields. In descending order of precedence, these specifications are ranked as follows:
  1. Command line option
  2. Entry in definitions.json configuration file
  3. Entry in defaults.json configuration file
  4. ICM default value
This avoids repeating commonly used fields while allowing flexibility. The defaults file (defaults.json) can be used to provide values (when a value is required or to override the ICM defaults) for multiple deployments in a particular category, for example those provisioned on the same platform. The definitions file (definitions.json) provides values for a particular deployment, such as specifying the nodes in the cluster and the labels that identify them, but can also contain fields included in the defaults file, in order to override the defaults.json value for a particular node type in a particular deployment. Specifying a field value as a command-line option lets you override both configuration files in the context of a particular command in a particular deployment.
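For example, if defaults.json contains "Namespace": "DB", you can override that value for a particular deployment command (a sketch; the namespace name is a placeholder):

icm run -namespace TEST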
The following illustration shows three deployments sharing a single defaults.json file, while two also share a definitions.json file. For the latter, different InterSystems IRIS namespaces are created during deployment by adding an option to the icm run command.
ICM Configuration Files Define Deployments
For a list of all fields and definitions of their values, see ICM Reference.
Command Line
The ICM command line interface allows users to specify an action in the orchestration process — for example, icm provision to provision infrastructure, or icm run to deploy by running the specified containers — or a container or service management command, such as upgrading a container or connecting to InterSystems IRIS. ICM executes these commands using information from configuration files. If either of the input configuration files (see Configuration, State, and Log Files) is not specified on the command line, ICM uses the file (definitions.json or defaults.json) that exists in the current working directory; if one or both are not specified and do not exist, the command fails. For a complete list of ICM commands, see ICM Reference.
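For example, a minimal provision-then-deploy sequence naming the configuration files explicitly, rather than relying on the current working directory, might look like this (a sketch; the file paths are placeholders, and it assumes both commands accept the -definitions and -defaults options, as the preceding paragraph implies):

icm provision -definitions /config/definitions.json -defaults /config/defaults.json
icm run -definitions /config/definitions.json -defaults /config/defaults.json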
The command line can also be used to specify options, which serve several purposes.
Options can have different purposes when used with different commands. For example, with the icm run command, the -container option provides the name for the new container being started from the specified image, whereas with the icm ps command it specifies the name of the existing deployed containers for which you want to display run state information.
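For example (a sketch; the container and image names are placeholders):

icm run -container myiris -image iscrepo/iris:stable
icm ps -container myiris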
For comprehensive lists of ICM commands and command-line options, see ICM Commands and Options.
Configuration, State, and Log Files
ICM uses JSON files as both input and output: as input, the configuration files (the definitions and defaults files) describe the deployment you want; as output, ICM records the results of its work (for example, in the instances file).
When an ICM task results in the generation of new or changed data (for example, an IP address), that information is recorded in JSON format for use by subsequent tasks. ICM ignores all fields that it does not recognize or that do not apply to the current task, but passes these fields on to subsequent tasks rather than generating an error. Among other advantages, this means a file generated by or for one task can be passed unmodified to the next, even when it contains fields the current task does not use.
The JSON files used by ICM include the definitions file (definitions.json), the defaults file (defaults.json), and the instances file (instances.json), each described in the sections that follow.
ICM also generates a number of other log, output, and error files, which are located in the current working directory or the state directory.
The Definitions File
The definitions file (definitions.json) describes a set of compute nodes to be provisioned for a particular deployment. The file consists of a series of JSON objects representing node definitions, each of which contains a list of attributes as well as a count to indicate how many nodes of that type should be created. Some fields are required, others are optional, and some are provider-specific (that is, for use with AWS, Google Cloud Platform, Microsoft Azure or VMware vSphere).
Most fields can appear in this file, repeated as needed for each node definition. Some fields, however, must be the same across a single deployed configuration; entries in the definitions file cannot change these from the default, or specify them if there is no default. The Provider field is a good example, for obvious reasons.
Another field that must be in the defaults file is Namespace, which specifies the user namespace to create on deployed InterSystems IRIS instances; default globals and routines databases of the same name are created as well. The user namespace also becomes the default namespace for the icm session and icm sql commands. If unspecified, Namespace defaults to USER. If you instead used the definitions file to create different namespaces on different nodes, or left some with the default of USER but not others, sharded and distributed cache cluster configuration might not succeed.
Note:
If you use the -namespace command line option with the icm provision or icm run commands to override the namespace specified in the defaults file (or the default of USER, if not specified in defaults), you must also use the -namespace option to specify the created namespace when using the icm session and icm sql commands.
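For example (a sketch; the namespace name is a placeholder, and this assumes icm session accepts a -namespace option, as the note implies):

icm run -namespace TEST
icm session -namespace TEST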
The user namespace globals database is used for persistent storage (outside the deployed container) of instance-specific InterSystems IRIS data; for more information, see Durable %SYS for Persistent Instance Data in Running InterSystems IRIS in Containers.
The following shows the contents of a sample definitions.json file for provisioning a distributed cache cluster consisting of a data server ("Role": "DM"), three application servers ("Role": "AM"), and a mirror arbiter node ("Role": "AR") on AWS.
Note that while the values for Label and Tag are the same for all three node types and could therefore be included in the defaults.json file if desired (see The Defaults File), the others vary and therefore must be included in the node definitions in definitions.json. Some fields must be in definitions.json because they are restricted to certain node types; for example, if LoadBalancer is set to True for nodes of type application server (AM) or web server (WS), a load balancer is automatically provisioned, but applying this setting to other node types causes errors.
The definitions.json file is sometimes used to override either ICM defaults or settings in defaults.json. For example, the DataVolumeSize field appears only in the DM definition here because the other nodes use the ICM default value; the DM and AR node definitions include an InstanceType field overriding the one in defaults.json; and the AR node definition includes a DockerImage field overriding the one in defaults.json:
[
    {
        "Role": "DM",
        "Label": "ISC",
        "Tag": "TEST",
        "Count": "2",
        "DataVolumeSize": "50",
        "InstanceType": "m4.xlarge"
    },
    {
        "Role": "AM",
        "Label": "ISC",
        "Tag": "TEST",
        "Count": "3",
        "StartCount": "3",
        "LoadBalancer": "true"
    },
    {
        "Role": "AR",
        "Label": "ISC",
        "Tag": "TEST",
        "Count": "1",
        "StartCount": "4",
        "InstanceType": "t2.small",
        "DockerImage": "iscrepo/arbiter:stable"
    }
]
The Defaults File
Generally, the defaults file defines fields that are the same across all deployments of a particular type, such as those provisioned on a particular cloud platform.
As noted in The Definitions File, while most fields can be in either input file, some must be the same across a deployment and cannot be specified separately for each node type, for example Provider. In addition to these, there may be other fields that you want to apply to all nodes in all deployments, overriding them when desired on the command line or in the definitions file. Fields of both types are included in the defaults.json file. Including as many fields as you can in the defaults file keeps definitions files smaller and more manageable.
The format of the defaults file is a single JSON object; the values it contains are applied to every field whose value is not specified (or is null) in a command line option or the definitions file.
The following shows the contents of a sample defaults.json file used with the sample definitions.json file provided in The Definitions File. Some of the defaults specified here are overridden by the definitions file, including DataVolumeSize, InstanceType, and DockerImage.
{
    "Provider": "AWS",
    "Label": "Sample",
    "Tag": "TEST",
    "OSVolumeSize": "15",
    "DataVolumeSize": "10",
    "SSHUser": "ec2-user",
    "SSHPublicKey": "/Samples/ssh/insecure-ssh2.pub",
    "SSHPrivateKey": "/Samples/ssh/insecure",
    "DockerImage": "iscrepo/iris:stable",
    "DockerUsername": "xxxxxxxxxxxx",
    "DockerPassword": "xxxxxxxxxxxx",
    "TLSKeyDir": "/Samples/tls",
    "Region": "us-west-1",
    "Zone": "us-west-1c",
    "AMI": "ami-d1315fb1",
    "InstanceType": "m4.large",
    "Credentials": "/Samples/AWS/sample.credentials",
    "SystemMode": "TEST",
    "ISCPassword": "",
    "Namespace": "DB",
    "Mirror": "false"
}
The Instances File
The instances file (instances.json), generated by ICM during the provisioning phase, describes the set of compute nodes that have been successfully provisioned. This information provides input to the deployment and management phase, so the file must be available during that phase, and its path must be provided to ICM if it is not in the current working directory. By default, the instances file is created in the current working directory; you can change this using the -instances option, but if you do, you must supply the alternate location by using the -instances option with all subsequent commands.
While the definitions file contains only one entry for each node type, including a Count value to specify the number of nodes of that type, the instances file contains an entry for each node actually provisioned. For example, the sample definitions file provided earlier contains three entries — one for three application servers, one for two data servers, and one for an arbiter — but the resulting instances file would contain six objects, one for each provisioned node.
All of the parameters making up each node’s definition — including those in the definitions and defaults files, those not specified in the configuration files that have default values, and internal ICM parameters — appear in its entry, along with the node’s machine name (constructed from the Label, Role, and Tag fields), its IP address, and its DNS name. The location of the subdirectory for that node in the deployment’s state directory is also included.
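An entry might look like the following (a hypothetical, abridged sketch for one AM node from the earlier sample; the actual field names and full set of fields in instances.json may differ):

{
    "Role": "AM",
    "Label": "ISC",
    "Tag": "TEST",
    "MachineName": "ISC-AM-TEST-0003",
    "IPAddress": "10.0.0.5",
    "DNSName": "ec2-10-0-0-5.us-west-1.compute.amazonaws.com",
    "StateDir": "./ICM-172807747058302123/ISC-AM-TEST"
}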
The State Directory and State Files
ICM writes several state files, including logs and generated scripts, to a unique subdirectory for use during the lifetime of the provisioned infrastructure. By default, this subdirectory is created in the current working directory, with a name of the form ICM-UID, for example:
./ICM-172807747058302123/
State files generated during provisioning include the Terraform overrides file and state file, terraform.tfvars and terraform.tfstate, as well as Terraform output, error and log files. A set of these Terraform-related files is created in a separate subdirectory for each node type definition in the definitions file, for example:
./ICM-172807747058302123/DCC-DM-TEST
./ICM-172807747058302123/DCC-AM-TEST
./ICM-172807747058302123/DCC-AR-TEST
Important:
ICM relies on the state files it creates for accurate, up-to-date information about the infrastructure it has provisioned; without them, the provisioning state may be difficult or impossible for ICM to reconstruct, resulting in errors and perhaps even the need for manual intervention. For this reason, InterSystems strongly recommends making sure the state directory is located on storage that is reliable and reliably accessible, with an appropriate backup mechanism in place. Note that if you use the -stateDir command line option with the icm provision command to override the default location, you must continue using the -stateDir option to specify that location in all subsequent provisioning commands.
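For example, overriding the state directory location at provisioning time and specifying it again in a later provisioning command might look like this (a sketch; the path is a placeholder, and icm unprovision stands in for any subsequent provisioning command):

icm provision -stateDir /data/icm/state
icm unprovision -stateDir /data/icm/state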
Log Files and Other ICM Files
ICM writes several log, output, and error files to the current working directory and within the state directory tree. The icm.log file in the current working directory records ICM’s informational, warning, and error messages. Other files within the state directory tree record the results of commands, including errors. For example, errors during the provisioning phase are typically recorded in the terraform.err file.
Important:
When an error occurs during an ICM operation, ICM displays a message directing you to the log file in which information about the error can be found. Before beginning an ICM deployment, familiarize yourself with the log files and their locations.
Docker Repositories
Each image deployed by ICM is pulled (downloaded) from a Docker repository. Many Docker images can be freely downloaded from public Docker repositories; private repositories such as the InterSystems repository, however, require a Docker login.
Logging Into a Docker Repository
As part of the deployment phase, ICM logs each node into the Docker repository you specify, using credentials supplied by you, before deploying the image specified by the DockerImage field in one of the configuration files or on the command line using the -image option. (The repository name must be included in the image specification.) You can include the following three fields in the defaults.json file to provide the needed information:
  DockerImage
  DockerUsername
  DockerPassword
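In defaults.json, these fields might look like the following (echoing the sample defaults file shown earlier; the credential values are placeholders):

{
    "DockerImage": "iscrepo/iris:stable",
    "DockerUsername": "xxxxxxxxxxxx",
    "DockerPassword": "xxxxxxxxxxxx"
}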
Setting Up a Docker Repository
You may want to set up a Docker repository so you can store InterSystems images (and your own images) locally rather than relying on the network availability for critical applications. For information on doing this, see Deploy a registry server in the Docker documentation.