InterSystems Cloud Manager Guide
Deploying on a Preexisting Cluster
ICM gives you the option of deploying containers on compute nodes you allocate yourself, whether cloud instances, virtual machines, or physical servers. The provisioning phase normally includes allocation and configuration subphases, but when the Provider field is set to PreExisting, ICM bypasses the allocation subphase and moves directly to configuration. There is no unprovisioning phase for a preexisting cluster.
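For example, a preexisting deployment is selected by setting the Provider field in the defaults file; a minimal sketch is shown below (the SSHUser value is illustrative, and the other required fields described in this document are omitted):
"Provider": "PreExisting",
"SSHUser": "icmuser"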
Compute Node Requirements
The preexisting compute nodes must satisfy the criteria listed in the following sections.
Operating System
Red Hat Enterprise Linux 7.3 or later (CentOS 7 should work as well).
SSH
ICM requires that SSH be installed and that the SSH daemon be running.
Additionally, a nonroot account must be specified in the SSHUser field in the defaults file; this account must have passwordless sudo access.
ICM can log in as SSHUser using either SSH keys or a password. Even if password login is enabled, ICM always attempts key-based login first.
If you've configured your machines with SSH keys, you must populate the SSHKey field in your configuration file with the corresponding key.
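For example, assuming the SSHKey field takes the path to the key file, the entry might look like the following (the path is purely illustrative):
"SSHKey": "/home/icmuser/.ssh/icm_rsa"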
During the configuration phase, ICM configures key-based SSH login and disables password login by default. If you do not want password login to be disabled, touch the following sentinel file in the home directory of the SSHUser account:
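# Run as SSHUser in that account's home directory; the sentinel file marks the
# disable-password-login step as already done, so ICM leaves password login enabled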
mkdir -p ICM
touch ICM/disable_password_login.done
If you've configured your machines with a password, specify it using the SSHPassword field in your configuration file. Because the password appears in plain text in this file, ICM treats these credentials as insecure.
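For example (the value shown is purely illustrative):
"SSHPassword": "changeme123"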
Enabling password login and specifying the SSHPassword field does not remove the requirement that ICM be able to carry out all postconfiguration operations via SSH.
Ports
To avoid conflicting with local security policies and because of variations among operating systems, ICM does not attempt to open any ports. The following table contains the default ports that must be opened to make use of various ICM features. As described in General Parameters, the ports are configurable, for example:
"JDBCGatewayPort": "62975"
If you change one or more from the defaults in this way, you must ensure that the ports you specify are open.
Port Protocol Service Notes
22 tcp SSH Required.
2376 tcp Docker (TLS mode) Required.
80 tcp Web Required to access the public Apache web server on nodes of role WS (web server).
53 tcp, udp DNS Required for Weave DNS.
6783 tcp, udp Weave Net Required for Overlay=Weave (default for all providers except PreExisting).
4040 tcp Weave Scope Required for Weave monitoring.
500, 4500 udp Rancher Required for Rancher monitoring.
7077, 7000, 8080, 8081, 6066, 7001, 7005 tcp Spark Required for web access to InterSystems IRIS+Spark containers. Different ports may be specified using the fields SparkMasterPort, SparkWorkerPort, SparkMasterWebUIPort, SparkWorkerWebUIPort, SparkRESTPort, SparkDriverPort, and SparkBlockManagerPort, respectively.
1972 tcp InterSystems IRIS™ Superserver Required. A different port may be specified using the SuperServerPort field.
57772 tcp InterSystems IRIS Web Server Required. A different port may be specified using the WebServerPort field.
2188 tcp InterSystems IRIS ISCAgent Required for mirroring. A different port may be specified using the ISCAgentPort field.
62972 tcp InterSystems IRIS JDBC Gateway Required to use the JDBC Gateway. A different port may be specified using the JDBCGatewayPort field.
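ICM does not open these ports for you. On Red Hat Enterprise Linux 7, for example, the firewall is typically managed with firewalld; the following sketch opens the default superserver port, assuming firewalld is in use and the port has not been overridden:
sudo firewall-cmd --permanent --add-port=1972/tcp
sudo firewall-cmd --reload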
Storage Volumes
ICM must be able to format, partition, and mount volumes designated for use by InterSystems IRIS. These volumes are mounted by whatever InterSystems IRIS container is currently running on the host machine.
The devices must appear beneath /dev. The names of the devices are determined by the following keys in your instances, definitions, or defaults files:
DataDeviceName
WIJDeviceName 
Journal1DeviceName
Journal2DeviceName
For example, the following entry corresponds to /dev/sdb:
"DataDeviceName": "sdb"
For all providers other than PreExisting, ICM attempts to assign a reasonable default; however, these values are both provider- and OS-specific and may need to be updated or overridden in your deployment.
ICM mounts the above devices according to the following keys in your instances, definitions, or defaults files, the defaults for which are shown in the following:
"DataMountPoint": "/intersys/data"
"WIJMountPoint": "/intersys/wij"
"Journal1MountPoint": "/intersys/journal1"
"Journal2MountPoint": "/intersys/journal2"
Definitions File
The main difference between PreExisting and the other providers is the contents of the definitions file, which contains exactly one entry per node, rather than one entry per role with a Count field to specify the number of nodes of that role. Each node is identified by its IP address or fully-qualified domain name. The fields shown in the following table are required for each node definition in a preexisting cluster deployment (along with other required fields described in other sections of this document):
Parameter Description Example
IPAddress IP address of the preexisting node. 172.16.110.9
DNSName Fully-qualified DNS name of the preexisting node; can be used in place of IPAddress. Must be resolvable by ICM and within the cluster. (For all other providers this is an output field, rather than an input field.) subnet2node21.mycointernal.com
SSHUser Nonroot user with passwordless sudo access. icmuser
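For example, a definitions file for a small preexisting cluster might look like the following sketch. The Role values are illustrative (WS, the web server role, is the only role named in this section), the addresses are taken from the examples above, and the other required fields described elsewhere in this document are omitted:
[
    {
        "Role": "WS",
        "IPAddress": "172.16.110.9"
    },
    {
        "Role": "WS",
        "DNSName": "subnet2node21.mycointernal.com"
    }
]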