
Using the InterSystems Kubernetes Operator

This document explains how to use the InterSystems Kubernetes Operator (IKO) to deploy InterSystems IRIS sharded clusters on Kubernetes platforms.

Why would I use Kubernetes?

Kubernetes is an open-source orchestration engine for automating deployment, scaling, and management of containerized workloads and services, and excels at orchestrating complex SaaS (software as a service) applications. You provision a Kubernetes-enabled cluster and tell Kubernetes the containerized services you want to deploy on it and the policies you want them to be governed by; Kubernetes transparently provides the needed resources in the most efficient way possible, repairs or restores the configuration when problems with those resources cause it to deviate from what you specified, and can scale automatically or on demand. In the simplest terms, Kubernetes deploys a multicontainer application in the configuration and at the scale you specify on any Kubernetes-enabled platform, and keeps the application operating exactly as you described it.

Why do I need the InterSystems Kubernetes Operator?

In Kubernetes, a resource is an endpoint that stores a collection of API objects of a certain kind, from which an instance of the resource can be created or deployed as an object on the cluster. For example, built-in resources include, among many others, pod (a set of running containers), service (a network service representing an application running on a set of pods), and persistentvolume (a directory containing persistent data, accessible to the containers in a pod).
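As a concrete illustration of a built-in resource, a minimal manifest for a pod object might look like this (the names and image are placeholders, not part of an IKO deployment):

```yaml
# Illustrative only: a minimal manifest for the built-in pod resource,
# creating one pod that runs a single container.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
```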

The InterSystems Kubernetes Operator (IKO) extends the resources built into the Kubernetes API with a custom resource called IrisCluster, representing an InterSystems IRIS cluster. An instance of this resource — that is, a sharded cluster, or a standalone InterSystems IRIS instance, optionally configured with application servers in a distributed cache cluster — can be deployed on any Kubernetes platform on which the IKO is installed and benefit from all the features of Kubernetes such as its services, role-based access control (RBAC), and so on.

The IrisCluster resource isn’t required to deploy InterSystems IRIS under Kubernetes. But because Kubernetes is application-independent, you would need to create custom definitions and scripts to handle all the needed configuration of the InterSystems IRIS instances or other components in the deployed containers, along with networking, persistent storage requirements, and so on. Installing the IKO automates these tasks. By putting together a few settings that define the cluster, for example the number of data and compute nodes, whether they should be mirrored, and where the Docker credentials needed to pull the container images are stored, you can easily deploy your InterSystems IRIS cluster exactly as you want it. The operator also adds InterSystems IRIS-specific cluster management capabilities to Kubernetes, enabling tasks like adding data or compute nodes, which you would otherwise have to do manually by interacting directly with the instances.
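As a preview of "a few settings that define the cluster", the following is a sketch of a minimal IrisCluster definition; the cluster name, secret names, and image tag are placeholders, and every field is covered in detail later in this document:

```yaml
# Sketch: a minimal IrisCluster definition for a sharded cluster of
# two mirrored data nodes. Replace the secret names and image tag
# with your own values.
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: my-cluster
spec:
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: iris-pull-secret
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:tag
      shards: 2
      mirrored: true
```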

Start with your use case

Before beginning your work with the IKO, examine your use case and answer these questions:

Detailed information about how the IrisCluster’s topology is defined is provided in Define the IrisCluster topology.

Plan your deployment

For the most beneficial results, it is important to fully plan the configuration of your sharded cluster, distributed cache cluster, or standalone instance and its data, including:

For detailed information about InterSystems IRIS sharded clusters, see Horizontally Scaling for Data Volume with Sharding in the Scalability Guide. If you are instead deploying a distributed cache cluster, there is important information to review in Horizontally Scaling for User Volume with Distributed Caching in the same guide.

Learn to speak Kubernetes

While it is possible to use the IKO if you have not already worked with Kubernetes, InterSystems recommends having or gaining a working familiarity with Kubernetes before deploying with the IKO.

Choose a platform and understand the interface

When you have selected the Kubernetes platform you will deploy on, create an account and familiarize yourself with the provided interface(s) to Kubernetes. For example, to use GKE on Google Cloud Platform, you can open a Google Cloud Shell terminal and file editor to use GCP’s gcloud command line interface and the Kubernetes kubectl command line interface. Bear in mind that the configuration of your Kubernetes environment should include access to the availability zones in which you want to deploy the sharded cluster.

The instructions in this document provide examples of gcloud commands.

Deploy a Kubernetes container cluster to host the IrisCluster

The Kubernetes cluster is the structure on which your containerized services are deployed and through which they are scaled and managed. The procedure for deploying a cluster varies to some degree among platforms. In planning and deploying your Kubernetes cluster, bear in mind the following considerations:

  • The IKO deploys one InterSystems IRIS or arbiter container (if a mirrored cluster) per Kubernetes pod, and attempts to deploy one pod per Kubernetes cluster node when possible. Ensure that

    • You are deploying the desired number of nodes to host the pods of your sharded cluster, including the needed distribution across zones if more than one zone is specified (see below).

    • The required compute and storage resources will be available to those nodes.

  • If your sharded or distributed cache cluster is to be mirrored and you plan to enforce zone antiaffinity using the preferredZones fields in the IrisCluster definition to deploy the members of each failover pair in separate zones and the arbiter in an additional zone, the container cluster must be deployed in three zones. For example, if you plan to use zone antiaffinity and are deploying the cluster using the gcloud command-line interface, you might select zones us-east1-b,c,d and create the container cluster with a command like this:

    $ gcloud container clusters create my-IrisCluster --node-locations us-east1-b,us-east1-c,us-east1-d

Upgrade Helm if necessary

Helm packages Kubernetes applications as charts, making it easy to install them on any Kubernetes platform. Because the IKO Helm chart requires Helm version 3, you must confirm that this is the version on your platform, which you can do by issuing the command helm version. If you need to upgrade Helm to version 3, you can use the curl script at https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3. For example:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6827  100  6827    0     0  74998      0 --:--:-- --:--:-- --:--:-- 75855
Helm v3.2.3 is available. Changing from version .
Downloading https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
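Before running the upgrade script, you can confirm whether the installed Helm is already version 3. The sketch below tests a hard-coded version string so it is self-contained; in a real session you would capture the string from helm version, as noted in the comment:

```shell
# Sketch: check that a Helm version string is v3. In practice you would
# capture it from the installed binary, e.g.:
#   ver=$(helm version --template '{{.Version}}')
ver="v3.2.3"
case "$ver" in
  v3.*) helm3=yes ;;
  *)    helm3=no ;;
esac
echo "helm3=$helm3"
```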

Download the IKO archive and upload the extracted contents to Kubernetes

Obtain the IKO archive file, for example iris_operator-3.0.0.79.0-unix.tar.gz, from the InterSystems Worldwide Response Center (WRC) download area and extract its contents. Next, upload the extracted directory, with the same base name as the archive file (for example iris_operator-3.0.0.79.0) to the Kubernetes platform. This directory contains the following:

  • The image/ directory contains an archive file containing the IKO image.

  • The chart/iris-operator directory contains the Helm chart for the operator.

  • The samples/ directory contains template .yaml and .cpf files, as described later in this procedure.

  • A README file, which recaps the steps needed to obtain and install the IKO.

Locate the IKO image

To install the IKO, Kubernetes must be able to download (docker pull) the IKO image. To enable this, you must provide Kubernetes with the registry, repository, and tag of the IKO image and the credentials it will use to authenticate to the registry. Generally, there are two approaches to downloading the image:

  • The IKO image is available from the InterSystems Container Registry (ICR). Using the InterSystems Container Registry lists the images currently available from the ICR, for example containers.intersystems.com/iris-operator:3.0.0.79.0, and explains how to obtain login credentials for the ICR.

  • You can use Docker commands to load the image from the image archive you extracted from the IKO archive in the previous step, then add it to the appropriate repository in your organization’s container registry, for example:

    $ docker load -i iris_operator-3.0.0.79.0/image/iris_operator-3.0.0.79.0-docker.tgz
    fd6fa224ea91: Loading layer [==================================================>] 3.031MB/3.031MB 
    32bd42e80893: Loading layer [==================================================>] 75.55MB/75.55MB 
    Loaded image: intersystems/iris-operator:3.0.0.79.0
    $ docker images
    REPOSITORY                  TAG         IMAGE ID      CREATED       SIZE 
    intersystems/iris-operator  3.0.0.79.0  9a3756aed423  3 months ago  77.3MB
    $ docker tag intersystems/iris-operator:3.0.0.79.0 kubernetes/intersystems-operator
    $ docker login docker.acme.com
    Username: pmartinez@acme.com
    Password: **********
    Login Succeeded
    $ docker push kubernetes/intersystems-operator
    The push refers to repository [docker.acme.com/kubernetes/intersystems-operator]
    4393194860cb: Pushed
    0011f6346dc8: Pushed
    340dc52ed535: Pushed
    latest: digest: sha256:f483e14a1c6b7a13bb7ec0ab1c69f4588da2c253e8765232 size: 77320
    

Create a secret for IKO image pull information

Kubernetes secrets let you securely and flexibly store and manage sensitive information such as credentials that you want to pass to Kubernetes. When you want Kubernetes to download an image, you can create a Kubernetes secret of type docker-registry containing the URL of the registry and the credentials needed to log into that registry to pull the images from it. Create such a secret for the IKO image you located in the previous step. For example, if you pushed the image to your own registry, you would use a kubectl command like the following to create the needed secret. The username and password in this case would be your credentials for authenticating to the registry (docker-email is optional).

$ kubectl create secret docker-registry acme-pull-secret \
  --docker-server=https://docker.acme.com --docker-username=***** \
  --docker-password='*****' --docker-email=**********

Update the values.yaml file

In the chart/iris-operator directory, ensure that the fields in the operator section near the top of the values.yaml file correctly describe the IKO image you want to pull to install the IKO, for example:

operator:
  registry: docker.acme.com/kubernetes
  repository: intersystems-operator
  tag: latest

Further down in the file, in the imagePullSecrets section, provide the name of the secret you created to hold the credentials for this registry, for example:

imagePullSecrets:
  name: acme-pull-secret

Install the IKO

Use Helm to install the operator on the Kubernetes cluster. For example, on GKE you would use the following command:

$ helm install intersystems iris_operator-3.0.0.79.0/chart/iris-operator
NAME: intersystems
LAST DEPLOYED: Mon Jun 15 16:43:21 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that InterSystems Kubernetes Operator has started, run:
  kubectl --namespace=default get deployments -l "release=intersystems, app=iris-operator"
$ kubectl get deployments -l "release=intersystems, app=iris-operator"
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
intersystems-iris-operator   0/1     1            0           30s

Define the IrisCluster topology

An IrisCluster can be deployed as a sharded cluster, a distributed cache cluster, or a standalone InterSystems IRIS instance. All three topologies can be mirrored. Compute nodes can optionally be added to a sharded cluster, and application servers are added to a standalone instance to create a distributed cache cluster. As described in detail in the following section, the topology deployed, including additional node types that can be added, is determined by the node definitions in the topology section of the definition file, as follows:
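For example, the topology section alone distinguishes the deployed configurations: omitting shards: makes the data: definition a standalone data server, and adding compute: then supplies its application servers. A sketch of a distributed cache cluster topology (image tags are placeholders):

```yaml
# Sketch: topology for a distributed cache cluster — a mirrored data
# server (no shards: field) with two application servers.
topology:
  data:
    image: containers.intersystems.com/intersystems/iris:tag
    mirrored: true
  compute:
    image: containers.intersystems.com/intersystems/iris:tag
    replicas: 2
```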

Create the IrisCluster definition file

You are now ready to create your IrisCluster definition YAML file. The following sections provide:

  • A listing of all fields defined in the IrisCluster custom resource definition (CRD) that can be used in a definition file. Each field is linked to the information you need to use it. Required fields are indicated as such.

  • The contents of the iris-sample.yaml file in the samples/ directory, which is a customizable sample IrisCluster definition file containing commonly used fields. To assist you in selecting the ones you want to include, most of the contents are commented out, with a brief explanation and with each field linked to the information about it, as in the CRD.

  • An explanation of each field, suggestions for customizing it, and instructions for any actions you need to take before using it, including creating needed Kubernetes objects. For example, the section about the licenseKeySecret field explains that you must create a Kubernetes secret containing the InterSystems IRIS license key and then specify that secret by name in the field.

Note:

For each section of the definition, there are numerous other Kubernetes fields that can be included; this document discusses only those specific to or required for an IrisCluster definition.

Review the IrisCluster custom resource definition (CRD)

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: name
spec:
  licenseKeySecret:
    name: name
  configSource:
    name: name
  imagePullSecrets:
  - name: name
  - ...
  storageClassName: name
  updateStrategy:
    type: {RollingUpdate|OnDelete}
  volumeClaimTemplates:
  - metadata:
      name: nameN
    spec:
      accessModes:
      - {ReadWriteOnce|ReadOnlyMany|ReadWriteMany}
      resources:
        requests:
          storage: size
      storageClassName: nameN
  - ...
  serviceTemplate:
    spec:
      type: LoadBalancer
  topology:
    data:
    [common IRIS node fields, optional for all node types]
      image: registry/repository/image:tag
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
        - ...
      podTemplate:
        core.PodTemplateSpec
    [end common IRIS node fields]
      shards: N
      mirrored: {true|false}
      storage{DB|WIJ|Journal1|Journal2}:
        resources:
          requests:
            storage: size
      volumeMounts:
      - name: volumeClaimTemplateName
        mountPath: pathN
      - ...
    compute:
      see common IRIS node fields
      replicas: N
      storage{DB|WIJ|Journal1|Journal2}:
        resources:
          requests:
            storage: size
      volumeMounts:
      - name: volumeClaimTemplateName
        mountPath: pathN
      - ...
    arbiter:
      see common IRIS node fields
    webgateway:
      see common IRIS node fields
      type: {apache|apache-lockeddown|nginx}
      replicas: N
      applicationPaths:
        - pathN
        - ...
      alternativeServers: {FailOver|LoadBalancing}
      storageDB:
        resources:
          requests:
            storage: spec
    sam:
      see common IRIS node fields
      storage{SAM|Grafana}:
        resources:
          requests:
            storage: spec
    iam:
      see common IRIS node fields
      storagePostgres:
        resources:
          requests:
            storage: spec

Review the sample IrisCluster definition file

## uncommented fields deploy one InterSystems IRIS data server

## WARNING: default password is not reset, to do so include
## configSource below 

## include commented fields for purposes described; see documentation at 
## https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=AIKO_clusterdef_sample

## update image tags (from "tag:") before using; see list of available images at 
## https://docs.intersystems.com/components/csp/docbook/Doc.View.cls?KEY=PAGE_containerregistry

   apiVersion: intersystems.com/v1alpha1
   kind: IrisCluster
   metadata:
     name: sample
   spec:

## provide InterSystems IRIS license key if required
#     licenseKeySecret:
#       name: iris-key-secret
  
## specify files used to customize the configurations of
## InterSystems IRIS nodes, including passwordHash parameter 
## to set the default password, securing InterSystems IRIS                                      
#     configSource:
#       name: iris-cpf
   
## provide repository credentials if required to pull images
#     imagePullSecrets:
#       - name: iris-pull-secret

## specify platform-specific storage class used to allocate storage
## volumes (default: use platform-defined class)
#     storageClassName: iris-ssd-storageclass

## select update strategy (default: RollingUpdate)
#     updateStrategy:
#       type: RollingUpdate
   
## create external IP address(es) for the cluster
## ("type: LoadBalancer" is required)
#     serviceTemplate:
#       spec:
#         type: LoadBalancer  

## topology: defines node types to be deployed; only data: is required

     topology:
       data:
         image: containers.intersystems.com/intersystems/iris:tag

## deploy a sharded cluster of data nodes and (optionally) compute
## nodes; if not included, "data:" definition in "topology:" deploys 
## a single data server, "compute:" adds application servers
#         shards: 2
 
## deploy mirrored data nodes or data server (default: nonmirrored)
#         mirrored: true
   
## override default size of storage volumes for data nodes (additional 
## volume names: storageWIJ, storageJournal1, storageJournal2); can 
## also be included in compute: definition
#         storageDB:
#           resources:
#             requests:
#               storage: 10Gi
   
## constrain nodes to platform-specific availability zones (can be 
## included in other node definitions)
#         preferredZones:
#           - us-east1-a
#           - us-east1-b

## deploy compute nodes, or application servers if "shards:" 
## not included; use "replicas:" to specify how many
#       compute:
#         image: containers.intersystems.com/intersystems/iris:tag
#         replicas: 2

## deploy arbiter for mirrored data nodes (or data server)
#       arbiter:
#         image: containers.intersystems.com/intersystems/arbiter:tag

## deploy webgateway (web server) nodes
#       webgateway:  
#         image: containers.intersystems.com/intersystems/webgateway:tag
#         replicas: 2
#         applicationPaths:
#           - /external
#           - /internal   
#         alternativeServers: LoadBalancing

## deploy System Alerting and Monitoring (SAM) with InterSystems IRIS
#       sam:
#         image: containers.intersystems.com/intersystems/sam:tag

## deploy InterSystems API Manager (IAM) with InterSystems IRIS
#       iam:
#         image: containers.intersystems.com/intersystems/iam:tag

apiVersion: Define the IrisCluster

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: cluster-name
spec:

Required

The first four fields, which identify the object you are defining, are required by Kubernetes.

Change the value of the name field in metadata to the name you want to give the IrisCluster.

The spec section contains nested fields, required and optional, that specify an IrisCluster object.

licenseKeySecret: Provide a secret containing the InterSystems IRIS license key

  licenseKeySecret:
    name: name

Optional

The licenseKeySecret field specifies a Kubernetes secret containing the InterSystems IRIS license key to be activated in all of the InterSystems IRIS containers in the cluster.

Upload the sharding-enabled license key for the InterSystems IRIS images in your sharded cluster, and create a Kubernetes secret of type generic to contain the key, allowing it to be mounted on a temporary file system within the container, for example:

$ kubectl create secret generic iris-key-secret --from-file=iris.key

Finally, update the name field in licenseKeySecret with the name of the secret you created. However, if you did not create a secret, for example because you are deploying from an InterSystems IRIS Community Edition image, omit this field.

If you are not deploying a sharded cluster but rather a configuration with a single data node (see Define the IrisCluster topology), the license you use does not have to be sharding-enabled.

configSource: Create configuration files and provide a config map for them

  configSource:
    name: name

Optional

The configSource field specifies a Kubernetes config map containing one or all of the following:

  • A configuration merge file called common.cpf used to customize the configurations of InterSystems IRIS cluster nodes (data and compute) when deployed.

    Important:

    For effective security, the common.cpf file (or both the data.cpf and compute.cpf files, below) should include the passwordHash parameter to set the default InterSystems IRIS password.

  • A configuration merge file called data.cpf used to further customize data nodes only; settings in this file override the same settings in common.cpf.

  • A configuration merge file called compute.cpf used to further customize compute nodes only; settings in this file override the same settings in common.cpf.

  • An InterSystems Web Gateway configuration file CSP.ini to be installed on Web Gateway nodes when deployed.

Important:

All configuration merge (.cpf) files are optional, and they can be included in any combination. At least one, however, should be included so that the default InterSystems IRIS password can be reset.

Some of the configuration performed by the IKO, for example mirror configuration, uses settings that can be specified in merge files; any settings specified by the IKO override the same parameters set in user-provided .cpf files.

Kubernetes config maps keep your containerized applications portable by separating configuration artifacts, such as these files, from image content. To use configuration merge files to customize the configurations of the IrisCluster’s InterSystems IRIS nodes (data and compute), provide your own Web Gateway configuration file for the Web Gateway nodes, or both, you should:

Prepare the configuration merge files

The configuration parameter file, also called the CPF, defines the configuration of an InterSystems IRIS instance. On startup, InterSystems IRIS reads the CPF to obtain the values for most of its settings. The configuration merge feature allows you to specify a merge file that overwrites the values of one or more settings in the default CPF that comes with an instance when it is deployed. For details, see Using Configuration Merge to Deploy Customized InterSystems IRIS Instances in Running InterSystems Products in Containers, as well as Introduction to the Configuration Parameter File in the Configuration Parameter File Reference.

To use configuration merge when deploying your IrisCluster, you can customize the template common.cpf, data.cpf, and compute.cpf files provided in the samples/ directory with the CPF settings you want to apply to all InterSystems IRIS nodes, the data nodes, and the compute nodes (if included), respectively. The provided common.cpf contains only a sample passwordHash setting, described in the next section; the other templates contain only the SystemMode setting, which displays text on the InterSystems IRIS Management Portal.

For numerous helpful examples of parameters you might customize for several purposes, see Useful Merge Parameters for Automated Deployment in Running InterSystems Products in Containers. For example, the data nodes in a sharded cluster must be configured to allocate a database cache of the appropriate size; Deploy Sharded Clusters in “Useful Parameters” illustrates how you would add the [config] section globals parameter to the data.cpf file to configure the database caches of the data nodes at the size you calculated. This is shown in the following, with a value of 20 GB (the globals setting is in MB):

[StartUp]
SystemMode=my-IrisCluster
[config]
globals= 0,0,20480,0,0,0
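The third comma-separated value of globals is the 8 KB database cache size in MB, so the conversion from the calculated 20 GB is simple arithmetic:

```shell
# Convert a 20 GB database cache size to the MB value used in the
# third position of the globals parameter (1 GB = 1024 MB).
cache_mb=$(( 20 * 1024 ))
echo "globals=0,0,${cache_mb},0,0,0"
```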

Set the default InterSystems IRIS password

InterSystems IRIS is installed with several predefined user accounts, the initial password for which is SYS. For effective security, it is important that this default password be changed immediately upon deployment of all InterSystems IRIS containers. Although the configuration merge files are optional, you should include a common.cpf containing the passwordHash parameter, as illustrated in the provided template common.cpf, to reset the default password on all InterSystems IRIS nodes, for example:

[Startup]
PasswordHash=dd0874dc346d23679ed1b49dd9f48baae82b9062,10000,SHA512

InterSystems publishes through the ICR the intersystems/passwordhash image, from which you can create a container that generates the hash for you; for more information about this, the passwordHash parameter, and the default password, see Authentication and Passwords in Running InterSystems Products in Containers.

If you do not provide a common.cpf, you should provide both a data.cpf and a compute.cpf, even if you don’t intend to deploy compute nodes or application servers, as you may add them later.

Prepare the Web Gateway configuration file

As described in Configure the Web Gateway in the Web Gateway Configuration Guide, the Web Gateway’s configuration is managed using the Web Gateway Management pages, but contained in the CSP.ini file (much as an InterSystems IRIS instance’s configuration is contained in the iris.cpf file).

If you do not include a CSP.ini file in your config map, the IKO will generate one appropriate for the cluster and install it on every Web Gateway node, configuring the Web Gateway as follows:

  • If compute nodes are included in the deployment, all data and compute nodes are added to the pool of remote servers across which connections are distributed.

  • If no compute nodes are included, all data nodes are added to the remote server pool. (If there is a single data node, it is the only remote server in the pool.)

  • If the data nodes are mirrored, the Web Gateway configuration is mirror aware, that is, connections are always to the mirror primary.

  • The password required to connect to the data node InterSystems IRIS instances (plaintext or non-cryptographic hash) will be left blank for users to configure in the Web Gateway portal.

    Important:

    The Web Gateway uses the CSPSystem predefined account to authenticate to the InterSystems IRIS instances in its remote server pool. For information about the default password for this account, and about changing it (which you need to do on all data and compute nodes in your IrisCluster deployment, as well as configuring the new password on all of the Web Gateway nodes), see Authentication and Passwords in Running InterSystems Products in Containers, Web Gateway Security Parameters in the Web Gateway Configuration Guide, and Change the Authentication Mechanism for an Application in Securing Your Instance.

If you have experience with the Web Gateway, you can use a CSP.ini from an existing installation as a template and prepare one to be installed on your Web Gateway nodes by the IKO. You can then further customize their Web Gateway configurations as needed after deployment.

Important:

If you provide your own CSP.ini file with remote server password already set, bear in mind that it may become exposed in deployment.

The CSP.ini file generated by the IKO does not include an SSL/TLS connection; this can be configured in the Web Gateway Management pages on the individual nodes.

Create a config map for the configuration files

Create a Kubernetes config map specifying the files, using a command like this:

$ kubectl create cm iris-cpf --from-file common.cpf --from-file data.cpf \
    --from-file compute.cpf --from-file CSP.ini

You can then specify iris-cpf as the value for configSource. (If you did not create a config map, do not specify a value for this field.)

imagePullSecrets: Provide a secret containing image pull information

  imagePullSecrets:
  - name: name
  - ...

Optional

The imagePullSecrets field specifies one or more Kubernetes secrets containing the URL of the registry from which images are to be pulled and the credentials required for access.

Kubernetes secrets let you securely and flexibly store and manage sensitive information such as credentials that you want to pass to Kubernetes. To enable Kubernetes to download an image from a secure registry, you can create a Kubernetes secret of type docker-registry containing the URL of the registry and the credentials needed to log into that registry.

Create another Kubernetes secret, like the one you created for the IKO image pull information, for the InterSystems IRIS image and others you intend to deploy, such as arbiter, InterSystems Web Gateway, and so on. For example, if Kubernetes will be pulling these images from the InterSystems Container Registry (ICR) as described in Locate the IKO image, you would use a command like the one shown below. The username and password in this case would be your ICR docker credentials, which you can obtain as described in Authenticating to the ICR in Using the InterSystems Container Registry (docker-email is optional).

$ kubectl create secret docker-registry intersystems-pull-secret \
  --docker-server=https://containers.intersystems.com --docker-username=***** \
  --docker-password='*****' --docker-email=**********

Finally, update the name field in imagePullSecrets with the name of the secret you created; if you did not create one, do not specify a value for this field. If you create multiple image pull secrets, you can specify multiple secret names in the imagePullSecrets field.

If you include multiple secrets in imagePullSecrets because the images specified in multiple image fields in the definition are in different registries, Kubernetes uses the registry URL in each image field to choose the corresponding image pull secret. If you specify just one secret, it is the default image pull secret for all image pulls in the definition.
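For instance, a definition pulling InterSystems images from the ICR and other images from a private registry might list both secrets; the second secret name below is hypothetical:

```yaml
# Sketch: two image pull secrets. Kubernetes matches the registry URL
# in each image field to the corresponding secret; acme-pull-secret is
# a hypothetical secret for a private registry.
imagePullSecrets:
  - name: intersystems-pull-secret
  - name: acme-pull-secret
```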

storageClassName: Create a default class for persistent storage

  storageClassName: name

Optional

Specifies the Kubernetes storage class to use by default when requesting persistent volumes.

Kubernetes provides the persistent storage needed by containerized programs (in this case, the data, compute, and Web Gateway nodes) in the form of persistent volumesOpens in a new window; a persistent volume claimOpens in a new window is a specification for a persistent volume, including such characteristics as access modeOpens in a new window, sizeOpens in a new window, and storage classOpens in a new window , that can be specified when requesting one or more of them.

Storage classes provide a way for administrators to describe the types of storage offered in a given Kubernetes environment. For example, different classes (sometimes called “profiles” on other provisioning and deployment platforms) might map to quality-of-service levels, backup policies, or arbitrary policies germane to the deployments involved. For a number of reasons, you might want to specify a default storage class to be used for persistent volumes, including the predefined volumes deployed with data, compute, and Web Gateway nodes in an IrisCluster. You can specify an existing storage class, or a new one you define (the Kubernetes documentationOpens in a new window provides full instructions for doing so) prior to deploying the cluster, as the default by specifying its name in the storageClassName field.

Important:

The volumeBindingMode: WaitForFirstConsumerOpens in a new window setting (see the Kubernetes documentationOpens in a new window) is required for the correct operation of the IKO.

If you want to use more than one storage class in your cluster, you can override the default class for one or more of the predefined volumes for a specific node type or when you specify additional volumes using the volumeMounts field. If you do not specify a default, the default storage characteristics are specific to the Kubernetes platform you are using; consult the platform documentation for details.
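As a sketch of defining your own class, the following storage class satisfies the volumeBindingMode requirement; the class name is illustrative, and the provisioner must be replaced with the one appropriate to your platform:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iris-storage                        # referenced by storageClassName in the IrisCluster definition
provisioner: kubernetes.io/no-provisioner   # substitute your platform's provisioner
volumeBindingMode: WaitForFirstConsumer     # required for correct operation of the IKO
```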

updateStrategy: Select a Kubernetes update strategy

  updateStrategy:
    type: RollingUpdate

Optional

Specifies the update strategyOpens in a new window that Kubernetes uses to update the stateful setsOpens in a new window in the deployment. The value can be either RollingUpdate (the default) or OnDelete. This setting can be overridden by using the updateStrategy field within a node type definition in the topology section to specify the update strategy for that type of node only.

volumeClaimTemplates: Request additional storage volumes

  volumeClaimTemplates:
  - metadata:
      name: nameN
    spec:
      accessModes:
      - {ReadWriteOnce|ReadOnlyMany|ReadWriteMany}
      resources:
        requests:
          storage: size
      storageClassName: nameN
  - ...

Optional

Specifies one or more templates to be used to request additional persistent volumesOpens in a new window for data or compute nodes using the volumeMounts field. Only the fields of the persistent volume claim specOpens in a new window that are shown here — accessModes, resources, and storageClassName — are needed to define the template, but others can be included.

serviceTemplate: Create external IP addresses for the cluster

  serviceTemplate:
    spec:
      type: LoadBalancer

Optional

Provides access to the IrisCluster deployment by defining one or more Kubernetes servicesOpens in a new window, which expose an application running on a set of pods by assigning an external IP address. At a minimum, the serviceTemplate field (the type field should always have the value LoadBalancer) creates an external IP address assigned to one of the pods in the first stateful setOpens in a new window managing data node pods; this address is used with the InterSystems IRIS superserver and web server ports (1972 and 52773, respectively) for all connections to the cluster for data ingestion, queries, and other purposes. For example, it can be used to access the data node 1 Management Portal, as described in Connect to the IrisCluster. The data node pod associated with the external IP address is selected as follows:

  • If the data nodes (or a single data node representing a standalone instance) are not mirrored — pod 0 in stateful set 0, for example my-iriscluster-data-0.

  • If the data nodes are mirrored and an arbiter node is not included — pod 0 in stateful set 0, for example my-iriscluster-data-0-0.

  • If the data nodes are mirrored and an arbiter node is included in the deployment, the pod within stateful set 0 that contains the current primary, for example either my-iriscluster-data-0-0 or my-iriscluster-data-0-1.

Other services and external IP addresses created, if applicable, represent the first pod in the first stateful set managing Web Gateway pods, SAM pods, and IAM pods, if these nodes are included in the IrisCluster; see Connect to the IrisCluster for connection information.

topology: Define the cluster nodes

  topology:

Required

Specifies the details of each type of cluster node to be deployed. As described in Define the IrisCluster topology, the IrisCluster must have one or more data nodes, so the data section, defining the data nodes, is required; all other node types are optional.

data: Define sharded cluster data nodes or standalone data server

    data:
      image: registry/repository/image:tag
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
        - ...
      podTemplate:
        core.PodTemplateSpec
      shards: N
      mirrored: {true|false}
      storage{DB|WIJ|Journal1|Journal2}:
        resources:
          requests:
            storage: spec
        mountPath: path
      volumeMounts:
      - name: volumeClaimTemplateN
        mountPath: pathN
      - ...        

Required

The data section defines the IrisCluster’s data nodes, of which there must be at least one. Only the image field is required within the data section.

image:

      image: containers.intersystems.com/intersystems/iris:2021.1.0.205.0

Required

The image: field specifies the URL (registry, repository, image name, and tag) of the InterSystems IRIS image from which to deploy data node containers. The example above specifies an InterSystems IRIS image from the InterSystems Container Registry (ICR). The registry credentials in the secret specified by the imagePullSecrets field are used for access to the registry.

Important:

In InterSystems IRIS containers deployed from the secure iris-lockeddown image (see Locked Down InterSystems IRIS ContainerOpens in a new window in Running InterSystems Products in Containers), the private web server is disabled, which also disables the Management Portal. To enable the Management Portal for data nodes deployed from this image, so you can use it to connect to data node 1 (or the single data server) as described in Connect to the IrisCluster, add the webserverOpens in a new window property with the value 1 (webserver=1) to the data.cpf configuration merge file described in configSource: Create configuration files and a config map for them.
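As an illustration only, such a merge file might look like the following; this sketch assumes the property belongs in the [Startup] section, so verify the section and property name in the Configuration Parameter File reference for your version:

```
[Startup]
WebServer=1
```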

updateStrategy:

      updateStrategy:
        type: {RollingUpdate|OnDelete}

Optional

Overrides the top level updateStrategy setting to specify the Kubernetes update strategyOpens in a new window used for the stateful setsOpens in a new window representing this node type only. The value can be either RollingUpdate (the default) or OnDelete.

preferredZones:

      preferredZones:
        - zoneN
        - ...

Optional

Specifies the zone or zones in which data nodes should be deployed, and is typically used as follows:

  • If mirrored is set to true and at least two zones are specified, Kubernetes is discouraged (but not prevented) from deploying both members of a failover pair in the same zone, which maximizes the chances that at least one is available, and is therefore the best practice for high availability. Bear the following in mind, however:

    • Deploying the members of a failover pair in separate zones is likely to slightly increase latency in the synchronous communication between them.

    • Specifying multiple zones for the data nodes means that all of the primaries might not be deployed in the same zone, resulting in slightly increased latency in communication between the data nodes.

    • Specifying multiple zones for data nodes generally makes it impossible to guarantee that nodes of other types (compute, Web Gateway, SAM, IAM) are in the same zone as all of the data node primaries at any given time regardless of your use of preferredZones in their definitions, increasing latency in those connections as well.

    Under most circumstances these interzone latency effects will be negligible, but with some demanding workloads involving high message or query volume, performance may be affected. If after researching the issue of interzone connections on your Kubernetes platform and testing your application thoroughly you are concerned about this performance impact, consider specifying a single zone for your mirrored data nodes.

    Regardless of the zones you specify here, you should use the preferredZones field in the arbiter definition to deploy the arbiter in a separate zone of its own, which also helps optimize mirror availabilityOpens in a new window.

  • The data nodes of an unmirrored cluster are typically deployed in the same zone to minimize latency. If mirrored: false and your Kubernetes cluster includes multiple zones, you can use preferredZones to follow this practice by specifying a single zone in which to deploy the data nodes.

  • The value of the preferredZones: field in the compute definition, if included, should ensure that the compute nodes are deployed in the same zone or zones as the data nodes to minimize latency (see Plan Compute NodesOpens in a new window in the Scalability Guide).

Kubernetes attempts to deploy in the specified zones, but if this is not possible, deployment proceeds rather than failing.
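For example, to encourage Kubernetes to spread mirrored data nodes across two zones, you might specify the following (the zone names are placeholders for your provider's actual zones):

```yaml
      preferredZones:
        - us-east1-b
        - us-east1-c
```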

podTemplate:

      podTemplate:
        spec:
          args:        

Optional

Specifies overrides and additions to the default pod templateOpens in a new window applied to the data node pods. Because containers are run only within a pod on Kubernetes, a pod template can specify the fields that define numerous Kubernetes entities, including the fields defining a containerOpens in a new window, which makes it useful for many purposes. Two examples of using container fields follow:

  • Prevent InterSystems IRIS from starting up with the container.

    You can use the args field in the pod template to specify options to the InterSystems IRIS entrypoint application, iris-mainOpens in a new window. For example, if there is something wrong with the InterSystems IRIS configuration which prevents startup from succeeding, iris-main exits, causing the pod to go into a restart loop, which makes it difficult or impossible to diagnose the problem. You can prevent the instance from starting by adding the iris-main option --up false as follows:

    podTemplate:
      spec:
        args:
        - --up
        - "false"

    When you do this, the readiness probeOpens in a new window will not be satisfied, and the deployment will be paused indefinitely:

    $ kubectl get pods
    NAME            READY   STATUS    RESTARTS   AGE
    iris-data-0     0/1     Running   0          32s

    After addressing the problem, you can either

    • Manually start the instance with a command like the following:

      $ kubectl exec -it iris-data-0 -- iris start IRIS
    • Remove the podTemplate override and redeploy the pod.

  • Override the default liveness probe and readiness probeOpens in a new window.

    If you wanted to replace the default liveness probe and readiness probe for data nodes (or another node type), you could specify something like the following in the applicable pod template:

    podTemplate:
      spec:
        livenessProbe:
          exec:
            command:
            - /bin/true
        readinessProbe:
          httpGet:
            path: /csp/user/cache_status.cxw
            port: 52773

shards:

      shards: N

Optional

Specifies the number of data nodes to be deployed as a sharded cluster. If the shards field is omitted, a single standalone instance of InterSystems IRIS is deployed, optionally as the data server in a distributed cache cluster; for more information, see Define the IrisCluster topology.

Data nodes can be added to the deployed cluster by increasing this setting and reapplying the definition (as described in Modifying the IrisCluster), but the setting cannot be decreased.

mirrored:

      mirrored: {true|false}

Optional

Determines whether the data nodes in the deployment are mirrored.

If the value of mirrored: is true, two mirrored instances are deployed for each data node specified by the shards field. For example, if shards: 4 and mirrored: true, eight data node instances are deployed as four failover pairs, creating a mirrored sharded cluster with four data nodes. If mirrored: is true when shards is omitted, two mirrored instances are deployed as the standalone InterSystems IRIS instance, which can optionally be the mirrored data server of a distributed cache cluster; for details, see Define the IrisCluster topology.

The default for mirrored is false.

Important:

A deployed cluster cannot be changed from unmirrored to mirrored, or mirrored to unmirrored, by changing this setting and reapplying the definition (as described in Modifying the IrisCluster).

storage*:

      storage{DB|WIJ|Journal1|Journal2}:
        resources:
          requests:
            storage: size

Optional

Specify a custom size for one or more of the four predefined persistent volumesOpens in a new window deployed with each data node, as follows:

The predefined volumes are mounted in /irissys inside the container, and are 2 GB by default. When including an override as illustrated, size (the value of the storage field) can be specified in any unit from kilobytes to exabytesOpens in a new window.

The amount of data storage to be mounted on sharded cluster data nodes is determined during the cluster planning process and should include a comfortable margin for the future growth of your workload. Use a storageDB size override, additional volumes specified by the volumeMounts field, or both to ensure sufficient storage is available to each data node.

The same four fields can be used to modify the same predefined volumes in the compute node definition (compute), which have the same default size of 2 GB. The data storage for sharded cluster compute nodes (or distributed cache cluster application servers), as specified in the storageDB field in the compute section, should be kept to a bare minimumOpens in a new window to conserve resources, as compute nodes do not store data; you may even want to create a separate storage class for compute nodes and specify it by adding the storageClassName: field to the storageDB storage override.

In the Web Gateway node definition (webgateway), you can override the size of the only predefined volume, the storageDB volume; the default size of this volume on Web Gateway nodes is 32 MB.
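As a sketch, a storageDB override that enlarges the data volume and selects a non-default storage class might look like this; the size and class name are illustrative:

```yaml
      storageDB:
        resources:
          requests:
            storage: 100Gi            # custom size replacing the 2 GB default
        storageClassName: iris-fast   # illustrative class name overriding the default
```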

volumeMounts:

      volumeMounts:
      - name: volumeClaimTemplateN
        mountPath: pathN
      - ...

Optional

Specifies one or more volumes to be deployed with each data node, in addition to the predefined volumes (see storage*). Each volume is defined by the name of one of the volume claim templates specified in the volumeClaimTemplates field and a mountPath, which is a direct reference to a location in the container’s filesystem, visible to the InterSystems IRIS instance, on which to mount the volume.

The volumeMounts field can also be used to specify additional volumes in the compute node definition (compute), although this is less common.
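To illustrate how the two fields work together, the following sketch (with illustrative names and size) defines a volume claim template at the top level of the definition and mounts it on each data node:

```yaml
  volumeClaimTemplates:
  - metadata:
      name: extra-data            # illustrative template name, referenced below
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2021.1.0.205.0
      volumeMounts:
      - name: extra-data          # must match the volume claim template name
        mountPath: /extra-data    # location in the container's filesystem
```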

compute: Define sharded cluster compute nodes or application servers

    compute:
      image: containers.intersystems.com/intersystems/iris:2021.1.0.205
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
        - ...
      podTemplate:
        core.PodTemplateSpec
      replicas: N
      storage{DB|WIJ|Journal1|Journal2}:
        resources:
          requests:
            storage: spec
      volumeMounts:
      - name: volumeClaimTemplateN
        mountPath: pathN
      - ...

Optional

The compute section defines the IrisCluster’s compute nodes. As described in Define the IrisCluster topology, if the IrisCluster will be deployed as a sharded clusterOpens in a new window (because the shards field is included in the data section), you can use compute to add compute nodesOpens in a new window to the cluster, but if a single data node will be deployed as a standalone instance because shards is omitted, defining compute nodes adds application servers, creating a distributed cache clusterOpens in a new window.

If the compute section is included, only the image and replicas fields are required. For information about the remaining compute fields, see the data section.

image

image: containers.intersystems.com/intersystems/iris:2021.1.0.205

Required in optional compute: section

Compute nodes are deployed from the same InterSystems IRIS image as data nodes; an example is shown.

replicas:

         replicas: N

Required in optional compute: section

Specifies the number of identical compute nodes to deploy. In a sharded cluster, this should be a multiple of the number of data nodes specified by shards (for more information, see Plan Compute NodesOpens in a new window in the Scalability Guide).

Compute nodes can be added to or removed from the deployed IrisCluster by changing this setting and reapplying the definition (as described in Modifying the IrisCluster).

The replicas field also appears in the webgateway section, where it specifies the number of InterSystems Web Gateway (web server) nodes to deploy.

arbiter: Define arbiter for mirrored data nodes

    arbiter:
      image: containers.intersystems.com/intersystems/arbiter:2021.1.0.205.0
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
      podTemplate:
        core.PodTemplateSpec 

Optional

The arbiter section defines an arbiter node to be deployed with a mirrored sharded cluster or data server. For general information about the arbiter fields, see the data section.

webgateway: Define web server nodes

    webgateway:
      image: containers.intersystems.com/intersystems/webgateway:2021.1.0.205.0
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
      podTemplate:
        core.PodTemplateSpec
      storageDB:
        resources:
          requests:
            storage: spec
      type: {apache|apache-lockeddown|nginx}
      replicas: N
      applicationPaths:
        - pathN
        - ...

Optional

The webgateway section defines the Web Gateway nodes to be deployed. Each Web Gateway node includes the InterSystems Web GatewayOpens in a new window, which provides the communications layer between the hosting web server and InterSystems IRIS for web applications, and an Apache or Nginx web server. Multiple Web Gateway nodes can be deployed as a web server tier for a sharded cluster, a distributed cache cluster, or a standalone instance, mirrored or unmirrored.

If the webgateway section is included, only the image and replicas fields are required. For information about the remaining webgateway fields not discussed here, see the data section, noting the following webgateway-specific information:

  • Of the storage* fields listed for specifying storage overrides for the predefined volumes for data and compute nodes, only storageDB can be used in the webgateway definition; the default size of the Web Gateway predefined data volume (used for storing configuration and log files) is 32 MB.

  • Depending on your circumstances, you may want to use preferredZones to locate your web server tier relative to the data and compute nodes they connect to.

As previously described, the serviceTemplate field creates one or more Kubernetes servicesOpens in a new window to expose the IrisCluster to the network through external IP addresses. If you include Web Gateway nodes, an external IP address representing the first pod in the first stateful set managing Web Gateway pods is created. This IP address can be used with the URL listed in Connect to the IrisCluster to connect to the Web Gateway Management pages on that node and review the Web Gateway configuration, which is the same for all Web Gateway nodes, whether the configuration was provided by the IKO or supplied by you as described in Prepare the Web Gateway configuration file.

Important:

At this time, the IKO does not automatically expose all of the Web Gateway nodes to the network. To enable load balancing of application connections across the web server tier, you can manually define a serviceOpens in a new window exposing the nodes, which on some platforms can include a load balancerOpens in a new window.
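A minimal sketch of such a service follows; the service name and label selector are illustrative (inspect the labels on your Web Gateway pods with kubectl get pods --show-labels and adjust accordingly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webgateway-lb
spec:
  type: LoadBalancer    # provisions an external load balancer on supporting platforms
  selector:
    app: webgateway     # illustrative label; match your Web Gateway pods' actual labels
  ports:
  - port: 80            # use 52773 for type apache-lockeddown
    targetPort: 80
```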

image

image: containers.intersystems.com/intersystems/webgateway:2021.1.0.205.0

Required in optional webgateway: section

Web Gateway nodes are deployed from an InterSystems webgateway image, an example of which is shown.

type:

      type: {apache|apache-lockeddown|nginx}

Optional

Specifies deployment of an Apache web server, an Apache web server with locked-down security, or an Nginx web server; the default is apache.

replicas:

      replicas: N

Required in optional webgateway: section

Specifies the number of identical Web Gateway nodes to deploy.

Important:

While Web Gateway nodes can be added to or removed from the deployed IrisCluster by changing this setting and reapplying the definition (as described in Modifying the IrisCluster), the IKO cannot automatically regenerate the remote server pools when you do. You can, however, scale the web server tier by setting replicas to zero, deleting the Web Gateway persistent volume claims (see Remove the IrisCluster), and then setting replicas to a non-zero value and reapplying the definition.

The replicas field also appears in the compute section, where it specifies the number of compute nodes to deploy.

applicationPaths:

      applicationPaths:
        - pathN
        - ...

Optional

Provides a list of application pathsOpens in a new window to configure in the Web Gateway. Application paths should not have a trailing slash, and the path /csp is reserved.
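For example, to configure two application paths (the paths shown are illustrative):

```yaml
      applicationPaths:
        - /myapp            # no trailing slash; /csp is reserved
        - /api/myservice
```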

alternativeServers:

      alternativeServers: {FailOver|LoadBalancing}

Optional

Selects the method by which the Web Gateway on each node determines which InterSystems IRIS server (that is, which data node) in its remote server pool to connect to (see Load Balancing and Failover Between Multiple InterSystems IRIS Server InstancesOpens in a new window in the Web Gateway Guide). Possible values are FailOver and LoadBalancing, with a default of LoadBalancing.

sam: Deploy System Alerting and Monitoring

    sam:
      image: containers.intersystems.com/intersystems/sam:1.0.0.115
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
        - ...
      podTemplate:
        core.PodTemplateSpec
      storage{SAM|Grafana}:
        resources:
          requests:
            storage: spec

Optional

The sam section deploys System Alerting and Monitoring (SAM)Opens in a new window, a cluster monitoring solution for InterSystems IRIS data platform, along with the selected InterSystems IRIS topology. For general information about the sam fields, see the data section, noting the following SAM-specific information:

  • SAM is deployed from an InterSystems sam image, an example of which is shown in the image field.

  • The storageSAM and storageGrafana storage override fields in the sam section differ in name from those in the data section, but function in the same way, providing the ability to override the sizes of the predefined volumes for the SAM Manager container (storageSAM) and the Grafana container (storageGrafana). (For information about the SAM volumes, see SAM Component BreakdownOpens in a new window in the System Alerting and Monitoring Guide.)

iam: Deploy InterSystems API Manager

    iam:
      image: containers.intersystems.com/intersystems/iam:2.3.3.2-1
      updateStrategy:
        type: {RollingUpdate|OnDelete}
      preferredZones:
        - zoneN
        - ...
      podTemplate:
        core.PodTemplateSpec
      storagePostgres:
        resources:
          requests:
            storage: spec

Optional

The iam section deploys the InterSystems API Manager (IAM)Opens in a new window, which enables you to monitor and control traffic to and from web-based APIs, along with the selected InterSystems IRIS topology. For general information about the iam fields, see the data section, noting the following IAM-specific information:

  • The IAM is deployed from an InterSystems iam image, an example of which is shown in the image field.

  • The storagePostgres storage override field in the iam section differs in name from those in the data section, but functions in the same way, providing the ability to override the size of the predefined volume for the IAM container.

Deploy the IrisCluster

Once the definition file (for example my-IrisCluster-definition.yaml) is complete, deployOpens in a new window the IrisCluster with the following command:

$ kubectl apply -f my-IrisCluster-definition.yaml
IrisCluster.intersystems.com/my-IrisCluster created

Because the IKO extends Kubernetes to add IrisCluster as a custom resource, you can apply commands directly to your cluster. For example, if you want to see its status, you can execute the kubectl get commandOpens in a new window on the IrisCluster, as in the following:

$ kubectl get IrisClusters
NAME             DATA   COMPUTE   MIRRORED   STATUS     AGE
my-IrisCluster   2      2         true       Creating   28s

Follow the progress of cluster creation by displaying the status of the pods that comprise the deployment, as follows:

$ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
intersystems-iris-operator-6499fbbf4-s74lk   1/1     Running   1          1h23m
my-IrisCluster-arbiter-0                     1/1     Running   0          36s
my-IrisCluster-data-0-0                      0/1     Running   0          28s

...

$ kubectl get pods
NAME                                         READY   STATUS              RESTARTS   AGE
intersystems-iris-operator-6499fbbf4-s74lk   1/1     Running             1          1h23m
my-IrisCluster-arbiter-0                     1/1     Running             0          49s
my-IrisCluster-data-0-0                      0/1     Running             0          41s
my-IrisCluster-data-0-1                      0/1     ContainerCreating   0          6s

...

$ kubectl get pods
NAME                                         READY   STATUS    RESTARTS   AGE
intersystems-iris-operator-6499fbbf4-s74lk   1/1     Running   1          1h35m
my-IrisCluster-arbiter-0                     1/1     Running   0          10m
my-IrisCluster-compute-0                     1/1     Running   0          10m
my-IrisCluster-compute-1                     1/1     Running   0          9m
my-IrisCluster-data-0-0                      1/1     Running   0          12m
my-IrisCluster-data-0-1                      1/1     Running   0          12m
my-IrisCluster-data-1-0                      1/1     Running   0          11m
my-IrisCluster-data-1-1                      1/1     Running   0          10m

In the event of an error status for a particular pod, you can examine its log, for example:

$ kubectl logs my-IrisCluster-data-0-1

Connect to the IrisCluster

As previously described, the serviceTemplate field creates one or more Kubernetes servicesOpens in a new window to expose the IrisCluster to the network through external IP addresses. For example, the service for data node 1, which is always created, is used with the superserver and web server ports (1972 and 52773, respectively) for all connections to the cluster for data ingestion, queries, and other purposes. For example, to load the cluster’s Management PortalOpens in a new window in your browser, get the data node 1 IP address by listing the services representing the IrisCluster, as follows:

$ kubectl get svc 
NAME                       TYPE          CLUSTER-IP   EXTERNAL-IP     PORT(S)                         AGE
my-IrisCluster             LoadBalancer  10.35.245.6  35.196.145.234  1972:30011/TCP,52773:31887/TCP  46m
my-IrisCluster-Webgateway  LoadBalancer  10.35.245.9  35.196.145.177  52773:31887/TCP                 46m

Next, load the following URL in your browser, substituting the listed external IP address for the one shown here:

http://35.196.145.234:52773/csp/sys/UtilHome.csp

Other services and external IP addresses created, if applicable, represent the first pod in the first stateful set managing Web Gateway pods, SAM pods, and IAM pods, if these nodes are included in the IrisCluster. The URLs (including ports) for these connections are as follows:

  • Web Gateway, type={nginx|apache}: http://external-ip:80/csp/bin/Systems/Module.cxw

  • Web Gateway, type=apache-lockeddown: http://external-ip:52773/csp/bin/Systems/Module.cxw

  • SAM: http://external-ip:8080/api/sam/app/index.csp

  • IAM: http://external-ip:8002/overview

Troubleshoot IrisCluster deployment errors

The following kubectl commands may be particularly helpful in determining the reason for a failure during deployment. Each command is linked to reference documentation at kubernetes.ioOpens in a new window, which provides numerous examples of these and other commands that may also be helpful.

The podTemplate field can be useful in exploring deployment and startup errors; examples are provided in that section.

  • kubectl explainOpens in a new window resource

    Lists the fields for the specified resource — for example node, pod, service, persistentvolumeclaim, storageclass, secret, and so on— providing for each a brief explanation and a link to further documentation. This list is useful in understanding the field values displayed by the commands that follow.

  • kubectl describeOpens in a new window resource [instance-name]

    Lists the fields and values for all instances of the specified resource, or for the specified instance of that resource. For example, kubectl describe pods shows you the node each pod is hosted by, the containers in the pod and the names of their data volumes (persistent volume claims), and many other details such as the license key and pull secrets.

  • kubectl getOpens in a new window resource [instance-name] [options]

    Without options, lists basic information for all instances of the specified resource, or for a specified instance of that resource. However, kubectl get -o provides many options for formatting and selecting subsets of the possible output of the command. For example, the command kubectl get IrisCluster IrisCluster-name -o yaml displays the fields defined by the .yaml definition file for the specified IrisCluster, in the same format and with their current values. This allows you, for instance, to create a definition file matching an IrisCluster that has been modified since it was created, as these modifications are reflected in the output.

  • kubectl logs (pod-name | resource/instance-name) [-c container-name]

    Displays the logs for the specified container in a pod or other specified resource instance (for example, kubectl logs deployment/intersystems-operator-name). If a pod includes only a single container, the -c flag is optional. (For more log information, you can use kubectl exec to examine the messages log of the InterSystems IRIS instance on a data or compute node, as described in the next entry.)

  • kubectl exec (pod-name | resource/instance-name) [-c container-name] -- command

    Executes a command in the specified container in a pod or other specified resource instance. If container-name is not specified, the command is executed in the first container, which in an IrisCluster pod is always the InterSystems IRIS container of a data or compute node. For example, you could use kubectl exec to open an interactive shell in a container, or to examine the messages log of the InterSystems IRIS instance it contains.
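These uses of kubectl exec might look like the following sketch; the pod name and the messages log path are illustrative (pod names follow the pattern cluster-name-data-N, and the log path depends on your storage configuration):

```shell
# Open an interactive shell in the InterSystems IRIS container of data node 0
# (pod name is a placeholder; adjust to your IrisCluster name)
kubectl exec -it my-IrisCluster-data-0 -- /bin/bash

# Start an InterSystems IRIS terminal session directly
kubectl exec -it my-IrisCluster-data-0 -- iris session iris

# Display the instance's messages log without opening a shell
# (path shown assumes the default durable %SYS location; yours may differ)
kubectl exec my-IrisCluster-data-0 -- cat /irissys/data/IRIS/mgr/messages.log
```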

Modify the IrisCluster

Generally speaking, you can make changes to your IrisCluster by modifying the definition file (using a change management system to keep track of your modifications) and repeating the kubectl apply command shown in Deploy the IrisCluster. For example, you can add data nodes, change the number of compute nodes (in a sharded cluster or distributed cache cluster), add an arbiter to a mirrored cluster or standalone instance without one or remove the one you originally included, add or remove a SAM or IAM deployment, or change the number of Web Gateway nodes (but be sure to see important information about this in the webgateway definition section). However, you cannot reduce the number of data nodes or change the mirror status (mirrored or nonmirrored) of a deployment; other changes may produce unanticipated issues.
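A typical change cycle might look like the following sketch (file and cluster names are placeholders):

```shell
# After editing the definition file (for example, to increase compute nodes),
# reapply it; the IKO reconciles the running cluster to match the new spec
kubectl apply -f my-IrisCluster-definition.yaml

# Watch the operator bring the new pods up
kubectl get pods --watch

# Confirm the IrisCluster resource reflects the change
kubectl get IrisCluster my-IrisCluster -o yaml
```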

Remove the IrisCluster

To fully remove the cluster, you must use kubectl to delete not only the cluster, but also the persistent volume claims and (typically) the service associated with it. For example:

$ kubectl delete -f my-IrisCluster-definition.yaml
IrisCluster.intersystems.com "my-IrisCluster" deleted
$ kubectl delete pvc --all
persistentvolumeclaim "iris-data-my-IrisCluster-compute-0" deleted
persistentvolumeclaim "iris-data-my-IrisCluster-compute-1" deleted
persistentvolumeclaim "iris-data-my-IrisCluster-data-0-0" deleted
persistentvolumeclaim "iris-data-my-IrisCluster-data-0-1" deleted
persistentvolumeclaim "iris-data-my-IrisCluster-data-1-0" deleted
persistentvolumeclaim "iris-data-my-IrisCluster-data-1-1" deleted
$ kubectl delete svc iris-svc
service "iris-svc" deleted

You can also fully remove the IrisCluster, as well as the IKO, by unprovisioning the Kubernetes cluster on which they are deployed. The operator can be deleted without unprovisioning the cluster by issuing the command helm uninstall intersystems.
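Removing the operator alone might look like the following sketch (the release name intersystems matches the command given above; the verification step is an assumption about how you might confirm removal):

```shell
# Remove the IKO without unprovisioning the Kubernetes cluster
helm uninstall intersystems

# Verify that the operator deployment is gone
kubectl get deployments
```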
