
InterSystems Cloud Manager Guide
Running InterSystems IRIS in Docker Containers

Containers package applications into platform-independent, fully portable runtime solutions, with all dependencies satisfied and isolated, and in doing so provide significant benefits.
With so many advantages, containers are poised to become a natural building block for applications, promoting application delivery and deployment approaches that are simpler, faster, more repeatable, and more robust.
For further general, high-level information on containers, consult introductory resources on container technology.
Docker containers, specifically, are ubiquitous; they can be found in public and private clouds and are supported on virtual machines (VMs) and bare metal. Docker adoption is now so widespread that all major public cloud "infrastructure as a service" (IaaS) providers offer dedicated container services, enabling organizations to reduce system administration costs by using Docker containers while the cloud provider handles the infrastructure.
InterSystems has been supporting InterSystems IRIS™ in Docker containers for some time and is committed to enabling its customers to take advantage of this innovative technology.
For technical information and to learn about Docker technology step-by-step from the beginning, please see the Docker documentation site.
The remainder of this appendix contains the following sections:
Container Basics
This section covers the important basic elements of creating and using Docker containers.
Container Contents
In essence, a Docker container runs a single specified process; anything that can be managed by a single blocking process (one that waits until its work is complete) can be packaged and run in a container.
A containerized application, while remaining wholly within the container, does not run fully on the operating system (OS) on which the container is running, nor does the container hold an entire operating system for the application to run on. Instead, an application in a Docker container runs natively on the kernel of the host system, while the container provides only the elements needed to run it and make it accessible to the required connections, services, and interfaces — a runtime environment (including file system), the code, libraries, environment variables, and configuration files.
Because it packages only these elements and executes the application natively, a Docker container is both very efficient (running in a discrete, manageable process that takes no more memory than any other executable) and fully portable (remaining completely isolated from the host environment by default, accessing local files and ports only if configured to do so), while at the same time providing standard, well-understood application configuration, behavior, and access. If you are experienced with InterSystems IRIS running on Linux, it doesn’t matter what physical, virtual, or cloud systems and OS platforms your Linux-based InterSystems IRIS containers are running on; you interact with them all in the same way, just as you would with traditional InterSystems IRIS instances running on Linux systems.
The isolation of the application from the host environment is a very important element of containerization, with many significant implications. Perhaps most important of these is the fact that unless specifically configured to do so, a containerized application does not write persistent data, because whatever it writes inside the container is lost when the container is removed and replaced by a new container. Because data persistence is usually a requirement for applications, arranging for data to be stored outside of the container and made available to other and future containers is an important aspect of containerized application deployment.
The Container Image
A container image is the executable package, while a container is a runtime instance of an image — that is, what the image becomes in memory when actually executed. In this sense an image and a container are like any other software that exists in executable form; the image is the executable and the container is the running software that results from executing the image.
A Docker image is defined in a Dockerfile, which begins with a base image providing the runtime environment for whatever is to be executed in the container. Next come specifications for everything needed to prepare for execution of the application — for example, copying or downloading files, setting environment variables, and installing the application. The final step is to define the launch of the application.
The image is created by issuing a docker build command specifying the Dockerfile’s location. The resulting image is placed in the Docker image registry of the local host, from which it can be copied to other Docker image registries.
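As a sketch (the image name, tag, and registry host are hypothetical), building an image and copying it to another registry looks like this on a Docker host:

```shell
# Build the image from the Dockerfile in the current directory.
docker build --tag myorg/myapp:1.0 .

# Verify it is in the local image registry.
docker images myorg/myapp

# Copy it to another registry by retagging and pushing.
docker tag myorg/myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```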
Running a Container
To execute a container image and create the container — that is, the image instance in memory and the kernel process that runs it — you must execute three separate Docker commands, as follows:
  1. docker pull — Downloads the image from the repository.
  2. docker create — Defines the container instance and its parameters.
  3. docker start — Starts (launches) the container.
For convenience, however, the docker run command combines these three commands, executing them in sequence, and is the typical means of creating and starting a container.
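Sketched with the hypothetical image name used elsewhere in this document, the three-step sequence and its docker run equivalent look like this (requires a Docker host):

```shell
# Three explicit steps: download the image, define the container, launch it.
docker pull iscrepo/iris:stable
docker create --name iris1 iscrepo/iris:stable
docker start iris1

# Equivalent single command: docker run performs pull, create, and start.
docker run --detach --name iris2 iscrepo/iris:stable
```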
The docker run command has a number of options, and it is important to remember that the command that creates the container instance defines its characteristics for its operational lifetime; while a running container can be stopped and then restarted (not a typical practice in production environments), the aspects of its execution determined by the docker run command cannot be changed. For instance, a storage location can be mounted as a volume within the container with an option in the docker run command (for example, --volume /network/netstore3:/netstore3), but the volumes mounted in this fashion in the command are fixed for that instantiation of the image; they cannot be modified or added to.
As with other UNIX® and Linux commands, options to docker commands such as docker run can be specified in their long forms, in which case they are preceded by two hyphens, or their short forms, preceded by one. In this document, the long forms are used throughout for clarity, for example, --volume rather than -v.
When a containerized application is modified — for example, it is upgraded, or components are added — the existing container is removed, and a new container is created and started by instantiating a different image with the docker run command. Although the purpose is to modify the application, as one might with a traditional application by running an upgrade script or adding a plugin, the new application instance actually has no inherent association with the previous one. Rather, it is the interactions established with the environment outside the container — for example, the container ports you publish to the host with the --publish option of the docker run command, the network you connect the container to with the --network option, and the external storage locations you mount inside the container with the --volume option in order to persist application data — that maintain continuity between separate containers, created from separate images, that represent versions of the same application.
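For example, an upgrade might be sketched as follows; the image tag and network name are hypothetical, and continuity with the replaced container comes only from reusing the same port, network, and volume settings:

```shell
# Remove the container running the old image version.
docker stop iris21 && docker rm iris21

# Create and start a container from the upgraded image, reusing the same
# published port, network, and external volume so it replaces the old one.
docker run --detach --name iris21 \
    --publish 57772:57772 \
    --network appnet \
    --volume /data/dur:/dur \
    iscrepo/iris:upgraded
```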
Installing Docker
The Docker Engine consists of an open source containerization technology combined with a workflow for building and running containerized applications.
To install the Docker engine on your servers, see Install Docker in the Docker documentation.
Docker supports a number of different storage drivers to manage images; the default driver depends on the host on which the Docker Engine is installed. As of this writing, InterSystems supports only the devicemapper storage driver for running InterSystems IRIS in production. For more information about the use of devicemapper, see Docker Storage Driver.
Because macOS and Windows do not support devicemapper and the default overlay FS found on these platforms has known limitations, InterSystems recommends that developers working on macOS and Windows use the AUFS storage driver for local images. On both platforms, use the Advanced options for the Docker daemon to include the following in the configuration file:
"storage-driver" : "aufs"
Creating and Running InterSystems IRIS Docker Containers
This section describes what you need to do to run InterSystems IRIS containers using InterSystems images or images you have created, including the following topics:
InterSystems IRIS Docker Images from InterSystems
InterSystems IRIS images provided by InterSystems will be available worldwide from a repository. When the repository is established, the docker pull command can be used to download the image. (Pulling an image from a repository using its immutable digest identifier can even be done within a Dockerfile.)
Creating InterSystems IRIS Docker Images
There is more than one approach to creating a Docker image for InterSystems IRIS. InterSystems recommends designing the Dockerfile (see The Container Image, and in the Docker documentation Best practices for writing Dockerfiles) to do the following in building the image:
Images officially supported by InterSystems contain an internally developed program called isc-main that is used as the entrypoint application to aid in handling InterSystems IRIS inside a container; this means copying the program into the image and declaring it as the entrypoint in the Dockerfile. The isc-main program is described in The isc-main Program.
The most common and recommended approach to building an image that includes your InterSystems IRIS-based application is to base it on an existing InterSystems IRIS image provided by InterSystems, adding the dependencies relevant to your own application solution. This means starting with an image in which InterSystems IRIS is already installed with isc-main as the entrypoint; whatever you include in the Dockerfile is executed subsequent to this, which means you can start and issue commands to the InterSystems IRIS instance. This is illustrated in the following sample Dockerfile:
# Application Installer example
# building from the latest iris 

FROM iscrepo/iris:stable


COPY ./cls/App/Installer.xml $ISC_PACKAGE_INSTALLDIR/mgr/

RUN ccontrol start IRIS \
    && printf "_SYSTEM\n$ISC_IMAGE_PWD\n" | csession IRIS -U %SYS "##class(%SYSTEM.OBJ).Load(\"$ISC_PACKAGE_INSTALLDIR/mgr/Installer.xml\",\"cdk\")" \
    && printf "_SYSTEM\n$ISC_IMAGE_PWD\n" | csession IRIS -U %SYS "##class(App.Installer).implementDesiredSystemState()" \
    && ccontrol stop IRIS quietly
This Dockerfile adds an application installer and some additional routines, including a ZSTU startup routine for the InterSystems IRIS instance, to the installation directory specified by the installed InterSystems IRIS environment variable $ISC_PACKAGE_INSTALLDIR. InterSystems IRIS is then started, and csession (see Connecting to an InterSystems IRIS Instance in the “Using Multiple Instances of InterSystems IRIS” chapter of the System Administration Guide) is used to run ObjectScript methods to load the application installer and install the application, and to specify the desired system settings.
For security reasons, the predefined user accounts in the InterSystems IRIS instance in an image provided by InterSystems have a random, unrecorded password. (See Initial InterSystems IRIS Security Settings in the “Preparing to Install InterSystems IRIS” chapter of the Installation Guide for information on these accounts.) When ICM uses such an image to deploy an InterSystems IRIS container, it automatically changes this password to the one specified by the -iscPassword option, the iscPassword field, or a password entered interactively with masked input (see The icm run Command and ICM Commands and Options). When you deploy an InterSystems IRIS container using an InterSystems image, or one based on it as described in this section, you must include a mechanism for changing this password.
One possibility is to take advantage of the mechanism ICM uses, which involves a script within the container taking as an argument a file containing the desired password. ICM places this file in the container after it is created but before it is started using the docker cp command, then uses an isc-main option to run the script, which brings the instance up in single-user mode, changes the password, and shuts the instance down again, before starting the instance in the normal fashion. You can follow this procedure using the script, which is located at $ISC_PACKAGE_INSTALLDIR/dev/Cloud/ICM/ within the InterSystems IRIS container, or create your own solution after examining the script for the details of its operation.
Assuming that the InterSystems IRIS journal files and CACHE.WIJ file are relocated to persistent storage outside the container, as described in Durable %SYS for Persistent Instance Data, you can reduce the size of an InterSystems IRIS or application image by deleting these files from the installed InterSystems IRIS instance. This can be done by adding a script to the Dockerfile to be run after installation has completed, for example:
kill ^%SYS("Journal")
ccontrol stop
rm $ISC_PACKAGE_INSTALLDIR/mgr/journal/*
The isc-main Program
There are several requirements an application must satisfy in order to run in a Docker container. The isc-main program was developed by InterSystems to enable InterSystems IRIS and other InterSystems products to meet these requirements.
The main process started by the docker run command, called the entrypoint, is required to block (that is, wait) until its work is complete. In the case of a long-running entrypoint application, this process should block until it's been intentionally shut down.
InterSystems IRIS is typically started using the ccontrol start command, which spawns a number of InterSystems IRIS processes and returns control to the command line. Because it does not run as a blocking process, ccontrol is unsuitable for use as the Docker entrypoint application.
The isc-main program solves this problem by starting InterSystems IRIS and then continuing to run as the blocking entrypoint application. The program also gracefully shuts down InterSystems IRIS when the container is stopped, and has a number of useful options. To use it, add the isc-main binary to a Dockerfile and specify it as the entrypoint application, for example:
ADD host_path/isc-main /isc-main
ENTRYPOINT ["/isc-main"]
Docker imposes these additional requirements on the entrypoint application:
In addition to addressing the issues discussed above, isc-main provides several options to help tailor the behavior of InterSystems IRIS within a container. The options provided by isc-main are shown in the list that follows; examples of their use are provided in Running InterSystems IRIS Docker Containers.
As with the docker command, the options have a long form with which two hyphens are used and a short form using only one.
Options for isc-main appear after the image name in a docker run command, while the Docker options appear before it.
-i instance
    Sets the name of the InterSystems IRIS instance to start or stop. Default: IRIS
-d true|false
    Stops InterSystems IRIS (using cstop) on container shutdown. Default: true
-u true|false
    Starts InterSystems IRIS (using cstart) on container startup. Default: true
-s true|false
    Starts InterSystems IRIS in single-user access mode. Default: false
-l log_file
    Specifies a log file to redirect to standard output for monitoring using the docker logs command. Default: none
-k key_file
    Copies the specified InterSystems IRIS license key to the mgr/ subdirectory of the install directory. Default: none
-b command, --before command
    Sets the executable (such as a shell script) to run prior to starting InterSystems IRIS. Default: none
-a command, --after command
    Sets the executable to run after starting InterSystems IRIS. Default: none
-e command, --exit command
    Sets the executable to run after stopping InterSystems IRIS. Default: none
Two further options execute a custom shell command before, or after, any other arguments are processed.
--version
    Prints the isc-main version.
A help option displays usage information and exits.
Durable %SYS for Persistent Instance Data
This section describes the durable %SYS feature of InterSystems IRIS, which enables persistent storage of instance-specific data when InterSystems IRIS is run within a container, and explains how to use it.
Durable %SYS is one of many configuration details that ICM takes care of automatically; there is no need to manually implement durable %SYS for containerized InterSystems IRIS images deployed by ICM.
Overview of InterSystems IRIS Durable %SYS
Separation of code and data is one of the primary advantages of containerization; a running container represents "pure code" that can work with any appropriate data source. However, because all applications and programs generate and maintain operating and historical data, such as configuration and language settings, user records, and log files, containerization typically must address the need to persist this data on durable storage outside the container.
When you run an InterSystems IRIS image in a Docker container, the initial state of the InterSystems IRIS instance reflects the instance used to create the image — often, for example, a newly-installed instance — and every time you run a particular image in a container, the instance starts off the same way. If you want instead to upgrade an operating InterSystems IRIS instance by running an upgraded image in a new container, you need to:
The durable %SYS feature accomplishes this by storing the needed data on an external file system, which is mounted as a volume within the container and identified in an environment variable specified when the container is started. In effect, while the InterSystems IRIS instance remains containerized, the instance-specific data exists outside the container, just like the databases in which application data is stored. As long as the data’s storage location is mounted as a volume and identified in the environment variable when the container is run, the instance has access to and uses this instance-specific data; as long as the containerized instance has the same network location as the previous version, it effectively replaces that version, upgrading InterSystems IRIS.
Contents of the Durable %SYS Directory
The durable %SYS directory, as created when a container is first started, contains a subset of the InterSystems IRIS install tree, including:
Locating the Durable %SYS Directory
When selecting the location in which this system-critical instance-specific information is to be stored, bear in mind the following considerations:
There must be at least 200 MB of space available on the specified volume for the durable %SYS directory to initialize. Bear in mind, however, that the amount of data in the directory can increase significantly over time, for example through the accumulation of journal files and the expansion of system databases.
The external volume that ICM automatically creates on a provisioned node and uses for an InterSystems IRIS instance’s durable %SYS directory is determined by combining the value of the DataMountPoint field (/intersys/data by default) with the name of the instance, for example /intersys/data/IRIS. The size of the external volume created by ICM for this purpose is determined by the DataVolumeSize field, which is 10 GB by default. If this is not enough for the durable %SYS needs of one or more instances you are deploying, you can override this default by including a larger value for the DataVolumeSize field in the defaults file, or for particular node types in the definitions file (see Configuration, State and Log Files).
Running an InterSystems IRIS Container with Durable %SYS
To use durable %SYS, include in the docker run command the following options:
--volume /<external_host>:/<durable_storage>
--env ISC_DATA_DIRECTORY=/<durable_storage>/<durable_dir>
where external_host is the host path to the durable storage location to be mounted by the container, durable_storage is the name for this location inside the container, and durable_dir is the name of the durable %SYS directory to be created in the location. For example:
docker run --detach \
--publish 57772:57772 \
--volume /data/dur:/dur \
--env ISC_DATA_DIRECTORY=/dur/cconfig \
--hostname crmdb1 \
--name iris21 iscrepo/iris:latest
InterSystems IRIS uses certain information to verify that an instance is allowed to mount the specified durable %SYS directory. This information includes the hostname, which you must therefore specify when running an InterSystems IRIS container; if you do not, you may not be able to start the instance following a crash or other nongraceful shutdown.
When you run an InterSystems IRIS container using these options, the following occurs:
In the case of the example provided, the InterSystems IRIS instance running in container iris21 is configured to use the host path /data/dur/cconfig (which is the path /dur/cconfig inside the container) as the directory for persistent storage of all the files listed in Contents of the Durable %SYS Directory. If durable %SYS data does not already exist in the host directory /data/dur/cconfig (container directory /dur/cconfig) it is copied there from the installation directory. Either way, the instance’s internal pointers are set to container directory /dur/cconfig (host directory /data/dur/cconfig).
See Running InterSystems IRIS Docker Containers for several examples of launching an InterSystems IRIS container with durable %SYS.
The following illustration shows the relationship between the installation directory of a newly installed InterSystems IRIS container and the external durable %SYS directory, with external application databases also depicted.
InterSystems IRIS Installation Directory and Durable %SYS
Identifying the Durable %SYS Directory Location
When you want to manually verify the location of the durable %SYS directory or pass this location programmatically, you have three options, as follows:
Ensuring that Durable %SYS is Specified and Mounted
When a container is run with the ISC_DATA_DIRECTORY environment variable, pointers are set to the durable %SYS files only if the specified volume is successfully mounted.
If ISC_DATA_DIRECTORY is not specified, the InterSystems IRIS instance uses the instance-specific data within the container, and therefore operates as a new instance.
To use durable %SYS, you must therefore ensure that all methods by which your InterSystems IRIS containers are run incorporate these two options.
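One way to verify these settings on a running container (here the container name iris21 from the earlier example) is to inspect its environment and mounts; this sketch requires a Docker host:

```shell
# Confirm ISC_DATA_DIRECTORY was set when the container was created.
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' iris21 \
    | grep ISC_DATA_DIRECTORY

# Confirm the durable storage location is mounted inside the container.
docker inspect --format '{{range .Mounts}}{{println .Source .Destination}}{{end}}' iris21
```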
Separating File Systems for Containerized InterSystems IRIS
In the interests of performance and recoverability, InterSystems recommends that you locate the primary and secondary journal directories of each InterSystems IRIS instance on two separate file systems, which should also be separate from those hosting InterSystems IRIS executables, system databases and the CACHE.WIJ file, with the latter optionally on a fourth file system. Following InterSystems IRIS installation, however, the primary and secondary journal directories are set to the same path, install-dir/mgr/journal, and thus may both be set to /mgr/journal in the durable %SYS directory when durable %SYS is in use.
After the container is started, you can reconfigure the external locations of the primary and secondary directories using the Management Portal or by editing the cache.cpf file, as long as the volumes you relocate them to are always specified when running a new image to upgrade the InterSystems IRIS instance. You can also configure separate file systems when launching the container, as described in Running InterSystems IRIS Docker Containers.
When the durable %SYS directory is in use, the CACHE.WIJ file and some system databases are already separated from the InterSystems IRIS executables, which are inside the container. Under some circumstances, colocating the CACHE.WIJ file with your application databases instead may improve performance.
See File System Recommendations in the "File System and Storage Configuration Recommendations" chapter of the Installation Guide for more information about separation of file systems for InterSystems IRIS.
Running InterSystems IRIS Docker Containers
This section explains how to change the InterSystems IRIS password in an InterSystems container, and provides some examples of docker run commands for launching InterSystems IRIS containers using isc-main options.
Changing the InterSystems IRIS Password
To secure the InterSystems IRIS instance in a container, InterSystems sets the password for predefined accounts to a value that is undiscoverable and unrecoverable, but changeable, and provides a script for making this change, located in install-dir/dev/Cloud/ICM/. To use the script, do the following:
  1. Create a file within the container, or on an external storage volume if one was mounted when the container was started, containing the new password.
  2. Within the container, run the script with the password file as an argument, for example:
    # /usr/cachesys/dev/Cloud/ICM/ /tmp/password.txt
    The script will shut down the IRIS instance if it is running.
  3. Start the InterSystems IRIS instance with the command ccontrol start instance_name.
This procedure can be scripted by running the InterSystems IRIS container with docker run and then using the docker exec command in a script, for example:
docker exec <instance_name> bash -c 'echo <password> > /tmp/pwd.txt; $ISC_PACKAGE_INSTALLDIR/dev/Cloud/ICM/ /tmp/pwd.txt; ccontrol start IRIS'
The password file can also be preplaced on a volume to be mounted by the container, and users of the durable %SYS feature have additional options available. The isc-main --after command can also be used to run an executable for this task after the InterSystems IRIS instance starts for the first time.
Running an InterSystems IRIS Container: Examples
The following are some examples of docker run commands for launching InterSystems IRIS containers using isc-main options.
  1. To change the password for predefined accounts on an InterSystems IRIS instance being started for the first time, as described in the previous section, you could execute the password change script before starting the instance, as follows:
    docker run iscrepo/iris:stable --before 'echo <password> > /tmp/pwd.txt; \
    $ISC_PACKAGE_INSTALLDIR/dev/Cloud/ICM/ /tmp/pwd.txt'
    You could also use the --before option with a custom script, for example to start up a background task:
    docker run iscrepo/iris:stable --before
  2. The required InterSystems IRIS license key must be brought into the container so that the instance can operate. In the following example, which uses the isc-main --key option, the license key is staged in the key/ directory on the volume mounted for the durable %SYS directory (that is, in /home/volumes/durable/key/) and is copied to the mgr/ directory within the durable %SYS directory (/dur/iris41/mgr/ in the container, /home/volumes/durable/iris41/mgr/ on the external storage) before the InterSystems IRIS instance is started.
    docker run --detach \
    --env ISC_DATA_DIRECTORY=/dur/iris41 \
    --volume /home/volumes/durable:/dur \
    --name iris41 \
    iscrepo/iris:stable \
    --key /dur/key/license.key 
    Because durable %SYS is in use, the license key persists in /home/volumes/durable/iris41/mgr/ and is therefore automatically activated by an InterSystems IRIS container launched with the needed --env and --volume options (see Ensuring that Durable %SYS is Specified and Mounted).
  3. If you write a script to make changes to the InterSystems IRIS configuration, you can either stage the script on the durable %SYS volume (or another mounted volume) or add it to the container, and then run it before the instance is started using the isc-main --before option. For example, you could write a script that changes the instance’s cache.cpf file to configure the primary and alternate journal directories, so you can place them on separate file systems as described in Separating File Systems for Containerized InterSystems IRIS. Because cache.cpf is in the durable %SYS directory, these changes are persistent. This is shown in the following example, using a script that takes two arguments and modifies the cache.cpf file in the directory specified by the ISC_DATA_DIRECTORY variable:
    docker run --detach \
    --env ISC_DATA_DIRECTORY=/dur/iris41 \
    --volume /home/volumes/durable:/dur \
    --volume /network/volume1:/net1 \
    --volume /network/volume2:/net2 \
    --name iris41 \
    iscrepo/iris:stable \
    --before '/dur/scripts/ /net1/journal_primary /net2/journal_alternate'
    The contents of might be something like the following:
    sed -i -e "s#^CurrentDirectory=/usr/cachesys/mgr/journal/#CurrentDirectory=$1#" \
    $ISC_DATA_DIRECTORY/cache.cpf && \
    sed -i -e "s#^AlternateDirectory=/usr/cachesys/mgr/journal/#AlternateDirectory=$2#" \
    $ISC_DATA_DIRECTORY/cache.cpf
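The substitution pattern can be sanity-checked outside a container by running it against a sample file; all paths below are hypothetical placeholders:

```shell
# Create a sample cache.cpf fragment in a temporary directory.
ISC_DATA_DIRECTORY=$(mktemp -d)
printf 'CurrentDirectory=/usr/cachesys/mgr/journal/\nAlternateDirectory=/usr/cachesys/mgr/journal/\n' \
    > "$ISC_DATA_DIRECTORY/cache.cpf"

# Apply the same substitutions the script performs.
sed -i -e "s#^CurrentDirectory=/usr/cachesys/mgr/journal/#CurrentDirectory=/net1/journal_primary#" \
    "$ISC_DATA_DIRECTORY/cache.cpf"
sed -i -e "s#^AlternateDirectory=/usr/cachesys/mgr/journal/#AlternateDirectory=/net2/journal_alternate#" \
    "$ISC_DATA_DIRECTORY/cache.cpf"

# cat prints:
#   CurrentDirectory=/net1/journal_primary
#   AlternateDirectory=/net2/journal_alternate
cat "$ISC_DATA_DIRECTORY/cache.cpf"
```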
Additional Docker/InterSystems IRIS Considerations
This section describes some additional considerations to bear in mind when creating and running InterSystems IRIS container images, including the following:
Docker Storage Driver
Docker supports a number of different storage drivers to manage images. As of this writing, InterSystems supports only the devicemapper storage driver for running InterSystems IRIS in production, and you must configure the Docker daemon to use this driver; be sure to read Docker’s explanation of using the devicemapper storage driver in direct-lvm mode for management of container images. The Docker documentation explains how to determine which storage driver your OS uses by default and how to change the driver if need be. See Installing Docker for important information about storage driver configuration on macOS and Windows systems.
When ICM installs Docker on systems on which it will deploy InterSystems IRIS images, it automatically sets the storage driver to devicemapper.
Locating Image Storage on a Separate Partition
The default storage location for Docker container images is /var/lib/docker. Because this is part of the root file system, you might find it useful to mount it on a separate partition, both to avoid quickly running out of storage and to protect against file system corruption, from which both Docker and the OS might have trouble recovering. For example, SUSE states: “It is recommended to have /var/lib/docker mounted on a separate partition or volume to not affect the Docker host operating system in case of file system corruption.”
A good approach is to set the Docker Engine storage setting to this alternative volume partition. For example, on Fedora-based distributions, edit the Docker daemon configuration file (see Configure and troubleshoot the Docker daemon in the Docker documentation), locate the ExecStart= command line option for the Docker Engine, and add - as an argument.