Running InterSystems Products in Containers


This article explains the benefits of deploying software using Docker containers and provides the information you need to deploy InterSystems IRIS™ and InterSystems IRIS-based applications in Docker containers, using the Docker images provided by InterSystems.
For an introduction to this topic, including a brief hands-on experience, see First Look: InterSystems IRIS in Docker Containers.
Why Containers?
Containers package applications into platform-independent, fully portable runtime solutions, with all dependencies satisfied and isolated. These qualities make containers a natural building block for applications, promoting application delivery and deployment approaches that are simpler, faster, more repeatable, and more robust.
For an introduction to containers and container images from an InterSystems product manager, see What is a Container? and What is a Container Image? on InterSystems Developer Community.
Docker containers, specifically, are ubiquitous; they can be found in public and private clouds and are supported on virtual machines (VMs) and bare metal. Docker is so widely adopted that all major public cloud infrastructure-as-a-service (IaaS) providers offer dedicated container services, which let organizations reduce system administration costs by using Docker containers while the cloud provider handles the infrastructure.
InterSystems has been supporting InterSystems IRIS™ in Docker containers for some time and is committed to enabling its customers to take advantage of this innovative technology.
For technical information and to learn about Docker technology step-by-step from the beginning, please see the Docker documentation site.
InterSystems IRIS in Docker Containers
Because a Docker container packages only the elements needed to run a containerized application and executes the application natively, it provides standard, well-understood application configuration, behavior, and access. If you are experienced with InterSystems IRIS running on Linux, it doesn’t matter what physical, virtual, or cloud system and distribution your Linux-based InterSystems IRIS container is running on; you interact with it in the same way regardless, just as you would with traditional InterSystems IRIS instances running on different Linux systems.
The following describes different aspects of how InterSystems IRIS uses containers.
Container Basics
This section covers the important basic elements of creating and using Docker containers.
Container Contents
In essence, a Docker container runs a single primary process, which can spawn child processes; anything that can be managed by a single blocking process (one that waits until its work is complete) can be packaged and run in a container.
A containerized application, while remaining wholly within the container, does not run fully on the operating system (OS) on which the container is running, nor does the container hold an entire operating system for the application to run on. Instead, an application in a Docker container runs natively on the kernel of the host system, while the container provides only the elements needed to run it and make it accessible to the required connections, services, and interfaces — a runtime environment (including file system), the code, libraries, environment variables, and configuration files.
Because it packages only these elements and executes the application natively, a Docker container is both very efficient (running as a discrete, manageable operating system process that takes no more memory than any other executable) and fully portable (remaining completely isolated from the host environment by default, accessing local files and ports only if configured to do so), while at the same time providing standard, well-understood application configuration, behavior, and access.
The isolation of the application from the host environment is a very important element of containerization, with many significant implications. Perhaps most important of these is the fact that unless specifically configured to do so, a containerized application does not write persistent data, because whatever it writes inside the container is lost when the container is removed and replaced by a new container. Because data persistence is usually a requirement for applications, arranging for data to be stored outside of the container and made available to other and future containers is an important aspect of containerized application deployment.
The Container Image
A container image is the executable package, while a container is a runtime instance of an image — that is, what the image becomes in memory when actually executed. In this sense an image and a container are like any other software that exists in executable form; the image is the executable and the container is the running software that results from executing the image.
A Docker image is defined in a Dockerfile, which begins with a base image providing the runtime environment for whatever is to be executed in the container. For example, InterSystems uses Ubuntu 16.04 LTS as a base for its InterSystems IRIS images, so the InterSystems IRIS instance in a container created from an InterSystems image is running in an Ubuntu 16.04 LTS environment. Next come specifications for everything needed to prepare for execution of the application — for example, copying or downloading files, setting environment variables, and installing the application. The final step is to define the launch of the application.
The image is created by issuing a docker build command specifying the Dockerfile’s location. The resulting image is placed in the Docker image registry of the local host, from which it can be copied to other Docker image registries.
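For example, a minimal sketch of building an image from a Dockerfile in the current directory (the image name acme/myapp:1.0 is purely illustrative):

# build an image from the Dockerfile in the current directory and tag it
docker build --tag acme/myapp:1.0 .

# confirm the image was added to the local image registry
docker images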
Running a Container
To execute a container image and create the container — that is, the image instance in memory and the kernel process that runs it — you can execute three separate Docker commands, as follows:
  1. docker pull — Downloads the image from the repository.
  2. docker create — Defines the container instance and its parameters.
  3. docker start — Starts (launches) the container.
For convenience, however, the docker run command combines these three commands, executing them in sequence, and is the typical means of creating and starting a container.
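For example, the following two command sequences have the same effect (the image and container names are purely illustrative):

# three separate steps: download the image, define the container, start it
docker pull acme/myapp:1.0
docker create --name myapp acme/myapp:1.0
docker start myapp

# the equivalent combined form
docker run --name myapp acme/myapp:1.0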
The docker run command has a number of options, and it is important to remember that the command that creates the container instance defines its characteristics for its operational lifetime; while a running container can be stopped and then restarted (not a typical practice in production environments), the aspects of its execution determined by the docker run command cannot be changed. For instance, a storage location can be mounted as a volume within the container with an option in the docker run command (for example, --volume /network/netstore3:/netstore3), but the volumes mounted in this fashion in the command are fixed for that instantiation of the image; they cannot be modified or added to.
Note:
As with other UNIX® and Linux commands, options to docker commands such as docker run can be specified in their long forms, in which case they are preceded by two hyphens, or their short forms, preceded by one. In this document, the long forms are used throughout for clarity, for example, --volume rather than -v.
When a containerized application is modified — for example, it is upgraded, or components are added — the existing container is removed, and a new container is created and started by instantiating a different image with the docker run command. Although the purpose is to modify the application, as one might with a traditional application by running an upgrade script or adding a plug-in, the new application instance actually has no inherent association with the previous one. Rather, it is the interactions established with the environment outside the container — for example, the container ports you publish to the host with the --publish option of the docker run command, the network you connect the container to with the --network option, and the external storage locations you mount inside the container with the --volume option in order to persist application data — that maintain continuity between separate containers, created from separate images, that represent versions of the same application.
Installing Docker
The Docker Engine consists of an open source containerization technology combined with a workflow for building and running containerized applications.
To install the Docker engine on your servers, see Install Docker in the Docker documentation.
Docker supports a number of different storage drivers to manage images; the default driver depends on the host on which the Docker Engine is installed. As of this writing, InterSystems supports only the devicemapper storage driver for running InterSystems IRIS in production. For more information about the use of devicemapper, see Docker Storage Driver.
Creating and Running InterSystems IRIS Docker Containers
This section describes what you need to do to run InterSystems IRIS containers using InterSystems images or images you have created, including the following topics:
Using InterSystems IRIS Docker Images
InterSystems IRIS images provided by InterSystems will be available worldwide from a repository. When the repository is established, the docker pull command can be used to download the image. (Pulling an image from a repository can also be done within a Dockerfile.)
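Assuming the image is published under the intersystems/iris repository name used in the examples in this document, pulling it would look like this:

docker pull intersystems/iris:2018.1.1.633.0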
The following sections cover several important issues concerning the use of InterSystems IRIS images provided by InterSystems, including:
Docker Platforms Supported by InterSystems
InterSystems supports use of the InterSystems IRIS Docker images it provides on Linux platforms, and the instructions and procedures in this document are intended to be used on Linux. Rather than executing containers as native processes, as on Linux platforms, Docker for Windows creates a Linux VM running under Hyper-V, the Windows virtualizer, to host containers. These additional layers add complexity that prevents InterSystems from supporting Docker for Windows at this time.
We understand, however, that for testing and other specific purposes, you may want to run InterSystems IRIS-based containers from InterSystems under Docker for Windows. For information about the differences between Docker for Windows and Docker for Linux that InterSystems is aware of as they apply to working with InterSystems-provided container images, see Using InterSystems IRIS Containers with Docker for Windows on InterSystems Developer Community; for general information about using Docker for Windows, see Getting started with Docker for Windows in the Docker documentation.
License Keys for InterSystems IRIS Containers
Like any InterSystems IRIS instance, an instance running in a container requires a license key (typically called iris.key). Free temporary keys for evaluation purposes are available from InterSystems; for general information about InterSystems IRIS license keys, see the Managing InterSystems IRIS Licensing chapter of the System Administration Guide.
License keys are not, and cannot be, included in an InterSystems IRIS container image. Instead, you must stage a license key in a storage location accessible to the container, typically a mounted volume, and provide some mechanism for copying it into the container, where it can be activated for the InterSystems IRIS instance running there.
The iris-main program, which runs as the blocking entrypoint application of an InterSystems IRIS container, provides a mechanism for handling the license key. When you use the --key option in a docker run command creating and starting a container, the license key is copied from the location you specify to the mgr/ directory of the InterSystems IRIS instance, and therefore automatically activated when the instance starts.
See The iris-main Program for information about iris-main options, including the --key option, and Running InterSystems IRIS Containers for an example of using the --key option in a docker run command.
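As a sketch, assuming the license key has been staged on the host in /data/license, a container could be created with the key copied and activated automatically:

# mount the host directory containing iris.key and let iris-main copy and activate it
docker run --detach --name iris \
  --volume /data/license:/license \
  intersystems/iris:2018.1.1.633.0 \
  --key /license/iris.key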
Changing the InterSystems IRIS Password
For security reasons, the predefined user accounts in the InterSystems IRIS instance in an image provided by InterSystems have a random, unrecoverable, but changeable password. (For information on the predefined accounts, see Initial InterSystems IRIS Security Settings in the “Preparing to Install InterSystems IRIS” chapter of the Installation Guide .) When you deploy an InterSystems IRIS container using an InterSystems image, or one based on it as described in this section, you must change the password as part of deployment.
The iris-main program, the blocking entrypoint application that is started when you start an InterSystems IRIS container (see The iris-main Program), provides an option for changing the InterSystems IRIS password. This option reads the new password from a file. For example, this docker run command starts an InterSystems IRIS container and changes the password.
echo <password> > /tmp/password.txt && docker run --name iris --volume /tmp:/host intersystems/iris:2018.1.2.633.0 \
--password-file /host/password.txt
This approach, however, while convenient, is not fully secure, as it exposes the password.
Caution:
Exposing the InterSystems IRIS password in any way constitutes a serious security risk.
To avoid this, you can create the password file in a secure location and copy it to the volume to be mounted by the container, as shown in the following.
cp local/pwd.txt /tmp && docker run --volume /tmp:/host intersystems/iris:2018.1.2.633.0 \
--password-file /host/pwd.txt
Important:
InterSystems does not support mounting NFS locations as external volumes in InterSystems IRIS containers.
Note:
InterSystems IRIS 2018.1.1 containers do not include the iris-main --password-file option. Instead, you can take advantage of a password change script provided in the container and the iris-main --before option. To do so, you must also use the Docker --volume option to mount the location of the password file in the container. The following example copies the password file to the location to be mounted as a volume, then runs the script with the password file as argument:
cp <password_dir>/pwd.txt /tmp && docker run --volume /tmp:/pwd intersystems/iris:2018.1.1.633.0 \
--before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /pwd/pwd.txt" 
If you do this, however, the password change script runs every time the container is started in the future, causing the container start to fail. The best practice is to avoid this by specifying the environment variable ICM_SENTINEL_DIR, which places a file called change_password.done in the directory it specifies so that the password change script takes no action at future container starts, as follows:
docker run --volume /home/pwd:/pwd --env ICM_SENTINEL_DIR=/pwd intersystems/iris:2018.1.1.633.0 \
--before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /pwd/pwd.txt"
Creating InterSystems IRIS Docker Images
There is more than one approach to creating a Docker image for InterSystems IRIS. One approach is to design the Dockerfile (see The Container Image, and Best practices for writing Dockerfiles in the Docker documentation) to install InterSystems IRIS and prepare the environment for your application as part of building the image.
Images officially supported by InterSystems contain an internally developed program called iris-main that is used as the entrypoint application to aid in handling InterSystems IRIS inside a container; if you build your own image from scratch, this means copying the program into the image in the Dockerfile and declaring it as the entrypoint. The iris-main program is described in The iris-main Program.
Given the complexities in the preceding, the most common and recommended approach to building an image that includes your InterSystems IRIS-based application along with InterSystems IRIS is to base it on an existing InterSystems IRIS image provided by InterSystems, adding the dependencies relevant to your own application solution. This means starting with an image in which InterSystems IRIS is already installed with iris-main as the entrypoint; whatever you include in the Dockerfile is executed subsequent to this, which means you can start and issue commands to the InterSystems IRIS instance. This is illustrated in the following sample Dockerfile:
# Code installer example
#
# building from the InterSystems IRIS image

FROM intersystems/iris:2018.1.1.633.0

# copy in application code
COPY ./code $ISC_PACKAGE_INSTALLDIR/code

# Set a known password for predefined accounts using a script
RUN echo abc123 > /tmp/password.txt
RUN /usr/bin/changePassword.sh /tmp/password.txt

# Compile the application code
RUN iris start IRIS && \
    printf '_SYSTEM\nabc123\nzn "USER"\ndo $system.OBJ.LoadDir("'$ISC_PACKAGE_INSTALLDIR'/code/","c")\n' \
    | irissession IRIS

# Recreate unknown password
# RUN head /dev/urandom | tr -dc A-Za-z0-9 | head -c 26 > /tmp/password.txt
# RUN /usr/bin/changePassword.sh /tmp/password.txt
This Dockerfile, based on an InterSystems IRIS image, does the following:
  1. Adds some application code to the installation directory as specified by the installed InterSystems IRIS environment variable $ISC_PACKAGE_INSTALLDIR.
  2. Temporarily changes the InterSystems IRIS password for predefined accounts to a known string.
    Note:
    For 2018.1.1 InterSystems IRIS containers, in this and the final step you can use the provided script described in Changing the InterSystems IRIS Password.
  3. Compiles the application code in an InterSystems IRIS session.
  4. Replaces the known password with a random, unrecorded password (shown commented out in the sample Dockerfile).
Important:
For security reasons, the predefined user accounts in the InterSystems IRIS instance in an image provided by InterSystems have a random, unrecorded password. (For information on these accounts, see Initial InterSystems IRIS Security Settings in the “Preparing to Install InterSystems IRIS” chapter of the Installation Guide.) The preceding example shows the password for the instance being set using a script in the container, but the iris-main program provides an option to do this (InterSystems IRIS 2018.1.2 containers only); see Changing the InterSystems IRIS Password.
Note:
An important consideration when creating Docker images is image size. Larger images take longer to download and require more storage on the target machine. A good example of image size management involves the InterSystems IRIS journal files and write image journal (WIJ). Assuming that these files are relocated to persistent storage outside the container (where they should be), as described in Durable %SYS for Persistent Instance Data, you can reduce the size of an InterSystems IRIS or application image by deleting these files from the installed InterSystems IRIS instance within the container. This can be done by adding a script to the Dockerfile to be run after installation has completed, for example:
# From an InterSystems IRIS terminal session (ObjectScript), stop journaling and clear the journal list:
D INT^JRNSTOP
kill ^%SYS("Journal")
# Then, from the shell, stop the instance and delete the WIJ and journal files:
iris stop IRIS
rm $ISC_PACKAGE_INSTALLDIR/mgr/IRIS.WIJ
rm $ISC_PACKAGE_INSTALLDIR/mgr/journal/*
The iris-main Program
There are several requirements an application must satisfy in order to run in a Docker container. The iris-main program was developed by InterSystems to enable InterSystems IRIS and other InterSystems products to meet these requirements.
The main process started by the docker run command, called the entrypoint, is required to block (that is, wait) until its work is complete. In the case of a long-running entrypoint application, this process should block until it's been intentionally shut down.
InterSystems IRIS is typically started using the iris start command, which spawns a number of InterSystems IRIS processes and returns control to the command line. Because it does not run as a blocking process, iris is unsuitable for use as the Docker entrypoint application.
The iris-main program solves this problem by starting InterSystems IRIS and then continuing to run as the blocking entrypoint application. The program also gracefully shuts down InterSystems IRIS when the container is stopped, and has a number of useful options. To use it, add the iris-main binary to a Dockerfile and specify it as the entrypoint application, for example:
ADD host_path/iris-main /iris-main
ENTRYPOINT ["/iris-main"]
Docker also imposes additional requirements on the entrypoint application beyond blocking, such as responding when Docker stops the container so that the instance can be shut down gracefully; iris-main addresses these as well.
In addition to addressing these requirements, iris-main provides a number of options to help tailor the behavior of InterSystems IRIS within a container. The options provided by iris-main are listed below; examples of their use are provided in Running InterSystems IRIS Containers.
Note:
Options for iris-main appear after the image name in a docker run command, while the Docker options appear before it. As with the docker command, the options have a long form in which two hyphens are used and a short form using only one.
-i instance, --instance=instance
    Sets the name of the InterSystems IRIS instance to start or stop. Default: IRIS

-d 1|0, --down=1|0
    Stops InterSystems IRIS (using iris stop) on container shutdown (1 = true, 0 = false). Default: 1 (true)

-u 1|0, --up=1|0
    Starts InterSystems IRIS (using iris start) on container startup (1 = true, 0 = false). Default: 1 (true)

-s 1|0, --nostu=1|0
    Starts InterSystems IRIS in single-user access mode (1 = true, 0 = false). Default: 0 (false)

-l log_file, --log=log_file
    Specifies a log file to redirect to standard output for monitoring using the docker logs command. Default: none

-k key_file, --key=key_file
    Copies the specified InterSystems IRIS license key to the mgr/ subdirectory of the install directory. Default: none

-p password_file, --password-file password_file
    Changes the InterSystems IRIS password to the contents of the file and then deletes the file. (InterSystems IRIS 2018.1.2 containers only.) Default: none

-b command, --before command
    Sets the executable (such as a shell script) to run before starting InterSystems IRIS. Default: none

-a command, --after command
    Sets the executable to run after starting InterSystems IRIS. Default: none

-e command, --exit command
    Sets the executable to run after stopping InterSystems IRIS. Default: none

--create=command
    Executes a custom shell command before any other arguments are processed.

--terminate=command
    Executes a custom shell command after any other arguments are processed.

--version
    Prints the iris-main version.

-h, --help
    Displays usage information and exits.
Durable %SYS for Persistent Instance Data
This section describes the durable %SYS feature of InterSystems IRIS, which enables persistent storage of instance-specific data when InterSystems IRIS is run within a container, and explains how to use it.
Overview of InterSystems IRIS Durable %SYS
Separation of code and data is one of the primary advantages of containerization; a running container represents "pure code" that can work with any appropriate data source. However, because all applications and programs generate and maintain operating and historical data — such as configuration and language settings, user records, and log files — containerization typically must address the need to enable persistence of program-related data on durable data storage.
When you run an InterSystems IRIS image in a Docker container, the initial state of the InterSystems IRIS instance reflects the instance used to create the image — often, for example, a newly-installed instance — and every time you run a particular image in a container, the instance starts off the same way. If you want instead to upgrade an operating InterSystems IRIS instance by running an upgraded image in a new container, you need a way to preserve the existing instance’s instance-specific data and make it available to the new instance.
The durable %SYS feature accomplishes this by storing the needed data on an external file system, which is mounted as a volume within the container and identified in an environment variable specified when the container is started. In effect, while the InterSystems IRIS instance remains containerized, the instance-specific data exists outside the container, just like the databases in which application data is stored. As long as the data’s storage location is mounted as a volume and identified in the environment variable when the container is run, the instance has access to and uses this instance-specific data; as long as the containerized instance has the same network location as the previous version, it effectively replaces that version, upgrading InterSystems IRIS.
Contents of the Durable %SYS Directory
The durable %SYS directory, as created when a container is first started, contains a subset of the InterSystems IRIS install tree, including, but not limited to, configuration files, system databases, journal files, and log files.
Locating the Durable %SYS Directory
When selecting the location in which this system-critical instance-specific information is to be stored, bear in mind the following considerations:
There must be at least 200 MB of space available on the specified volume for the durable %SYS directory to initialize. Bear in mind, however, that the amount of data in the directory can increase significantly over time, due for example to the accumulation of journal files and the expansion of system databases.
Running an InterSystems IRIS Container with Durable %SYS
To use durable %SYS, include in the docker run command the following options:
--volume /<external_host>:/<durable_storage>
--env ISC_DATA_DIRECTORY=/<durable_storage>/<durable_dir>
where external_host is the host path to the durable storage location to be mounted by the container, durable_storage is the name for this location inside the container, and durable_dir is the name of the durable %SYS directory to be created in the location. For example:
docker run --detach \
--publish 52773:52773 \
--volume /data/dur:/dur \
--env ISC_DATA_DIRECTORY=/dur/iconfig \
--name iris21 intersystems/iris:2018.1.1.633.0
Important:
InterSystems does not support mounting NFS locations as external volumes in InterSystems IRIS containers.
Note:
The --publish option publishes the InterSystems IRIS instance’s web server port (52773 by default) to the host, so that the instance’s management portal can be loaded into a browser on any host.
When you run an InterSystems IRIS container using these options, the instance stores its instance-specific data in the durable %SYS directory on the mounted volume rather than inside the container.
In the case of the example provided, the InterSystems IRIS instance running in container iris21 is configured to use the host path /data/dur/iconfig (which is the path /dur/iconfig inside the container) as the directory for persistent storage of all the files listed in Contents of the Durable %SYS Directory. If durable %SYS data does not already exist in the host directory /data/dur/iconfig (container directory /dur/iconfig) it is copied there from the installation directory. Either way, the instance’s internal pointers are set to container directory /dur/iconfig (host directory /data/dur/iconfig).
See Running InterSystems IRIS Containers for examples of launching an InterSystems IRIS container with durable %SYS.
The following illustration shows the relationship between the installation directory of a newly installed InterSystems IRIS container and the external durable %SYS directory, with external application databases also depicted.
InterSystems IRIS Installation Directory and Durable %SYS
Identifying the Durable %SYS Directory Location
When you want to manually verify the location of the durable %SYS directory or pass this location programmatically, you have three options, as follows:
Ensuring that Durable %SYS is Specified and Mounted
When a container is run with the ISC_DATA_DIRECTORY environment variable, pointers are set to the durable %SYS files only if the specified volume is successfully mounted.
If ISC_DATA_DIRECTORY is not specified, the InterSystems IRIS instance uses the instance-specific data within the container, and therefore operates as a new instance.
To use durable %SYS, you must therefore ensure that all methods by which your InterSystems IRIS containers are run incorporate these two options.
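A quick way to check a running container, using the container name and paths from the example above, is to confirm that the variable is set and that the durable %SYS directory was created on the mounted volume (a sketch, not an official verification procedure):

# confirm the environment variable is set inside the container
docker exec iris21 printenv ISC_DATA_DIRECTORY

# confirm the durable %SYS directory exists on the mounted volume
docker exec iris21 ls /dur/iconfig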
Separating File Systems for Containerized InterSystems IRIS
In the interests of performance and recoverability, InterSystems recommends that you locate the primary and secondary journal directories of each InterSystems IRIS instance on two separate file systems, which should also be separate from those hosting InterSystems IRIS executables, system databases and the IRIS.WIJ file, with the latter optionally on a fourth file system. Following InterSystems IRIS installation, however, the primary and secondary journal directories are set to the same path, install-dir/mgr/journal, and thus may both be set to /mgr/journal in the durable %SYS directory when durable %SYS is in use.
After the container is started, you can reconfigure the external locations of the primary and secondary directories using the Management Portal or by editing the iris.cpf file, as long as the volumes you relocate them to are always specified when running a new image to upgrade the InterSystems IRIS instance. You can also configure separate file systems when launching the container, as described in Running InterSystems IRIS Containers.
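As a sketch, assuming the two journal volumes are mounted in the container at /jrn1 and /jrn2 (paths chosen for illustration), the corresponding settings in the [Journal] section of the instance’s iris.cpf would look something like this:

[Journal]
CurrentDirectory=/jrn1/journal/
AlternateDirectory=/jrn2/journal/

As noted above, any volumes used this way must be mounted with --volume every time a container for this instance is run.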
Note:
When the durable %SYS directory is in use, the IRIS.WIJ file and some system databases are already separated from the InterSystems IRIS executables, which are inside the container. Under some circumstances, colocating the IRIS.WIJ file with your application databases instead may improve performance.
See File System Recommendations in the "File System and Storage Configuration Recommendations" chapter of the Installation Guide for more information about separation of file systems for InterSystems IRIS.
Running InterSystems IRIS Containers
This section provides some examples of launching InterSystems IRIS containers with the Docker and iris-main options covered in this document.
Note:
The sample docker run commands in this section include only the options relevant to each example and omit options that in practice would be included, as shown (for example) in the sample command in Running an InterSystems IRIS Container with Durable %SYS.
Running an InterSystems IRIS Container: Docker Run Examples
The following are examples of docker run commands for launching InterSystems IRIS containers using iris-main options.
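For example, the following sketch combines options discussed in this document (durable %SYS, a published web server port, and the iris-main --key and --log options); all paths are illustrative:

docker run --detach \
  --publish 52773:52773 \
  --volume /data/dur:/dur \
  --env ISC_DATA_DIRECTORY=/dur/iconfig \
  --name iris \
  intersystems/iris:2018.1.1.633.0 \
  --key /dur/key/iris.key \
  --log /usr/irissys/mgr/messages.log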
Running an InterSystems IRIS Container: Script Example
The following script was written to quickly create and start an InterSystems IRIS container for testing purposes. The script incorporates the password change method involving the iris-main --password-file option (InterSystems IRIS 2018.1.2 containers only) discussed in Changing the InterSystems IRIS Password, as well as the iris-main --key option to copy in the license key, as described in License Keys for InterSystems IRIS Containers.
#!/bin/bash
# script for quick demo and quick IRIS image testing

# Definitions to toggle_________________________________________
container_image="intersystems/iris:2018.1.2.633.0"

# checks______________________________
if [ -z "$1" ]
  then
    echo "No argument supplied as password"
    exit
fi

# set instance password
echo $1 > /isc/isc.pwd

# the docker run command
docker run -d \
  -p 9091:51773 \
  -p 9092:52773 \
  -p 9093:53773 \
  -v /isc:/ISC \
  -h iris \
  --name iris \
  --init \
  --cap-add IPC_LOCK \
  $container_image \
  --key /ISC/iris.key \
  --password-file /ISC/isc.pwd \
  --log /usr/irissys/mgr/messages.log
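Assuming the script is saved as run_iris.sh (a hypothetical name), the license key is first staged in /isc and the script is then invoked with the desired password as its single argument:

# stage the license key where the script expects it, then run the script
cp /path/to/iris.key /isc/iris.key
chmod +x run_iris.sh
./run_iris.sh 'MyNewPassword'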
Running an InterSystems IRIS Container: Docker Compose Example
Docker Compose, a tool for defining and running multicontainer Docker applications, offers an alternative to command-line interaction with Docker. To use Compose, you create a docker-compose.yml containing specifications for the containers you want to create, start, and manage, then use the docker-compose command. For more information, start with Overview of Docker Compose in the Docker documentation.
The following is an example of a docker-compose.yml file. Like the preceding script, it incorporates only elements discussed in this document.
version: '3.2'

services:
  iris:
    image: intersystems/iris:2018.1.1.633.0
    command: --key /ISC/iris.key --password-file /ISC/pwd.isc
    hostname: iris

    ports:
    # 51773 is the superserver default port
    - "9091:51773"
    # 52773 is the webserver/management portal port
    - "9092:52773"

    volumes:
    - /isc:/ISC

    environment:
    - ISC_DATA_DIRECTORY=/ISC/iris_conf.d
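With this file in place, the standard Compose commands create and start the service, show its output, and remove it when finished:

# create and start the iris service in the background
docker-compose up -d

# view the service's log output
docker-compose logs iris

# stop and remove the service's container when finished
docker-compose down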
Upgrading InterSystems IRIS Containers
As described earlier in this document (see Container Basics), upgrading or otherwise modifying a containerized application means removing the existing container and creating and starting a new one from a different image with the docker run command. Continuity between the old and new containers is maintained not by any inherent association between them, but by the interactions established with the environment outside the container, such as the ports published to the host, the network the container is connected to, and the external storage locations mounted inside the container to persist application data.
Upgrading InterSystems IRIS Containers with Durable %SYS
For InterSystems IRIS, the durable %SYS feature for persisting instance-specific data is used to enable upgrades. As long as the instance in the upgraded container uses the original instance’s durable %SYS storage location and has the same network location, it effectively replaces the original instance, upgrading InterSystems IRIS. If the version of the instance-specific data does not match the version of the new instance, durable %SYS upgrades it to the instance’s version as needed. (For more information about Durable %SYS, see Durable %SYS for Persistent Instance Data.)
Typically, the upgrade command is identical to the command used to run the original container, except for the image tag. In the following docker run command, only the <version number> portion would change between the docker run command that created the original container and the one that creates the upgraded container:
docker run --name iris --publish 9091:51773 --publish 9092:52773 --publish 9093:53773 \
--volume /data/durable:/dur --env ISC_DATA_DIRECTORY=/dur/iconfig \
intersystems/iris:<version number> --key /dur/key/iris.key
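Because container names must be unique on a host, the original container is typically stopped and removed before the command above is run with the new image tag; its durable %SYS data remains in place on the mounted volume. For example:

# stop and remove the existing container; the durable %SYS data under /data/durable is untouched
docker stop iris
docker rm iris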
Upgrading When Manual Startup is Required
When durable %SYS detects that an instance being upgraded did not shut down cleanly, it prevents the upgrade from continuing. This is because WIJ and journal recovery must be done manually when starting such an instance to ensure data integrity. To correct this, you must use the procedures outlined in Starting InterSystems IRIS Without Automatic WIJ and Journal Recovery in the “Backup and Restore” chapter of the Data Integrity Guide to start the instance and then shut it down cleanly. If the container is running, you can do this by executing the command docker exec -it container_name bash to open a shell inside the container and following the outlined procedures. If the container is stopped, however, you cannot start it without automatically restarting the instance, which could damage data integrity, and you cannot open a shell. In this situation, use the following procedure to achieve a clean shutdown before restarting the container:
  1. Create a duplicate container using the same command you used to create the original, including specifying the same durable %SYS location and the same image, but adding the iris-main --up 0 option (see The iris-main Program). This option prevents automatic startup of the instance when the container starts.
  2. Execute the command docker exec -it container_name bash to open a shell inside the container, and follow the procedures outlined in Starting InterSystems IRIS Without Automatic WIJ and Journal Recovery to start the instance with manual recovery.
  3. When recovery and startup are complete, shut down the instance using iris stop instance_name.
  4. Start your original container. Because it uses the durable %SYS data that you safely recovered in the duplicate container, normal startup is safe.
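A sketch of steps 1 and 2, assuming the original container was created with the upgrade example shown above (the duplicate container name is illustrative):

# 1. create a duplicate container from the same image and durable %SYS location,
#    preventing automatic startup of the instance
docker run --detach --name iris_recover \
  --volume /data/durable:/dur \
  --env ISC_DATA_DIRECTORY=/dur/iconfig \
  intersystems/iris:<version number> \
  --up 0

# 2. open a shell in the duplicate container to perform manual recovery
docker exec -it iris_recover bash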
Additional Docker/InterSystems IRIS Considerations
This section describes some additional considerations to bear in mind when creating and running InterSystems IRIS container images.
Docker Storage Driver
Docker supports a number of different storage drivers to manage images. As of this writing, InterSystems supports only the devicemapper storage driver for running InterSystems IRIS in production, and you must configure the Docker daemon to use this driver; be sure to read Docker’s explanation of using the devicemapper storage driver in direct-lvm mode for management of container images. The Docker documentation explains how to determine which storage driver your OS uses by default and how to change the driver if need be.
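On most distributions the driver can be set in the Docker daemon configuration file; the following is a minimal sketch for /etc/docker/daemon.json, in which the block device named for direct-lvm mode (/dev/xvdb here) is an assumption you must replace with your own:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdb"
  ]
}

After editing the file, restart the Docker daemon (for example, with systemctl restart docker) for the change to take effect.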
Before using InterSystems-provided container images on Windows and macOS platforms, you must configure aufs as the storage driver; for more information, see Using InterSystems IRIS Containers with Docker for Windows on InterSystems Developer Community.
Locating Image Storage on a Separate Partition
The default storage location for Docker container images is /var/lib/docker. Because this is part of the root file system, you might find it useful to mount it on a separate partition, both to avoid running out of storage and to protect against file system corruption, from which both Docker and the OS might have trouble recovering. For example, SUSE states: “It is recommended to have /var/lib/docker mounted on a separate partition or volume to not affect the Docker host operating system in case of file system corruption.”
A good approach is to point the Docker Engine’s storage setting at this alternative partition or volume. For example, on Fedora-based distributions, edit the Docker daemon configuration (see Configure and troubleshoot the Docker daemon in the Docker documentation), locate the ExecStart= command line for the Docker Engine, and add an argument directing image storage to the alternative location.
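On recent Docker Engine versions, an alternative to editing the ExecStart= line is to set the storage location with the data-root key in /etc/docker/daemon.json (the path shown is an assumption) and then restart the daemon:

{
  "data-root": "/mnt/docker-data"
}

After the daemon is restarted (for example, with systemctl restart docker), new images and containers are stored under the specified path; existing data under /var/lib/docker is not moved automatically.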