
Automating Configuration of InterSystems IRIS with Configuration Merge

This document explains how to use configuration merge to deploy or reconfigure InterSystems IRIS.

What is configuration merge?

The configuration merge feature lets you make as many changes as you wish to the configuration of any InterSystems IRIS® instance in a single operation. To use it, you simply record the changes you want to make in a declarative configuration merge file and apply that file to the instance, either when it is deployed or at any later point. Configuration merge can easily be used to automatically deploy multiple instances with varying configurations from the same container image or kit, as well as to simultaneously reconfigure multiple running instances, enabling automated reconfiguration of clusters or other multi-instance deployments. Configuration merge can be used with any InterSystems IRIS instance, containerized or locally installed, on any supported UNIX® or Linux platform, including Linux cloud nodes.

For examples of using Docker Compose to deploy containers with configuration merge, including creating a namespace and database and deploying out-of-the-box InterSystems IRIS topologies such as mirrors and sharded clusters, see Useful Parameters in Automated Deployment. The Configuration Parameter File Reference contains a comprehensive description of all InterSystems IRIS configuration parameters.

How is InterSystems IRIS configured?

The configuration of an InterSystems IRIS instance is determined by a file in its installation directory named iris.cpf, which contains configuration parameters as name/value pairs. Every time the instance starts, including for the first time after it is deployed, it reads this configuration parameter file, or CPF, to obtain the values for these settings. This allows you to reconfigure an instance at any time by modifying its CPF and then restarting it.

For example, the globals setting in the [config] section of the CPF determines the size of the instance’s database cache. The setting in the CPF of a newly installed instance specifies an initial cache size equal to 25% of total system memory. To change this setting (which is not intended for production use), you can open the instance’s CPF in any text editor and specify the desired cache size as the value of globals, then restart the instance. Most parameters can be changed using other methods; for example, you can also modify the value of globals using the Management Portal or using the methods in the persistent class Config.config. Updating an instance’s CPF, however, is the only general mechanism that lets you make multiple configuration changes to an instance in a single operation and automate the simultaneous configuration of multiple instances.
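For instance, after such an edit the [config] section might contain a line like the following. The value is illustrative only; globals takes a comma-separated list of allocations (in MB) by database block size, with the third position covering the standard 8 KB blocks, so check the globals entry in the Configuration Parameter File Reference before setting it.

```ini
[config]
globals=0,0,4096,0,0,0
```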

For an overview of the use and contents of the CPF, see the Configuration Parameter File Reference.

How does configuration merge work?

A configuration merge file is a partial CPF that contains any desired subset of InterSystems IRIS configuration parameters and values. When a merge file is applied to an instance with configuration merge, those settings are merged into the instance’s CPF, replacing the values, as if you had edited the CPF and changed the values manually. If a parameter in the merge file is not present in the original CPF, it is simply added in the appropriate place.

For example, the data and compute nodes in a sharded cluster typically require a database cache that is much larger than is generally configured for other purposes, and more shared memory as well. To configure an instance with a larger database cache and more shared memory when it is deployed, or to reconfigure an existing instance this way, you can apply a configuration merge file that includes the globals parameter (which specifies the size of the database cache) and the gmheap parameter (which specifies the amount of shared memory) with the desired values; these replace the existing values in the CPF of the instance. The following illustrates the use of a merge file to update both parameters when deploying an instance:

Merge File Updates Memory Settings During Deployment
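A merge file for this purpose could be as small as the following sketch. The values are illustrative only; see the globals and gmheap entries in the Configuration Parameter File Reference for their formats, units, and limits.

```ini
[config]
globals=0,0,8000,0,0,0
gmheap=819200
```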

What can I do with configuration merge?

There are two primary uses for configuration merge:

  • Deploying new instances with the desired configurations, directly from the same image or kit.

  • Reconfiguring one or more existing instances, up to and including an entire cluster or other multi-instance topology.

Regardless of the specific application of the configuration merge feature, InterSystems recommends keeping the merge files involved under version control to provide a record of all configuration changes, from deployment forward, over the life of an instance or a multi-instance topology.

When you incorporate configuration merge into your automated deployment or reconfiguration process, you can update the process by simply updating the merge files applied. Even in the case of individual instances used for purposes such as development and testing, users can be required to get the latest version of the appropriate merge file before deploying or reconfiguring an instance, ensuring that its configuration matches a central specification. With version control, they can even reconfigure to an older standard by selecting a previous version of the merge file.

How do I use configuration merge in deployment?

Applying a configuration merge file during deployment lets you modify the default configurations of the deployed instance before it starts for the first time. This enables you to deploy containers with varying configurations from the same image, or install differently-configured instances from the same kit, directly into a multi-instance topology, rather than having to configure the instances into the desired topology after deployment. For example, in automated containerized deployment of a sharded cluster with compute nodes, you can apply different merge files for data node 1, the remaining data nodes, and the compute nodes in that order, as shown in the following illustration; when all of the instances are up and running, so is the sharded cluster.

Automated Deployment of a Sharded Cluster Using Configuration Merge
An InterSystems IRIS image is modified by three different merge files to deploy containers as the three node types of a sharded cluster

In similar fashion, when deploying a mirror, you would apply different configuration merge files for the primary, backup, and async members. Even a mirrored sharded cluster is easily deployed using this approach.

Activating configuration merge during deployment requires only that the location of the merge file be specified by the ISC_CPF_MERGE_FILE environment variable, or by the field used for that purpose in one of the InterSystems IRIS automated deployment tools, InterSystems Cloud Manager (ICM) or the InterSystems Kubernetes Operator (IKO).
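In manual or scripted deployment, for example, you set the environment variable before launching the container or installer; the staging path here is hypothetical:

```shell
# Hypothetical path: the merge file staged on a volume the instance can read
export ISC_CPF_MERGE_FILE=/durable/merge/CMF.cpf
```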


In deployment by InterSystems Cloud Manager:

"UserCPF": "/Samples/cpf/iris.cpf"

The specific manner in which you do this depends on the deployment mechanism you are using and whether the instance is containerized or noncontainerized.

Deploying an InterSystems IRIS container with a merge file

When deploying an InterSystems IRIS container, the environment variable can be included in the script or docker-compose.yml file you are using. Examples are shown below.

  • In the following sample deployment script, the merge file specified by ISC_CPF_MERGE_FILE, as well as the license key, are staged on the external volume specified for durable %SYS by ISC_DATA_DIRECTORY so that they are accessible inside the container. Note the line setting ISC_CPF_MERGE_FILE.

    # script for quick demo and quick InterSystems IRIS image testing
    # Definitions to toggle_________________________________________
    # the docker run command
    # substitute the InterSystems IRIS image name and tag you are using
    docker run -d \
      -p 9091:1972 \
      -p 9092:52773 \
      -v /data/durable263:/durable \
      -h iris \
      --name iris \
      --cap-add IPC_LOCK \
      --env ISC_DATA_DIRECTORY=/durable/irisdata \
      --env ISC_CPF_MERGE_FILE=/durable/merge/CMF.cpf \
      intersystems/iris:latest \
      --key /durable/key/iris.key
  • This sample docker-compose.yml file uses the iris-main --license-config option to configure the instance as a license server for the licenses staged on the external volume, but otherwise contains the same elements as the deployment script; again, note the line setting ISC_CPF_MERGE_FILE.

    version: '3.2'
    services:
      iris:
        image: intersystems/iris-arm64:2021.
        command: --license-config "4691540832 iris,4002,/durable/licenses"
        hostname: iris
        ports:
          # 1972 is the superserver default port
          - "9091:1972"
          # 52773 is the webserver/management portal port
          - "9092:52773"
        volumes:
          - /data/durable263:/durable
        environment:
          - ISC_DATA_DIRECTORY=/durable/irisdata
          - ISC_CPF_MERGE_FILE=/durable/merge/CMF.cpf

For examples of use cases for automated deployment using configuration merge, see Useful parameters in automated deployment.

Installing InterSystems IRIS from a kit with a merge file

When installing InterSystems IRIS from a kit, specify a merge file by setting the ISC_CPF_MERGE_FILE environment variable on the same command line as the irisinstall command. For example, as a user with root privileges:

# ISC_CPF_MERGE_FILE=/tmp/irismerge/CMF.cpf /tmp/iriskit/irisinstall

The procedure is the same for an unattended installation using the irisinstall_silent script, on whose command line the required ISC_PACKAGE environment variables must also be specified. For example, as a user without root privileges:

sudo ISC_CPF_MERGE_FILE=/tmp/irismerge/CMF.cpf ISC_PACKAGE_INSTANCENAME=IRIS ISC_PACKAGE_INSTALLDIR=/usr/irissys /tmp/iriskit/irisinstall_silent

Using a merge file when deploying with InterSystems Cloud Manager

InterSystems Cloud Manager (ICM) is the InterSystems IRIS end-to-end provisioning and deployment solution. Using ICM, you can provision infrastructure and deploy containerized InterSystems IRIS-based services on public cloud platforms such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure, in your private VMware vSphere cloud, or on existing virtual or hardware systems. ICM deploys your custom and third-party containers alongside those from InterSystems, and can also do containerless installs from InterSystems IRIS kits.

ICM makes the configuration merge functionality available through the UserCPF property, which you can include in either of your configuration files (defaults.json or definitions.json) to specify the location of the merge file to apply. If you include UserCPF in the defaults file, the same merge file is applied to all instances deployed; by adding it to the node definitions in definitions.json, you can apply different merge files to different node types.
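For example, a defaults.json entry might point at a staged merge file, as in the following fragment, shown here without the other fields a defaults file contains; the path is the sample path used earlier in this document:

```json
{
    "UserCPF": "/Samples/cpf/iris.cpf"
}
```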

For more information about using configuration merge with ICM, see Deploying with Customized InterSystems IRIS Configurations in the InterSystems Cloud Manager Guide.

Using a merge file when deploying with the InterSystems Kubernetes Operator

Kubernetes is an open-source orchestration engine for automating deployment, scaling, and management of containerized workloads and services. The InterSystems Kubernetes Operator (IKO) extends the Kubernetes API with the IrisCluster custom resource, which can be deployed as an InterSystems IRIS sharded cluster, distributed cache cluster, or standalone instance (all optionally mirrored) on any Kubernetes platform. The IKO also adds InterSystems IRIS-specific cluster management capabilities to Kubernetes, enabling automation of tasks like adding nodes to a cluster, which you would otherwise have to do manually by interacting directly with the instances.

When deploying with the IKO, you use a Kubernetes ConfigMap to integrate one or more merge files into the deployment process. For detailed information, see Create configuration files and provide a config map for them in Using the InterSystems Kubernetes Operator.
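As a sketch, a merge file is wrapped in an ordinary Kubernetes ConfigMap like the following; the ConfigMap and key names here are hypothetical, the globals value is illustrative, and the IrisCluster field that references the ConfigMap is described in the linked IKO documentation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: iris-cpf-merge
data:
  merge.cpf: |
    [config]
    globals=0,0,8000,0,0,0
```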

How do I reconfigure an existing instance using configuration merge?

By automating the application of the same merge file to multiple running instances, you can simultaneously reconfigure all of those instances in the same way, applying a single set of configuration changes across your application or cluster. By including in the merge file only the settings you know are safe and desirable to change, and omitting any that may have been customized on a per-instance basis, you avoid overwriting per-instance customizations. A single automated program can of course apply different merge files to different groups of instances (such as different mirror member or cluster node types), as described in the previous section.

Applying all configuration changes with a merge file helps you streamline the process of making changes and maintain greater control over the instance’s configuration. Rather than making numerous individual changes from the Terminal, on multiple pages of the Management Portal, or by editing an instance’s CPF manually, you can execute all the changes at once using identical syntax in a merge file. By keeping your merge files under version control, you ensure the availability of configuration history and the option of restoring a previous configuration.

The iris merge command applies a merge file to a running instance. It is executed as follows:

iris merge instance [merge-file] [existing-CPF]


  • instance is the name of the InterSystems IRIS instance.

  • merge-file is the location of the merge file. If not specified, the value of the ISC_CPF_MERGE_FILE environment variable is used, if it exists.

  • existing-CPF is the location of the target CPF for the merge. If not specified, this uses the iris.cpf file located (for installed instances) in the installation directory or (for containers) in the directory specified by the ISC_DATA_DIRECTORY environment variable (or ISC_PACKAGE_INSTALLDIR if durable %SYS and ISC_DATA_DIRECTORY are not in use).

If the specified merge file is not present, or the merge-file argument is omitted and ISC_CPF_MERGE_FILE does not exist, the command fails with an error message.
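A typical invocation, with a hypothetical instance name and the merge-file path used in the restart example later in this section, might look like this sketch:

```shell
iris merge IRIS /net/merge_files/config_merge.cpf
```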

Some changes merged into a CPF will not take effect immediately, but require a restart. For example, a change in the value of the gmheap parameter, which determines the size of the instance’s generic memory heap (also known as the shared memory heap), does not take effect until the instance is restarted. When your merge file contains one or more such parameters, you may need to apply the merge file as part of a restart, as in the following sample script excerpt:

# restart instance with the necessary parameters (all on one line)
sudo ISC_CPF_MERGE_FILE=/net/merge_files/config_merge.cpf iris stop IRIS restart
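A merge file used this way might contain only the restart-requiring parameter, as in the following sketch; the value is illustrative, so see the gmheap entry in the Configuration Parameter File Reference for its units and limits:

```ini
[config]
gmheap=819200
```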

On the other hand, applying a merge file with the iris merge command lets you immediately change settings that do not require a restart, including those that cannot be set during instance startup. A notable example of the latter is adding a new database to an existing mirror. Only the primary mirror member can initiate this action, yet both failover members come up as backup members after a restart, with one eventually becoming primary. The iris merge command makes it possible to use configuration merge to add a new database to a running mirror; for a link to an example, see Create, Modify, and Delete Database-related Objects and Security Items.

When creating a custom InterSystems IRIS container image by starting with an InterSystems IRIS image from InterSystems and adding your own code and dependencies, as described in Creating InterSystems IRIS Images in Running InterSystems Products in Containers, it is very useful to execute the iris merge command in the Dockerfile to reconfigure the instance contained in the InterSystems image. For example, you can include the iris merge command and a merge file with [Actions] parameters to add namespaces and databases to the instance, which will be present in every container created from your custom image.

Managing configuration changes

In addition to the use of configuration merge in deployment or with an existing instance (through the iris merge command or during a restart), an instance’s CPF can be altered using the Management Portal, the Config.* classes, or a text editor. These methods are generally used for modifying individual settings on individual instances as needed, rather than for reconfiguring multiple instances. If you use configuration merge to automatically deploy and/or reconfigure multiple instances, the strongly recommended best practice is to confine all configuration changes to this method, even when this means, for example, using iris merge to change just one or two parameters on one instance. That way, assuming you version and store the merge files you employ, you can maintain a record of the configuration of each instance over time and avoid the possibility of configuration merge overwriting changes made by other means.

In a container, the potential for the latter exists because the ISC_CPF_MERGE_FILE environment variable persists within the container. If the variable is allowed to persist and the merge file it identifies exists, configuration merge applies that file to the instance each time the instance restarts. This lets you use configuration merge and a central repository of merge files to apply further changes to existing instances by updating the merge file and restarting. However, if any parameter included in the merge file has been changed on the instance by another method since deployment, the merge can erase that change, of which there may be no record; confining all configuration changes to configuration merge avoids this. (If the merge file does not exist, startup displays an error message and continues.)

If you do not confine changes to configuration merge, you can avoid the possibility of configuration merge making unwanted changes by including in your automation (using, for example, the iris-main --after option) the scripting of either or both of the following after instance startup:

  • The deletion of the ISC_CPF_MERGE_FILE environment variable in each deployed container.

  • The replacement of the merge file in each container with an empty file.


When ICM uses the configuration merge file specified by the UserCPF parameter to customize containers it deploys, as described in Deploying with Customized InterSystems IRIS Configurations in the ICM Guide, it automatically removes the merge file from the container after deployment, but does not delete the ISC_CPF_MERGE_FILE environment variable.

Useful parameters in automated deployment

The configuration merge feature can be used to update any combination of settings in an instance’s CPF and to execute certain operations on the instance, as specified in the [Actions] section. Several automated deployment use cases that you may find especially useful, and that illustrate the power of the configuration merge feature, are discussed in this section along with the parameters involved: changing the default password, configuring and allocating memory, configuring SQL options and datatype mappings, creating database-related objects and security items, and deploying mirrors and sharded clusters.

You can find corresponding Docker Compose examples of container deployment with configuration merge in the intersystems-community/configuration-merge-file-2020.4 repo on GitHub.


The examples provided on GitHub are for learning and testing only, and not for production use.

Each parameter name in the descriptions that follow is linked to its listing in the Configuration Parameter File Reference so you can easily review the parameter’s purpose and details of its usage; further links to relevant documentation are also provided.

Change the Default Password

As described in Authentication and Passwords in Running InterSystems Products in Containers, you can use the PasswordHash setting in the [Startup] section to customize the default password of the predefined accounts on an instance at deployment, which eliminates the serious security risk entailed in allowing the default password of SYS to remain in effect. (The password of each predefined account should be individually changed following deployment.)
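In a merge file this looks like the following sketch. The value is a placeholder: the parameter takes the salted hash of the password, never the plaintext password, so generate the real value as described in Authentication and Passwords.

```ini
[Startup]
PasswordHash=<salted-password-hash>
```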

For an example, see 1-Configuration-Memory_Buffers_params on GitHub.

Configure and Allocate Memory

There are a number of parameters affecting an InterSystems IRIS instance’s memory usage, the optimal value of which can depend on the physical memory available, the instance’s role within the cluster, the workload involved, and performance requirements.

For example, the optimal size of an instance’s database cache, which can be specified using the globals parameter, can vary greatly depending on the instance’s role; as noted above, sharded cluster data nodes typically require a relatively large cache. But even within that role, the optimal size depends on the size of the cluster’s sharded data set, and the implemented size may be smaller than optimal due to resource constraints. (For more information, see Planning an InterSystems IRIS Sharded Cluster in the Scalability Guide.) Further, because the database cache should be carefully sized in general, the default database cache setting (the value of globals in the iris.cpf file provided in the container) is intentionally unsuitable for any production environment, regardless of the instance’s role.

Memory usage settings in the [Config] section of the CPF that you might want to update as part of deployment are listed in the following table:

Memory Parameters
[Config] Parameter Specifies
bbsiz Maximum process private memory per process
globals Shared memory allocated to the database cache (not from generic memory heap)
routines Shared memory allocated to the routine cache (not from generic memory heap)
gmheap Shared memory configured as the generic memory heap
memlock Shared memory allocation and locking behavior
jrnbufs Shared memory allocated to journal buffers from the generic memory heap
locksiz Maximum shared memory allocated to locks from the generic memory heap

For an example, see 1-Configuration-Memory_Buffers_params on GitHub; see also Memory and Startup Settings, Configuring Journal Settings, Monitoring Locks.

Configure SQL and SQL Shell Options and Map SQL Datatypes

You can specify the SQL and SQL Shell settings for instances you are deploying by merging one or more of the parameters in the [SQL] section of the CPF. You can map SQL system datatypes and SQL user datatypes to their InterSystems IRIS data platform equivalents on deployed instances using the [SqlSysDatatypes] and [SqlUserDatatypes] sections of the CPF, respectively.
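For example, a merge file might adjust the SQL lock timeout, as in this sketch; verify the parameter name LockTimeout and its value range against the [SQL] section of the Configuration Parameter File Reference:

```ini
[SQL]
LockTimeout=30
```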

For an example, see 2-Configuration-SQL_params on GitHub; see also Configuring the SQL Shell.

Create, Modify, and Delete Database-related Objects and Security Items

In addition to updating the default configuration settings, you may want to create, modify, or delete database-related objects or security items on an instance as part of deployment or reconfiguration. An example, described in How do I reconfigure an existing instance using configuration merge?, is adding a database to a newly deployed or existing mirror. To this end, configuration merge recognizes a section in the merge file, [Actions], that does not exist in the CPF, and executes the operations it contains.

The operations specified in [Actions] are executed without recording them in the CPF, and only if they have not been executed on the instance — for instance, if an object to be created already exists, the operation is skipped — making them idempotent. For example, if you specify in a merge file the creation of a database called MYAPPDB, and later repeat the merge with the same file (as discussed in Managing configuration changes), a second MYAPPDB is not created.
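Continuing the MYAPPDB example, such a merge file might look like the following sketch; the directory path and namespace name are hypothetical, and the exact action syntax is documented under [Actions] in the Configuration Parameter File Reference:

```ini
[Actions]
CreateDatabase:Name=MYAPPDB,Directory=/usr/irissys/mgr/myappdb
CreateNamespace:Name=MYAPP,Globals=MYAPPDB,Routines=MYAPPDB
```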

Include the operations described in the following table in the [Actions] section to create databases (both local and remote) and namespaces as part of deployment. Examples of the many situations in which you might want to do this include the following:

  • When you are deploying a mirror failover pair (or multiple pairs), you can create the same set of mirrored databases (and namespaces for them) on the primary and the backup. When you do this, the mirror is fully operational immediately following deployment.

  • When you are deploying a distributed cache cluster, you can deploy the data server with one or more local databases (and namespaces for them), then deploy the application servers with the data server configured as a remote server (using ECPServers) and remote databases corresponding to the databases you created on it, along with local namespaces for them. As with the mirror example, the distributed cache cluster configured in this way is fully operational immediately after deployment.

    You can also combine this use case with the preceding one to create a distributed cache cluster with a mirrored data server.

  • When you are creating a custom InterSystems IRIS container image from one provided by InterSystems, as described in Creating InterSystems IRIS Images in Running InterSystems Products in Containers, you can execute the iris merge command in the Dockerfile to add namespaces and databases to the instance contained in the InterSystems image, so that they will be present in every container created from your custom image. (Such user-defined databases are included in the instance-specific data cloned to durable storage by the durable %SYS feature, as long as the database files are writable.)

Actions that can currently be executed by configuration merge let you create, delete, and modify the following:

  • Databases, namespaces, global and routine mappings

  • Audit events

  • Users, roles, resources

For a complete list and links to full descriptions of the actions, see [Actions] in the Configuration Parameter File Reference.

For an example, see 3-Configuration-Databases_and_Namespaces on GitHub; see also Configuring Namespaces, Local Databases, Remote Databases, Deploy the Distributed Cache Cluster Using the Management Portal, About InterSystems Authorization, Basic Auditing Concepts.

Deploy Mirrors

You can deploy one or more InterSystems IRIS mirrors by calling separate configuration merge files for the different mirror roles, sequentially deploying the first failover member(s), then the second failover member(s), then DR async members. (Reporting async members must be added to the mirror after deployment.) The parameters required are shown in the following table.

You can also automatically deploy multiple failover pairs, or deploy multiple backups for existing primaries, if the deployment hosts have names ending in -nnnn (or, as a regular expression, .*-[0-9]+$), for example iris-1, iris-2, iris-3 ... . In this case you can use a single merge file for both primaries and backups, then use another to deploy any DR async members following automatic deployment of the failover pairs.

Mirroring Parameters
[Startup] Parameter Specifies
Mirror member role (primary failover member, backup failover member, or DR async)
Automatic failover pairs using hostname matching (primary/backup specification not required; use one merge file for both roles)
Name of the new mirror (when deploying the primary)
Name of mirror the member is joining (when deploying the backup or a DR async)
Name or IP address of primary’s host (when deploying the backup)
Automatic backups for existing primaries using hostname matching
MirrorSSLDir Location of the mirror SSL configuration for the instance.
Location of arbiter to be configured for mirror (when deploying the primary)
Location of arbiter configured for existing primary (when deploying the backup or a DR async)

For examples, see 4-Architecture-Mirror_Members and 6-Architecture-Mirrored_Shard on GitHub; see also Mirroring Architecture and Planning, Configuring Mirroring in the High Availability Guide.

Be sure to read Mirroring with InterSystems IRIS Containers in Running InterSystems Products in Containers before planning containerized deployment of mirrors, or reconfiguring existing containerized instances into mirrors. Among other important considerations, you must ensure that the ISCAgent starts whenever the container for a failover or DR async mirror member starts.

Deploy Sharded Clusters

You can deploy one or more InterSystems IRIS sharded clusters by calling separate configuration merge files for the different node types, sequentially deploying data node 1, then the remaining data nodes, then (optionally) the compute nodes. (Because the instance configured as data node 1 must be running before the other data nodes can be configured, you must ensure that this instance is deployed and successfully started before other instances are deployed as the remaining data nodes.) The parameters required are shown in the following tables.

You can also automatically deploy a cluster of data nodes, or a mirrored cluster of data nodes, if the deployment hosts have names ending in the specified regular expression (for example .*-[0-9]+$, matching iris-1, iris-2, iris-3 ...), in which case you can use a single merge file for all data nodes including node 1. (Following automatic deployment of the data nodes, you can deploy compute nodes using a separate merge file.)
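As a sketch, a merge file for a data node might enable sharding and set the node’s role, as follows; the ShardRole value shown is an assumption based on the role parameter referenced in the table below, so verify it against the Configuration Parameter File Reference:

```ini
[Startup]
EnableSharding=1
ShardRole=DATA
```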

Sharding Parameters
[Startup] Parameter Specifies
EnableSharding Whether the Sharding service is enabled on the instance
ShardRole The role of the instance in the sharded cluster
Automatic sharded cluster of data nodes only, using hostname matching
ShardClusterURL The existing node to use as a template when adding a data or compute node to the cluster (not used for node 1)
ShardRegexp In automatic deployment of cluster using hostname matching, pattern used to validate names of hosts on which data nodes are to be deployed (ShardRole=AUTO)
ShardMasterRegexp In automatic deployment of cluster using hostname matching, pattern used to identify the host on which node 1 must be deployed (ShardRole=AUTO)
[Config] Parameter Specifies
MaxServerConn Maximum number of concurrent connections from application servers that a data server can accept; must be equal to or greater than the number of nodes in the cluster
MaxServers Maximum number of concurrent connections to data servers that an application server can maintain; must be equal to or greater than the number of nodes in the cluster
globals Shared memory allocated to the database cache; must be significantly larger than the default setting for any sharded cluster node.

For a mirrored sharded cluster, the following parameters are also used.

[Startup] Parameter Specifies
A mirrored sharded cluster
(and either)
The mirror status (primary or backup) of the instance
Automatic mirrored data nodes (failover pairs) using hostname matching (in conjunction with automatic deployment of cluster using hostname matching [ShardRole=AUTO])
ArbiterURL Location of arbiter to be configured for mirrored data nodes

For examples, see 5-Architecture-Shard_Instances and 6-Architecture-Mirrored_Shard on GitHub; see also Deploying the Sharded Cluster, Mirror Data Nodes for High Availability, Deploy Compute Nodes for Workload Separation and Increased Query Throughput in the Scalability Guide.