
Scalability Guide
InterSystems IRIS Scalability Overview

This chapter reviews the scaling features of InterSystems IRIS Data Platform™, and provides guidelines for a first-order evaluation of scaling approaches for your enterprise data platform. Subsequent chapters cover each feature in more detail.
Scaling Matters
What you do matters. Whether you care for ten million patients, process billions of financial orders a day, track a galaxy of stars, or monitor a thousand factory engines, your data platform must not only support your current operations but also scale to meet increasing demands. Each business-specific workload presents a different challenge to the data platform on which it operates, and as a business grows, that challenge becomes even more acute.
For example, consider the two situations in the following illustration:
Comparing Workloads
Both workloads are demanding, but it is hard to say which is more demanding, or how to scale to meet those demands.
We can better understand data platform workloads and what is required to scale them by decomposing them into components that can be independently scaled. One simplified way to break down these workloads is to separate the components of user volume and data volume. A workload can involve many users interacting frequently with a relatively small database, as in the first example, or fewer requests, from what could be a single user or process, against massive datasets, as in the second. By considering user volume and data volume as separate challenges, we can evaluate different scaling options. (This division is a simplification, but one that is useful and easy to work with. There are many complex workloads to which it is not easily applied, such as one involving many small data writers and a few big data consumers.)
Vertical Scaling
The first and most straightforward way of addressing more demanding workloads is by scaling up — that is, taking advantage of vertical scalability. In essence, this means making an individual server more powerful so it can keep up with the workload.
Vertical Scaling
Vertical scaling is generally well understood and architecturally straightforward; with good engineering support, it can help you achieve a finely tuned system that meets the workload’s requirements. It does have its limits, however, as summarized in the table at the end of this chapter.
For more information on vertically scaling InterSystems IRIS Data Platform, see the chapter Vertically Scaling InterSystems IRIS.
Horizontal Scaling
When vertical scaling does not provide the complete solution — for example, when you hit the inevitable hardware (or budget) ceiling — data platforms can also be scaled horizontally by clustering a number of smaller servers. That way, instead of adding specific components to a single expensive server, you can add more modest servers to the cluster to support your workload as volume increases. Typically, this implies dividing the single-server workload into smaller pieces, so that each cluster node can handle a single piece.
The financial advantage of this approach lies in the ability to scale across a range of hardware, from dozens of inexpensive commodity systems to a few high-end servers, or anywhere in between. Whatever class of hardware you choose, the cluster nodes can also be scaled vertically when that helps; for some workloads, a few more powerful machines may be better than many smaller servers. Horizontal scaling also fits very well with virtual and cloud infrastructure, in which additional nodes can be quickly and easily provisioned as the workload grows, and decommissioned if the load decreases.
On the other hand, horizontal clusters require greater attention to the networking component to ensure that it provides sufficient bandwidth for the multiple systems involved. Horizontal scaling also requires significantly more advanced software, such as InterSystems IRIS™, to fully support multinode collaboration. InterSystems IRIS accomplishes this by providing the ability to scale for both increasing user volume and increasing data volume.
Horizontal Scaling for User Volume
How can you scale horizontally when user volume is getting too big to handle with a single system at an acceptable cost? The short answer is to divide the user workload by connecting different users to different cluster nodes that handle their requests.
Dividing the User Workload
You can do this by using a load balancer to distribute users round-robin, but grouping users with similar requests (such as users of a particular application when multiple applications are in use) on the same node is more effective due to distributed caching, in which users can take advantage of each other’s caches.
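As an illustration of why grouping matters, the sketch below contrasts the two approaches. This is hypothetical Python, not InterSystems code; the node names, application names, and assignment scheme are invented for the example. Round-robin spreads sessions evenly regardless of what they work on, while grouping sends users of the same application to the same node, where they can benefit from each other’s cached data.

```python
from itertools import cycle

NODES = ["app-server-1", "app-server-2"]

def round_robin(sessions):
    """sessions: list of (user, application) pairs.
    Distributes users evenly, ignoring which application they use."""
    nodes = cycle(NODES)
    return {user: next(nodes) for user, _ in sessions}

def group_by_application(sessions):
    """Map each application deterministically to one node, so users of
    the same application land together and share that node's cache."""
    apps = sorted({app for _, app in sessions})
    app_node = {app: NODES[i % len(NODES)] for i, app in enumerate(apps)}
    return {user: app_node[app] for user, app in sessions}

sessions = [("alice", "billing"), ("bob", "billing"),
            ("carol", "reports"), ("dave", "reports")]
print(group_by_application(sessions))
# alice and bob share one node's cache; carol and dave share another
```

With round-robin, alice and bob (both billing users) may land on different nodes, each warming a separate cache with the same data; grouping avoids that duplication.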
InterSystems IRIS provides an effective way to accomplish this through ECP, the Enterprise Cache Protocol. By managing a distributed cache, ECP supports an architectural solution that partitions users across a tier of application servers sitting in front of your data server. Each application server handles user queries and transactions using its own cache, while all data is stored on the data server, which automatically keeps the application server caches in sync. Because each application server maintains its own independent working set in its own cache, adding more servers allows you to handle more users.
InterSystems IRIS ECP Configuration
ECP architecture and distributed caching are entirely transparent to the user and the application code.
For more information on horizontally scaling InterSystems IRIS Data Platform for user volume, see the chapter Horizontally Scaling InterSystems IRIS with ECP.
Horizontal Scaling for Data Volume
The data volumes required to meet today’s enterprise needs can be very large. More importantly, if they are queried repeatedly, the working set can get too big to fit into the server’s cache; this means that only part of it can be kept in the cache and disk reads become much more frequent, seriously impacting query performance.
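The effect can be demonstrated with a toy simulation (hypothetical Python, not InterSystems code): an LRU cache serving repeated scans over a dataset. When the working set fits in the cache, most accesses hit; once it exceeds the cache size, a sequential scan evicts each block before it is needed again, so every access falls through to a simulated disk read.

```python
from collections import OrderedDict

def hit_rate(cache_size, working_set, passes=3):
    """Simulate repeated sequential scans of `working_set` blocks
    through an LRU cache holding `cache_size` blocks."""
    cache, hits, total = OrderedDict(), 0, 0
    for _ in range(passes):
        for block in range(working_set):
            total += 1
            if block in cache:
                hits += 1
                cache.move_to_end(block)       # mark as recently used
            else:
                cache[block] = True            # simulated disk read
                if len(cache) > cache_size:
                    cache.popitem(last=False)  # evict least recently used
    return hits / total

print(hit_rate(cache_size=100, working_set=80))   # fits in cache: mostly hits
print(hit_rate(cache_size=100, working_set=200))  # too big: every access misses
```

The second case is the situation described above: only part of the working set can be kept in the cache, and disk reads dominate.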
As with user volume, you can horizontally scale for data volume by dividing the workload among several servers. This is done by partitioning the data.
Partitioning the Data Workload
InterSystems IRIS achieves this through its sharding capability. An InterSystems IRIS sharded cluster partitions data storage, along with the corresponding caches, across a number of servers, providing horizontal scaling for queries and data ingestion while maximizing infrastructure value through highly efficient resource utilization.
In a basic sharded cluster, a sharded table is partitioned horizontally into roughly equal sets of rows called shards, which are distributed across a number of shard data servers. For example, if a table with 100 million rows is partitioned across four shard data servers, each stores a shard containing about 25 million rows. Nonsharded tables reside wholly on the shard master data server.
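The row distribution described above can be sketched as follows. This is a hypothetical Python illustration with row counts scaled down from the 100-million-row example; a real sharded cluster uses its own shard-key algorithm to place rows, not the simple modulo placement shown here.

```python
def shard_for(row_id, num_shards):
    # Simple hash-style placement by row id; invented for illustration.
    return row_id % num_shards

NUM_SHARDS = 4
rows = range(100)  # stand-in for the 100 million rows in the text

# Partition the table horizontally into roughly equal sets of rows.
shards = {s: [] for s in range(NUM_SHARDS)}
for row_id in rows:
    shards[shard_for(row_id, NUM_SHARDS)].append(row_id)

for s, part in shards.items():
    print(f"shard {s}: {len(part)} rows")  # 25 rows on each shard
```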
Queries against a sharded table are decomposed by the shard master data server into multiple shard-local queries to be run in parallel on the shard servers; the results are then combined and returned to the user. This distributed data layout can further be exploited for parallel data loading and with third-party frameworks such as Apache Spark.
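The decompose, run in parallel, and combine pattern can be sketched for a simple SUM aggregate. This is a hypothetical Python illustration; the shard contents are invented, and each list stands in for the rows one shard data server stores.

```python
from concurrent.futures import ThreadPoolExecutor

# Four shards, each holding its slice of a table's numeric column.
shards = [
    [10.0, 20.0],
    [5.0, 15.0],
    [30.0],
    [40.0, 10.0],
]

def shard_local_sum(rows):
    # Each shard computes a partial aggregate over only its own rows.
    return sum(rows)

# The coordinating node fans the query out to all shards in parallel...
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(shard_local_sum, shards))

# ...then combines the partial results into the final answer.
total = sum(partials)
print(total)  # 130.0
```

Aggregates such as COUNT, SUM, MIN, and MAX decompose this way naturally; each shard touches only its own data and cache.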
In addition to parallel processing, sharding improves query performance by partitioning the cache. Each shard data server uses its own cache for shard-local queries against the data it stores, making the cluster’s cache for sharded data roughly as large as the sum of the caches of all the shard data servers. Adding a shard data server means adding dedicated cache for more data.
As with ECP application server architecture, sharding is entirely transparent to the user and the application.
Sharding comes with some additional options that greatly widen the range of solutions available, including shard master application servers, shard query servers, and mirroring.
The following illustration depicts a sharded cluster with shard master application servers, shard query servers, and mirrored shard master data server and shard data servers.
Sharded cluster with application servers, query shards, and mirroring
For more information on horizontally scaling InterSystems IRIS Data Platform for data volume, see the chapter Horizontally Scaling InterSystems IRIS with Sharding.
Using InterSystems Cloud Manager to Deploy Horizontally Scaled Configurations
InterSystems recommends using InterSystems Cloud Manager (ICM) to deploy InterSystems IRIS, including both ECP and sharded configurations. By combining plain-text declarative configuration files, a simple command-line interface, the widely used Terraform infrastructure-as-code tool, and InterSystems IRIS deployment in Docker containers, ICM provides you with a simple, intuitive way to provision cloud or virtual infrastructure and deploy the desired InterSystems IRIS architecture on that infrastructure, along with other services. ICM can significantly simplify the deployment process, especially for complex horizontal cluster configurations.
ICM also allows you to conveniently add Apache Spark capabilities to an ICM-deployed sharded cluster and other InterSystems IRIS configurations. In deploying Spark, ICM creates a Spark framework corresponding to the deployment by starting Spark slaves on the DS nodes and a Spark master on the DM node, all preconfigured to connect to the InterSystems IRIS containers running on those nodes.
For more information on using ICM to deploy InterSystems IRIS, see the InterSystems Cloud Manager Guide.
Evaluating Your Workload for InterSystems IRIS Scaling Solutions
The subsequent chapters of this guide cover the individual scalability features of InterSystems IRIS in detail, and you should consult these before beginning the process of scaling your data platform. However, the table below summarizes the overview in this chapter, and provides some general guidelines concerning the scaling approach that might be of the most benefit in your current circumstances.
InterSystems IRIS Scaling Solutions
Vertical scaling
  Workloads and possible solutions:
  • High multiuser query volume (insufficient computing power; throughput inadequate for query volume): add CPU cores, and use parallel query execution to leverage high core counts for queries spanning a large dataset.
  • High data volume (insufficient memory; database cache inadequate for working set): add memory and increase the database cache size to leverage it, and use parallel query execution to leverage high core counts.
  • Other insufficient capacity (bottlenecks in other areas, such as network bandwidth): increase the resources causing the bottlenecks.
  Pros:
  • Architectural simplicity
  • Hardware finely tuned to workload
  Cons:
  • Nonlinear price/performance ratio
  • Persistent hardware limitations
  • Careful initial sizing required
  • One-way scaling only
Horizontal scaling for user volume
  Workload and possible solution:
  • High multiuser query volume (frequent queries from a large number of users): deploy an ECP application server configuration (distributed caching).
  Pros:
  • More linear price/performance ratio
  • Can leverage commodity, virtual, and cloud-based systems
  • Elastic scaling
  Cons:
  • Emphasis on networking
Horizontal scaling for data volume
  Workload and possible solution:
  • High data volume (some combination of contributing factors): deploy a sharded cluster (partitioned data and partitioned caching), possibly adding:
    • Shard query servers, to separate queries from data ingestion and increase throughput
    • Shard master application servers, to apply distributed caching to multiuser query volume