Scalability Guide
Vertically Scaling InterSystems IRIS


Scaling a system vertically by increasing its capacity and resources is a common, well-understood practice. Recognizing this, InterSystems IRIS™ includes a number of built-in capabilities that help you take advantage of the added capacity. Some operate transparently, while others require specific adjustments on your part to realize their full benefit.
This chapter discusses how to calculate the memory and CPU requirements of a server hosting an InterSystems IRIS instance and application, both as an initial estimate and as refined by benchmarking and load testing results and information from existing sites. It also explains how to take the best advantage of vertical scaling when you increase system memory or the CPU core count. In some cases, you may use these guidelines to evaluate whether a system chosen on other criteria (such as corporate standards or cloud budget limits) is roughly sufficient to handle your workload requirements; in others, you may use them to plan the system you need based on those requirements.
Memory Management and Scaling for InterSystems IRIS
Memory management is a critical element in optimizing performance. For basic information on memory allocation and management in InterSystems IRIS, see Managing InterSystems IRIS Memory in the “Preparing to Install” chapter of the Installation Guide.
Memory Overview
Generally, there are four main consumers of memory on a server hosting an InterSystems IRIS instance. At a high level, you can calculate the amount of physical memory required by simply adding up the requirements of each of the items on the following list. All of them use real memory, but they can also use virtual memory; a key part of capacity planning is to size a system so that there is enough physical memory to avoid paging.
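As a rough illustration of that arithmetic, the sketch below simply adds up a set of memory consumers. The categories and figures are placeholder assumptions to be replaced with your own estimates for the operating system, instance, and workload; they are not InterSystems recommendations.

```python
# Rough physical-memory estimate for a host running one InterSystems IRIS instance.
# The categories and figures below are illustrative assumptions, not recommendations;
# substitute estimates derived from your own operating system, instance, and workload.

memory_gb = {
    "operating_system_and_services": 4,   # OS, file system cache, monitoring agents
    "iris_process_memory": 8,             # per-process memory for the expected process count
    "iris_shared_memory": 48,             # database cache + routine cache + generic memory heap
    "other_server_processes": 4,          # web server, backup tools, and so on
}

total_gb = sum(memory_gb.values())
headroom = 1.2  # hypothetical ~20% margin so the working set never spills into paging
print(f"Estimated physical memory: {total_gb} GB; provision roughly {total_gb * headroom:.0f} GB")
```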
Calculating Initial Memory Requirements
Of course, every application is different and any given system may require a series of adjustments to optimize memory use. However, the following provides general rules to use as a basis in sizing memory for your application. Benchmarking and performance load testing the application will further influence your estimate of the ideal memory sizing and parameters.
For information on how to allocate memory to routine and database caches, see Memory and Startup Settings in the “Configuring InterSystems IRIS” chapter of the System Administration Guide. For information on how to allocate memory to the generic memory heap, see gmheap in the Configuration Parameter File Reference and gmheap in the Advanced Memory Settings section of the Additional Configuration Settings Reference.
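To make the allocation step concrete, the following sketch divides a shared-memory budget among the three areas named above. The proportions are purely hypothetical; derive real values from benchmarking and the referenced documentation, then apply them through the Memory and Startup Settings page or the configuration parameter file.

```python
# Hypothetical split of an InterSystems IRIS shared-memory budget into the three
# areas discussed above. The proportions are placeholders, not recommendations.

shared_memory_mb = 48 * 1024          # shared-memory budget from the overall estimate

database_cache_mb = int(shared_memory_mb * 0.85)   # database cache (global buffers)
routine_cache_mb  = int(shared_memory_mb * 0.05)   # routine cache (routine buffers)
gmheap_mb         = int(shared_memory_mb * 0.10)   # generic memory heap (gmheap)

print(f"database cache : {database_cache_mb} MB")
print(f"routine cache  : {routine_cache_mb} MB")
print(f"gmheap         : {gmheap_mb} MB")
```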
Note:
If you are configuring a data server in a distributed cache cluster, see Increase Data Server Database Caches for ECP Control Structures in the “Horizontally Scaling for User Volume with Distributed Caching” chapter of this guide for important information about adjustments to database cache sizes that may be necessary.
Vertically Scaling for Memory
Performance problems in production systems are often due to insufficient memory for application needs. Adding memory to the server hosting one or more InterSystems IRIS instances lets you allocate more to the database cache, the routine cache, generic memory, or some combination. A database cache that is too small to hold the workload’s working set forces queries to fall back to disk, greatly increasing the number of disk reads required and creating a major performance problem, so this is often a primary reason to add memory. Increases in generic memory and the routine cache may also be helpful under certain circumstances.
CPU Sizing and Scaling for InterSystems IRIS
InterSystems IRIS is designed to make the most of a system’s total CPU capacity. Keep in mind, however, that not all processors or processor cores are alike: they differ in clock speed, threads per core, and architecture, and virtualization adds its own variable impact.
Basic CPU Sizing
Applications vary significantly from one to another, and there is no better measure of CPU resource requirements than benchmarking and load testing your application, combined with performance statistics collected from existing sites. If neither benchmarking nor existing customer performance data is available, start with one of the following calculations:
Important:
These recommendations are only starting points when application-specific data is not available, and may not be appropriate for your application. It is very important to benchmark and load test your application to verify its exact CPU requirements.
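To show the kind of arithmetic involved in a starting-point estimate, here is a purely hypothetical calculation. It is not one of the calculations referred to above, and every figure in it is a placeholder to be replaced with numbers measured from your own benchmarks or existing sites.

```python
# Hypothetical starting-point CPU estimate. Every figure here is a placeholder:
# measure the CPU cost per request with your own benchmarks or production
# statistics before relying on any such arithmetic.

peak_requests_per_second = 500        # expected peak workload
cpu_seconds_per_request  = 0.01       # measured cost of one request on the target core
target_utilization       = 0.7        # leave headroom so peaks do not saturate the CPUs

cores_needed = (peak_requests_per_second * cpu_seconds_per_request) / target_utilization
print(f"Starting estimate: {cores_needed:.1f} cores, pending benchmarking and load testing")
```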
Balancing Core Count and Speed
Given a choice between faster CPU cores and more CPU cores, consider the following:
For example, an application with a great many users concurrently running simple queries benefits from a higher core count, while one with relatively few users executing compute-intensive queries benefits from fewer but faster cores. In theory, both applications would benefit from many fast cores, assuming there is no resource contention when processes are running on all of those cores simultaneously. As noted in Calculating Initial Memory Requirements, the number of processor cores is a factor in estimating the memory to provision for a server, so increasing the core count may require additional memory.
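The tradeoff can be sketched with simple arithmetic. The two configurations below are invented for illustration only; they have equal aggregate capacity, but differ in how quickly any single compute-bound query completes.

```python
# Illustrative comparison of two invented configurations with equal aggregate capacity:
# many slower cores favor throughput across concurrent users, while fewer faster cores
# shorten each individual compute-bound query.

configs = {
    "many_slower_cores": {"cores": 32, "relative_speed": 1.0},
    "fewer_faster_cores": {"cores": 16, "relative_speed": 2.0},
}

query_work_units = 10.0   # hypothetical cost of one compute-intensive query

for name, c in configs.items():
    throughput = c["cores"] * c["relative_speed"]                # concurrent simple-query capacity
    single_query_time = query_work_units / c["relative_speed"]   # one non-parallelized query
    print(f"{name}: relative throughput {throughput}, single-query time {single_query_time}")
```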
Virtualization Considerations for CPU
Production systems are sized based on benchmarks and measurements at live customer sites. Virtualization using shared storage adds very little CPU overhead compared to bare metal, so it is valid to size virtual CPU requirements from bare metal monitoring.
Note:
For hyper-converged infrastructure (HCI) deployments, add 10% to your estimated host-level CPU requirements to cover the overhead of HCI storage agents or appliances.
In determining the best core count for individual VMs, strike a balance between the number of hosts required for availability on the one hand and cost and host management overhead on the other; by increasing per-VM core counts, you may be able to satisfy the former requirement without violating the latter.
The following best practices should be applied to virtual CPU allocation:
Leveraging Core Count with Parallel Query Execution
When you upgrade by adding CPU cores, you can use an InterSystems IRIS feature called parallel query execution to take effective advantage of the increased capacity.
Parallel Query Execution
Parallel query execution is built on a flexible infrastructure for maximizing CPU usage that spawns one process per CPU core. It is most effective with large data volumes, such as analytical workloads that perform large aggregations.
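As a sketch of how a query can opt into parallel processing, the following assumes you already have an open DB-API connection to the instance (for example, one created with InterSystems’ Python driver); the schema, table, and column names are hypothetical. The %PARALLEL keyword is described in the SQL Optimization Guide referenced below.

```python
# Sketch: running an aggregation with parallel query processing from Python.
# Assumes `conn` is an open DB-API connection to an InterSystems IRIS instance;
# the schema, table, and column names are hypothetical.

def region_totals(conn):
    cursor = conn.cursor()
    # The %PARALLEL keyword in the FROM clause asks the optimizer to divide the
    # work among multiple processes, which pays off on large aggregations.
    cursor.execute(
        "SELECT Region, SUM(Amount) AS Total "
        "FROM %PARALLEL Sales.Transactions "
        "GROUP BY Region"
    )
    return cursor.fetchall()
```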
For more information on parallel query processing, see Parallel Query Processing in the “Optimizing Query Performance” chapter of the SQL Optimization Guide.