Caché Release Notes and Upgrade Checklist Archive
Caché 2008.1 Upgrade Checklist
The purpose of this section is to highlight those features of Caché 2008.1 that, because they differ in this version, affect the administration, operation, or development activities of existing 2007.1 systems.
General upgrade issues are mentioned at the start of this document. Those customers upgrading their applications from releases earlier than 2007.1 are strongly urged to read the upgrade checklist for earlier versions first. This document addresses only the differences between 2007.1 and 2008.1.

Server Support For CPUs Before Pentium P4
Beginning with version 2008.1, Caché on Intel-based platforms will only install and run on servers using the Pentium P4 or later chipset, that is, those that support the SSE2 CPU extensions. This also applies to other manufacturers' equivalent chipsets, for example, those from Advanced Micro Devices (AMD) that are functionally equivalent to the Intel Pentium P4.
This is only for server systems. Caché will still run as a client on earlier CPU versions.
This section contains information of interest to those who are familiar with administering prior versions of Caché and wish to learn what is new or different in this area for version 2008.1. The items listed here are brief descriptions. In most cases, more complete descriptions are available elsewhere in the documentation.
Version Interoperability
A new table showing the interoperability of recent Caché releases has been added to the Supported Platforms document.
Management Portal Changes
Long String Control
The location for enabling use of long strings has changed. In prior versions, it was located on the [Home] > [Configuration] > [Advanced Settings] page. In this version of Caché, it has been moved to [Home] > [Configuration] > [Memory and Startup].
Operational Changes
Journal Rollover Changes
Simple journal rollovers, regardless of how they are initiated, are no longer logged in the journal history global (^%SYS("JOURNAL","HISTORY")). They are still logged in cconsole.log, however.
Journaling always starts in the primary directory; the primary directory is the current directory and the secondary is the alternate. That remains the case until journaling encounters an error (for example, a full disk) in the current (primary) directory and fails over to the alternate (secondary) directory, at which point the secondary directory becomes the current directory and the primary becomes the alternate.
Maximum Path For Caché Databases Reduced
In this version of Caché, the maximum path length used to identify Caché databases and other items in the installation directory has been reduced from 232 to 227 characters. Customers who were close to the old limit may need to adjust some pathnames to fit within the new limit. To allow for variability in Caché file names, the directory portion of the path should be no longer than 195 characters.
If you are installing this version of Caché on a cluster configuration, you will need to:
  1. Find the pathname of the directory holding the PIJ (Pre-Image Journal) files on each node of the cluster. This is the directory specified as the PIJ directory in the Clusters category in the .cpf file or the Management Portal page at [Home] > [Configuration] > [Advanced Settings].
  2. Shut down the entire cluster cleanly.
  3. Delete the PIJ files on each node of the cluster. These are files whose names are of the form “*.PIJ*.*” located in the PIJ directory for that node.
  4. Upgrade each of the nodes to this version.
  5. Reboot and re-form the cluster.
When Upgrading An ECP Configuration, SQL Privileges May Be Lost From the Clients
After the SQL privileges are converted on the database server, the mpriv global is deleted. Beginning with this version, Caché no longer saves the privileges in the namespace, but in the (local) SYS database. Therefore, after the conversion, the database server has the privileges, but the application servers do not. The privileges must be set manually on each client.
Parameter File Name Always Defaults to cache.cpf
The default for the optional configuration argument used when starting Caché from the command line has changed. Previously, when a command line such as
ccontrol start an_instance_name a_config_file
was issued to start Caché, a_config_file.cpf would become the default configuration file used to start later instances.
Beginning with this version, the configuration file will be used only for the current startup. Subsequent start commands which do not explicitly specify a configuration file will cause the Caché instance to be started using the cache.cpf configuration file.
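For example (the instance and file names here are hypothetical):

```
ccontrol start MYINST my_config.cpf   (my_config.cpf is used for this startup only)
ccontrol stop MYINST
ccontrol start MYINST                 (this startup uses cache.cpf, not my_config.cpf)
```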
Platform-specific Items
This section holds items of interest to users of specific platforms.
Windows Vista
Older Communication Protocols Not Supported
Due to underlying platform considerations, Caché on Windows Vista does not support DCP on raw Ethernet. Similarly, Caché DDP and LAT are not supported on this platform.
Apple Mac OS X On PowerPC
This version of Caché removes support for Apple Mac OS X for PowerPC.
Sun Solaris On SPARC-64
This version of Caché does not support the C++ and Light C++ bindings on this platform. It also does not support multithreaded CALLIN.
SUSE Linux Enterprise For Itanium
This version of Caché does not support the C++ and Light C++ bindings on this platform.
The following known limitations exist in this release of Caché:
DCP Limitations
The following known limitations exist when using DCP:
DDP Limitations
The following known limitations exist when using DDP:
ODBC Limitations
ODBC clients from versions prior to Caché 5.1 are not fully compatible with this version. InterSystems recommends that the client systems be upgraded.
This section contains information of interest to those who have designed, developed and maintained applications running on prior versions of Caché.
The items listed here are brief descriptions. In most cases, more complete descriptions are available elsewhere in the documentation.
ObjectScript Changes
$REPLACE Function Added
This version of Caché adds a new string replacement function, $REPLACE. $REPLACE(S1, S2, S3) finds all occurrences of the string S2 in the string S1 and replaces them with the string S3. All three arguments are required. Unlike $TRANSLATE, S2 and S3 are treated as complete strings, not lists of characters.
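A brief illustration of the difference (the values are chosen for illustration only):

```objectscript
 WRITE $REPLACE("teeth","ee","o")    ; prints "toth"  - "ee" is replaced as a complete string
 WRITE $TRANSLATE("teeth","ee","o")  ; prints "tooth" - each "e" character is replaced individually
```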
$ZDATETIMEH Validation Improved
In this version, ODBC and SQL date-time validation is improved. $ZDATETIMEH will now report an error if any field (day, month, hour, second) is not in the proper format. For example, 002 will no longer be accepted as a valid day within a month because only two digits are allowed.
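For example, using the ODBC date format (dformat 3):

```objectscript
 WRITE $ZDATETIMEH("2008-01-15 10:30:00",3)   ; converts normally
 WRITE $ZDATETIMEH("2008-01-002 10:30:00",3)  ; now reports an error (three-digit day field)
```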
SQL Changes
Incompatibility: arrow syntax in ON clauses
Prior to Version 2007.1, Caché SQL allowed use of arrow syntax in ON clauses. This was possible because Caché only supported a linear chain of left outer joins (one or more tables left joined to a single table). As a result, combinations of arrow syntax, uses of =* syntax, and ANSI joins could readily be mixed within this very limited support.
Arrow syntax has always been defined as equivalent to =* syntax, and uses the same underlying mechanism. As a result, it can no longer be supported in ON clauses.
All other support for arrow syntax remains (WHERE clause, SELECT clause, GROUP BY, ORDER BY, HAVING).
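As a hypothetical illustration (the table and field names are invented), a join that formerly used arrow syntax in its ON clause can be rewritten with an explicit join to the referenced table:

```sql
-- No longer supported:
FROM Table1 LEFT JOIN Table2 ON Table1.Ref->Code = Table2.Code

-- Equivalent rewrite, joining the referenced table explicitly:
FROM Table1 LEFT JOIN RefTable ON Table1.Ref = RefTable.ID
            LEFT JOIN Table2 ON RefTable.Code = Table2.Code
```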
Restriction on LEFT JOIN
A restriction remains on ON clauses used with LEFT JOINs. If all the conditions affecting some table use comparisons that may pass null values, and that table is itself the target of an outer join, Caché may report:
error 94: unsupported use of outer join
An example of such a query is
FROM Table1 
          LEFT JOIN Table2 ON Table1.k = Table2.k 
          LEFT JOIN Table3 ON coalesce(Table1.k,Table2.k) = Table3.k
Collation Order Now Applies to LIKE
The SQL LIKE operator now performs similarly to other comparison operators such as “=” and %STARTSWITH. That is, the expression:
f LIKE <arg>
is now interpreted as:
Collation(f) LIKE Collation(<arg>)
where Collation() is the default collation of f. (Unlike the other operators, in this case the default collation of <arg> does not affect the result.)
This applies to fields with default collation: %SQLSTRING, %SQLUPPER, %UPPER, and %SPACE. It does not apply to %MVR, %PLUS, and %MINUS. It also does not apply to %ALPHAUP and %STRING (because these may remove characters from a string).
The exclusion of %ALPHAUP and %STRING will be changed in a future version.
For collations with truncation, such as %SQLUPPER(50), the collation (and truncation) are applied to both the field value and the pattern. In practice, this means that the pattern used should be shorter than the truncation length; otherwise, some cases could yield non-intuitive results.
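For example, for a hypothetical field Name whose default collation is %SQLUPPER (case-insensitive), LIKE now behaves like the "=" operator:

```sql
SELECT Name FROM Person WHERE Name LIKE 'smith%'
-- is now evaluated as %SQLUPPER(Name) LIKE %SQLUPPER('smith%'),
-- so it matches 'Smith', 'SMITH', and 'smith' alike
```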
COMMIT Behavior Changed To Match SQL Standard
The behavior of COMMIT when there are multiple active SAVEPOINTs was modified to conform to the SQL standard. For a pure SQL application, the COMMIT action will commit all nested transactions up to and including the most recent BEGIN TRANSACTION. More generally, for a mixed COS/SQL application where the COS code could have created additional nested transaction levels, a SQL COMMIT will commit all nested transactions up to and including the highest $TLEVEL that was not created by an SQL SAVEPOINT statement.
As has always been the case, applications that call out of SQL and increase $TLEVEL should generally restore the $TLEVEL before returning to SQL. Such applications should not end a transaction that was started by SQL, otherwise SQL-conformant behavior cannot be guaranteed.
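A sketch of the new behavior in a pure SQL application (savepoint names are illustrative):

```sql
START TRANSACTION   -- $TLEVEL becomes 1
SAVEPOINT sp1       -- $TLEVEL becomes 2
SAVEPOINT sp2       -- $TLEVEL becomes 3
COMMIT              -- commits all three nested levels; $TLEVEL returns to 0
```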
TSQL Changes
%TSQL.Manager.load() Replaced
With this release, the load() method of %TSQL.Manager has been deprecated. The new, preferred method for loading TSQL source files is $SYSTEM.SQL.TSQL(). This function accepts arguments similar to those of the new %TSQL.Manager.Import() method.
TSQL Procedure Definitions Map to Default Schema
When executing TSQL batches using $SYSTEM.SQL.TSQL() or ##class(%TSQL.Manager).Import(), the classes created to contain procedure definitions will now be in the package mapped to the default schema. For example, if the default schema is 'dbo' then the package in which the procedure classes will be created will be the package defined with the SQLNAME = 'dbo'.
Support For Delimited Ids
This version of Caché adds a checkbox to the SQL Gateway connection setup that prevents using delimited ids in the SQL statements sent over the Gateway. This option was added because Sybase does not support delimited ids; connections that do not select this option will fail if they connect to Sybase databases.
Command-line Debugging Changes
The ZBREAK command has been extended with new arguments that permit finer control over instruction stepping. For example, it is now possible to bypass stepping into language support routines and method calls, particularly the %Destruct method.
Details on the improved operation are contained in the ZBREAK command reference material, and the section on debugging in the guide to Caché ObjectScript.
Studio Changes
Source Control Document Compatibility
The %Studio.SourceControl.Interface class implementation has changed in this version of Caché. However, Studio will now detect which version of the class is being used and act accordingly. This will allow more controlled conversion of existing applications using source control in Caché.
Version Compatibility
This version of Studio will refuse connections to servers running versions of Caché before 5.2. This is a consequence of the security features added in that release. When a connection is refused, developers will see a message, “Version mismatch. Server should be version 5.2 or higher.”
Caché Terminal Changes
Terminal Identification
In this version, Terminal has been modified to display the machine and instance of Caché it is connecting to. These items are displayed before any prompt for a user ID or password occurs. Terminal scripts that assume the first output from Terminal is the prompt for a username should be changed to look explicitly for the prompt. Otherwise, they may fail intermittently depending on the timing of the output.
Support For Telnet Terminal Types
Caché Terminal now supports the “Terminal Type” telnet option. At process startup, Caché will perform the “Terminal Type” telnet option negotiation. If the telnet client agrees and sends the terminal type, Caché will define the Windows TERM environment variable. The value of this variable can be retrieved from within Caché through the function $System.Util.GetEnviron("TERM").
ZWELCOME Routine
This version of Caché provides a mechanism to invoke specific code when a terminal connection is made, but before the user login takes place.
The terminal initiation code checks for the existence of a routine called ZWELCOME in the %SYS namespace. If such a routine is found, it is invoked immediately prior to the terminal login sequence, if any. The name of the routine reflects its intended use: to present a custom identification and welcome message to users.
The ZWELCOME routine executes in the %SYS namespace with an empty $USERNAME and $ROLES set to %ALL. Care should be taken to ensure that the failure modes of ZWELCOME are benign.
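A minimal sketch of such a routine (the banner text is illustrative):

```objectscript
ZWELCOME ; displayed before login; runs in %SYS with empty $USERNAME and $ROLES=%ALL
 ; keep the code simple so that any failure mode is benign
 WRITE !,"*** Development instance - authorized use only ***",!
 QUIT
```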
Class Changes
The LogicalToDisplay method has been changed in this version. In converting the internal form to an external representation, it removes all NULL ($CHAR(0)) characters from the string. This contradicts the usual expectation that LogicalToDisplay and DisplayToLogical are inverses of one another.
Remove Ability To Force IsValidDT Method Generation For System Datatypes
In a prior release, InterSystems removed the IsValidDT methods from system datatypes to reduce the amount of code generated and improve the speed of the class compiler. At the time, we added a flag that if set would still generate IsValidDT methods for our datatypes in case applications were calling the IsValidDT methods of our datatypes directly from their own code.
This switch has now been removed as it causes problems with the SQL projection of our datatypes.
%XML.Document and %XML.Node
This version of Caché introduces the %XML.Document class that represents an XML document as a Document Object Model (DOM). The DOM may be created either
The document can be written to a destination with the various Writer and Tree methods of %XML.DOM.
The %XML.Node class is used to access the components of the DOM. %XML.Node navigates through the nodes of the DOM rather than representing a fixed node in a DOM tree. The MoveToxxx methods are used to move through the DOM. The properties and methods of %XML.Node are then used to retrieve and modify the node contents.
A set of macros in the associated include file may also be used to navigate the DOM based on the DocumentId property of %XML.Document.
New $SYSTEM.SQL Functions
Two new functions have been added to this class:
MANAGEDEXTENT Class Parameter Added
A new data management mechanism has been implemented in this release. Because of this, classes that use the default storage mechanism for managing their data, %Library.CacheStorage, now behave differently with regard to their persistent instances.
The globals used by persistent classes that utilize default storage are now registered with the new Extent Manager; the interface to the Extent Manager is through the %ExtentMgr.Util class. This registration process occurs during class compilation. Any errors or name collisions are reported as errors, causing the compile to fail. Collisions must be resolved by the user, either by changing the name of the index or by adding explicit storage locations for the data.
The extent metadata is deleted only when the class is deleted. To delete the extent using Studio, right-click the name of the class in the Workspace window of Caché Studio and select Delete Class '<classname>' from the menu; the metadata is removed when the “e” flag is set as the default for the namespace.
The available flags and qualifiers can be shown and set by the commands:
Do $SYSTEM.OBJ.ShowFlags()
Do $SYSTEM.OBJ.ShowQualifiers()
Do $SYSTEM.OBJ.SetFlags()
Do $SYSTEM.OBJ.SetQualifiers()
Recompiling a class will refresh the extent metadata. A failed compile leaves the metadata in the state it was in at the time of the failure.
The user can always explicitly delete the extent metadata using ##class(%ExtentMgr.Util).DeleteExtent(<classname>).
If the user does not want to register the global references used by a class then set the value of the MANAGEDEXTENT class parameter to 0 (zero).
It is possible that an application has been designed with multiple classes intentionally sharing a global reference. In this case, the implementer will need to add MANAGEDEXTENT = 0 to such classes if they use default storage. Otherwise, recompilation of an application in the set will generate an error like
ERROR #5564: Storage reference: '^This.App.Global' used in 'User.ClassA.cls' 
is already registered for use by 'User.ClassB.cls'
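A sketch of opting such a class out of extent registration (the class name and global are taken from the example above):

```objectscript
/// One of several classes that intentionally share ^This.App.Global
Class User.ClassA Extends %Persistent
{

/// Do not register this class's globals with the Extent Manager
Parameter MANAGEDEXTENT = 0;

}
```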
Zen Reports has been changed to correct a number of issues discovered in earlier versions and to extend some capabilities. A summary follows.
OnCreateResultSet Callback
For reports that use the OnCreateResultSet callback, the callback is now passed an array of user-defined parameters. For example, the method declaration now is:
ClassMethod RS1(ByRef pSC As %Status,  ByRef pParameters) As %ResultSet 
and the report definition can pass field values to it, for example:
<group name="city">
       <parameter field="Home_City"/>
In this case, pParameters(1) will contain the current value of the field Home_City. Prior to this change, there was no mechanism to pass parameters into user-created result sets.
Nested queries
This version corrects an issue with nested queries. A Zen Report can either define one outer query and have the grouping levels break on columns within this query, or it can introduce additional queries at the group level. In this latter case, the nested query is typically fed some parameter from the outer query, for example:
<report sql="SELECT City FROM Table1 ORDER BY City">
<group name="City" 
       sql="SELECT Employee FROM Table2 WHERE City=?">
<parameter field="City"/>
In prior versions, many cases of nested query did not work correctly. Now nested queries are executed only *once* for each new break value from the outer grouping.
In addition, the internal management of nested queries is now more consistent in how references to fields within queries are resolved. The basic rules are that:
  1. Every group (including the outer report tag) defines a "query context" at the "level" of the group. The level specifies how deeply nested the group is from the top of the report definition. If a group does not define a new query, it uses the query of its parent group as if it were its own.
  2. References to fields within a <group>, <parameter>, or <attribute> node are resolved by looking at the query of the parent node.
  3. References to fields within <element> and <aggregate> nodes are resolved by looking at the query at the same level as the node.
For example, in
<report sql="SELECT Name FROM Table1">
<element name="A" field="Name"/>
Name comes from Table1. In
<report sql="SELECT Name FROM Table1">
<attribute name="A" field="Name"/>
Name cannot be resolved and an error message is generated. In the sequence
<report sql="SELECT Name FROM Table1">
<group name="Name" sql="SELECT Name FROM Table2 WHERE...">
<element name="A" field="Name"/>
Name will be resolved to that in Table2. And finally, given
<report sql="SELECT Name FROM Table1">
<group name="Name" sql="SELECT Name FROM Table2 WHERE...">
<attribute name="A" field="Name"/>
Name will be resolved to that in Table1.
Non-Existent Fields
References to non-existent fields within a query now result in “” for the value of the field, rather than the value “not found”.
Sibling Groups
It is now possible for a ReportDefinition to define more than one group node at the same level. These are referred to as “sibling” groups and are described in more detail in the Zen Reports documentation. There are some special rules in effect for sibling groups:
Sibling groups are typically used when each sibling defines its own query, usually referring to a breaking field from the outer query in their common WHERE clause. They are also used where the siblings do not define their own queries. In this case, the first sibling tests for break conditions and outputs its records, then the subsequent siblings are processed with the same break field.
Node Level
As a convenience, the Report Engine defines a variable, %node(level), equal to the current number at the given grouping level. You can use this within an expression in a ReportDefinition. For example,
<attribute expression="$G(%node(2))" name="num"/>
The effect of these changes is that some previously acceptable queries will now be reported as being in error.
System Management Portal
There have been a number of changes and improvements to the system management portal since 2007.1 was released. These are detailed in the administrator section.
Default Configuration File Name Handling Changed
The default configuration file name used when Caché is started has changed. Please refer to the administrator section for the details.