Introduction to InterSystems Data Fabric Studio with supply chain module

InterSystems Data Fabric Studio™ with supply chain module is a fully managed solution that enables you to centralize data about your supply chain in an easy-to-use low-code environment. You can feed this data to downstream applications, including InterSystems Supply Chain Orchestrator™.

Design and Key Features

InterSystems Data Fabric Studio™ supports a division of labor in which those who are familiar with the data sources can catalog, label, and describe the available data, so that others can use that data without needing that deeper knowledge. Likewise, the technical aspects of transforming, validating, and reconciling the data can be automated by those who understand those specific requirements, so that others can use the resulting data readily and directly.

The product is also designed to automate all data processing operations, with detailed control over scheduling. Manual and test options are provided for use before any changes are put into production.

When combined with the supply chain module, the product provides the following key features:

  • Extensible supply chain data model: a canonical data model implemented on the InterSystems IRIS® data platform, which can be extended and customized via the data model API.

  • Data model APIs for data model discovery and live documentation, such as listing all supply chain data objects or retrieving the detailed definition of a particular data object (see the first sketch after this list).

  • Data access APIs, which support both CRUD operations and advanced search capabilities with sorting and pagination support (also illustrated in the first sketch after this list).

  • The ability to define connections to data sources, including safely storing necessary credentials.

  • The ability to define schemas or data structures and manage their versions. This includes specifying how data is to be extracted (such as whether only new data is extracted), specifying data types, and specifying default values.

  • The ability to define data pipeline recipes that extract data from these external data sources and update tables within Data Fabric Studio. A recipe can define which fields to extract, how to transform the fields if necessary, how to validate the values in the fields, how to reconcile the values with alternative sources of the same data, and finally how to publish the data to its final destination. The second sketch after this list outlines these stages.

  • The ability to automate and schedule the running of the recipes, following the appropriate business calendar.

    To simplify scheduling, the product supports a hierarchical system of entities, each of which can have its own business calendar but can inherit calendar details from its parent (see the third sketch after this list). An entity can correspond to a business unit or simply to an external system or organization whose calendar is important to your organization.

  • The ability to define Business Intelligence cubes based on the tables within Data Fabric Studio. The product provides a built-in analytics tool (InterSystems IRIS® Advanced Analytics), but other Business Intelligence systems can also be used.

  • The ability to automate and schedule the building of Business Intelligence cubes.

  • The ability to define snapshots of data for review by regulators or analysts. A snapshot can use one or more tables and it generally provides a flattened (de-normalized) view of the relevant data. As with other items, the product provides the ability to automate and schedule snapshots, so that you can accumulate a series of snapshots of the same data. And you can easily define Business Intelligence cubes based on them, for a longitudinal view of that data.
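
As a rough illustration of how a client might use the data model and data access APIs together, the sketch below lists the available data objects, fetches the definition of one object, and then runs a sorted, paginated search. The base URL, endpoint paths, object name, and query parameters are all assumptions made for illustration, not the documented API surface; consult the API reference for the actual routes and syntax.

    # Hypothetical client calls against the data model and data access APIs.
    # The host, paths, auth scheme, and parameter names are illustrative
    # assumptions, not the documented API.
    import requests

    BASE_URL = "https://example-dfs-instance/api/supplychain"  # hypothetical
    HEADERS = {"Authorization": "Bearer <token>"}              # hypothetical

    # Data model discovery: list all supply chain data objects.
    objects = requests.get(f"{BASE_URL}/objects", headers=HEADERS).json()

    # Data model discovery: get the detailed definition of one data object.
    definition = requests.get(f"{BASE_URL}/objects/SalesOrder", headers=HEADERS).json()

    # Data access: advanced search with sorting and pagination.
    results = requests.get(
        f"{BASE_URL}/data/SalesOrder",
        headers=HEADERS,
        params={
            "filter": "status eq 'OPEN'",  # hypothetical filter syntax
            "sort": "-orderDate",          # hypothetical: newest first
            "page": 1,
            "pageSize": 50,
        },
    ).json()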
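
The following sketch models the stages of a recipe (extract, transform, validate, reconcile, publish) as plain Python functions, purely to show how the stages fit together. Recipes are actually defined within Data Fabric Studio itself; the record fields and rules here are invented for illustration.

    # Schematic sketch of the stages a recipe applies to incoming records.
    # Recipes are defined in the Data Fabric Studio interface; the record
    # fields and rules below are invented for illustration.

    def extract(source_rows):
        # Keep only the fields the recipe is configured to extract.
        return [{"sku": r["sku"], "qty": r["qty"]} for r in source_rows]

    def transform(rows):
        # Example transformation: normalize SKU codes to upper case.
        return [{**r, "sku": r["sku"].upper()} for r in rows]

    def validate(rows):
        # Example validation rule: quantities must be non-negative.
        return [r for r in rows if r["qty"] >= 0]

    def reconcile(rows, alternative):
        # Example reconciliation: prefer the alternative source's quantity
        # when both sources report the same SKU.
        alt = {r["sku"]: r["qty"] for r in alternative}
        return [{**r, "qty": alt.get(r["sku"], r["qty"])} for r in rows]

    def publish(rows, table):
        # Publication would write to a Data Fabric Studio table; print here.
        for r in rows:
            print(f"{table}: {r}")

    rows = extract([{"sku": "abc-1", "qty": 10, "note": "ignored"}])
    rows = reconcile(validate(transform(rows)), [{"sku": "ABC-1", "qty": 12}])
    publish(rows, "Inventory")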
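
The calendar inheritance described above amounts to a parent lookup: an entity's effective calendar is the nearest ancestor's details overlaid with its own. The conceptual sketch below is not product code, and its field names are invented.

    # Conceptual model of hierarchical entities inheriting business-calendar
    # details from their parents. Not product code; field names are invented.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Entity:
        name: str
        parent: Optional["Entity"] = None
        # Calendar details defined directly on this entity.
        calendar: dict = field(default_factory=dict)

        def effective_calendar(self) -> dict:
            # Start from the parent's effective calendar, then overlay this
            # entity's own details, so children override their ancestors.
            base = self.parent.effective_calendar() if self.parent else {}
            return {**base, **self.calendar}

    corporate = Entity("Corporate", calendar={"week_start": "Mon", "holidays": ["Jan 1"]})
    warehouse = Entity("Warehouse EU", parent=corporate, calendar={"holidays": ["May 1"]})

    print(warehouse.effective_calendar())
    # -> {'week_start': 'Mon', 'holidays': ['May 1']}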

Users and Where to Start

The product has three general categories of users, each with a different starting place:

  • Administrators, who perform a small set of administrative tasks related to security, data sources (at a high level), and system defaults. A key first step is defining an initial set of users and data sources, so that others can start work. See Welcome, System Administrators.

  • Data Engineers or Data Stewards, who define the data pipeline—a generic phrase that refers to defining and cataloging the schemas to be used within the system, defining recipes that load data, and scheduling the recipes. See Welcome, Data Engineers.

  • Data Analysts, who use the data in the system to build cubes and connect reporting tools. See Welcome, Data Analysts.
