
Data Source Details (2.9)

This page provides details for each type of data source for InterSystems® Data Fabric Studio™.

ExcelSingleFileDir

An ExcelSingleFileDir source provides data via Microsoft Excel files that are periodically written to a specific directory within a file system accessible to Data Fabric Studio. Apart from the file format, this data source is the same as FileDir.

FileDir

A FileDir source provides data via files that are periodically written to a specific directory within the file system. Each file should contain data separated by a delimiter; the most common formats are comma-separated values (CSV) and tab-separated values.
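
For illustration, here are a few lines of hypothetical comma-delimited data that a FileDir source could load (the column names and values are examples only):

    name,region,amount
    Acme,East,1200
    Globex,West,850

A tab-delimited file has the same structure, with tab characters in place of the commas.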

For a FileDir source, specify the following details:

Data Source Name

Required. The unique name of this data source. This is the name that users see when they browse the Data Catalog.

Interface Path Location

Required. The directory in which these files will be found, relative to the system base path. Because this is a relative path, it should not start with / or \ (but any leading path separator is automatically removed).

To avoid confusion, this directory should not be used by any other data source.

When you create a FileDir source, the system automatically creates the directory named by Interface Path Location, as well as the subdirectories Samples, Source, Work, and Archive. Using the File Manager explains the purpose of these directories.
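
For orientation, a sketch of the resulting layout, assuming a hypothetical system base path of /data, an Interface Path Location of customer_feed, and that the subdirectories are created within that directory:

    /data/customer_feed/
        Samples/
        Source/
        Work/
        Archive/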

JDBC

A JDBC data source provides access to a database via a JDBC connection.

For a JDBC data source, specify the following details:

Data Source Name

Required. The unique name of this data source. This is the name that users see when they browse the Data Catalog.

Credential

Required. The credential that defines the username and password to access the database. Select the applicable credential from the dropdown list.

JDBC URL

Required. Enter the JDBC connection string needed to access the database.
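
The exact form of the connection string depends on the JDBC driver for the database you are connecting to. Two illustrative examples, using hypothetical host names, databases, and default ports:

    jdbc:IRIS://dbserver.example.com:1972/MYDATA
    jdbc:postgresql://dbserver.example.com:5432/sales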

JDBC Database

Required. Select the database vendor and version.

Enable Foreign Tables

If you select this check box, tables from the selected database can be projected as foreign tables within Data Fabric Studio. This is useful when it is not feasible or practical to load the data directly into Data Fabric Studio; for example, a table may be extremely large and queried only infrequently. A foreign table is read-only but can otherwise be accessed in the same way as local tables.

If you select this check box, also specify JDBC Foreign Table Local Schema, which is the default name of the schema to contain any foreign tables from this data source.

S3Delimited

An S3Delimited data source provides access to an S3 bucket that contains delimited files.

For an S3Delimited data source, specify the following details:

Data Source Name

Required. The unique name of this data source. This is the name that users see when they browse the Data Catalog.

Credential

Required. The credential that defines the username and password to access the given S3 bucket. Select the applicable credential from the dropdown list.

S3 Bucket Name

Required. The name of the S3 bucket to access.

AWS Session Token

The session token to use when accessing the S3 bucket.

Source Path

Location of the folder from which to load files. You can optionally include %RUNDATE in the path; it is replaced with the ISO date at the time the files are loaded. If no path is provided, the root folder is used.
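
The %RUNDATE substitution is performed for you, but the following minimal Python sketch shows the intended effect, assuming a hypothetical path template and that the token is replaced with the ISO (YYYY-MM-DD) run date:

    from datetime import date

    # Hypothetical Source Path template; %RUNDATE stands in for the ISO run date.
    template = "incoming/%RUNDATE/"
    resolved = template.replace("%RUNDATE", date.today().isoformat())
    print(resolved)  # e.g. incoming/2025-06-30/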

Samples Path

Location of the folder from which to import file schemas. You can optionally include %RUNDATE in the path; it is replaced with the ISO date at the time the files are listed. If no path is provided, the root folder is used.

Archive Path

Location of the folder in which to archive previously loaded files. You can optionally include %RUNDATE in the path; it is replaced with the ISO date at the time of archiving. If no path is provided, the root folder is used.

Target Path

Location of the folder to which files are written when a recipe promotes data to a file in the given S3 bucket.
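
Data Fabric Studio makes these connections itself; the following Python sketch (using boto3, with hypothetical bucket and path names) only illustrates how the fields above map onto a typical S3 request:

    import boto3

    # The Credential record supplies the access key pair; AWS Session Token is
    # needed only when the credentials are temporary.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="<access key id>",
        aws_secret_access_key="<secret access key>",
        aws_session_token=None,
    )

    # List the delimited files under the Source Path prefix of the bucket.
    response = s3.list_objects_v2(Bucket="my-data-bucket", Prefix="incoming/")
    for obj in response.get("Contents", []):
        print(obj["Key"])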

Salesforce

A Salesforce data source provides access to a Salesforce instance via the Salesforce API.

For a Salesforce data source, specify the following details:

Data Source Name

Required. The unique name of this data source. This is the name that users see when they browse the Data Catalog.

Credential

Required. The credential that defines the username and password used to authenticate to the Salesforce API. Select the applicable credential from the dropdown list.

Client ID Credentials

Required. Select the appropriate SDS Datasource (DS) Credentials record for the ClientId and Client Secret.

Authentication Server

Required. Specify the server used for authentication (which does not have to be the same server on which Salesforce is running).

Authentication URL

Required. Specify the endpoint in the Salesforce API to use when requesting access.

API end-point Server

Required. Specify the server on which the Salesforce API is running.

API URL

Required. Specify the endpoint in the Salesforce API to use when requesting resources.

Port

Specify the web server port to use, if that is not the standard port number.
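
Data Fabric Studio performs the authentication exchange internally. As a rough illustration of how the fields above relate to one another, the following Python sketch uses the Salesforce username-password OAuth flow with hypothetical servers, credentials, and API version; it is not necessarily the exact exchange the product performs:

    import requests

    # Authentication Server + Authentication URL: request an access token.
    token_response = requests.post(
        "https://login.salesforce.com/services/oauth2/token",
        data={
            "grant_type": "password",
            "client_id": "<ClientId>",
            "client_secret": "<Client Secret>",
            "username": "<username>",
            "password": "<password>",
        },
    )
    access_token = token_response.json()["access_token"]

    # API end-point Server + API URL: request resources with the issued token.
    records = requests.get(
        "https://myinstance.my.salesforce.com/services/data/v58.0/query",
        params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(records.json())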

SftpDelimited

An SftpDelimited data source provides access to an SFTP server that contains delimited files.

For an SftpDelimited data source, specify the following details:

Data Source Name

Required. The unique name of this data source. This is the name that users see when they browse the Data Catalog.

Credential

Required. The credential that defines the username and password to access the given SFTP server. Select the applicable credential from the dropdown list.

Certificate File Path

Location of the certificate file to use when authenticating with the given SFTP server.

Host

Required. The host name of the SFTP server.

Port

Required. The port to use on the SFTP server.

Source Path

Location of the folder from which to load files. You can optionally include %RUNDATE in the path; it is replaced with the ISO date at the time the files are loaded. If no path is provided, the root folder is used.

Samples Path

Location of the folder from which to import file schemas. You can optionally include %RUNDATE in the path; it is replaced with the ISO date at the time the files are listed. If no path is provided, the root folder is used.

Archive Path

Location of the folder in which to archive previously loaded files. You can optionally include %RUNDATE in the path; it is replaced with the ISO date at the time of archiving. If no path is provided, the root folder is used.

Target Path

Location of the folder to which files are written when a recipe promotes data to a file on this server.
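
Data Fabric Studio establishes the SFTP connection itself; the following Python sketch (using paramiko, with hypothetical connection details) only illustrates what the fields above correspond to:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        hostname="sftp.example.com",   # Host
        port=22,                       # Port
        username="loader",             # from the Credential record
        password="secret",             # from the Credential record
        key_filename="/path/to/key",   # Certificate File Path, if key-based auth is used
    )

    # List the delimited files waiting in the Source Path folder.
    sftp = client.open_sftp()
    for name in sftp.listdir("incoming"):
        print(name)
    sftp.close()
    client.close()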
