# Create connections {#connections-fdb}
Experience Platform Federated Audience Composition lets you build and enrich audiences from third-party data warehouses and import those audiences into Adobe Experience Platform.
## Supported databases {#supported-databases}
To work with your federated database and ÃÛ¶¹ÊÓÆµ Experience Platform, you must first establish a connection between the two sources. With Federated Audience Composition, you can connect to the following databases.
- Amazon Redshift
- Azure Synapse Analytics
- Databricks
- Google BigQuery
- Microsoft Fabric
- Oracle
- Snowflake
- Vertica Analytics
## Create a connection {#create}
To create a connection, select Federated databases within the Federated data section. The Federated databases section appears. Select Add federated database to create a connection.

The connection properties popover appears. Here, you can name your connection and select the type of database you want to connect to.

After selecting a type, the Details section appears. This section differs based on the database type you chose.
>[!AVAILABILITY]
>
>Only Amazon Redshift AWS, Amazon Redshift Spectrum, and Amazon Redshift Serverless are supported.
After selecting Amazon Redshift, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The name of the data source. |
| Account | The account’s username. |
| Password | The account’s password. |
| Database | The name of the database. If the database is specified in the server name, this field can be left blank. |
| Working schema | The name of the database schema to use for work tables. See the related documentation for more information about this feature. Note: You can use any schema from the database, including schemas used for temporary data processing, as long as you have the required permissions to connect to that schema. However, you must use distinct working schemas when connecting multiple sandboxes to the same database. |
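To illustrate how these fields fit together, the sketch below assembles a Redshift-style connection URL from the Server, Account, Password, and Database values. The URL shape and the default port (5439) are assumptions for illustration only; Federated Audience Composition manages the actual connection internally.

```python
from urllib.parse import quote_plus

def build_redshift_url(server: str, account: str, password: str,
                       database: str = "") -> str:
    """Assemble a Redshift-style connection URL from the table's fields.

    `database` may be empty when it is already part of the server name,
    mirroring the note in the Database row. 5439 is Redshift's default port.
    """
    # Credentials may contain reserved characters, so URL-encode them.
    auth = f"{quote_plus(account)}:{quote_plus(password)}"
    url = f"redshift://{auth}@{server}:5439"
    if database:
        url += f"/{database}"
    return url

# Hypothetical values, for illustration only.
url = build_redshift_url("example.abc123.us-east-1.redshift.amazonaws.com",
                         "fed_user", "s3cret!", "analytics")
```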
After selecting Azure Synapse Analytics, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The URL of the Azure Synapse server. |
| Account | The username for the Azure Synapse account. |
| Password | The password for the Azure Synapse account. |
| Database | The name of the database. If the database is specified in the server name, this field can be left blank. |
| Options | Additional options for the connection. For Azure Synapse Analytics, you can specify the type of authentication supported by the connector. Currently, Federated Audience Composition supports `ActiveDirectoryMSI`. For more information about connection strings, see the related documentation. |
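As a rough sketch of how the authentication option is expressed, the snippet below builds an ODBC-style connection string that carries `ActiveDirectoryMSI`. The key names follow common ODBC conventions and are illustrative; the connector assembles the real string internally.

```python
def build_synapse_conn_str(server: str, database: str = "",
                           authentication: str = "ActiveDirectoryMSI") -> str:
    """Sketch an ODBC-style connection string for Azure Synapse.

    The Database key is omitted when the database is already part of the
    server name, matching the note in the table above.
    """
    parts = [f"Server={server}"]
    if database:
        parts.append(f"Database={database}")
    parts.append(f"Authentication={authentication}")
    return ";".join(parts)

# Hypothetical server name, for illustration only.
conn = build_synapse_conn_str("example.sql.azuresynapse.net", "dw")
```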
>[!NOTE]
>
>Secure access to your external Databricks data warehouse through private link is supported. This includes secure connections to Databricks databases hosted on Amazon Web Services (AWS) via private link and Databricks databases hosted on Microsoft Azure via VPN. Contact your Adobe representative for assistance in setting up secure access.
After selecting Databricks, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The name of the Databricks server. |
| HTTP path | The path to your cluster or warehouse. For more information on the path, see the related documentation. |
| Password | The access token for the Databricks server. For more information on this value, see the related documentation. |
| Catalog | The name of the Databricks catalog. For more information on catalogs in Databricks, see the related documentation. |
| Working schema | The name of the database schema to use for the work tables. Note: You can use any schema from the database, including schemas used for temporary data processing, as long as you have the required permissions to connect to that schema. However, you must use distinct working schemas when connecting multiple sandboxes to the same database. |
| Options | Additional options for the connection. The available options are listed in the following table. |
For Databricks, you can set the following additional options:
| Option | Description |
| --- | --- |
| TimeZoneName | The name of the time zone to use. This value represents the `TIMEZONE` session parameter. For more information on time zones, see the related documentation. |
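The Databricks fields and option above can be pictured as one parameter mapping, sketched below. The key names (`server_hostname`, `http_path`, and so on) follow the conventions of common Databricks client libraries and are assumptions for illustration; note that the access token travels in the Password field.

```python
def databricks_params(server: str, http_path: str, access_token: str,
                      catalog: str, working_schema: str,
                      time_zone: str = "") -> dict:
    """Collect the Databricks connection fields into one mapping.

    TimeZoneName is the only additional option and maps onto the
    TIMEZONE session parameter.
    """
    params = {
        "server_hostname": server,
        "http_path": http_path,
        "access_token": access_token,   # entered in the Password field
        "catalog": catalog,
        "schema": working_schema,
    }
    if time_zone:
        params["session_configuration"] = {"TIMEZONE": time_zone}
    return params

# Hypothetical values, for illustration only.
p = databricks_params("dbc.example.com", "/sql/1.0/warehouses/abc123",
                      "dapi-token", "main", "fac_work", "UTC")
```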
After selecting Google BigQuery, you can add the following details:
| Field | Description |
| --- | --- |
| Service account | The email address of your service account. For more information, see the related documentation. |
| Project | The ID of your project. For more information, see the related documentation. |
| Dataset | The name of the dataset. For more information, see the related documentation. |
| Key file path | The key file for the server. Only JSON (`.json`) files are supported. |
| Options | Additional options for the connection. The available options are listed in the following table. |
For Google BigQuery, you can set the following additional options:
| Option | Description |
| --- | --- |
| ProxyType | The type of proxy used to connect to BigQuery. Supported values include `HTTP`, `http_no_tunnel`, `socks4`, and `socks5`. |
| ProxyHost | The hostname or IP address where the proxy can be reached. |
| ProxyUid | The port number that the proxy is running on. |
| ProxyPwd | The password for the proxy. |
| bgpath | Note: This option only applies to the bulk-load tool (Cloud SDK). The path to the Cloud SDK `bin` directory on the server. You only need to set this if you’ve moved the `google-cloud-sdk` directory to another location or if you want to avoid using the `PATH` variable. |
| GCloudConfigName | Note: This option only applies to the bulk-load tool (Cloud SDK) above version 7.3.4. The name of the configuration that stores the parameters for loading the data. By default, this value is `accfda`. |
| GCloudDefaultConfigName | Note: This option only applies to the bulk-load tool (Cloud SDK) above version 7.3.4. The name of the temporary configuration used to recreate the main configuration for loading data. By default, this value is `default`. |
| GCloudRecreateConfig | Note: This option only applies to the bulk-load tool (Cloud SDK) above version 7.3.4. A boolean value that determines whether the bulk loading mechanism should automatically recreate, delete, or modify the Google Cloud SDK configurations. If this value is set to `false`, the bulk loading mechanism loads data using an existing configuration on the machine. If this value is set to `true`, ensure that your configuration is properly set up. Otherwise, the `No active configuration found. Please either create it manually or remove the GCloudRecreateConfig option` error appears, and the loading mechanism reverts to the default loading mechanism. |
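Because only JSON key files are accepted, a quick local sanity check of the key file can save a failed connection attempt. The sketch below is an assumption-laden illustration (the field names match Google's service-account key format, and the sample key material is invented); the connector performs its own validation.

```python
import json
import os
import tempfile

def load_service_account_key(path: str) -> dict:
    """Sanity-check a BigQuery key file; only JSON key files are supported."""
    if not path.endswith(".json"):
        raise ValueError("only .json key files are supported")
    with open(path, encoding="utf-8") as fh:
        key = json.load(fh)
    if key.get("type") != "service_account":
        raise ValueError("key file is not a service-account key")
    return key

# Hypothetical key material, for illustration only.
sample = {"type": "service_account",
          "client_email": "fac@example-project.iam.gserviceaccount.com",
          "project_id": "example-project"}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
    json.dump(sample, fh)
    key_path = fh.name
key = load_service_account_key(key_path)
os.unlink(key_path)
```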
After selecting Microsoft Fabric, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The URL for the Microsoft Fabric server. |
| Application ID | The application ID for Microsoft Fabric. For more information about the application ID, see the related documentation. |
| Client secret | The client secret for the application. For more information about the client secret, see the related documentation. |
| Options | Additional options for the connection. The available options are listed in the following table. |
For Microsoft Fabric, you can set the following additional options:
| Option | Description |
| --- | --- |
| Authentication | The type of authentication used by the connector. Supported values include `ActiveDirectoryMSI`. For more information, see the related documentation. |
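A minimal sketch of how the Fabric fields might combine into a connection string follows. Mapping the application ID and client secret onto `UID`/`PWD` keys is an assumption for illustration; the connector handles the real assembly internally.

```python
def fabric_conn_str(server: str, application_id: str, client_secret: str,
                    authentication: str = "ActiveDirectoryMSI") -> str:
    """Sketch a connection string from the Microsoft Fabric fields above.

    The UID/PWD key names are illustrative, not the connector's actual
    internals.
    """
    return (f"Server={server};UID={application_id};"
            f"PWD={client_secret};Authentication={authentication}")

# Hypothetical values, for illustration only.
conn = fabric_conn_str("fabric.example.com", "app-123", "secret-xyz")
```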
>[!IMPORTANT]
>
>The Oracle database connector can currently only be used for audience creation and audience enrichment use cases.
>
>Additionally, before setting up your Oracle database, contact your Adobe Customer Care representative.
After selecting Oracle, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The URL for the Oracle server. |
| Account | The username of the account. |
| Password | The password of the account. |
>[!NOTE]
>
>Secure access to your external Snowflake data warehouse through private link is supported. Your Snowflake account must be hosted on Amazon Web Services (AWS) or Azure and located in the same region as your Federated Audience Composition environment. Contact your Adobe representative for assistance in setting up secure access to your Snowflake account.
After selecting Snowflake, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The name of the server. |
| User | The username for the account. |
| Password | The password for the account. |
| Database | The name of the database. If the database is specified in the server name, this field can be left blank. |
| Working schema | The name of the database schema to use for the work tables. Note: You can use any schema from the database, including schemas used for temporary data processing, as long as you have the required permissions to connect to that schema. However, you must use distinct working schemas when connecting multiple sandboxes to the same database. |
| Private key | The private key for your database connection. You can upload a `.pem` file from your local system. |
| Options | Additional options for the connection. The available options are listed in the following table. |
For Snowflake, you can set the following additional options:
| Option | Description |
| --- | --- |
| workschema | The name of the database schema to use for work tables. |
| TimeZoneName | The name of the time zone to use. This value represents the `TIMEZONE` session parameter. By default, the system time zone is used. For more information on time zones, see the related documentation. |
| WeekStart | The day on which the week starts. This value represents the `WEEK_START` session parameter. For more information on week start, see the related documentation. |
| UseCachedResult | A boolean that determines whether Snowflake’s cached results are used. This value represents the `USE_CACHED_RESULTS` session parameter. By default, this value is set to `true`. For more information on this parameter, see the related documentation. |
| bulkThreads | The number of threads to use for Snowflake’s bulk loader. Adding more threads improves performance for larger bulk loads. By default, this value is set to `1`. |
| chunkSize | The file size of each bulk loader chunk. Used together with more threads, larger chunks can improve the performance of your bulk loads. By default, this value is set to 128 MB. For more information about chunk sizes, see the related documentation. |
| StageName | The name of a pre-provisioned internal staging environment. This can be used in bulk loads instead of creating a new temporary stage. |
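Several of the options above correspond one-to-one to Snowflake session parameters (`TIMEZONE`, `WEEK_START`, `USE_CACHED_RESULTS`). The sketch below shows that mapping; the function itself is a hypothetical helper, not part of the product.

```python
def snowflake_session_options(time_zone: str = "", week_start: int = -1,
                              use_cached_result=None) -> dict:
    """Map the TimeZoneName, WeekStart, and UseCachedResult options onto
    their Snowflake session parameters; unset options are omitted so
    Snowflake's defaults apply."""
    params = {}
    if time_zone:
        params["TIMEZONE"] = time_zone
    if week_start >= 0:
        params["WEEK_START"] = week_start
    if use_cached_result is not None:
        params["USE_CACHED_RESULTS"] = use_cached_result
    return params

# Hypothetical values, for illustration only.
opts = snowflake_session_options("UTC", 1, False)
```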
After selecting Vertica Analytics, you can add the following details:
| Field | Description |
| --- | --- |
| Server | The URL of the Vertica Analytics server. |
| Account | The username of the account. |
| Password | The password of the account. |
| Database | The name of the database. If the database is specified in the server name, this field can be left blank. |
| Working schema | The name of the database schema to use for the work tables. Note: You can use any schema from the database, including schemas used for temporary data processing, as long as you have the required permissions to connect to that schema. However, you must use distinct working schemas when connecting multiple sandboxes to the same database. |
| Options | Additional options for the connection. The available options are listed in the following table. |
For Vertica Analytics, you can set the following additional options:
| Option | Description |
| --- | --- |
| TimeZoneName | The name of the time zone to use. This value represents the `TIMEZONE` session parameter. For more information on time zones, see the related documentation. |
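As with the other databases, the Vertica Analytics fields can be pictured as one parameter mapping, sketched below. The default port (5433, Vertica's standard) and the key names are illustrative assumptions.

```python
def vertica_params(server: str, account: str, password: str,
                   database: str = "", working_schema: str = "",
                   time_zone: str = "") -> dict:
    """Collect the Vertica Analytics fields into one mapping.

    The database key is omitted when the database is already part of
    the server name; TimeZoneName maps to the TIMEZONE session parameter.
    """
    params = {"host": server, "port": 5433,
              "user": account, "password": password}
    if database:
        params["database"] = database
    if working_schema:
        params["schema"] = working_schema
    if time_zone:
        params["session_parameters"] = {"TIMEZONE": time_zone}
    return params

# Hypothetical values, for illustration only.
p = vertica_params("vertica.example.com", "fed_user", "s3cret", "dw")
```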
After adding the connection’s details, select Deploy functions, followed by Add, to finalize the connection between the federated database and Experience Platform.