
Overview

You must create Databricks accounts to connect the Databricks Snaps in your Pipelines to your source or target CDWs (databases). This account enables you to write and transform data in target databases hosted in the following cloud locations. The JDBC URL you define for your target database indicates the cloud location where the database is hosted. You can configure your Databricks accounts in SnapLogic using either the Designer or the Manager.

Target Database                       Supported Cloud Location   Cloud Location in JDBC URL
Databricks Lakehouse Platform (DLP)   AWS                        jdbc:spark://<your_instance_code>.cloud.databricks.com
Databricks Lakehouse Platform (DLP)   Microsoft Azure            jdbc:spark://<your_instance_code>.azuredatabricks.net
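
As an illustration, here is a minimal Java sketch of connecting over JDBC with a URL of the AWS form above. The workspace host, httpPath, and personal access token are placeholders, and the connection properties (transportMode, ssl, httpPath, AuthMech, UID/PWD) assume the Simba-based Databricks JDBC driver listed later in this article; adjust them to your workspace.

    // Minimal connectivity check against a Databricks cluster over JDBC.
    // The host, httpPath, and token below are placeholders, not real values.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    public class DatabricksJdbcExample {
        public static void main(String[] args) throws Exception {
            // AWS-hosted workspace; for Azure, the host ends in .azuredatabricks.net instead.
            String url = "jdbc:spark://example123.cloud.databricks.com:443/default"
                    + ";transportMode=http;ssl=1"
                    + ";httpPath=sql/protocolv1/o/0/0123-456789-abcdefgh" // cluster-specific path
                    + ";AuthMech=3"; // username/password auth, used here with a token

            Properties props = new Properties();
            props.put("UID", "token");            // literal string "token" when using a PAT
            props.put("PWD", "dapi-placeholder"); // personal access token (placeholder)

            // Requires the databricks-jdbc JAR on the classpath; it self-registers
            // with DriverManager, so no Class.forName call is needed.
            try (Connection conn = DriverManager.getConnection(url, props);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
            }
        }
    }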

The account settings you need depend on your target database type. Choose your database type from the following list to configure your Snap account.

  • Azure Synapse

  • BigQuery

  • Databricks Lakehouse Platform (DLP)

  • Redshift

  • Snowflake


Snap-Account Compatibility

Snaps in the Databricks Snap Pack work with Databricks accounts configured for any of the following sources: ADLS Gen2, ADLS Blob Storage, AWS S3, GCS, and JDBC (any database). This applies to each of the following Snaps:

  • Databricks - Select

  • Databricks - Insert

  • Databricks - Delete

  • Databricks - Bulk Load

  • Databricks - Bulk Unload

  • Databricks - Multi Execute

Configuring Databricks Accounts Using SnapLogic Designer

Drag a Databricks Snap to the Canvas and click the Snap to open its settings. Click the Account tab. You can now either use an existing account or create a new one.

Selecting an existing account

SnapLogic organizes and displays all accounts to which you have access, sorting them by account type and location. To select an existing account:

1. In the Account tab, click the List icon to view the accounts to which you have access, and select the account that you want to use.

2. Click the Save icon.

Creating an account

1. In the Account tab, click Add Account below the Account Reference field.

2. Select the Location in which you want to create the account, select the Account Type, and click Continue. The Add Account dialog window associated with the account type appears.

3. Enter the required account details. For detailed guidance on how to provide the information required for each account type, see the account-type articles listed above.

4. Click Validate to verify the account, if the account type supports validation.

5. Click Apply to complete configuring the Databricks account.

Note

Enter additional information about this account in the Notes field of the Info tab. This helps you and other users understand the purpose of the account, especially when there are multiple accounts of the same type.

Configuring Databricks Accounts Using SnapLogic Manager

You can use Manager to create accounts without immediately associating them with Pipelines.

Note

Accounts in SnapLogic are associated with projects. You can use accounts created in other projects only if you have at least Read access to them.

1. In the left pane, browse to the project in which you want to create the account and click Create > Account > Databricks, followed by the appropriate account type. The Create Account dialog associated with the selected account type appears.

2. Repeat steps 3 through 5 of the Creating an account section above.

Note

Avoid updating account credentials while Pipelines using that account are executing. Doing so may lead to unexpected results, including your account getting locked.

Account Settings

Supported JDBC JAR Versions

You can configure your Databricks Account to automatically use an appropriate JDBC JAR file for connecting to your target database and performing the load and transform operations.

Supported CDW    Certified JDBC JAR File
Azure Synapse    mssql-jdbc-10.2.0.jre8.jar
BigQuery         SimbaJDBCDriverforGoogleBigQuery42_1.2.22.1026.zip
DLP              databricks-jdbc-2.6.25-1.jar
Redshift         redshift-jdbc42-2.1.0.7.jar
Snowflake        snowflake-jdbc-3.13.18.jar

Note

Using Alternate JDBC JAR File Versions

We recommend that you let the Snaps use the listed JAR file versions. However, you may use a different JAR file version of your choice.
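
If you do swap in an alternate JAR, one quick way to confirm which driver version actually served a connection is JDBC's standard DatabaseMetaData interface. A minimal sketch follows; the URL is a placeholder standing in for a full connection string like the one shown in the Overview.

    // Prints the name and version of the JDBC driver that handled the connection,
    // useful when testing an alternate JAR file version against your target CDW.
    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class DriverVersionCheck {
        public static void main(String[] args) throws Exception {
            String url = "<your JDBC URL, as in the Overview table>"; // placeholder
            try (Connection conn = DriverManager.getConnection(url)) {
                DatabaseMetaData md = conn.getMetaData();
                // Reports whichever driver JAR on the classpath served this URL.
                System.out.println(md.getDriverName()
                        + " " + md.getDriverVersion()
                        + ", JDBC " + md.getJDBCMajorVersion()
                        + "." + md.getJDBCMinorVersion());
            }
        }
    }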

Snap Pack History

See the Databricks Snap Pack page for the Snap Pack history.