Databricks Account using JDBC (Source: Any SQL Database)

Overview

You can use this account type to connect Databricks Snaps to a Databricks Lakehouse Platform (DLP) instance, using any JDBC-compatible SQL database as the source.

Prerequisites

  • A valid Databricks account.

  • Certified JDBC JAR File: databricks-jdbc-2.6.25-1.jar

Limitation

Basic authentication for the Databricks Lakehouse Platform (DLP) reached its end of life on July 10, 2024. SnapLogic Databricks pipelines that use this authentication type to connect to DLP instances will fail. We recommend that you reconfigure the corresponding Snap accounts to use the Personal Access Token (PAT) authentication type.

Known Issues

None.

Account Settings

 

  • Asterisk ( * ): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon (+): Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

Field Name

Field Type

Field Dependency

Description

Label*

 

Default Value: N/A
Example: STD DB Acc DeltaLake JDBC

String

None.

Specify a unique label for the account.

 

Account Properties*

Use this fieldset to configure the information required to establish a JDBC connection with the account.

Download JDBC Driver Automatically

 

Default Value: Not Selected
Example: Selected

Checkbox

None.

Select this checkbox to allow the Snap account to download the certified JDBC Driver for DLP. The following fields are disabled when this checkbox is selected:

  • JDBC JAR(s) and/or ZIP(s) : JDBC Driver

  • JDBC driver class

To use a JDBC Driver of your choice, clear this checkbox, upload the required JAR files to SLDB, and select them in the JDBC JAR(s) and/or ZIP(s): JDBC Driver field.

Use of Custom JDBC JAR version

You can use a JAR file version other than the recommended, certified version listed above.

Spark JDBC and Databricks JDBC

If you clear this checkbox and use an older JDBC JAR file (earlier than version 2.6.25), ensure that you use:

  • The old format JDBC URL ( jdbc:spark:// ) instead of the new one ( jdbc:databricks:// )

    • For a JDBC driver prior to version 2.6.25, the JDBC URL starts with jdbc:spark://

    • For a JDBC driver version 2.6.25 or later, the JDBC URL starts with jdbc:databricks://

  • The older JDBC Driver Class com.simba.spark.jdbc.Driver instead of the new com.databricks.client.jdbc.Driver.
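The version cutoff above can be expressed as a small helper. This is an illustrative sketch only (the class and method names are not part of the Snap Pack); it shows which URL prefix and driver class pair with which driver version:

```java
// Illustrative helper (not part of the Snap Pack): pairs the JDBC URL
// prefix and driver class with the Databricks JDBC driver version.
public class DriverCompat {
    // 2.6.25 is the cutoff between the Simba-style and Databricks-style driver.
    static boolean isNewDriver(int major, int minor, int patch) {
        if (major != 2) return major > 2;
        if (minor != 6) return minor > 6;
        return patch >= 25;
    }

    static String urlPrefix(int major, int minor, int patch) {
        return isNewDriver(major, minor, patch)
                ? "jdbc:databricks://" : "jdbc:spark://";
    }

    static String driverClass(int major, int minor, int patch) {
        return isNewDriver(major, minor, patch)
                ? "com.databricks.client.jdbc.Driver"
                : "com.simba.spark.jdbc.Driver";
    }
}
```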

JDBC Driver Class

 

Default Value: com.databricks.client.jdbc.Driver
Example: com.databricks.client.jdbc.Driver

String

None.

Specify the JDBC driver class to use.

JDBC JARs

Use this fieldset to define the list of JDBC JAR files to load.

JDBC Driver

String

None.

Specify or upload the JDBC driver to use.

JDBC URL*

Default Value: N/A
Example: jdbc:spark://adb-2409532680880038.18.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/2409532680880038/0326-212833-drier754;AuthMech=3;

String

None.

Enter the JDBC connection string for your DLP instance, using the syntax shown below. Learn more in Microsoft's JDBC and ODBC drivers and configuration parameters documentation.

jdbc:spark://dbc-ede87531-a2ce.cloud.databricks.com:443/default;transportMode=http;ssl=1;httpPath=
sql/protocolv1/o/6968995337014351/0521-394181-guess934;AuthMech=3;UID=token;PWD=<personal-access-token> 
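As a sketch of how the parts of this connection string fit together, the helper below assembles a URL in the newer jdbc:databricks:// format with token authentication. The class name is illustrative and all argument values are placeholders:

```java
// Illustrative sketch: assembles a DLP JDBC URL (newer jdbc:databricks://
// format) with token authentication. All values are placeholders.
public class DlpUrl {
    static String build(String host, int port, String database,
                        String httpPath, String token) {
        return "jdbc:databricks://" + host + ":" + port + "/" + database
                + ";transportMode=http;ssl=1"
                + ";httpPath=" + httpPath
                + ";AuthMech=3;UID=token;PWD=" + token;
    }
}
```

For example, `DlpUrl.build("dbc-ede87531-a2ce.cloud.databricks.com", 443, "default", "sql/protocolv1/o/6968995337014351/0521-394181-guess934", "<personal-access-token>")` reproduces the URL shown above, with the jdbc:databricks:// prefix.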

Use Token Based Authentication

 

Default value: Selected
Example: Not selected

Checkbox

None.

Select this checkbox to use token-based authentication for connecting to the target database (DLP) instance. Activates the Token field.

 

Token*

 

Default value: N/A
Example: <Encrypted>

String

When Use Token Based Authentication checkbox is selected.

Enter the token value for accessing the target database/folder path.

 

Database name*

 

Default Value: N/A
Example: Default

String

None.

Enter the name of the database to use by default. This database is used if you do not specify one in the Databricks Select or Databricks Insert Snaps.

 

Source/Target Location*

 

Default Value: None
Example: JDBC

Dropdown

None.

Select the source or target data location. If you load data from a source such as ADLS Gen2, the selected data warehouse serves as the target, and vice versa. The following options are available:

  • None: Select when using the read-only Snaps and you do not need to write anything to the target data warehouse.

  • Amazon S3

  • Azure Blob Storage

  • Azure Data Lake Storage Gen 2

  • DBFS

  • Google Cloud Storage

  • JDBC

Selecting JDBC activates the following fields:

  • Source JDBC URL

  • Source username

  • Source password

Source JDBC URL*

 

Default Value: N/A
Example: jdbc:snowflake://snaplogic.east-us-2.azure.snowflakecomputing.com

String

When Source/Target Location is JDBC.

Specify the JDBC URL of the source table.

Source username

 

Default Value: N/A
Example: db_admin

String

When Source/Target Location is JDBC.

Specify the username of the external source database.

Source password

 

Default Value: N/A
Example: M#!ikA8_0/&!

String

When Source/Target Location is JDBC.

Specify the password for the external source database.

Advanced Properties

Other parameters that you want to specify to configure the account.

URL properties

Use this fieldset to define account parameter names and their corresponding values. Click + to add a parameter and its value. Add each URL property-value pair in a separate row.

URL property name

 

Default Value: N/A
Example: queryTimeout

N/A

None.

Specify the name of the parameter for the URL property.

 

URL property value

 

Default Value: N/A
Example: 0

N/A

None.

Specify the value for the URL property parameter.
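Each URL property row effectively becomes a trailing name=value pair on the JDBC URL. The sketch below is illustrative only (the class and method names are not part of the Snap Pack), using the queryTimeout example from the table:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: each URL property row becomes a trailing
// name=value pair on the JDBC URL (queryTimeout is the example above).
public class UrlProps {
    static String apply(String baseUrl, Map<String, String> props) {
        StringBuilder sb = new StringBuilder(baseUrl);
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (sb.charAt(sb.length() - 1) != ';') sb.append(';');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }
}
```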

 

Batch size*

 

Default Value: N/A
Example: 3

Integer

None.

Specify the number of Databricks queries that you want to execute at a time.

  • If the Batch size is 1, the Snap executes each query as-is and skips batching (non-batch execution).

  • If the Batch size is greater than 1, the Snap performs regular batch execution.
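The grouping behavior can be sketched as a simple partitioning of the queued queries; this is an illustrative helper (not Snap Pack code) showing how a batch size of 1 degenerates into one-by-one execution:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: how a Batch size of N groups queries.
// A batch size of 1 degenerates into one-by-one (non-batch) execution.
public class Batcher {
    static <T> List<List<T>> partition(List<T> queries, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < queries.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    queries.subList(i, Math.min(i + batchSize, queries.size()))));
        }
        return batches;
    }
}
```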

Fetch size*

 

Default Value: 100
Example: 12

Integer

None.

Specify the number of rows a query must fetch for each execution. Large values could cause the server to run out of memory.

Min pool size*

 

Default Value: 3
Example: 0

Integer

None.

Specify the minimum number of idle connections that you want the pool to maintain at a time. 

 

Max pool size*

 

Default Value: 15
Example: 0

Integer

None.

Specify the maximum number of connections that you want the pool to maintain at a time.

 

Max life time*

 

Default Value: 60
Example: 50

Integer

None.

Specify the maximum lifetime of a connection in the pool, in seconds:

  • Ensure that the value you enter is a few seconds shorter than any database or infrastructure-imposed connection time limit.

  • 0 indicates an infinite lifetime, subject to the Idle Timeout value.

  • An in-use connection is never retired. Connections are removed only after they are closed.

Minimum value: 0
Maximum value: No limit

Idle Timeout*

 

Default Value: 5
Example: 4

Integer

None.

Specify the maximum amount of time in seconds that a connection is allowed to sit idle in the pool. 

0 indicates that idle connections are never removed from the pool.

Minimum value: 0
Maximum value: No limit

Checkout timeout*

 

Default Value: 10000
Example: 9000

Integer

None.

Specify the maximum time in milliseconds you want the system to wait for a connection to become available when the pool is exhausted.

Minimum value: 0
Maximum value: No limit
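The four pool fields above correspond to settings found in most JDBC connection pools. As an illustration only, the fragment below shows equivalent settings for a standalone HikariCP pool; these property names belong to HikariCP, not to the Snap account, and note the unit conversions:

```
# Illustrative HikariCP-style equivalents of the account's pool fields.
# Max life time and Idle Timeout are in seconds in the account, but
# HikariCP expects milliseconds; Checkout timeout is already in ms.
minimumIdle=3            # Min pool size
maximumPoolSize=15       # Max pool size
maxLifetime=60000        # Max life time (60 s)
idleTimeout=5000         # Idle Timeout (5 s)
connectionTimeout=10000  # Checkout timeout
```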

Snap Pack History


Related Links