Databricks - Unload
Overview
You can use this Snap to unload data from the tables in your Databricks Lakehouse Platform (DLP) instance to a file in a cloud location. You can save the file in CSV, JSON, PARQUET, ORC, or AVRO format, and as multiple partitions if needed. Ensure that you configure a valid target location in the Snap’s account for saving the file in the preferred format.
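Conceptually, the operation this Snap performs resembles a Databricks SQL statement like the one below. This is only a sketch with illustrative database, table, and path names; the Snap performs the equivalent work based on your Snap settings and account configuration, and the actual statement it generates may differ.

```sql
-- Illustrative sketch of an unload expressed in Databricks SQL.
-- The database name, table name, and S3 path are placeholders.
INSERT OVERWRITE DIRECTORY 's3://my-bucket/unload/employees'
USING CSV
OPTIONS ('header' = 'true')
SELECT * FROM hr_db.employees;
```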
Snap Type
The Databricks - Unload Snap is a Write-type Snap that writes data from a DLP table to a file in your preferred format.
Prerequisites
Valid access credentials to a DLP instance with adequate access permissions to perform the action in context.
Valid access to the external target location in one of the following: Azure Blob Storage, ADLS Gen2, DBFS, GCP, AWS S3, or another database (JDBC-compatible).
Support for Ultra Pipelines
Does not support Ultra Pipelines.
Limitations
Snaps in the Databricks Snap Pack do not support array, map, and struct data types in their input and output documents.
Known Issues
None.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description
---|---|---|---|---
Input | Document | | | A JSON document containing parameterized inputs for the Snap’s configuration, if needed.
Output | Document | | | A JSON document containing the unload request details and the result of the unload operation.
Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines.
Snap Settings
Asterisk ( * ): Indicates a mandatory field.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Add icon: Indicates that you can add fields in the fieldset.
Remove icon: Indicates that you can remove fields from the fieldset.
Field Name | Field Type | Field Dependency | Description
---|---|---|---
Label* Default Value: Databricks - Unload | String | None. | The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.
Enable SQL query editor Default Value: Not selected | Checkbox | None. | Select this checkbox to write the SQL statement instead of specifying the Database name and Table name.
SQL Query Editor* Default Value: None | String/Expression | Enable SQL query editor checkbox is selected. | Specify the SQL statement, usually a SELECT statement, that identifies the source data for the unload operation. Avoid specifying multiple SQL statements in this field. See the sketch after this table for an example.
Database name Default Value: None | String/Expression/Suggestion | Enable SQL query editor checkbox is not selected. | Specify the name of the source table’s database in plain text, as an expression, or using the suggestions list.
Table name* Default Value: None | String/Expression/Suggestion | Enable SQL query editor checkbox is not selected. | Specify the name of the source table in plain text, as an expression, or using the suggestions list. The unload operation is performed on this table when you run the Pipeline.
File format* Default Value: ORC | Dropdown list | None. | Select the file format for the file to which the source table data must be unloaded.
File Format Option List | You can use this fieldset to define the list of file format options to apply while unloading the data. Specify one file format option in each row. The available file format options vary based on the selected File format.
File format option Default Value: None | String/Expression/Suggestion | None. | Enter the file format option with the correct syntax, or choose from the list of options suggested for the selected file format. Edit the option to suit your unloading requirement without changing the expected syntax.
Overwrite File Default Value: Not selected | Checkbox | None. | Select this checkbox to overwrite the file if it already exists in the specified target path.
Target Path Default Value: None | String/Expression | None. | Enter the relative path for saving the target file that contains the unloaded data. Ensure that you have configured the Snap’s account with the details required to access this target location.
Repartition Default Value: None | String/Expression | None. | Specify the number of partitions (positive integers only) in which to store the unloaded data file in the target location.
Manage Queued Queries Default Value: Continue to execute queued queries when pipeline is stopped or if it fails. Example: Cancel queued queries when pipeline is stopped or if it fails | Dropdown list | None. | Select an option to determine whether the Snap should continue or cancel the execution of the queued Databricks SQL queries when you stop the Pipeline.
Snap Execution Default Value: Validate & Execute | Dropdown list | N/A | Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled.
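For reference, here is a minimal sketch of a statement you might enter in the SQL Query Editor when the Enable SQL query editor checkbox is selected. The database, table, column names, and filter are illustrative placeholders; substitute your own source data.

```sql
-- Illustrative SELECT for the SQL Query Editor (a single statement only).
-- hr_db, employees, the column names, and the filter are placeholders.
SELECT emp_id, emp_name, dept, hire_date
FROM hr_db.employees
WHERE hire_date >= '2020-01-01';
```

File format options such as 'header' = 'true' (for CSV) are supplied separately in the File Format Option List, not inside this statement.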
Troubleshooting
Error | Reason | Resolution |
---|---|---|
Missing property value | You have not specified a value for the required field where this message appears. | Ensure that you specify valid values for all required fields. |
Examples
Unloading data from a table in DLP to a CSV file
Consider the scenario where we want to unload the employee data for an organization from a DLP table to a CSV file. Let us see how we can achieve this using the Databricks - Unload Snap.
Here is the configuration for the Snap and its account:
For the account, we configure the Databricks Account to connect to our DLP instance using the basic user credentials.
As this is an unload operation, we choose the Target Location for saving the output CSV file as an AWS S3 location.
We use the Location credentials authorization type (Access key ID and Secret Key) to access the target folder in this S3 location.
We proceed with the default values for the fields in the Advanced properties fieldset.
We specify the Database name and Table name (source data) for performing the unload operation. Alternatively, we can use the SQL query editor to specify these details in the form of an SQL statement.
We set the File format to CSV because we want a CSV file at the end of the unload operation.
We specify a File Format Option ‘header’ = ‘true’ to include the header (column names from the table) in the output file.
While specifying the target AWS S3 location path to store the CSV file, we also choose to:
Not partition the data (keep all the data in one CSV file), but
Overwrite any existing files in that folder.
Upon running this Pipeline, we can find the CSV file in the specified target path, without any partitions. The Snap output preview, if configured, displays the details of the unload request and the status/result of the operation (Finished/Failed).
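If you want to spot-check the result from a Databricks notebook or SQL editor, a direct path query such as the following can help (the S3 path is an illustrative placeholder). Note that direct path queries do not interpret the CSV header option, so the first row returned contains the column names written by the 'header' = 'true' option.

```sql
-- Illustrative spot-check of the unloaded CSV file(s); the path is a placeholder.
SELECT * FROM csv.`s3://my-bucket/unload/employees/` LIMIT 10;
```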