You can use this Snap to execute a Databricks SQL DELETE statement based on specific conditions. Exercise caution when using this Snap: if you run it without specifying a WHERE condition for the DELETE statement, it deletes every row in the table.

The Databricks - Delete Snap is a write-type Snap that deletes rows from a target DLP (Databricks Lakehouse Platform) table.
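For reference, the statement the Snap issues takes the general form below. This is an illustrative sketch only; the database, table, and column names are hypothetical placeholders, not values from this page.

```sql
-- Illustrative shape of the Databricks SQL DELETE statement the Snap executes.
-- Database, table, and column names here are hypothetical.
DELETE FROM my_database.my_table
WHERE some_column = 'some_value';
```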
Prerequisites:

- Valid access credentials to a DLP instance with adequate access permissions to perform the action in context.
- Valid access to the external source data in one of the following: Azure Blob Storage, ADLS Gen2, DBFS, GCP, AWS S3, or another database (JDBC-compatible).
Limitations:

- Does not support Ultra Pipelines.
- Snaps in the Databricks Snap Pack do not support array, map, and struct data types in their input and output documents.
Known issue: When you add an input view to this Snap, ensure that you configure the Batch size as 1 in the Snap's account configuration. For any other batch size, the Snap fails with the exception: Multi-batch parameter values are not supported for this query type.
| Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
|---|---|---|---|---|
| Input | Document | | | A JSON document containing the reference to the table and the rows to be deleted. |
| Output | Document | | | A JSON document containing the result of the delete operation on the target table. |
| Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines. |
| Field Name | Field Type | Description |
|---|---|---|
| Label*<br>Default Value: Databricks - Delete | String | The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline. |
| Database name<br>Default Value: None. | String/Expression/Suggestion | Enter the name of the DLP database containing the table from which the DELETE statement deletes existing rows. |
| Table name*<br>Default Value: None. | String/Expression/Suggestion | Enter the name of the table from which the DELETE statement deletes existing rows. |
| Delete condition (deletes all records from table if left blank)<br>Default Value: N/A | String/Expression/Suggestion | Specify a valid WHERE clause for the DELETE statement to filter the rows to delete from the target table. If you leave this field blank, the Snap deletes all the records from the table. See the illustrative statements after this table. |
| Number of Retries<br>Minimum value: 0<br>Default Value: 0 | Integer | Specifies the maximum number of retry attempts when the Snap fails to write. |
| Retry Interval (seconds)<br>Minimum value: 1<br>Default Value: 1 | Integer | Specifies the minimum number of seconds the Snap must wait before each retry attempt. |
| Manage Queued Queries<br>Default Value: Continue to execute queued queries when pipeline is stopped or if it fails<br>Example: Cancel queued queries when pipeline is stopped or if it fails | Dropdown list | Select whether the Snap should continue or cancel the execution of the queued Databricks SQL queries when you stop the Pipeline. If you select Cancel queued queries when pipeline is stopped or if it fails, the read queries under execution are cancelled, whereas the write queries under execution are not; Databricks internally determines which queries are safe to cancel and cancels those queries. Due to an issue with DLP, aborting an ELT Pipeline validation (with preview data enabled) aborts only those SQL statements that retrieve data using bind parameters, while all static statements (that use values instead of bind parameters) persist. To avoid this issue, ensure that you always configure your Snap settings to use bind parameters inside its SQL queries. |
| Snap Execution<br>Default Value: Execute only | Dropdown list | Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
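To illustrate the Delete condition field referenced above: leaving it blank removes every row, while a valid WHERE clause restricts the deletion to matching rows. A minimal sketch, with hypothetical database, table, and column names:

```sql
-- Delete condition left blank: the Snap deletes every row in the table.
DELETE FROM hr.employees;

-- Delete condition set to "status = 'inactive'": only matching rows are deleted.
DELETE FROM hr.employees
WHERE status = 'inactive';
```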
| Error | Reason | Resolution |
|---|---|---|
| Missing property value | You have not specified a value for the required field where this message appears. | Ensure that you specify valid values for all required fields. |
Consider a scenario where we want to delete the records of certain employees from an intermediate data location that runs on DLP. We can achieve this with a Pipeline containing the Databricks - Delete Snap.

We configure the Snap to delete the employee rows from the company_employees table in our DLP instance if their joining date is before Jan 01, 2010. We also configure an appropriate account for the Snap to connect to the target DLP instance.

Upon validation, the Pipeline deletes the rows that satisfy the specified condition and returns the status of the operation in the Snap's output.
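In this scenario, the Snap's configuration is equivalent to issuing a statement of the following form. The column name joining_date is an assumption for illustration; substitute the actual column in your table.

```sql
-- Equivalent Databricks SQL for the example scenario.
-- The column name joining_date is hypothetical.
DELETE FROM company_employees
WHERE joining_date < '2010-01-01';
```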