ELT Execute
Overview
You can use this Snap to execute SQL queries in the target database—Snowflake, Redshift, Azure Synapse, Databricks Lakehouse Platform, or BigQuery. You can run the following types of queries using this Snap:
Data Definition Language (DDL) queries
Data Manipulation Language (DML) queries
Data Control Language (DCL) queries
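For instance, the three query types can look like the following sketch (the table, column, and role names are hypothetical; adjust the syntax to your target CDW):
CREATE TABLE sales_eu (id INT, amount FLOAT);            -- DDL
UPDATE sales_eu SET amount = amount * 1.1 WHERE id = 1;  -- DML
GRANT SELECT ON TABLE sales_eu TO ROLE analyst_role;     -- DCL (Snowflake-style syntax)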
Prerequisites
Valid accounts and access permissions to connect to the following:
Source: AWS S3, Azure Cloud Storage, or Google Cloud Storage
Target: Snowflake, Redshift, Azure Synapse, Databricks Lakehouse Platform, or BigQuery
If you want to use the COPY INTO command for loading data into the target database, you must pass (expose) these account credentials inside the SQL statement. Hence, we recommend that you consider using the ELT Load Snap as an alternative.
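For example, a Snowflake COPY INTO statement that reads from S3 has to embed the storage credentials in the statement itself. A minimal sketch, assuming a hypothetical target table and S3 bucket; the key values are placeholders:
COPY INTO target_table
FROM 's3://my-bucket/data/'
CREDENTIALS = (AWS_KEY_ID = '<aws_key_id>' AWS_SECRET_KEY = '<aws_secret_key>')  -- credentials appear in clear text in the statement
FILE_FORMAT = (TYPE = CSV);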
Limitations
This Snap does not support multi-statement transaction rollback for any of the DDL, DCL, or DML statements specified.
Each statement is auto-committed upon successful execution. In the event of a failure, the Snap can roll back only the updates corresponding to the failed statement execution. All previous statements (during that Pipeline execution runtime) that ran successfully are not rolled back.
You cannot run Data Query Language (DQL) queries using this Snap, for example, queries that use the SELECT and WITH constructs.
Use this Snap either at the beginning or at the end of the Pipeline.
This Snap executes the SQL query only during Pipeline Execution. It does NOT perform any action (including showing a preview) during Pipeline validation.
The ELT Snap Pack does not support the Legacy SQL dialect of Google BigQuery. We recommend that you use only BigQuery's Standard SQL dialect in this Snap.
Known Issues
- When your Databricks Lakehouse Platform instance uses Databricks Runtime Version 8.4 or lower, ELT operations involving large amounts of data might fail due to the default memory allocation of only 536870912 bytes (512 MB). This issue does not occur if you are using Databricks Runtime Version 9.0.
- When you configure an ELT Merge Into Snap to perform an Update or Delete operation, or an ELT Execute Snap with a MERGE INTO statement that performs an Update or Delete operation, on a Databricks Lakehouse Platform cluster, it may return an error if multiple source rows attempt to update or delete the same target row. To prevent such errors, preprocess the source table so that it contains only unique rows (see the example below).
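One way to do this preprocessing is to stage a deduplicated copy of the source that keeps a single row per merge key. A sketch in DLP (Spark) SQL, assuming hypothetical tables src_raw and src_dedup with a merge key id:
CREATE OR REPLACE TABLE src_dedup AS
SELECT id, amount, updated_at
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn  -- keep the latest row per id
  FROM src_raw
) t
WHERE rn = 1;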
- ELT Pipelines created prior to the 4.24 GA release using one or more of the ELT Insert Select, ELT Merge Into, ELT Load, and ELT Execute Snaps may fail to show the expected preview data due to a common change made across the Snap Pack for the current release (4.26 GA). In such a scenario, replace the Snap in your Pipeline with the same Snap from the Asset Palette and configure the Snap's Settings again.
- The Snap's preview data (during validation) contains a value with precision higher than that of the actual floating point value (float data type) stored in the Delta table. For example, 24.123404659344 instead of 24.1234. However, the Snap reflects the exact values during Pipeline executions.
- In any of the supported target databases, this Snap does not appropriately identify or render column references beginning with an _ (underscore) inside SQL queries/statements that use the following constructs and contexts (the Snap works as expected in all other scenarios):
- WHERE clause (ELT Filter Snap)
- WHEN clause
- ON condition (ELT Join, ELT Merge Into Snaps)
- HAVING clause
- QUALIFY clause
- Insert expressions (column names and values in ELT Insert Select, ELT Load, and ELT Merge Into Snaps)
- Update expressions list (column names and values in ELT Merge Into Snap)
- Secondary AND condition
- Inside SQL query editor (ELT Select and ELT Execute Snaps)
Workaround
As a workaround while using these SQL query constructs, you can:
- Precede this Snap with an ELT Transform Snap to re-map the '_' column references to suitable column names (that do not begin with an _) and reference the new column names in the next Snap, as needed.
- In the case of Databricks Lakehouse Platform, where CSV files do not have a header (column names), a simple query like
SELECT * FROM CSV.`/mnt/csv1.csv`
returns default names such as _c0, _c1, _c2 for the columns, which this Snap cannot interpret. To avoid this scenario, you can:
- Write the data in the CSV file to a DLP table beforehand, as in:
CREATE TABLE csvdatatable (a1 int, b1 int, …) USING CSV `/mnt/csv1.csv`
where a1, b1, and so on are the new column names.
- Then, read the data from this new table (with column names a1, b1, and so on) using a simple SELECT statement, as shown below.
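For instance, once csvdatatable exists, the final read can be a plain statement in an ELT Select Snap's SQL query editor (the column names come from the CREATE TABLE example above):
SELECT a1, b1 FROM csvdatatable;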
- In the case of Databricks Lakehouse Platform, the preview data of all ELT Snaps (during validation) contains a value with precision higher than that of the actual floating point value (float data type) stored in the Delta table. For example, 24.123404659344 instead of 24.1234. However, the Snaps reflect the exact values during Pipeline executions.
Snap Input and Output
Input/Output | Type of View | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | | | An upstream Snap is not mandatory. Use the input view to connect the Snap as the terminating Snap in the Pipeline. |
Output | Document | | | A downstream Snap is not mandatory. Use the output view to connect the Snap as the first Snap in the Pipeline. |
Snap Settings
Click the = (Expression) button in the Snap's configuration, if available, to define the corresponding field value using expression language and Pipeline parameters.
You can use the SQL Expressions and Functions supported for ELT for this purpose. This list is common to all supported target CDWs. You can also use other expressions/functions that your target CDW supports.
- Field names marked with an asterisk ( * ) in the table below are mandatory.
Field Name | Type | Field Dependency | Description |
---|---|---|---|
Label* | String | None. | The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline. Default Value: NA Example: ELT Execute for SF |
SQL Statements* | Field set | None. | Use this field set to define your SQL statements, one in each row. Click + to add a new row. You can add as many SQL statements as you need. |
SQL Statement Editor* | String/Expression | None. | Enter the SQL statement to run in this field. The SQL statement must follow the SQL syntax stipulated by the target database—Snowflake, Redshift, Azure Synapse, Databricks Lakehouse Platform, or BigQuery. SQL comments are allowed: you can include inline comments and multi-line comments before, inside, or after your statement in this editor, using any standard SQL comment syntax. Default Value: NA Example: drop table base_01_oldcodes; |
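To illustrate the comment support described above, here is a sketch of a single statement for the SQL Statement Editor; the table name stg_orders_old is hypothetical:
/* Multi-line comment before the statement:
   remove the staging table left over from the previous run. */
DROP TABLE IF EXISTS stg_orders_old; -- inline comment after the statement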
Troubleshooting
Error | Reason | Resolution |
---|---|---|
Failure: DQL statements are not allowed. | The ELT Execute Snap does not support Data Query Language (DQL), and hence statements containing SELECT or WITH query constructs fail. | Remove any DQL statements (containing SELECT or WITH constructs) from this Snap. |
No matching signature for operator = for argument types: INT64. (Target CDW: BigQuery) | BigQuery treats the values of Pipeline parameters as String, by default. Passing a value with any other data type causes this error (INT64 in this example). | Cast any non-String Pipeline parameter used in your SQL statement to its target data type for the Snap to work as expected, for example, by using a CAST expression. |
Keyword RANGE is not acceptable as a column name. (CDW: Databricks Lakehouse Platform) | This can happen with any reserved keyword if it is used as a column/field name in the table to be created. | Ensure that you enclose such column names (reserved keywords) in backticks (`). For example: `RANGE` STRING. |
[Simba][SparkJDBCDriver](500051) ERROR processing query/statement. Error Code: 0 Cannot create table (Target CDW: Databricks Lakehouse Platform) | The specified location already contains a table, so the Snap/Pipeline cannot overwrite this table with the target table as needed. | Take appropriate action on the existing table before running your Pipeline again (to create another Delta table at this location): move or drop the existing table from the schema manually, either by accessing the DBFS through a terminal or by using a Python notebook. |
[Simba][SparkJDBCDriver](500051) ERROR processing query/statement. Error Code: 0 Cannot create table (CDW: Databricks Lakehouse Platform) | A non-Delta table that currently exists is corrupted and needs to be dropped from the schema before creating a Delta-formatted table. However, this corrupted table can only be dropped manually, by accessing the DBFS through a terminal; the Pipeline cannot perform this operation. | Drop the corrupted table manually by accessing the DBFS through a terminal, and then try creating the new table in Delta format (using the Pipeline). |
Syntax error when a database/schema/table name contains a hyphen (-), such as in default.schema-1.order-details (CDW: Azure Synapse) | Azure Synapse expects any object name containing hyphens to be enclosed between double quotes as in "<object-name>". | Ensure that you use double quotes for every object name that contains a hyphen when your target database is Azure Synapse. For example: default."schema-1"."order-details". |
Cannot execute bigger SQL queries. (CDW: Azure Synapse) | The SQL query exceeds the maximum allowed limit of 32500 Literals per Query. | Execute two or more SQL queries in place of one bigger query. See http://aka.ms/dwsoftlimits for more details. |
Fail to populate 1000 rows. Not enough resources for query planning - too many subqueries or query is too complex. (CDW: BigQuery) | The Snap fails to execute if the query is too complex. | Split one large SQL query into two or more simpler ones, or request a higher quota limit from Google BigQuery. |
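As a sketch of the BigQuery CAST resolution above, a DML statement that compares an INT64 column with a Pipeline parameter value (which arrives as a String) can wrap that value in a CAST; the table, column, and the literal '1001' standing in for the parameter value are hypothetical:
DELETE FROM sales.orders WHERE order_id = CAST('1001' AS INT64);  -- cast the String value to INT64 before comparing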
Sample Queries for the ELT Execute Snap
Example 1: Updating a Target Table Based on Updates to Another Table
The following Pipeline periodically updates a target (backup) table, OUT_ELT_EXECUTE_SF_003, based on the updates made to another (source) table, DT_EXECUTE_03. These tables are present in a Snowflake database, and we use data views from this database to present the updates that the Pipeline makes to the target table.
There are two steps to achieve this functionality using the ELT Execute Snap:
Create a Pipeline with only the ELT Execute Snap for performing the periodic update.
Create a Scheduled Task from this Pipeline to trigger a job at specific times of the day, as needed. See Tasks Page for information on creating Scheduled Tasks from Pipelines.
This task regularly looks into the DT_EXECUTE_03 table for updates and inserts the latest data from this table into the target (backup) table.
Before we create the Pipeline:
Source Table: DT_EXECUTE_03
Target Table: OUT_ELT_EXECUTE_SF_003
We configure the ELT Execute Snap to run a DML query, as follows.
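Such a periodic update can be expressed, for example, as an INSERT INTO ... SELECT statement. A sketch in Snowflake SQL; only the two table names come from this example, and the actual statement in the Pipeline may differ:
INSERT INTO OUT_ELT_EXECUTE_SF_003  -- target (backup) table
SELECT * FROM DT_EXECUTE_03;        -- source table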
Once we create the Scheduled Task (after saving this Pipeline), the task runs as scheduled. The ELT Execute Snap then copies the data from the source table and inserts it into the target (backup) table.
After the Scheduled Task/Pipeline is run:
Target Table: OUT_ELT_EXECUTE_SF_003
Example 2: Using one ELT Execute Snap to Create and Fill a Table
In this example Pipeline, we create a new table in the Redshift database and fill data into this table using an ELT Execute Snap. We later read the data from this table using an ELT Select Snap.
Let us observe the configuration of the ELT Execute Snap (first Snap in the above Pipeline).
ELT Execute Snap
Snap Output
We have added two SQL statements to the SQL Statements field set—one for creating/overwriting a table and another for inserting a row into the same table. The ELT Execute Snap does not show a data preview, except for a placeholder SQL statement that indicates the Snap is validated successfully. The Snap executes the SQL queries in real time when we run the Pipeline.
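For instance, the two statements could be similar to the following sketch (Redshift syntax; the table name matches the ELT Select query shown later in this example, while the column definitions and the inserted values are assumptions; adjust the name qualification to what your Redshift setup accepts):
CREATE TABLE dev.public.new_table_trg (id INT, name VARCHAR(50));  -- creates the target table
INSERT INTO dev.public.new_table_trg VALUES (1, 'first row');      -- inserts one row into the same table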
We connect an ELT Select Snap to the ELT Execute Snap to read the data from the newly-created table in the Redshift database. In this Snap:
We use the same ELT Database account that we use for the previous Snap.
We define/select the values for the database, schema, and table name to identify the table that the previous Snap is configured to create.
Alternatively, we can enable the SQL query editor and include the statement:
select * from dev.public.new_table_trg;
Note that we cannot run this DQL query using the ELT Execute Snap.
ELT Select Snap
Snap Output
Downloads
Important Steps to Successfully Reuse Pipelines
- Download and import the Pipeline into SnapLogic.
- Configure Snap accounts as applicable.
- Provide Pipeline parameters as applicable.