
Overview

You can use this Snap to execute SQL queries in the target database: Snowflake, Redshift, Azure Synapse, Databricks Lakehouse Platform, or BigQuery. You can run Data Definition Language (DDL), Data Control Language (DCL), and Data Manipulation Language (DML) statements using this Snap. Data Query Language (DQL) statements, such as those containing SELECT or WITH, are not supported.
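For instance, each row of the Snap's SQL Statements field set can hold one statement of any supported type. The following Snowflake-flavored statements use illustrative table, column, and role names:

  CREATE TABLE sales_backup (id INT, amount FLOAT);        -- DDL
  GRANT SELECT ON TABLE sales_backup TO ROLE analyst;      -- DCL
  INSERT INTO sales_backup SELECT id, amount FROM sales;   -- DML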



Prerequisites

  • A valid account with access permissions to connect to the target database: Snowflake, Redshift, Azure Synapse, Databricks Lakehouse Platform, or BigQuery.

If you want to use the COPY INTO command for loading data into the target database, you must pass (expose) these account credentials inside the SQL statement. We therefore recommend that you consider using the ELT Load Snap as an alternative.
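For example, a Redshift-style bulk load (illustrative bucket path and placeholder keys) embeds the credentials in plain text within the statement:

  COPY target_table
  FROM 's3://my-bucket/data/'
  CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>';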


Limitations and Known Issues

None.

Snap Input and Output


Input/Output: Input
Type of View: Document
Number of Views: Min: 0, Max: 1
Examples of Upstream Snaps: ELT Select, ELT Insert Select, ELT Filter
Description: Optional. An upstream Snap is not mandatory. Use the input view to connect the Snap as the terminating Snap in the Pipeline.

Input/Output: Output
Type of View: Document
Number of Views: Min: 0, Max: 1
Examples of Downstream Snaps: ELT Select, ELT Transform
Description: Optional. A downstream Snap is not mandatory. Use the output view to connect the Snap as the first Snap in the Pipeline.

Snap Settings

  • Click the = (Expression) button in the Snap's configuration, if available, to define the corresponding field value using expression language and Pipeline parameters. 

  • Field names marked with an asterisk ( * ) are mandatory.


Label*

Type: String
Field Dependency: None.
Description: The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.
Default Value: N/A
Example: ELT Execute for SF

SQL Statements*

Type: Field set
Field Dependency: None.
Description: Use this field set to define your SQL statements, one in each row. Click + to add a new row. You can add as many SQL statements as you need.

SQL Statement Editor*

Type: String/Expression
Field Dependency: None.
Description: Enter the SQL statement to run in this field. The SQL statement must follow the syntax stipulated by the target database (Snowflake, Redshift, Azure Synapse, Databricks Lakehouse Platform, or BigQuery).

You can include inline comments and multi-line comments before, inside, or after your statement in this editor. The editor supports all standard SQL comment syntaxes:

  -- comment text
  # comment text
  /* multi-line
  comment text */
  // comment text

Default Value: N/A
Example: drop table base_01_oldcodes;
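For instance, a single statement row (hypothetical table name) can mix several of these comment styles:

  -- clean up the staging copy before the next load
  DROP TABLE IF EXISTS stage_orders;  /* safe to re-run */
  // end of cleanup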

Troubleshooting


Error: Failure: DQL statements are not allowed.

Reason: The ELT Execute Snap does not support Data Query Language (DQL); statements containing SELECT and WITH are therefore not allowed.

Resolution: Remove any DQL statements (containing SELECT, WITH) and enter one of the following statement types:

  • Data Definition Language (DDL): CREATE, ALTER, DROP, TRUNCATE, RENAME, and so on.

  • Data Control Language (DCL): GRANT, REVOKE

  • Data Manipulation Language (DML): INSERT, UPDATE, DELETE, MERGE, CALL, and so on.
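For instance (illustrative table name), the first statement below is rejected as DQL, while the second is an accepted DML statement:

  SELECT * FROM sales;                                  -- rejected: DQL
  DELETE FROM sales WHERE order_date < '2020-01-01';    -- accepted: DML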

Error: No matching signature for operator = for argument types: INT64. (Target CDW: BigQuery)

Reason: BigQuery treats the values of Pipeline parameters as String by default. Passing a value with any other data type causes this error (INT64 in this example).

Resolution: Cast any non-String Pipeline parameter used in your SQL statement to its target data type for the Snap to work as expected. For example, use SELECT * FROM pipe.param01 WHERE id = cast( _id as INT ); instead of SELECT * FROM pipe.param01 WHERE id = _id;
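The same casting applies to the DML statements this Snap accepts. A hypothetical sketch (the table and parameter names are illustrative):

  -- Fails: _id is passed as a String, but the id column is INT64
  DELETE FROM pipe.param01 WHERE id = _id;

  -- Works: cast the parameter to the column's data type
  DELETE FROM pipe.param01 WHERE id = CAST(_id AS INT64);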

Examples


Sample Queries for the ELT Execute Snap

CREATE OR REPLACE WAREHOUSE me_wh WITH warehouse_size='X-LARGE';
CREATE OR REPLACE TABLE "TEST_DATA".NEW_DATA(VARCHAR_DATA VARCHAR(10));
CREATE OR REPLACE TABLE "TEST_DATA".DT_EXECUTE_01(VARCHAR_DATA VARCHAR(100),TIME_DATA TIME,FLOAT_DATA FLOAT,BOOLEAN_DATA BOOLEAN,NUMBER_DATA NUMBER(38,0),DATE_DATA DATE);
TRUNCATE TABLE IF EXISTS "public".simple_data_02;
INSERT OVERWRITE INTO "BIGDATAQA"."TEST_DATA"."OUT_ELT_EXECUTE_SF_003" SELECT * FROM ( SELECT * FROM "BIGDATAQA"."TEST_DATA"."DT_EXECUTE_03" )

Example 1: Updating a Target Table Based on Updates to Another Table


The following Pipeline periodically updates a target (backup) table, OUT_ELT_EXECUTE_SF_003, based on updates to a source table, DT_EXECUTE_03. Both tables are in a Snowflake database, and we use data views from this database to show the updates that the Pipeline makes in the target table.

There are two steps to achieve this functionality using the ELT Execute Snap:

  1. Create a Pipeline with only the ELT Execute Snap for performing the periodic update.

  2. Create a Scheduled Task from this Pipeline to trigger a job at specific times of the day, as needed. See Tasks Page for information on creating Scheduled Tasks from Pipelines.

Before we create the Pipeline:

Source Table: DT_EXECUTE_03

Target Table: OUT_ELT_EXECUTE_SF_003

We configure the ELT Execute Snap to run a DML query, as follows. 
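The DML statement is the INSERT OVERWRITE query from the sample queries above:

  INSERT OVERWRITE INTO "BIGDATAQA"."TEST_DATA"."OUT_ELT_EXECUTE_SF_003"
  SELECT * FROM ( SELECT * FROM "BIGDATAQA"."TEST_DATA"."DT_EXECUTE_03" );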

Once we create the Scheduled Task (after saving this Pipeline), the task runs as scheduled. Then, the ELT Execute Snap copies the data from the source table and inserts into the target (backup) table.

After the Scheduled Task/Pipeline is run:

Target Table: OUT_ELT_EXECUTE_SF_003

Download this Pipeline.

Example 2: Using One ELT Execute Snap to Create and Fill a Table

In this example Pipeline, we create a new table in the Redshift database and fill data into this table using an ELT Execute Snap. We later read the data from this table using an ELT Select Snap.

Let us observe the configuration of the ELT Execute Snap (first Snap in the above Pipeline).

ELT Execute Snap

Snap Output

We add two SQL statements to the SQL Statements field set: one that creates (or overwrites) a table and another that inserts a row into the same table. The ELT Execute Snap does not provide a data preview; validation only shows a placeholder SQL statement indicating that the Snap validated successfully. The Snap executes the SQL statements in real time when we run the Pipeline.
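The two statements follow this general shape; the table and column names below are illustrative, not the exact ones used in the downloadable Pipeline:

  CREATE TABLE IF NOT EXISTS public.elt_execute_demo (id INT, note VARCHAR(50));
  INSERT INTO public.elt_execute_demo VALUES (1, 'first row');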

We connect an ELT Select Snap to the ELT Execute Snap to read the data from the newly created table in the Redshift database.

ELT Select Snap

Snap Output

Download this Pipeline.


Downloads

  1. Download and import the Pipeline into SnapLogic.
  2. Configure Snap accounts as applicable.
  3. Provide Pipeline parameters as applicable.


See Also

  • ELT Load Snap
  • ELT Select Snap
  • ELT Insert Select Snap
  • Tasks Page