Overview

You can use this Snap to execute a Snowflake SQL Insert statement with given values. Document keys will be used as the columns to insert into, and their values will be the values inserted. Missing columns from the document will have null values inserted into them. 
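The key-to-column mapping described above can be sketched as follows. This is an illustrative Python sketch of the behavior, not the Snap's actual implementation; the table and column names are hypothetical.

```python
# Illustrative sketch: document keys become insert columns, and any column
# missing from the document gets NULL (None). Not the Snap's internal code.

def build_insert(table, columns, doc):
    """Map document keys to table columns; missing keys become NULL."""
    values = [doc.get(col) for col in columns]  # absent keys -> None (NULL)
    placeholders = ", ".join(["%s"] * len(columns))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    return sql, values

sql, params = build_insert(
    "employees", ["id", "name", "email"],
    {"id": 1, "name": "Ada"}  # no "email" key -> NULL is inserted
)
print(sql)     # INSERT INTO employees (id, name, email) VALUES (%s, %s, %s)
print(params)  # [1, 'Ada', None]
```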

Snap Type

The Snowflake - Insert Snap is a Write-type Snap that inserts new records into Snowflake database tables.

Prerequisites

Security Prerequisites

You should have the following permissions in your Snowflake account to execute this Snap: 

The following commands enable minimum privileges in the Snowflake Console:

grant usage on database <database_name> to role <role_name>;
grant usage on schema <database_name>.<schema_name> to role <role_name>;

grant "CREATE TABLE" on database <database_name> to role <role_name>;
grant "CREATE TABLE" on schema <database_name>.<schema_name> to role <role_name>;

For more information on Snowflake privileges, refer to Access Control Privileges.

Internal SQL Commands

This Snap uses the INSERT command internally. It enables updating a table by inserting one or more rows into the table.

Use the Snowflake - Bulk Load Snap for efficient bulk loading of records instead of the Snowflake - Insert Snap. The Snowflake Bulk Snaps use Snowflake's Bulk API, which improves performance.

Support for Ultra Pipelines

Works in Ultra Task Pipelines if Batch size is set to 1 in the Snowflake account. However, we recommend that you do not use this Snap in an Ultra Pipeline.

Limitations

Known Issues

Because of performance issues, all Snowflake Snaps now ignore the Cancel queued queries when pipeline is stopped or if it fails option for Manage Queued Queries, even when selected. Snaps behave as though the default Continue to execute queued queries when the Pipeline is stopped or if it fails option were selected.

Snap Views

Type

Format

Number of Views

Examples of Upstream and Downstream Snaps

Description

Input

Document

  • Min: 1

  • Max: 2

  • JSON Generator

  • Binary to Document

This Snap has one document input view by default. 

A second view can be added for table metadata as a document so that the table is created in Snowflake with a similar schema as the source table. This schema is usually from the second output of a database Select Snap. If the schema is from a different database, there is no guarantee that all the data types would be properly handled.
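A table-metadata document of this kind can be sketched as follows. The shape shown here is illustrative only, not the exact format the database Select Snap emits; the table and column definitions are hypothetical.

```python
# Illustrative sketch (not the exact SnapLogic wire format) of a table-metadata
# document that a second input view might carry: column names and types the
# Snap could use to create the target table in Snowflake.
table_metadata = {
    "table": "employees",
    "columns": [
        {"name": "ID",    "type": "NUMBER"},
        {"name": "NAME",  "type": "VARCHAR(100)"},
        {"name": "EMAIL", "type": "VARCHAR(255)"},
    ],
}

# A CREATE TABLE statement could be derived from such metadata:
ddl = "CREATE TABLE {} ({})".format(
    table_metadata["table"],
    ", ".join(f"{c['name']} {c['type']}" for c in table_metadata["columns"]),
)
print(ddl)
```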

Output

Document

  • Min: 0

  • Max: 1

  • Mapper

  • Snowflake Execute

This Snap has at most one output view.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current pipeline execution if the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

Field Name

Field Type

Field Dependency

Description

Label*

Default Value: Snowflake - Insert
Example: Load Employee Tables

String

N/A

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema name

Default Value: N/A
Example: schema_demo



String/Expression

N/A

Specify the database schema name. If it is not defined, the suggestion for Table name retrieves table names from all schemas. The property is suggestible and retrieves the available database schemas during suggest values.

note

The values can be passed using the pipeline parameters but not the upstream parameter.

Table name*

Default Value: N/A
Example: employees_table

String/Expression

N/A

Specify the name of the table on which to execute the insert operation.

note
  • The values can be passed using the pipeline parameters but not the upstream parameter.

  • A table name with special characters is not supported.

Create table if not present

Default value: Deselected


Checkbox

N/A

Select this checkbox to automatically create the target table if it does not exist.

Iceberg table

Default Value: Deselected

Checkbox

Appears when you select Create table if not present.

Select this checkbox to create an Iceberg table with the Snowflake catalog. Learn more about how to create an Iceberg table with Snowflake as the Iceberg catalog.

External volume*

Default Value: N/A
Example:

String/Expression/Suggestion

Appears when you select the Iceberg table checkbox.

Specify the external volume for the Iceberg table. Learn more about how to configure an external volume for Iceberg tables.

Base location*

 

Default Value: N/A
Example: iceberg_s3_stage

String/Expression

Appears when you select the Iceberg table checkbox.

Specify the Base location for the Iceberg table.

note

The base location is the relative path from the external volume.

Preserve case sensitivity

Default Value: Deselected

Checkbox

N/A

Select this check box to preserve the case sensitivity of the column names.

  • If you do not select Preserve case sensitivity, the input documents are loaded into the target table when the key names in the input documents match the target table column names, ignoring case.

  • If you include a second input view, selecting Preserve case sensitivity has no effect on the column names of the target table, because the Snap uses the metadata from the second input view.
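Case preservation in Snowflake comes down to identifier quoting: Snowflake folds unquoted identifiers to upper case, while double-quoted identifiers keep their exact case. The helper below is an illustrative sketch of that rule, not the Snap's code.

```python
def column_identifier(name, preserve_case=False):
    # Snowflake folds unquoted identifiers to upper case; double-quoting
    # preserves the exact case. (Illustrative helper, not the Snap's code.)
    if preserve_case:
        # Escape embedded double quotes by doubling them, per SQL rules.
        return '"' + name.replace('"', '""') + '"'
    return name

print(column_identifier("FirstName", preserve_case=True))   # "FirstName"
print(column_identifier("FirstName", preserve_case=False))  # FirstName
```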

Number of retries

Default Value: 0
Example: 3

Integer/Expression

N/A

Specify the maximum number of attempts to be made to receive a response. The request is terminated if the attempts do not result in a response.

If the value is larger than 0, the Snap first downloads the target file into a temporary local file. If any error occurs during the download, the Snap waits for the time specified in the Retry interval and attempts to download the file again from the beginning. When the download is successful, the Snap streams the data from the temporary file to the downstream Pipeline. All temporary local files are deleted when they are no longer needed.

note

Ensure that the local drive has sufficient free disk space to store the temporary local file.

Minimum value: 0

Retry interval (seconds)

Default Value: 1
Example: 10

Integer/Expression

N/A

Specify the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception. 
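The interaction of Number of retries and Retry interval can be sketched as a simple retry loop. This is an illustrative Python sketch of the pattern, not the Snap's implementation; the `flaky` operation is a hypothetical stand-in for a request that fails transiently.

```python
import time

def with_retries(operation, retries=3, interval=1.0):
    """Run operation; on failure, retry up to `retries` more times,
    waiting `interval` seconds between attempts. Mirrors the
    Number of retries / Retry interval settings (illustrative only)."""
    attempt = 0
    while True:
        try:
            return operation()
        except Exception:
            attempt += 1
            if attempt > retries:
                raise  # retries exhausted; propagate the last error
            time.sleep(interval)

# Example: a hypothetical operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky, retries=3, interval=0.01))  # ok
```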

Manage Queued Queries

Default Value: Continue to execute queued queries when the Pipeline is stopped or if it fails
Example: Cancel queued queries when the Pipeline is stopped or if it fails

Dropdown list

N/A

Select whether the Snap should continue or cancel the execution of the queued Snowflake Execute SQL queries when you stop the pipeline.

note

If you select Cancel queued queries when the pipeline is stopped or if it fails, then the read queries under execution are canceled, whereas the write queries under execution are not canceled. Snowflake internally determines which queries are safe to be canceled and cancels those queries.

Snap Execution

Default Value: Execute only
Example: Validate & Execute

Dropdown list

N/A

Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled.

note

Instead of building multiple Snaps with interdependent DML queries, we recommend that you use the Multi Execute Snap.

In a scenario where a downstream Snap depends on the data processed by an upstream Database Bulk Load Snap, use the Script Snap to add a delay so that the data becomes available.

For example, when performing create, insert, and delete operations sequentially in a pipeline, a Script Snap helps create a delay between the insert and delete operations. Otherwise, the delete operation might be triggered before the records are inserted into the table.
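The delay pattern described above can be sketched as follows. This is a hypothetical Python sketch of the ordering idea; in SnapLogic the delay would live in a Script Snap between the dependent Snaps, and the step functions here are stand-ins.

```python
import time

# Hypothetical sketch: insert a short delay between dependent steps so the
# data from an earlier load is visible before a later delete runs.
def run_with_delay(insert_step, delete_step, delay_seconds=0.01):
    insert_step()
    time.sleep(delay_seconds)   # give the load time to land before deleting
    delete_step()

events = []
run_with_delay(lambda: events.append("insert"),
               lambda: events.append("delete"))
print(events)  # ['insert', 'delete']
```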

Example


This pipeline reads the data from a table in Oracle and inserts it into a Snowflake table using the Snowflake - Insert Snap. 


The Oracle Select Snap reads data from the table "TECTONIC"."ADOBEPRASANNA_NEW1124".

The data preview of the Oracle Select Snap is: 

The data from Oracle is inserted into the Snowflake table adobedata using the Snowflake - Insert Snap.

After the pipeline executes, the Snowflake - Insert Snap shows the following data preview:  

Snap Pack History