Overview

You can use this Snap to connect to a specific Snowflake instance, update records in the given table based on a given condition, and return the response as a document stream.

Snap Type

Snowflake Update is a Write-type Snap that updates records in Snowflake tables.

Prerequisites

Security Prerequisites

You should have the following permissions in your Snowflake account to execute this Snap: 

The following commands enable minimum privileges in the Snowflake Console:

grant usage on database <database_name> to role <role_name>;
grant usage on schema <database_name>.<schema_name> to role <role_name>;

Learn more: Access Control Privileges.

Internal SQL Commands

This Snap uses the UPDATE command internally. It enables updating the specified rows in the target table with new values.
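As a rough illustration (not the Snap's actual implementation), the statement the Snap issues can be pictured as an UPDATE assembled from the upstream document's column/value pairs plus the optional update condition. The function and table names below are hypothetical:

```python
# Hypothetical sketch: build a Snowflake UPDATE statement from a dict of
# column/value pairs (the SET clause) and an optional WHERE condition,
# using bind placeholders for the values.
def build_update_statement(table, set_values, condition=None):
    set_clause = ", ".join(f"{col} = ?" for col in set_values)
    stmt = f"UPDATE {table} SET {set_clause}"
    if condition:
        stmt += f" WHERE {condition}"
    return stmt, list(set_values.values())

stmt, params = build_update_statement(
    "Employee", {"ENAME": "Smith", "EMAIL": "you@example.com"}, "EMPNO = ?"
)
print(stmt)  # UPDATE Employee SET ENAME = ?, EMAIL = ? WHERE EMPNO = ?
```

If no condition is supplied, the statement has no WHERE clause, which is why a blank Update Condition affects every row in the target table.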

Use the Snowflake - Bulk Upsert Snap for efficient bulk updates of records instead of the Snowflake - Update Snap. The Snowflake Bulk Snaps use Snowflake's Bulk API, which improves performance.

Support for Ultra Pipelines

Works in Ultra Pipelines. However, we recommend that you not use this Snap in an Ultra Pipeline.

Known Issues

Snap Views

Type: Input
Format: Document
Number of Views: Min: 1, Max: 1
Examples of Upstream Snaps: Mapper, Snowflake Execute, Sequence
Description: Table name, record ID, and record details.

Type: Output
Format: Document
Number of Views: Min: 0, Max: 1
Examples of Downstream Snaps: File Writer, Mapper
Description: The updated record.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the rest of the records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

Field Name

Field Type

Description

Label*


Default Value: Snowflake - Update
Example: Snowflake - Update


String

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema Name


Default Value: N/A
Example: "PUBLIC"

String/Expression/Suggestion

Specify the database schema name. If it is not defined, the Table Name suggestion retrieves the names of all tables across all schemas. This field is suggestible and retrieves the available database schemas when you request suggestions.

note

The values can be passed using pipeline parameters, but not upstream parameters.


Table Name*


Default Value: N/A
Example: Employee

String/Expression/Suggestion

Specify the name of the table in the instance. The table name is suggestible and requires an account setting.  

note

The values can be passed using pipeline parameters, but not upstream parameters.


Update condition


Default Value: N/A
Example:

Without using expressions

  • EmpId = 12 

  • email = 'you@example.com'

Using expressions

  • "EMPNO=$EMPNO and ENAME=$EMPNAME"

  • email = $email 

  • emp=$emp

  • "emp='" + $emp + "'"

  • "EMPNO=" + $EMPNO + " and ENAME='" + $EMPNAME+ "'"

Using expressions that join strings together to create SQL queries or conditions carries a potential SQL injection risk and is therefore unsafe. Ensure that you understand all the implications and risks involved before concatenating strings with the Expression toggle enabled.
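To make the risk concrete, here is a hypothetical illustration of what can go wrong when the condition is built by string concatenation, as in "emp='" + $emp + "'". A crafted input value rewrites the WHERE clause so the UPDATE matches every row:

```python
# Hypothetical helper mirroring the concatenation expression
# "emp='" + $emp + "'" from the examples above.
def build_condition(emp):
    return "emp='" + emp + "'"

print(build_condition("jdoe"))
# emp='jdoe'                     -- matches one employee, as intended

print(build_condition("x' OR '1'='1"))
# emp='x' OR '1'='1'             -- always true: matches ALL rows
```

Because the attacker-controlled value is spliced directly into the SQL text, the condition's logic changes; this is why concatenated expressions should be avoided when the input is not fully trusted.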

String/Expression

Specify the SQL WHERE clause of the update statement. You can define specific values or columns to update (Set condition) in the upstream Snap, such as Mapper Snap, and then use the WHERE clause to apply these conditions on the columns sourced from the upstream Snap. For instance, here is a sample of an Update SQL query:
UPDATE table_name
SET column1 = value1, column2 = value2
WHERE condition;

note

If the Update Condition field is left blank, the update is applied to all the records in the target table.

Refer to the example to understand how to use the Update Condition.

Number of retries


Default Value: 0
Example: 3

Integer/Expression

Specify the maximum number of attempts to be made to receive a response. The request is terminated if the attempts do not result in a response.

note

Ensure that the local drive has sufficient free disk space as large as the expected target file size.

If the value is larger than 0, the Snap first downloads the target file into a temporary local file. If any error occurs during the download, the Snap waits for the time specified in the Retry interval and attempts to download the file again from the beginning. When the download is successful, the Snap streams the data from the temporary file to the downstream Pipeline. All temporary local files are deleted when they are no longer needed.


Retry interval (seconds)

Default Value: 1
Example: 10

Integer/Expression

Specify the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception. 
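The interaction of the two fields above can be sketched as a simple retry loop: the call is attempted once, and on failure it is retried up to Number of retries additional times, pausing Retry interval seconds between attempts. This is a minimal sketch of the described behavior, not the Snap's actual code; `do_request` stands in for the Snap's download/request call:

```python
import time

def with_retries(do_request, retries=0, interval_seconds=1):
    """Retry a failing call up to `retries` extra times, sleeping
    `interval_seconds` between attempts; re-raise on final failure."""
    attempt = 0
    while True:
        try:
            return do_request()
        except Exception:
            if attempt >= retries:
                raise  # attempts exhausted: the request is terminated
            attempt += 1
            time.sleep(interval_seconds)
```

With the default Number of retries of 0, a single failure terminates the request immediately; a retry happens only when the previous attempt raised an error, matching the description above.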


Manage Queued Queries

Default Value: Continue to execute queued queries when pipeline is stopped or if it fails
Example: Cancel queued queries when pipeline is stopped or if it fails

Dropdown list

Select this option to decide whether the Snap should continue or cancel executing the queued Snowflake SQL queries when the pipeline is stopped or fails.

note

If you select Cancel queued queries when pipeline is stopped or if it fails, then the read queries under execution are cancelled, whereas the write type of queries under execution are not cancelled. Snowflake internally determines which queries are safe to be cancelled and cancels those queries.


Snap Execution

Default Value: Execute only
Example: Validate & Execute

Dropdown list

Select one of the three modes in which the Snap executes. Available options are:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Instead of building multiple Snaps with interdependent DML queries, we recommend that you use the Stored Procedure or the Multi Execute Snap.
If a downstream Snap depends on the data processed by an upstream Database Bulk Load Snap, use a Script Snap to add a delay so that the data is available.

For example, when performing create, insert, and delete operations sequentially in a pipeline, a Script Snap can introduce a delay between the insert and delete operations; otherwise, the delete might be triggered before the records are inserted into the table.

Examples

Encoding Binary Data Type And Updating Records In Snowflake Database

The following example Pipeline demonstrates how you can encode binary data (biodata of the employee) and update employee records in the Snowflake database.

Initially, we begin with configuring the File Reader Snap to read data (employee details) from the SnapLogic database.

Then, we configure the Binary to Document and Snowflake Select Snaps. We select ENCODE_BASE64 in the Encode or Decode field (to enable the Snap to encode binary data) in the Binary to Document Snap.
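As a rough sketch of what ENCODE_BASE64 does here: the raw binary bytes are converted to a Base64 string so they can travel inside a document, and Snowflake's TO_BINARY(<string>, 'base64') can later decode them back, as in the update condition used further down in this example. The sample bytes below are illustrative:

```python
import base64

bio_bytes = b"employee biodata"          # illustrative binary payload
encoded = base64.b64encode(bio_bytes).decode("ascii")
print(encoded)                           # ZW1wbG95ZWUgYmlvZGF0YQ==

# Decoding recovers the original bytes, mirroring TO_BINARY(..., 'base64')
assert base64.b64decode(encoded) == bio_bytes
```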

Binary to Document Snap

Snowflake - Select

Then, we configure the Join Snap to join the document streams from both the upstream Snaps using Outer Join type.

We configure the Mapper Snap to pass the incoming data to Snowflake - Update Snap. Note that the target schema for Bio and Text are in binary and varbinary formats respectively.

We configure the Snowflake - Update Snap to update the existing records in Snowflake database with the inputs from the upstream Mapper Snap. We use the update condition, "BIO = to_binary( '"+$BIO+"','base64')" to update the records.

Upon validation, the Snap updates the records in Snowflake database.

Next, we connect a JSON Formatter Snap with Snowflake - Update Snap and finally configure the File Writer Snap to write the output onto a file.

Download this file.


The following is an example that shows how to update a record in a Snowflake object using the Snowflake Update Snap: 

The Snowflake Update Snap updates the table ADOBEDATA in the schema PRASANNA.

A Mapper Snap maps the object record details that need to be updated in the input view of the Snowflake Update Snap:

Successful execution of the pipeline gives the following output view:

Downloads

Snap Pack History