Snowflake - Update


Overview

You can use this Snap to connect to a Snowflake instance, update records in the target table based on the specified condition, and return the response as a document stream.

Snap Type

The Snowflake - Update Snap is a Write-type Snap that updates records in a Snowflake table.

Prerequisites

Security Prerequisites

You should have the following permissions in your Snowflake account to execute this Snap: 

  • Usage (DB and Schema): Privilege to use database, role, and schema.

The following commands enable minimum privileges in the Snowflake Console:

grant usage on database <database_name> to role <role_name>;
grant usage on schema <database_name>.<schema_name> to role <role_name>;
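For example, assuming a database named EMPLOYEE_DB, a schema named PUBLIC, and a role named SNAPLOGIC_ROLE (hypothetical names used only for illustration), the minimum grants could look like this:

-- Hypothetical database, schema, and role names
grant usage on database EMPLOYEE_DB to role SNAPLOGIC_ROLE;
grant usage on schema EMPLOYEE_DB.PUBLIC to role SNAPLOGIC_ROLE;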

Learn more: Access Control Privileges.

Internal SQL Commands

This Snap uses the UPDATE command internally. It enables updating the specified rows in the target table with new values.
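For illustration, with a hypothetical EMPLOYEE table, the statement that the Snap issues takes a form such as the following; the table, columns, and values are assumptions, and the actual statement depends on the input document and the Update condition:

-- Table and column names are assumptions for illustration only
UPDATE EMPLOYEE
SET ENAME = 'SMITH', SALARY = 75000
WHERE EMPNO = 7369;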

Instead of the Snowflake - Update Snap, use the Snowflake - Bulk Upsert Snap to update records in bulk more efficiently. The Snowflake Bulk Snaps use Snowflake’s Bulk API, which improves performance.

Support for Ultra Pipelines

Works in Ultra Pipelines. 

Known Issues

Snap Views

Type: Input
Format: Document
Number of views: Min: 1, Max: 1
Examples of Upstream and Downstream Snaps: Mapper, Snowflake Execute, Sequence
Description: Table Name, record ID, and record details.

Type: Output
Format: Document
Number of views: Min: 0, Max: 1
Examples of Upstream and Downstream Snaps: File Writer, Mapper
Description: Updates a record.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the rest of the records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

Field Name

Field Type

Description


Label*



Default Value: Snowflake - Update
Example: Snowflake - Update



String

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema Name



Default Value: N/A
Example: "PUBLIC"

String/Expression/Suggestion

Specify the database schema name. If it is not defined, the Table Name suggestions retrieve the table names of all schemas. The field is suggestible and retrieves the available database schemas when you request suggestions.

You can pass the value using pipeline parameters, but not upstream parameters.



Table Name*



Default Value: N/A
Example: Employee

String/Expression/Suggestion

Specify the name of the table in the Snowflake instance. The table name is suggestible and requires a configured Snowflake account.



Update condition



Default Value: N/A
Example:

Without using expressions

  • EmpId = 12 

  • email = 'you@example.com'

Using expressions

  • "EMPNO=$EMPNO and ENAME=$EMPNAME"

  • email = $email 

  • emp=$emp

  • "emp='" + $emp + "'"

  • "EMPNO=" + $EMPNO + " and ENAME='" + $EMPNAME+ "'"

String/Expression

Specify the SQL WHERE clause of the update statement. You define the values or columns to update (the SET clause) in an upstream Snap, such as a Mapper Snap, and use this condition to select the rows to update. For reference, an UPDATE SQL query has the following form:
UPDATE table_name
SET column1 = value1, column2 = value2
WHERE condition;


Refer to the example to understand how to use the Update condition. A sketch showing how an expression-based condition resolves at runtime appears at the end of this section.

Number of retries



Default Value: 0
Example: 3

Integer/Expression

Specify the maximum number of attempts to be made to receive a response. The request is terminated if the attempts do not result in a response.

Retry interval (seconds)

Default Value: 1
Example: 10

Integer/Expression

Specify the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception. 



Manage Queued Queries

 

Default Value: Continue to execute queued queries when pipeline is stopped or if it fails
Example: Cancel queued queries when pipeline is stopped or if it fails

Dropdown list

Choose an option to determine whether the Snap should continue or cancel the execution of the queued queries when the pipeline stops or fails.

Snap Execution

 

Default Value: Execute only
Example: Validate & Execute

Dropdown list

Select one of the three modes in which the Snap executes. Available options are:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.
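The following sketch illustrates how an expression-enabled Update condition resolves at runtime. The table, columns, and input document are hypothetical and shown only for illustration; the SET clause is built from the input document fields that map to table columns.

-- Hypothetical input document: { "EMPNO": 7369, "EMPNAME": "SMITH", "SALARY": 80000 }
-- Update condition (expression enabled): "EMPNO=" + $EMPNO + " and ENAME='" + $EMPNAME + "'"
-- Illustrative statement the Snap could issue:
UPDATE EMPLOYEE
SET SALARY = 80000
WHERE EMPNO = 7369 and ENAME = 'SMITH';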

Examples

Encode Binary data type and update records in Snowflake

The following example Pipeline demonstrates how you can encode binary data (biodata of the employee) and update employee records in the Snowflake database.

Configure the File Reader Snap to read data (employee details) from the SnapLogic database.

Next, configure the Binary to Document and Snowflake - Select Snaps. In the Binary to Document Snap, select ENCODE_BASE64 in the Encode or Decode field to enable the Snap to encode the binary data.

Binary to Document Snap Configuration

Snowflake - Select Configuration


Configure the Join Snap to join the document streams from both upstream Snaps using the Outer join type.

Configure the Mapper Snap to pass the incoming data to the Snowflake - Update Snap. Note that the target schema fields Bio and Text are of the binary and varbinary data types, respectively.

Configure the Snowflake - Update Snap to update the existing records in the Snowflake database with the input from the upstream Mapper Snap. Use the update condition, "BIO = to_binary( '"+$BIO+"','base64')", to select the records to update.
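Assuming a hypothetical EMPLOYEE table with BIO (binary) and TEXT (varbinary) columns, the update condition above resolves into a statement of roughly the following form; the SET clause and the Base64 values are placeholders, and the actual values come from the upstream Mapper Snap:

-- Placeholders only; actual values are supplied by the upstream Mapper Snap
UPDATE EMPLOYEE
SET TEXT = to_binary('<Base64-encoded text>', 'base64')
WHERE BIO = to_binary('<Base64-encoded biodata from $BIO>', 'base64');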

Upon validation, the Snap updates the records in the Snowflake database.

Next, connect a JSON Formatter Snap to the Snowflake - Update Snap and configure a File Writer Snap to write the output to a file.

Download this file.


The following example shows how to update a record in a Snowflake object using the Snowflake - Update Snap:

The Snowflake - Update Snap updates the table ADOBEDATA of the schema PRASANNA.

A Mapper Snap maps the record details that need to be updated to the input view of the Snowflake - Update Snap:

Successful execution of the pipeline gives the following output view:
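For reference, the statement issued against PRASANNA.ADOBEDATA would take a form such as the following; the columns, values, and condition are assumptions for illustration, since the actual SET clause comes from the Mapper output and the WHERE clause from the Update condition field:

-- Columns, values, and condition are placeholders for illustration
UPDATE PRASANNA.ADOBEDATA
SET STATUS = 'UPDATED'
WHERE ID = 101;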

Downloads

File: Example_Snowflake_Update_Snap_Binary_Support.slp
Modified: Apr 22, 2021 by Kalpana Malladi

Snap Pack History