Snowflake - Update
Overview
You can use this Snap to connect to a specific Snowflake instance, update records in the given table based on the specified condition, and return the response as a document stream.
Snap Type
Snowflake - Update is a Write-type Snap that updates records in a Snowflake table.
Prerequisites
Security Prerequisites
You should have the following permissions in your Snowflake account to execute this Snap:
Usage (DB and Schema): Privilege to use the database, role, and schema.
The following commands enable minimum privileges in the Snowflake Console:
grant usage on database <database_name> to role <role_name>;
grant usage on schema <database_name>.<schema_name> to role <role_name>;
Learn more: Access Control Privileges.
Internal SQL Commands
This Snap uses the UPDATE command internally. It enables updating the specified rows in the target table with new values.
Instead of using the Snowflake - Update Snap, use the Snowflake - Bulk Upsert Snap to update records in bulk efficiently. The Snowflake Bulk Snaps use Snowflake's Bulk API, which improves performance.
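To make the internal UPDATE command concrete, the following sketch shows how an update statement of the kind the Snap issues can be assembled from the Set condition (column/value pairs from the input document) and the Update condition (the WHERE clause). The function and names are illustrative, not the Snap's actual internals:

```python
def build_update(table, set_values, where_clause):
    """Assemble a Snowflake-style UPDATE statement from column/value
    bindings (the Set condition) and a WHERE clause (the Update condition).
    Illustrative only; a real implementation should use bound parameters."""
    assignments = ", ".join(f"{col} = {val!r}" for col, val in set_values.items())
    return f"UPDATE {table} SET {assignments} WHERE {where_clause}"

sql = build_update(
    "PRASANNA.ADOBEDATA",       # table and schema names from the example below
    {"CITY": "San Mateo"},      # hypothetical Set condition from an upstream Mapper
    "ID = 101",                 # hypothetical Update condition
)
print(sql)
# UPDATE PRASANNA.ADOBEDATA SET CITY = 'San Mateo' WHERE ID = 101
```

The statement updates only the rows matching the WHERE clause, which is why the Update condition field is where you scope the change.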
Support for Ultra Pipelines
Works in Ultra Pipelines.
Known Issues
Snap Views
Type | Format | Number of views | Examples of Upstream and Downstream Snaps | Description
---|---|---|---|---
Input | Document | | | Table Name, record ID, and record details.
Output | Document | | | Updates a record.
Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the options from the When errors occur list under the Views tab. Learn more about Error handling in Pipelines.
Snap Settings
Asterisk (*): Indicates a mandatory field.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Add icon: Indicates that you can add fields in the field set.
Remove icon: Indicates that you can remove fields from the field set.
Field Name | Field Type | Description
---|---|---
Label* Default Value: Snowflake - Update | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.
Schema Name Default Value: N/A | String/Expression/Suggestion | Specify the database schema name. If it is not defined, the suggestion for the Table Name field retrieves the table names of all schemas. The field is suggestible and retrieves the available database schemas during suggest values. The values can be passed using pipeline parameters but not upstream parameters.
Table Name* Default Value: N/A | String/Expression/Suggestion | Specify the name of the table in the instance. The table name is suggestible and requires an account setting.
Update condition Default Value: N/A | String/Expression | Specify the SQL WHERE clause of the update statement, with or without expressions. You can define specific values or columns to update (Set condition) in an upstream Snap, such as a Mapper Snap, and then use the WHERE clause to apply these conditions to the columns sourced from the upstream Snap. Refer to the example to understand how to use the Update condition.
Number of retries Default Value: 0 | Integer/Expression | Specify the maximum number of attempts to be made to receive a response. The request is terminated if the attempts do not result in a response.
Retry interval (seconds) Default Value: 1 | Integer/Expression | Specify the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception.
Manage Queued Queries Default Value: Continue to execute queued queries when pipeline is stopped or if it fails | Dropdown list | Choose an option to determine whether the Snap should continue or cancel the execution of the queued queries when the pipeline stops or fails.
Snap Execution Default Value: Execute only | Dropdown list | Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled.
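The Number of retries and Retry interval (seconds) fields work together: a retry is attempted only after an exception, with the interval elapsing between attempts, and the request terminates once the attempts are exhausted. A minimal sketch of that behavior (the function and variable names are hypothetical, not part of the Snap or the Snowflake client):

```python
import time

def execute_with_retries(run_query, number_of_retries=0, retry_interval=1):
    """Run a query; after an exception, retry up to number_of_retries times,
    sleeping retry_interval seconds between successive attempts."""
    attempts = number_of_retries + 1  # the initial attempt plus the retries
    for attempt in range(attempts):
        try:
            return run_query()
        except Exception:
            if attempt == attempts - 1:
                raise  # attempts exhausted: terminate the request
            time.sleep(retry_interval)

# Example: a query that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(execute_with_retries(flaky, number_of_retries=3, retry_interval=0))
# ok
```

With the default Number of retries of 0, the first exception is raised immediately and no retry occurs.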
Examples
Encode Binary data type and update records in Snowflake
The following example Pipeline demonstrates how you can encode binary data (biodata of the employee) and update employee records in the Snowflake database.
Configure the File Reader Snap to read data (employee details) from the SnapLogic database.
Next, configure the Binary to Document and Snowflake - Select Snaps. Select ENCODE_BASE64 in the Encode or Decode field (to enable the Snap to encode binary data) in the Binary to Document Snap.
Binary to Document Snap Configuration | Snowflake - Select Configuration |
---|---|
Configure the Join Snap to join the document streams from both upstream Snaps using the Outer join type.
Configure the Mapper Snap to pass the incoming data to the Snowflake - Update Snap. Note that the target schemas for Bio and Text are in binary and varbinary formats, respectively.
Configure the Snowflake - Update Snap to update the existing records in the Snowflake database with the inputs from the upstream Mapper Snap. Use the update condition "BIO = to_binary('"+$BIO+"','base64')" to update the records.
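The ENCODE_BASE64 option and the to_binary(..., 'base64') call in the update condition are two halves of one round trip: the binary payload is encoded to a Base64 string so it can travel inside the document stream, and Snowflake decodes it back to bytes for the comparison. A minimal sketch of that round trip using Python's standard base64 module (the payload and variable names are illustrative):

```python
import base64

# Binary payload, such as an employee's biodata read by the File Reader Snap.
bio_bytes = b"employee biodata"

# What ENCODE_BASE64 in the Binary to Document Snap produces: a Base64
# string that can be carried in a document field such as $BIO.
bio_b64 = base64.b64encode(bio_bytes).decode("ascii")

# What Snowflake's to_binary(<string>, 'base64') does on the other side:
# decode the Base64 string back to the original bytes for the comparison.
assert base64.b64decode(bio_b64) == bio_bytes

# The update condition string after the $BIO expression is substituted:
condition = "BIO = to_binary('" + bio_b64 + "','base64')"
print(condition)
# BIO = to_binary('ZW1wbG95ZWUgYmlvZGF0YQ==','base64')
```

Because the round trip is lossless, the condition matches exactly the rows whose BIO column holds the original binary value.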
Upon validation, the Snap updates the records in the Snowflake database.
Next, we connect a JSON Formatter Snap with the Snowflake - Update Snap and finally configure the File Writer Snap to write the output to a file.
The following is an example that shows how to update a record in a Snowflake object using the Snowflake - Update Snap:
The Snowflake - Update Snap updates the table ADOBEDATA of the schema PRASANNA.
A Mapper Snap maps the object record details that need to be updated in the input view of the Snowflake - Update Snap:
Successful execution of the pipeline gives the following output view: