Snowflake - Unload

Overview

You can use the Snowflake - Unload Snap to unload the result of a query to one or more files stored on the Snowflake stage, Google Cloud Storage, an external Amazon S3 bucket, or Azure Storage Blob. The target Snowflake table is not modified. After the data is unloaded to the storage location, you can retrieve the files either with the Snowflake GET command or with an S3 File Reader Snap.
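For example, if the files were unloaded to the user's internal stage, a command such as the following, run from SnowSQL, downloads them to a local directory (the stage path and local path here are illustrative placeholders, not values from this page):

get @~/TestFolder/file_name_prefix file:///tmp/unloaded_files/;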

Snowflake supports cross-account IAM access to the external staging location through the Storage Integration option. With this setup, you do not need to pass any credentials, and the storage is accessed only through the named stage or storage integration object. Learn more: Configuring Cross Account IAM Role Support for Snowflake Snaps.
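As a minimal sketch of such a setup (the integration name, stage name, role ARN, bucket, and path below are placeholders, not values from this page), a Snowflake administrator typically creates a storage integration and a named external stage that references it:

create storage integration my_s3_integration
  type = external_stage
  storage_provider = 'S3'
  enabled = true
  storage_aws_role_arn = 'arn:aws:iam::<account_id>:role/<role_name>'
  storage_allowed_locations = ('s3://<bucket>/<path>/');

create stage my_ext_stage
  url = 's3://<bucket>/<path>/'
  storage_integration = my_s3_integration;

The Snap can then reference the named stage without embedding storage credentials.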

Snap Type

The Snowflake - Unload Snap is a Read-type Snap.

Prerequisites

Security Prerequisites

You should have the following permission in your Snowflake account to execute this Snap: 

  • Usage (DB and Schema): Privilege to use database, role, and schema.

The following commands enable minimum privileges in the Snowflake Console:

grant usage on database <database_name> to role <role_name>;
grant usage on schema <database_name>.<schema_name> to role <role_name>;

For more information on Snowflake privileges, refer to Access Control Privileges.

Internal SQL Commands

This Snap uses the COPY INTO command internally. It unloads data from a table (or query) into one or more files in one of the following locations, as sketched after this list:

  • Named internal stage (or table/user stage).

  • Named external stage that references an external location, such as Amazon S3, Google Cloud Storage, or Microsoft Azure.

  • External location, such as Amazon S3, Google Cloud Storage, or Microsoft Azure.
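A minimal sketch of the corresponding Snowflake statements, assuming placeholder stage names, paths, and tables (none of these values come from this page):

-- Unload a query result to the user's internal stage
copy into @~/TestFolder/file_name_prefix
  from (select * from <database_name>.<schema_name>.<table_name>)
  file_format = (type = csv);

-- Unload to a named external stage that references Amazon S3, Google Cloud Storage, or Microsoft Azure
copy into @my_ext_stage/TestFolder/file_name_prefix
  from (select * from <table_name>)
  file_format = (type = csv);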

Requirements for External Storage Location

The following are mandatory when using an external staging location:

When using an Amazon S3 bucket for storage:

  • The Snowflake account should contain the S3 Access-key ID, S3 Secret key, S3 Bucket, and S3 Folder.

  • The Amazon S3 bucket where Snowflake writes the output files must reside in the same region as your cluster.

When using a Microsoft Azure storage blob:

  • A working Snowflake Azure database account.

When using a Google Cloud Storage:

  • Provide permissions such as Public access and Access control to the Google Cloud Storage bucket on the Google Cloud Platform.

Support for Ultra Pipelines

  • Works in Ultra Pipelines. However, we recommend that you do not use this Snap in an Ultra Pipeline.

Known Issues

Because of performance issues, all Snowflake Snaps now ignore the Cancel queued queries when pipeline is stopped or if it fails option for Manage Queued Queries, even when selected. Snaps behave as though the default Continue to execute queued queries when the Pipeline is stopped or if it fails option were selected.

Snap Views

Type

Format

Number of Views

Examples of Upstream and Downstream Snaps

Description

Input 

Document

  • Min: 0

  • Max: 1

  • Mapper

  • Copy

  • JSON Generator

A JSON document containing parameterized inputs for the Snap’s configuration, if needed.

Output

Document

  • Min: 0

  • Max: 1

  • JSON Parser

  • CSV Parser

  • Mapper

  • AVRO Parser

A JSON document containing the unload request details and the result of the unload operation.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the rest of the records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

Field Name

Field Type

Description

Label*

 

Default Value: Snowflake - Unload
Example: Load Employee Tables

String

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

Query*

 

Default Value: N/A
Example: SELECT * FROM public.company ORDER BY id

String/Expression

Specify a SELECT query. The results of the query are unloaded. In most cases, it is worthwhile to unload data in sorted order by specifying an ORDER BY clause in the query; this approach saves the time required to sort the data when it is reloaded.



Staging location


Default Value: Internal
Example: External

Dropdown list/Expression

Specify the staging location for the unload. The expected input for this should be a path to a file or filename-prefix.

The options available include:

  • External: Location that is not managed by Snowflake. The location should be an AWS S3 Bucket or Microsoft Azure Storage blob.

  • Internal: Location that is managed by Snowflake.

Target

 

Default Value: N/A
Example: TestFolder/file_name_prefix

String/Expression

Specify the staging area where the unloaded files are placed. If the staging location is external, the files are placed under the S3 Bucket or Microsoft Azure Storage Blob specified for the account. If the staging location is internal, the files are placed in the user's home folder.

File Format Type

 

Default Value: None
Example: CSV

 



Dropdown list

The format type for the unloaded file. The options available are None, CSV, and CUSTOMIZED.

 



Customized format identifier

 

Default Value: None

String

Specify the file format object to use for unloading data from the table. This field is valid only when File Format Type is set to CUSTOMIZED; otherwise, it is ignored.

File format option


Default Value: None
Example: CSV

String

Specify the file format option. Separate multiple options by using blank spaces and commas.

Copy options 

Overwrite

 

Default Value: Deselected

Checkbox

If selected, the UNLOAD operation overwrites any existing files in the location where the files are stored.

If deselected, existing files are neither removed nor overwritten.

(See the sketch after this table for how the copy options map to the COPY INTO statement.)

Generate single file

 

Default Value: Deselected

Checkbox

If selected, the UNLOAD operation generates a single file. If deselected, the filename prefix must be included in the path.



Max file size


Default Value: 0
Example: 10

Integer

Specify the maximum size (in bytes) of each file generated in parallel per thread. The number should be greater than 0. If it is less than or equal to 0, the Snap uses the Snowflake default size of 16000000 bytes (16 MB).

Encryption type

 

Default Value: None
Example: Server Side Encryption

Dropdown list

Specify the type of encryption to be used on the data. The available encryption options are:

  • None: Files do not get encrypted.

  • Server Side Encryption: The output files on Amazon S3 are encrypted with server-side encryption.

  • Server-Side KMS Encryption: The output files on Amazon S3 are encrypted with an Amazon S3-generated KMS key. 

KMS key

 

Default Value: N/A
Example: 28e3c2b6-74e2-4a3e-9890-6cd8e1c03661

String/Expression

Specify the KMS key that you want to use for S3 encryption. For more information about the KMS key, see AWS KMS Overview and Using Server Side Encryption.

Include column headings

 

Default Value: Deselected

Checkbox

If selected, the table column heading will be included in the generated files. If multiple files are generated, the heading will be included in every file.



Validation Mode


Default Value: NONE
Example: RETURN_ROWS

Dropdown list

Select this mode to visually verify the data before unloading it. If you select NONE, validation mode is disabled. If you select RETURN_ROWS, the unloaded data is not written to files; instead, it is returned in the Snap's output.

The options available include:

  • NONE

  • RETURN_ROWS

Manage Queued Queries

 

Default Value: Continue to execute queued queries when the pipeline is stopped or if it fails
Example: Cancel queued queries when pipeline is stopped or if it fails

Dropdown list

Select this property to decide whether the Snap should continue or cancel the execution of the queued Snowflake Execute SQL queries when you stop the pipeline.

 

Snap Execution

Default Value: Execute only
Example: Disabled

Dropdown list

Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled.
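The copy options in this table roughly correspond to options of the Snowflake COPY INTO <location> statement (OVERWRITE, SINGLE, MAX_FILE_SIZE, HEADER). A minimal sketch, assuming a placeholder internal stage path and query, with Overwrite, Generate single file, and Include column headings selected and Max file size set to 10000000:

copy into @~/TestFolder/file_name_prefix
  from (select * from public.company order by id)
  file_format = (type = csv)
  overwrite = true
  single = true
  max_file_size = 10000000
  header = true;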

Troubleshooting

The preview on this Snap does not execute the Snowflake UNLOAD operation. Connect a JSON Formatter Snap and a File Writer Snap to the error view and then execute the pipeline. If an error occurs, you can preview the output file written by the File Writer Snap to see the error information.

Examples

Unloading Data (Including Binary Data Types) From Snowflake Database

The following example Pipeline demonstrates how you can unload binary data as a file and load it into an S3 bucket.


First, we configure the Snowflake - Unload Snap by providing the following query:

select * from EMP2 – this query unloads data from the EMP2 table.

Note that we set the File format option as BINARY_FORMAT='UTF-8' to enable the Snap to pass binary data.


Upon validation, the Snap shows the unloadRequest and unloadDestination in its preview.


We connect the JSON Formatter Snap to the Snowflake - Unload Snap to transform the binary data to JSON format, and finally write this output to a file in an S3 bucket using the File Writer Snap. Upon validation, the File Writer Snap writes the output (unload request); the output preview is as shown below.

Download this Pipeline.


The following example illustrates the usage of the Snowflake Unload Snap.  

In this example, we run a Snowflake SQL query using the Snowflake Unload Snap. The Snap selects the data from the table @ADOBEDATA2 and writes the records to adobedatanullif using the File Writer Snap.

Connect the JSON Formatter Snap to convert the predefined CSV file format to JSON and write the data using the File Writer Snap.

Successful execution of the pipeline gives the following output preview:

Downloads

  • Example_Snowflake_Unload_Binary_DataType.slp (modified Apr 22, 2021 by Kalpana Malladi)

Snap Pack History