Snowflake – Snowpipe Streaming

Overview

You can use this Snap to insert data into Snowflake using the Snowpipe Streaming API, which enables continuous ingestion of data into Snowflake tables as soon as the data becomes available.

The Snowpipe Streaming Snap uses the role defined in the account URL properties field. If a role is not defined, the Snap runs a SELECT CURRENT_ROLE() query to determine a suitable role and, failing that, falls back to the PUBLIC role.
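
The role fallback described above corresponds roughly to the following Snowflake SQL. This is an illustrative sketch of the behavior, not the Snap's literal implementation:

    -- 1. Use the role from the account URL properties, if one is defined.
    -- 2. Otherwise, determine the session's current role.
    SELECT CURRENT_ROLE();
    -- 3. If no suitable role is returned, fall back to the PUBLIC role.
    USE ROLE PUBLIC;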

Snap Type

The Snowflake – Snowpipe Streaming Snap is a Write-type Snap.

Prerequisites

  • Valid Snowflake KeyPair or OAuth 2.0 account.

  • A valid account with the required permissions.

Support for Ultra Tasks

Limitations and Known Issues

None.

Snap Views

Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description
Input | Document | Min: 1, Max: 2 | Mapper, Copy | Requires the table name where the data is to be inserted and the data flush interval (in milliseconds) at which the data is pushed to the database.
Output | Document | Min: 0, Max: 1 | Mapper, Filter | Inserts data into Snowflake tables at the specified intervals.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current pipeline execution if the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Upload icon: Indicates that you can upload files. Learn more about managing Files.

Label*

Field Type: String
Default Value: Snowflake – Snowpipe Streaming
Example: Stream Data

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema name

Field Type: String/Expression/Suggestion
Default Value: N/A
Example: "PUBLIC"

Specify the database schema name.

Table Name*

Field Type: String/Expression/Suggestion
Default Value: N/A
Example: "PUBLIC"."SNOWPIPESTREAMING"

Specify the table name on which the insert operation is to be executed.

Create table if not present

Field Type: Checkbox
Default Value: Deselected

Select this checkbox if you want the table to be automatically created if it does not already exist.
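
When this checkbox is selected, the effect is comparable to running a guarded DDL statement before the first insert. A minimal sketch, with hypothetical column names and types (the Snap derives the actual table definition from the incoming data or an upstream schema, as in the first example below):

    -- Create the target table only if it does not already exist.
    -- The table name follows this doc's example; the columns are illustrative.
    CREATE TABLE IF NOT EXISTS "PUBLIC"."SNOWPIPESTREAMING" (
        ID   NUMBER,
        NAME VARCHAR
    );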

Max client lag*

Field Type: Integer/Expression/Suggestion
Default Value: 1000
Example: 1500
Max Value: 60,000

Specify the client data flush interval in milliseconds. Adjust this value based on the maximum latency your target system can handle (up to 60,000 ms).

This field also accepts inputs in n-second and n-minute formats; the maximum value is 10 minutes.
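
For example, all of the following are valid values for this field (illustrative):

    1000          -- 1,000 milliseconds
    5 seconds     -- n-second format
    2 minutes     -- n-minute format; 10 minutes is the maximum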

Snap execution

Field Type: Dropdown list
Default Value:
Example: Validate & Execute

Select one of the following three modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.

  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Error | Reason | Resolution
Schema name not found | The schema name is required for Snowpipe Streaming. | Provide a schema name in the Snap configuration.

Examples

Ingest Data from One Table to Another in Snowflake

This example pipeline demonstrates real-time data ingestion from one table to another within Snowflake. As new data is added to the source table, the Snowpipe Streaming Snap detects these changes and then streams the new data directly into the target table without significant delays. Here's the execution flow of the pipeline:

  • Source Table: The table where new data entries are continuously added.

  • Snowpipe Streaming Snap: This Snap monitors the source table for new data and streams the data in real time.

  • Target Table: The destination table where the streamed data is inserted.

Download this pipeline.

Step 1: Configure the Snowflake - Execute Snap to delete the existing table.
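
The SQL statement for this Execute Snap might look like the following sketch; the table name is hypothetical:

    -- Drop the target table so each run starts from a clean state.
    DROP TABLE IF EXISTS "PUBLIC"."SNOWPIPESTREAMING";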

Step 2: Configure the Snowflake - Select Snap with the Schema name and Table name fields. Add a second output view to the Snowflake - Select Snap so that it passes the schema to the downstream Snowflake - Snowpipe Streaming Snap, enabling table creation.
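
With these settings, the Select Snap effectively issues a query like the sketch below; the source table name is hypothetical, and the second output view carries the table schema rather than row data:

    -- Read all rows from the source table.
    SELECT * FROM "PUBLIC"."SOURCE_TABLE";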

 

Step 3: Configure the Snowflake - Snowpipe Streaming Snap to ingest the data into the Snowflake database as it becomes available. On validation, the data is successfully inserted into the Snowflake database.

 

 

Ingest Data from a Source into Snowflake

This example pipeline demonstrates how to use the Snowflake - Snowpipe Streaming Snap to ingest data into the Snowflake database from a source as soon as it becomes available. Here's the execution flow of the pipeline:

  1. Data generation: The Sequence Snap generates a set number of data records.

  2. Data formatting: The JSON Formatter Snap converts these records into JSON format.

  3. Data streaming: The Snowflake - Snowpipe Streaming Snap ingests the JSON-formatted data into a specified Snowflake table as soon as the data is available.

Download this pipeline.

Step 1: Configure the Sequence Snap to generate ten documents.

Step 2: Configure the Mapper Snap to pass the values to the downstream Snowflake - Snowpipe Streaming Snap.

Step 3: Configure the Snowflake - Execute Snap to create or replace the existing table with the required column datatypes, as sketched below.
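
A sketch of the kind of statement this Execute Snap could run, with hypothetical column names and datatypes matching the documents generated upstream:

    -- Recreate the target table with explicit column datatypes.
    CREATE OR REPLACE TABLE "PUBLIC"."SNOWPIPESTREAMING" (
        ID    NUMBER,
        VALUE VARCHAR
    );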

Step 4: Configure the Mapper Snap to pass the values to the downstream Snowflake - Snowpipe Streaming Snap.

Step 5: Configure the Snowflake - Snowpipe Streaming Snap to ingest the data into the Snowflake database as it becomes available.

On validation, the data is successfully inserted into the Snowflake database.
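
To confirm the result, you can query the target table directly; a minimal check, assuming the hypothetical table name used above:

    -- Expect one row per document generated by the Sequence Snap (ten in this example).
    SELECT COUNT(*) FROM "PUBLIC"."SNOWPIPESTREAMING";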
