Snowflake – Snowpipe Streaming
Overview
You can use this Snap to insert data into Snowflake using the Snowpipe Streaming API, which enables continuous ingestion of data into Snowflake tables as soon as it becomes available.
The Snowpipe Streaming Snap uses the role defined in the account's URL properties field. If no role is defined, the Snap runs `SELECT CURRENT_ROLE()` to determine a suitable role; failing that, it falls back to the PUBLIC role.
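The role-resolution fallback described above can be sketched as follows. This is an illustrative sketch only, not the Snap's actual implementation; the `run_query` helper and the role names are hypothetical.

```python
def resolve_role(configured_role, run_query):
    """Pick the Snowflake role using the fallback order described above.

    configured_role: role from the account URL properties, or None.
    run_query: callable that executes a SQL statement and returns a scalar.
    """
    if configured_role:
        return configured_role
    # No role configured: ask Snowflake for the session's current role.
    current = run_query("SELECT CURRENT_ROLE()")
    if current:
        return current
    # Last resort: every Snowflake user can assume the PUBLIC role.
    return "PUBLIC"

# Illustrative usage with stubbed query functions:
print(resolve_role(None, lambda sql: "ANALYST"))  # ANALYST
print(resolve_role(None, lambda sql: None))       # PUBLIC
```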
Snap Type
The Snowflake – Snowpipe Streaming Snap is a Write-type Snap.
Prerequisites
Valid Snowflake KeyPair or OAuth 2.0 account.
A valid account with the required permissions.
Support for Ultra Tasks
Works in Ultra Tasks.
Limitations and Known Issues
None.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | | | Requires the table name where the data is to be inserted and the data flush interval (in milliseconds) at which the data is pushed to the database. |
Output | Document | | | Inserts data into Snowflake tables at the specified intervals. |
Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the options from the When errors occur list under the Views tab. Learn more about Error handling in Pipelines. |
Snap Settings
Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Upload icon: Indicates that you can upload files. Learn more about managing Files.
Field Name | Field Type | Description |
---|---|---|
Label* Default Value: Snowflake – Snowpipe Streaming | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline. |
Schema name Default Value: N/A | String/Expression/Suggestion | Specify the database schema name. |
Table Name* Default Value: N/A | String/Expression/Suggestion | Specify the table on which the insert operation is to be executed. |
Create table if not present Default Value: Deselected | Checkbox | Select this checkbox to create the table automatically if it does not already exist. |
Max client lag* Default Value: 1000 | Integer/Expression/Suggestion | Specify the client data flush interval in milliseconds. Adjust this value based on the maximum latency your target system can handle (for example, 60,000 ms). This field also accepts inputs in n-second and n-minute formats. The maximum value is 10 minutes. |
Snap execution Default Value: | Dropdown list | Select one of the three modes in which the Snap executes. |
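The Max client lag formats described above (plain milliseconds, n-second, n-minute, capped at 10 minutes) could be validated with logic along these lines. This is a hypothetical sketch of the documented rules, not the Snap's actual implementation.

```python
import re

MAX_LAG_MS = 10 * 60 * 1000  # documented maximum: 10 minutes

def parse_client_lag(value):
    """Normalize a Max client lag setting to milliseconds.

    Accepts a plain integer (milliseconds) or strings such as
    '5 seconds' / '1 minute', mirroring the formats the field accepts.
    """
    if isinstance(value, int):
        ms = value
    else:
        match = re.fullmatch(r"\s*(\d+)\s*(second|minute)s?\s*", str(value))
        if not match:
            raise ValueError(f"unrecognized lag format: {value!r}")
        n, unit = int(match.group(1)), match.group(2)
        ms = n * (1000 if unit == "second" else 60_000)
    if not 0 < ms <= MAX_LAG_MS:
        raise ValueError(f"lag must be between 1 ms and {MAX_LAG_MS} ms")
    return ms

print(parse_client_lag(1000))         # 1000 (the default)
print(parse_client_lag("5 seconds"))  # 5000
print(parse_client_lag("1 minute"))   # 60000
```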
Troubleshooting
Error | Reason | Resolution |
---|---|---|
| The schema name is required for Snowpipe Streaming. | Provide a schema name in the Snap configuration. |
Examples
Ingest Data from One Table to Another in Snowflake
This example pipeline demonstrates real-time data ingestion from one table to another within Snowflake. As new data is added to the source table, the Snowpipe Streaming Snap detects these changes and then streams the new data directly into the target table without significant delays. Here's the execution flow of the pipeline:
Source Table: The table where new data entries are continuously added.
Snowpipe Streaming Snap: This Snap monitors the source table for new data and streams the data in real time.
Target Table: The destination table where the streamed data is inserted.
Step 1: Configure the Snowflake - Execute Snap to delete the existing table.
Step 2: Configure the Snowflake - Select Snap with the Schema name and Table name fields. Add a second output view to the Snowflake - Select Snap to pass the schema to the downstream Snowflake - Snowpipe Streaming Snap and enable table creation.
Step 3: Configure the Snowflake - Snowpipe Streaming Snap to ingest the data into the Snowflake database as and when available. On validation, the data is successfully inserted into the Snowflake database.
Ingest Data from a Source into Snowflake
This example pipeline demonstrates how to use Snowflake - Snowpipe Streaming Snap to ingest data into the Snowflake database from a source as and when it is available. Here's the execution flow of the pipeline:
Data generation: The Sequence Snap generates a set number of data records.
Data formatting: The JSON Formatter Snap converts these records into JSON format.
Data Streaming: The Snowflake Snowpipe Streaming Snap ingests the JSON-formatted data into a specified Snowflake table as soon as the data is available.
Step 1: Configure the Sequence Snap to generate ten documents.
Step 2: Configure the Mapper Snap to pass the values to the downstream Snowflake - Snowpipe Streaming Snap.
Step 3: Configure the Snowflake - Execute Snap to create or replace an existing table with the column datatype.
Step 4: Configure the Mapper Snap to pass the values to the downstream Snowflake - Snowpipe Streaming Snap.
Step 5: Configure the Snowflake - Snowpipe Streaming Snap to ingest the data into the Snowflake database as and when it is available.
On validation, the data is successfully inserted into the Snowflake database.
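The buffer-and-flush behavior that both examples rely on (documents accumulate and are pushed to the target table once the flush interval elapses) can be illustrated with a small sketch. The class below is illustrative only; it does not use the actual Snowpipe Streaming SDK, and the `sink` callable stands in for the target table.

```python
import time

class StreamingBuffer:
    """Illustrative buffer that flushes rows after a client-lag interval."""

    def __init__(self, flush_interval_ms, sink):
        self.flush_interval = flush_interval_ms / 1000.0
        self.sink = sink            # callable receiving a batch of rows
        self.rows = []
        self.last_flush = time.monotonic()

    def insert(self, row):
        self.rows.append(row)
        # Push buffered rows once the flush interval has elapsed.
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.rows:
            self.sink(self.rows)
            self.rows = []
        self.last_flush = time.monotonic()

# Usage: collect flushed batches in a list standing in for the target table.
batches = []
buf = StreamingBuffer(flush_interval_ms=0, sink=batches.append)  # flush on every insert
buf.insert({"id": 1})
buf.insert({"id": 2})
print(batches)  # [[{'id': 1}], [{'id': 2}]]
```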
Snap Pack History
Related Content
© 2017-2024 SnapLogic, Inc.