Snap type:

Write 

Description:

This Snap performs a bulk load operation from the input view document stream to the target table. The Snap supports SQL Server databases with the PolyBase feature, which includes SQL Server 2016 (on-premises) and Azure SQL Data Warehouse. It first formats the input view document stream into a temporary CSV file in Azure Blob storage and then sends a bulk load request to the database to load the temporary blob file into the target table.

ETL Transformations & Data Flow

This Snap enables the following ETL operations/flows:

The Snap loads the data into a temporarily created Azure blob and then executes SQL Server commands to load that blob into the target table:

  • The Snap reads all the incoming documents and writes them to a temporarily created blob on Azure storage
  • The Snap executes the following DB commands in sequence (sketched below):
    • Create a master key, only if it does not exist
    • Create the database scoped credential, only if it does not exist
    • Create an external data source
    • Create an external file format
    • Create an external table (the blob data is read through this external table)
    • Copy the data from the external table to the destination table
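
A minimal T-SQL sketch of this command sequence, assuming hypothetical names (SnapBlobCredential, SnapBlobSource, SnapCsvFormat, dbo.users_ext) and a hypothetical container path; the Snap's actual identifiers are generated internally:

  -- 1. Create a master key, only if it does not exist
  IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
      CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

  -- 2. Create the database scoped credential, only if it does not exist
  IF NOT EXISTS (SELECT * FROM sys.database_scoped_credentials WHERE name = 'SnapBlobCredential')
      CREATE DATABASE SCOPED CREDENTIAL SnapBlobCredential
      WITH IDENTITY = '<storage account name>', SECRET = '<storage account key>';

  -- 3. Create an external data source pointing at the Azure Blob container
  CREATE EXTERNAL DATA SOURCE SnapBlobSource
  WITH (TYPE = HADOOP,
        LOCATION = 'wasbs://<container>@<account>.blob.core.windows.net',
        CREDENTIAL = SnapBlobCredential);

  -- 4. Create an external file format matching the temporary CSV file
  CREATE EXTERNAL FILE FORMAT SnapCsvFormat
  WITH (FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

  -- 5. Create an external table over the temporary blob
  CREATE EXTERNAL TABLE dbo.users_ext (id INT, name VARCHAR(100))
  WITH (LOCATION = '/tmp-bulk-load.csv',
        DATA_SOURCE = SnapBlobSource,
        FILE_FORMAT = SnapCsvFormat);

  -- 6. Copy the data from the external table to the destination table
  INSERT INTO dbo.users SELECT * FROM dbo.users_ext;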

Input & Output

  • Input: This Snap must have an upstream Snap that can pass a document output view, such as a Structure or JSON Generator Snap.

  • Output: The Snap outputs one document specifying the records that have been inserted successfully. The records that are not written to the blob successfully are routed to the error view.

Prerequisites:

Bulk load requires SQL Server 2016 at a minimum to work properly.

The database must have the PolyBase feature enabled.
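
A quick way to verify the PolyBase prerequisite on the server (SERVERPROPERTY('IsPolyBaseInstalled') returns 1 when the feature is installed):

  -- Returns 1 if the PolyBase feature is installed on this instance
  SELECT SERVERPROPERTY('IsPolyBaseInstalled') AS IsPolyBaseInstalled;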

Support and limitations:

Limitations and Known Issues

  • If the Snap fails while loading the blob into the database, the temporary blob that was created is not deleted, so the data is not lost.
  • Microsoft PolyBase does not support varchar entries that contain more than 1000 characters. As a workaround, if any row contains a varchar entry with more than 1000 characters, use the Azure SQL - Bulk Load Snap instead (see the check sketched below).
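
If the source data already sits in a SQL table, a length check such as the following can flag the offending rows before the load (dbo.users and the name column are hypothetical examples):

  -- Find rows whose varchar entry exceeds the 1000-character PolyBase limit
  SELECT * FROM dbo.users WHERE LEN(name) > 1000;
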
Account: 

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Azure SQL Account for information on setting up this type of account.


Views:


Input: This Snap has exactly one document input view.
Output: This Snap has at most one document output view. If an output view is available, it conveys that the bulk load operations were carried out successfully.
Error: This Snap has at most one document error view and produces zero or more documents in the view.


Troubleshooting:
  • Ensure the database being written to is at least SQL Server 2016 with the PolyBase feature enabled.
  • Ensure the DB credentials provided are valid.
  • Ensure the Azure blob storage account is set up properly.
  • Ensure the blob account credentials are valid.
  • If the Snap fails when writing to a data warehouse, it writes a new blob in the Azure container. This new blob highlights the first invalid row that caused the bulk load operation to fail.

Settings

Label

Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema Name


The database schema name. If it is not defined, the suggestion for the Table Name property retrieves the table names of all schemas. The property is suggestible and retrieves the available database schemas when suggestions are requested.

The values can be passed using pipeline parameters but not upstream parameters.

Example: SYS

Default value: None 

Table Name


Required. The target table to load the incoming data into.

The values can be passed using pipeline parameters but not upstream parameters.

Example: users

Default value: None 

Create table if not present


Select this property to create the target table if it does not exist; otherwise, the system throws a table-not-found error.

Default value: None

Schema source

Specifies whether the schema must be fetched from the input document or from the existing table while loading data into the temporary blob at the time of bulk load. The options available are: Schema from provided input and Schema from existing table.

Default value: Schema from provided input

Use type default

Specifies how to handle any missing values in input documents. The options available are TRUE and FALSE. If you select TRUE, the Snap replaces every missing value in the input document with its default value in the external table. Supported data types and their default values are:

  • Numeric - 0
  • String - ""
  • Date - 1900-01-01

If you select FALSE, the Snap replaces every missing value in the input document with a null value in the external table.

Default value: TRUE
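
This setting corresponds to the USE_TYPE_DEFAULT option of the external file format through which PolyBase reads the temporary CSV; a hedged sketch with a hypothetical format name:

  -- Missing CSV values are replaced with each column type's default value
  CREATE EXTERNAL FILE FORMAT SnapCsvFormat
  WITH (FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ',', USE_TYPE_DEFAULT = TRUE));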

Bulk insert mode


Specifies whether the incoming data should be appended to the target table or overwrite the existing data in that table. The options available are: Append and Overwrite.

Example: Append, Overwrite

Default value: Append 

If you select Overwrite, the Snap overwrites the existing table and schema with the input data.
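
Roughly, and assuming Overwrite amounts to a drop-and-recreate (a sketch of the difference, not the Snap's exact internals; dbo.users and dbo.users_ext are hypothetical):

  -- Append: add the incoming rows to the existing table
  INSERT INTO dbo.users SELECT * FROM dbo.users_ext;

  -- Overwrite: replace the table and its schema with the input data
  DROP TABLE dbo.users;
  SELECT * INTO dbo.users FROM dbo.users_ext;  -- Azure SQL Data Warehouse would use CTAS instead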


Database scoped credential


The scoped credential used to execute the queries in the bulk load operation. Bulk loading via a storage blob requires external database resources to be created, which in turn requires a database scoped credential. Refer to https://msdn.microsoft.com/en-us/library/mt270260.aspx for additional information.

Provide the scoped credential if one exists on the database; otherwise, the Snap creates temporary scoped credentials and deletes them once the operation is completed.

Default value: None 
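
A hedged example of managing such a credential manually, with placeholder identity and secret values (see the MSDN link above for the full syntax):

  -- A master key must exist before a database scoped credential can be created
  CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

  CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
  WITH IDENTITY = '<storage account name>', SECRET = '<storage account key>';

  -- List existing credentials to find a name to supply in this property
  SELECT name FROM sys.database_scoped_credentials;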

Encoding

The encoding standard for the input data to be loaded into the database. The available options are:

None - Select this option only when using the PolyBase Bulk Load Snap with SQL Server 2016.

UTF-8 - Select this option when the input is in UTF-8 and the Snap is used with an Azure database.

UTF-16 - Select this option when the input is in UTF-16 and the Snap is used with an Azure database.

Default value: None
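
For the UTF options, this choice corresponds to the ENCODING format option of the external file format; a hedged sketch with a hypothetical format name:

  -- External file format declaring the temporary CSV's encoding
  CREATE EXTERNAL FILE FORMAT SnapCsvFormatUtf8
  WITH (FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = ',', ENCODING = 'UTF8'));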

Basic Use Case

In the example below, the Mapper Snap maps the input schema from the upstream Snap to the target metadata of the PolyBase Bulk Load Snap.

The PolyBase Bulk Load Snap loads the records into the table "dbo"."9181". The successful execution of the pipeline displays the success status of the loaded records.

The pipeline performs the following ETL transformations:

Extract: The JSON Generator Snap gets the records to be loaded into the PolyBase table.

Transform: The Mapper Snap maps the input schema to the metadata of the PolyBase Bulk Load Snap.

Load: The PolyBase Bulk Load Snap loads the records into the required table.

Typical Snap Configurations

The key configuration of the Snap lies in how you pass the statement that writes the records. As elsewhere in SnapLogic, you can pass SQL statements in the following ways:

  • Without Expressions: Directly pass the required statement in the PolyBase Bulk Load Snap.

  • With Expressions
    • Pipeline Parameter: A pipeline parameter is set to pass the required values to the PolyBase Bulk Load Snap.

The Mapper Snap maps the input schema to the target fields on the PolyBase table.

The pipeline properties are set as follows to be passed into the Snap:

Advanced Use Case

The following describes a pipeline, with broader business logic involving multiple ETL transformations, that shows how PolyBase functionality is typically used in an enterprise environment. The pipeline download link is provided below.

In this pipeline, an Oracle Select Snap extracts the data from a table on the Oracle database, and the PolyBase Bulk Load Snap bulk loads it into the target table. The output preview displays the status of the execution.

  1. Extract: The Oracle Select Snap reads the records from the Oracle Database. 

  2. Load: The PolyBase Bulk Load Snap loads the records into the Azure SQL Database.

Downloads