SQL Server - Bulk Load

Overview

You can use this Snap to execute an SQL Server bulk load. This Snap uses the bcp utility program internally to perform the bulk load action. The input data is first written to a temporary data file, and then the bcp utility program loads the data from the data file into the target table.
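For illustration only, a bulk load performed manually with the bcp utility looks roughly like the following. The server, database, table, credentials, and file paths are placeholder assumptions; the Snap constructs its own command internally.

    # Load character data from a temporary file into the target table.
    # -c: character mode, -t: field terminator, -m: maximum errors,
    # -b: batch size, -e: error file for rejected rows.
    bcp demo_db.dbo.employees_table in /tmp/snap_bulk_load.dat \
      -S sqlserver.example.com -U loader -P '********' \
      -c -t ',' -m 10 -b 1000 -e /tmp/bcp_errors.log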

Snap Type

The SQL Server - Bulk Load Snap is a Write-type Snap that inserts bulk data in one request.

Prerequisites

The BCP utility must be installed on the Groundplex nodes where you want to execute this Snap. To install the BCP utility on a Groundplex:

  1. Download and install the BCP Utility in your Windows or Linux environment. For details on doing so, use the following links:

    1. Installing BCP on Linux

    2. Installing BCP on Windows

  2. Verify that you can run the bcp command. To verify BCP installation, enter bcp on the terminal or the command line console and press Enter.

    The output lists the command-line options that can be used with the BCP utility; a representative example is shown after this list. If you see this output, the BCP utility is installed and ready for use.

  3. Ensure the path to the bcp command is correctly provided in the Snap.
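A representative (trimmed) example of the usage output, which varies with the installed bcp version, is:

    usage: bcp {dbtable | query} {in | out | queryout | format} datafile
      [-m maxerrors]        [-f formatfile]         [-e errfile]
      [-F firstrow]         [-L lastrow]            [-b batchsize]
      [-c character type]   [-t field terminator]   [-r row terminator]
      [-S server name]      [-U username]           [-P password]
      [-v version]          ...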

When using a Windows host as a SnapLogic node with the service installed (jcc.bat install_service), ensure that the service account specified in the service credentials can access the database so that the bulk copy program (BCP) utility can work. Removing the database permissions results in the error "Unable to invoke BCP". However, you might still be able to run the bcp -v command on the command line outside the SnapLogic node, despite the missing service account database permissions.

Support for Ultra Pipelines

Does not work in Ultra Pipelines.

Behavior Change

  • Before the 433patches21119 release, both empty strings and null values were stored as null when loaded into SQL Server. Starting with the 433patches21119 release, an empty string inserted into a string-based column is stored as an empty string, and a null value inserted into a string-based column is stored as null.
    To ensure consistent handling of empty strings and null values, update the data to match how you want it represented in the database before performing the bulk load operation, for example with a Mapper expression such as the one shown below.
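A minimal sketch of such a normalization, assuming a hypothetical string field $comments in the incoming documents, is a Mapper entry like:

    Expression: $comments == "" ? null : $comments
    Target path: $comments

This maps empty strings to null before the load; reverse the logic if you would rather store empty strings than nulls.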

Limitations & Known Issues

None.

Snap Views

Type: Input

Format: Document

Number of Views: Min 1, Max 2

Examples of Upstream and Downstream Snaps: JSON Generator, Binary to Document

Description: This Snap has one document input view by default. You can add a second input view to receive the table metadata as a document so that, if the target table is absent, it can be created in the database with a schema similar to the source table. This schema is usually obtained from the second output view of a database Select Snap. If the schema comes from a different database, there is no guarantee that all the data types are handled properly.

The target table's columns must be mapped upstream using a Mapper Snap, which provides the target schema reflecting the target table's schema. Learn more: SQL Server - Bulk Load | Table Creation

Type: Output

Format: Document

Number of Views: Min 0, Max 1

Examples of Upstream and Downstream Snaps: JSON Generator, Binary to Document

Description: A document that represents the result of the bulk load operation.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current pipeline execution if the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

Field Name

Field Type

Description

Label*

 

Default Value: SQL Server - Bulk Load
Example: Load Employee Tables

String

Specify a unique name for the Snap.

Schema Name

 

Default Value: N/A
Example: schema_demo

String/Expression/Suggestion

 

Specify the database schema name. If it is not defined, the Table Name suggestions retrieve the table names from all schemas. The field is suggestible and retrieves the available database schemas when you request suggestions.

You can pass the value using pipeline parameters, but not upstream parameters.

Table Name*

 

Default Value: N/A
Example: employees_table

String/Expression/Suggestion

 

Specify the table on which to execute the bulk load operation.

Create table if not present

 

Default Value: Deselected

Checkbox

Select this checkbox to enable the Snap to automatically create the target table if it does not exist.

BCP absolute path

 

Default Value: N/A
Example: C:\bcp.bat

String

Specify the absolute path of the bcp utility program in JCC's file system. If empty, the Snap looks for it in JCC's environment variable PATH.

Handling unrecognized character sets in the data set: The Snaplex uses the OS's default character set, so it cannot recognize characters from other character sets. As a result, unrecognized characters in the data set are replaced with junk values during bulk load operations. To mitigate this, create a bcp.bat wrapper file that forces an appropriate code page.
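The exact wrapper contents are not reproduced in this article; a minimal sketch, assuming a Windows node and UTF-8 data (code page 65001), is:

    @echo off
    REM Pass all arguments through to bcp and force the UTF-8 code page (65001)
    REM so that characters outside the OS default character set are preserved.
    REM Replace 65001 with the code page that matches your data.
    bcp %* -C 65001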

Use the path to this bcp.bat file in the BCP absolute path.

Maximum error count*

 

Default Value: 10
Example: 12

Integer

Specify the maximum number of rows which can fail before the bulk load operation is stopped.

Batch size

 

Default Value: N/A
Example: 1000

Integer/Expression

Specify the number of records batched per request. If the input has 10,000 records and the batch size is set to 100, the total number of requests batched would be 100.

Minimum Value: 1

Snap Execution

 

Default Value: Execute only
Example: Validate & Execute

Dropdown list

Select one of the three modes in which the Snap executes. Available options are:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Additional Information

Table Creation

When attempting to load data, if the specified table does not exist and the Create table if not present checkbox is selected, the Snap creates the table along with the necessary columns and data types to accommodate the values in the first input document. If you want the table to have the same structure as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The additional view in the Select and Bulk Load Snaps transmits metadata about the table, enabling the replication of a table from one database to another.

The metadata document, read by the second input view, contains a snapshot of the JDBC DatabaseMetaData class. By manipulating this document, you can modify the generated CREATE TABLE statement. For instance, if you wish to rename the 'name' column to 'full_name', you can use a Mapper (Data) Snap that sets the path $.columns.Name.COLUMN_NAME to 'full_name'. The document encompasses the following fields (a trimmed sketch of such a document appears after this list):

  • columns - Contains the result of the getColumns() method with each column as a separate field in the object. Changing the COLUMN_NAME value will change the column's name in the created table. Note that you do not need to change the field name in the row input documents if you change a column name. The Snap automatically translates from the original name to the new name. For example, when changing from name to full_name, the name field in the input document will be put into the "full_name" column. You can also drop a column by setting the COLUMN_NAME value to null or the empty string.  The other fields of interest in the column definition are:

    • TYPE_NAME - The type to use for the column.  If this type is not known to the database, the DATA_TYPE field will be used as a fallback.  If you want to explicitly set a type for a column, set the DATA_TYPE field.

    • _SL_PRECISION - Contains the result of the getPrecision() method.  This field is used along with the _SL_SCALE field for setting the precision and scale of a DECIMAL or NUMERIC field.

    • _SL_SCALE - Contains the result of the getScale() method.  This field is used along with the _SL_PRECISION field for setting the precision and scale of a DECIMAL or NUMERIC field.

  • primaryKeyColumns - Contains the result of the getPrimaryKeys() method with each column as a separate field in the object.

  • declaration - Contains the result of the getTables() method for this table. The values in this object are just informational at the moment.  The target table name is taken from the Snap property.

  • importedKeys - Contains the foreign key information from the getImportedKeys() method. The generated CREATE TABLE statement will include FOREIGN KEY constraints based on the contents of this object. Note that you will need to change the PKTABLE_NAME value if you changed the name of the referenced table when replicating it.

  • indexInfo - Contains the result of the getIndexInfo() method for this table with each index as a separate field in the object.  Any UNIQUE indexes in here will be included in the CREATE TABLE statement generated by this Snap.
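For illustration, a heavily trimmed sketch of the metadata document for a hypothetical two-column employees table is shown below. The exact fields depend on the source database's JDBC driver, and the table and column names here are assumptions:

    {
      "columns": {
        "id": {
          "COLUMN_NAME": "id",
          "TYPE_NAME": "int",
          "DATA_TYPE": 4,
          "_SL_PRECISION": 10,
          "_SL_SCALE": 0
        },
        "name": {
          "COLUMN_NAME": "full_name",
          "TYPE_NAME": "varchar",
          "DATA_TYPE": 12,
          "_SL_PRECISION": 255,
          "_SL_SCALE": 0
        }
      },
      "primaryKeyColumns": {
        "id": { "COLUMN_NAME": "id", "KEY_SEQ": 1 }
      },
      "declaration": { "TABLE_NAME": "employees" },
      "importedKeys": {},
      "indexInfo": {}
    }

In this sketch, the name column has been renamed to full_name (as in the Mapper example above), so the Snap would create a full_name column while still reading the name field from the input documents.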

Troubleshooting

Error: Some characters appear as junk values after bulk load.

Reason: The Snaplex uses the character sets defined in the OS on which it is installed, so any character set that the OS does not recognize is not supported by the Snaplex either. As a result, such characters in the data set are represented as junk values in the database after a bulk load operation.

Resolution: Edit the bcp.bat file to accept the required character set and use the absolute path to this bcp.bat file in the BCP absolute path field. A sketch of such a bcp.bat file is shown under the BCP absolute path field description above.

Example

This example pipeline demonstrates how to load data from the table bulk_test_source into the table bulk_test_target with the SQL Server - Bulk Load Snap.

Step 1: Configure the SQL Server - Select Snap to read the records in the table bulk_test_source and pass them to the SQL Server - Bulk Load Snap.

Step 2: Configure the SQL Server - Bulk Load Snap to load the input documents into the table bulk_test_target. On successful validation, the SQL Server - Bulk Load Snap output displays the result of the copy operation.

Snap Pack History