In this article
Overview
You can use this Snap to insert new records into database tables. The Snap executes a SQL INSERT statement with the specified values. Key-value pairs in the input document supply the column names and the values to insert. Table columns that are missing from the document are populated with null values.
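As an illustration of this mapping, here is a hedged sketch assuming a hypothetical dbo.employees table with columns id, name, email, and phone, and an input document that supplies only the first three; the Snap builds and parameterizes the actual statement internally, so the exact SQL may differ:

```sql
-- Hypothetical input document: { "id": 101, "name": "Ada", "email": "ada@example.com" }
-- A statement of roughly this shape results (illustrative only):
INSERT INTO dbo.employees (id, name, email, phone)
VALUES (101, 'Ada', 'ada@example.com', NULL);  -- phone is absent from the document, so it receives NULL
```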
Snap Type
The Azure Synapse SQL - Insert Snap is a Write-type Snap that inserts new records into a SQL database table.
Prerequisites
Valid client ID.
A valid database account with the required permissions.
Support for Ultra Pipelines
Works in Ultra Pipelines.
Limitations
Known Issues
None.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | | | The second input view can be added to use the metadata of the source table as a document so that the table is created in SQL with a schema similar to that of the source table. The metadata is not used if the table already exists. |
Output | Document | | | The Snap outputs one document for every record written, so any document-processing Snap can be used downstream. If an output view is available, the original document that was used to create the statement is written to the output along with the status of the executed insert. |
Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines. |
Snap Settings
Asterisk (*): Indicates a mandatory field.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Add icon: Indicates that you can add fields in the fieldset.
Remove icon: Indicates that you can remove fields from the fieldset.
Field Name | Field Type | Description |
---|---|---|
Label* Default Value: Azure Synapse SQL - Insert | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline. |
Schema Name Default Value: None | String/Expression | Specify the database schema name. If you do not specify a schema, the Table Name suggestions retrieve the table names of all schemas. This field is suggestible and retrieves the available database schemas when you request suggestions. |
Table Name* Default Value: None | String/Expression | Specify the name of a table to which the records are to be inserted. |
Create table if not present Default Value: Deselected | Checkbox | Select this checkbox to create the target table if it does not exist.
Due to implementation details, a newly created table is not visible to subsequent database Snaps during runtime validation. If you wish to use the newly updated data immediately, you must use a child Pipeline that is invoked through a Pipeline Execute Snap. |
Preserve case sensitivity Default Value: Deselected | Checkbox | Select this checkbox to preserve the letter case of column labels while performing the insert operation. Selecting this option ensures that the exact case of each column label is retained. |
Number of Retries Default Value: 0 | String/Expression | Specify the maximum number of attempts to be made to receive a response. The request is terminated if the attempts do not result in a response. |
Retry Interval (seconds) Default Value: 1 | String/Expression | Specify the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception. |
Enable identity insert Default Value: Deselected | Checkbox | Select this checkbox to insert values from the input document into the target table's identity column. Ensure that the target table contains an identity column. If you do not select this checkbox, do not provide a value for the identity column in the input document. A T-SQL sketch of this behavior appears after this table. |
Snap Execution Default Value: | Dropdown list | Select one of the three modes in which the Snap executes. The available options are: Validate & Execute, Execute only, and Disabled. |
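To illustrate the Enable identity insert setting in plain T-SQL terms, here is a hedged sketch of the equivalent manual statements (not necessarily what the Snap issues internally); the dbo.orders table and its columns are hypothetical:

```sql
-- Hypothetical target table with an identity column:
-- CREATE TABLE dbo.orders (order_id INT IDENTITY(1,1), item NVARCHAR(100));

-- With "Enable identity insert" selected, explicit values from the input document
-- can land in the identity column, which in T-SQL requires:
SET IDENTITY_INSERT dbo.orders ON;
INSERT INTO dbo.orders (order_id, item) VALUES (5001, 'keyboard');
SET IDENTITY_INSERT dbo.orders OFF;

-- With the checkbox deselected, omit the identity column and let the database assign it:
INSERT INTO dbo.orders (item) VALUES ('keyboard');
```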
Table Creation
If the table does not exist when the Snap tries to do the insert, and the Create table if not present field is selected, a new table is created with the columns and data types required to hold the values in the first input document. If you would like the table to be created with the same schema as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The extra view in the Select and Bulk Load Snaps is used to pass metadata about the table, effectively allowing you to replicate a table from one database to another.
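As a rough sketch of this inference, assume a hypothetical first input document { "id": 1, "name": "Ada", "salary": 75000.5 }; the Snap would derive a statement along these lines, though the exact type names it chooses may differ:

```sql
-- Illustrative only: column types are inferred from the values in the first
-- input document, so the actual types the Snap selects may differ.
CREATE TABLE dbo.new_target_table (
    id      BIGINT,
    name    NVARCHAR(4000),
    salary  FLOAT
);
```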
The table metadata document that is read in by the second input view contains a dump of the JDBC DatabaseMetaData class. The document can be manipulated to affect the CREATE TABLE statement that is generated by this Snap. For example, to rename the name column to full_name, you can use a Mapper Snap that sets the path $.columns.name.COLUMN_NAME to full_name. A sketch of how such edits surface in the generated statement appears after the field list below. The document contains the following fields:
columns - Contains the result of the getColumns() method with each column as a separate field in the object. Changing the COLUMN_NAME value will change the name of the column in the created table. Note that if you change a column name, you do not need to change the name of the field in the row input documents. The Snap will automatically translate from the original name to the new name. For example, when changing from name to full_name, the name field in the input document is put into the "full_name" column. You can also drop a column by setting the COLUMN_NAME value to null or the empty string. The other fields of interest in the column definition are:
TYPE_NAME - The type to use for the column. If this type is not known to the database, the DATA_TYPE field is used as a fallback. If you want to explicitly set a type for a column, set the DATA_TYPE field.
_SL_PRECISION - Contains the result of the getPrecision() method. This field is used along with the _SL_SCALE field for setting the precision and scale of a DECIMAL or NUMERIC field.
_SL_SCALE - Contains the result of the getScale() method. This field is used along with the _SL_PRECISION field for setting the precision and scale of a DECIMAL or NUMERIC field.
primaryKeyColumns - Contains the result of the getPrimaryKeys() method with each column as a separate field in the object.
declaration - Contains the result of the getTables() method for this table. The values in this object are just informational at the moment. The target table name is taken from the Snap property.
importedKeys - Contains the foreign key information from the getImportedKeys() method. The generated CREATE TABLE statement will include FOREIGN KEY constraints based on the contents of this object. Note that you need to change the PKTABLE_NAME value if you changed the name of the referenced table when replicating it.
indexInfo - Contains the result of the getIndexInfo() method for this table, with each index as a separate field in the object. Any UNIQUE indexes here will be included in the CREATE TABLE statement generated by this Snap.
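To make the field descriptions above concrete, here is a hedged sketch of how edits to this metadata document could surface in the generated statement; the dbo.products table, its columns, and the chosen types are hypothetical:

```sql
-- Suppose the source metadata described columns name and price, and a Mapper Snap
-- upstream made two changes:
--   $.columns.name.COLUMN_NAME                  -> 'full_name'  (rename)
--   $.columns.price._SL_PRECISION / _SL_SCALE   -> 10 / 2       (precision and scale)
-- The generated statement would then look roughly like:
CREATE TABLE dbo.products (
    full_name NVARCHAR(255),   -- renamed via COLUMN_NAME
    price     DECIMAL(10, 2)   -- from _SL_PRECISION and _SL_SCALE
);
-- Entries in importedKeys and UNIQUE indexes in indexInfo would similarly add
-- constraint clauses where the target database supports them.
```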
The Snap will not automatically fix some errors encountered during table creation, because it requires user intervention to resolve correctly. For example, if the source table contains a column with a type that does not have a direct mapping in the target database, the Snap fails to execute. You then need to add a Mapper Snap to change the metadata document to explicitly set the values needed to produce a valid "CREATE TABLE" statement.
Examples
Inserting Records by Creating a New Table
This example Pipeline demonstrates how to insert data into a table. The table 'Employee1' does not initially exist in the database, so the Snap first creates the table and then inserts the records read and parsed by the upstream Snaps.
First read the data that you want to insert with a File Reader Snap:
The CSV file contains the following data:
The CSV Parser Snap parses the data from the CSV file into the proper format, ready to be inserted into the database using the Azure Synapse SQL - Insert Snap.
Then we pass this data to the Azure Synapse SQL - Insert Snap to insert it into a table named 'Employee1' under the 'dbo' schema.
Here is the sample output of the Snap.
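Because the screenshots of the CSV data and the output are not reproduced here, the following hedged sketch uses hypothetical columns (firstname, lastname, email) to show the kind of statements the Snap executes against dbo.Employee1; the real file's columns and values may differ:

```sql
-- Hypothetical rows parsed by the CSV Parser Snap (illustrative only):
INSERT INTO dbo.Employee1 (firstname, lastname, email)
VALUES ('John', 'Smith', 'jsmith@example.com');

INSERT INTO dbo.Employee1 (firstname, lastname, email)
VALUES ('Jane', 'Doe', 'jdoe@example.com');
```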
Downloads
Download and import the Pipeline into SnapLogic.
Configure Snap accounts as applicable.
Provide Pipeline parameters as applicable.
Snap Pack History