
Overview

Use this Snap to perform the INSERT INTO SELECT operation on the specified table. If the target table does not exist, the Snap creates it and then inserts the data. After successfully running a Pipeline that contains this Snap, you can check for the data updates made to the target table in one of the following ways:

  • Validate the Pipeline and review the Snap's output preview data.
  • Query the target table using the ELT Select Snap for the latest data available.
  • Open the target database and check for the new data in the target table.
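The INSERT INTO SELECT behavior described above can be sketched with a local SQLite database standing in for your target CDW (table and column names here are illustrative, not the Snap's exact SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source table with some rows.
cur.execute("CREATE TABLE source_emp (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO source_emp VALUES (?, ?)", [(1, "Ana"), (2, "Bo")])

# Mimic the Snap: create the target table if it does not exist,
# then run INSERT INTO ... SELECT to copy the data over.
cur.execute("CREATE TABLE IF NOT EXISTS target_emp (id INTEGER, name TEXT)")
cur.execute("INSERT INTO target_emp SELECT id, name FROM source_emp")
conn.commit()

# Verify the copy, as you would with an ELT Select Snap or a direct query.
rows = cur.execute("SELECT * FROM target_emp ORDER BY id").fetchall()
print(rows)  # [(1, 'Ana'), (2, 'Bo')]
```

The verification step corresponds to the three options listed above: previewing the output, querying with ELT Select, or inspecting the table directly.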

Prerequisites

  • A valid SnapLogic account to connect to the database in which you want to perform the INSERT INTO SELECT operation.
  • Your database account must have the following permissions:
    • SELECT privileges for the source table whose data you want to insert into the target table.
    • CREATE TABLE privileges for the database in which you want to create the table.
    • INSERT privileges to insert data into the target table.

Limitations

  • The input data must correspond to the specified table's schema. You can use the ELT Transform Snap to ensure this.


Known Issues

  • If the last Snap in the Pipeline takes 2 to 5 seconds to update the runtime, the ELT Pipeline statistics are not displayed even after the Pipeline is completed. The UI does not auto-refresh to display the statistics after the runtime.
    Workaround: Close the Pipeline statistics window and reopen it to see the ELT Pipeline statistics.

  • When you return to the Snap Statistics tab from the Extra Details tab in the Pipeline Execution Statistics pane, it contains the status bar (Pipeline execution status) instead of the Download Query Details hyperlink and the individual counts of Records Added, Records Updated, and Records Deleted.

  • When your Databricks Lakehouse Platform instance uses Databricks Runtime Version 8.4 or lower, ELT operations involving large amounts of data might fail due to the smaller memory capacity of 536870912 bytes (512MB) allocated by default. This issue does not occur if you are using Databricks Runtime Version 9.0.
  • Suggestions displayed for the Schema Name field in this Snap are from all databases that the Snap account user can access, instead of the specific database selected in the Snap's account or Settings.


Snap Input and Output

Input
  • Type of view: Document
  • Number of views: Min: 1, Max: 1
  • Examples of upstream and downstream Snaps: ELT Select, ELT Transform
  • Description: The data to be inserted into the target table. Ensure that the data corresponds to the target table's schema.

Output
  • Type of view: Document
  • Number of views: Min: 0, Max: 1
  • Examples of upstream and downstream Snaps: ELT Select, ELT Transform
  • Description: A document containing the SQL SELECT query executed on the target database.

Snap Settings

Info: SQL Functions and Expressions for ELT

You can use the SQL Expressions and Functions supported for ELT to define your Snap or Account settings with the Expression symbol (=) enabled, where available. This list is common to all supported target CDWs. You can also use other expressions/functions that your target CDW supports.


Label (String)

The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

Default value: ELT Insert-Select
Example: Insert Employee Records
Get preview data (Checkbox)

Select this checkbox to include a preview of the query's output during Pipeline validation.

Default value: Not selected
Example: Selected
Database Name (String)

Required. Enter the name of the database in which the target table is located. Leave it blank to use the database name specified in the account settings.

If your target database is Databricks Lakehouse Platform (DLP), you can alternatively specify the file format type for your table path in this field, for example: DELTA, CSV, JSON, ORC, AVRO. See the Table Path Management for DLP section below to understand the Snap's behavior with table paths.

Default value: N/A
Example: TESTDB
Schema Name (String; not applicable to Databricks Lakehouse Platform)

Required. Enter the name of the database schema. If it is not defined, the suggestions for this field retrieve all schema names in the specified database.

  • Ensure that you use the exact same schema name, including the double quotes if used, when you repeat the schema name in the Target Table Name field.
  • Leave this field blank if your target database is Databricks Lakehouse Platform.

Default value: N/A
Example: "TEST_DATA"
Target Table Name (String)

Required. The name of the table or view into which you want to insert the data.

Info: Only views that can be updated (that is, can receive new rows) are listed as suggestions, so Join views are not included. This also implies that the Snap account user has INSERT privileges on the views listed as suggestions.

If your target database is Databricks Lakehouse Platform (DLP), you can alternatively specify the target table path in this field. Enclose the DBFS table path between two ` (backtick/backquote) characters, for example: `/mnt/elt/mytabletarget`. Learn more about the Snap's behavior toward paths in the Table Path Management for DLP section.

Note: If you choose to include the schema name as part of the target table name, ensure that it is the same as specified in the Schema Name field, including the double quotes. For example, if the schema name specified is "table_schema_1", then the table name should be "table_schema_1"."tablename".

Note: If the target table or view does not exist at run time, the Snap creates one with the name that you specify in this field and writes the data into it. During Pipeline validation, the Snap creates the new table or view but does not write any records into it.

  • The new table or view is not dropped if a subsequent/downstream Snap failure occurs during validation.
  • Use double quotes ("") to specify the table or view name if you want to include special characters such as hyphens (-) in the name.
  • A table or view name must always start with a letter.
  • Integers and underscores (_) can also be part of the name.
  • All characters are automatically converted to uppercase at the backend. Use double quotes to retain lowercase.
  • Ensure that you use the exact same schema name, if included, with the double quotes as specified in the Schema Name field.

Default value: N/A
Examples: "TEST_DATA"."DIRECT", EMPLOYEE_DATA, EMPLOYEE_123_DATA, "REVENUE-OUTLET", "net_revenue"

Advanced Options (Checkbox)

Select this checkbox to define the mapping of your source and target table columns, with or without the Insert Expressions list. Selecting it activates the Operation Types field.

Default value: Deselected
Example: Selected
Operation Types (Dropdown list)

Choose one of the following options that best describes your source data and the INSERT preference:

  • Source Columns Order: Use this operation type to fill the target table columns with the source table data in the same order of columns as in the source table.
  • Some Source and Target Column names are identical: Use this operation type when you want only the subset of target table columns that are identical to the source to be filled with the source data; optionally define the Insert Expression List to fill the remaining columns in the target table.
  • All Source and Target Column names are identical: Use this operation type when your source and target tables have the same set of column names. You cannot specify any Insert Expressions with this operation type.

Default value: Source Columns Order
Example: Some Source and Target Column names are identical
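For the Some Source and Target Column names are identical option, the generated statement inserts only into the shared columns. A hedged sketch of how such a statement could be composed (the column and table names are hypothetical, and the Snap's actual SQL generation may differ):

```python
def build_insert_select(target, source, source_cols, target_cols):
    """Compose an INSERT INTO ... SELECT over the columns the two tables share."""
    shared = [c for c in source_cols if c in target_cols]
    col_list = ", ".join(shared)
    return f"INSERT INTO {target} ({col_list}) SELECT {col_list} FROM {source}"

sql = build_insert_select(
    "EMPLOYEE_DATA", "STAGING_EMP",
    source_cols=["ID", "NAME", "REGION"],
    target_cols=["ID", "NAME", "SALARY"],  # SALARY gets NULL, REGION is ignored
)
print(sql)  # INSERT INTO EMPLOYEE_DATA (ID, NAME) SELECT ID, NAME FROM STAGING_EMP
```

Columns of the target that are not shared (SALARY here) are left to the Insert Expression List or filled with nulls, as described in the scenario table below.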
Table Option List

This field set enables you to specify the table options you want to use on the target table. These options are populated based on the Snap account (target CDW) selected. Specify each table option in a separate row; click + to add rows.

This field set contains one field:

  • Table Option

Table Option (String/Suggestion)

Click the Suggestions icon to view and select the table option you want to apply when loading data into the target table.

Default value: N/A
Example: DISTRIBUTION = HASH ( cust_name )
Insert Expression List

This field set enables you to specify the values for a subset of the columns in the target table. The remaining columns are assigned null values automatically. Specify each column in a separate row; click + to add rows.

This field set is disabled if you select All Source and Target Column names are identical in the Operation Types field.

This field set consists of the following fields:

  • Insert Column
  • Insert Value

Note: You can use this field set to insert data only into an existing table.

Insert Column (String)

Enter the name of the column in the target table to assign values to.

Default value: N/A
Example: ORD_AMOUNT

Insert Value (String)

Enter the value to assign to the specified column. Repeat the column name if you want to use the values in the source table. You can also use expressions to transform the values.

Default value: N/A
Examples: ORD_AMOUNT, ORD_AMOUNT+20
Overwrite (Checkbox)

Select this checkbox to overwrite the data in the target table. If not selected, the incoming data is appended.

Default value: Not selected
Example: Selected
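The Overwrite checkbox corresponds to clearing the target table before the insert, whereas the default appends the incoming rows. A minimal SQLite sketch of the two behaviors (names are illustrative; the actual statements your CDW runs may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (v INTEGER)")
cur.execute("CREATE TABLE tgt (v INTEGER)")
cur.executemany("INSERT INTO src VALUES (?)", [(1,), (2,)])
cur.execute("INSERT INTO tgt VALUES (99)")  # pre-existing target data

def insert_select(cur, overwrite):
    if overwrite:
        cur.execute("DELETE FROM tgt")  # Overwrite: replace the existing rows
    cur.execute("INSERT INTO tgt SELECT v FROM src")  # otherwise: append

insert_select(cur, overwrite=False)
append_count = cur.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]     # 3 rows

insert_select(cur, overwrite=True)
overwrite_count = cur.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]  # 2 rows
```

With append, the pre-existing row survives alongside the copied rows; with overwrite, only the source rows remain.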

Snap behavior in different source and target table column scenarios


Operation type | Source columns | Target columns | Insert expression list specified? | Snap behavior

Not specified (Advanced Options checkbox not selected) | Less | More | Yes
  • Ignores data in the source table columns.
  • Inserts values from the Expression list in the target table columns.
  • Inserts nulls in the remaining target columns.
  • Returns an error if the target table does not exist.

Not specified (Advanced Options checkbox not selected) | Same | Same | Yes
  • Ignores data in the source table columns.
  • Inserts values from the Expression list in the target table columns.
  • Inserts nulls in the remaining target columns.
  • Returns an error if the target table does not exist.

Not specified (Advanced Options checkbox not selected) | More | Less | Yes
  • Ignores data in the source table columns.
  • Inserts values from the Expression list in the target table columns.
  • Inserts nulls in the remaining target columns.
  • Returns an error if the target table does not exist.

Not specified (Advanced Options checkbox not selected) | Less | More | No
  • Inserts values into the target table columns that match the source columns.
  • Inserts nulls in the remaining target columns.

Not specified (Advanced Options checkbox not selected) | Same | Same | No
  • Inserts values into the target table columns that match the source columns.
  • Replicates the source table as a new target table, if the target table does not exist.

Not specified (Advanced Options checkbox not selected) | More | Less | No
  • Inserts values into the target table columns that match the source columns.
  • Inserts nulls in the remaining target table columns.
  • Replicates the source table as a new target table, if the target table does not exist.

Source Columns Order | Less | More | No
  • Inserts values from the source in the matching target columns.
  • Inserts nulls in the remaining target columns.

Source Columns Order | Less | More | Yes
  • Inserts values from the source in the matching target columns.
  • Inserts values from the Insert expression list in the specified target columns.
  • Inserts nulls in the remaining target columns.
  • Returns an error if any specified column name is not found in the target table.

Source Columns Order | Same | Same | No
  • Inserts values from the source in the matching target columns.

Source Columns Order | Same | Same | Yes
  • Returns an error that the source table (combined with the Insert Expression list) has more columns than the target table.

All Source and Target Column names are identical | Less | More | Not displayed
  • Inserts values from the source in the matching target columns.
  • Inserts nulls in the remaining target columns.

All Source and Target Column names are identical | Same | Same | Not displayed
  • Inserts values from the source in the matching target columns.
  • Inserts nulls in the remaining target columns.

Some Source and Target Column names are identical | Less | More | No
  • Inserts values from the source in the matching target columns.
  • Inserts nulls in the remaining target columns.

Some Source and Target Column names are identical | Less | More | Yes
  • Assumes that there is no column name in common between the source table and the Insert expression list; returns an error if a common column is found.
  • Inserts values from the source in the matching target columns.
  • Inserts values from the Insert Expression list in the specified target columns.
  • Inserts nulls in the remaining target columns.
  • Returns an error if any specified column name is not found in the target table.

Some Source and Target Column names are identical | Same | Same | No
  • Inserts values from the source in the matching target columns.
  • Inserts nulls in the remaining target columns.

Some Source and Target Column names are identical | Same | Same | Yes
  • Assumes that the sum of the source table column count and the Insert Expression list count is not more than the target table column count; returns an error if this condition is not satisfied.
  • Inserts values from the source in the matching target columns.
  • Inserts values from the Insert expression list in the specified target columns.
  • Inserts nulls in the remaining target columns.
  • Returns an error if any specified column name is not found in the target table.

Any of the three options | More | Less | Yes or No
  • Returns an error that the source table (plus Insert expressions) has more columns than the target table, and the Insert-Select operation cannot be performed.
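The "fewer source columns than target columns" rows above can be reproduced locally: inserting into only the matching columns leaves the remaining target columns NULL. A SQLite sketch with illustrative names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE tgt (id INTEGER, name TEXT, salary REAL)")
cur.execute("INSERT INTO src VALUES (1, 'Ana')")

# Insert only into the columns that match the source; salary is not listed,
# so it receives NULL, as described in the scenario table.
cur.execute("INSERT INTO tgt (id, name) SELECT id, name FROM src")

row = cur.execute("SELECT id, name, salary FROM tgt").fetchone()
print(row)  # (1, 'Ana', None)
```

An Insert Expression List entry for salary would replace that NULL with the specified value or expression result.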


Table Path Management for DLP

A table path in Databricks Lakehouse Platform is the folder in the DBFS where the files corresponding to the target table are stored. You need to enclose the DBFS table path between two ` (backtick/backquote) characters.

# | File Format Type (Database Name field) | Table Path exists?# | All other requirements are valid? | Snap Operation Result
1 | DELTA | Yes | Yes | Success
2 | DELTA | No | Yes | Failure. Snap displays an error message.
3 | DELTA | Yes | No | Failure. Snap displays an error message.
4 | AVRO/CSV/JSON/ORC/other | Yes | Yes | Success. Snap creates a DELTA table.

# We recommend that you specify a target table path that resolves to a valid data file. Create the required target file, if need be, before running your Pipeline.
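As an illustration of the backtick convention for DBFS table paths, here is a hedged helper that wraps a path in backticks before embedding it where a table name would go (the statement shape is an assumption for illustration, not the Snap's exact SQL):

```python
def quote_dbfs_path(path):
    """Enclose a DBFS table path in backticks, per the DLP convention above."""
    return f"`{path}`"

target = quote_dbfs_path("/mnt/elt/mytabletarget")
sql = f"INSERT INTO {target} SELECT * FROM source_view"
print(sql)  # INSERT INTO `/mnt/elt/mytabletarget` SELECT * FROM source_view
```

The backticks let the path, slashes included, be parsed as a single identifier.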

Pipeline Execution Statistics

As a Pipeline executes, the Snap shows the following statistics, updating periodically. You can monitor the progress of the Pipeline as each Snap executes its action.

  • Records Added

  • Records Updated

  • Records Deleted


You can view more information by clicking the Download Query Details link to download a JSON file. In the case of DLP, the Snap captures and depicts additional information (extraStats) on DML statement executions.


Note: The statistics are also available in the output view of the child ELT Pipeline.


Troubleshooting

Error | Reason | Resolution

Error: Invalid placement of ELT Insert-Select Snap
Reason: You cannot use the ELT Insert-Select Snap at the beginning of a Pipeline.
Resolution: Move the ELT Insert-Select Snap to the middle or the end of the Pipeline.

Error: Snap configuration invalid
Reason: The specified target table does not exist in the database for the Snap to insert the provided subset values.
Resolution: Ensure that the target table exists as specified so that the ELT Insert-Select Snap can insert the provided subset values.

Error: Database encountered an error during Insert-Select processing. Database cannot be blank. (Occurs when seeking the suggested list for the Schema Name field.)
Reason: Suggestions in the Schema Name and Target Table Name fields do not work when you have not specified a valid value for the Database Name field in this Snap.
Resolution: Specify the target Database Name in this Snap to view and choose from a suggested list in the Schema Name and Target Table Name fields.

Error: SQL exception from Snowflake: Syntax error in one or more positions in the SQL query.
Reason: Column names in Snowflake tables are case-sensitive. Snowflake stores all columns in uppercase unless they are surrounded by quotes at the time of creation, in which case the exact case is preserved. See Identifier Requirements in the Snowflake documentation.
Resolution: Ensure that you follow the same casing for the column names across the Pipeline.

Error: [Simba][SparkJDBCDriver](500051) ERROR processing query/statement. Error Code: 0. Cannot create table (`<schema name>`.`<table name>`). The associated location (`…<table name>`) is not empty but it's not a Delta table. (Target CDW: Databricks Lakehouse Platform)

Reason: The specified location contains one of the following, so the Snap/Pipeline cannot overwrite this table with the target table as needed:

  • A non-Delta table (such as CSV, ORC, JSON, or PARQUET)

  • A corrupted table

  • A Delta table with a different table schema

Resolution: Take appropriate action (mentioned below) on the existing table before running your Pipeline again (to create another Delta table at this location). Move or drop the existing table from the schema manually using one of the following commands:

Access the DBFS through a terminal and run:

  • dbfs mv dbfs:/<current_table_path> dbfs:/<new_table_path> to move the table, or

  • dbfs rm -r dbfs:/<table_path> to drop the table.

Or use a Python notebook and run:

  • dbutils.fs.mv(from: String, to: String, recurse: boolean = false): boolean to move the table/file/directory, or

  • dbutils.fs.rm(dir: String, recurse: boolean = false): boolean to drop the table/file/directory.

Error: Syntax error when a database/schema/table name contains a hyphen (-), such as in default.schema-1.order-details. (CDW: Azure Synapse)
Reason: Azure Synapse expects any object name containing hyphens to be enclosed in double quotes, as in "<object-name>".
Resolution: Ensure that you use double quotes for every object name that contains a hyphen when your target database is Azure Synapse. For example: default."schema-1"."order-details".
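The Snowflake and Azure Synapse issues above both come down to identifier quoting: Snowflake uppercases unquoted names, and Synapse requires double quotes around names containing hyphens. A hedged helper that quotes only when needed (the rule set here is deliberately simplified and not tied to either product's full grammar):

```python
import re

def quote_identifier(name):
    """Double-quote an identifier unless it is a plain uppercase name.

    Simplified rule: names made of uppercase letters, digits, and
    underscores (starting with a letter) can be left unquoted; anything
    else, such as hyphens or lowercase to preserve, must be quoted.
    """
    if re.fullmatch(r"[A-Z][A-Z0-9_]*", name):
        return name
    return f'"{name}"'

print(quote_identifier("EMPLOYEE_DATA"))  # EMPLOYEE_DATA
print(quote_identifier("order-details"))  # "order-details"
print(quote_identifier("net_revenue"))    # "net_revenue"
```

Applying such a rule consistently across the Pipeline avoids both the Snowflake casing mismatch and the Synapse hyphen syntax error.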

Examples


Merging Two Tables and Creating a New Table

We need a query with the UNION clause to merge two tables. To write these merged records into a new table, we need to perform the INSERT INTO SELECT operation. This example demonstrates how we can do both of these tasks.

First, we build SELECT queries to read the source tables. To do so, we use two ELT Select Snaps, named Read Part A and Read Part B in this example. Each of these Snaps is configured to output a SELECT * query that reads its table in the database. Additionally, these Snaps are configured to show a preview of the SELECT query's execution, as shown:

Read Part A Configuration | Read Part B Configuration

A preview of the outputs from the ELT Select Snaps is shown below:

Read Part A Output | Read Part B Output

Then, we connect the ELT Union Snap to the output view of the ELT Select Snaps. The SELECT * queries in both of these Snaps form the inputs for the ELT Union Snap. The ELT Union Snap is also configured to eliminate duplicates, so it adds a UNION DISTINCT clause.

Upon execution, the ELT Union Snap combines both incoming SELECT * queries and adds the UNION DISTINCT clause.

To perform the INSERT INTO SELECT operation, add the ELT Insert-Select Snap. We can perform this operation on an existing table. Alternatively, we can also use this Snap to write the records into a new table. To do so, we configure the Target Table Name field with the name of the new table.


The result is a table with the specified table name in the database after executing this Pipeline. 
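The Pipeline above boils down to one statement: a UNION (DISTINCT) of the two source tables fed into an INSERT INTO SELECT that creates the new table. Reproduced with SQLite (table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE part_a (id INTEGER)")
cur.execute("CREATE TABLE part_b (id INTEGER)")
cur.executemany("INSERT INTO part_a VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO part_b VALUES (?)", [(2,), (3,)])

# New target table plus the merged, de-duplicated records in one pass,
# mirroring ELT Select -> ELT Union -> ELT Insert-Select.
cur.execute("CREATE TABLE IF NOT EXISTS merged (id INTEGER)")
cur.execute(
    "INSERT INTO merged "
    "SELECT id FROM part_a UNION SELECT id FROM part_b"  # UNION keeps distinct rows
)

rows = [r[0] for r in cur.execute("SELECT id FROM merged ORDER BY id")]
print(rows)  # [1, 2, 3]
```

The duplicate value 2 appears once in the result, matching the ELT Union Snap's UNION DISTINCT behavior.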

Download this Pipeline.

Downloads

Note: Important Steps to Successfully Reuse Pipelines
  1. Download and import the Pipeline into SnapLogic.
  2. Configure Snap accounts as applicable.
  3. Provide Pipeline parameters as applicable.

Attachments: *.slp, *.zip




See Also