Vertica - Bulk Load
| Snap type | Write |
|---|---|
| Description | This Snap executes a Vertica bulk load. |
| Prerequisites | None. |
| Support and limitations | Does not work in Ultra Pipelines. |
| Behavior change | As of 433patches22343, no output is sent to the output view when the Snap receives no input. |
| Account | This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Configuring Vertica Accounts for information on setting up this type of account. |
| Views | |
Settings

| Setting | Description |
|---|---|
| Label* | Specify the name for the Snap. You can modify this to be more specific, especially if your pipeline contains more than one of the same Snap. |
| Schema name* | Specify the database schema name. Selecting a schema filters the Table name list to show only the tables within that schema. The property is suggestible and retrieves the available database schemas during suggest values. You can pass values using pipeline parameters but not upstream parameters. Example: public. Default value: [None] |
| Table name* | Specify the table on which to execute the insert operation. You can pass values using pipeline parameters but not upstream parameters. Default value: [None] |
| Create table if not present | Select this check box to create the target table automatically if it does not exist. The Snap creates the table based on the data types of the columns generated from the first row of the input document. Due to implementation details, a newly created table is not visible to subsequent database Snaps during runtime validation. To use the newly written data immediately, invoke the downstream Snaps in a child Pipeline through a Pipeline Execute Snap. Default value: Not selected |
| Truncate data | Select this check box to truncate the existing data before performing the data load. Truncating and reloading with a bulk insert is typically faster than updating the same rows with the Bulk Update Snap. Default value: Not selected |
| Maximum error count | Specify the maximum number of rows that can fail before the bulk load operation stops. Default value: 0 |
| Truncate columns | Select this check box to truncate column values that are larger than the maximum column length in the table. |
| Additional options | Specify additional options to pass to the COPY command. See Copy Parameters for the available options. Default value: [None] |
| Snap execution | Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
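Conceptually, several of these settings map onto parameters of Vertica's COPY command. The sketch below is a hypothetical helper (the function name and the exact mapping are assumptions, not the Snap's documented internals): Maximum error count corresponds to Vertica's REJECTMAX parameter, an unchecked Truncate columns corresponds to ENFORCELENGTH (which makes COPY reject oversized values rather than truncate them), and Additional options are appended verbatim.

```python
def build_copy_statement(schema, table, max_errors=0,
                         truncate_columns=False, additional_options=""):
    """Sketch of how the Snap's settings could translate into a Vertica
    COPY command (hypothetical helper; the mapping is an assumption)."""
    stmt = f'COPY "{schema}"."{table}" FROM STDIN'
    # "Maximum error count" maps to Vertica's REJECTMAX parameter.
    if max_errors > 0:
        stmt += f" REJECTMAX {max_errors}"
    # When "Truncate columns" is unchecked, ENFORCELENGTH makes COPY
    # reject oversized values instead of silently truncating them.
    if not truncate_columns:
        stmt += " ENFORCELENGTH"
    # "Additional options" are appended verbatim to the COPY command.
    if additional_options:
        stmt += " " + additional_options
    return stmt

print(build_copy_statement("public", "orders", max_errors=10,
                           truncate_columns=True,
                           additional_options="ABORT ON ERROR"))
# → COPY "public"."orders" FROM STDIN REJECTMAX 10 ABORT ON ERROR
```

See the Vertica COPY Parameters reference for the full list of options that can be supplied through Additional options.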
We recommend using the Vertica - Multi Execute Snap instead of building multiple Snaps with interdependent DML queries.
If a downstream Snap depends on data processed by an upstream Database Bulk Load Snap, use a Script Snap to add a delay so the data has time to become available.
For example, when a Pipeline performs a create, an insert, and a delete sequentially, a Script Snap can introduce a delay between the insert and the delete; otherwise, the delete may be triggered before the records have been inserted into the table.
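The delay idea above can be sketched as a simple polling loop. This is plain Python for illustration only (`wait_for_data` and its parameters are hypothetical; in a real Script Snap you would supply a check that queries the target table):

```python
import time

def wait_for_data(poll, timeout=30.0, interval=1.0):
    """Repeatedly call poll() until it reports the upstream data is
    visible, or give up after `timeout` seconds. Hypothetical sketch of
    the Script Snap delay pattern, not SnapLogic's API."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if poll():          # e.g. a check that the inserted rows exist
            return True
        time.sleep(interval)  # wait before polling again
    return False
```

A fixed `time.sleep()` also works, but polling avoids waiting longer than necessary.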
Temporary Files
During execution, data processing on Snaplex nodes occurs principally in memory, as an unencrypted stream. When a dataset exceeds the available compute memory, the Snap writes Pipeline data to local storage, also unencrypted, to optimize performance. These temporary files are deleted when the Snap or Pipeline execution completes. You can configure the temporary data's location in the Global properties table of the Snaplex's node properties, which can also help avoid Pipeline errors caused by a lack of space. For more information, see Temporary Folder in Configuration Options.
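The spill-to-disk behavior described above can be illustrated with a minimal sketch, assuming a simple buffer threshold (the function, the tiny limit, and the row format are illustrative, not SnapLogic internals):

```python
import os
import tempfile

def spill_rows(rows, mem_limit=3):
    """Keep up to `mem_limit` rows in memory; write the overflow to an
    unencrypted temporary file, then delete the file when processing
    ends. Illustrative sketch of the spill-to-disk idea only."""
    buffer = []
    spilled = 0
    tmp = tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".spill")
    try:
        for row in rows:
            if len(buffer) < mem_limit:
                buffer.append(row)      # fits in memory
            else:
                tmp.write(row + "\n")   # overflow goes to local storage
                spilled += 1
    finally:
        tmp.close()
        os.unlink(tmp.name)             # temp file removed after execution
    return len(buffer), spilled
```

`tempfile` honors the `TMPDIR` environment variable, which loosely mirrors how the Snaplex's configurable temporary folder determines where spill files land.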
© 2017-2024 SnapLogic, Inc.