Oracle - Parallel Load
Overview
You can use this Snap to insert data in parallel across multiple threads. Compared to the Oracle - Bulk Load Snap, this Snap provides a higher-performance data load.
The Oracle - Parallel Load Snap always auto-commits its database operations. The Auto commit setting in the Oracle Thin Account or Oracle Thin Dynamic Account configured for the Snap does not affect this behavior.
This Snap supports Kerberos authentication for Oracle.
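The Snap itself is configured through settings rather than code, but the following minimal sketch (plain JDBC with a fixed thread pool, not SnapLogic source code) illustrates the general pattern the overview describes: each batch of records is inserted by its own task, and connections keep the driver's default auto-commit. The URL, credentials, table, and column names are placeholder assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoadSketch {

    // Placeholder connection details; the Snap takes these from its Oracle Thin account.
    static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
    static final String USER = "scott";
    static final String PASSWORD = "tiger";

    public static void main(String[] args) throws InterruptedException {
        // Two illustrative batches; the Snap groups input documents by its Batch capacity setting.
        List<List<Object[]>> batches = List.of(
                List.of(new Object[]{1, "alpha"}, new Object[]{2, "beta"}),
                List.of(new Object[]{3, "gamma"}, new Object[]{4, "delta"}));

        // A fixed pool of workers inserts the batches in parallel.
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        for (List<Object[]> batch : batches) {
            pool.submit(() -> insertBatch(batch));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static void insertBatch(List<Object[]> batch) {
        // Auto-commit is left at the driver default (on), mirroring the Snap's always-auto-commit behavior.
        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO TARGET_TABLE (ID, NAME) VALUES (?, ?)")) {
            for (Object[] row : batch) {
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```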
Snap Type
The Oracle - Parallel Load Snap is a Write-type Snap that writes the specified data to the target table.
Prerequisites
A valid Oracle Thin Account or Oracle Thin Dynamic Account.
Support for Ultra Pipelines
Does not work in Ultra Pipelines.
Limitations
None.
Known Issues
Both the Schema name and Table name settings accept schema from an upstream Snap and offer suggestions for expressions. However, we recommend that you do not use those suggestions because they cause incorrect behavior.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | | | Requires both the Schema name and the Table name to process a specific number of input documents into a prepared insert query and execute all of the prepared input document values on that query as a batch (see the sketch after this table). |
Output | Document | | | Outputs the number of documents successfully inserted into the target table. |
Error | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about the behavior of the Oracle - Parallel Load Snap with open error views. | | | |
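To make the Input view description concrete, here is a minimal, hypothetical sketch (plain JDBC, not the Snap's implementation) of turning a set of input documents into one prepared insert query and executing all of their values as a single batch. The schema, table, and document shape are assumptions.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DocumentBatchSketch {

    // Builds one parameterized INSERT from the first document's keys, then executes
    // every document's values against that statement as a single batch.
    static int insertDocuments(Connection conn, String schema, String table,
                               List<Map<String, Object>> documents) throws Exception {
        Map<String, Object> first = documents.get(0);
        String columns = String.join(", ", first.keySet());
        String placeholders = first.keySet().stream()
                .map(k -> "?")
                .collect(Collectors.joining(", "));
        String sql = "INSERT INTO " + schema + "." + table
                + " (" + columns + ") VALUES (" + placeholders + ")";

        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Map<String, Object> doc : documents) {
                int i = 1;
                for (String key : first.keySet()) {
                    ps.setObject(i++, doc.get(key));
                }
                ps.addBatch();
            }
            // executeBatch returns one update count per row; summing them approximates the
            // inserted-record count reported on the output view (some drivers return
            // SUCCESS_NO_INFO instead of exact counts).
            int inserted = 0;
            for (int count : ps.executeBatch()) {
                if (count > 0) {
                    inserted += count;
                }
            }
            return inserted;
        }
    }
}
```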
Snap Settings
Asterisk (*): Indicates a mandatory field.
Add icon: Indicates that you can add fields in the field set.
Remove icon: Indicates that you can remove fields from the field set.
Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Upload icon: Indicates that you can upload files. Learn more about managing Files.
Field Name | Field Type | Description |
---|---|---|
Label* (Default Value: Oracle - Parallel Load) | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline. |
Schema name (Default Value: N/A) | String/Expression | Specify the database schema name. A schema appears in the suggestions only when it contains at least one table; if no schemas contain tables, only SYS, SYSTEM, and XDB are suggested. You can pass the value as an expression using pipeline parameters, but you cannot use values from an upstream Snap. |
Table name* (Default Value: N/A) | String/Expression | Specify the table into which the rows are inserted. The suggestions list is populated based on the tables associated with the selected schema. |
Create table if not present (Default Value: Deselected) | Checkbox | Select this checkbox to automatically create the target table if it does not exist. |
Maximum thread count* | Integer/Expression | Specify the maximum number of threads allowed to perform data insertions simultaneously. If the value is 0, the Snap automatically sets the number of threads equal to the number of available processors on the system (illustrated in the sketch after this table). |
Batch capacity* | Integer/Expression | Specify the maximum number of records allowed in a single batch for an insertion task. |
Snap memory usage: To gracefully handle failed record insertions, the Snap must retain some input document data in memory during the Snap execution. To calculate the memory usage, follow this formula: | | |
Maximum error count* | Integer/Expression | Specify the maximum number of record insertion errors allowed before execution stops on the next error. For example, if you specify a value of 10, the pipeline execution continues until the number of failed records exceeds 10. |
Truncate table before load | Checkbox/Expression | Select this checkbox to truncate the target table before loading data. |
Snap Execution | Dropdown list | Select one of the following three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
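The following fragment is only an illustration of how the two numeric settings above could interact, not SnapLogic code: a Maximum thread count of 0 maps to the number of available processors, and Batch capacity caps how many records go into each insertion task. The class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SettingsSketch {

    // Maps the Maximum thread count setting to a worker pool size:
    // 0 means "use all available processors", as the Snap documents.
    static ExecutorService workerPool(int maximumThreadCount) {
        int threads = (maximumThreadCount == 0)
                ? Runtime.getRuntime().availableProcessors()
                : maximumThreadCount;
        return Executors.newFixedThreadPool(threads);
    }

    // Groups records into batches of at most batchCapacity, one batch per insertion task.
    static <T> List<List<T>> partition(List<T> records, int batchCapacity) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchCapacity) {
            batches.add(records.subList(i, Math.min(i + batchCapacity, records.size())));
        }
        return batches;
    }
}
```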
Troubleshooting
Error | Reason | Resolution |
---|---|---|
| Either the target table might not exist or a database access error might have occurred when trying to execute the TRUNCATE statement. | Verify your database credentials and ensure that the target table exists, then retry (see the sketch after this table). |
| The connection used to retrieve table metadata encountered an exception before closing. | Verify the target table column names and retry. |
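As a hedged illustration of the first row above (again plain JDBC rather than the Snap's internals), the sketch below checks Oracle's ALL_TABLES view before issuing the TRUNCATE, which is one way to distinguish a missing table from a general database access error. The class and method names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TruncateCheckSketch {

    // Returns true when the target table is visible in Oracle's data dictionary.
    static boolean tableExists(Connection conn, String schema, String table) throws SQLException {
        String sql = "SELECT COUNT(*) FROM ALL_TABLES WHERE OWNER = ? AND TABLE_NAME = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, schema.toUpperCase());
            ps.setString(2, table.toUpperCase());
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && rs.getInt(1) > 0;
            }
        }
    }

    // Truncates only when the table exists, so a missing table surfaces as a clear error
    // instead of a failed TRUNCATE statement.
    static void truncateIfPresent(Connection conn, String schema, String table) throws SQLException {
        if (!tableExists(conn, schema, table)) {
            throw new SQLException("Target table " + schema + "." + table + " does not exist");
        }
        try (Statement stmt = conn.createStatement()) {
            // TRUNCATE is DDL in Oracle and commits implicitly.
            stmt.executeUpdate("TRUNCATE TABLE " + schema + "." + table);
        }
    }
}
```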
Behavior of Parallel Load Execution with Open Error Views
The following table describes the expected behavior for the general outcomes of Parallel Load execution when the Snap has an open error view (that is, when you select Route Error Data to Error View for When errors occur).
Committed Database Insertions | Error View | Output Preview | Snap Outcome | Final Snap State |
---|---|---|---|---|
All insertions committed. | N/A | One document containing the number of records successfully inserted. | All input records are successfully inserted into the target database. | Success |
 | One error document for each record that failed to insert. | One document containing the number of records successfully inserted. | Some input records individually fail to insert into the target database, but do not exceed the Maximum error count. | Success |
Insertions made successfully | Multiple error documents, one for each record that failed to insert. | One document containing the number of records successfully inserted. | Enough input records individually fail to insert to exceed the Maximum error count and cause the Snap to end execution early. | Success |
Insertions made | One error document for | One document containing | An error occurs beyond the scope of any individual record. | Failure |