Oracle - Parallel Load
Overview
You can use this Snap to insert data in parallel across multiple threads. Compared to the Oracle - Bulk Load Snap, this Snap executes a high-performance data load.
The Oracle - Parallel Load Snap always auto-commits its database operations. The Auto commit setting in the Oracle Thin Account or Oracle Thin Dynamic Account configured for the Snap does not affect the Snap’s behavior.
This Snap supports Kerberos authentication for Oracle.
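The Snap's internals are not published, but the general pattern described above, several worker threads each inserting a batch of prepared values over an auto-committing JDBC connection, can be sketched in plain Java. This is a minimal illustration only: the connection URL, credentials, table, and column names are placeholders, and the Oracle JDBC thin driver must be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelBatchInsertSketch {

    // Placeholder connection details; the Snap reads these from the configured
    // Oracle Thin (Dynamic) Account rather than from code.
    static final String URL  = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
    static final String USER = "app_user";
    static final String PASS = "app_password";

    public static void main(String[] args) throws Exception {
        int maxThreadCount = 4;   // analogous to "Maximum thread count"
        int batchCapacity  = 100; // analogous to "Batch capacity"

        // Build 1,000 sample rows and split them into batches of batchCapacity.
        List<Object[]> rows = new ArrayList<>();
        for (int i = 1; i <= 1000; i++) {
            rows.add(new Object[] { i, "subscriber_" + i });
        }
        List<List<Object[]>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchCapacity) {
            batches.add(rows.subList(i, Math.min(i + batchCapacity, rows.size())));
        }

        // One insertion task per batch; at most maxThreadCount tasks run at once.
        ExecutorService pool = Executors.newFixedThreadPool(maxThreadCount);
        for (List<Object[]> batch : batches) {
            pool.submit(() -> insertBatch(batch));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }

    static void insertBatch(List<Object[]> batch) {
        // Hypothetical target table and columns.
        String sql = "INSERT INTO subscribers (subscriber_id, full_name) VALUES (?, ?)";
        try (Connection conn = DriverManager.getConnection(URL, USER, PASS);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            conn.setAutoCommit(true); // mirrors the Snap's always-auto-commit behavior
            for (Object[] row : batch) {
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch(); // one round trip per batch of prepared values
        } catch (Exception e) {
            // A real loader would route each failed record to an error view.
            e.printStackTrace();
        }
    }
}
```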
Snap Type
The Oracle - Parallel Load Snap is a Write-type Snap that writes the specified data into the target table.
Prerequisites
A valid Oracle Thin Account or Oracle Thin Dynamic Account.
Support for Ultra Pipelines
Does not work in Ultra Pipelines.
Limitations
None.
Known Issues
Both the Schema name and the Table name settings accept schema values from the upstream Snap and offer suggestions when used as expressions. However, we recommend that you do not use those suggestions because they cause incorrect behavior.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | | | Requires both the Schema name and the Table name to process a specific number of input documents into a prepared insert query and execute all of the prepared input document values on that query as a batch. |
Output | Document | | | The Snap outputs the number of documents successfully inserted into the target table. |
Error | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about the behavior of the Oracle - Parallel Load Snap with open error views. |
Snap Settings
Asterisk ( * ): Indicates a mandatory field.
Add icon: Indicates that you can add fields in the field set.
Remove icon: Indicates that you can remove fields from the field set.
Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Upload icon: Indicates that you can upload files. Learn more about managing Files.
Field Name | Field Type | Description |
---|---|---|
Label*  Default Value: Oracle - Parallel Load | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline. |
Schema name  Default Value: N/A | String/Expression | Specify the database schema name. Suggestions in the Schema name field are populated only when the schema contains at least one table; if no tables exist in a schema, only SYS, SYSTEM, and XDB are suggested. You can pass the values as expressions using the pipeline parameters but cannot use values from the upstream Snap. |
Table name* Â Default Value: N/A | String/Expression | Specify the table that the rows will be inserted into. This list is populated based on the tables associated with the selected schema. |
Create table if not present  Default Value: Deselected | Checkbox | Select this checkbox to automatically create the target table if it does not exist. |
Maximum thread count* | Integer/Expression | Specify the maximum number of threads allowed to perform data insertions simultaneously. If the value is 0, the Snap automatically sets the number of threads equal to the number of available processors on the system (see the sketch after this table). |
Batch capacity* | Integer/Expression | Specify the maximum number of records allowed in a single batch for an insertion task. |
Snap memory usage: To gracefully handle failed record insertions, the Snap must retain some input document data in memory during execution. To calculate the memory usage, follow this formula: | | |
Maximum error count* | Integer/Expression | Specify the maximum number of record insertion errors allowed before execution stops on the next error. For example, if you specify a value of 10, the pipeline execution continues until the number of failed records exceeds 10. |
Truncate table before load | Checkbox/Expression | Select this checkbox to truncate the target table before loading data. |
Snap Execution  Default Value: | Dropdown list | Select one of the following three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
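The following sketch illustrates only the documented semantics of two settings above: a Maximum thread count of 0 falling back to the number of available processors, and execution stopping once failures exceed the Maximum error count. The class and method names are hypothetical and are not part of the Snap.

```java
public class LoadSettingsSketch {

    // Maximum thread count semantics: a value of 0 falls back to the number
    // of available processors (illustrative only, not the Snap's code).
    static int resolveThreadCount(int configuredMaxThreads) {
        return configuredMaxThreads > 0
                ? configuredMaxThreads
                : Runtime.getRuntime().availableProcessors();
    }

    // Maximum error count semantics: execution continues until the number of
    // failed records exceeds the configured limit, then stops on the next error.
    static boolean shouldStop(int failedRecordsSoFar, int maximumErrorCount) {
        return failedRecordsSoFar > maximumErrorCount;
    }

    public static void main(String[] args) {
        System.out.println("Threads with a setting of 0: " + resolveThreadCount(0));

        int maximumErrorCount = 10;
        for (int failures = 1; failures <= 12; failures++) {
            if (shouldStop(failures, maximumErrorCount)) {
                // With a limit of 10, the load stops once failure #11 is seen.
                System.out.println("Stopping after failure #" + failures);
                break;
            }
        }
    }
}
```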
Troubleshooting
Error | Reason | Resolution |
---|---|---|
| Either the target table might not exist or a database access error might have occurred when trying to execute the TRUNCATE statement. | Verify your database credentials and ensure the target table exists, then retry. |
| The connection used to retrieve table metadata displayed an exception before closing. | Verify the target table column names and retry. |
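If you encounter either error, a short standalone check using the same account credentials can confirm, outside the pipeline, that the table is visible and can be truncated. Everything in this sketch, including the connection details, schema, and table name, is a placeholder.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TableCheckSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details, schema, and table name; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app_user", "app_password")) {

            // Confirm the target table is visible to this account before the
            // Snap tries to truncate it or resolve its column metadata.
            try (ResultSet rs = conn.getMetaData().getTables(
                    null, "APP_SCHEMA", "SUBSCRIBERS", new String[] { "TABLE" })) {
                if (!rs.next()) {
                    System.out.println("Table not found; check the schema, table name, and grants.");
                    return;
                }
            }

            // If the table exists, a manual TRUNCATE with the same credentials
            // should succeed when the account has the required privileges.
            try (Statement st = conn.createStatement()) {
                st.execute("TRUNCATE TABLE APP_SCHEMA.SUBSCRIBERS");
            }
        }
    }
}
```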
Behavior of Parallel Load Execution with Open Error Views
The following table describes the expected behavior for the general outcomes of Parallel Load execution when the Snap has an open error view (that is, when you select Route Error Data to Error View for When errors occur).
Committed Database Insertions | Error View | Output Preview | Snap Outcome | Final Snap State |
---|---|---|---|---|
All insertions committed. | N/A | One document containing the number of successfully inserted records. | All input records are successfully inserted into the target database. | Success |
| One error document for each record that failed to insert. | One document containing the number of successfully inserted records. | Some input records individually fail to insert into the target database, but do not exceed the Maximum error count. | Success |
Insertions made successfully | Multiple error documents, one for each record that failed to insert. | One document containing the number of successfully inserted records. | Enough input records individually fail to insert to exceed the Maximum error count and cause the Snap to end execution early. | Success |
Insertions made | One error document for | One document containing | An error occurs beyond the scope of any individual record. | Failure |
Examples
Load 1000 Document Insertions
The following example pipeline demonstrates the use of Parallel Load to insert 1000 documents. This example showcases Parallel Load's efficiency and capability in managing large-scale data insertion tasks.
Configure the Mapper Snap with subscriber details (Subscriber ID, Last Login, and Subscriber’s full name) to pass them to the Oracle - Parallel Load Snap.
Configure the Oracle - Parallel Load Snap with the following settings:
Maximum thread count: 4
Batch capacity: 100
On validation, the pipeline inserts a maximum of 400 records simultaneously, across 4 threads handling 100 records each. The image below displays only 50 records because the Preview document count is set to 50.
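As a quick check of that arithmetic, assuming only the settings shown above and nothing about the Snap's internals, 1000 documents split into batches of 100 yield 10 insertion tasks, of which at most 4 run at the same time:

```java
public class ConcurrencyMathSketch {
    public static void main(String[] args) {
        int totalRecords   = 1000; // documents produced by the Mapper Snap
        int batchCapacity  = 100;  // records per insertion task
        int maxThreadCount = 4;    // insertion tasks running at once

        int totalBatches = (totalRecords + batchCapacity - 1) / batchCapacity; // 10
        int maxInFlight  = maxThreadCount * batchCapacity;                     // 400

        System.out.println("Total batches: " + totalBatches);
        System.out.println("Records being inserted at any one time (at most): " + maxInFlight);
    }
}
```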
Handle Database Insertion Failures with Open Error View
This example demonstrates a Mapper Snap configuration designed to intentionally cause a small number of records to fail during database insertion. For every failed insertion, an error document is generated. Specifically, out of 1000 records, 7 are expected to fail, resulting in 7 error documents, and the remaining 993 records are successfully inserted. The count of successful insertions is also recorded in the output document. This pipeline demonstrates how to efficiently manage and log insertion errors.
Configure the Mapper Snap with settings so that a small number of records fail during database insertion, and route the error documents to the error view.
Configure the Oracle - Parallel Load Snap as follows to insert a total of 1000 documents.
On validation, 993 records are successfully inserted and the count is written to the output document, and 7 error documents are written to the error view. The images below display only 47 records because the Preview document count is set to 50.
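As a rough illustration of the split between the success count and the per-record error documents described above, the following JDBC sketch separates the rows a batch insert reports as failed from those that were inserted. The table, column names, and method are hypothetical, and how much of a failed batch a driver reports through getUpdateCounts() varies by driver and configuration.

```java
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.List;

public class ErrorRoutingSketch {

    // Inserts one batch and separates failed rows from inserted ones, loosely
    // mirroring the Snap's success count (output view) and per-record error
    // documents (error view). Table, columns, and method are hypothetical.
    static int insertWithErrorRouting(Connection conn, List<Object[]> batch,
                                      List<Object[]> failedRows) throws Exception {
        String sql = "INSERT INTO subscribers (subscriber_id, full_name) VALUES (?, ?)";
        int inserted = 0;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (Object[] row : batch) {
                ps.setObject(1, row[0]);
                ps.setObject(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch();
            inserted = batch.size(); // whole batch succeeded
        } catch (BatchUpdateException e) {
            // How many entries getUpdateCounts() holds, and what they contain,
            // varies by driver and batching mode; this sketch only routes rows
            // the driver explicitly marks as EXECUTE_FAILED.
            int[] counts = e.getUpdateCounts();
            int reported = Math.min(counts.length, batch.size());
            for (int i = 0; i < reported; i++) {
                if (counts[i] == Statement.EXECUTE_FAILED) {
                    failedRows.add(batch.get(i)); // would become an error document
                } else {
                    inserted++;
                }
            }
        }
        return inserted;
    }
}
```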
Inserted records | Insertion failed records |
---|---|
 |   |
Downloads
Snap Pack History
Related Content