
Overview

You can use this Snap to process data in bulk. The Snap uses Oracle SQL*Loader internally to perform the bulk load action. The input data is first written to either a temporary data file (on a Windows JCC) or a named pipe (on a Linux JCC). Then the Oracle SQL*Loader loads the data from the data file/named pipe into the target table.

Note

The Snap uses EZCONNECT to connect to Oracle. It does not use TNSNames or LDAP connections. 
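With EZConnect, the connection is described directly by host, port, and service name rather than through a TNS alias or directory lookup. A typical EZConnect descriptor looks like the following sketch (the host, port, and service name are placeholders, not values from this document):

```
//dbhost.example.com:1521/ORCLPDB1
```

The hostname, port, and service name (or SID) configured in the Oracle Account presumably supply these components.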

Upcoming

Oracle JDBC Driver Upgrade: see the Oracle Snap Pack page for details.

Snap type:

Write

Prerequisites:

None.

Known Issues:
  • When processing data, the Oracle Bulk Load Snap enters an infinite loop if the database storage reaches full capacity.
  • The Snap does not support Kerberos authentication for Oracle. (Issue 1438090)
Support and limitations:

Columns in a table that do not contain actual values but are generated based on a formula are referred to as virtual columns. Tables containing virtual columns cannot be bulk loaded using this Snap; the table must contain only actual values to be bulk loaded. A workaround is to create a view that selects only the non-virtual columns and use that view for the bulk load. For more information about creating a view in the Oracle database, refer to Creating a view in Oracle database.
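For example, the workaround might look like the following sketch (the table, column, and view names are hypothetical):

```sql
-- Hypothetical table with a virtual column.
CREATE TABLE employees (
  id     NUMBER,
  salary NUMBER,
  bonus  NUMBER GENERATED ALWAYS AS (salary * 0.1) VIRTUAL
);

-- View exposing only the non-virtual columns.
CREATE VIEW employees_load AS
  SELECT id, salary FROM employees;
```

Pointing the Snap's Table name field at employees_load instead of employees then bulk loads only the non-virtual columns.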

This Snap does not work in Ultra Pipelines.

Account: 

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Oracle Account for information on setting up this type of account.

Views:


Input

This Snap has one document input view by default. 

A second view can be added to receive the table metadata as a document so that, if the target table is absent, it can be created in the database with a schema similar to the source table's. This schema usually comes from the second output of a database Select Snap. If the schema is from a different database, there is no guarantee that all the data types will be handled properly.

Output

This Snap has at most one document output view.
Error

This Snap has at most one document error view and produces zero or more documents in that view. All error messages for rejected input data are routed to the error view only after the entire load completes, because the messages are extracted from the Oracle SQL*Loader log file, which becomes available only after SQL*Loader finishes loading.


Table Creation

If the table does not exist when the Snap tries to perform the load, and the Create table if not present property is selected, the table is created with the columns and data types required to hold the values in the first input document. If you would like the table to be created with the same schema as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The extra views in the Select and Bulk Load Snaps are used to pass metadata about the table, effectively allowing you to replicate a table from one database to another.
 

The table metadata document that is read in by the second input view contains a dump of the JDBC DatabaseMetaData class. The document can be manipulated to affect the CREATE TABLE statement that is generated by this Snap. For example, to rename the name column to full_name, you can use a Mapper (Data) Snap that sets the path $.columns.name.COLUMN_NAME to full_name.  The document contains the following fields:

  • columns - Contains the result of the getColumns() method with each column as a separate field in the object. Changing the COLUMN_NAME value will change the name of the column in the created table. Note that if you change a column name, you do not need to change the name of the field in the row input documents. The Snap will automatically translate from the original name to the new name. For example, when changing from name to full_name, the name field in the input document will be put into the "full_name" column. You can also drop a column by setting the COLUMN_NAME value to null or the empty string.  The other fields of interest in the column definition are:

    • TYPE_NAME - The type to use for the column.  If this type is not known to the database, the DATA_TYPE field will be used as a fallback.  If you want to explicitly set a type for a column, set the DATA_TYPE field.

    • _SL_PRECISION - Contains the result of the getPrecision() method.  This field is used along with the _SL_SCALE field for setting the precision and scale of a DECIMAL or NUMERIC field.

    • _SL_SCALE - Contains the result of the getScale() method.  This field is used along with the _SL_PRECISION field for setting the precision and scale of a DECIMAL or NUMERIC field.

  • primaryKeyColumns - Contains the result of the getPrimaryKeys() method with each column as a separate field in the object.

  • declaration - Contains the result of the getTables() method for this table. The values in this object are just informational at the moment.  The target table name is taken from the Snap property.

  • importedKeys - Contains the foreign key information from the getImportedKeys() method. The generated CREATE TABLE statement will include FOREIGN KEY constraints based on the contents of this object. Note that you will need to change the PKTABLE_NAME value if you changed the name of the referenced table when replicating it.

  • indexInfo - Contains the result of the getIndexInfo() method for this table with each index as a separated field in the object.  Any UNIQUE indexes in here will be included in the CREATE TABLE statement generated by this Snap.
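Putting these fields together: a metadata document in which the name column was renamed to full_name and which carries primary key and foreign key information might yield a CREATE TABLE statement along these lines (all table and column names here are hypothetical, and the statement the Snap actually generates may differ in detail):

```sql
CREATE TABLE "people" (
  "id"        NUMBER,
  "full_name" VARCHAR2(4000),  -- COLUMN_NAME changed from "name" via a Mapper
  "dept_id"   NUMBER,
  PRIMARY KEY ("id"),          -- from primaryKeyColumns
  FOREIGN KEY ("dept_id")      -- from importedKeys
    REFERENCES "departments" ("id")
)
```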

Note

The Snap will not automatically fix some errors encountered during table creation since they may require user intervention to resolve correctly. For example, if the source table contains a column with a type that does not have a direct mapping in the target database, the Snap will fail to execute. You will then need to add a Mapper (Data) Snap to change the metadata document to explicitly set the values needed to produce a valid CREATE TABLE statement.


Note

The BLOB type is not supported by this Snap.   


Settings

Label*


Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

Schema name


Specify the database schema name. Alternatively, click the Suggestion icon to fetch the list of database schemas.

Note

This value can be passed using Pipeline parameters, but not using upstream parameters.

Default Value: [None]
Example: SYS

Table name*


Specify the table on which to execute the bulk load operation.

Note

This value can be passed using Pipeline parameters, but not using upstream parameters.

Default Value: [None]
Example: people

Create table if not present


Select this checkbox to automatically create the target table if it does not exist. The schema for the new table is taken from the second input view if one is connected; otherwise, it is inferred from the first input document.

Default Value: Not selected

Sqlldr absolute path


Specify the absolute path of the sqlldr program in the JCC's file system. If empty, the Snap will look for it in the JCC's environment variable PATH.

Note

On Windows, the value must be entered manually, and the path to the sqlldr executable must include the ".exe" extension to ensure the executable is correctly referenced.

Default Value: [None]

Example: /u01/app/oracle/product/11.2.0/xe/bin/sqlldr (Linux)

              C:\app\Administrator\product\11.2.0\client_1\bin\sqlldr.exe (Windows)

Insert mode


Select the insert mode to use when loading data into the table:

  • INSERT - Load rows only if the target table is empty.
  • APPEND - If data already exists in the table, the new rows are appended to the table. If data does not already exist, the new rows are simply loaded.
  • REPLACE - First delete all the rows in the existing table and then load rows.
  • TRUNCATE - First truncate the table and then load rows.

Default Value: APPEND
See the Oracle SQL*Loader documentation for more information.
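Internally, the selected mode corresponds to the load method keyword in the SQL*Loader control file. A minimal control file using the APPEND mode might look like the following sketch (the data file, table, and column names are hypothetical, and the control file the Snap generates may differ):

```
LOAD DATA
INFILE 'people.dat'
APPEND
INTO TABLE people
FIELDS TERMINATED BY ','
(id, first_name, last_name)
```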

Maximum error count*


Specify the maximum number of rows that can fail before the bulk load operation is stopped.

Default Value: 50
Example: 10 (the Pipeline execution continues as long as the number of failed records stays below 10)

Use direct path load


Select this checkbox to use the direct path load mode of the SQL*Loader program, typically used when loading large data sets. This mode substantially improves bulk load performance by reducing loading time.

Default Value: Selected 

Skip virtual columns

Select this checkbox to skip virtual columns to prevent errors while loading data.

Default Value: Selected

Additional SQL Loader Parameters

Use this field set to define additional SQL*Loader parameters, if any. This field set contains the Parameter Name and Parameter Value fields.

Parameter Name

Select the SQL*Loader parameter name. Available options are:

  • BINDSIZE
  • COLUMNARRAYROWS
  • DIRECT
  • LOAD
  • MULTITHREADING
  • PARALLEL
  • READSIZE
  • RESUMABLE
  • RESUMABLE_NAME
  • RESUMABLE_TIMEOUT
  • ROWS
  • SILENT
  • SKIP
  • SKIP_INDEX_MAINTENANCE
  • SKIP_UNUSABLE_INDEXES
  • STREAMSIZE

See SQL*Loader parameters for more information.

Default Value: N/A
Example: COLUMNARRAYROWS

Parameter Value

Specify the value for the parameter selected above.

Default Value: N/A
Example: 2000
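Each parameter name/value pair is passed on the sqlldr command line as name=value. With the example values above, the invocation would look roughly like the following sketch (the connect string and control file name are placeholders):

```
sqlldr userid=scott/tiger@//dbhost.example.com:1521/ORCLPDB1 \
  control=bulk_load.ctl columnarrayrows=2000
```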

Column Length Overrides

Use this field set to override the column length for columns with large object data types such as CLOB (Character Large Object) and NCLOB (National Character Large Object). Add each column in a separate row. The field set contains the following fields:

  • Column Name

  • Length

Column Name

Specify or select the name of the column in the table that you want to load.

Default Value: N/A
Example: DESCRIPTION, MESSAGE_BODY

Length (in bytes)

Specify a value (in bytes) for the column length.

Default Value: N/A
Example: 10

Snap Execution

Select one of the following modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during Pipeline validation, then performs full execution of the Snap (unlimited records) during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default Value: Execute only


Troubleshooting

Error: Failed to execute query
Reason: Exception while writing document to the stream.
Resolution: Address the reported issue.

Example


This example loads six records into table TECTONIC.TERENCE_BULK_TEST2 with the Oracle Bulk Load Snap; three of them are invalid records that cannot be inserted and are routed to the error view.

 


The definition of table TECTONIC.TERENCE_BULK_TEST2:

Code Block
CREATE TABLE "TECTONIC"."TERENCE_BULK_TEST2" (
  "id"         NUMBER,
  "first_name" VARCHAR2(4000),
  "last_name"  VARCHAR2(4000)
)


The input data defined in JSON Generator Snap:

Three of the rows have string values in the ID column, which are invalid when inserted into table TECTONIC.TERENCE_BULK_TEST2.

The settings of the Oracle Bulk Load Snap:


The output of the output view:

The output of the error view, where the three invalid rows are routed:

Snap Pack History

See the Oracle Snap Pack page for the Snap Pack history.