MySQL - Bulk Load
Snap type: Write
Description: This Snap executes a MySQL bulk load. It uses the LOAD DATA INFILE statement internally to perform the bulk load action. The SnapLogic Platform does not support the installation of utilities or processes on Cloudplexes. Learn more. The file is first copied to the JCC node, then to the MySQL server's tmp directory, and finally loaded into the target MySQL table. Ensure there is sufficient space in both the JCC and MySQL tmp locations.
Prerequisites: [None]
Support and limitations:
Account: This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See MySQL Account for information on setting up this type of account. MySQL account settings are shared across the different MySQL Snaps, and the batch size setting affects the performance of some of the Snaps. We recommend changing the batch size setting (within the account details) to 100k or 200k for the MySQL Bulk Load Snap only. The ideal batch size for bulk loads may vary with your environment settings, but this range is generally a good starting point.
Views:
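To make the mechanism concrete, the following sketch shows the general shape of a MySQL LOAD DATA INFILE statement like the one the Snap builds internally. The file path, table name, and field/line terminators here are hypothetical; the exact statement the Snap generates is an implementation detail.

```python
# Illustrative sketch only: the general form of a LOAD DATA INFILE statement.
# The file path and delimiters are assumptions, not the Snap's internals.

def build_load_statement(file_path: str, table: str) -> str:
    """Return a basic MySQL LOAD DATA INFILE statement."""
    return (
        f"LOAD DATA INFILE '{file_path}' "
        f"INTO TABLE `{table}` "
        "FIELDS TERMINATED BY ',' "
        "LINES TERMINATED BY '\\n'"
    )

print(build_load_statement("/tmp/bulk_0001.csv", "people"))
```

The statement refers to a file on the MySQL server's filesystem, which is why the Snap first stages the file in the server's tmp directory as described above.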
Settings
Label*: The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.
Schema name: The database schema name. If it is not defined, the suggestions for Table name retrieve the table names of all schemas. The property is suggestible and retrieves the available database schemas during suggest values. Values can be passed using pipeline parameters but not upstream parameters.
Table name*: The table on which to execute the bulk load operation. Values can be passed using pipeline parameters but not upstream parameters. Example: people
Create table if not present: Whether the table should be automatically created if it is not already present. Learn more about Table creation. Default value: Deselected
Partitions: Specifies a list of one or more partitions and/or subpartitions. When used, any input document that cannot be inserted into one of the named partitions or subpartitions is ignored. Default value: Deselected
Columns: When no column list is provided, input documents are expected to contain a field for each table column. When a column list is provided, the Snap loads only the specified columns.
Set Clause*: Assigns values to columns. For example, you can use "COLUMN1 = 1" to insert the integer 1 into column COLUMN1 for each input document. See this link for more information. Default value: [None]
On duplicates*: The action to take when duplicate records are found. A duplicate is a row that shares the same value for a primary key or unique index as an existing row. The available options are REPLACE and IGNORE. When you choose REPLACE, a data type mismatch or constraint violation raises an error and the transaction is rolled back based on the Chunk size. With IGNORE, no such error is raised for a data type mismatch or constraint violation; column values are assigned according to their default values. Default value: IGNORE
Concurrency Option: Specifies how to handle the load process when other clients are reading from the table. The available options are None, LOW_PRIORITY, and CONCURRENT, matching the modifiers of the MySQL LOAD DATA statement. Default value: None
Character Set: The MySQL server uses the character set indicated by the character_set_database system variable to interpret the information in the inputs. If the contents of the inputs use a character set that differs from the default, we recommend that you specify the character set of the inputs with this property. A character set of binary specifies "no conversion". It is not possible to load data that uses the ucs2, utf16, utf16le, or utf32 character sets. Default value: [None]
Chunk size: The number of records to load at a time. Default value: 100000
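The Chunk size setting can be pictured as batching: input records are grouped into batches of at most the configured size, and each batch is loaded in one operation. The sketch below models that batching only; the function name and record shape are illustrative, not the Snap's internal API.

```python
# Hypothetical model of how a Chunk size setting partitions input records
# into batches. This is a sketch of the batching idea, not Snap internals.

from typing import Iterable, Iterator, List

def chunk_records(records: Iterable[dict], chunk_size: int) -> Iterator[List[dict]]:
    """Yield successive batches of at most chunk_size records."""
    batch: List[dict] = []
    for record in records:
        batch.append(record)
        if len(batch) == chunk_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

rows = [{"col1": i} for i in range(250000)]
batches = list(chunk_records(rows, chunk_size=100000))  # the default Chunk size
print([len(b) for b in batches])  # → [100000, 100000, 50000]
```

With REPLACE selected under On duplicates, a rollback on error applies per batch, which is why the Chunk size influences how much work is undone.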
Table Creation
If the table does not exist when the Snap tries to do the load, and the Create table if not present property is selected, the table is created with the columns and data types required to hold the values in the first input document. If you would like the table to be created with the same schema as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The extra views in the Select and Bulk Load Snaps are used to pass metadata about the table, effectively allowing you to replicate a table from one database to another.
The table metadata document that is read in by the second input view contains a dump of the JDBC DatabaseMetaData class. The document can be manipulated to affect the CREATE TABLE statement that is generated by this Snap. For example, to rename the name column to full_name, you can use a Mapper (Data) Snap that sets the path $.columns.name.COLUMN_NAME to full_name. The document contains the following fields:
- columns - Contains the result of the getColumns() method, with each column as a separate field in the object. Changing the COLUMN_NAME value changes the name of the column in the created table. Note that if you change a column name, you do not need to change the name of the field in the row input documents. The Snap automatically translates from the original name to the new name. For example, when changing from name to full_name, the name field in the input document is put into the "full_name" column. You can also drop a column by setting its COLUMN_NAME value to null or the empty string. The other fields of interest in the column definition are:
  - TYPE_NAME - The type to use for the column. If this type is not known to the database, the DATA_TYPE field is used as a fallback. If you want to explicitly set a type for a column, set the DATA_TYPE field.
  - _SL_PRECISION - Contains the result of the getPrecision() method. Used together with _SL_SCALE to set the precision and scale of a DECIMAL or NUMERIC field.
  - _SL_SCALE - Contains the result of the getScale() method. Used together with _SL_PRECISION to set the precision and scale of a DECIMAL or NUMERIC field.
- primaryKeyColumns - Contains the result of the getPrimaryKeys() method, with each column as a separate field in the object.
- declaration - Contains the result of the getTables() method for this table. The values in this object are currently informational only; the target table name is taken from the Snap property.
- importedKeys - Contains the foreign key information from the getImportedKeys() method. The generated CREATE TABLE statement includes FOREIGN KEY constraints based on the contents of this object. Note that you need to change the PKTABLE_NAME value if you changed the name of the referenced table when replicating it.
- indexInfo - Contains the result of the getIndexInfo() method for this table, with each index as a separate field in the object. Any UNIQUE indexes here are included in the CREATE TABLE statement generated by this Snap.
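The metadata manipulations above can be sketched in plain Python. The dictionary below mirrors the shape of the JDBC DatabaseMetaData dump described in this section; the column names and types are illustrative, and the rename mimics a Mapper setting $.columns.name.COLUMN_NAME to full_name.

```python
# Sketch of editing the table-metadata document from the second input view.
# The dictionary shape follows the fields described above; values are
# illustrative assumptions, not real output from a database.

metadata = {
    "columns": {
        "name":  {"COLUMN_NAME": "name",  "TYPE_NAME": "VARCHAR"},
        "notes": {"COLUMN_NAME": "notes", "TYPE_NAME": "TEXT"},
    },
    "primaryKeyColumns": {},
}

# Rename the "name" column to "full_name" (the Mapper equivalent of setting
# $.columns.name.COLUMN_NAME to "full_name"):
metadata["columns"]["name"]["COLUMN_NAME"] = "full_name"

# Drop the "notes" column by clearing its COLUMN_NAME:
metadata["columns"]["notes"]["COLUMN_NAME"] = None

# Columns that would appear in the generated CREATE TABLE statement:
created = [c["COLUMN_NAME"] for c in metadata["columns"].values()
           if c["COLUMN_NAME"]]
print(created)  # → ['full_name']
```

Note that row input documents keep using the original field name (name); the Snap maps it to the renamed column automatically, as described above.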
When invalid data is passed to the Snap, the Snap execution fails. The database administrator can set a global variable (the MySQL sql_mode) so that an invalid value is either replaced with a default (for example, if a string is passed for an integer column, the value 0 is used) or reported as an error. See Load Data Syntax for more information.
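The non-strict behavior described above can be modeled as a coercion with a fallback default. The sketch below is an illustration of that behavior only; it is not MySQL code, and the default of 0 matches the integer example above.

```python
# Models MySQL's non-strict handling of invalid values: coerce if possible,
# otherwise substitute a default (0 for integers). Illustrative only.

def coerce_to_int(value, default=0) -> int:
    """Return value as an int, falling back to default like non-strict mode."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

print(coerce_to_int("42"))    # → 42
print(coerce_to_int("oops"))  # → 0
```

In strict mode the same input would instead raise an error and fail the load, which is the Snap's default failure behavior described above.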
When Auto commit on the account is set to true and a downstream Snap depends on the data processed by an upstream Database Bulk Load Snap, use a Script Snap to add a delay so that the data is available. For example, when a pipeline performs a create, an insert, and a delete sequentially, a Script Snap can add a delay between the insert and the delete; otherwise, the delete may be triggered before the records are actually inserted into the table.
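A minimal sketch of the delay logic such a Script Snap might implement is shown below. The function name, document shape, and delay value are assumptions for illustration; tune the delay to your environment.

```python
# Hypothetical sketch of a pass-through step that waits before forwarding
# a document, like the Script Snap delay described above.

import time

def pass_through_with_delay(document: dict, delay_seconds: float = 5.0) -> dict:
    """Forward the document downstream after waiting for the data to land."""
    time.sleep(delay_seconds)
    return document

doc = pass_through_with_delay({"status": "inserted"}, delay_seconds=0.1)
print(doc)  # → {'status': 'inserted'}
```

A fixed sleep is a blunt instrument; if your design allows it, polling the target table for the expected row count is a more robust alternative.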
Example
The following example illustrates the usage of the MySQL Bulk Load Snap. In this pipeline, we map the data using the Mapper Snap, insert it into the target table using the MySQL Bulk Load Snap, read the data using the MySQL Select Snap, and send the first document to the output view using the Head Snap.
The pipeline:
1. The Mapper Snap maps the data and writes the result to the target path, enron numeric_table.
2. The MySQL Bulk Load Snap loads the input into the table, enron numeric_table.
3. The MySQL Select Snap retrieves records from the table, enron numeric_table, using the where clause col1 = 100.
4. The Head Snap is set to 1, so only the first document returned by the MySQL Select Snap is passed to the output view.
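Steps 3 and 4 together are roughly equivalent to a single filtered query with a row limit. The sketch below builds that SQL; the helper function is hypothetical, and the table and clause come from the example above.

```python
# Rough SQL equivalent of the Select Snap's where clause plus the Head
# Snap's limit of one document. Helper name is illustrative only.

def select_first_match(table: str, where: str) -> str:
    """Build a SELECT that returns only the first matching row."""
    return f"SELECT * FROM `{table}` WHERE {where} LIMIT 1"

print(select_first_match("enron numeric_table", "col1 = 100"))
```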
Successful execution of the pipeline gives the following output preview.
Have feedback? Email documentation@snaplogic.com | Ask a question in the SnapLogic Community
© 2017-2025 SnapLogic, Inc.
