
MySQL - Bulk Load



MySQL account settings are shared across the different MySQL Snaps, and the batch size setting affects the performance of some of them. We recommend changing the batch size setting (within the account details) to 100k or 200k for the MySQL Bulk Load Snap only. The ideal batch size for bulk load may vary with your environment settings, but this range should work well.

If you use other MySQL Snaps along with the MySQL Bulk Load Snap in the same pipeline, use a different account for each of these Snaps and increase the batch size setting for the MySQL Bulk Load Snap account as described above.

 

Snap type:

Write

 

Description:

This Snap executes a MySQL bulk load. It uses the LOAD DATA INFILE statement internally to perform the bulk load. The input data is first written to a file on the JCC node, then copied to the MySQL server's tmp directory, and finally loaded into the target MySQL table.

Ensure that there is sufficient space in both the JCC and MySQL tmp locations.
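For reference, the statement the Snap builds internally has roughly the following shape. This is only a sketch: the file path, table name (people), and field/line terminators shown here are illustrative placeholders, and the Snap manages the actual file and terminators itself.

    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';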


Table Creation

If the table does not exist when the Snap tries to perform the load and the Create table if not present property is selected, the table is created with the columns and data types required to hold the values in the first input document. If you would like the table to be created with the same schema as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The extra views on the Select and Bulk Load Snaps are used to pass metadata about the table, effectively allowing you to replicate a table from one database to another.
 

The table metadata document that is read in by the second input view contains a dump of the JDBC DatabaseMetaData class. The document can be manipulated to affect the CREATE TABLE statement that is generated by this Snap. For example, to rename the name column to full_name, you can use a Mapper (Data) Snap that sets the path $.columns.name.COLUMN_NAME to full_name (see the sketch after the field list below). The document contains the following fields:

  • columns - Contains the result of the getColumns() method with each column as a separate field in the object. Changing the COLUMN_NAME value will change the name of the column in the created table. Note that if you change a column name, you do not need to change the name of the field in the row input documents. The Snap will automatically translate from the original name to the new name. For example, when changing from name to full_name, the name field in the input document will be put into the "full_name" column. You can also drop a column by setting the COLUMN_NAME value to null or the empty string.  The other fields of interest in the column definition are:

    • TYPE_NAME - The type to use for the column.  If this type is not known to the database, the DATA_TYPE field will be used as a fallback.  If you want to explicitly set a type for a column, set the DATA_TYPE field.

    • _SL_PRECISION - Contains the result of the getPrecision() method.  This field is used along with the _SL_SCALE field for setting the precision and scale of a DECIMAL or NUMERIC field.

    • _SL_SCALE - Contains the result of the getScale() method.  This field is used along with the _SL_PRECISION field for setting the precision and scale of a DECIMAL or NUMERIC field.

  • primaryKeyColumns - Contains the result of the getPrimaryKeys() method with each column as a separate field in the object.

  • declaration - Contains the result of the getTables() method for this table. The values in this object are just informational at the moment.  The target table name is taken from the Snap property.

  • importedKeys - Contains the foreign key information from the getImportedKeys() method. The generated CREATE TABLE statement will include FOREIGN KEY constraints based on the contents of this object. Note that you will need to change the PKTABLE_NAME value if you changed the name of the referenced table when replicating it.

  • indexInfo - Contains the result of the getIndexInfo() method for this table with each index as a separated field in the object.  Any UNIQUE indexes in here will be included in the CREATE TABLE statement generated by this Snap.
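As a hedged illustration of how the metadata drives table creation, suppose the source table has columns id (an integer primary key) and name (a VARCHAR), and a Mapper (Data) Snap renames name to full_name as described above. The generated statement might then look similar to the following sketch; the table name, column types, and sizes are illustrative only, and the actual target table name is taken from the Snap's Table name property.

    CREATE TABLE people (
        id INTEGER NOT NULL,
        full_name VARCHAR(255),
        PRIMARY KEY (id)
    );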


  • The Snap will not automatically fix some errors encountered during table creation since they may require user intervention to resolve correctly. For example, if the source table contains a column with a type that does not have a direct mapping in the target database, the Snap will fail to execute. You will then need to add a Mapper (Data) Snap to change the metadata document to explicitly set the values needed to produce a valid CREATE TABLE statement.
  • The BLOB type is not supported by this Snap.  
Prerequisites:

[None]

 

Support and limitations:

 

Account: 

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See MySQL Account for information on setting up this type of account.

 


Views:
Input

This Snap has one document input view by default. 

A second view can be added for metadata for the table as a document so that the target absent table can be created in the MySQL database with a similar schema as the source table. This schema is usually from the second output of a database Select Snap. If the schema is from a different database, there is no guarantee that all the data types would be properly handled.

Output

This Snap has at most one document output view.
Error

This Snap has at most one document error view and produces zero or more documents in the view. 

 

Settings

Label

Required. 

The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema name

 

The database schema name. If it is not defined, the suggestion for the Table name property retrieves the table names of all schemas. The property is suggestible and retrieves the available database schemas during suggest values.
Example: SYS
Default value: [None] 

Table name

Required.

Table on which to execute the bulk load operation.
Example: people
Default value:  [None] 

Create table if not present


Whether the table should be automatically created if it is not already present. 

Default value: Not selected 

Partitions


Use this property to specify a list of one or more partitions and/or subpartitions. When it is used, any input document that cannot be inserted into one of the listed partitions or subpartitions is ignored.

Default value:  Not selected 
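In the underlying statement, this setting corresponds to the PARTITION clause of LOAD DATA; a rough sketch with a hypothetical file path, table name, and partition names p0 and p1:

    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people
    PARTITION (p0, p1);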

Columns


When no column list is provided, input documents are expected to contain a field for each table column. When a column list is provided, the Snap will load only the specified columns.
Default value: Not selected 
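When a column list is given, it corresponds to the column list of the LOAD DATA statement; a hedged sketch with hypothetical columns id and name:

    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people
    (id, name);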

Set Clause

Required.

Use this property to assign values to columns. For example, you can use "COLUMN1 = 1" to insert the integer 1 into column COLUMN1 for each input document. See the MySQL LOAD DATA documentation (https://dev.mysql.com/doc/refman/5.6/en/load-data.html) for more information.

Default value: [None] 
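The Set Clause maps to the SET clause of the LOAD DATA statement. Using the "COLUMN1 = 1" example above, the end of the generated statement would look roughly like this sketch (the file path and table name are illustrative):

    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people
    SET COLUMN1 = 1;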

On duplicates

Required.

Specifies the action to be performed when duplicate records are found, that is, rows that have the same value for a primary key or unique index as an existing row. If you choose REPLACE, input rows replace the existing rows. If you choose IGNORE, input rows are ignored.

Default value: IGNORE 
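In the underlying statement, this choice corresponds to the REPLACE or IGNORE keyword, roughly as sketched below (file path and table name are illustrative):

    -- On duplicates = REPLACE
    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    REPLACE INTO TABLE people;

    -- On duplicates = IGNORE
    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    IGNORE INTO TABLE people;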

Concurrency Option


Specifies how to handle the load process when other clients are reading from the table. 

Available concurrency options are:

  • LOW_PRIORITY - If this option is selected, the loading process is delayed until no other clients are reading from the table. This affects only storage engines that use only table-level locking (such as MyISAM, MEMORY, and MERGE).
  • CONCURRENT - If this option is used with a MyISAM table that satisfies the condition for concurrent inserts (that is, it contains no free blocks in the middle), other threads can retrieve data from the table while the MySQL Bulk Load Snap is executing. This slightly affects the performance of the MySQL Bulk Load Snap, even if no other thread is using the table at the same time.

Default value: None 
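The concurrency option maps to the optional keyword that follows LOAD DATA, as in this sketch (file path and table name are illustrative):

    -- Concurrency Option = LOW_PRIORITY
    LOAD DATA LOW_PRIORITY INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people;

    -- Concurrency Option = CONCURRENT
    LOAD DATA CONCURRENT INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people;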

Character Set


The MySQL server uses the character set indicated by the character_set_database system variable to interpret the information in the inputs. If the contents of the inputs use a character set that differs from the default, it is recommended that you specify the character set of the inputs with this property. A character set of binary specifies "no conversion".

It is not possible to load data that uses the ucs2, utf16, utf16le, or utf32 character set. 

 Default: [None] 
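This setting corresponds to the CHARACTER SET clause of the LOAD DATA statement; a sketch with utf8mb4 as an example value and an illustrative file path and table name:

    LOAD DATA INFILE '/tmp/bulk_load_input.csv'
    INTO TABLE people
    CHARACTER SET utf8mb4;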

Chunk size


Specifies the number of records to be loaded at a time.

 
Note: This property will override the "Batch size" property of the account

Default: 100000

Execute during preview


This property enables you to execute the Snap during the Save operation so that the output view can produce the preview data.

Default value:  Not selected

 

 

When invalid data is passed to the Snap, the Snap execution fails. The database administrator can set a global variable that either handles an invalid value by substituting a default (for example, if strings are passed for integer columns, the value 0 is used) or raises an error. For more information, see https://dev.mysql.com/doc/refman/5.6/en/load-data.html.
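The global variable is not named here; assuming it refers to MySQL's sql_mode, which controls whether invalid values are rejected with an error (strict mode) or coerced to defaults with warnings (non-strict mode), an administrator might adjust it roughly as in the sketch below. Consult the linked MySQL documentation before changing server settings.

    -- Non-strict mode: invalid values are coerced (e.g. 0 for integer columns) with warnings
    SET GLOBAL sql_mode = '';

    -- Strict mode: invalid values cause an error
    SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES';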


Example


 

The following example illustrates the usage of the MySQL Bulk Load Snap. In this pipeline, we map the data using the Mapper Snap, insert it into the target table using the MySQL Bulk Load Snap, read the data using the MySQL Select Snap, and send only the first document to the output view using the Head Snap.

 

The pipeline:

 

1. The Mapper Snap maps the data and writes the result to the target path, enron numeric_table.

 

 

2. The MySQL Bulk Load Snap loads the input documents into the table, enron numeric_table.

3. The MySQL Select Snap reads records from the table, enron numeric_table, using the where clause col1 = 100.
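A hedged sketch of the equivalent query, assuming the schema is enron and the table is numeric_table:

    SELECT * FROM enron.numeric_table WHERE col1 = 100;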

 

 

4. The Head Snap is set to 1, meaning it sends only the first document to the output view; hence only the first document from the MySQL Select Snap is passed on.

 

Successful execution of the pipeline gives the following output preview.

 


Snap History

4.7.0

  • Introduced the Snap in this release.

 
