Redshift - Bulk Upsert
Snap type: | Write
---|---
Description: | This Snap executes a Redshift bulk upsert: records are updated if they are present in the target table, or inserted if they are not. Incoming documents are first written to a staging file on S3. A temporary table is created on Redshift and loaded with the contents of the staging file using the COPY command. An update operation is then run to update existing records in the target table, and an insert operation is run to insert new records. (See the SQL sketch following the Settings table below.) Recommended JDBC JAR version: use RedshiftJDBC42-1.2.10.1009.jar as the JDBC JAR version in the Redshift Account (JDBC jars property) when using this Snap.
Prerequisites: | IAM Roles for Amazon EC2. The IAM_CREDENTIAL_FOR_S3 feature is used to access S3 files from an EC2 Groundplex without an Access-key ID and Secret key in the AWS S3 account of the Snap. The IAM credential stored in the EC2 metadata is used to gain access rights to the S3 buckets. To enable this feature, set the Global properties (Key-Value parameters) and restart the JCC (see the note after this table). This feature is supported in the EC2-type Groundplex only. Learn more.
Support and limitations: | Does not work in Ultra Pipelines. If you use the PostgreSQL driver (
Account: | This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. The S3 Bucket, S3 Access-key ID, and S3 Secret key properties are required for the Redshift - Bulk Upsert Snap. The S3 Folder property may be used for the staging file; if it is left blank, the staging file is stored in the bucket. See Configuring Redshift Accounts for information on setting up this type of account.
Views: |
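The Prerequisites above mention setting the Global properties (Key-Value parameters) before restarting the JCC. Based on the feature name given there, the entry is typically a Snaplex JVM option along these lines (an assumption; verify the exact key against the SnapLogic documentation for your release):

```
# Assumed Global properties entry enabling the IAM_CREDENTIAL_FOR_S3 feature
jcc.jvm_options = -DIAM_CREDENTIAL_FOR_S3=TRUE
```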
Settings

Setting | Description
---|---
Label | Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.
Schema name | The database schema name. Selecting a schema filters the Table name list to show only those tables within the selected schema. The values can be passed using the pipeline parameters but not the upstream parameter.
Table name | Required. The table on which to execute the bulk upsert operation. The values can be passed using the pipeline parameters but not the upstream parameter.
Key columns | Required. The columns to use to check for existing entries in the target table. Default value: None
Validate input key value | If selected, all duplicate and null key-column values in the input data are written to the error view and the bulk upsert operation stops. The detection of duplicates is performed after all data is copied to S3 and then to a temporary table in Redshift, and before the data is updated or inserted into the target table (see step 2 of the SQL sketch after this table). Any two input documents with the same values for all key columns are considered duplicates. If not selected, duplicates in the input data are inserted into the target table unless the same duplicate rows already exist in the target table, and null key-column values may cause unexpected results. Note that Redshift allows duplicate rows to be inserted regardless of primary or key columns. Default value: Not selected
Truncate data | Truncate existing data before performing the data load. With the Bulk Update Snap, instead of doing a truncate and then an update, a Bulk Insert would be faster. Default value: Not selected
Update statistics | Update table statistics after the data load by performing an ANALYZE operation on the table. Default value: Not selected
Accept invalid characters | Accept invalid characters in the input. Invalid UTF-8 characters are replaced with a question mark when loading. Default value: Selected
Maximum error count | Required. The maximum number of rows that can fail before the bulk load operation is stopped. Example: 10 (if you want the pipeline execution to continue as long as the number of failed records is less than 10)
Truncate columns | Truncate column values that are larger than the maximum column length in the table. Default value: Selected
Disable data compression | Disable compression of the data being written to S3. Disabling compression reduces CPU usage on the Snaplex machine, at the cost of increasing the size of the data uploaded to S3. Default value: Not selected
Load empty strings | If selected, empty string values in the input documents are loaded as empty strings into string-type fields. Otherwise, empty string values in the input documents are loaded as null. Null values are loaded as null regardless. Default value: Not selected
Additional options | Additional options to be passed to the COPY command. See http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html for the available options. The COPY command is used to load the staging S3 file into the temporary table. Example: a date format can be specified as DATEFORMAT 'MM-DD-YYYY' Default value: [None]
Parallelism | Defines how many files are created in S3 per execution. If set to 1, only one file is created in S3 and used for the COPY command. If set to n with n > 1, then n files are created as part of a manifest COPY command, allowing a concurrent copy as part of the Redshift load. The Snap itself does not stream concurrently to S3; it uses a round-robin mechanism on the incoming documents to populate the n files. The order of the records is not preserved during the load.
Instance type | Appears when the Parallelism value is greater than 1. Select the type of instance from the following options: Default or High-performance S3 upload optimized. Default value: Default Example: High-performance S3 upload optimized
IAM role | Enables you to perform the bulk load using an IAM role. If this option is selected, ensure that the AWS account ID, role name, and region name are provided in the account. Default value: Not selected
Server-side encryption | Defines the S3 encryption type to use when temporarily uploading the documents to S3 before the insert into Redshift. Default value: Not selected
KMS Encryption type | Specifies the type of KMS S3 encryption to be used on the data. The available encryption options are: None, Server-Side KMS Encryption, and Client-Side KMS Encryption. If both the KMS and Client-side encryption types are selected, the Snap gives precedence to the SSE and displays an error prompting the user to select only one of the options. Default value: None
KMS key | Conditional. This property applies only when the encryption type is set to Server-Side Encryption with KMS. This is the KMS key to use for the S3 encryption. For more information about the KMS key, refer to AWS KMS Overview and Using Server Side Encryption. Default value: [None]
Vacuum type | Reclaims space and sorts rows in the specified table after the upsert operation. The available options are FULL, SORT ONLY, DELETE ONLY, and REINDEX. Refer to the AWS documentation on Vacuuming Tables for more information. Auto-commit needs to be enabled for Vacuum. Default value: NONE
Vacuum threshold (%) | Specifies the threshold above which VACUUM skips the sort phase. If this property is left empty, Redshift sets it to 95% by default. Default value: [None]
Snap execution | Select one of the three modes in which the Snap executes: Validate & Execute (performs limited execution of the Snap and generates a data preview during pipeline validation, then performs full execution during pipeline runtime), Execute only (performs full execution of the Snap during pipeline execution without generating preview data), or Disabled (disables the Snap and all Snaps downstream from it).
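As referenced in the Description, the Snap's staged upsert can be pictured as the following Redshift SQL. This is a minimal, hypothetical sketch, not the Snap's actual generated statements: the table name (target_table), key column (id), column names, S3 path, and IAM role ARN are all placeholders, and the real Snap builds the equivalent statements from the Table name, Key columns, and Additional options settings.

```sql
-- 1. Create a temporary table shaped like the target and load the S3 staging
--    file into it with the COPY command. Anything in "Additional options"
--    (e.g. DATEFORMAT) is appended to the COPY statement.
CREATE TEMP TABLE stage (LIKE target_table);

COPY stage
FROM 's3://my-bucket/my-folder/staging-file.gz'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
GZIP
DATEFORMAT 'MM-DD-YYYY';

-- 2. (Validate input key value) Detect duplicate key values in the staged
--    data; rows found here would be routed to the error view and the
--    upsert stopped.
SELECT id, COUNT(*) AS cnt
FROM stage
GROUP BY id
HAVING COUNT(*) > 1;

-- 3. Update rows that already exist in the target table, matched on the
--    Key columns.
UPDATE target_table
SET col_a = stage.col_a,
    col_b = stage.col_b
FROM stage
WHERE target_table.id = stage.id;

-- 4. Insert rows that do not yet exist in the target table.
INSERT INTO target_table
SELECT stage.*
FROM stage
LEFT JOIN target_table t ON stage.id = t.id
WHERE t.id IS NULL;
```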
Redshift's Vacuum Command
In Redshift, when rows are DELETED or UPDATED against a table they are simply logically deleted (flagged for deletion), not physically removed from disk. This causes the rows to continue consuming disk space and those blocks are scanned when a query scans the table. This results in an increase in table storage space and degraded performance due to otherwise avoidable disk IO during scans. A vacuum recovers the space from deleted rows and restores the sort order.
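For illustration, the statements corresponding to the Vacuum type and Vacuum threshold (%) settings would look like the following (a sketch with a hypothetical table name):

```sql
-- FULL reclaims space from deleted rows and re-sorts the table.
-- TO 95 PERCENT mirrors the Vacuum threshold (%) setting: the sort
-- phase is skipped if the table is already at least 95% sorted.
VACUUM FULL target_table TO 95 PERCENT;

-- Variants matching the other Vacuum type options:
VACUUM SORT ONLY target_table;
VACUUM DELETE ONLY target_table;
VACUUM REINDEX target_table;
```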
Troubleshooting
Error | Reason | Resolution
---|---|---
| This issue occurs due to incompatibilities with the recent upgrade in the Postgres JDBC drivers. | Download the latest 4.1 Amazon Redshift driver here, use this driver in your Redshift Account configuration, and retry running the Pipeline.
Example
The following example illustrates the usage of the Redshift Upsert Snap to update a record.
In the pipeline execution:
Mapper (Data) Snap maps the record details to the input fields of Redshift Upsert Snap:
Redshift Upsert Snap updates the record using the Accountnbackup table object:
After the pipeline executes, the Redshift Upsert Snap shows the following data preview: