Redshift - Bulk Load

Snap type: Write
Description:

This Snap executes a Redshift bulk load. The input data is first written to a staging file on S3. Then the Redshift copy command is used to insert data into the target table.
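
Conceptually, the generated load is equivalent to the following SQL, sketched here with illustrative bucket, folder, table, and credential placeholders (the Snap builds the actual statement from the account and Snap settings):

COPY "public"."people"
FROM 's3://my-bucket/my-folder/staging-file.csv'
CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
CSV
GZIP;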

Table Creation

If the table does not exist when the Snap tries to do the load, and the Create table property is set, the table will be created with the columns and data types required to hold the values in the first input document. If you would like the table to be created with the same schema as a source table, you can connect the second output view of a Select Snap to the second input view of this Snap. The extra views in the Select and Bulk Load Snaps are used to pass metadata about the table, effectively allowing you to replicate a table from one database to another.
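
For illustration only, assuming a first input document such as {"id": 1, "name": "Ann"}, the generated DDL would resemble the following sketch (the actual column types are chosen by the Snap):

CREATE TABLE IF NOT EXISTS "public"."people" (
    "id" BIGINT,
    "name" VARCHAR
);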

ETL Transformations & Data Flow

This Snap executes a load operation with the given properties. The documents provided on the input view are inserted into the specified table in the specified database.

Input & Output:

  • Input: This Snap can have an upstream Snap that passes a document output view, such as a Structure or JSON Generator Snap.

  • Output: The Snap outputs one document specifying the status, with the count of records inserted into the table and the count of failed records. Any error occurring during the process is routed to the error view.

  • Expected Upstream Snaps: The columns of the selected table need to be mapped upstream using a Mapper Snap. The Mapper Snap will provide the target schema, which reflects the schema of the table that is selected for the Redshift Bulk Load Snap.
  • Expected Downstream Snaps: The Snap will output a single document for the entire bulk load operation which contains the count of records inserted into the targeted table as well as the count of failed records.
Prerequisites:

IAM Roles for Amazon EC2

The IAM_CREDENTIAL_FOR_S3 feature is used to access S3 files from an EC2 Groundplex without specifying an Access-key ID and Secret key in the AWS S3 account in the Snap. The IAM credential stored in the EC2 instance metadata is used to gain access rights to the S3 buckets. To enable this feature, set the following Global property (Key-Value parameter) and restart the JCC:
jcc.jvm_options = -DIAM_CREDENTIAL_FOR_S3=TRUE

This feature is supported in the EC2-type Groundplex only.

Limitations and Known Issues
  • Does not work in Ultra Tasks.
  • The Snap will not automatically fix some errors encountered during table creation since they may require user intervention to be resolved correctly. For example, if the source table contains a column with a type that does not have a direct mapping in the target database, the Snap will fail to execute. You will then need to add a Mapper (Data) Snap to change the metadata document to explicitly set the values needed to produce a valid CREATE TABLE statement.
  • If string values in the input document contain the '\0' character (string terminator), the Redshift COPY command, which is used by the Snap internally, fails to handle them properly. Therefore, the Snap skips the '\0' characters when it writes CSV data into the temporary S3 files before the COPY command is executed.

If you use the PostgreSQL driver (org.postgresql.Driver) with the Redshift Snap Pack, it could result in errors if the data type provided to the Snap does not match the data type in the Redshift table schema. Either use the Redshift driver (com.amazon.redshift.jdbc42.Driver) or use the correct data type in the input document to resolve these errors.

Account:

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. The S3 Bucket, S3 Access-key ID, and S3 Secret key properties are required for the Redshift - Bulk Load Snap. The S3 Folder property may be used for the staging file. If the S3 Folder property is left blank, the staging file will be stored in the bucket. See Configuring Redshift Accounts for information on setting up this type of account.

Configurations:

Redshift IAM Account Setup

  • If the EC2 plex (where your pipeline is running with an IAM role), Redshift cluster, and S3 bucket are in the same AWS account, then you must use the SnapLogic Redshift Account (regular IAM Account).
  • If the EC2 plex (where your pipeline is running with an IAM role) is in one account and the Redshift cluster and S3 bucket are in a different AWS account, you must use the SnapLogic Redshift Cross-Account IAM Role Account to run your pipelines successfully.

This applies only to the Redshift - Bulk Load, Redshift - Unload, and Redshift - S3 Upsert Snaps.


Views:

Input

This Snap has one document input view by default.

You can add a second view for table metadata as a document so that a table that is absent in the target database can be created with a schema similar to the source table. This schema is usually from the second output of a database Select Snap. If the schema is from a different database, the data types might not be properly handled.

Output: This Snap has at most one output view.
Error: This Snap has at most one error view and produces zero or more documents in the view. If you open an error view and expect all failed records to be routed to it, you must increase the Maximum error count property. If the number of failed records exceeds the Maximum error count, the pipeline execution fails with an exception, and the failed records are not routed to the error view.

Settings

Label*

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema name

Specify the database schema name. If it is not defined, the suggestion for the Table name property retrieves the table names of all schemas. The property is suggestible and retrieves available database schemas during suggest values.

The values can be passed using the pipeline parameters but not the upstream parameter.

Example: SYS
Default value:  None

Table name*

Specify the table on which to execute the bulk load operation.

You can pass the values using the Pipeline parameters but not the upstream parameter.

Example: people

Default value: None

Create table if not present

Select this checkbox to automatically create the target table if it does not exist.

  • If a second input view is configured for the Snap and it contains a document with schema (metadata) from the source table, the Snap creates the new (target) table using the same schema (metadata). However, if the schema comes from a different database, the Snap might fail with the Unable to create table: "<table_name>" error due to data type incompatibility.
  • In the absence of a second input view, the Snap creates a table based on the data types of the columns generated from the first row of the input document (first input view).

Due to implementation details, a newly created table is not visible to subsequent database Snaps during runtime validation. If you want to immediately use the newly updated data you must use a child Pipeline that is invoked through a Pipeline Execute Snap.

Default value: Not selected

Data Source

Specify the source from where the data should load. The available options are Input view and Staged files.

  • Input view: If you select this option, leave the Table Columns field empty.
  • Staged files: When you select this option, the following fields appear, as shown in the sketch after this list:
    • S3 Path: Provide the path to which the records are to be added.
    • Column Delimiter: Specify the delimiter used to separate column values in the staged files; it must be a single-character value. The default value is comma (,).
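
For example, a staged-files load with a pipe delimiter would translate to a COPY statement along these lines (bucket, table, and role names are illustrative):

COPY "public"."people"
FROM 's3://my-bucket/staged/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
DELIMITER '|';
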
Validate input data

Select this checkbox to enable the Snap to perform input data validation, verifying that all input documents are flat map data. If any value is a Map or a List object, the Snap writes an error to the error view, and no document is written to the output view. See the Troubleshooting section below for information on handling errors caused by invalid input data.

Default value:  Not selected

Recommendation

If this property is not selected, the Snap does not validate the structure of input documents; it converts all values to strings, writes the S3 CSV file, and executes the Redshift COPY command. If the COPY command finds errors in the input CSV data, it writes them to the error table, and the Snap routes these errors to the error view (if the error view is enabled). However, some errors reported by the COPY command may not be easy to understand. It is therefore advisable to enable input data validation during pipeline development and testing, as this may also help troubleshoot the pipeline.
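
As a further troubleshooting aid, the errors recorded by the COPY command can be inspected directly in Redshift through the stl_load_errors system table, for example:

SELECT starttime, filename, line_number, colname, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;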

Flat Map Data

Flat map data is a collection of key-value pairs in which every value is a scalar object, not a Map or a List.

Truncate data*

Select this checkbox to truncate existing data before performing the data load. Instead of doing a truncate and then an update with the Bulk Update Snap, a bulk load insert is faster.

Default value:  Not selected

Update statistics

Select this checkbox to update table statistics after data load by performing an Analyze operation on the table.

Default value:  Not selected

Accept invalid characters

Select this checkbox to accept invalid characters in the input. Invalid UTF-8 characters are replaced with a question mark when loading.

Default value:  Selected

Maximum error count*

Specify the maximum number of rows which can fail before the bulk load operation is stopped.

Default: 100
Example: 10 (if you want the Pipeline execution to continue as long as the number of failed records is less than 10)

Truncate columns

Select this checkbox to truncate column values that are larger than the maximum column length in the table.

Default value:  Selected

Disable data compression

Select this checkbox to disable compression of data being written to S3. Disabling compression will reduce CPU usage on the Snaplex machine, at the cost of increasing the size of data uploaded to S3.

Default value:  Not selected

Load empty strings

Select this checkbox to load empty strings in the input documents as empty strings into string-type fields. Otherwise, empty string values in the input documents are loaded as null. Null values are loaded as null regardless.

Default value: Not selected
Additional options

Specify additional options to be passed to the COPY command. For example, EMPTYASNULL indicates that Redshift should load empty fields as NULL. Empty fields occur when data contains two delimiters in succession with no characters between the delimiters. Learn more about the available options in the Amazon Redshift COPY documentation.

Default value: N/A
Example: ACCEPTANYDATE
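
The options are appended to the generated COPY statement. For example, with EMPTYASNULL and ACCEPTANYDATE specified, the statement would resemble this sketch (table and S3 names are illustrative):

COPY "public"."people"
FROM 's3://my-bucket/my-folder/staging-file.csv'
CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
CSV
EMPTYASNULL
ACCEPTANYDATE;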

Parallelism

Define the number of files to be created in S3 per execution. If set to 1, only one file is created in S3 and used for the COPY command. If set to n with n > 1, then n files are created as part of a manifest COPY command, allowing a concurrent copy as part of the Redshift load. The Snap itself does not stream concurrently to S3; it uses a round-robin mechanism on the incoming documents to populate the n files. The order of the records is not preserved during the load.

Default value: None
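
For n > 1, the n staging files are listed in a manifest file and loaded with a single manifest COPY, roughly as follows (names are illustrative):

COPY "public"."people"
FROM 's3://my-bucket/my-folder/staging.manifest'
CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
CSV
MANIFEST;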

Instance type

Appears when the parallelism value is greater than 1.

Select the type of instance from the following options:

  • Default: Uses the default instance.

  • High-performance S3 upload optimized: Uses an AWS high-performance EC2 instance, such as R6a.

When you select the High-performance S3 upload optimized option for the Instance type property, the Snap might increase the number of threads depending on the Parallelism property. In these cases, we recommend that you do not execute too many pipelines concurrently.

Default Value: Default

Example: High-performance S3 upload optimized

IAM Role

Select this checkbox if the bulk load or unload must be done using the IAM role. If you select IAM Role, ensure that you provide values for the AWS account ID, Role name, and Region name fields in the Redshift Account.

Server-side encryption

Select this checkbox to enable encryption for the data that is loaded. This defines the S3 encryption type to use when temporarily uploading the documents to S3 before you insert data into Redshift.  

Default value: Not selected

KMS Encryption type

Specify the type of Key Management Service (KMS) S3 encryption to be used on the data. The available encryption options are:

  • None - Files do not get encrypted using KMS encryption
  • Server-Side KMS Encryption - This option enables the output files on Amazon S3 to be encrypted using the Amazon S3 generated KMS key. 

Default value: None

If both the KMS and Client-side encryption types are selected, the Snap gives precedence to the server-side encryption and displays an error prompting you to select only one of the options.

KMS key

Appears when KMS Encryption type is set to Server-Side KMS Encryption.

Specify the KMS key to use for the S3 encryption. For more information about the KMS key, refer to AWS KMS Overview and Using Server Side Encryption.

Default value: None

Vacuum type

Select the vacuum type. A vacuum reclaims space and sorts rows in the specified table after the load operation. The available options are FULL, SORT ONLY, DELETE ONLY, and REINDEX. Refer to the AWS documentation on Vacuuming Tables for more information.

Auto-commit needs to be enabled for Vacuum.

Default value:  None

Vacuum threshold (%)

Specifies the threshold above which VACUUM skips the sort phase. If this property is left empty, Redshift sets it to 95% by default.

Default value: None

Snap execution

Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

In a scenario where Auto commit on the account is set to true and a downstream Snap depends on the data processed by an upstream Database Bulk Load Snap, use the Script Snap to add a delay so that the data becomes available.

For example, when performing create, insert, and delete functions sequentially in a pipeline, a Script Snap helps create a delay between the insert and delete functions; otherwise, the delete function may be triggered before the records are inserted into the table.

Redshift's Vacuum Command

In Redshift, when rows are DELETED or UPDATED against a table they are simply logically deleted (flagged for deletion), not physically removed from disk. This causes the rows to continue consuming disk space and those blocks are scanned when a query scans the table. This results in an increase in table storage space and degraded performance due to otherwise avoidable disk IO during scans. A vacuum recovers the space from deleted rows and restores the sort order. 
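
In SQL terms, the Vacuum type and Vacuum threshold (%) settings, together with Update statistics, correspond to statements like the following (table name and threshold are illustrative):

VACUUM FULL "public"."people" TO 95 PERCENT;
ANALYZE "public"."people";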

Troubleshooting

Error: type "e" does not exist

Reason: This issue occurs due to incompatibilities with the recent upgrade in the Postgres JDBC drivers.

Resolution: Download the latest 4.1 Amazon Redshift driver and use it in your Redshift Account configuration, then retry running the Pipeline.

Basic Use Case


The following Pipeline describes how the Snap functions as a standalone Snap in a Pipeline:

Use Case: Replicate a Database Schema in Redshift

MySQL Select to Redshift Bulk Load
In this example, a MySQL Select Snap is used to select data from the 'AV_Persons' table belonging to the 'enron' schema. The Mapper Snap maps this data to the target table's schema, and the data is then loaded into the "bulkload_demo" table in the "prasanna" schema:

  1. Select the data from the MySQL database.
  2. Use the Mapper Snap to map the data to the input schema associated with the Redshift Bulk Load database table.
  3. Load the input documents coming from the Mapper into an S3 file.
  4. Finally, invoke the COPY command on the created S3 file to insert the data into the destination table.

 

Typical Snap Configurations


Key configuration lies in how the SQL statements are passed to perform bulk load of the records. The statements can be passed:

Without Expression

The values are passed directly to the Snap.


With Expression

Using Pipeline parameters
The Table name is passed as a Pipeline parameter.

Basic Use Case #2: Use of twin inputs to define schema while creating tables in Redshift


The following Pipeline demonstrates how to use the second input view of the Redshift - Bulk Load Snap to define the schema for creating a non-existent table in the Redshift data store.

We use two JSON Generator Snaps: one for passing the table schema and another for passing the data rows. 

Input 1: Schema
Input 2: Data

We use the Redshift - Bulk Load Snap to combine these two inputs (schema and data rows) and create the table if it does not exist in the Redshift data store.

Redshift - Bulk Load Snap Settings
Views

Output

After creating the table and loading the data based on these inputs, the Snap displays the result in the output in terms of the number of records loaded into the specified table and the number of failed records. 

To see the records inserted into the new table, we can connect a Redshift - Execute Snap and run a select query on the newly created table. It retrieves the data inserted into each record. You might see more records in the output if the table already existed and the new rows were inserted into it.

Redshift - Execute Snap
Output

Download this Pipeline.

Advanced Use Case


The following Pipeline describes how lookup functionality is used in an enterprise environment. In this Pipeline, tweets pertaining to the keyword "#ThursdayThoughts" are extracted using a Twitter Query Snap and loaded into the table "twittersnaplogic" in the public schema using the Redshift Bulk Load Snap. When executed, all the records are loaded into the table.

The Pipeline performs the following ETL operations:

Extract: Twitter Query Snap selects 25 tweets (a value specified in the Maximum tweets property) pertaining to the Twitter query.

Transform: The Sort Snap, labeled Sort by Username in this pipeline, sorts the records based on the value of the field user.name.

Extract: The Salesforce SOQL Snap extracts records matching the user.name field value. The extracted fields are LastName, FirstName, Salutation, Name, and Email.

Load: The Redshift Bulk Load Snap loads the results into the table, twittersnaplogic, on the Redshift instance.

The configuration of the Twitter Query Snap is as shown below:


The Twitter Query Snap is configured to retrieve 25 records from the Account; a preview of the Snap's operation is shown below:


The Redshift Bulk Load Snap is configured as shown in the figure below:

Successful execution of the Snap is denoted by a count of the records loaded into the specified table. Note that the number of failed records is also shown, provided it does not exceed the limit specified in the Maximum error count property, in which case the pipeline would stop. The image below shows that all 25 records retrieved in the Twitter Query Snap have been loaded into the table successfully.


Advanced Use Case #2


The following Pipeline demonstrates how bulk load functionality is typically used in an enterprise environment. In this Pipeline, the File Reader Snap reads the data, which is then loaded into the table using the Redshift Bulk Load Snap. When executed, all the records are loaded into the table.

The Pipeline performs the following ETL operations:

Extract: The File Reader Snap reads the data from the file and passes it to the CSV Parser.

Transform: The CSV Parser parses the provided data, which is then passed to the Redshift Bulk Load Snap.

Load: The entire data set is loaded into the Redshift table, creating a new table with the provided schema.


Downloads

Important steps to successfully reuse Pipelines

  1. Download and import the pipeline into the SnapLogic application.
  2. Configure Snap accounts as applicable.
  3. Provide pipeline parameters as applicable.


File: Redshift_BulkLoad_TwinInput_FEP.slp
Modified: Nov 07, 2020 by Anand Vedam

File: usecase-redshift.slp
Modified: Feb 14, 2024 by Kalpana Malladi

Snap Pack History

Release | Snap Pack Version | Type | Updates

November 2024 | main29029 | Stable | Updated and certified against the current SnapLogic Platform release.

August 2024 | main27765 | Stable |
  • Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.
  • Upgraded the JDBC driver for the Redshift Snap Pack to v2.1.0.29 to address the SQL Injection vulnerabilities. Pipelines using the Redshift Snaps are not impacted after the driver upgrade, because the latest JDBC driver is fully backward compatible.

May 2024 | 437patches26634 | Latest | Fixed an issue with the Redshift - Execute Snap that produced logs causing node crashes.

May 2024 | main26341 | Stable | Updated the Delete Condition (Truncates a Table if empty) field in the Redshift - Delete Snap to Delete condition (deletes all records from a table if left blank) to indicate that all entries will be deleted from the table when this field is blank, but no truncate operation is performed.

February 2024 | main25112 | Stable | Updated and certified against the current SnapLogic Platform release.
November 2023 | main23721 | Stable | Updated and certified against the current SnapLogic Platform release.

August 2023 | main22460 | Stable |
  • The Redshift - Bulk Load and Redshift - Bulk Upsert Snaps now support expression enablers for the Additional options field that enables you to use parameters.
  • The Redshift - Execute Snap now includes a new Query type field. When Auto is selected, the Snap tries to determine the query type automatically.

Behavior Change

Starting with version main22460, in the Redshift Select Snap:

  • When you create a table in Redshift, by default, all column names are displayed in lowercase in the output.
  • When you enter column names in uppercase in the Output Field property, the column names are displayed in lowercase in the output.

May 2023 | main21015 | Stable | Upgraded with the latest SnapLogic Platform release.

February 2023 | 432patches20500 | Latest | The Redshift Account no longer fails when a URL is entered in the JDBC URL field and no driver is specified.

February 2023 | 432patches20166 | Latest | Updated the description for the S3 Security Token field as follows: Specify the S3 security token part of AWS Security Token Service (STS) authentication. It is not required unless a particular S3 credential is configured to require it.

February 2023 | 432patches20101 | Latest |
  • The JDBC driver class for Redshift accounts is bundled with the com.amazon.redshift.jdbc42.Driver as the default driver. This upgrade is backward-compatible: existing pipelines will continue to work as expected, and new pipelines will use the Redshift driver as the default driver. SnapLogic will provide fixes for issues you might encounter with accounts that use the PostgreSQL driver only until November 2023. After November 2023, SnapLogic will not provide support for issues with the PostgreSQL driver. Therefore, we recommend that you migrate from the PostgreSQL JDBC driver to the Redshift JDBC driver. Learn more about migrating from the PostgreSQL JDBC Driver to the Amazon Redshift Driver.
  • The Instance type option in the Redshift Bulk Load Snap enables you to use the Amazon EC2 R6a instance. This property appears only when the parallelism value is greater than one.

February 2023 | 432patches20035 | Latest | The Redshift Snaps that earlier supported only Redshift Cluster now support Redshift Serverless as well. With Redshift Serverless, you can avoid setting up and managing data warehouse infrastructure when you run or scale analytics.

February 2023 | main19844 | Stable | Upgraded with the latest SnapLogic Platform release.

November 2022 | main18944 | Stable | The Redshift - Insert Snap now creates the target table only from the table metadata of the second input view when the following conditions are met:
  • The Create table if not present checkbox is selected.
  • The target table does not exist.
  • The table metadata is provided in the second input view.

August 2022 | 430patches17189 | Latest

August 2022 | main17386 | Stable | The Redshift accounts support:
  • Expression enabler to pass values from Pipeline parameters.
  • Security Token for S3 bucket external staging.

4.29 Patch | 429patches16908 | Latest |
  • Enhanced the Redshift accounts with the following:
    • Expression enabler to pass values from Pipeline parameters.
    • Support for Security Token for S3 bucket external staging.
  • Fixed an issue with the Redshift - Execute Snap where the Snap failed when the query contained comments with single or double quotes in it. Now the Pipeline executes without any error if the query contains a comment.

4.29 Patch | 429patches15806 | Latest | Fixed an issue with Redshift Account and Redshift SSL Account where the Redshift Snaps failed when the S3 Secret key or S3 Access-key ID contained special characters, such as +.

4.29 | main15993 | Stable | Upgraded with the latest SnapLogic Platform release.

4.28 | main14627 | Stable | Updated the label for Delete Condition to Delete Condition (Truncates Table if empty) in the Redshift Delete Snap.

4.27 Patch | 427patches12999 | Latest | Fixed an issue with the Redshift Bulk Load Snap, where the temporary files in S3 were not deleted for aborted or interrupted Pipelines.

4.27 | main12833 | Stable | Enhanced the Redshift - Execute Snap to invoke stored procedures.

4.26 | main11181 | Stable | Upgraded with the latest SnapLogic Platform release.

4.25 Patch | 425patches11008 | Latest | Updated the AWS SDK from version 1.11.688 to 1.11.1010 in the Redshift Snap Pack and added a custom SnapLogic User Agent header value.

4.25 | main9554 | Stable | Upgraded with the latest SnapLogic Platform release.

4.24 | main8556 | Stable

4.23 | main7430 | Stable | Fixed an issue with the Redshift Bulk Load Snap that fails while displaying a Failed to commit transaction error.

4.22 | main6403 | Stable | Upgraded with the latest SnapLogic Platform release.

4.21 Patch | 421patches6144 | Latest | Fixed the following issues with DB Snaps:
  • The connection thread waits indefinitely causing the subsequent connection requests to become unresponsive.
  • Connection leaks occur during Pipeline execution.

4.21 Patch | MULTIPLE8841 | Latest | Fixed the connection issue in Database Snaps by detecting and closing open connections after the Snap execution ends.

4.21 | snapsmrc542 | Stable | Upgraded with the latest SnapLogic Platform release.

4.20 Patch | db/redshift8774 | Latest | Fixed the Redshift - Execute Snap that hangs if the SQL statement field contains only a comment ("-- comment").

4.20 | snapsmrc535 | Stable | Upgraded with the latest SnapLogic Platform release.

4.19 Patch | db/redshift8410 | Latest | Fixed an issue with the Redshift - Update Snap wherein the Snap is unable to perform operations when:
  • An expression is used in the Update condition property.
  • Input data contain the character '?'.

4.19 | snaprsmrc528 | Stable | Upgraded with the latest SnapLogic Platform release.

4.18 Patch | db/redshift8043 | Latest | Enhanced the Snap Pack to support AWS SDK 1.11.634 to fix the NullPointerException issue in the AWS SDK. This issue occurred in AWS-related Snaps that had HTTP or HTTPS proxy configured without a username and/or password.

4.18 Patch | MULTIPLE7884 | Latest | Fixed an issue with the PostgreSQL grammar to better handle the single quote characters.

4.18 Patch | MULTIPLE7778 | Latest | Updated the AWS SDK library version to default to Signature Version 4 Signing process for API requests across all regions.

4.18 | snapsmrc523 | Stable | Upgraded with the latest SnapLogic Platform release.

4.17 Patch | db/redshift7433 | Latest | Fixed an issue with the Redshift Bulk Load Snap wherein the Snap fails to copy the entire data from the source to the Redshift table without any statements being aborted.

4.17 | ALL7402 | Latest | Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.

4.17 | snapsmrc515 | Latest |
  • Fixed an issue with the Redshift Execute Snap wherein the Snap would send the input document to the output view even if the Pass through field is not selected in the Snap configuration. With this fix, the Snap sends the input document to the output view, under the key original, only if you select the Pass through field.
  • Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview checkbox.

4.16 Patch | db/redshift6821 | Latest | Fixed an issue with the Lookup Snap passing data simultaneously to output and error views when some values contained spaces at the end.

4.16 | snapsmrc508 | Stable | Upgraded with the latest SnapLogic Platform release.

4.15 Patch | db/redshift6286 | Latest | Fixed an issue with the Bulk Upsert Snap wherein there was no output for any input schema.

4.15 Patch | db/redshift6334 | Latest | Replaced Max idle time and Idle connection test period properties with Max life time and Idle Timeout properties, respectively, in the Account configuration. The new properties fix the connection release issues that were occurring due to default/restricted DB Account settings.

4.15 | snapsmrc500 | Stable | Upgraded with the latest SnapLogic Platform release.

4.14 Patch | db/redshift5786 | Latest | Fixed an issue wherein the Redshift Upload Snap logged the access and secret keys without encryption in the error logs. The keys are now masked.

4.14 Patch | db/redshift5667 | Latest |
  • Added the "Validate input data" property in the Redshift Bulk Load Snap to enable users to troubleshoot the input data schema.
  • Enhanced a check to identify whether the provided query in the Redshift Execute Snap is of read or write type.

4.14 | snapsmrc490 | Stable | Upgraded with the latest SnapLogic Platform release.

4.13 Patch | db/redshift5303 | Latest | Added a new property, "Validate input data", in the Redshift Bulk Load Snap to help users troubleshoot the input data schema.

4.13 Patch | db/redshift5186 | Latest | Fixed the Bulk Load and Unload Snaps wherein the KMS encryption type property is failing with a validation error.

4.13 | snapsmrc486 | Stable | Added KMS encryption support to these Snaps: Redshift Unload, Redshift Bulk Load, Redshift Bulk Upsert, and Redshift S3 Upsert.

4.12 Patch | db/redshift5027 | Latest | Fixed an issue wherein the Redshift Snaps time out and fail to retrieve a database connection.

4.12 Patch | MULTIPLE4967 | Latest | Provided an interim fix for an issue with the Redshift accounts by re-registering the driver for each account validation. The final fix is being shipped in a separate build.

4.12 Patch | MULTIPLE4744 | Latest | Added support for Redshift grammar to recognize window functions as being part of the query statement.

4.12 | snapsmrc480 | Stable | Upgraded with the latest SnapLogic Platform release.

4.11 Patch | db/redshift4589 | Latest | Fixed an issue when creating a Redshift table via the second/metadata input view for the Redshift Bulk Load Snap.

4.11 | snapsmrc465 | Stable | Added SSL support to the Configuring Redshift Accounts.

4.10 Patch | db/redshift4115 | Latest | The Upsert or BulkUpdate/BulkLoad does not execute and produce output when no input view has been provided.

4.10 Patch | redshift3936 | Latest | Addressed an issue in Redshift Execute with a Select that hangs after extracting 13 million records in the morning or 30 million in the evening.

4.10 | snapsmrc414 | Stable | Added the Auto commit property to the Select and Execute Snaps at the Snap level to support overriding of the Auto commit property at the Account level.

4.9.0 Patch | redshift3229 | Latest | Addressed an issue in Redshift Multiple Execute where an INSERT INTO SELECT statement generated a 'transaction, commit and rollback statements are not supported' exception.

4.9.0 Patch | redshift3073 | Latest | Fixed an issue regarding a connection not being closed after a login failure; exposed autocommit for the "Select into" statement in the PostgreSQL Execute Snap and Redshift Execute Snap.

4.9 | snapsmrc405 | Stable |
  • Updated the Bulk Load, Bulk Upsert, and S3 Upsert Snaps with the properties Vacuum type and Vacuum threshold (%) (replaced the original Vacuum property).
  • Updated the S3 Upsert Snap with the properties IAM role and Server-side encryption to support data upsert across two VPCs.
  • Added support for the Redshift driver under the account setting for JDBC jars.

4.8.0 Patch | redshift2852 | Latest |
  • Addressed an issue with Redshift Insert failing with 'casts smallint as varchar'.
  • Addressed an issue with Redshift Bulk Upsert failing to drop the temp table.

4.8.0 Patch | redshift2799 | Latest |
  • Addressed an issue with Redshift Snaps with the default driver failing with 'could not load JDBC driver for url file'.
  • Added the properties JDBC Driver Class, JDBC jars, and JDBC Url to enable users to upload Redshift JDBC drivers that can override the default driver.

4.8.0 Patch | redshift2758 | Latest | Potential fix for the JDBC deadlock issue.

4.8.0 Patch | redshift2713 | Latest | Fixed the Redshift Snap Pack rendering dates that are one hour off from the date returned by the database query for non-UTC Snaplexes.

4.8.0 Patch | redshift2697 | Latest | Addresses an issue where some changes made in the platform patch MRC294 to improve performance caused Snaps in the listed Snap Packs to fail.

4.8 | snapsmrc398 | Stable |
  • Redshift MultiExecute Snap introduced in this release.
  • Redshift Account: Info tab added to accounts.
  • Database accounts now invalidate connection pools if account properties are modified and login attempts fail.

4.7.0 Patch | redshift2434 | Latest | Replaced newSingleThreadExecutor() with a fixed thread pool.

4.7.0 Patch | redshift2387 | Latest | Addressed an issue in the Redshift Bulk Load Snap where the Load empty strings setting was not working after release.

4.7.0 Patch | redshift2223 | Latest | Auto-commit is turned off automatically for SELECT statements.

4.7.0 Patch | redshift2201 | Latest | Fixed an issue for database Select Snaps regarding Limit rows not supporting an empty string from a pipeline parameter.

4.7 | snapsmrc382 | Stable |
  • Updated the Redshift Snap Account Settings with the IAM properties that include AWS account ID, IAM role name, and Region name.
  • Updated the Redshift Bulk Load Snap with the properties IAM Role and Server-side encryption.
  • Updated the Redshift Bulk Upsert Snap with the properties Load empty strings, IAM Role, and Server-side encryption.
  • Updated the Redshift Upsert Snap with the Load empty strings property.
  • Updated the Redshift Unload Snap with the property IAM role.

4.6 | snapsmrc362 | Stable |
  • Redshift Execute Snap enhanced to fully support SQL statements with/without expressions and SQL bind variables.
  • Resolved an issue in the Redshift Execute Snap that caused errors when executing the command Select current_schemas(true).
  • Resolved an issue in the Redshift Execute Snap that caused errors when a Select * from <table_name> into statement was executed.
  • Enhanced error reporting in the Redshift Bulk Load Snap to provide appropriate resolution messages.

4.5.1 | redshift1621 | Latest |
  • Redshift S3 Upsert Snap introduced in this release.
  • Resolved an issue that occurred while inserting mismatched data type values in the Redshift Insert Snap.

4.5 | snapsmrc344 | Stable |
  • Resolved an issue in the Redshift Bulk Upsert Snap that occurred when purging temp tables.
  • Resolved an issue in the Redshift Upload/Upsert Snap that occurred when using IAM credentials in an EC2 instance with an S3 bucket.

4.4.1 | NA | Latest | Resolved an issue with numeric precision when trying to use create table if not present in the Redshift Insert Snap.

4.4 | NA | Stable | Upgraded with the latest SnapLogic Platform release.

4.3.2 | NA | Stable |
  • The Redshift Select Where clause property now has expression support.
  • The Redshift Update Update condition property now has expression support.
  • Resolved an issue with Redshift Select Table metadata being empty if the casing is different from the suggested one for the table name.

4.3 | NA | Stable |
  • Table List Snap: A new option, Compute table graph, now lets you determine whether or not to generate dependents data into the output.
  • The Redshift Unload Snap Parallel property now explicitly adds 'PARALLEL [OFF|FALSE]' to the UNLOAD query.

4.2 | NA | Latest |
  • Resolved an issue where the Redshift SCD2 Snap historized the current row when no Cause-historization fields had changed.
  • Ignore empty result added to the Execute and Select Snaps. The option will not write any document to the output view for select statements.
  • Resolved an issue with the Redshift Select Snap returning a Date object for the DATE column data type instead of a LocalDate object.
  • Resolved an issue in Redshift SCD2 failing to close the database cursor connection.
  • Resolved an issue with the Redshift Lookup Snap not handling values with spaces in the prefix.
  • Updated driver not distributed with the Redshift Snap Pack.
  • Output fields table property added to the Select Snap.
  • Resolved an issue with Redshift - Bulk Load incorrectly writing to the wrong location on S3 and Disable data compression not working.
  • Resolved an issue in the Execute and Select Snaps where the output document was the same as the input document if the query produces no data. When there is no result from the SELECT query, the input document is passed through to the output view as a value of the 'original' key. Added the new property Pass through, selected by default.

NA | NA | NA |
  • Redshift Account: Enhanced error messaging.
  • Redshift SCD2: Bug fixes with compound keys.
  • Redshift Lookup: Bug fixes on lookup failures; added the Pass-through on no lookup match property to allow you to pass the input document through to the output view when there is no lookup match.