This Snap provides the functionality of SCD (slowly changing dimension) Type 2 on a target Redshift table. The Snap executes one SQL lookup request per set of input documents to avoid making a request for every input record. Its output is typically a stream of documents for the Redshift - Bulk Upsert Snap, which updates or inserts rows into the target table. Therefore, this Snap must be connected to the Redshift - Bulk Upsert Snap to accomplish the complete SCD2 functionality.
ETL Transformations and Data Flow
This Snap enables the following ETL operations/flows:
Takes each incoming document from the upstream Snap and performs a lookup operation in the database, producing one or two documents in the output view.
If a record already exists in the database with the values provided in the input document, the Snap generates two output documents (one that historizes the existing row and one for the new current row); otherwise, it generates only one.
Feeds the output documents to the Redshift - Bulk Upsert Snap, which inserts or updates the rows in the destination table to preserve history, as sketched below.
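A minimal conceptual sketch of what the two output documents amount to once the Redshift - Bulk Upsert Snap applies them. The table name, column names (id, rate, current_row, start_date, end_date), and literal values are hypothetical, and the Snaps do not necessarily issue these exact statements:

    -- Document 1: historize the existing current row for the incoming natural key
    UPDATE target_table
    SET current_row = 0,
        end_date = GETDATE()
    WHERE id = 'A100'
      AND current_row = 1;

    -- Document 2: insert the new current row built from the input document
    INSERT INTO target_table (id, rate, current_row, start_date, end_date)
    VALUES ('A100', 42.50, 1, GETDATE(), NULL);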
Input & Output
Expected upstream Snaps: Any Snap, such as a Mapper or JSON Parser Snap, whose output contains a map of key-value entries.
Expected downstream Snaps: Any Snap, such as the Redshift Bulk Upsert or Structure Snap, that accepts documents containing data organized as key-value pairs.
Input: Each document in the input view should contain a data map of key-value entries. The input data must contain data in the Natural Key and Cause-historization fields.
Output: Each document in the output view contains a data map of key-value entries for all fields of a row in the target Redshift table.
Prerequisites: The Snap must be able to connect to the database from the desired Snaplex nodes.
Limitations and Known Issues:
Configurations:
Account & Access
This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Configuring Redshift Accounts for information on setting up this type of account.
Views:
Input
This Snap has exactly one input view and expects documents in the view.
Output
This Snap allows zero or one output views and produces documents in the view.
Error
This Snap has at most one error view and produces zero or more documents in the view.
Settings
Label
Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.
Schema name
The name of the database schema that contains the table whose data you want to update. Selecting a schema filters the Table name list to only show tables created in the selected schema. If you do not specify a schema in the Schema name field, the Table name field lists out all tables with the name you specify in all schemas in the database.
Default value: [None]
The values can be passed using the pipeline parameters but not the upstream parameter fields.
Table name
The name of the table that contains the data you want to update.
Default value: [None]
The values can be passed using the pipeline parameters but not the upstream parameter fields.
Natural key
Required. Names of the fields that identify a unique row in the target table. An identity column cannot be used as the Natural key, because a current row and its historical rows must share the same natural key value, which an identity column cannot provide.
The values can be passed using the pipeline parameters but not the upstream parameter fields.
Example: id (Each record has to have a unique value.)
Default value: [None]
Cause-historization fields
Required. Names of fields where any change in values causes the historization of an existing row and the insertion of a new 'current' row.
Example: gold bullion rate
Default value: [None]
SCD Fields
Required. Enter the field names you want to use as dimension fields, or select them from the suggestion list. The suggestion list is populated with the columns of the table specified in the Table name field above.
Example:
Current row: Field = current_row, Value = 1
Historical row: Field = current_row, Value = 0
Start date: Field = start_date, Value = $start_date
End date: Field = end_date, Value = $end_date
Default value:
Current row: Field = [None], Value = 1
Historical row: Field = [None], Value = 0
Start date: Field = [None], Value = Date.now()
End date: Field = [None], Value = Date.now()
Ignore unchanged rows
Specifies whether the Snap should skip writing unchanged rows from the source table to the target table. If you select this option, the Snap generates a document for the target only when a Cause-historization value in the source row has changed; otherwise, it does not generate a document for that row.
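With the hypothetical id, rate, and current_row columns used in the earlier sketch, the unchanged-row check amounts to comparing the incoming cause-historization value against the existing current row. The Snap performs this check through its lookup query, not necessarily with this exact SQL:

    -- If the existing current row already holds the incoming cause-historization value,
    -- the row is unchanged and no output document is generated for it
    SELECT id
    FROM target_table
    WHERE id = 'A100'       -- incoming natural key value
      AND current_row = 1   -- existing current row
      AND rate = 42.50;     -- incoming cause-historization value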
This example uses two pipelines:
Redshift_Iteration_Initialization_Example: Creates a DB table with initial data in Redshift and then iteratively upserts data into it.
Redshift_Select_Write: Retrieves the latest data from Redshift and writes it to a file, so you can see how the data in Redshift changes with every upsert iteration.
Import the pipelines into SnapLogic and configure the Redshift Snaps with your Redshift account.
Pipeline Properties
The Redshift_Iteration_Initialization_Example pipeline creates a Gas Dimensions (gas_dim_test) table and inserts new gas price values into it. To do so, it initializes the DB using one branch of the pipeline and then iteratively upserts new data using the other branches. To control which branch is triggered, two parameters are declared at the pipeline level:
INITIALIZE: Indicates whether the task being performed is one of initialization.
SCD2_ITERATION: Indicates whether the task being performed is an iterative upsert.
Later in the pipeline, these parameters are used in the Filter Snaps to control when each branch is executed.
Understanding the Sample Pipelines
This section describes the two pipelines used in this example and explains how they work together.
The Redshift_Iteration_Initialization_Example Pipeline
The Redshift_Iteration_Initialization_Example pipeline comprises three branches:
Create DB Table and Upload Initial Data: Creates the DB table and uploads three rows of data.
Upload Updated Data: Uploads new data associated with the rows created by the first branch above.
Truncate DB Table: Clears the contents of the DB table, resetting the pipeline for reuse from scratch.
Create DB Table and Upload Initial Data
This pipeline contains the following Snaps:
Each Snap is described below by its name, Snap type, purpose, settings, and comments.
Initialization Data
JSON Generator Snap
Contains the initial data for the Gas Dimensions DB table.
Contains the initial upload data (three rows of gas price data).
The data is organized into four columns:
premium: Lists the price of premium fuel.
regular: Lists the price of regular fuel.
station_id: Lists the identifier of the gas station from where the data was collected.
zipcode: Lists the zip code in which the gas station operates.
Note that, while gas rates may change, the data in the station_id and zipcode fields will not; these fields can therefore be used as key fields later in the pipeline.
Check Initialize
Filter Snap
Checks whether the DB table should be created.
The Filter expression evaluates the INITIALIZE pipeline parameter: if its value is greater than 0, the Snaps that follow are executed; otherwise, the branch exits.
Thus, when the gas_dim_test table needs to be created, set the INITIALIZE parameter to 1. Once the pipeline completes execution, change the value of INITIALIZE to 0. This ensures that the table is not re-created.
Map Attributes to Redshift
Mapper Snap
Maps attributes in the input data to column headers that must be created in the Redshift DB.
Contains the following mappings:
Date.now() → $start_date: Sets start_date to the current date-time, the time from which the uploaded data is active.
null → $end_date: Sets end_date to null.
true → $active: Sets active to true to indicate that the record is current.
$station_id → $station_id: Maps station_id in the input data to station_id in the Redshift DB table.
$zipcode → $zipcode: Maps zipcode in the input data to zipcode in the Redshift DB table.
$premium → $premium: Maps premium in the input data to premium in the Redshift DB table.
$regular → $regular: Maps regular in the input data to regular in the Redshift DB table.
Create Table in Redshift with Input Data
Redshift - Insert
Creates a DB table in Redshift.
Contains the following settings:
Schema Name: "public"
Table Name: "gas_dim_test"
This creates a table named gas_dim_test in the public schema in Redshift and populates it with the data from the Initialization Data Snap; a conceptual DDL sketch of the resulting table follows.
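The column types below are illustrative only, since the Snap derives the actual table definition from the input data and its settings; this is a sketch, not the exact DDL the Snap issues:

    CREATE TABLE IF NOT EXISTS public.gas_dim_test (
        station_id VARCHAR(16),    -- natural key
        zipcode    VARCHAR(10),    -- natural key
        premium    NUMERIC(6, 2),  -- cause-historization field
        regular    NUMERIC(6, 2),  -- cause-historization field
        active     BOOLEAN,        -- SCD current/historical flag
        start_date TIMESTAMP,      -- SCD start date
        end_date   TIMESTAMP       -- SCD end date
    );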
Upload Updated Data
This pipeline updates the data in the gas_dim_test DB table in three iterations. This pipeline contains the following Snaps:
Each Snap is described below by its name, Snap type, purpose, settings, and comments.
Iteration 1
SCD2 Data (Iteration 1)
JSON Generator
Contains the updated data for the gas_dim_test DB table.
Contains the gas price data used for Iteration 1.
Iteration 1 updates the data associated with station_id U01.
Check Iteration 1
Filter Snap
Checks whether the first data update iteration should be executed.
The Filter expression checks the pipeline parameters: if the value of the SCD2_ITERATION parameter is 1 and the value of the INITIALIZE parameter is 0, this branch is executed.
Check Not Initialize
Filter Snap
Checks that the INITIALIZE parameter in the pipeline properties is set to 0; the branch continues only when it is.
Iteration 2
SCD2 Data (Iteration 2)
JSON Generator
Contains the updated data for the gas_dim_test DB table.
Contains the gas price data used for Iteration 2.
Iteration 2 updates the data associated with station_ids U01 and V01.
Check Iteration 2
Filter Snap
Checks whether the second data update iteration should be executed.
The Filter expression checks the pipeline parameters: if the value of the SCD2_ITERATION parameter is 2 and the value of the INITIALIZE parameter is 0, this branch is executed.
Check Not Initialize
Filter Snap
Checks that the INITIALIZE parameter in the pipeline properties is set to 0; the branch continues only when it is.
Iteration 3
SCD2 Data (Iteration 3)
JSON Generator
Contains the updated data for the gas_dim_test DB table.
Contains the gas price data used for Iteration 3.
Iteration 3 updates the data associated with station_ids U01, V01, and C01.
Check Iteration 3
Filter Snap
Checks whether the third data update iteration should be executed.
The Filter expression checks the pipeline parameters: if the value of the SCD2_ITERATION parameter is 3 and the value of the INITIALIZE parameter is 0, this branch is executed.
Check Not Initialize
Filter Snap
Checks that the INITIALIZE parameter in the pipeline properties is set to 0; the branch continues only when it is.
Processing the Output of the Three Iterations
Union
Union Snap
Takes the output from whichever iteration was triggered upstream and passes it on to the downstream Snaps.
Prepare Upload Data
Redshift - SCD2
Prepares the data collected by the upstream Union Snap.
Contains settings that ensure that the correct DB table is updated, and that the appropriate fields are read the same way as they were created in the pipeline above.
See the screenshot below to review the settings in detail.
These settings instruct the Snap to read the public.gas_dim_test table as follows:
Natural Key: The values in the following fields will never change: station_id, zipcode.
Cause-historization fields: When the values in these fields change, create a new row containing the new values and mark the existing row as historical.
SCD fields
All rows containing current data must contain the value 'true' in the active field.
Similarly, all rows containing historical data must contain the value 'false' in the active field.
The start date of the current row is set to the current date-time.
The end date is assigned to the existing current row when updated data is received (see the query sketch below).
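With these settings, the current snapshot and the history can be read back with simple filters on the active flag. A minimal sketch, using the column names created earlier in this example:

    -- Current snapshot: one row per station
    SELECT station_id, zipcode, premium, regular, start_date
    FROM public.gas_dim_test
    WHERE active = true;

    -- Full history for one station, oldest first
    SELECT *
    FROM public.gas_dim_test
    WHERE station_id = 'U01'
    ORDER BY start_date;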
Upsert Updated Data
Redshift - Bulk Upsert
Upserts the data prepared by the Redshift - SCD2 Snap to the gas_dim_test DB table.
Contains the following settings:
Key columns: These are the columns used to check for existing entries in the target table:
station_id
zipcode
For the Bulk Upsert Snap to work, at least one Natural key must be included in the Key columns field; otherwise, the upsert fails.
Once this Snap completes execution, the updated data is written to the DB table: the active column in the row containing the old data is set to false, and a new row containing the updated data is added with the active column set to true.
Truncate DB Table
Once the simulation has been run, and you have seen how the Redshift - SCD2 and Redshift - Bulk Upsert Snaps work together to update and historize DB data as new data arrives, you may want to clear the table so that an empty table is available the next time the simulation is run.
This pipeline contains the following Snaps:
Check Iteration 4: Checks whether the fourth iteration should be executed. If the check succeeds, the downstream Snaps are executed; else, the pipeline exits.
Check Not Initialize: Checks whether the INITIALIZE parameter in the pipeline properties is set to '0'. If the check succeeds, the downstream Snaps are executed; else, the pipeline exits.
Redshift - Execute: Runs a SQL statement on the gas_dim_test table to clear all its contents (sketched below).
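The exact statement is configured in the Snap settings; assuming a simple truncate, it would look like this:

    TRUNCATE TABLE public.gas_dim_test;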
The Redshift_Select_Write Pipeline
Run this pipeline after each iteration to view the updated output.
This pipeline contains the following Snaps:
Redshift - Select: Reads the contents of the public.gas_dim_test DB table (the effective query is sketched after this list).
JSON Formatter: Formats the data received from the Redshift - Select Snap as a JSON file.
File Writer: Saves the JSON data as RS_File.json in the SLDB.
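The query issued by the Redshift - Select Snap is generated from its settings; conceptually it amounts to something like the following, returning both current and historical rows so that each iteration's changes are visible in RS_File.json:

    SELECT *
    FROM public.gas_dim_test;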
Changes Made to the public.gas_dim_test DB Table with Each Iteration
As the values entered in the SCD2_ITERATION and INITIALIZE pipeline parameters change, and as the pipelines are run iteratively, the public.gas_dim_test DB table is created and iteratively updated. This section lists the contents of the table at the end of each iteration.
Initialization
Triggered pipeline: Create DB Table and Upload Initial Data
What happens: The table is created, and data from the Initialization Data Snap is uploaded into it.
Output: 3 rows of data.
Iteration 1
Triggered pipeline: Upload Updated Data
What happens: New data corresponding to the U01 station ID is uploaded, the existing row of data is end-dated, and the active status of the existing data is changed to false. The active status of the new data is saved as true.
Output: 4 rows of data (3 current, 1 historical for U01).
Iteration 2
Triggered pipeline: Upload Updated Data
What happens: New data corresponding to the U01 and V01 station IDs is uploaded, the existing rows of data are end-dated, and the active status of the existing data is changed to false. The active status of the new data is saved as true.
Output: 6 rows of data (3 current, 3 historical: 2 historical rows for U01 and 1 for V01).
Iteration 3
Triggered pipeline: Upload Updated Data
What happens: New data corresponding to the U01, V01, and C01 station IDs is uploaded, the existing rows of data are end-dated, and the active status of the existing data is changed to false. The active status of the new data is saved as true.
Output: 9 rows of data (3 current, 6 historical: 3 historical rows for U01, 2 for V01, and 1 for C01).
Iteration 4
Triggered pipeline: Truncate DB Table
What happens: Clears the contents of the gas_dim_test DB table.
Downloads
Important steps to successfully reuse Pipelines
Download and import the pipeline into the SnapLogic application.
Configure Snap accounts as applicable.
Provide pipeline parameters as applicable.
Snap Pack History
Each entry below lists the release, the Snap Pack version, the release type (Stable or Latest), and the updates in that release.
November 2024
main29029
Stable
Updated and certified against the current SnapLogic Platform release.
August 2024
main27765
Stable
Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.
Upgraded the JDBC driver for the Redshift Snap Pack to v2.1.0.29 to address the SQL Injection vulnerabilities. Pipelines using the Redshift Snaps are not impacted after the driver upgrade, because the latest JDBC driver is fully backward compatible.
May 2024
437patches26634
Latest
Fixed an issue with the Redshift - Execute Snap that produced logs causing node crashes.
May 2024
main26341
Stable
Updated the Delete Condition (Truncates a Table if empty) field in the Redshift - Delete Snap to Delete condition (deletes all records from a table if left blank) to indicate that all entries will be deleted from the table when this field is blank, but no truncate operation is performed.
February 2024
main25112
Stable
Updated and certified against the current SnapLogic Platform release.
November 2023
main23721
Stable
Updated and certified against the current SnapLogic Platform release.
August 2023
main22460
Stable
The Redshift - Bulk Load and Redshift - Bulk Upsert Snaps now support expression enablers for the Additional options field, which enables you to use pipeline parameters.
The Redshift - Execute Snap now includes a new Query type field. When Auto is selected, the Snap tries to determine the query type automatically.
Behavior Change
Starting with version main22460, in the Redshift Select Snap:
When you create a table in Redshift, by default, all column names are displayed in lowercase in the output.
When you enter column names in uppercase in the Output Field property, the column names are displayed in lowercase in the output.
May 2023
main21015
Stable
Upgraded with the latest SnapLogic Platform release.
February 2023
432patches20500
Latest
The Redshift Account no longer fails when a URL is entered in the JDBC URL field and no driver is specified.
February 2023
432patches20166
Latest
Updated the description for S3 Security Token field as follows:
Specify the S3 security token part of AWS Security Token Service (STS) authentication. It is not required unless a particular S3 credential is configured to require it.
February 2023
432patches20101
Latest
The JDBC driver class for Redshift accounts is bundled with com.amazon.redshift.jdbc42.Driver as the default driver. This upgrade is backward-compatible. Existing pipelines will continue to work as expected, and new pipelines will use the Redshift driver as the default driver. SnapLogic will support providing fixes for issues you might encounter with accounts that use the PostgreSQL driver only until November 2023. After November 2023, SnapLogic will not provide support for issues with the PostgreSQL driver. Therefore, we recommend that you migrate from the PostgreSQL JDBC driver to the Redshift JDBC driver. Learn more about migrating from the PostgreSQL JDBC Driver to the Amazon Redshift Driver.
The Instance type option in the Redshift Bulk Load Snap enables you to use the Amazon EC2 R6a instance. This property appears only when the parallelism value is greater than one.
February 2023
432patches20035
Latest
The Redshift Snaps that earlier supported only Redshift Cluster now support Redshift Serverless as well. With Redshift Serverless, you can avoid setting up and managing data warehouse infrastructure when you run or scale analytics.
February 2023
main19844
Stable
Upgraded with the latest SnapLogic Platform release.
November 2022
main18944
Stable
The Redshift - Insert Snap now creates the target table only from the table metadata of the second input view when the following conditions are met:
The Create table if not present checkbox is selected.
The target table does not exist.
The table metadata is provided in the second input view.
The Redshift Account now validates correctly when the S3 bucket is blank.
August 2022
main17386
Stable
The Redshift accounts support:
Expression enabler to pass values from Pipeline parameters.
Security Token for S3 bucket external staging.
4.29 Patch
429patches16908
Latest
Enhanced the Redshift accounts with the following:
Expression enabler to pass values from Pipeline parameters.
Support for Security Token for S3 bucket external staging.
Fixed an issue with Redshift - Execute Snap where the Snap failed when the query contained comments with single or double quotes in it. Now the Pipeline executes without any error if the query contains a comment.
4.29 Patch
429patches15806
Latest
Fixed an issue with Redshift Account and Redshift SSL Account where the Redshift Snaps failed when the S3 Secret key or S3 Access-key ID contained special characters, such as +.
4.29
main15993
Stable
Upgraded with the latest SnapLogic Platform release.
4.28
main14627
Stable
Updated the label for Delete Condition to Delete Condition (Truncates Table if empty) in the Redshift - Delete Snap.
4.27 Patch
427patches12999
Latest
Fixed an issue with the Redshift Bulk Load Snap, where the temporary files in S3 were not deleted for aborted or interrupted Pipelines.
Upgraded with the latest SnapLogic Platform release.
4.25 Patch
425patches11008
Latest
Updated the AWS SDK from version 1.11.688 to 1.11.1010 in the Redshift Snap Pack and added a custom SnapLogic User Agent header value.
4.25
main9554
Stable
Upgraded with the latest SnapLogic Platform release.
4.24
main8556
Stable
Enhanced the Redshift - Select Snap to return only the selected output fields or columns in the output schema (second output view) using the Fetch Output Fields In Schema checkbox. If the Output Fields field is empty, all the columns are visible.
Fixed an issue with the Redshift Bulk Load Snap that failed while displaying a 'Failed to commit transaction' error.
4.22
main6403
Stable
Upgraded with the latest SnapLogic Platform release.
4.21 Patch
421patches6144
Latest
Fixed the following issues with DB Snaps:
The connection thread waits indefinitely causing the subsequent connection requests to become unresponsive.
Connection leaks occur during Pipeline execution.
4.21 Patch
MULTIPLE8841
Latest
Fixed the connection issue in Database Snaps by detecting and closing open connections after the Snap execution ends.
4.21
snapsmrc542
Stable
Upgraded with the latest SnapLogic Platform release.
4.20 Patch
db/redshift8774
Latest
Fixed the Redshift - Execute Snap that hangs if the SQL statement field contains only a comment ("-- comment").
4.20
snapsmrc535
Stable
Upgraded with the latest SnapLogic Platform release.
4.19 Patch
db/redshift8410
Latest
Fixed an issue with the Redshift - Update Snap wherein the Snap is unable to perform operations when:
An expression is used in the Update condition property.
Input data contains the character '?'.
4.19
snaprsmrc528
Stable
Upgraded with the latest SnapLogic Platform release.
4.18 Patch
db/redshift8043
Latest
Enhanced the Snap Pack to support AWS SDK 1.11.634 to fix the NullPointerException issue in the AWS SDK. This issue occurred in AWS-related Snaps that had HTTP or HTTPS proxy configured without a username and/or password.
4.18 Patch
MULTIPLE7884
Latest
Fixed an issue with the PostgreSQL grammar to better handle the single quote characters.
4.18 Patch
MULTIPLE7778
Latest
Updated the AWS SDK library version to default to Signature Version 4 Signing process for API requests across all regions.
4.18
snapsmrc523
Stable
Upgraded with the latest SnapLogic Platform release.
4.17 Patch
db/redshift7433
Latest
Fixed an issue with the Redshift Bulk Load Snap wherein the Snap fails to copy the entire data from source to the Redshift table without any statements being aborted.
4.17
ALL7402
Latest
Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.
4.17
snapsmrc515
Latest
Fixed an issue with the Redshift Execute Snap wherein the Snap would send the input document to the output view even if the Pass through field is not selected in the Snap configuration. With this fix, the Snap sends the input document to the output view, under the key original, only if you select the Pass through field.
Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview checkbox.
4.16 Patch
db/redshift6821
Latest
Fixed an issue with the Lookup Snap passing data simultaneously to output and error views when some values contained spaces at the end.
4.16
snapsmrc508
Stable
Upgraded with the latest SnapLogic Platform release.
4.15 Patch
db/redshift6286
Latest
Fixed an issue with the Bulk Upsert Snap wherein there was no output for any input schema.
4.15 Patch
db/redshift6334
Latest
Replaced the Max idle time and Idle connection test period properties with the Max life time and Idle Timeout properties, respectively, in the Account configuration. The new properties fix the connection release issues that were occurring due to default/restricted DB Account settings.
4.15
snapsmrc500
Stable
Upgraded with the latest SnapLogic Platform release.
4.14 Patch
db/redshift5786
Latest
Fixed an issue wherein the Redshift Upload snap logged the access and secret keys without encryption in the error logs. The keys are now masked.
4.14 Patch
db/redshift5667
Latest
Added "Validate input data" property in the Redshift Bulk Load Snap to enable users to troubleshoot input data schema.
Enhanced a check to identify whether the Provided Query in the Redshift Execute Snap is of read or write type.
4.14
snapsmrc490
Stable
Upgraded with the latest SnapLogic Platform release.
4.13 Patch
db/redshift/5303
Latest
Added a new property "Validate input data" in the Redshift Bulk Load Snap to help users troubleshoot the input data schema.
4.13 Patch
db/redshift5186
Latest
Fixed the Bulk Load and Unload Snaps wherein the KMS encryption type property was failing with a validation error.
4.13
snapsmrc486
Stable
Added KMS encryption support to these Snaps: Redshift Unload, Redshift Bulk Load, Redshift Bulk Upsert, and Redshift S3 Upsert.
4.12 Patch
db/redshift5027
Latest
Fixed an issue wherein the Redshift Snaps timeout and fail to retrieve a database connection.
4.12 Patch
MULTIPLE4967
Latest
Provided an interim fix for an issue with the Redshift accounts by re-registering the driver for each account validation. The final fix is being shipped in a separate build.
4.12 Patch
MULTIPLE4744
Latest
Added support for Redshift grammar to recognize window functions as being part of the query statement.
4.12
snapsmrc480
Stable
Upgraded with the latest SnapLogic Platform release.
4.11 Patch
db/redshift4589
Latest
Fixed an issue when creating a Redshift table via the second/metadata input view for the Redshift Bulk Load Snap.
The Upsert, Bulk Update, and Bulk Load Snaps no longer execute or produce output when no input view is provided.
4.10 Patch
redshift3936
Latest
Addressed an issue in the Redshift Execute Snap with a Select statement that hangs after extracting 13 million rows in the morning or 30 million rows in the evening.
4.10
snapsmrc414
Stable
Added the Auto commit property to the Select and Execute Snaps at the Snap level to support overriding of the Auto commit property at the Account level.
4.9.0 Patch
redshift3229
Latest
Addressed an issue in Redshift Multiple Execute where INSERT INTO SELECT statement generated a 'transaction, commit and rollback statements are not supported' exception.
4.9.0 Patch
redshift3073
Latest
Fixed an issue where the connection was not closed after a login failure; exposed autocommit for the 'Select into' statement in the PostgreSQL Execute and Redshift Execute Snaps.
4.9
snapsmrc405
Stable
Updated the Bulk Load, Bulk Upsert, and S3 Upsert Snaps with the Vacuum type and Vacuum threshold (%) properties (replacing the original Vacuum property).
Updated the S3 Upsert Snap with the IAM role and Server-side encryption properties to support data upsert across two VPCs.
Added support for the Redshift driver under the account setting for JDBC jars.
4.8.0 Patch
redshift2852
Latest
Addressed an issue with the Redshift Insert Snap failing with a 'casts smallint as varchar' error.
Addressed an issue with the Redshift Bulk Upsert Snap failing to drop the temp table.
4.8.0 Patch
redshift2799
Latest
Addressed an issue with Redshift Snaps where the default driver failed with a 'could not load JDBC driver for url file' error.
Added the JDBC Driver Class, JDBC jars, and JDBC Url properties to enable users to upload Redshift JDBC drivers that can override the default driver.
4.8.0 Patch
redshift2758
Latest
Potential fix for JDBC deadlock issue.
4.8.0 Patch
redshift2713
Latest
Fixed the Redshift Snap Pack rendering dates that are one hour off from the date returned by the database query for non-UTC Snaplexes.
4.8.0 Patch
redshift2697
Latest
Addressed an issue where some changes made in the platform patch MRC294 to improve performance caused Snaps in the listed Snap Packs to fail.
4.8
snapsmrc398
Stable
Redshift MultiExecute Snap introduced in this release.
Redshift Account: Info tab added to accounts.
Database accounts now invalidate connection pools if account properties are modified and login attempts fail.
4.7.0 Patch
redshift2434
Latest
Replaced newSingleThreadExecutor() with a fixed thread pool.
4.7.0 Patch
redshift2387
Latest
Addressed an issue in the Redshift Bulk Load Snap where the Load empty string setting was not working after release.
4.7.0 Patch
redshift2223
Latest
Auto-commit is now turned off automatically for SELECT statements.
4.7.0 Patch
redshift2201
Latest
Fixed an issue for database Select Snaps regarding Limit rows not supporting an empty string from a pipeline parameter.
4.7
snapsmrc382
Stable
Updated the Redshift Snap Account Settings with the IAM properties that include AWS account ID, IAM role name, and Region name.
Redshift Bulk Load Snap updated with the properties IAM Role & Server-side encryption.
Redshift Bulk Upsert Snap updated with the properties Load empty strings, IAM Role & Server-side encryption.
Updated the Redshift Upsert Snap with Load empty strings property.
Updated the Redshift Unload Snap with the property IAM role.
4.6
snapsmrc362
Stable
Redshift Execute Snap enhanced to fully support SQL statements with/without expressions & SQL bind variables.
Resolved an issue in Redshift Execute Snap that caused errors when executing a command Select current_schemas(true).
Resolved an issue in the Redshift Execute Snap that caused errors when a Select * from <table_name> into statement was executed.
Enhanced error reporting in the Redshift Bulk Load Snap to provide appropriate resolution messages.
4.5.1
redshift1621
Latest
Redshift S3 Upsert Snap introduced in this release.
Resolved an issue that occurred while inserting mismatched data type values in Redshift Insert Snap.
4.5
snapsmrc344
Stable
Resolved an issue in Redshift Bulk Upsert Snap that occurred when purging temp tables.
Resolved an issue in Redshift Upload/Upsert Snap that occurred when using IAM credentials in an EC2 instance with an S3 bucket.
4.4.1
NA
Latest
Resolved an issue with numeric precision when trying to use create table if not present in Redshift Insert Snap.
4.4
NA
Stable
Upgraded with the latest SnapLogic Platform release.
4.3.2
NA
Stable
Redshift Select Where clause property now has expression support.
Redshift Update Update condition property now has expression support.
Resolved an issue with Redshift Select Table metadata being empty if the table name casing differs from the suggested one.
4.3
NA
Stable
Table List Snap: A new option, Compute table graph, now lets you determine whether or not to generate dependents data into the output.
Redshift Unload Snap Parallel property now explicitly adds 'PARALLEL [OFF|FALSE]' to the UNLOAD query.
4.2
NA
Latest
Resolved an issue where Redshift SCD2 Snap historized the current row when no Cause-historization fields had changed.
Ignore empty result option added to the Execute and Select Snaps. When selected, no document is written to the output view when a SELECT statement produces no results.
Resolved an issue with the Redshift Select Snap returning a Date object for the DATE column data type instead of a LocalDate object.
Resolved an issue in Redshift SCD2 failing to close the database cursor connection.
Resolved an issue with the Redshift Lookup Snap not handling values with spaces in the prefix.
The updated driver is not distributed with the Redshift Snap Pack.
Output fields table property added to Select Snap.
Resolved an issue with the Redshift - Bulk Load Snap incorrectly writing to the wrong location on S3 and with Disable data compression not working.
Resolved an issue in the Execute and Select Snaps where the output document was the same as the input document if the query produces no data. When there is no result from the SELECT query, the input document is passed through to the output view as the value of the 'original' key. A new Pass through property was added and is selected by default.
NA
NA
NA
Redshift Account: Enhanced error messaging
Redshift SCD2: Bug fixes with compound keys
Redshift Lookup: Bug fixes on lookup failures; a Pass-through on no lookup match property was added to allow you to pass the input document through to the output view when there is no lookup match.