Redshift - SCD2

Overview

Snap Type: Read

Description:

This Snap provides the functionality of SCD (slowly changing dimension) Type 2 on a target Redshift table. The Snap executes one SQL lookup request per set of input documents to avoid making a request for every input record. Its output is typically a stream of documents for the Redshift - Bulk Upsert Snap, which updates or inserts rows into the target table. Therefore, this Snap must be connected to the Redshift - Bulk Upsert Snap to accomplish the complete SCD2 functionality.
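As an illustration, the batched lookup is conceptually similar to the following query. This is a minimal sketch with hypothetical table, column, and SCD field names; the actual query is generated by the Snap from your settings.

    -- Conceptual sketch of the batched SCD2 lookup (hypothetical names).
    -- One query covers a whole batch of input documents instead of one query per record.
    SELECT id, price, current_row, start_date, end_date
    FROM public.target_table
    WHERE current_row = 1                    -- compare only against current rows
      AND id IN (1001, 1002, 1003);          -- natural key values from the input batch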

ETL Transformations and Data Flow

This Snap enables the following ETL operations/flows:

  1. Take the incoming document from the upstream Snap and perform a lookup operation in the database, producing one or two documents in the output view.
  2. If a record already exists in the database with the values provided in the input document, generate two output documents; otherwise, generate only one.
  3. Feed the output documents to the Redshift Bulk Upsert Snap, which inserts them into the destination table to preserve history (see the SQL sketch after this list).
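Once the downstream Redshift - Bulk Upsert Snap writes these documents, the net effect on the target table is conceptually similar to the following SQL. This is an illustrative sketch with hypothetical names and values; the Snaps generate the actual statements.

    -- Net effect when a cause-historization value changes for natural key 1001
    -- (hypothetical names and values).
    UPDATE public.target_table
    SET current_row = 0,
        end_date    = GETDATE()              -- close out the existing current row
    WHERE id = 1001
      AND current_row = 1;

    INSERT INTO public.target_table (id, price, current_row, start_date, end_date)
    VALUES (1001, 4.59, 1, GETDATE(), NULL); -- new current row with the changed value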

Input & Output

  • Expected upstream Snaps: Any Snap, such as a Mapper or JSON Parser Snap, whose output contains a map of key-value entries.
  • Expected downstream Snaps: Any Snap, such as the Redshift Bulk Upsert or Structure Snap, that accepts documents containing data organized as key-value pairs.
  • Input: Each document in the input view should contain a map of key-value entries. The input data must include values for the Natural key and Cause-historization fields.
  • Output: Each document in the output view contains a data map of key-value entries for all fields of a row in the target Redshift table.


Prerequisites:
  • Redshift database installed
  • Able to connect to the database from the desired plex nodes
Limitations and Known Issues:

If you use the PostgreSQL driver (org.postgresql.Driver) with the Redshift Snap Pack, it could result in errors if the data type provided to the Snap does not match the data type in the Redshift table schema. Either use the Redshift driver (com.amazon.redshift.jdbc42.Driver) or use the correct data type in the input document to resolve these errors.

Configurations:

Account & Access

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Configuring Redshift Accounts for information on setting up this type of account.

Views:

  • Input: This Snap has exactly one input view and expects documents in the view.
  • Output: This Snap allows zero or one output view and produces documents in the view.
  • Error: This Snap has at most one error view and produces zero or more documents in the view.

Settings

Label

Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema name



The name of the database schema that contains the table whose data you want to update. Selecting a schema filters the Table name list to only show tables created in the selected schema. If you do not specify a schema in the Schema name field, the Table name field lists out all tables with the name you specify in all schemas in the database.

Default value: [None]

The values can be passed using the pipeline parameters but not the upstream parameter fields.

Table name
The name of the table that contains the data you want to update.

Default value: [None]

The values can be passed using the pipeline parameters but not the upstream parameter fields.

Natural key

Required. Names of fields that identify a unique row in the target table. An identity column cannot be used as the Natural key, because a current row and its historical rows share the same natural key value but cannot share the same identity value.

The values can be passed using the pipeline parameters but not the upstream parameter fields.


Example:  id (Each record has to have a unique value.)

Default value:  [None]

Cause-historization fields

Required. Names of fields where any change in values causes the historization of an existing row and the insertion of a new 'current' row.

Example: gold bullion rate

Default value:  [None]

SCD Fields

Required. Enter the field names you want to use as dimension fields, or select them from the suggestion list. The columns in the table specified in the Table name field above populate the suggestion list.

Example:

  Meaning          Field         Value
  Current row      current_row   1
  Historical row   current_row   0
  Start date       start_date    $start_date
  End date         end_date      $end_date

Default value:

  Meaning          Field         Value
  Current row      [None]        1
  Historical row   [None]        0
  Start date       [None]        Date.now()
  End date         [None]        Date.now()
 
Ignore unchanged rows

Specifies whether the Snap must ignore writing unchanged rows from the source table to the target table. If you enable this option, the Snap generates a corresponding document in the target only if a Cause-historization column in the source row has changed; otherwise, it does not generate a corresponding document.

Default value: Not selected

Snap Execution

Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting


Error: type "e" does not exist

Reason: This issue occurs due to incompatibilities with the recent upgrade in the Postgres JDBC drivers.

Resolution: Download the latest 4.1 Amazon Redshift driver, use it in your Redshift Account configuration, and retry running the Pipeline.

Example


Do it Yourself

Task Overview

This example enables us to record and store changing gas prices and see how they change over time. To do so, we need to:

  1. Create a table in the Redshift database and insert gas price data into it.
  2. Add new rows of gas price data as they come in and check to see how existing values are historized.

Download and Import Sample Pipelines

Before we start, you need to download, import, and configure the pipelines associated with this example:

  1. Download the Redshift_Iteration_Initialization_Example ZIP file. It contains two pipelines:
    • Redshift_Iteration_Initialization_Example: Creates a DB table with initial data in Redshift. Once this is done, it upserts data into the DB.
    • Redshift_Select_Write: Retrieves the latest data from Redshift and writes it to a file, enabling us to see how the data in Redshift changes with every upsert iteration.
  2. Import the pipelines into SnapLogic and configure the Redshift Snaps with your Redshift account.

Pipeline Properties

The Redshift_Iteration_Initialization_Example pipeline creates a Gas Dimensions (gas_dim_test) table and inserts new gas price values into it. To do so, it needs to initialize the DB using one branch of the pipeline, and then iteratively upsert new data using other branches. To control which branch of the pipeline gets triggered, two parameters must be declared at the pipeline level:

  • INITIALIZE: Indicates whether the task being performed is one of initialization.
  • SCD2_ITERATION: Indicates whether the task being performed is an iterative upsert.

Later, in the pipeline, we shall use these parameters to create trigger configurations to control when each child pipeline is executed.

Understanding the Sample Pipelines

This section describes the two pipelines used in this example and explains how they work together.

The Redshift_Iteration_Initialization_Example Pipeline

The Redshift_Iteration_Initialization_Example pipeline comprises three pipelines:

  1. Create DB Table and Upload Initial Data: Creates the DB table and uploads three rows of data.
  2. Upload Updated Data: Uploads new data associated with the rows created by Pipeline 1 above.
  3. Truncate DB Table: Clears the contents of the DB table, resetting the pipeline for reuse from scratch.
Create DB Table and Upload Initial Data

This pipeline contains the following Snaps:

Initialization Data (JSON Generator Snap): Contains the initial data for the Gas Dimensions DB table.

The data is organized into four columns:

  • premium: Lists the price of premium fuel.
  • regular: Lists the price of regular fuel.
  • station_id: Lists the identifier of the gas station from where the data was collected.
  • zipcode: Lists the zip code in which the gas station operates.

The important thing to note here is that, while gas rates may change, the data in the station_id and zipcode fields will not. Thus, these can be used as key fields later in the pipeline.

Check Initialize (Filter Snap): Checks whether the DB table can be created.

The Filter expression checks whether the value of the INITIALIZE pipeline parameter is greater than 0. If it is, the Snap executes the Snaps that follow it; otherwise, it exits.

Thus, when we need to create the gas_dim_test table, we set the INITIALIZE parameter to 1. Once the pipeline completes execution, we change the value of the INITIALIZE parameter to 0. This ensures that the table is not re-created.

Map Attributes to Redshift (Mapper Snap): Maps attributes in the input data to column headers that must be created in the Redshift DB.

Contains the following mappings:

  • Date.now(): $start_date: Instructs the DB to take the current date-time as the time from which the uploaded data is active.
  • null: $end_date: Sets end_date to 'null'.
  • true: $active: Maps the value 'true' to indicate that a record is active.
  • $station_id: $station_id: Maps station_id in the input data to station_id in the Redshift DB table.
  • $zipcode: $zipcode: Maps zipcode in the input data to zipcode in the Redshift DB table.
  • $premium: $premium: Maps premium in the input data to premium in the Redshift DB table.
  • $regular: $regular: Maps regular in the input data to regular in the Redshift DB table.

Create Table in Redshift with Input Data (Redshift - Insert Snap): Creates a DB table in Redshift.

Contains the following settings:

  • Schema Name: "public"
  • Table Name: "gas_dim_test"

This creates a table named "gas_dim_test" in the "public" schema in Redshift with the data in the Initialization Data Snap.
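The exact DDL is generated by the Snap from the input data, but the resulting table is conceptually similar to the following sketch (the column types shown are assumptions):

    -- Assumed shape of the table created by the Redshift - Insert Snap
    -- (types are illustrative; the Snap derives them from the input documents).
    CREATE TABLE IF NOT EXISTS public.gas_dim_test (
        station_id  VARCHAR(16),
        zipcode     VARCHAR(16),
        premium     NUMERIC(6,2),
        regular     NUMERIC(6,2),
        active      BOOLEAN,
        start_date  TIMESTAMP,
        end_date    TIMESTAMP
    );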

Upload Updated Data

This pipeline updates the data in the gas_dim_test DB table in three iterations. It contains the following Snaps:


Iteration 1

SCD2 Data (Iteration 1) (JSON Generator Snap): Contains the updated data for the gas_dim_test DB table. Iteration 1 updates the data associated with station_id U01.
Check Iteration 1 (Filter Snap): Checks whether the first data update iteration should be executed. The Filter expression checks the pipeline properties: if the value of the SCD2_ITERATION parameter is 1 and the value of the INITIALIZE parameter is 0, this branch of the pipeline is executed.

Check Not Initialize (Filter Snap): Checks whether the INITIALIZE parameter in the pipeline properties is set to '0'.

Iteration 2
SCD2 Data (Iteration 2) (JSON Generator Snap): Contains the updated data for the gas_dim_test DB table. Iteration 2 updates the data associated with station_ids U01 and V01.
Check Iteration 2 (Filter Snap): Checks whether the second data update iteration should be executed. The Filter expression checks the pipeline properties: if the value of the SCD2_ITERATION parameter is 2 and the value of the INITIALIZE parameter is 0, this branch of the pipeline is executed.

Check Not Initialize (Filter Snap): Checks whether the INITIALIZE parameter in the pipeline properties is set to '0'.

Iteration 3
SCD2 Data (Iteration 3) (JSON Generator Snap): Contains the updated data for the gas_dim_test DB table. Iteration 3 updates the data associated with station_ids U01, V01, and C01.
Check Iteration 3 (Filter Snap): Checks whether the third data update iteration should be executed. The Filter expression checks the pipeline properties: if the value of the SCD2_ITERATION parameter is 3 and the value of the INITIALIZE parameter is 0, this branch of the pipeline is executed.

Check Not Initialize (Filter Snap): Checks whether the INITIALIZE parameter in the pipeline properties is set to '0'.

Processing the Output of the Three Iterations
Union (Union Snap): Takes the output from whichever iteration was triggered upstream and passes it on to the downstream Snaps.

Prepare Upload Data (Redshift - SCD2 Snap): Prepares the data collected by the upstream Union Snap.

Contains settings that ensure that the correct DB table is updated and that the appropriate fields are read the same way as they were created in the pipeline above.

These settings instruct the Snap to read the public.gas_dim_test table as follows:

  • Natural Key: The values in the following fields will never change: station_id, zipcode.
  • Cause-historization fields: When these values change, create a new row containing the new values. Mark the existing value as old.
  • SCD fields
    • All rows containing current data must contain the value 'true' in the 'active' field.
    • Similarly, all rows containing historical data must contain the value 'false' in the 'active' field.
    • The start date of the current row must always be the current date-time.
    • Assign the end date to the current data when updated data is received.


Upsert Updated Data (Redshift - Bulk Upsert Snap): Upserts the data prepared by the Redshift - SCD2 Snap to the gas_dim_test DB table.

Contains the following settings:

Key columns: These are the columns used to check for existing entries in the target table:
  • station_id
  • zipcode

For the Bulk Upsert Snap to work, at least one Natural key must be included in the Key columns field; else the upsert will fail.

Once this Snap completes execution, the updated data is populated into the DB table: the 'active' column in the row containing the old data is updated to 'false', and a new row containing the updated data is added with the 'active' column set to 'true'.
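After an upsert, the rows for a station whose price changed look conceptually like the following. This is an illustrative query with made-up values, assuming the table shape sketched earlier:

    -- Illustrative check of current vs. historical rows for one station
    SELECT station_id, zipcode, premium, active, start_date, end_date
    FROM public.gas_dim_test
    WHERE station_id = 'U01';

    -- station_id | zipcode | premium | active | start_date          | end_date
    -- U01        | 94402   | 4.15    | false  | 2022-02-01 09:00:00 | 2022-02-15 09:00:00
    -- U01        | 94402   | 4.35    | true   | 2022-02-15 09:00:00 | (null)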

Truncate DB Table

Once the simulation has run, and you have seen how the Redshift - SCD2 and Redshift - Bulk Upsert Snaps work together to update and historize DB data as new data arrives, you may want to clear the table so that an empty table is available when the simulation is run again.

This pipeline contains the following Snaps:

  • Check Iteration 4: Checks whether the fourth iteration should be executed. If the check succeeds, the downstream Snaps are executed; else, the pipeline exits.
  • Check Not Initialize: Checks whether the INITIALIZE parameter in the pipeline properties is set to '0'. If the check succeeds, the downstream Snaps are executed; else, the pipeline exits.
  • Redshift - Execute: Runs a command on the gas_dim_test table to clear all its contents (a sketch of such a command is shown below).
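A minimal sketch of such a command, assuming a plain TRUNCATE is used (the pipeline's actual statement may differ):

    -- Clears all rows from the example table so the simulation can be rerun
    TRUNCATE TABLE public.gas_dim_test;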

The Redshift_Select_Write Pipeline

Run this pipeline after each iteration to view the updated output.

This pipeline contains the following Snaps:

  • Redshift - Select: Reads the contents of the public.gas_dim_test DB table (see the query sketch after this list).
  • JSON Formatter: Formats the data received from the Redshift - Select Snap as a JSON file.
  • File Writer: Saves the JSON data as RS_File.json in the SLDB.
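The Redshift - Select Snap's read is conceptually equivalent to a query like the following (an assumed sketch; the Snap builds its own query from its settings):

    -- Read back the full table contents after an iteration
    SELECT *
    FROM public.gas_dim_test
    ORDER BY station_id, start_date;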

Changes Made to the public.gas_dim_test DB Table with Each Iteration

As the values entered in the SCD2_ITERATION and INITIALIZE pipeline parameters change, and as the pipelines are run iteratively, the public.gas_dim_test DB table is created and iteratively updated. This section lists out the contents of the table at the end of each iteration.

Initialization (triggered pipeline: Create DB Table and Upload Initial Data)

The table is created, and data from the Initialization Data Snap is uploaded into it.

No. of rows of data: 3

Iteration 1 (triggered pipeline: Upload Updated Data)

New data corresponding to the U01 station ID is uploaded, the existing row of data is end-dated, and the active status of the existing data is changed to false. The active status of the new data is saved as true.

No. of rows of data: 4

  • 3 current
  • 1 historical (for U01)

Iteration 2 (triggered pipeline: Upload Updated Data)

New data corresponding to the U01 and V01 station IDs is uploaded, the existing rows of data are end-dated, and the active status of the existing data is changed to false. The active status of the new data is saved as true.

No. of rows of data: 6

  • 3 current
  • 3 historical
    • 2 historical rows of station ID U01 data
    • 1 historical row of station ID V01 data

Iteration 3 (triggered pipeline: Upload Updated Data)

New data corresponding to the U01, V01, and C01 station IDs is uploaded, the existing rows of data are end-dated, and the active status of the existing data is changed to false. The active status of the new data is saved as true.

No. of rows of data: 9

  • 3 current
  • 6 historical
    • 3 historical rows of station ID U01 data
    • 2 historical rows of station ID V01 data
    • 1 historical row of station ID C01 data

Iteration 4 (triggered pipeline: Truncate DB Table)

Clears the contents of the gas_dim_test DB table.
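A quick way to verify these counts after each iteration is to group the table by its 'active' flag (an assumed ad hoc query, not part of the downloadable pipelines):

    -- Count current vs. historical rows after an iteration
    SELECT active, COUNT(*) AS row_count
    FROM public.gas_dim_test
    GROUP BY active;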

Downloads

Important steps to successfully reuse Pipelines

  1. Download and import the Pipeline into SnapLogic.
  2. Configure Snap accounts as applicable.
  3. Provide Pipeline parameters as applicable.

  • Redshift_Iteration_Initialization_Example.slp (modified Feb 23, 2022 by Subhajit Sengupta)
  • Redshift_Select_Write.slp (modified Feb 23, 2022 by Subhajit Sengupta)

Snap Pack History

February 2024 (main25112, Stable): Updated and certified against the current SnapLogic Platform release.

November 2023 (main23721, Stable): Updated and certified against the current SnapLogic Platform release.

August 2023 (main22460, Stable):
  • The Redshift - Bulk Load and Redshift - Bulk Upsert Snaps now support expression enablers for the Additional options field, which enables you to use parameters.
  • The Redshift - Execute Snap now includes a new Query type field. When Auto is selected, the Snap tries to determine the query type automatically.

Behavior Change

Starting with version main22460, in the Redshift Select Snap:

  • When you create a table in Redshift, by default, all column names are displayed in lowercase in the output.
  • When you enter column names in uppercase in the Output Field property, the column names are displayed in lowercase in the output.

May 2023 (main21015, Stable): Upgraded with the latest SnapLogic Platform release.

February 2023 (432patches20500, Latest): The Redshift Account no longer fails when a URL is entered in the JDBC URL field and no driver is specified.

February 2023 (432patches20166, Latest): Updated the description for the S3 Security Token field as follows: Specify the S3 security token part of AWS Security Token Service (STS) authentication. It is not required unless a particular S3 credential is configured to require it.

February 2023 (432patches20101, Latest):
  • The JDBC driver class for Redshift accounts is bundled with com.amazon.redshift.jdbc42.Driver as the default driver. This upgrade is backward-compatible: existing pipelines will continue to work as expected, and new pipelines will use the Redshift driver as the default. SnapLogic will provide fixes for issues you might encounter with accounts that use the PostgreSQL driver only until November 2023. After November 2023, SnapLogic will not provide support for issues with the PostgreSQL driver; therefore, we recommend that you migrate from the PostgreSQL JDBC driver to the Redshift JDBC driver. Learn more about migrating from the PostgreSQL JDBC Driver to the Amazon Redshift Driver.
  • The Instance type option in the Redshift Bulk Load Snap enables you to use the Amazon EC2 R6a instance. This property appears only when the parallelism value is greater than one.

February 2023 (432patches20035, Latest): The Redshift Snaps that earlier supported only Redshift Cluster now also support Redshift Serverless. With Redshift Serverless, you can avoid setting up and managing data warehouse infrastructure when you run or scale analytics.

February 2023 (main19844, Stable): Upgraded with the latest SnapLogic Platform release.

November 2022 (main18944, Stable): The Redshift - Insert Snap now creates the target table only from the table metadata of the second input view when the following conditions are met:

  • The Create table if not present checkbox is selected.
  • The target table does not exist.
  • The table metadata is provided in the second input view.

August 2022 (430patches17189, Latest)

August 2022 (main17386, Stable): The Redshift accounts support:

  • Expression enabler to pass values from Pipeline parameters.
  • Security Token for S3 bucket external staging.

4.29 Patch (429patches16908, Latest):
  • Enhanced the Redshift accounts with the following:
    • Expression enabler to pass values from Pipeline parameters.
    • Support for Security Token for S3 bucket external staging.
  • Fixed an issue with the Redshift - Execute Snap where the Snap failed when the query contained comments with single or double quotes in it. Now the Pipeline executes without any error if the query contains a comment.

4.29 Patch (429patches15806, Latest): Fixed an issue with Redshift Account and Redshift SSL Account where the Redshift Snaps failed when the S3 Secret key or S3 Access-key ID contained special characters, such as +.

4.29 (main15993, Stable): Upgraded with the latest SnapLogic Platform release.

4.28 (main14627, Stable): Updated the label for Delete Condition to Delete Condition (Truncates Table if empty) in the Redshift Delete Snap.

4.27 Patch (427patches12999, Latest): Fixed an issue with the Redshift Bulk Load Snap where the temporary files in S3 were not deleted for aborted or interrupted Pipelines.

4.27 (main12833, Stable): Enhanced the Redshift - Execute Snap to invoke stored procedures.

4.26 (main11181, Stable): Upgraded with the latest SnapLogic Platform release.

4.25 Patch (425patches11008, Latest): Updated the AWS SDK from version 1.11.688 to 1.11.1010 in the Redshift Snap Pack and added a custom SnapLogic User Agent header value.

4.25 (main9554, Stable): Upgraded with the latest SnapLogic Platform release.

4.24 (main8556, Stable)

4.23 (main7430, Stable): Fixed an issue with the Redshift Bulk Load Snap that failed while displaying a Failed to commit transaction error.

4.22 (main6403, Stable): Upgraded with the latest SnapLogic Platform release.

4.21 Patch (421patches6144, Latest): Fixed the following issues with DB Snaps:
  • The connection thread waits indefinitely, causing subsequent connection requests to become unresponsive.
  • Connection leaks occur during Pipeline execution.

4.21 Patch (MULTIPLE8841, Latest): Fixed the connection issue in Database Snaps by detecting and closing open connections after the Snap execution ends.

4.21 (snapsmrc542, Stable): Upgraded with the latest SnapLogic Platform release.

4.20 Patch (db/redshift8774, Latest): Fixed the Redshift - Execute Snap that hangs if the SQL statement field contains only a comment ("-- comment").

4.20 (snapsmrc535, Stable): Upgraded with the latest SnapLogic Platform release.

4.19 Patch (db/redshift8410, Latest): Fixed an issue with the Redshift - Update Snap wherein the Snap is unable to perform operations when:
  • An expression is used in the Update condition property.
  • Input data contains the character '?'.

4.19 (snaprsmrc528, Stable): Upgraded with the latest SnapLogic Platform release.

4.18 Patch (db/redshift8043, Latest): Enhanced the Snap Pack to support AWS SDK 1.11.634 to fix the NullPointerException issue in the AWS SDK. This issue occurred in AWS-related Snaps that had an HTTP or HTTPS proxy configured without a username and/or password.

4.18 Patch (MULTIPLE7884, Latest): Fixed an issue with the PostgreSQL grammar to better handle single quote characters.

4.18 Patch (MULTIPLE7778, Latest): Updated the AWS SDK library version to default to the Signature Version 4 signing process for API requests across all regions.

4.18 (snapsmrc523, Stable): Upgraded with the latest SnapLogic Platform release.

4.17 Patch (db/redshift7433, Latest): Fixed an issue with the Redshift Bulk Load Snap wherein the Snap failed to copy the entire data from source to the Redshift table without any statements being aborted.

4.17 (ALL7402, Latest): Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.

4.17 (snapsmrc515, Latest):
  • Fixed an issue with the Redshift Execute Snap wherein the Snap would send the input document to the output view even if the Pass through field is not selected in the Snap configuration. With this fix, the Snap sends the input document to the output view, under the key original, only if you select the Pass through field.
  • Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview checkbox.

4.16 Patch (db/redshift6821, Latest): Fixed an issue with the Lookup Snap passing data simultaneously to output and error views when some values contained spaces at the end.

4.16 (snapsmrc508, Stable): Upgraded with the latest SnapLogic Platform release.

4.15 Patch (db/redshift6286, Latest): Fixed an issue with the Bulk Upsert Snap wherein there was no output for any input schema.

4.15 Patch (db/redshift6334, Latest): Replaced the Max idle time and Idle connection test period properties with Max life time and Idle Timeout properties, respectively, in the Account configuration. The new properties fix the connection release issues that were occurring due to default/restricted DB Account settings.

4.15 (snapsmrc500, Stable): Upgraded with the latest SnapLogic Platform release.

4.14 Patch (db/redshift5786, Latest): Fixed an issue wherein the Redshift Upload Snap logged the access and secret keys without encryption in the error logs. The keys are now masked.

4.14 Patch (db/redshift5667, Latest):
  • Added the "Validate input data" property in the Redshift Bulk Load Snap to enable users to troubleshoot the input data schema.
  • Enhanced a check to identify whether the provided query in the Redshift Execute Snap is of read or write type.

4.14 (snapsmrc490, Stable): Upgraded with the latest SnapLogic Platform release.

4.13 Patch (db/redshift/5303, Latest): Added a new property, "Validate input data", in the Redshift Bulk Load Snap to help users troubleshoot the input data schema.

4.13 Patch (db/redshift5186, Latest): Fixed the Bulk Load and Unload Snaps wherein the KMS encryption type property was failing with a validation error.

4.13 (snapsmrc486, Stable): Added KMS encryption support to these Snaps: Redshift Unload, Redshift Bulk Load, Redshift Bulk Upsert, and Redshift S3 Upsert.

4.12 Patch (db/redshift5027, Latest): Fixed an issue wherein the Redshift Snaps timed out and failed to retrieve a database connection.

4.12 Patch (MULTIPLE4967, Latest): Provided an interim fix for an issue with the Redshift accounts by re-registering the driver for each account validation. The final fix is being shipped in a separate build.

4.12 Patch (MULTIPLE4744, Latest): Added support for Redshift grammar to recognize window functions as being part of the query statement.

4.12 (snapsmrc480, Stable): Upgraded with the latest SnapLogic Platform release.

4.11 Patch (db/redshift4589, Latest): Fixed an issue when creating a Redshift table via the second/metadata input view for the Redshift Bulk Load Snap.

4.11 (snapsmrc465, Stable): Added SSL support to Redshift accounts (see Configuring Redshift Accounts).

4.10 Patch (db/redshift4115, Latest): The Upsert or BulkUpdate/BulkLoad Snaps no longer execute and produce output when no input view has been provided.

4.10 Patch (redshift3936, Latest): Addressed an issue in Redshift Execute with a Select that hangs after extracting 13 million in the morning or 30 million in the evening.

4.10 (snapsmrc414, Stable): Added the Auto commit property to the Select and Execute Snaps at the Snap level to support overriding the Auto commit property at the Account level.

4.9.0 Patch (redshift3229, Latest): Addressed an issue in Redshift Multiple Execute where an INSERT INTO SELECT statement generated a 'transaction, commit and rollback statements are not supported' exception.

4.9.0 Patch (redshift3073, Latest): Fixed an issue where the connection was not closed after a login failure; exposed autocommit for "Select into" statements in the PostgreSQL Execute and Redshift Execute Snaps.

4.9 (snapsmrc405, Stable):
  • Updated the Bulk Load, Bulk Upsert, and S3 Upsert Snaps with the properties Vacuum type and Vacuum threshold (%) (replacing the original Vacuum property).
  • Updated the S3 Upsert Snap with the properties IAM role and Server-side encryption to support data upsert across two VPCs.
  • Added support for the Redshift driver under the account setting for JDBC jars.

4.8.0 Patch (redshift2852, Latest):
  • Addressed an issue with Redshift Insert failing with 'casts smallint as varchar'.
  • Addressed an issue where Redshift Bulk Upsert failed to drop the temp table.
4.8.0 Patch (redshift2799, Latest):
  • Addressed an issue with Redshift Snaps with the default driver failing with a 'could not load JDBC driver for url file' error.
  • Added the properties JDBC Driver Class, JDBC jars, and JDBC Url to enable users to upload Redshift JDBC drivers that can override the default driver.

4.8.0 Patch (redshift2758, Latest): Potential fix for a JDBC deadlock issue.

4.8.0 Patch (redshift2713, Latest): Fixed the Redshift Snap Pack rendering dates that are one hour off from the date returned by the database query for non-UTC Snaplexes.

4.8.0 Patch (redshift2697, Latest): Addressed an issue where some changes made in the platform patch MRC294 to improve performance caused Snaps in the listed Snap Packs to fail.

4.8 (snapsmrc398, Stable):
  • Redshift MultiExecute Snap introduced in this release.
  • Redshift Account: Info tab added to accounts.
  • Database accounts now invalidate connection pools if account properties are modified and login attempts fail.

4.7.0 Patch (redshift2434, Latest): Replaced newSingleThreadExecutor() with a fixed thread pool.

4.7.0 Patch (redshift2387, Latest): Addressed an issue in the Redshift Bulk Load Snap where the Load Empty String setting was not working after release.

4.7.0 Patch (redshift2223, Latest): Auto-commit is turned off automatically for SELECT statements.

4.7.0 Patch (redshift2201, Latest): Fixed an issue for database Select Snaps regarding Limit rows not supporting an empty string from a pipeline parameter.

4.7 (snapsmrc382, Stable):
  • Updated the Redshift Snap Account Settings with the IAM properties that include AWS account ID, IAM role name, and Region name.
  • Updated the Redshift Bulk Load Snap with the properties IAM Role and Server-side encryption.
  • Updated the Redshift Bulk Upsert Snap with the properties Load empty strings, IAM Role, and Server-side encryption.
  • Updated the Redshift Upsert Snap with the Load empty strings property.
  • Updated the Redshift Unload Snap with the property IAM role.

4.6 (snapsmrc362, Stable):
  • Redshift Execute Snap enhanced to fully support SQL statements with/without expressions and SQL bind variables.
  • Resolved an issue in the Redshift Execute Snap that caused errors when executing a Select current_schemas(true) command.
  • Resolved an issue in the Redshift Execute Snap that caused errors when a Select * from <table_name> into statement was executed.
  • Enhanced error reporting in the Redshift Bulk Load Snap to provide appropriate resolution messages.

4.5.1 (redshift1621, Latest):
  • Redshift S3 Upsert Snap introduced in this release.
  • Resolved an issue that occurred while inserting mismatched data type values in the Redshift Insert Snap.

4.5 (snapsmrc344, Stable):
  • Resolved an issue in the Redshift Bulk Upsert Snap that occurred when purging temp tables.
  • Resolved an issue in the Redshift Upload/Upsert Snap that occurred when using IAM credentials in an EC2 instance with an S3 bucket.

4.4.1 (NA, Latest): Resolved an issue with numeric precision when trying to use Create table if not present in the Redshift Insert Snap.

4.4 (NA, Stable): Upgraded with the latest SnapLogic Platform release.

4.3.2 (NA, Stable):
  • The Redshift Select Where clause property now has expression support.
  • The Redshift Update Update condition property now has expression support.
  • Resolved an issue with Redshift Select Table metadata being empty if the casing differs from the suggested one for the table name.

4.3 (NA, Stable):
  • Table List Snap: A new option, Compute table graph, now lets you determine whether or not to generate dependents data into the output.
  • The Redshift Unload Snap's Parallel property now explicitly adds 'PARALLEL [OFF|FALSE]' to the UNLOAD query.

4.2 (NA, Latest):
  • Resolved an issue where the Redshift SCD2 Snap historized the current row when no Cause-historization fields had changed.
  • Ignore empty result added to the Execute and Select Snaps. When selected, the option does not write any document to the output view for SELECT statements.
  • Resolved an issue with the Redshift Select Snap returning a Date object for the DATE column data type instead of a LocalDate object.
  • Resolved an issue in Redshift SCD2 failing to close the database cursor connection.
  • Resolved an issue with the Redshift Lookup Snap not handling values with spaces in the prefix.
  • Updated driver not distributed with the Redshift Snap Pack.
  • Output fields table property added to the Select Snap.
  • Resolved an issue with the Redshift - Bulk Load Snap incorrectly writing to the wrong location on S3 and Disable data compression not working.
  • Resolved an issue in the Execute and Select Snaps where the output document was the same as the input document if the query produces no data. When there is no result from the SELECT query, the input document is passed through to the output view as the value of the 'original' key. A new Pass through property was added, with a default of true.

NA (NA):
  • Redshift Account: Enhanced error messaging.
  • Redshift SCD2: Bug fixes with compound keys.
  • Redshift Lookup: Bug fixes on lookup failures; added a Pass-through on no lookup match property to allow you to pass the input document through to the output view when there is no lookup match.