
Snap type:

Read

Description:

This Snap allows you to fetch data from a database by providing a table name and configuring the connection. The Snap produces the records from the database on its output view which can then be processed by a downstream Snap. 

ETL Transformations & Data Flow

This Snap enables the following ETL operations:

Fetch data from an existing Redshift table using the user configuration, and feed it to downstream Snaps.

JSON paths can be used in a query and will have values from an incoming document substituted into the query. However, documents missing values for a given JSON path will be written to the Snap's error view. After a query is executed, the query's results are merged into the incoming document overwriting any existing keys' values. The original document is output if there are no results from the query.
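
For illustration, here is a minimal sketch of this substitution, assuming a hypothetical people table and a where clause that references fields of the incoming document:

Code Block
-- Incoming document (hypothetical): { "region": "west", "min_age": 21 }
-- Where clause configured on the Snap: region = $region AND age >= $min_age
-- Query issued by the Snap:
SELECT * FROM "public"."people" WHERE region = 'west' AND age >= 21

The rows returned by this query are then merged into the incoming document as described above.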


Queries produced by the Snap have the following format:

Code Block
SELECT * FROM [table] WHERE [where clause] ORDER BY [ordering] LIMIT [limit] OFFSET [offset]


If more powerful functionality is desired, then the Execute Snap should be used.

Input & Output

  • Input: This Snap can have an upstream Snap that passes documents from its output view, such as a Mapper or JSON Generator Snap.

  • Output: A document or a set of documents that contains the result of the query for each input document. If no input document is provided, the query is executed only once.
Prerequisites:

[None]

Limitations and Known Issues:

Works in Ultra Task Pipelines.

Configurations:


Account and Access

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Redshift Account for information on setting up this type of account.

Views

Input: This Snap allows zero or one input view. If the input view is defined, the where clause can substitute incoming values for a given expression (for example, to use it as a lookup).
Output: This Snap has one output view by default and produces one document for each row in the table. A second view can be added to dump out the metadata for the table as a document. The metadata document can then be fed into the second input view of the Redshift Insert or Bulk Load Snap so that the table is created in Redshift with a schema similar to that of the source table. See the Redshift Snaps for more information.
Error

This Snap has at most one error view and produces zero or more documents in the view.
 


Troubleshooting: [None]

Settings

Label


Required The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Schema name


The database schema name. Selecting a schema filters the Table name list to show only the tables within the selected schema. This property is suggestible and retrieves the available database schemas when you use the suggest feature.

Table name


Required The name of the table to execute the select query on.
Example: people

Where clause 

Where clause of the select statement. This supports document value substitution (such as $person.firstname being substituted with the value found in the incoming document at that path). However, you cannot use value substitution after the "IS" or "is" keyword. See the examples below:

Examples

  • email = 'you@example.com' or email = $email
  • email IS NOT NULL
  • email IS NULL
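
For instance, with the where clause email = $email and an incoming document containing an email value, the Snap would issue a query similar to the following sketch (table name hypothetical):

Code Block
-- Incoming document (hypothetical): { "email": "you@example.com" }
SELECT * FROM "public"."people" WHERE email = 'you@example.com'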
 

Order by: Column names 

Enter the columns in the order in which you want the results ordered. The default database sort order will be used.

Example

name

email  
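
With the two columns above, the generated query ends with an ORDER BY clause similar to the following sketch (table name hypothetical):

Code Block
SELECT * FROM "public"."people" ORDER BY name, email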

Limit offset

Starting row for the query.
Example: 0
Default value: [None] 

Limit rows 

Number of rows to return from the query.
Example: 10

Default value: [None] 
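
Taken together, Limit offset and Limit rows map to the OFFSET and LIMIT portions of the generated query. For example, with Limit offset set to 0 and Limit rows set to 10, the query would resemble the following sketch (table name hypothetical):

Code Block
SELECT * FROM "public"."people" LIMIT 10 OFFSET 0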

Output fields

Enter or select the output field names for the SQL SELECT statement. To select all fields, leave this property at its default.

Example: email, address, first, last, etc.

Default value: [None]
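
When Output fields are provided, they replace the * in the generated SELECT statement. For example, with the fields above, the query would resemble the following sketch (table name hypothetical):

Code Block
SELECT email, address, first, last FROM "public"."people"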

Fetch Output Fields In Schema


Default value: Not selected

Pass through


If checked, the input document will be passed through to the output view under the key 'original'.

Default value: Selected

Ignore empty result


If selected, no document will be written to the output view when a SELECT operation does not produce any result. If this property is not selected and the Pass through property is selected, the input document will be passed through to the output view.

Default value: Not selected

Auto commit

Select one of the options for this property to override the state of the Auto commit property on the account. The Auto commit property at the Snap level has three values: True, False, and Use account setting. The expected functionality for these modes is:

  •  True - The Snap will execute with auto-commit enabled regardless of the value set for Auto commit in the Account used by the Snap.
  •  False - The Snap will execute with auto-commit disabled regardless of the value set for Auto commit in the Account used by the Snap.
  • Use account setting - The Snap will execute with the Auto commit property value inherited from the Account used by the Snap.

Default value: False

Number of retries

Specifies the maximum number of attempts to be made to receive a response. The request is terminated if the attempts do not result in a response.

Example: 3

Default value: 0


Retry interval (seconds)

Specifies the time interval between two successive retry requests. A retry happens only when the previous attempt resulted in an exception. 

Example:  10

Default value: 1

Match data types

Conditional. This property applies only when the Output fields property is provided with any field value(s).

If this property is selected, the Snap tries to match the output data types to those returned when the Output fields property is empty (SELECT * FROM ...). The output preview is then in the same format as when SELECT * FROM is implied, with all the contents of the table displayed.

Default value: Not selected

Staging mode

Snap Execution


Note

For the 'Suggest' in the Order by columns and the Output fields properties, the value of the Table name property should be an actual table name instead of an expression. If it is an expression, it will display an error message "Could not evaluate accessor:  ..." when the 'Suggest' button is clicked. This is because, at the time the "Suggest" button is clicked, the input document is not available for the Snap to evaluate the expression in the Table name property. The input document is available to the Snap only during the preview or execution time.



Basic Use Case


The following is an example that uses several of the Redshift Select properties to select two rows of data from the table account in the schema public with a where clause.

Typical Snap Configurations


The key configurations for the Snap are:

  • Without Expression
  • With Expression

The following examples use the sample data in demo_guest.csv (available in Downloads below). Use the Redshift Bulk Load Snap to load the file into the Redshift instance, or create a table using:

Code Block
CREATE TABLE "public"."demo_guest" (
    id      varchar(20),
    name    varchar(20),
    inst_dt timestamp
)
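
If you create the table manually instead of bulk loading demo_guest.csv, you can seed it with a few rows so that the queries below return data. The values here are only illustrative and are not taken from the sample file:

Code Block
INSERT INTO "public"."demo_guest" VALUES ('1001', 'Tom', '2017-06-01 10:00:00');
INSERT INTO "public"."demo_guest" VALUES ('1002', 'Ann', '2017-06-02 11:30:00');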

The pipeline can be found here: redshift-select_2017_06_12.slp (available in Downloads below)

Without Expressions: Select a table with a WHERE condition and show the results in order. The configuration below is equivalent to the query:

Code Block
SELECT * FROM "public"."demo_guest" WHERE "name" = 'Tom' ORDER BY "inst_dt";


  • With Expressions:
    • Query statement from the upstream

Query a table according to the input document. A Mapper Snap connects upstream and provides the input document needed by the Redshift Select Snap (see the sketch below).

The Mapper configuration:

    • Redshift Select Snap configuration:
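
As a sketch of one possible configuration (the field name and value are hypothetical), the Mapper supplies a document and an expression-enabled Where clause builds the query from it:

Code Block
-- Mapper output document (hypothetical): { "guest_name": "Tom" }
-- Where clause (expression enabled): "name = '" + $guest_name + "'"
-- Query issued by the Snap:
SELECT * FROM "public"."demo_guest" WHERE name = 'Tom'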

    • Pipeline Parameter

 Query a table according to the pipeline parameter. The following example pipeline uses the 'id' defined in the pipeline parameters to query the table.
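
A possible configuration, assuming a pipeline parameter named id with the value 1001, is sketched below:

Code Block
-- Where clause (expression enabled): "id = '" + _id + "'"
-- Query issued by the Snap:
SELECT * FROM "public"."demo_guest" WHERE id = '1001'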

 

Advanced Use Case


The following describes a pipeline with broader business logic involving multiple ETL transformations. The use case is moving data from on-premises systems to the cloud. The sample pipeline is shown below.

In this example, the goal is to move all account data from on-premises instances to the Redshift CDW so that users can run analytics on top of it. Account details stored in MySQL are pushed by a producer to a particular topic in Confluent Kafka, while a File Reader reads another file (account/leads) that is pushed to the same topic. A consumer consumes from the same topic and later moves the data to Redshift. Redshift Select can be used to verify the data that was moved, which Tableau can then consume for analytics.

The ETL Transformations

In the pipeline #1:

  1. Extract: The MySQL Select Snap reads the documents from the MySQL Database.

  2. Load:  The Confluent Kafka Producer Snap loads the documents into a topic.


In the pipeline #2:

  1. Extract: The File Reader Snap reads the records to be pushed to the Confluent Kafka topic.
  2. Transform: The Excel Parser Snap parses the records in the .xls format.
  3. Load: The Confluent Kafka Producer Snap loads the .xls documents into a topic.


In the pipeline #3:

  1. Extract: The Confluent Kafka Consumer Snap reads the documents from the same topic.
  2. Transform: The Mapper Snap maps the input documents to the Redshift Database.
  3. Load: The Redshift Bulk Load Snap loads the documents into a table. 
  4. Read: The Redshift Select Snap reads the newly loaded documents.


Downloads

Attachments
  • demo_guest.csv
  • redshift-select_2017_06_12.slp

