Scale

Overview

The Scale Snap is a Transform type Snap that scales numeric values in fields to specific ranges or applies statistical transformations. The Snap helps you with data preparation before applying a machine learning algorithm to the data. The Scale Snap supports the following four transformations:

  • Scale to range [0,1]
  • Scale to range [-1,1]
  • Z-transformation
  • Log-transformation
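The four transformations can be sketched in plain Python as follows. This is a minimal illustration, not the Snap's implementation: the function names are hypothetical, and in practice the Snap reads min, max, mean, and standard deviation from the Profile Snap's statistics rather than computing them from the batch (whether the Snap uses population or sample standard deviation is an assumption here).

```python
import math
from statistics import mean, pstdev

def scale_0_1(values):
    """Map the minimum to 0, the maximum to 1, and other values linearly in between."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

def scale_neg1_1(values):
    """Map the minimum to -1 and the maximum to 1."""
    lo, hi = min(values), max(values)
    return [2 * (x - lo) / (hi - lo) - 1 for x in values]

def z_transform(values):
    """Standardize: z = (x - mean) / sd. Population sd is an assumption."""
    m, sd = mean(values), pstdev(values)
    return [(x - m) / sd for x in values]

def log_transform(values):
    """Apply the natural logarithm to each value."""
    return [math.log(x) for x in values]
```

For example, `scale_0_1([0, 25, 50])` returns `[0.0, 0.5, 1.0]`.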

Input and Output

Expected input

  • First input view: A document with numeric fields.
  • Second input view: A document containing data statistics computed by the Profile Snap.

The Scale Snap processes the data in streams while the Profile Snap consumes all the data before it derives any statistics. Therefore, while using the Profile Snap:

  • Build a pipeline with the Profile Snap to generate data statistics that are stored in the JSON format using the JSON Formatter and File Writer Snaps.
  • Use the File Reader and JSON Parser Snaps to read statistics from the Profile Snap and feed the output data into the Scale Snap.

Expected output: A document with numeric fields that are transformed as per the transformation type.

Expected upstream Snaps

  • First input view: A Snap that has a document output view. For example, Mapper, CSV Generator, or Categorical to Numeric.
  • Second input view: A sequence of File Reader and JSON Parser Snaps. These Snaps read the data statistics computed by the Profile Snap in another pipeline. In the Profile Snap, you must select Value distribution and set Top value limit to at least the number of unique values, or to 0, which means unlimited.

Expected downstream Snaps: A Snap that has a document input view. For example, Mapper, JSON Formatter, or Type Converter.

Prerequisites

None

Configuring Accounts

Accounts are not used with this Snap.

Configuring Views

Input

This Snap has exactly two document input views - the Data input view and the Profile input view.

  • The Profile input view is required for the Range [0,1], Range [-1,1], and Z-transformation types.

  • For Log-transformation type, the second input view of the Snap can remain unconnected.

Output

This Snap has exactly one document output view.

Error

This Snap has at most one document error view.

Troubleshooting

None

Limitations and Known Issues

None

Modes


Snap Settings


Label

Required. The name for the Snap. You can modify this to be specific, especially if you have more than one of the same Snap in your pipeline.
Policy

Specify your preferences for a field's transformation. Each policy contains an input field, transformation rule, and the result field. The Snap transforms the values in the input field and writes them to the result field.

You can apply multiple transformations to the same input field. However, the result fields must be different. If the result field is the same as the input field, the Snap overwrites the input field with the result field.

Field

The field that must be transformed. This is a suggestible field that suggests all the fields in the dataset. The Snap displays an error message for non-numeric fields.

Default value: None

Rule

The type of transformation to be performed on the selected field. The available options are:

  • Range [0,1]: Scale values into [0,1]. The minimum value is scaled to 0 and the maximum value is scaled to 1. Other values are scaled to (x - min) / (max-min). 
  • Range [-1,1]: Scale values into [-1,1]. The minimum value is scaled to -1 and the maximum value is scaled to 1. Other values are scaled to 2 × (x - min) / (max - min) - 1.
  • Z-transformation: This is the same as standardization, which is z = (x - mean) / sd.
  • Log-transformation: Natural logarithm is applied to each value.

For example, if you want to transform 35 with the following statistics:

  • min = 0
  • max = 50
  • mean = 25
  • sd = 10

The result for each transformation rule is as follows:

  • Range [0,1]: 0.7
  • Range [-1,1]: 0.4
  • Z-transformation: 1
  • Log-transformation: 3.56
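These results can be checked directly; the following snippet is a quick verification of the worked example above, not Snap code:

```python
import math

# Statistics from the example: min = 0, max = 50, mean = 25, sd = 10
x, lo, hi, m, sd = 35, 0, 50, 25, 10

range_0_1 = (x - lo) / (hi - lo)      # (35 - 0) / (50 - 0) = 0.7
range_neg1_1 = 2 * range_0_1 - 1      # 2 * 0.7 - 1 = 0.4
z = (x - m) / sd                      # (35 - 25) / 10 = 1.0
log_value = math.log(x)               # ln(35) ≈ 3.56
```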
Result field

The result field to use in the output map. If the Result field is the same as Field, the values are overwritten. If the Result field does not exist in the original input document, a new field is added. 

Default value: None

Snap Execution

Select one of the three modes in which the Snap executes. Available options are:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Example


Normalizing Values

This pipeline demonstrates how to normalize values using the Scale Snap. An input dataset with 19 rows is passed to the Scale Snap. The dataset consists of fields with different ranges, and we scale those fields into the same range of [0,1] to prepare the dataset before applying a machine learning algorithm.

Download this pipeline.

 Understanding the pipeline

The input is a Blood Transfusion Service Center dataset in CSV format that contains the following details: last_donation(month), total_donation, total_blood(cc), first_donation(month), and donate_in_mar_2017. The dataset is derived from here. Fields in this dataset are scaled into the range [0,1] using the Scale Snap.

The input data preview is as follows:

We use the Type Converter Snap, configured as follows, to automatically convert values into the appropriate type:

The Auto checkbox is selected so the Snap automatically detects and converts the data types, as required.

The converted data from Type Converter Snap is passed to the Profile Snap to derive statistical details of the dataset. The Profile Snap is configured as follows:

Following is the output preview of the Profile Snap.

The data from the Profile Snap is passed to the Scale Snap. The Scale Snap is configured as follows:

The output preview of the Scale Snap is:

You can see that all the values are scaled into the range [0,1].

Downloads

Important steps to successfully reuse Pipelines

  1. Download and import the Pipeline into SnapLogic.
  2. Configure Snap accounts as applicable.
  3. Provide Pipeline parameters as applicable.

File: Scale_Example.slp, modified Nov 08, 2018 by Vidya Patil

Snap Pack History


4.29 (main15993)

  •  Upgraded with the latest SnapLogic Platform release.

4.28 (main14627)

  • Enhanced the Type Converter Snap with the Fail safe upon execution checkbox. Select this checkbox to enable the Snap to convert data with valid data types, while ignoring invalid data types.

4.27 (427patches13730)

  • Enhanced the Type Converter Snap with the Fail safe upon execution checkbox. Select this checkbox to enable the Snap to ignore invalid data types and convert data with valid data types.

4.27 (427patches13948)

4.27 (main12833)

  • No updates made.

4.26 (main11181)

  • No updates made.

4.25 (425patches10994)

  • Fixed an issue with the Deduplicate Snap where the Snap breaks when running on a locale that does not format decimals with the Period (.) character.

4.25 (main9554)

  • No updates made.

4.24 (main8556)

  • No updates made.

4.23 (main7430)

  • No updates made.

4.22 (main6403)

  • No updates made.

4.21 (snapsmrc542)

  • Introduces the Mask Snap that enables you to hide sensitive information in your dataset before exporting the dataset for analytics or writing the dataset to a target file.
  • Enhances the Match Snap to add a new field, Match all, which matches one record from the first input with multiple records in the second input. Also, enhances the Comparator field in the Snap by adding one more option, Exact, which identifies and classifies a match as either an exact match or not a match at all.
  • Enhances the Deduplicate Snap to add a new field, Group ID, which includes the Group ID for each record in the output. Also, enhances the Comparator field in the Snap by adding one more option, Exact, which identifies and classifies a match as either an exact match or not a match at all.
  • Enhances the Sample Snap by adding a second output view which displays data that is not in the first output. Also, a new algorithm type, Linear Split, which enables you to split the dataset based on the pass-through percentage.

4.20 Patch mldatapreparation8771

  • Removes the unused jcc-optional dependency from the ML Data Preparation Snap Pack.

4.20 (snapsmrc535)

  • No updates made.

4.19 (snapsmrc528)

  • New Snap: Introducing the Deduplicate Snap. Use this Snap to remove duplicate records from input documents. When you use multiple matching criteria to deduplicate your data, it is evaluated using each criterion separately, and then aggregated to give the final result.

4.18 (snapsmrc523)

  • No updates made.

4.17 Patch ALL7402

  • Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.

4.17 (snapsmrc515)

  • New Snap: Introducing the Feature Synthesis Snap, which automatically creates features out of multiple datasets that share a one-to-one or one-to-many relationship with each other.
  • New Snap: Introducing the Match Snap, which enables you to automatically identify matched records across datasets that do not have a common key field.
  • Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview check box.

4.16 (snapsmrc508)

  • Added a new Snap, Principal Component Analysis, which enables you to perform principal component analysis (PCA) on numeric fields (columns) to reduce dimensions of the dataset.

4.15 (snapsmrc500)

  • New Snap Pack. Perform preparatory operations on datasets such as data type transformation, data cleanup, sampling, shuffling, and scaling. Snaps in this Snap Pack are: 
    • Categorical to Numeric
    • Clean Missing Values
    • Date Time Extractor
    • Numeric to Categorical
    • Sample
    • Scale
    • Shuffle
    • Type Converter