
Overview

You can use this Snap to read the input document data and write it in the binary (Parquet) format to the output.


Snap Type

The Parquet Formatter Snap is a Format-type Snap that reads the input document data and writes it in the binary (Parquet) format to the output.


Prerequisites

None.

Support for Ultra Pipelines

Works in Ultra Pipelines.

Limitations and Known Issues

None.

Snap Views

Input

  • Format: Document

  • Number of Views: Min: 1, Max: 2

  • Examples of Upstream Snaps: Mapper, Copy

  • Description: Requires document data as input. You can override the schema setting by inserting an object into the second input view (see the example after this section).

Output

  • Format: Binary

  • Number of Views: Min: 1, Max: 1

  • Examples of Downstream Snaps: Parquet Writer, Parquet Parser

  • Description: Writes the document data in the binary (Parquet) format to the output.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the

Pipeline

pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current

Pipeline
  • pipeline execution if the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.
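
The Input view described above accepts a second input view that overrides the schema setting. The following is a minimal sketch of such an override document, assuming the schema is supplied under a "schema" key as in the example later on this page; the field names are illustrative only:

Code Block
languagejson
{
  "schema": "message document {\n  optional binary name (UTF8);\n  optional int64 amount;\n}\n"
}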

Snap Settings

Info
  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

  • Upload icon: Indicates that you can upload files.

Field Name

Field Type

Field Dependency

Description

Label*

Default Value: Parquet Formatter
Example: Transform Parquet Formatter

String

None.

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.


Edit Schema

Button

Specify a valid Parquet schema that describes the data.

The following is an example of a schema that uses all of the primitive types and some of the logical types:

Code Block
message document {
  # Primitive Types
  optional int32 32_num;
  optional int64 64_num;
  optional boolean truth;
  optional binary message;
  optional float pi;
  optional double e;
  optional int96 96_num;
  optional fixed_len_byte_array (1) one_byte;
  # Logical Types
  optional binary snowman (UTF8);
  optional int32 8_num (INT_8);
  optional int32 16_num (INT_16);
  optional int32 u8_num (UINT_8);
  optional int32 u16_num (UINT_16);
  optional int32 u32_num (UINT_32);
  optional int64 u64_num (UINT_64);
  optional int32 dec_num (DECIMAL(5,2));
  optional int32 jan7 (DATE);
  optional int32 noon (TIME_MILLIS);
  optional int64 jan7_epoch (TIMESTAMP_MILLIS);
  optional binary embedded (JSON);
}
Note

"Generate template" does not support nested structures such as the MAP and LIST types; see the sketch below for how such structures are expressed in standard Parquet schema syntax.

Compression

 

 

Default Value: NONE

Example: SNAPPY

Dropdown list

Choose the type of compression to use when writing the file. The available options are:

  • NONE

  • SNAPPY

  • GZIP

Many compression algorithms require both Java and system libraries, and they fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and can be located in the /usr/lib/x86_64-linux-gnu directory.

Decimal rounding mode

 

Default Value: Half up

Example: Up

Dropdown list

Choose the rounding method to apply when a decimal value exceeds the specified number of decimal places (see the illustration after this list). The following are the available options:

  • Half up

  • Half down

  • Half even

  • Up

  • Down

  • Ceiling

  • Floor

  • Truncate
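
The following illustration (plain Python, not SnapLogic code) sketches how these standard rounding modes behave when a value such as 2.345 must be reduced to two decimal places, as with a DECIMAL(5,2) column. Truncate simply drops the extra digits, which gives the same result as Down in this case:

Code Block
languagepy
from decimal import (Decimal, ROUND_HALF_UP, ROUND_HALF_DOWN, ROUND_HALF_EVEN,
                     ROUND_UP, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR)

# Illustration only: standard rounding modes applied to 2.345 at two decimal places.
value = Decimal("2.345")
modes = {
    "Half up":   ROUND_HALF_UP,    # 2.35 - ties round away from zero
    "Half down": ROUND_HALF_DOWN,  # 2.34 - ties round toward zero
    "Half even": ROUND_HALF_EVEN,  # 2.34 - ties round to the nearest even digit
    "Up":        ROUND_UP,         # 2.35 - always away from zero
    "Down":      ROUND_DOWN,       # 2.34 - always toward zero (same as Truncate here)
    "Ceiling":   ROUND_CEILING,    # 2.35 - toward positive infinity
    "Floor":     ROUND_FLOOR,      # 2.34 - toward negative infinity
}
for name, mode in modes.items():
    print(name, value.quantize(Decimal("0.01"), rounding=mode))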

Snap execution

Default Value: Validate & Execute
Example: Execute only

Dropdown list

Select one of the following three modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.

  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Error: Account validation failed.

Reason: The pipeline ended before the batch could complete execution due to a connection error.

Resolution: Verify that the Refresh token field is configured to handle the inputs properly. If you are not sure when the input data is available, configure this field as zero to keep the connection always open.

Examples

Excluding Fields from the Input Data Stream

You can exclude fields that are not required from the input data stream by omitting them from the schema. This example demonstrates how to use the Parquet Formatter Snap to achieve this result with the schema shown below.

Schema

Code Block
languagejson
{
   "schema": "message document 
   {\n  optional binary AUTOSYNC_PRIMARYKEY (STRING);
   \n  optional binary AUTOSYNC_VALUEHASH (STRING);
   \n  optional binary AUTOSYNC_CURRENTRECORDFLAG (STRING);
   \n  optional int64 AUTOSYNC_EFFECTIVEBEGINTIME (TIMESTAMP(MILLIS,true));
   \n  optional int64 AUTOSYNC_EFFECTIVEENDTIME (TIMESTAMP(MILLIS,true));
   \n  optional double ID1;\n  optional binary ID2 (STRING);
   \n  optional binary ID3 (STRING);
   \n  optional binary ID4 (STRING);\n  optional binary ID5 (STRING);
   \n  optional binary ID6 (STRING);\n  optional binary ID7 (STRING);
   \n  optional binary ID8;\n  optional double ID9;
   \n  optional double ID10;
   \n  optional double ID11;\n  optional double ID12;
   \n  optional double ID13;\n  optional double ID14;
   \n  optional int32 ID15 (DATE);\n  optional int64 ID16 (TIMESTAMP(MILLIS,true));
   \n  optional int64 ID17 (TIMESTAMP(MILLIS,true));
   \n  optional int64 ID18 (TIMESTAMP(MILLIS,true));
   \n  optional double ID100;
   \n}
   \n"
}
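
As an illustrative sketch (not taken from the original pipeline), an input document such as the following would have its EXTRA_FIELD omitted from the Parquet output because that field is not declared in the schema above, while the declared fields are written:

Code Block
languagejson
{
  "AUTOSYNC_PRIMARYKEY": "PK-001",
  "ID1": 42.0,
  "ID2": "sample",
  "EXTRA_FIELD": "this value is not written to the Parquet output"
}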

Transform document data into Parquet format and vice versa

This example demonstrates how to convert the input document data to the Parquet format and then convert the Parquet data back to document data.

Download this pipeline.

Step 1: Configure the JSON Generator Snap with input data.

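The screenshots for these steps are not reproduced here. As an illustrative sketch (not the actual pipeline data), the JSON Generator could supply a document such as:

Code Block
languagejson
[
  {
    "id": 101,
    "name": "Acme Widget",
    "price": 12.5,
    "in_stock": true
  }
]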

Step 2: Configure the Parquet Formatter Snap with the schema for the input document data.

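Continuing the sketch above, a Parquet Formatter schema that matches those illustrative fields might look like this:

Code Block
message document {
  optional int64 id;
  optional binary name (UTF8);
  optional double price;
  optional boolean in_stock;
}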

Step 3: Configure the Parquet Parser Snap. On validation, the Snap converts the Parquet data back to document data.

Parquet Parser Configuration (parquet-parser-config.png)

Parquet Parser Output (parquet-parser-output.png)

Downloads

Info
  1. Download and import the pipeline into the SnapLogic Platform.

  2. Configure Snap accounts, as applicable.

  3. Provide pipeline parameters, as applicable.

Snap Pack History

Refer to the Transform Snap Pack page for the Snap Pack history.

Related Content