Parquet Formatter

In this article


You can use this Snap to read document data from the input and write that data to the output in the binary (Parquet) format.


Snap Type

The Parquet Formatter Snap is a Format-type Snap.

Prerequisites

None.

Support for Ultra Pipelines


Does not work in Ultra Pipelines

Limitations and Known Issues

None.

Snap Views

Type

Format

Number of Views

Examples of Upstream and Downstream Snaps

Description

Input 

Document

 

  • Min: 1

  • Max: 2

  • Mapper

  • Copy

Requires document data as input.

You can override the schema setting by inserting an object like the one shown below into the second input view.
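For example, passing a document like the following into the second input view overrides the configured schema. This is a minimal sketch; the field names inside the schema string are illustrative:

Code Block
{
  "schema": "message document {\n  optional binary name (UTF8);\n  optional int32 age (INT_8);\n}"
}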

Output

Binary

 

  • Min: 1

  • Max: 1

  • Parquet Writer

  • Parquet Parser

Writes the document data in the binary (Parquet) format to the output.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current pipeline execution if the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

Info
  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

  • Upload icon: Indicates that you can upload files.

Field Name

Field Type

Description

Label*

 

Default Value: Parquet Formatter
Example: Transform Parquet Formatter

String

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

 

Edit Schema

Button

Specify a valid Parquet schema that describes the data.

The schema can be based on a Hive Metastore table schema or generated from suggest data. Save the pipeline before editing the schema so that suggest data can be generated from the schema of incoming documents. If no suggest data is available, an example schema is generated along with documentation; alter one of those schemas to describe the input data.

You can also write the Parquet schema manually. A schema is defined by a list of fields.

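For illustration, here is a schema describing the contact information of a person. This sketch follows the canonical AddressBook example from the Parquet documentation:

Code Block
message AddressBook {
  required binary owner (UTF8);
  repeated binary ownerPhoneNumbers (UTF8);
  repeated group contacts {
    required binary name (UTF8);
    optional binary phoneNumber (UTF8);
  }
}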

After defining the message type, a list of fields is given. A field consists of a repetition, a type, and the field name. The available repetitions are required, optional, and repeated.
Each field has a type. The primitive types include:

  • binary - used for strings

  • fixed_len_byte_array - used for byte arrays of fixed length

  • boolean - a 1-bit boolean value

  • int32 - a 32-bit integer

  • int64 - a 64-bit integer

  • int96 - a 96-bit integer

  • float - a 32-bit floating point number

  • double - a 64-bit floating point number

These types can be annotated with a logical type to specify how the application should interpret the data. The logical types include:

  • UTF8 - used with binary to specify that the string is UTF-8 encoded

  • INT_8 - used with int32 to specify the int as an 8-bit signed integer

  • INT_16 - used with int32 to specify the int as a 16-bit signed integer

  • Unsigned types - may be used to produce smaller in-memory representations of the data. If the stored value is larger than the maximum allowed by int32 or int64, then the behavior is undefined.

      • UINT_8 - used with int32 to specify the int as an 8-bit unsigned integer

      • UINT_16 - used with int32 to specify the int as a 16-bit unsigned integer

      • UINT_32 - used with int32 to specify the int as a 32-bit unsigned integer

      • UINT_64 - used with int64 to specify the int as a 64-bit unsigned integer

  • DECIMAL(precision, scale) - used to describe arbitrary-precision signed decimal numbers of the form value * 10^(-scale) to the given precision. The annotation can be used with int32, int64, fixed_len_byte_array, or binary. See the Parquet documentation for limits on the precision that can be given.

  • DATE - used with int32 to specify the number of days since the Unix epoch, 1 January 1970

    Note: This Snap supports only the following date format: yyyy-MM-dd.

  • TIME_MILLIS - used with int32 to specify the number of milliseconds after midnight

  • TIMESTAMP_MILLIS - used with int64 to store the number of milliseconds since the Unix epoch, 1 January 1970

  • INTERVAL - used with a fixed_len_byte_array of length 12, where the array stores three unsigned little-endian integers that specify, in order, a number of months, a number of days, and a number of milliseconds

  • JSON - used with binary to represent an embedded JSON document

  • BSON - used for an embedded BSON document

 

 

 

 

 

 

 


The following example schema uses all the primitive types and some of the logical types:

Code Block
message document {
  # Primitive Types
  optional int64 32_num;
  optional int64 64_num;
  optional boolean truth;
  optional binary message;
  optional float pi;
  optional double e;
  optional int96 96_num;
  optional fixed_len_byte_array (1) one_byte;
  # Logical Types
  optional binary snowman (UTF8);
  optional int32 8_num (INT_8);
  optional int32 16_num (INT_16);
  optional int32 u8_num (UINT_8);
  optional int32 u16_num (UINT_16);
  optional int32 u32_num (UINT_32);
  optional int64 u64_num (UINT_64);
  optional int32 dec_num (DECIMAL(5,2));
  optional int32 jan7 (DATE);
  optional int32 noon (TIME_MILLIS);
  optional int64 jan7_epoch (TIMESTAMP_MILLIS);
  optional binary embedded (JSON);
}
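For reference, an input document matching this schema could look like the following sketch. All fields are optional, so only a subset is shown; the values are illustrative, and the exact value representations the Snap accepts for DATE and TIMESTAMP_MILLIS fields are assumptions here, apart from the yyyy-MM-dd date format fixed by the note above:

Code Block
{
  "32_num": 123,
  "64_num": 9876543210,
  "truth": true,
  "message": "hello, parquet",
  "pi": 3.14,
  "e": 2.71828,
  "snowman": "snowman text",
  "8_num": -12,
  "dec_num": 123.45,
  "jan7": "2019-01-07",
  "noon": 43200000,
  "jan7_epoch": 1546837200000,
  "embedded": "{\"key\": \"value\"}"
}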
Note

Generate template does not support nested structures like the MAP and LIST types.
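For reference, the following sketch shows what a nested LIST structure looks like in standard Parquet schema notation; the field name is illustrative. A schema that needs such a structure must be written by hand rather than generated from the template:

Code Block
# A LIST-typed field (standard three-level Parquet representation; illustrative)
optional group tags (LIST) {
  repeated group list {
    optional binary element (UTF8);
  }
}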

Compression

 

 

Default Value: NONE

Example: SNAPPY

Dropdown list

Choose the type of compression to use when writing the file. The available options are:

  • NONE

  • SNAPPY

  • GZIP

Many compression algorithms require both Java and system libraries, and the algorithms fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and can be located in the /usr/lib/x86_64-linux-gnu directory.

 

Decimal rounding mode

 

Default Value: Half up

Example: Up

Dropdown list

Choose the required rounding method for decimal values when they exceed the required number of decimal places. The available options are:

  • Half up

  • Half down

  • Half even

  • Up

  • Down

  • Ceiling

  • Floor

  • Truncate

For example, with two decimal places, the value 2.345 is written as 2.35 under Half up and as 2.34 under Down or Truncate.

Snap execution

Default Value: Validate & Execute
Example: Execute only

Dropdown list

Select one of the following three modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.

  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Schema

The following is an example of a schema document of the kind described above, as you would pass it into the Snap's second input view to override the schema setting:

Code Block


{
  "schema": "message document {\n  optional binary AUTOSYNC_PRIMARYKEY (STRING);\n  optional binary AUTOSYNC_VALUEHASH (STRING);\n  optional binary AUTOSYNC_CURRENTRECORDFLAG (STRING);\n  optional int64 AUTOSYNC_EFFECTIVEBEGINTIME (TIMESTAMP(MILLIS,true));\n  optional int64 AUTOSYNC_EFFECTIVEENDTIME (TIMESTAMP(MILLIS,true));\n  optional double ID1;\n  optional binary ID2 (STRING);\n  optional binary ID3 (STRING);\n  optional binary ID4 (STRING);\n  optional binary ID5 (STRING);\n  optional binary ID6 (STRING);\n  optional binary ID7 (STRING);\n  optional binary ID8;\n  optional double ID9;\n  optional double ID10;\n  optional double ID11;\n  optional double ID12;\n  optional double ID13;\n  optional double ID14;\n  optional int32 ID15 (DATE);\n  optional int64 ID16 (TIMESTAMP(MILLIS,true));\n  optional int64 ID17 (TIMESTAMP(MILLIS,true));\n  optional int64 ID18 (TIMESTAMP(MILLIS,true));\n  optional double ID100;\n}\n"
}

Examples

Transform document data into Parquet format and vice-versa

This example demonstrates how to convert input document data to the Parquet format and then parse the Parquet data back into documents.

Download this pipeline.

Step 1: Configure the JSON Generator Snap with input data.

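For instance, the JSON Generator might supply documents like the following (illustrative data; the actual data in the downloadable pipeline may differ):

Code Block
[
  {
    "name": "John Doe",
    "phone": "555-0100"
  },
  {
    "name": "Jane Roe",
    "phone": "555-0101"
  }
]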

Step 2: Configure the Parquet Formatter Snap with the schema for the input document data.


Step 3: Configure the Parquet Parser Snap. On validation, the Snap converts the Parquet data back to document data.

Parquet Parser Configuration

Parquet Parser Output

parquet-parser-config.png

parquet-parser-output.png

Downloads

Info
  1. Download and import the pipeline into the SnapLogic Platform.

  2. Configure Snap accounts, as applicable.

  3. Provide pipeline parameters, as applicable.


Snap Pack History

See the Transform Snap Pack page for the Snap Pack history.

Related Content