Parquet Formatter


Overview

You can use this Snap to read document data from the input view and write that data in binary (Parquet) format to the output view.

Snap Type

The Parquet Formatter Snap is a Write-type Snap.

Prerequisites

None.

Support for Ultra Pipelines

Limitations and Known Issues

None.

Snap Views

Input
  • Format: Document
  • Number of Views: Min: 1, Max: 2
  • Examples of Upstream Snaps: Mapper, Copy
  • Description: Requires document data as input.

Output
  • Format: Binary
  • Number of Views: Min: 1, Max: 1
  • Examples of Downstream Snaps: Parquet Writer, Parquet Parser
  • Description: Writes the document data in binary (Parquet) format to the output.

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab:

  • Stop Pipeline Execution: Stops the current Pipeline execution if the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk (*): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the fieldset.

  • Remove icon: Indicates that you can remove fields from the fieldset.

  • Upload icon: Indicates that you can upload files.

Field Name

Field Type

Description

Label*

Default Value: Parquet Formatter
Example: Hadoop Parquet Formatter

String

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

Edit Schema

Button

Specify a valid Parquet schema that describes the data.

The schema can be based on a Hive Metastore table schema or generated from suggest data. Save the Pipeline before editing the schema so that suggest data can be generated from the schema of incoming documents. If no suggest data is available, an example schema is generated along with documentation; alter one of these schemas to describe the input data.
You can also write the Parquet schema manually. A schema is defined by a list of fields; for example, a schema can describe the contact information of a person.
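A minimal illustrative schema for such a contact record might look like the following (the field names are examples only, not prescribed by the Snap):

```
message contact_info {
  required binary name (UTF8);
  optional int32 age;
  repeated binary phone_number (UTF8);
}
```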

After the message type is defined, a list of fields is given. A field comprises a repetition, a type, and a field name. The available repetitions are required, optional, and repeated.
Each field has a type. The primitive types include:

binary - used for strings

fixed_len_byte_array - used for byte arrays of fixed length

boolean - a 1 bit boolean value

int32 - a 32 bit integer

int64 - a 64 bit integer

int96 - a 96 bit integer

float - a 32 bit floating point number

double - a 64 bit floating point number

These types can be annotated with a logical type to specify how the application should interpret the data. The Logical types include:

UTF8 - used with binary to specify the string as UTF8 encoded

INT_8 - used with int32 to specify the int as an 8 bit signed integer

INT_16 - used with int32 to specify the int as a 16 bit signed integer

Unsigned types - may be used to produce smaller in-memory representations of the data. If the stored value is larger than the maximum allowed by int32 or int64, then the behavior is undefined.

UINT_8 - used with int32 to specify the int as an 8 bit unsigned integer

UINT_16 - used with int32 to specify the int as a 16 bit unsigned integer

UINT_32 - used with int32 to specify the int as a 32 bit unsigned integer

UINT_64 - used with int64 to specify the int as a 64 bit unsigned integer

DECIMAL(precision, scale) - used to describe arbitrary-precision signed decimal numbers of the form value * 10^(-scale) to the given precision. The annotation can be used with int32, int64, fixed_len_byte_array, or binary. See the Parquet documentation for limits on the precision that can be given.

DATE - used with int32 to specify the number of days since the Unix epoch, 1 January 1970

Note: This Snap supports only the following date format: yyyy-MM-dd.

TIME_MILLIS - used with int32 to specify the number of milliseconds after midnight

TIMESTAMP_MILLIS - used with int64 to store the number of milliseconds from the Unix epoch, 1 January 1970

INTERVAL - used with a fixed_len_byte_array of length 12, where the array stores 3 unsigned little-endian integers. These integers specify

a number in months

a number in days

a number in milliseconds

JSON - used with binary to represent an embedded JSON document

BSON - used for an embedded BSON document
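To make the numeric encodings above concrete, here is a small Python sketch (independent of the Snap itself) showing how values for several of these logical types are computed; the example values are arbitrary:

```python
import struct
from datetime import date, datetime, timezone

# DATE: days since the Unix epoch (1970-01-01)
jan7 = (date(1970, 1, 7) - date(1970, 1, 1)).days  # 6

# TIME_MILLIS: milliseconds after midnight
noon = 12 * 60 * 60 * 1000  # 43200000

# TIMESTAMP_MILLIS: milliseconds since the Unix epoch
jan7_epoch = int(datetime(1970, 1, 7, tzinfo=timezone.utc).timestamp()) * 1000  # 518400000

# DECIMAL(5,2): the value 123.45 is stored as the unscaled integer 12345
dec_num = round(123.45 * 10 ** 2)  # 12345

# INTERVAL: a 12-byte array holding three unsigned little-endian integers
# (months, days, milliseconds)
interval = struct.pack("<III", 1, 2, 3000)
```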

The following is an example of a schema using all the primitive and some examples of logical types:

message document {
  # Primitive Types
  optional int32 32_num;
  optional int64 64_num;
  optional boolean truth;
  optional binary message;
  optional float pi;
  optional double e;
  optional int96 96_num;
  optional fixed_len_byte_array (1) one_byte;
  # Logical Types
  optional binary snowman (UTF8);
  optional int32 8_num (INT_8);
  optional int32 16_num (INT_16);
  optional int32 u8_num (UINT_8);
  optional int32 u16_num (UINT_16);
  optional int32 u32_num (UINT_32);
  optional int64 u64_num (UINT_64);
  optional int32 dec_num (DECIMAL(5,2));
  optional int32 jan7 (DATE);
  optional int32 noon (TIME_MILLIS);
  optional int64 jan7_epoch (TIMESTAMP_MILLIS);
  optional binary embedded (JSON);
}

The Generate template option does not work for nested structures such as the MAP and LIST types.
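For illustration, an input document matching the schema above might look like the following (all values are examples only):

```
{
  "32_num": 32,
  "64_num": 64,
  "truth": true,
  "message": "hello",
  "pi": 3.14,
  "e": 2.71828,
  "96_num": 96,
  "one_byte": "a",
  "snowman": "snowman",
  "8_num": 8,
  "16_num": 16,
  "u8_num": 8,
  "u16_num": 16,
  "u32_num": 32,
  "u64_num": 64,
  "dec_num": 123.45,
  "jan7": "1970-01-07",
  "noon": 43200000,
  "jan7_epoch": 518400000,
  "embedded": "{\"key\": \"value\"}"
}
```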

Compression

Default Value: NONE

Example: SNAPPY

Dropdown list

Select the type of compression to use when writing the file. The following are the available options:

  • NONE

  • SNAPPY

  • GZIP

Many compression algorithms require both Java and system libraries and will fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and are usually located in the /usr/lib/x86_64-linux-gnu directory.

 

Decimal rounding mode

 

Default Value: Half up

Example: Up

Dropdown list

Select the required rounding method for decimal values when they exceed the required number of decimal places. The available options are:

  • Half up

  • Half down

  • Half even

  • Up

  • Down

  • Ceiling

  • Floor

  • Truncate
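As an illustration of how these rounding modes differ, here is a sketch using Python's decimal module; the Snap's internal implementation may differ, and Truncate presumably corresponds to dropping excess digits, which matches Down (toward zero):

```python
from decimal import (Decimal, ROUND_HALF_UP, ROUND_HALF_DOWN,
                     ROUND_HALF_EVEN, ROUND_UP, ROUND_DOWN,
                     ROUND_CEILING, ROUND_FLOOR)

# Round 2.345 to two decimal places under each mode.
value = Decimal("2.345")
places = Decimal("0.01")

half_up   = value.quantize(places, rounding=ROUND_HALF_UP)    # 2.35
half_down = value.quantize(places, rounding=ROUND_HALF_DOWN)  # 2.34
half_even = value.quantize(places, rounding=ROUND_HALF_EVEN)  # 2.34 (ties go to the even digit)
up        = value.quantize(places, rounding=ROUND_UP)         # 2.35 (away from zero)
down      = value.quantize(places, rounding=ROUND_DOWN)       # 2.34 (toward zero)
ceiling   = value.quantize(places, rounding=ROUND_CEILING)    # 2.35 (toward positive infinity)
floor     = value.quantize(places, rounding=ROUND_FLOOR)      # 2.34 (toward negative infinity)
```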

Snap execution

Default Value: Validate & Execute
Example: Execute only

Dropdown list

Select one of the following three modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Error

Reason

Resolution

Account validation failed.

The Pipeline ended before the batch could complete execution due to a connection error.

Verify that the Refresh token field is configured to handle the inputs properly. If you are not sure when the input data is available, configure this field as zero to keep the connection always open.

Examples

Excluding Fields from the Input Data Stream

You can exclude unrequired fields from the input data stream by omitting them in the Input schema fieldset. This example demonstrates how to use the <Snap Name> Snap to achieve this result:

<screenshot of Pipeline/Snap and description>

Download this Pipeline. 

Downloads

  1. Download and import the Pipeline into SnapLogic.

  2. Configure Snap accounts, as applicable.

  3. Provide Pipeline parameters, as applicable.


Snap Pack History


