Parquet Writer


Overview

You can use this Snap to convert documents to the Parquet format and write the data to HDFS, ADL (Azure Data Lake), ABFS (Azure Blob File Storage), WASB (Azure storage), or an S3 bucket. This Snap supports a nested schema such as LIST and MAP. You can also use this Snap to write schema information to the Catalog Insert Snap.

This Snap supports HDFS, ADL (Azure Data Lake), ABFS (Azure Blob File Storage), WASB (Azure storage), and S3 protocols.


Snap Type

The Parquet Writer Snap is a Write-type Snap.

Prerequisites

You must have access and permission to write to HDFS, ADL (Azure Data Lake), ABFS (Azure Data Lake Storage Gen 2), WASB (Azure storage), or AWS S3.

Support for Ultra Pipelines

Works in Ultra Pipelines.

Supported Versions

The Parquet Writer Snap is tested against Windows Server 2008, 2010, and 2012.

Limitations

  • Auto schema generation in this Snap excludes null fields. For example, if the Snap receives ten input documents during preview execution and four fields contain only null values across all of those documents, those four fields are excluded from the generated schema. The schema includes only fields with at least one non-null value among the preview input documents.

  • "Generate template" is not supported for nested structures such as MAP and LIST types. Generate template is a link within the schema editor, which you access through the Edit Schema button.

  • All expression Snap properties can be evaluated (when you click the '=' button) from pipeline parameters only, not from input documents from upstream Snaps. Input documents are the data to be formatted and written to the target files.

  • The security model configured for the Groundplex (SIMPLE or KERBEROS authentication) must match the security model of the remote server. Due to the limitations of the Hadoop library, we can only create the necessary internal credentials to configure the Groundplex.

  • Parquet Snaps work well in a Linux environment. However, due to limitations in the Hadoop library on Windows, they may not always function as expected in a Windows environment. We recommend using a Linux environment when working with Parquet Snaps.

To use the Parquet Writer Snap on a Windows Snaplex, follow these steps:

  1. Download hadoop.dll and winutils.exe from winutils/hadoop-3.2.2/bin at master · cdarlint/winutils (SnapLogic's Hadoop version is 3.2.2).

  2. Create a temporary directory.

  3. Place the hadoop.dll and winutils.exe files in this path: C:\hadoop\bin

  4. Set the environment variable HADOOP_HOME to point to C:\hadoop

  5. Add C:\hadoop\bin to the environment variable PATH.

  6. Add the JVM options in the Windows Snaplex: jcc.jvm_options = -Djava.library.path=C:\\hadoop\bin

    If you already have an existing jvm_options, then add "-Djava.library.path=C:\\hadoop\bin" after a space.
    For example: jcc.jvm_options = -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -Djava.library.path=C:\\hadoop\bin

  7. Restart the JCC for configurations to take effect.

Known Issues

Learn more about the Azure Storage library upgrade.

Snap Views

Type

Format

Number of Views

Examples of Upstream and Downstream Snaps

Description


Input

Document


  • Min: 1

  • Max: 2

Mapper

  • This Snap has one or two document input views. When you enable the second input view, the Snap ignores the other schema settings, such as the Edit Schema button and the Hive Metastore-related properties, and accepts the schema only from the second input view. When you disable the second input view, the Snap obtains the schema using the information provided in the Hive Metastore URL property. The supported data types are:

    • Primitive: boolean, integer, float, double, and byte array

    • Nested: map, list

  • The Snap expects a Hive Execute Snap that runs a "Describe table" statement to be connected to the second input view, as shown in the example below.
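
For example, a Hive Execute Snap connected to the second input view might run a statement such as the following (the database and table names are illustrative):

    DESCRIBE default.flights;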

Output

Document

  • Min: 0

  • Max: 1

Mapper

A document with a filename for each Parquet file written. For example: {"filename" : "hdfs://localhost/tmp/2017/april/sample.parquet"}

Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.

  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap Settings

  • Asterisk ( * ): Indicates a mandatory field.

  • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

  • Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

  • Add icon: Indicates that you can add fields in the field set.

  • Remove icon: Indicates that you can remove fields from the field set.

  • Upload icon: Indicates that you can upload files.

Field Name

Field Type

Description


Label*

Default Value: Parquet Writer
Example: Parquet Writer

String

Specify a unique name for the Snap.

Directory

Default Value: hdfs://<hostname>:<port>/
Example:

  • hdfs://ec2-54-198-212-134.compute-1.amazonaws.com:8020/user/john/input/

  • webhdfs://cdh-ga-2.fullsail.yourcompany.com:50070/user/ec2-user/csv/

  • _dirname


String/Expression/Suggestion

Specify the path to the directory to which the Snap writes the data. The path must be in the following format:
hdfs://<hostname>:<port>/

All files within the directory must be Parquet formatted. The following file storage systems are supported:

  • hdfs
    Protocol: hdfs://<hostname>:<port>/<path to directory>
    Example: hdfs://localhost:8020/tmp

  • S3
    Protocol: s3://<bucket name>/<key name prefix>
    Example: s3://test-bucket/tmp

  • wasb
    Protocol: wasb:///<storage container>/<path to directory>
    Example: wasb:///container/tmp

  • wasbs
    Protocol: wasbs:///<storage container>/<path to directory>
    Example: wasbs:///container/tmp

  • adl
    Protocol: adl://<store name>/<path to directory>
    Example: adl://storename/tmp

  • adls
    Protocol: adls://<store name>/<path to directory>
    Example: adls://storename/tmp

  • abfs
    Protocol: abfs:///filesystem/<path>/
    Example: abfs://filesystem1/core.windows.net/dirl

  • abfs
    Protocol: abfs://<filesystem>@<accountname>.<endpoint>/<path>
    Example: abfs://filesystem2@adlsgen2test1.dfs.core.windows.net/dirl

  • abfss
    Protocol: abfss:///filesystem/<path>/
    Example: abfss://filesystem1/core.windows.net/dirl

  • abfss
    Protocol: abfss://<filesystem>@<accountname>.<endpoint>/<path>
    Example: abfss://filesystem2@adlsgen2test1.dfs.core.windows.net/dirl

SnapLogic automatically appends azuredatalakestore.net to the store name you specify when using Azure Data Lake; therefore, you do not have to add azuredatalakestore.net to the URI while specifying the directory.
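
For example, if you specify adl://mystore/tmp (mystore being an illustrative store name), the Snap resolves the path as adl://mystore.azuredatalakestore.net/tmp.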

The Directory property is not used during pipeline execution or preview; it is used only in the Suggest operation. When you click the Suggestion icon, the Snap displays a list of subdirectories under the specified directory. It generates the list by applying the value of the File Filter property.

File Filter


Default Value: *
Example: **


String/Expression

Specify the Glob file pattern.

The following rules are used to interpret glob patterns:

- The * character matches zero or more characters of a name component without crossing directory boundaries. For example, the *.csv pattern matches a path that represents a file name ending in .csv, and *.* matches all file names that contain a period.

- The ** characters match zero or more characters across directories; therefore, it matches all files or directories in the current directory and in its subdirectories. For example, /home/** matches all files and directories in the /home/ directory.

- The ? character matches exactly one character of a name component. For example, 'foo.?' matches file names that start with 'foo.' and are followed by a single-character extension.

- The \ character is used to escape characters that would otherwise be interpreted as special characters. The expression \\ matches a single backslash, and \{ matches a left brace, for example.

- The ! character is used to exclude matching files from the output.

- The [ ] characters form a bracket expression that matches a single character of a name component out of a set of characters. For example, '[abc]' matches 'a', 'b', or 'c'. The hyphen (-) may be used to specify a range, so '[a-z]' specifies a range that matches from 'a' to 'z' (inclusive). These forms can be mixed, so '[abce-g]' matches 'a', 'b', 'c', 'e', 'f', or 'g'. If the character after the [ is a !, then it is used for negation, so '[!a-c]' matches any character except 'a', 'b', or 'c'.

- Within a bracket expression, the '*', '?', and '\' characters match themselves. The '-' character matches itself if it is the first character within the brackets, or the first character after the !, if negating.

- The '{ }' characters are a group of sub-patterns where the group returns a match if any sub-pattern in the group matches the contents of a target directory. The ',' character is used to separate sub-patterns. Groups cannot be nested. For example, the pattern '*.{csv, json}' matches file names ending with '.csv' or '.json'.

- Leading dot characters in a file name are treated as regular characters in match operations. For example, the '*' glob pattern matches the file name ".login".

- All other characters match themselves.

Examples:

'*.csv' matches all files with a csv extension in the current directory only.

'**.csv' matches all files with a csv extension in the current directory and in all its subdirectories.

*[!{.pdf,.tmp}] excludes all files with the extension PDF or TMP.

File

Default Value: N/A
Example:

  • sample.csv

  • tmp/another.csv

  • _filename

String/Expression/Suggestion

Specify the filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/".
The File value can be a JavaScript expression, which is evaluated with values from the input view document. When you click the Suggestion icon, the Snap displays a list of regular files under the directory given in the Directory property. It generates the list by applying the value of the File Filter property.
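
For example, a hypothetical expression that builds the file name from a field in the input document (assuming the incoming documents contain a month field) might be:

    'out_' + $month + '.parquet'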

Hive Metastore URL

Default Value: N/A
Example: thrift://localhost:9083

String/Expression

Specify the URL of the Hive Metastore to assist in setting the schema, along with the Database and Table settings. If the data being written has a Hive schema, the Snap can be configured to read the schema instead of entering it manually. Set the value to the Hive Metastore URL where the schema is defined.

Database

Default value: N/A

String/Expression/Suggestion

Specify the Hive Metastore database which holds the schema for the outgoing data.


Table

Default value: N/A

String/Expression/Suggestion

Specify the table whose schema should be used for formatting the outgoing data.


Fetch Hive Schema at Runtime


Default value: Deselected

Checkbox

Select this checkbox to fetch the schema from the Metastore table before writing. When selected, the Snap uses the Metastore schema instead of the schema set in the Snap's Edit Schema property. The Snap fails to write if it cannot connect to the Metastore or if the table does not exist during the pipeline's execution.

Edit Schema


Button

Specify a valid Parquet schema that describes the data.

After defining the message type, a list of fields is given. A field consists of a repetition, a type, and the field name. The available repetitions are required, optional, and repeated.
Each field has a type. The primitive types include:

binary - used for strings

fixed_len_byte_array - used for byte arrays of fixed length

boolean - a 1-bit boolean value

int32 - a 32-bit integer

int64 - a 64-bit integer

int96 - a 96-bit integer

float - a 32-bit floating point number

double - a 64-bit floating point number

These types can be annotated with a logical type to specify how the application should interpret the data. The Logical types include:

UTF8 - used with binary to specify the string as UTF8 encoded

INT_8 - used with int32 to specify the int as an 8 bit signed integer

INT_16 - used with int32 to specify the int as a 16 bit signed integer

Unsigned types - may be used to produce smaller in-memory representations of the data. If the stored value is larger than the maximum allowed by int32 or int64, then the behavior is undefined.

UINT_8 - used with int32 to specify the int as an 8 bit unsigned integer

UINT_16 - used with int32 to specify the int as a 16 bit unsigned integer

UINT_32 - used with int32 to specify the int as a 32 bit unsigned integer

UINT_64 - used with int64 to specify the int as a 64 bit unsigned integer

DECIMAL(precision, scale) - used to describe arbitrary-precision signed decimal numbers of the form value * 10^(-scale) to the given precision. The annotation can be used with int32, int64, fixed_len_byte_array, or binary. See the Parquet documentation for limits on the precision that can be given.

DATE - used with int32 to specify the number of days since the Unix epoch, 1 January 1970

Note: This Snap supports only the following date format: yyyy-MM-dd.

TIME_MILLIS - used with int32 to specify the number of milliseconds after midnight

TIMESTAMP_MILLIS - used with int64 to store the number of milliseconds from the Unix epoch, 1 January 1970

INTERVAL - used with a fixed_len_byte_array of length 12, where the array stores 3 unsigned little-endian integers. These integers specify

a number in months

a number in days

a number in milliseconds

JSON - used with binary to represent an embedded JSON document

BSON - used for an embedded BSON document

The following is an example of a schema using all the primitive and some examples of logical types:

message document {
  # Primitive Types
  optional int64 32_num;
  optional int64 64_num;
  optional boolean truth;
  optional binary message;
  optional float pi;
  optional double e;
  optional int96 96_num;
  optional fixed_len_byte_array (1) one_byte;
  # Logical Types
  optional binary snowman (UTF8);
  optional int32 8_num (INT_8);
  optional int32 16_num (INT_16);
  optional int32 u8_num (UINT_8);
  optional int32 u16_num (UINT_16);
  optional int32 u32_num (UINT_32);
  optional int64 u64_num (UINT_64);
  optional int32 dec_num (DECIMAL(5,2));
  optional int32 jan7 (DATE);
  optional int32 noon (TIME_MILLIS);
  optional int64 jan7_epoch (TIMESTAMP_MILLIS);
  optional binary embedded (JSON);
}
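
Outside SnapLogic, the same type mappings can be reproduced with any Parquet library. The following is a minimal sketch, assuming the pyarrow Python library, that declares a schema using a few of the logical types listed above and writes a single row; the field names, values, and file path are illustrative only:

import datetime
from decimal import Decimal

import pyarrow as pa
import pyarrow.parquet as pq

# A schema covering a UTF8 string, an int32, DECIMAL(5,2), DATE, and TIMESTAMP_MILLIS
schema = pa.schema([
    ("message", pa.string()),            # binary (UTF8)
    ("num32", pa.int32()),               # int32
    ("dec_num", pa.decimal128(5, 2)),    # DECIMAL(5,2)
    ("jan7", pa.date32()),               # DATE (days since the Unix epoch)
    ("jan7_epoch", pa.timestamp("ms")),  # TIMESTAMP_MILLIS
])

rows = [{
    "message": "Hello, World",
    "num32": 1,
    "dec_num": Decimal("123.45"),
    "jan7": datetime.date(1970, 1, 7),
    "jan7_epoch": datetime.datetime(1970, 1, 7, 12, 0, 0),
}]

table = pa.Table.from_pylist(rows, schema=schema)
pq.write_table(table, "sample.parquet", compression="snappy")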

Compression*


Dropdown list

Select the type of compression to use when writing the file. The available options are:

  • NONE

  • SNAPPY

  • GZIP

  • LZO

Partition by

Default Value: N/A

String/Suggestion

Specify or select the key that is used to derive the 'Partition by' folder name. All input documents should contain this key name; otherwise, an error document is written to the error view. Refer to the 'Partition by' example below for an illustration.

Azure SAS URI Properties

Shared Access Signatures (SAS) properties of the Azure Storage account.

SAS URI

String/Expression

Specify the Shared Access Signatures (SAS) URI that you want to use to access the Azure storage blob folder specified in the Azure Storage Account.

Decimal Rounding Mode


Default Value: Half up

Example: Up

Dropdown list

Select the rounding method for decimal values when they exceed the required number of decimal places (see the illustration after this list). The available options are:

  • Half up

  • Half down

  • Half even

  • Up

  • Down

  • Ceiling

  • Floor

  • Truncate
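
As an illustration of how these modes behave, assuming they follow standard decimal rounding semantics (as in Python's decimal module or java.math.RoundingMode), rounding 12.345 to two decimal places gives:

from decimal import (Decimal, ROUND_HALF_UP, ROUND_HALF_DOWN, ROUND_HALF_EVEN,
                     ROUND_UP, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR)

value = Decimal("12.345")
places = Decimal("0.01")  # two decimal places, as for DECIMAL(5,2)

print(value.quantize(places, rounding=ROUND_HALF_UP))    # 12.35
print(value.quantize(places, rounding=ROUND_HALF_DOWN))  # 12.34
print(value.quantize(places, rounding=ROUND_HALF_EVEN))  # 12.34
print(value.quantize(places, rounding=ROUND_UP))         # 12.35
print(value.quantize(places, rounding=ROUND_DOWN))       # 12.34
print(value.quantize(places, rounding=ROUND_CEILING))    # 12.35
print(value.quantize(places, rounding=ROUND_FLOOR))      # 12.34
# Truncate has no direct decimal-module constant; for positive values it behaves like ROUND_DOWN.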

Snap Execution

Dropdown list

Select one of the following three modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Error

Reason

Resolution


Unable to connect to the Hive Metastore.

This error occurs when the Parquet Writer Snap is unable to fetch the schema from a Kerberos-enabled Hive Metastore.

Pass the Hive Metastore's schema directly to the Parquet Writer Snap. To do so:

  1. Enable the 'Schema View' in the Parquet Writer Snap by adding the second input view.

  2. Connect a Hive Execute Snap to the Schema View. Configure the Hive Execute Snap to execute the DESCRIBE TABLE command to read the table metadata and feed it to the schema view.

Parquet Snaps may not work as expected in the Windows environment.

Because of the limitations in the Hadoop library on Windows, Parquet Snaps do not function as expected.

To use the Parquet Writer Snap on a Windows Snaplex, follow these steps:

  1. Create a temporary directory. For example: C:\hadoop

  2. Place two files, "hadoop.dll" and "winutils.exe", in the newly created temporary directory. Download hadoop.dll and winutils.exe from winutils/hadoop-3.2.2/bin at master · cdarlint/winutils (SnapLogic's Hadoop version is 3.2.2).

  3. Add the JVM options in the Windows Snaplex as shown below:
    jcc.jvm_options = -Djava.library.path=C:\\hadoop

  4. If you already have existing jvm_options, then add the following "-Djava.library.path=C:\\hadoop" after the space. For example:
    jcc.jvm_options = -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -Djava.library.path=C:\\hadoop

  5. Restart the JCC for configurations to take effect.

Failure: 'boolean org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(java.lang.String, int)'

Because of the limitations in the Hadoop library on Windows, Parquet Snaps do not function as expected.

To resolve this issue, follow these steps:

  1. Download hadoop.dll and winutils.exe from winutils/hadoop-3.2.2/bin at master · cdarlint/winutils (SnapLogic's Hadoop version is 3.2.2).

  2. Create a temporary directory.

  3. Place the hadoop.dll and winutils.exe files in this path: C:\hadoop\bin

  4. Set the environment variable HADOOP_HOME to point to C:\hadoop

  5. Add C:\hadoop\bin to the environment variable PATH as shown below:
    Variable name: PATH
    Variable value: %VEN_HOME%\bin;%HADOOP_HOME%\bin

  6. Add the JVM options in the Windows Snaplex: jcc.jvm_options = -Djava.library.path=C:\\hadoop\bin

    If you already have an existing jvm_options, then add "-Djava.library.path=C:\\hadoop\bin" after a space.
    For example: jcc.jvm_options = -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -Djava.library.path=C:\\hadoop\bin

  7. Restart the JCC for configurations to take effect.


Additional Information

Write to S3 files with HDFS version CDH 5.8 or later

When running an HDFS version later than CDH 5.8, the Hadoop Snap Pack may fail to write to S3 files. To overcome this, make the following changes in the Cloudera manager:

  1. Go to HDFS configuration.

  2. In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add an entry with the following details (see the example core-site.xml snippet after these steps):

    • Name: fs.s3a.threads.max

    • Value: 15

  3. Click Save.

  4. Restart all the nodes.

  5. Under Restart Stale Services, select Re-deploy client configuration.

  6. Click Restart Now.
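
For reference, the safety-valve entry above corresponds to a core-site.xml property of the following form (a sketch of the snippet that Cloudera Manager generates):

<property>
  <name>fs.s3a.threads.max</name>
  <value>15</value>
</property>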

Temporary Files

During execution, data processing on Snaplex nodes occurs principally in-memory as streaming and is unencrypted. When larger datasets are processed that exceed the available compute memory, the Snap writes Pipeline data to local storage as unencrypted to optimize the performance. These temporary files are deleted when the Snap/Pipeline execution completes. You can configure the temporary data's location in the Global properties table of the Snaplex's node properties, which can also help avoid Pipeline errors due to the unavailability of space. For more information, see Temporary Folder in Configuration Options.


Examples

Parquet Writer Snap with the Second Input View

The following example pipeline demonstrates the use of the second input view to receive the table metadata. Alternatively, you can provide the Hive Metastore information in the Hive Metastore URL field, in which case a single input view is sufficient.


The Parquet Writer Snap configuration with Directory path:


The Parquet Writer Snap View:


The Hive Execute Snap with the table metadata information to pass to the second input view of the Parquet Writer Snap:

The Hive Execute Snap Output:
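
Conceptually, a DESCRIBE statement returns one row per column with the standard Hive fields col_name, data_type, and comment. A hypothetical sketch of such output (the column names and types are illustrative only):

[
    {"col_name": "id", "data_type": "int", "comment": ""},
    {"col_name": "name", "data_type": "string", "comment": ""},
    {"col_name": "created_at", "data_type": "timestamp", "comment": ""}
]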

Here is an example of a Parquet Writer configured to write to a local instance of HDFS. The output is written to /tmp/parquet-example. There is no Hive Metastore configured and no compression used.

See the documentation on the Schema setting to view an example of the schema.


Example on "Partition by":

Assume the following input documents:

[
    {
        "month" : "MAR",
        "day" : "01",
        "msg" : "Hello, World",
        "num" : 1
    },
    {
        "month" : "FEB",
        "day" : "07",
        "msg" : "Hello, World",
        "num" : 3
    },
    {
        "month" : "MAR",
        "day" : "01",
        "msg" : "Hello, World",
        "num" : 2
    },
    {
        "month" : "FEB",
        "day" : "07",
        "msg" : "Hello, World",
        "num" : 4
    }
]

The settings of the Parquet Writer Snap are as follows:

The pipeline execution will generate two files:

  • hdfs://localhost:8080/tmp/FEB/07/sample.parquet

  • hdfs://localhost:8080/tmp/MAR/01/sample.parquet

The key-value pairs for "month" and "day" will not be included in the output files.
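
For comparison, an analogous partition-and-drop behavior can be sketched outside SnapLogic with pyarrow's dataset writer; note that pyarrow names the partition folders key=value (for example, month=MAR/day=01), whereas the Snap example above uses the values alone:

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "month": ["MAR", "FEB", "MAR", "FEB"],
    "day": ["01", "07", "01", "07"],
    "msg": ["Hello, World"] * 4,
    "num": [1, 3, 2, 4],
})

# Writes one Parquet file per (month, day) combination; the partition
# columns are stored in the directory names and dropped from the file contents.
pq.write_to_dataset(table, root_path="tmp", partition_cols=["month", "day"])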


Reading and writing Parquet files from/to AWS S3 requires an S3 account.

  1. Create an S3 account or use an existing one.

    1. If it is a regular S3 account, name the account and supply the Access-key ID and Secret key.

    2. If the account is an IAM role-enabled account:

      1. Select the IAM role checkbox.

      2. Leave the Access-key ID and Secret key blank.

      3. The IAM role properties are optional; you can leave them blank. To use the IAM role properties, select the IAM role checkbox.

  2. Within the Parquet Snap, use a valid S3 path for the directory in the format of:
    s3://<bucket name>/<folder name>/.../<filename>


Inserting and Querying Custom Metadata from the Flight Metadata Table

The Pipeline in this zipped example, MetadataCatalog_Insert_Read_Example.zip, demonstrates how you can:

  • Use the Catalog Insert Snap to update metadata tables.

  • Use the Catalog Query Snap to read the updated metadata information.

In this example:

  1. We import a file containing the metadata.

  2. We create a Parquet file using the data in the imported file.

  3. We insert metadata that meets specific requirements into a partition in the target table.

  4. We read the newly-inserted metadata using the Catalog Query Snap.


Downloads

Snap Pack History


Related Content