
Overview

Parquet Reader is a Read-type Snap that reads Parquet files from HDFS (Hadoop Distributed File System), ADL (Azure Data Lake), ABFS (Azure Data Lake Blob File Storage Gen 2), WASB (Windows Azure Storage Blob), or S3 and converts the data into documents. You can also use this Snap to read the structure of Parquet files in the SnapLogic metadata catalog.
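
As a rough illustration of what "converts the data into documents" means, the sketch below reads a local Parquet file with the pyarrow library and emits one dictionary per row. This is not SnapLogic's implementation; pyarrow and the file name are assumptions for the example.

    # Illustration only: read a Parquet file and emit one document per row.
    # Assumes pyarrow is installed and sample.parquet exists locally.
    import pyarrow.parquet as pq

    table = pq.read_table("sample.parquet")
    for document in table.to_pylist():  # one dict per row, column names as keys
        print(document)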

Note

This Snap supports the HDFS (Hadoop Distributed File System), ADL (Azure Data Lake), ABFS (Azure Data Lake Blob File Storage Gen 2), WASB (Windows Azure Storage Blob), and S3 protocols.


Behavior Change 

When you select the Use datetime types checkbox in the Parquet Reader Snap, the Snap displays LocalDate values for INT32 (DATE) columns and DateTime values for INT64 (TIMESTAMP_MILLIS) columns in the output. When you deselect this checkbox, the columns retain the previous datatypes and display string and integer values in the output.
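
The two modes can be mimicked locally with pyarrow (an assumption for illustration; the Snap's internals may differ). INT32 (DATE) physically stores days since the Unix epoch, and INT64 (TIMESTAMP_MILLIS) stores milliseconds since the epoch:

    import datetime
    import pyarrow as pa

    arr_date = pa.array([datetime.date(2024, 1, 31)], type=pa.date32())   # INT32 (DATE)
    arr_ts = pa.array([datetime.datetime(2024, 1, 31, 12, 0)],
                      type=pa.timestamp("ms"))                            # INT64 (TIMESTAMP_MILLIS)

    # Checkbox selected: logical date/time types are honored
    print(arr_date.to_pylist())  # [datetime.date(2024, 1, 31)]
    print(arr_ts.to_pylist())    # [datetime.datetime(2024, 1, 31, 12, 0)]

    # Checkbox deselected: the raw physical values appear instead
    print(arr_date.cast(pa.int32()).to_pylist())  # [19753] days since epoch
    print(arr_ts.cast(pa.int64()).to_pylist())    # [1706702400000] ms since epoch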

Prerequisites

Access and permission to read from HDFS, ADL (Azure Data Lake), ABFS (Azure Data Lake Storage Gen 2), WASB (Azure storage), or AWS S3.

Support for Ultra Pipelines

Works in Ultra Task Pipelines.

Limitations and Known Issues

None.

Snap Input and Output

Input/Output | Type of View | Number of Views | Examples of Upstream and Downstream Snaps | Description
Input | Document | Min: 0, Max: 1 | JSON Generator | None
Output | Document | Min: 1, Max: 1 | Mapper, JSON Formatter | A document with the columns and data of the Parquet file.

Account

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. This Snap supports several account types, as listed below.

Protocol | Account Type | Documentation | Settings
S3 | Amazon AWS | AWS S3 | Access-key ID, Secret key
S3 | AWS IAM Role | AWS S3 | IAM Role checkbox
HDFS | Kerberos | Kerberos | Client Principal, Keytab file, Service Principal
WASB | Azure Blob Storage | Azure Storage | Account name, Primary access key
WASBS | Azure Blob Storage | Azure Storage | Account name, Primary access key
ADL | Azure Data Lake | Azure Data Lake | Tenant ID, Access ID, Security Key
ABFS | Azure Data Lake | Azure Data Lake | Tenant ID, Access ID, Security Key


Note

The security model configured for the Groundplex (SIMPLE or KERBEROS authentication) must match the security model of the remote server. Due to limitations of the Hadoop library, the Snap can only create the internal credentials required by the security model configured for the Groundplex.

Snap Settings

Field | Field Type | Description


Label*


String

Specify a unique name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

Default Value: Parquet Reader
Example: Parquet Reader

Directory


String/Expression

Specify the directory from which to read data. All files within the directory must be in Parquet format.

The following file storage systems are supported:

Protocol | Directory Format | Example
hdfs | hdfs://<hostname>:<port>/<path to directory> | hdfs://localhost:8020/tmp
s3 | s3://<bucket name>/<key name prefix> | s3://test-bucket/tmp
wasb | wasb:///<storage container>/<path to directory> | wasb:///container/tmp
wasbs | wasbs:///<storage container>/<path to directory> | wasbs:///container/tmp
adl | adl://<store name>/<path to directory> | adl://storename/tmp
adls | adls://<store name>/<path to directory> | adls://storename/tmp
abfs | abfs:///<filesystem>/<path>/ | abfs://filesystem1/core.windows.net/dir1
abfs | abfs://<filesystem>@<accountname>.<endpoint>/<path> | abfs://filesystem2@adlsgen2test1.dfs.core.windows.net/dir1
abfss | abfss:///<filesystem>/<path>/ | abfss://filesystem1/core.windows.net/dir1
abfss | abfss://<filesystem>@<accountname>.<endpoint>/<path> | abfss://filesystem2@adlsgen2test1.dfs.core.windows.net/dir1


When you use the ABFS protocol to connect to an endpoint, the account name and endpoint details provided in the URL override the corresponding values in the Account Settings fields.

Note

With the ABFS protocol, SnapLogic creates a temporary file to store the incoming data. Therefore, the hard drive where the JCC is running must have enough space to temporarily store all the data coming in through ABFS.


Note

SnapLogic automatically appends "azuredatalakestore.net" to the store name you specify when using Azure Data Lake, so you do not need to include "azuredatalakestore.net" in the URI when specifying the directory.


File Filter

String/Expression

A glob to select only certain files or directories. 

The glob pattern is used to display a list of directories or files when you click the Suggest icon in the Directory or File property. The complete glob pattern is formed by combining the value of the Directory property and the Filter property. If the value of the Directory property does not end with "/", the Snap appends one so that the value of the Filter property is applied to the directory specified by the Directory property.

The following rules are used to interpret glob patterns (see the sketch after this list):

  • The * character matches zero or more characters of a name component without crossing directory boundaries. For example, *.csv matches a path that represents a filename ending in .csv and *.* matches file names containing a dot.

  • The ** characters match zero or more characters crossing directory boundaries; therefore, they match all files or directories in the current directory as well as in all subdirectories. For example, /home/** matches all files and directories in the /home/ directory.

  • The ? character matches exactly one character of a name component. For example, foo.? matches file names starting with foo. and a single character extension.

  • The backslash character (\) is used to escape characters that would otherwise be interpreted as special characters. For example, the expression \\ matches a single backslash and \{ matches a left brace.

  • The ! character is used to exclude matching files from the output. 
  • The [ ] characters are a bracket expression that match a single character of a name component out of a set of characters. For example, [abc] matches "a", "b", or "c". The hyphen (-) may be used to specify a range so [a-z] specifies a range that matches from "a" to "z" (inclusive). These forms can be mixed so [abce-g] matches "a", "b", "c", "e", "f" or "g". If the character after the [ is a ! then it is used for negation so [!a-c] matches any character except "a", "b", or "c".

    Within a bracket expression the *, ? and \ characters match themselves. The (-) character matches itself if it is the first character within the brackets, or the first character after the ! if negating.

  • The { } characters are a group of subpatterns, where the group matches if any subpattern in the group matches. The "," character is used to separate the subpatterns. Groups cannot be nested. For example, *.{csv,json} matches file names ending with .csv or .json.

  • Leading dot characters in file name are treated as regular characters in match operations. For example, the "*" glob pattern matches file name ".login". 

  • All other characters match themselves.

Default Value: *
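
A short sketch of some of these rules using Python's standard glob module, which supports *, ?, ** and [ ] bracket expressions but not { } groups or the leading-! exclusion. This is an approximation for experimentation, not the Snap's own matcher:

    import glob

    print(glob.glob("*.csv"))                     # names ending in .csv, no recursion
    print(glob.glob("/home/**", recursive=True))  # crosses directory boundaries
    print(glob.glob("foo.?"))                     # "foo." plus exactly one character
    print(glob.glob("[!a-c]*"))                   # first character not a, b, or c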

File

String/Expression

Required for standard mode. Specify a filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/". The File property can be a JavaScript expression, which is evaluated with values from the input view document. When you click the Suggest icon, the Snap displays a list of regular files under the directory in the Directory property. It generates the list by applying the value of the Filter property.

Example

  • sample.parquet
  • tmp/another.orc
  • _filename


Note

Both the Parquet Reader and Parquet Writer Snaps support compressed files. The compression codecs that are currently supported are Snappy, GZIP, and LZO. To use LZO compression, you must explicitly enable the LZO compression type on the cluster (as an administrator) for the Snap to recognize and run the format. For more information, see Data Compression. For detailed guidance on setting up LZO compression, see the Cloudera documentation on Installing the GPL Extras Parcel.


Note

Many compression algorithms require both Java and system libraries and will fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and are usually located in the /usr/lib/x86_64-linux-gnu directory.
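
Because the compression codec is recorded per column chunk in the file's metadata, a conforming reader detects it automatically. A quick local check with pyarrow (paths and library are assumptions for illustration, not part of the Snap):

    import pyarrow as pa
    import pyarrow.parquet as pq

    pq.write_table(pa.table({"id": [1, 2, 3]}), "/tmp/sample.parquet",
                   compression="snappy")
    meta = pq.ParquetFile("/tmp/sample.parquet").metadata
    print(meta.row_group(0).column(0).compression)  # SNAPPY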


User Impersonation

Checkbox

Hadoop allows you to configure proxy users to access HDFS on behalf of other users; this is called impersonation. When user impersonation is enabled on the Hadoop cluster, any jobs submitted using a proxy are executed with the impersonated user's existing privilege levels rather than those of a superuser.

  • When you select this checkbox and Kerberos is the account type, the Client Principal configured in the Kerberos account impersonates the Pipeline user.
  • When you select this checkbox and Kerberos is not the account type, the user executing the Pipeline is used for impersonation in the HDFS operations. For example, if the user logged into the SnapLogic platform is operator@snaplogic.com, then the user name "operator" is used to proxy the superuser. For non-Kerberized clusters, ensure that superuser access is enabled in the cluster configuration.


This Snap supports an Azure Storage account, an Azure Data Lake account, a Kerberos account, or no account. When an account is configured with the Snap, the User Impersonation setting has an effect only with a Kerberos account.

Default Value: Not selected

Note

For encryption zones, use user impersonation.
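
For a sense of how impersonation looks at the HDFS level, WebHDFS expresses it with the doas query parameter; the authenticated caller must be configured as a proxy user on the cluster. This is an illustration only, and the host, port, and user names below are placeholders:

    import requests

    # List /tmp as user "operator" while authenticating as "superuser".
    resp = requests.get(
        "http://namenode.example.com:9870/webhdfs/v1/tmp",
        params={"op": "LISTSTATUS", "user.name": "superuser", "doas": "operator"},
    )
    print(resp.json())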


Ignore empty file

Checkbox

Select this checkbox to ignore an empty file; that is, the Snap does nothing.
If you deselect this checkbox, the Snap produces an empty output document.

Info
  • This property applies when the file does not contain any data.
  • An empty Parquet file cannot be a zero-byte file. If a file to be parsed is a zero-byte file, it is considered an invalid Parquet file and produces an error.

Default Value: Selected
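
The zero-byte distinction matters because a valid Parquet file always carries schema and footer metadata, so even a file with zero rows is never zero bytes. A local check with pyarrow (illustrative only):

    import os
    import pyarrow as pa
    import pyarrow.parquet as pq

    empty = pa.table({"id": pa.array([], type=pa.int64())})
    pq.write_table(empty, "/tmp/empty.parquet")
    print(os.path.getsize("/tmp/empty.parquet"))         # a few hundred bytes, not 0
    print(pq.read_table("/tmp/empty.parquet").num_rows)  # 0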

Use old data format

Checkbox

Deselect this checkbox to read complex data types or nested schemas, such as LIST and MAP. Null values are skipped when you do so.

Default Value: Selected

int96 As Timestamp

Enabled when you deselect the Use old data format checkbox.

Select this checkbox to enable the Snap to convert int96 values to timestamp strings in the format specified in the Date Time Format field.

If you deselect this checkbox, the Snap shows values for int96 data type as 12-byte BigInteger objects. 

Default Value: Deselected
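
For reference, a Parquet INT96 value packs nanoseconds-within-day into the first 8 bytes and the Julian day number into the last 4, both little-endian. The sketch below decodes one such value; it is an illustration of the encoding, not the Snap's code:

    import datetime

    def int96_to_datetime(raw: bytes) -> datetime.datetime:
        nanos = int.from_bytes(raw[:8], "little")       # nanoseconds within the day
        julian_day = int.from_bytes(raw[8:], "little")  # Julian day number
        days = julian_day - 2440588                     # 2440588 = 1970-01-01
        return (datetime.datetime(1970, 1, 1)
                + datetime.timedelta(days=days, microseconds=nanos // 1000))

    raw = (43_200_000_000_000).to_bytes(8, "little") + (2460341).to_bytes(4, "little")
    print(int96_to_datetime(raw))  # 2024-01-31 12:00:00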

Use datetime types

Select this checkbox to enable the Snap to display the LOCALDATE type for INT32 (DATE) and the DATETIME type for INT64 (TIMESTAMP_MILLIS) columns in the output. When deselected, the columns retain the previous datatypes.

Default Value: Deselected

Date Time Format

Enabled when you select the int96 As Timestamp checkbox.

Enter a date-time format of your choice for int96 data-type fields (timestamp and time zone). For more information about valid date-time formats, see DateTimeFormatter.


Info

The int96 data type can support up to nanosecond accuracy.

Default Value: yyyy-MM-dd'T'HH:mm:ss.SSSX
Example: yyyy-MM-dd'T'HH:mm:ssX

Azure SAS URI Properties

Shared Access Signatures (SAS) properties of the Azure Storage account.

SAS URI

String/Expression

Specify the Shared Access Signatures (SAS) URI that you want to use to access the Azure storage blob folder specified in the Azure Storage Account. You can get a valid SAS URI either from the Shared access signature in the Azure Portal or by generating one from the SAS Generator Snap.

Note

If the SAS URI value is provided in the Snap settings, then the settings provided in the account (if any account is attached) are ignored.
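
As an illustration of what a SAS URI grants, the snippet below lists blobs in a container using the azure-storage-blob package. The URL and token are placeholders, and this is not the Snap's implementation:

    from azure.storage.blob import ContainerClient

    sas_url = ("https://myaccount.blob.core.windows.net/mycontainer"
               "?sv=2022-11-02&se=...&sig=...")  # container URL + SAS token
    container = ContainerClient.from_container_url(sas_url)
    for blob in container.list_blobs(name_starts_with="tmp/"):
        print(blob.name)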


Snap Execution


Dropdown list

Select one of the following Snap execution modes:

  • Validate & Execute: Performs limited execution of the Snap during Pipeline validation and full execution during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline runtime without generating preview data during validation.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default Value: Validate & Execute

Example: Execute Only


Examples




Displaying Files in a Directory Using the Parquet Reader

In the pipeline below, the Parquet Reader Snap reads documents from a directory path with the filter *.parquet. Because no file name is provided, the Snap reads all the files in that directory that match the filter.


Successful execution of the Pipeline displays the output preview as follows:




Reading from HDFS

A Parquet Reader configured to read from a local instance of HDFS. The path it reads from is /tmp/test.parquet.



Reading from S3

Reading a Parquet file from AWS S3 requires an S3 account.

  1. Create an S3 account or use an existing one.
    1. If it is a regular S3 account, name the account and supply the Access-key ID and Secret key.
    2. If the account is an IAM role-enabled Account:

      1. Select the IAM role checkbox.

      2. Leave the Access-key ID and Secret key blank.

      3. The IAM role properties are optional. You can leave them blank.

        Info

To use the IAM Role properties, ensure that you select the IAM Role checkbox.


  2. Within the Parquet Snap, use a valid S3 path for the directory in the format of:
    s3://<bucket name>/<key name prefix>
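
An S3 "directory" is simply a set of keys sharing a prefix. For a sense of what such a path refers to, the boto3 sketch below lists keys under a prefix; the bucket and prefix names are placeholders, and AWS credentials are assumed to be configured:

    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="test-bucket", Prefix="tmp/")
    for obj in resp.get("Contents", []):
        print(obj["Key"])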



Reading from Kerberos

Reading from a Kerberized cluster requires a Kerberos account, as shown below:



Reading Schema Information from the Catalog Query Snap

In this example, we query a SnapLogic metadata catalog table partition to retrieve the schema used and then use the Parquet Reader Snap to read the retrieved schema.

Download this pipeline


Understanding the Pipeline

We create the pipeline as shown below:

The Catalog Query Snap

In this pipeline, the Catalog Query Snap retrieves metadata information from the metadata catalog table and makes it available for the next Snap. We configure the Snap to pick up the table from a location in SLDB; we also specify partition keys (city=Milpitas) to identify a precise partition in the table, as shown below:

On execution, the Snap retrieves the metadata associated with the specified partition in the concerned table, as shown below:

The Parquet Reader Snap

We now want to extract the schema information from the Catalog Query Snap's output. To do so, we insert a Parquet Reader Snap into the pipeline, as shown below:

Once this Snap is executed, it identifies and retrieves the schema information from the Catalog Query Snap, as shown below:

Download this pipeline

