Parquet Reader

Overview

Parquet Reader is a Read-type Snap that reads Parquet files from HDFS (Hadoop Distributed File System), ADL (Azure Data Lake), ABFS (Azure Blob File System), WASB (Windows Azure Storage Blob), or S3, and converts the data into documents. You can also use this Snap to read the structure of Parquet files in the SnapLogic metadata catalog.


Behavior Change 

When you select the Use datetime types checkbox in the Parquet Reader Snap, the Snap displays the LocalDate and DateTime in the output for INT32 (DATE) and INT64 (TIMESTAMP_MILLIS) columns. When you deselect this checkbox, the columns retain the previous datatypes and display string and integer values in the output.
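For reference, the following is a minimal, illustrative Java sketch (not SnapLogic code) of how these Parquet logical types map to the displayed values: INT32 (DATE) carries days since the Unix epoch, and INT64 (TIMESTAMP_MILLIS) carries milliseconds since the Unix epoch. The sample values are hypothetical.

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class DatetimeTypesExample {
    public static void main(String[] args) {
        int dateDays = 19723;                  // hypothetical INT32 (DATE) value: days since 1970-01-01
        long timestampMillis = 1704067200000L; // hypothetical INT64 (TIMESTAMP_MILLIS) value: milliseconds since the epoch

        // With Use datetime types selected, the output carries date/datetime values like these:
        LocalDate localDate = LocalDate.ofEpochDay(dateDays);
        Instant dateTime = Instant.ofEpochMilli(timestampMillis);

        System.out.println(localDate);                          // 2024-01-01
        System.out.println(dateTime.atOffset(ZoneOffset.UTC));  // 2024-01-01T00:00Z

        // With the checkbox deselected, the raw values (19723 and 1704067200000) are emitted instead.
    }
}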

Prerequisites

Access and permission to read from HDFS, ADL (Azure Data Lake), ABFS (Azure Data Lake Storage Gen 2), WASB (Azure storage), or AWS S3.

Support for Ultra Pipelines

Works in Ultra Tasks.

Limitations 

None.

Known Issues

The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 has caused the following issue when using the WASB protocol:
When you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13-14 minutes to display the following error:

reason=The request failed with error code null and HTTP code 0. , status_code=error

SnapLogic® is actively working with Microsoft® Support to resolve the issue.

Learn more about the Azure Storage library upgrade.

Snap Input and Output

Input/Output | Type of View | Number of Views | Examples of Upstream and Downstream Snaps | Description
Input | Document | Min: 0, Max: 1 | JSON Generator | [None]
Output | Document | Min: 1, Max: 1 | Mapper, JSON Formatter | A document with the columns and data of the Parquet file.

Account

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. This Snap supports several account types, as listed below.

Protocol | Account Types | Documentation | Setting
S3 | Amazon AWS | AWS S3 | Access-key ID, Secret key
S3 | AWS IAM Role | AWS S3 | Enable IAM Role checkbox
HDFS | Kerberos | Kerberos | Client Principal, Keytab file, Service Principal
WASB | Azure Blob Storage | Azure Storage | Account name, Primary access key
WASBS | Azure Blob Storage | Azure Storage | Account name, Primary access key
ADL | Azure Data Lake | Azure Data Lake | Tenant ID, Access ID, Security Key
ADFS | Azure Data Lake | Azure Data Lake | Tenant ID, Access ID, Security Key

The security model configured for the Groundplex (SIMPLE or KERBEROS authentication) must match the security model of the remote server. Due to limitations of the Hadoop library, SnapLogic can create the necessary internal credentials only for the security model configured on the Groundplex.

Snap Settings

Field | Field Type | Description

Label*

String

Specify a unique name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

Default Value: Parquet Reader
Example: Parquet Reader

Directory


String/Expression

Specify the directory from which to read data. All files within the directory must be Parquet formatted.

The following file storage systems and directory formats are supported:

Protocol | Directory Format | Example
hdfs | hdfs://<hostname>:<port>/<path to directory> | hdfs://localhost:8020/tmp
s3 | s3://<bucket name>/<key name prefix> | s3://test-bucket/tmp
wasb | wasb:///<storage container>/<path to directory> | wasb:///container/tmp
wasbs | wasbs:///<storage container>/<path to directory> | wasbs:///container/tmp
adl | adl://<store name>/<path to directory> | adl://storename/tmp
adls | adls://<store name>/<path to directory> | adls://storename/tmp
abfs | abfs:///<filesystem>/<path>/ | abfs://filesystem1/core.windows.net/dirl
abfs | abfs://<filesystem>@<accountname>.<endpoint>/<path> | abfs://filesystem2@adlsgen2test1.dfs.core.windows.net/dirl
abfss | abfss:///<filesystem>/<path>/ | abfss://filesystem1/core.windows.net/dirl
abfss | abfss://<filesystem>@<accountname>.<endpoint>/<path> | abfss://filesystem2@adlsgen2test1.dfs.core.windows.net/dirl

When you use the ABFS protocol to connect to an endpoint, the account name and endpoint details provided in the URL override the corresponding values in the Account Settings fields.

With the ABFS protocol, SnapLogic creates a temporary file to store the incoming data. Therefore, the hard drive where the JCC is running should have enough space to temporarily store all the account data coming in from ABFS.

SnapLogic automatically appends "azuredatalakestore.net" to the store name you specify when using Azure Data Lake; therefore, you do not need to add 'azuredatalakestore.net' to the URI while specifying the directory.

File Filter

String/Expression

A glob pattern to select only certain files or directories.

The glob pattern is used to display a list of directories or files when you click the Suggest icon in the Directory or File field. The complete glob pattern is formed by combining the value of the Directory property and the File Filter property. If the value of the Directory property does not end with "/", the Snap appends one so that the value of the File Filter property is applied to the directory specified by the Directory property.

The following rules are used to interpret glob patterns (see the illustrative sketch after this list):

  • The * character matches zero or more characters of a name component without crossing directory boundaries. For example, *.csv matches a path that represents a filename ending in .csv and *.* matches file names containing a dot.

  • The ** characters match zero or more characters crossing directory boundaries, so they match all files or directories in the current directory and in all subdirectories. For example, /home/** matches all files and directories in the /home/ directory.

  • The ? character matches exactly one character of a name component. For example, foo.? matches file names starting with foo. and a single character extension.

  • The backslash character (\) is used to escape characters that would otherwise be interpreted as special characters. The expression \\ matches a single backslash and "\{" matches a left brace for example.

  • The ! character is used to exclude matching files from the output. 
  • The [ ] characters are a bracket expression that match a single character of a name component out of a set of characters. For example, [abc] matches "a", "b", or "c". The hyphen (-) may be used to specify a range so [a-z] specifies a range that matches from "a" to "z" (inclusive). These forms can be mixed so [abce-g] matches "a", "b", "c", "e", "f" or "g". If the character after the [ is a ! then it is used for negation so [!a-c] matches any character except "a", "b", or "c".

    Within a bracket expression the *, ? and \ characters match themselves. The (-) character matches itself if it is the first character within the brackets, or the first character after the ! if negating.

  • The { } characters are a group of subpatterns, where the group matches if any subpattern in the group matches. The "," character is used to separate the subpatterns. Groups cannot be nested. For example, *.{csv,json} matches file names ending with .csv or .json.

  • Leading dot characters in file name are treated as regular characters in match operations. For example, the "*" glob pattern matches file name ".login". 

  • All other characters match themselves.
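These rules mirror standard java.nio glob syntax. The sketch below is a minimal, illustrative Java example (not part of the Snap configuration; the file names are hypothetical) showing how a few of the patterns above behave:

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class GlobFilterExample {
    public static void main(String[] args) {
        // Group subpatterns: matches names ending in .csv or .json
        PathMatcher group = FileSystems.getDefault().getPathMatcher("glob:*.{csv,json}");
        System.out.println(group.matches(Path.of("orders.csv")));      // true
        System.out.println(group.matches(Path.of("orders.parquet")));  // false

        // * does not cross directory boundaries; ** does
        PathMatcher shallow = FileSystems.getDefault().getPathMatcher("glob:*.parquet");
        PathMatcher deep = FileSystems.getDefault().getPathMatcher("glob:/home/**");
        System.out.println(shallow.matches(Path.of("tmp/sample.parquet")));          // false
        System.out.println(deep.matches(Path.of("/home/user/data/sample.parquet"))); // true
    }
}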

Default Value: *

File

String/Expression

Required for standard mode. Filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/". The File property can be a JavaScript expression, which is evaluated with values from the input view document. When you click the Suggest icon, the Snap displays a list of regular files under the directory specified in the Directory property. It generates the list by applying the value of the File Filter property.

Example

  • sample.parquet
  • tmp/another.orc
  • _filename


Both the Parquet Reader and Parquet Writer Snaps can read compressed files. The compression codecs that are currently supported are Snappy, GZIP, and LZO. To use LZO compression, you must explicitly enable the LZO compression type on the cluster (as an administrator) for the Snap to recognize and run the format. For more information, see Data Compression. For detailed guidance on setting up LZO compression, see the Cloudera documentation on Installing the GPL Extras Parcel.

Many compression algorithms require both Java and system libraries and will fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and are usually located in the /usr/lib/x86_64-linux-gnu directory.

User Impersonation

Checkbox

Hadoop allows you to configure proxy users to access HDFS on behalf of other users; this is called impersonation. When user impersonation is enabled on the Hadoop cluster, any jobs submitted using a proxy are executed with the impersonated user's existing privilege levels rather than those of a superuser.

  • When you select this checkbox and Kerberos is the account type, the Client Principal configured in the Kerberos account impersonates the Pipeline user.
  • When you select this checkbox and Kerberos is not the account type, the user executing the Pipeline is used as the impersonated user for HDFS operations. For example, if the user logged into the SnapLogic platform is operator@snaplogic.com, then the user name "operator" is used to proxy the super user. For a non-Kerberized cluster, ensure that superuser access is enabled in the configuration settings.


The HDFS Writer Snap supports an Azure Storage account, an Azure Data Lake account, a Kerberos account, or no account. When an account is configured with the HDFS Writer Snap, the User Impersonation setting has no effect on any account type except the Kerberos account.

Default Value: Not selected

 For encryption zones, use user impersonation. 

Ignore empty file

Checkbox

Select this checkbox to ignore an empty file; that is, the Snap does nothing.
If you deselect this checkbox, the Snap produces an empty output document.

  • This property applies when the file does not contain any data.
  • An empty Parquet file cannot be a zero-byte file. If a file to be parsed is a zero-byte file, it is considered an invalid Parquet file and produces an error.

Default Value: Selected

Use old data format

Checkbox

Deselect this checkbox to read complex data types or nested schemas, such as LIST and MAP. Null values are skipped when you do so.

Default Value: Selected

int96 As Timestamp

Checkbox

Enabled when you deselect the Use old data format checkbox.

Select this checkbox to enable the Snap to convert int96 values to timestamp strings in the format specified in the Date Time Format field.

If you deselect this checkbox, the Snap shows values of the int96 data type as 12-byte BigInteger objects.

Default Value: Deselected

Use datetime types

Checkbox

Select this checkbox to enable the Snap to display the LocalDate type for int32 (DATE) and the DateTime type for int64 (TIMESTAMP_MILLIS) columns in the output. When deselected, the columns retain the previous datatypes.

Default Value: Deselected

Date Time Format

Enabled when you select the int96 As Timestamp checkbox.

Enter a date-time format of your choice for int96 data-type fields (timestamp and time zone). For more information about valid date-time formats, see DateTimeFormatter.

The int96 data type can support up to nanosecond accuracy.

Default Value: yyyy-MM-dd'T'HH:mm:ss.SSSX
Example: yyyy-MM-dd'T'HH:mm:ssX
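For reference, the pattern syntax is that of Java's DateTimeFormatter. The sketch below is a minimal, illustrative Java example (not Snap code) of the default pattern applied to a hypothetical timestamp value; the int96 decoding itself is handled by the Snap.

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class Int96FormatExample {
    public static void main(String[] args) {
        // The Snap's default Date Time Format pattern, evaluated here in UTC
        DateTimeFormatter formatter = DateTimeFormatter
                .ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSX")
                .withZone(ZoneOffset.UTC);

        // A hypothetical timestamp decoded from an int96 value
        Instant timestamp = Instant.parse("2024-01-01T12:34:56.789Z");

        System.out.println(formatter.format(timestamp)); // 2024-01-01T12:34:56.789Z
    }
}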

Azure SAS URI Properties: Shared Access Signatures (SAS) properties of the Azure Storage account.

SAS URI

String/Expression

Specify the Shared Access Signatures (SAS) URI that you want to use to access the Azure storage blob folder specified in the Azure Storage Account. You can get a valid SAS URI either from the Shared access signature in the Azure Portal or by generating one from the SAS Generator Snap.

If the SAS URI value is provided in the Snap settings, then the settings provided in the account (if any account is attached) are ignored.

Snap Execution

Dropdown list

Select one of the following three modes in which the Snap executes:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default Value: Validate & Execute

Example: Execute Only

Troubleshooting

Writing to S3 files with HDFS version CDH 5.8 or later

When running an HDFS version CDH 5.8 or later, the Hadoop Snap Pack may fail to write to S3 files. To resolve this, make the following changes in Cloudera Manager:

  1. Go to HDFS configuration.
  2. In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add an entry with the following details:
    • Name: fs.s3a.threads.max
    • Value: 15
  3. Click Save.
  4. Restart all the nodes.
  5. Under Restart Stale Services, select Re-deploy client configuration.
  6. Click Restart Now.

Temporary Files

During execution, data processing on Snaplex nodes occurs principally in-memory as streaming and is unencrypted. When processing larger datasets that exceed the available compute memory, the Snap writes Pipeline data to local storage, unencrypted, to optimize performance. These temporary files are deleted when the Snap/Pipeline execution completes. You can configure the temporary data's location in the Global properties table of the Snaplex's node properties, which can also help avoid Pipeline errors due to the unavailability of space. For more information, see Temporary Folder in Configuration Options.

Examples


Displaying Files in a Directory Using the Parquet Reader

In the pipeline below, the Parquet Reader Snap reads documents from a directory path with the filter *.parquet. Because no file name is provided, the Snap reads all the files in that directory that match the filter.


Successful execution of the Pipeline displays the output preview as follows:


Reading from HDFS

A Parquet Reader configured to read from a local instance of HDFS. The path it reads from is /tmp/test.parquet.

Reading from S3

Reading a Parquet file from AWS S3 requires an S3 account.

  1. Create an S3 account or use an existing one.
    1. If it is a regular S3 account, name the account and supply the Access-key ID and Secret key.
    2. If the account is an IAM role-enabled Account:

      1. Select the IAM role checkbox.

      2. Leave the Access-key ID and Secret key blank.

      3. The IAM role properties are optional. You can leave them blank.

        To use the IAM Role Properties, ensure that you select the IAM Role checkbox.

  2. Within the Parquet Snap, use a valid S3 path for the directory in the format of:
    s3://<bucket name>/<key name prefix>

Reading from Kerberos

Reading from a Kerberized cluster requires a Kerberos account, as shown below:


Reading Schema Information from the Catalog Query Snap

In this example, we query a SnapLogic metadata catalog table partition to retrieve the schema used, and then use the Parquet Reader Snap to read the retrieved schema.

Download this pipeline


Understanding the Pipeline

We create the pipeline as shown below:

The Catalog Query Snap

In this pipeline, the Catalog Query Snap retrieves metadata information from the metadata catalog table and makes it available for the next Snap. We configure the Snap to pick up the table from a location in SLDB; we also specify partition keys (city=Milpitas) to identify a precise partition in the table, as shown below:

On execution, the Snap retrieves the metadata associated with the specified partition in the concerned table, as shown below:

The Parquet Reader Snap

We now want to extract the schema information from the Catalog Query Snap's output. To do so, we insert a Parquet Reader Snap into the pipeline, as shown below:

Once this Snap is executed, it identifies and retrieves the schema information from the Catalog Query Snap, as shown below:

Download this pipeline

Downloads

Important steps to successfully reuse Pipelines

  1. Download and import the pipeline into the SnapLogic application.
  2. Configure Snap accounts as applicable.
  3. Provide pipeline parameters as applicable.



Release | Snap Pack Version | Type | Updates
November 2024 | main29029 | Stable | Updated and certified against the current SnapLogic Platform release.
August 2024 | main27765 | Stable | Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.
May 2024 | 437patches27226 | - |
The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 has impacted the Hadoop Snap Pack, causing the following issue when using the WASB protocol.
Known Issue: When you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13-14 minutes to display the following error:
reason=The request failed with error code null and HTTP code 0. , status_code=error
SnapLogic® is actively working with Microsoft® Support to resolve the issue. Learn more about the Azure Storage Library Upgrade.
May 2024 | 437patches27471 | Latest | Fixed a resource leak issue with the following Hadoop Snaps, which involved too many stale instances of ProxyConnectionManager and significantly impacted memory utilization.
May 2024 | 437patches26370 | Latest | Enhanced the HDFS Writer Snap with the Write empty file checkbox to enable you to write an empty or a 0-byte file to all the supported protocols that are recognized and compatible with the target system or destination.
May 2024 | main26341 | Stable | The Azure Data Lake Account has been removed from the Hadoop Snap Pack because Microsoft retired the Azure Data Lake Storage Gen1 protocol on February 29, 2024. We recommend replacing your existing Azure Data Lake Accounts (in Binary or Hadoop Snap Packs) with other Azure Accounts.
February 2024 | 436patches25902 | Latest | Fixed a memory management issue in the HDFS Writer, HDFS ZipFile Writer, ORC Writer, and Parquet Writer Snaps, which previously caused out-of-memory errors when multiple Snaps were used in the pipeline. The Snap now conducts a pre-allocation memory check, dynamically adjusting the write buffer size based on available memory resources when writing to ADLS.
February 2024 | 435patches25410 | Latest | Enhanced the AWS S3 Account for Hadoop with an External ID that enables you to access Hadoop resources securely.
February 2024 | main25112 | Stable | Updated and certified against the current SnapLogic Platform release.
November 2023 | 435patches23904 | Latest |
  • Fixed an issue with the Parquet Writer Snap that displayed the error Failed to write parquet data when the decimal value passed from the second input view exceeded the specified scale.
  • Fixed an issue with the Parquet Writer Snap that failed to handle the conversion of BigInt/int64 (larger numbers) after the 4.35 GA; the Snap now converts them accurately.
November 2023 | 435patches23780 | Latest | Fixed an issue related to error routing to the output view. Also fixed a connection timeout issue.
November 2023 | main23721 | Stable | Updated and certified against the current SnapLogic Platform release.
August 2023 | 434patches23173 | Latest | Enhanced the Parquet Writer Snap with a Decimal Rounding Mode dropdown list to enable the rounding method for decimal values when the number exceeds the required decimal places.
August 2023 | 434patches22662 | Latest |
  • Enhanced the Parquet Writer Snap with support for LocalDate and DateTime. The Snap now shows the schema suggestions for LocalDate and DateTime correctly.
  • Enhanced the Parquet Reader Snap with the Use datetime types checkbox that supports LocalDate and DateTime datatypes.
  Behavior change: When you select the Use datetime types checkbox in the Parquet Reader Snap, the Snap displays the LocalDate and DateTime in the output for INT32 (DATE) and INT64 (TIMESTAMP_MILLIS) columns. When you deselect this checkbox, the columns retain the previous datatypes and display string and integer values in the output.
August 2023 | main22460 | Stable | Updated and certified against the current SnapLogic Platform release.
May 2023 | 433patches22180 | Latest | Introduced the HDFS Delete Snap, which deletes the specified file, group of files, or directory from the supplied path and protocol in the Hadoop Distributed File System (HDFS).
May 2023 | 433patches21494 | Latest | The Hadoop Directory Browser Snap now returns all the output documents as expected after implementing pagination for the ABFS protocol.
May 2023 | main21015 | Stable | Upgraded with the latest SnapLogic Platform release.
February 2023 | 432patches20820 | Latest | Fixed an authorization issue that occurs with the Parquet Writer Snap when it receives empty document input.
February 2023 | 432patches20209 | Latest | The Apache Commons Compress library has been upgraded to version 1.22.
February 2023 | 432patches20139 | Latest | The Kerberos Account that is available for a subset of Snaps in the Hadoop Snap Pack now supports a configuration that enables you to read from and write to the Hadoop Distributed File System (HDFS) managed by multiple Hadoop clusters. You can specify the location of the Hadoop configuration files in the Hadoop config directory field. The value in this field overrides the value that is set on the Snaplex system property used for configuring a single cluster.
February 2023 | main19844 | Stable | Upgraded with the latest SnapLogic Platform release.
November 2022 | main18944 | Stable | The AWS S3 and S3 Dynamic accounts now support a maximum session duration of an IAM role defined in AWS.
August 2022 | main17386 | Stable | Extended the AWS S3 Dynamic Account support to ORC Reader and ORC Writer Snaps to support AWS Security Token Service (STS) using temporary credentials.
4.29 Patch | 429patches16630 | Latest |
  • Extended the AWS S3 Dynamic Account support to ORC Reader and ORC Writer Snaps to support AWS Security Token Service (STS) using temporary credentials.
  • Fixed an issue in the Snaps that use an AWS S3 dynamic account, where the Snaps displayed security credentials such as Access Key, Secret Key, and Security Token in the logs. The security credentials in the logs are now blurred for the Snaps that use an AWS S3 dynamic account.
4.29 | main15993 | Stable | Enhanced the AWS S3 Account for Hadoop to include the S3 Region field that allows cross-region or proxied cross-region access to S3 buckets in the Parquet Reader and Parquet Writer Snaps.
4.28 Patch | 428patches15216 | Latest | Added the AWS S3 Dynamic account for the Parquet Reader and Parquet Writer Snaps.
4.28 | main14627 | Stable | Upgraded with the latest SnapLogic Platform release.
4.27 Patch | 427patches13769 | Latest | Fixed an issue with the Hadoop Directory Browser Snap where the Snap was not listing the files in the given directory for Windows VM.
4.27 Patch | 427patches12999 | Latest | Enhanced the Parquet Reader Snap with the int96 As Timestamp checkbox, which when selected enables the Date Time Format field. You can use this field to specify a date-time format of your choice for int96 data-type fields. The int96 As Timestamp checkbox is available only when you deselect the Use old data format checkbox.
4.27 | main12833 | Stable | Enhanced the Parquet Writer and Parquet Reader Snaps with Azure SAS URI properties, and the Azure Storage Account for Hadoop with the SAS URI Auth Type. This enables the Snaps to consider the SAS URI given in the settings if SAS URI is selected as the Auth Type during account configuration.
4.26 | 426patches12288 | Latest | Fixed a memory leak issue when using the HDFS protocol in Hadoop Snaps.
4.26 | main11181 | Stable | Upgraded with the latest SnapLogic Platform release.
4.25 Patch | 425patches9975 | Latest | Fixed the dependency issue in the Hadoop Parquet Reader Snap while reading from AWS S3. The issue was caused by conflicting definitions for some of the AWS classes (dependencies) in the classpath.
4.25 | main9554 | Stable |
  • Enhanced the HDFS Reader and HDFS Writer Snaps with a retry mechanism that includes the following settings:
    • Number of Retries: Specifies the maximum number of retry attempts when the Snap fails to connect to the Hadoop server.
    • Retry Interval (seconds): Specifies the minimum number of seconds the Snap must wait before each retry attempt.
4.24 Patch | 424patches9262 | Latest | Enhanced the AWS S3 Account for Hadoop to support role-based access when you select the IAM role checkbox.
4.24 Patch | 424patches8876 | Latest | Fixes the missing library error in the Hadoop Snap Pack when running Hadoop Pipelines in the JDK 11 runtime.
4.24 | main8556 | Stable | Upgraded with the latest SnapLogic Platform release.
4.23 Patch | 423patches7440 | Latest | Fixes an issue in the HDFS Reader Snap by supporting reading and writing of files larger than 2 GB using the ABFS(S) protocol.
4.23 | main7430 | Stable | Upgraded with the latest SnapLogic Platform release.
4.22 | main6403 | Stable | Upgraded with the latest SnapLogic Platform release.
4.21 Patch | hadoop8853 | Latest | Updates the Parquet Writer and Parquet Reader Snaps to support the yyyy-MM-dd format for the DATE logical type.
4.21 | snapsmrc542 | Stable | Upgraded with the latest SnapLogic Platform release.
4.20 Patch | hadoop8776 | Latest | Updates the Hadoop Snap Pack to use the latest version of org.xerial.snappy:snappy-java for the Snappy compression type, in order to resolve the java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I error.
4.20 | snapsmrc535 | Stable | Upgraded with the latest SnapLogic Platform release.
4.19 Patch | hadoop8270 | Latest |
  Fixes an issue with the Hadoop Parquet Writer Snap wherein the Snap throws an exception when the input document includes one or all of the following:
  • Empty lists.
  • Lists with all null values.
  • Maps with all null values.
4.19 | snaprsmrc528 | Stable | Upgraded with the latest SnapLogic Platform release.
4.18 Patch | hadoop8033 | Latest | Fixed an issue with the Parquet Writer Snap wherein the Snap throws an error when working with the WASB protocol.
4.18 | snapsmrc523 | Stable
4.17 | ALL7402 | Latest | Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.
4.17 | snapsmrc515 | Latest | Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview checkbox.
4.16 | snapsmrc508 | Stable | Added a new property, Output for each file written, to handle multiple binary input data in the HDFS Writer Snap.
4.15 | snapsmrc500 | Stable |
  • Added two new Snaps: HDFS ZipFile Reader and HDFS ZipFile Writer.
  • Added support for the Data Catalog Snaps in the Parquet Reader and Parquet Writer Snaps.
4.14 Patch | hadoop5888 | Latest | Fixed an issue wherein the Hadoop Snaps were throwing an exception when a Kerberized account was provided, but the Snap was run in a non-Kerberized environment.
4.14 | snapsmrc490 | Stable |
  • Added the Hadoop Directory Browser Snap, which browses a given directory path in the Hadoop file system using the HDFS protocol and generates a list of all the files in the directory. It also lists subdirectories and their contents.
  • Added support for the S3 file protocol in the ORC Reader and ORC Writer Snaps.
  • Added support for reading nested schema in the Parquet Reader Snap.
4.13 Patch | hadoop5318 | Latest |
  • Fixed the HDFS Reader/Writer and Parquet Reader/Writer Snaps, wherein Hadoop configuration information was not parsed from the client's configuration files.
  • Fixed the HDFS Reader/Writer and Parquet Reader/Writer Snaps, wherein User Impersonation did not work on Hadooplex.
4.13 | snapsmrc486 | Stable |
  • KMS encryption support added to the AWS S3 account in the Hadoop Snap Pack.
  • Enhanced the Parquet Reader, Parquet Writer, HDFS Reader, and HDFS Writer Snaps to support the WASB and ADLS file protocols.
  • Added AWS S3 account support to the Parquet Reader and Writer Snaps.
  • Added a second input view to the Parquet Reader Snap that, when enabled, accepts table schema.
  • Supported with AWS S3, Azure Data Lake, and Azure Storage Accounts.
4.12 Patch | hadoop5132 | Latest | Fixed an issue with the HDFS Reader Snap wherein the pipeline becomes stale while writing to the output view.
4.12 | snapsmrc480 | Stable | Upgraded with the latest SnapLogic Platform release.
4.11 Patch | hadoop4275 | Latest | Addressed an issue with the Parquet Reader Snap leaking file descriptors (connections to HDFS data nodes). The open file descriptor count now remains stable.
4.11 | snapsmrc465 | Stable | Added Kerberos support to the standard mode Parquet Reader and Parquet Writer Snaps.
4.10 Patch | hadoop4001 | Latest | Added support for the HDFS Writer to write to the encryption zone.
4.10 Patch | hadoop3887 | Latest | Addressed the suggest issue for the HDFS Reader on Hadooplex.
4.10 Patch | hadoop3851 | Latest |
  • ORC supports read/write from the local file system.
  • Addressed an issue to bind the Hive Metadata to the Parquet Writer Schema at runtime.
4.10 Patch | hadoop3838 | Latest | Made HDFS Snaps work with zone-encrypted HDFS.
4.10 | snapsmrc414 | Stable |
  • Updated the Parquet Writer Snap with the Partition by property to support data written into HDFS based on the partition definition in the schema in Standard mode.
  • Support for S3 accounts with IAM Roles added to the Parquet Reader and Parquet Writer Snaps.
  • HDFS Reader/Writer with Kerberos support on Groundplex (including user impersonation).
4.9 Patch | hadoop3339 | Latest |
  Addressed the following issues:
  • ORC Reader passing, but ORC Writer failing when run on a Cloudplex.
  • ORC Reader Snap not routing errors to the error view.
  • Intermittent failures with the ORC Writer.
4.9.0 Patch | hadoop3020 | Latest | Added the missing dependency org.iq80.snappy:snappy to the Hadoop Snap Pack.
4.9 | snapsmrc405 | Stable | Upgraded with the latest SnapLogic Platform release.
4.8 | snapsmrc398 | Stable | Snap-aware error handling policy enabled for Spark mode in Sequence Formatter and Sequence Parser. This ensures the error handling specified on the Snap is used.
4.7.0 Patch | hadoop2343 | Latest | Spark Validation: Resolved an issue with validation failing when setting the output file permissions.
4.7 | snapsmrc382 | Stable |
  • Updated the HDFS Writer and HDFS Reader Snaps with the Azure Data Lake account for standard mode pipelines.
  • HDFS Writer: Spark mode support added to write to a specified directory in an Azure Storage Layer using the wasb file system protocol.
  • HDFS Reader: Spark mode support added to read a single file or an HDFS directory from an Azure Storage Layer.
4.6 | snapsmrc362 | Stable |
  • The following Snaps now support error view in Spark mode: HDFS Reader, Sequence Parser.
  • Resolved an issue in the HDFS Writer Snap that sent the same data to the output and error views.
4.5 | snapsmrc344 | Stable |
  • HDFS Reader and HDFS Writer Snaps updated to support IAM Roles for Amazon EC2.
  • Support for Spark mode added to the Parquet Reader and Parquet Writer Snaps.
  • The HBase Snaps are no longer available as of this release.
4.4.1 | - | Stable |
  • Resolved an issue with Sequence Formatter not working in Spark mode.
  • Resolved an issue with HDFS Reader not using the filter set when configuring SparkExec paths.
4.4 | - | Stable |
  • NEW! Parquet Reader and Writer Snaps.
  • NEW! ORC Reader and Writer Snaps.
  • Spark support added to the HDFS Reader, HDFS Writer, Sequence Formatter, and Sequence Parser Snaps.
  • Behavior change: HDFS Writer in SnapReduce mode now requires the File property to be blank.
4.3.2 | - | Stable |
  • Implemented wasbs:// protocol support in the Hadoop Snap Pack.
  • Resolved an issue with HDFS Reader unable to read all files under a folder (including all files under its subfolders) using the ** filter.
