You can use this Snap to convert documents to the Parquet format and write the data to HDFS, ADL (Azure Data Lake), ABFS (Azure Blob File System), WASB (Azure storage), or an S3 bucket. The Snap supports nested schemas, such as LIST and MAP, and you can also use it to pass schema information to the Catalog Insert Snap.
Snap Type
The Parquet Writer Snap is a Write-type Snap.
Prerequisites
You must have access and permission to write to HDFS, ADL (Azure Data Lake), ABFS (Azure Data Lake Storage Gen 2), WASB (Azure storage), or AWS S3.
The Parquet Writer Snap is tested against Windows Server 2008, 2010, and 2012.
Limitations
Auto schema generation in this Snap excludes null fields. For example, if the Snap receives ten input documents during preview execution and four fields are null in every one of those documents, those four fields are excluded from the generated schema. The schema includes only fields that have at least one non-null value among the preview input documents.
"Generate template" is unsupported for a nested structure like MAP and LIST type. Generate template is a link within the schema editor accessed through the Edit Schema button.
All expression Snap properties can be evaluated (when the '=' button is pressed) from pipeline parameters only, not from input documents from upstream Snaps. Input documents are data to be formatted and written to the target files.
The security model configured for the Groundplex (SIMPLE or KERBEROS authentication) must match the security model of the remote server. Due to the limitations of the Hadoop library, we can only create the necessary internal credentials to configure the Groundplex.
Parquet Snaps work well in a Linux environment. However, due to limitations in the Hadoop library on Windows, their functioning in a Windows environment may not always be as expected. We recommend you use a Linux environment for working with Parquet Snaps.
To use the Parquet Writer Snap on a Windows Snaplex, follow these steps:
Place the hadoop.dll and winutils.exe files in this path: C:\hadoop\bin
Set the environment variable HADOOP_HOME to point to C:\hadoop
Add C:\hadoop\bin to the PATH environment variable (see the command sketch after these steps).
Add the JVM options in the Windows Snaplex: jcc.jvm_options = -Djava.library.path=C:\hadoop\bin
If you already have existing jvm_options, append "-Djava.library.path=C:\hadoop\bin" after a space. For example: jcc.jvm_options = -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -Djava.library.path=C:\hadoop\bin
Restart the JCC for configurations to take effect.
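A minimal sketch of the environment-variable steps from an elevated Command Prompt (this assumes the C:\hadoop layout used in the steps above; open a new console session for the values to take effect):

rem Point HADOOP_HOME at the directory that holds hadoop.dll and winutils.exe.
setx HADOOP_HOME "C:\hadoop"
rem Append the Hadoop binaries to the user PATH.
setx PATH "%PATH%;C:\hadoop\bin"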
Known Issues
The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 has caused the following issue when using the WASB protocol: when you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13 to 14 minutes to display the following error:
reason=The request failed with error code null and HTTP code 0. , status_code=error
SnapLogic® is actively working with Microsoft® Support to resolve the issue.
This Snap has one or two document input views. When you enable the second input view, the Snap ignores other schema settings, such as the Edit Schema settings or the Hive Metastore-related properties, and accepts the schema from the second input view only. When you disable the second input view, the Snap obtains the schema from the Hive Metastore URL property, if provided. The supported data types are:
Primitive: boolean, integer, float, double, and byte array
Logical: map, list
The Snap expects a Hive Execute Snap that contains the "Describe table" statement in the second input view.
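For example, the Hive Execute Snap feeding the second input view might run a statement of this shape (database and table names are hypothetical):

DESCRIBE mydb.contacts;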
Output
Type of view: Document
Number of views: Min: 0, Max: 1
Example downstream Snap: Mapper
A document with a filename for each Parquet file written. For example: {"filename" : "hdfs://localhost/tmp/2017/april/sample.parquet"}
Error
Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter while running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:
Stop Pipeline Execution: Stops the current pipeline execution when the Snap encounters an error.
Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.
Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.
Suggestion icon: Indicates a list that is dynamically populated based on the configuration.
Expression icon: Indicates the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.
Add icon: Indicates that you can add fields in the field set.
Remove icon: Indicates that you can remove fields from the field set.
Upload icon: Indicates that you can upload files.
SnapLogic automatically appends azuredatalakestore.net to the store name you specify when using Azure Data Lake; therefore, you do not have to add azuredatalakestore.net to the URI while specifying the directory.
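For example, if you specify adl://mystore/tmp/out/ (store name hypothetical), the Snap resolves it as adl://mystore.azuredatalakestore.net/tmp/out/.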
The Directory property is not used in the Pipeline execution or preview; it is used only in the Suggest operation. When you click the Suggest icon, the Snap displays a list of subdirectories under the given directory. It generates the list by applying the value of the Filter property.
File Filter
Default Value: *
Example: **
String/Expression
Specify the Glob file pattern.
Use glob patterns to display a list of directories or files when you click the Suggestion icon in the Directory or File property. A complete glob pattern is formed by combining the value of the Directory property with the Filter property. If the value of the Directory property does not end with "/", the Snap appends one, so that the value of the Filter property is applied to the directory specified by the Directory property.
Glob Pattern Interpretation Rules
The following rules are used to interpret glob patterns:
- The * character matches zero or more characters of a name component without crossing directory boundaries. For example, the *.csv pattern matches a path that represents a file name ending in .csv, and *.* matches all file names that contain a period.
- The ** characters match zero or more characters across directories; therefore, it matches all files or directories in the current directory and in its subdirectories. For example, /home/** matches all files and directories in the /home/ directory.
- The ? character matches exactly one character of a name component. For example, 'foo.?' matches file names that start with 'foo.' and are followed by a single-character extension.
- The \ character is used to escape characters that would otherwise be interpreted as special characters. The expression \\ matches a single backslash, and \{ matches a left brace, for example.
- The ! character is used to exclude matching files from the output.
- The [ ] characters form a bracket expression that matches a single character of a name component out of a set of characters. For example, '[abc]' matches 'a', 'b', or 'c'. The hyphen (-) may be used to specify a range, so '[a-z]' specifies a range that matches from 'a' to 'z' (inclusive). These forms can be mixed, so '[abce-g]' matches 'a', 'b', 'c', 'e', 'f' or 'g'. If the character after the [ is a ! then it is used for negation, so '[!a-c]' matches any character except 'a', 'b', or 'c'.
- Within a bracket expression, the '*', '?', and '\' characters match themselves. The '-' character matches itself if it is the first character within the brackets, or the first character after the !, if negating.
- The '{ }' characters are a group of sub-patterns where the group returns a match if any sub-pattern in the group matches the contents of a target directory. The ',' character is used to separate sub-patterns. Groups cannot be nested. For example, the pattern '*.{csv, json}' matches file names ending with '.csv' or '.json'.
- Leading dot characters in a file name are treated as regular characters in match operations. For example, the '*' glob pattern matches file name ".login".
- All other characters match themselves.
Examples:
'*.csv' matches all files with a csv extension in the current directory only.
'**.csv' matches all files with a csv extension in the current directory and in all its subdirectories.
*[!{.pdf,.tmp}] excludes all files with the extension PDF or TMP.
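These rules follow the same glob syntax as Java NIO's PathMatcher, so you can experiment with a pattern before using it in the Snap. A minimal sketch (file names are illustrative):

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class GlobCheck {
    public static void main(String[] args) {
        // '*' matches within a single name component only.
        PathMatcher flat = FileSystems.getDefault().getPathMatcher("glob:*.csv");
        System.out.println(flat.matches(Path.of("sample.csv")));      // true
        System.out.println(flat.matches(Path.of("tmp/another.csv"))); // false: '*' does not cross '/'

        // '**' crosses directory boundaries.
        PathMatcher deep = FileSystems.getDefault().getPathMatcher("glob:**.csv");
        System.out.println(deep.matches(Path.of("tmp/another.csv"))); // true

        // '{ }' groups sub-patterns.
        PathMatcher group = FileSystems.getDefault().getPathMatcher("glob:*.{csv,json}");
        System.out.println(group.matches(Path.of("data.json")));      // true
    }
}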
File
Default Value: N/A
Examples: sample.csv, tmp/another.csv, _filename
String/Expression/Suggestion
Specify the filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/". The File value can be a JavaScript expression which will be evaluated with values from the input view document. When you click the Suggest icon, the Snap displays a list of regular files under the directory in the Directory property. It generates the list by applying the value of the Filter property.
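For example, a hedged sketch of an expression that builds the file name from an input document field (the field name is hypothetical):

'out_' + $region + '.parquet'

With an input document such as {"region": "west"}, this evaluates to out_west.parquet.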
Hive Metastore URL
Specify the URL of the Hive Metastore to assist in setting the schema, along with the database and table settings. If the data being written has a Hive schema, you can configure the Snap to read the schema instead of entering it manually. Set the value to a Hive Metastore URL where the schema is defined.
Database
Default value: N/A
String/Expression/Suggestion
Specify the Hive Metastore database which holds the schema for the outgoing data.
Table
Default value: N/A
String/Expression/Suggestion
Specify the table whose schema should be used for formatting the outgoing data.
Fetch Hive Schema at Runtime
Default value: Deselected
Checkbox
Select this checkbox to fetch the schema from the Metastore table before writing. The Snap fails to write if it cannot connect to the Metastore or if the table does not exist during the Pipeline's execution. If this checkbox is selected, the Snap uses the Metastore schema instead of the schema set in the Snap's Edit Schema property.
Edit Schema
Button
Specify a valid Parquet schema that describes the data.
The schema can be specified based on a Hive Metastore table schema or generated from suggest data. Save the pipeline before editing the schema to generate suggest data, which assists in specifying the schema based on the schema of incoming documents. If no suggest data is available, an example schema is generated along with documentation. Alter one of those schemas to describe the input data. The Parquet schema can also be written manually. A schema is defined by a list of fields; the following example describes the contact information of a person.
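A representative sketch of such a schema (field names are illustrative):

message Person {
  required binary name (UTF8);
  optional int32 age (INT_8);
  repeated group contacts {
    required binary name (UTF8);
    optional binary phoneNumber (UTF8);
  }
}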
After the message type is defined, a list of fields is given. A field comprises a repetition, a type, and the field name. Available repetitions are required, optional, and repeated. Each field has a type. The primitive types include:
binary - used for strings
fixed_len_byte_array - used for byte arrays of fixed length
boolean - a 1-bit boolean value
int32 - a 32-bit integer
int64 - a 64-bit integer
int96 - a 96-bit integer
float - a 32-bit floating point number
double - a 64-bit floating point number
These types can be annotated with a logical type to specify how the application should interpret the data. The Logical types include:
UTF8 - used with binary to specify the string as UTF8 encoded
INT_8 - used with int32 to specify the int as an 8 bit signed integer
INT_16 - used with int32 to specify the int as a 16 bit signed integer
Unsigned types - may be used to produce smaller in-memory representations of the data. If the stored value is larger than the maximum allowed by int32 or int64, then the behavior is undefined.
UINT_8 - used with int32 to specify the int as an 8 bit unsigned integer
UINT_16 - used with int32 to specify the int as a 16 bit unsigned integer
UINT_32 - used with int32 to specify the int as a 32 bit unsigned integer
UINT_64 - used with int64 to specify the int as a 64 bit unsigned integer
DECIMAL(precision, scale) - used to describe arbitrary-precision signed decimal numbers of the form value * 10^(-scale) to the given precision. The annotation can be used with int32, int64, fixed_len_byte_array, or binary. See the Parquet documentation for limits on the precision that can be given.
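For example, DECIMAL(5, 2) stores 123.45 as the unscaled integer 12345 (123.45 = 12345 * 10^-2).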
DATE - used with int32 to specify the number of days since the Unix epoch, 1 January 1970
Note: This Snap supports only the following date format: yyyy-MM-dd.
TIME_MILLIS - used with int32 to specify the number of milliseconds after midnight
TIMESTAMP_MILLIS - used with int64 to store the number of milliseconds from the Unix epoch, 1 January 1970
INTERVAL - used with a fixed_len_byte_array of length 12, where the array stores 3 unsigned little-endian integers. These integers specify:
a number of months
a number of days
a number of milliseconds
JSON - used with binary to represent an embedded JSON document
BSON - used for an embedded BSON document
The following is an illustrative sketch of a schema using all the primitive types and some of the logical types (field names are hypothetical):
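message example {
  required binary str_field (UTF8);
  required boolean bool_field;
  required int32 int32_field (INT_16);
  required int32 date_field (DATE);
  required int64 int64_field (UINT_64);
  required int64 ts_field (TIMESTAMP_MILLIS);
  required int96 int96_field;
  required float float_field;
  required double double_field;
  required fixed_len_byte_array(12) interval_field (INTERVAL);
  required fixed_len_byte_array(5) dec_field (DECIMAL(11, 2));
  optional binary json_field (JSON);
}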
"Generate template" will not work for nested structure like MAP and LIST type.
Compression*
Dropdown list
Select the type of compression to use when writing the file. The available options are:
NONE
SNAPPY
GZIP
LZO
To use LZO compression, you must explicitly enable the LZO compression type on the cluster (as an administrator) for the Snap to recognize and run the format. For more information, see Data Compression. For detailed guidance on setting up LZO compression, see Cloudera documentation on Installing the GPL Extras Parcel.
Many compression algorithms require both Java and system libraries and will fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and are usually located in the /usr/lib/x86_64-linux-gnu directory.
Partition by
Default Value: N/A
String/Suggestion
Specify or select the key that determines the 'Partition by' folder name. All input documents must contain this key; otherwise, an error document is written to the error view. See the 'Partition by' example below for an illustration.
Azure SAS URI Properties
Shared Access Signatures (SAS) properties of the Azure Storage account.
SAS URI
String/Expression
Specify the Shared Access Signatures (SAS) URI that you want to use to access the Azure storage blob folder specified in the Azure Storage Account.
You can get a valid SAS URI either from the Shared access signature in the Azure Portal or by generating from the SAS Generator Snap.
If a SAS URI value is provided in the Snap settings, the account settings (if any account is attached) are ignored.
Decimal Rounding Mode
Default Value: Half up
Example: Up
Dropdown list
Select the rounding method to apply when decimal values exceed the required number of decimal places. The available options are:
Half up
Half down
Half even
Up
Down
Ceiling
Floor
Truncate
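These options correspond to the standard decimal rounding modes. The following minimal Java sketch illustrates how the modes differ when a value is reduced to the two decimal places that a DECIMAL(precision, 2) column would require; the mapping of Truncate to RoundingMode.DOWN is an assumption:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingModes {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("2.345");
        // Reduce to 2 decimal places under each mode.
        System.out.println(value.setScale(2, RoundingMode.HALF_UP));   // 2.35
        System.out.println(value.setScale(2, RoundingMode.HALF_DOWN)); // 2.34
        System.out.println(value.setScale(2, RoundingMode.HALF_EVEN)); // 2.34 (rounds to the even neighbor)
        System.out.println(value.setScale(2, RoundingMode.UP));        // 2.35 (away from zero)
        System.out.println(value.setScale(2, RoundingMode.DOWN));      // 2.34 (toward zero; Truncate is assumed to behave like this)
        System.out.println(value.setScale(2, RoundingMode.CEILING));   // 2.35
        System.out.println(value.setScale(2, RoundingMode.FLOOR));     // 2.34
    }
}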
Snap Execution
Dropdown list
Select one of the following three modes in which the Snap executes:
Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
Disabled: Disables the Snap and all Snaps that are downstream from it.
Troubleshooting
Error
Reason
Resolution
Unable to connect to the Hive Metastore.
This error occurs when the Parquet Writer Snap is unable to fetch the schema from a Kerberos-enabled Hive Metastore.
Pass the Hive Metastore's schema directly to the Parquet Writer Snap. To do so:
Enable the 'Schema View' in the Parquet Writer Snap by adding the second Input View.
Connect a Hive Execute Snap to the Schema View. Configure the Hive Execute Snap to execute the DESCRIBE TABLE command to read the table metadata and feed it to the schema view.
Parquet Snaps may not work as expected in the Windows environment.
Because of the limitations in the Hadoop library on Windows, Parquet Snaps do not function as expected.
To use the Parquet Writer Snap on a Windows Snaplex, follow these steps:
Place the hadoop.dll and winutils.exe files in this path: C:\hadoop\bin
Set the environment variable HADOOP_HOME to point to C:\hadoop
Add C:\hadoop\bin to the PATH environment variable (Variable name: PATH; Variable value: append %HADOOP_HOME%\bin to the existing value).
Add the JVM options in the Windows Snaplex: jcc.jvm_options = -Djava.library.path=C:\hadoop\bin
If you already have existing jvm_options, append "-Djava.library.path=C:\hadoop\bin" after a space. For example: jcc.jvm_options = -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -Djava.library.path=C:\hadoop\bin
Restart the JCC for the configuration to take effect.
Additional Information
Write to S3 files with HDFS version CDH 5.8 or later
When running an HDFS version later than CDH 5.8, the Hadoop Snap Pack may fail to write to S3 files. To overcome this, make the following changes in the Cloudera Manager:
Go to HDFS configuration.
In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add an entry with the following details:
Name: fs.s3a.threads.max
Value: 15
Click Save.
Restart all the nodes.
Under Restart Stale Services, select Re-deploy client configuration.
Click Restart Now.
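The safety-valve entry corresponds to this core-site.xml property:

<property>
  <name>fs.s3a.threads.max</name>
  <value>15</value>
</property>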
Temporary Files
During execution, data processing on Snaplex nodes occurs principally in memory as streaming, unencrypted. When larger datasets that exceed the available compute memory are processed, the Snap writes Pipeline data unencrypted to local storage to optimize performance. These temporary files are deleted when the Snap/Pipeline execution completes. You can configure the temporary data's location in the Global properties table of the Snaplex's node properties, which can also help avoid Pipeline errors due to the unavailability of space. For more information, see Temporary Folder in Configuration Options.
Examples
Parquet Writer Snap with the Second Input View
The following example pipeline demonstrates the usage of the second input view to receive the table metadata. Alternatively, you can provide the Hive Metastore information in the Hive Metastore URL field, in which case a single input view is sufficient.
When you enable the second input view, the Snap overrides other schema settings, such as the schema within the Edit Schema box or the Hive Metastore-related properties, and accepts the schema from the second input view only.
When you disable the second input view, the Snap receives the Schema from the Hive Metastore URL field.
The Parquet Writer Snap configuration with Directory path:
The Parquet Writer Snap View:
The Hive Execute Snap with the table metadata information to pass to the second input view of the Parquet Writer Snap:
The Hive Execute Snap Output:
Parquet Writer configured to write to a local instance of HDFS
Here is an example of a Parquet Writer configured to write to a local instance of HDFS. The output is written to /tmp/parquet-example. There is no Hive Metastore configured and no compression used.
See the documentation on the Schema setting to view an example of the schema.
The settings of the Parquet Writer Snap are as follows:
The pipeline execution will generate two files:
hdfs://localhost:8080/tmp/FEB/07/sample.parquet
hdfs://localhost:8080/tmp/MAR/01/sample.parquet
The key-value pairs for "month" and "day" will not be included in the output files.
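For illustration, input documents of the following shape would produce the two files above, assuming the directory path is built from the month and day values (the name field is hypothetical):

{ "month" : "FEB", "day" : "07", "name" : "alpha" }
{ "month" : "MAR", "day" : "01", "name" : "beta" }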
Parquet Reader/Writer with S3 in Standard Mode
Reading and writing Parquet files from/to AWS S3 requires an S3 account.
Create an S3 account or use an existing one.
If it is a regular S3 account, name the account and supply the Access-key ID and Secret key.
If the account is an IAM-role-enabled account:
Select the IAM role checkbox.
Leave the Access-key ID and Secret key blank.
The IAM role properties are optional; you can leave them blank.
Within the Parquet Snap, use a valid S3 path for the directory in the format: s3://<bucket name>/<folder name>/.../<filename>
Inserting and Querying Custom Metadata from the Flight Metadata Table
The Pipeline in this zipped example, MetadataCatalog_Insert_Read_Example.zip, demonstrates how you can:
Use the Catalog Insert Snap to update metadata tables.
Use the Catalog Query Snap to read the updated metadata information.
In this example:
We import a file containing the metadata.
We create a Parquet file using the data in the imported file.
We insert metadata that meets specific requirements into a partition in the target table.
We read the newly-inserted metadata using the Catalog Query Snap.
Understanding the Pipeline
Downloads
Download and import the Pipeline into SnapLogic.
Configure Snap accounts, as applicable.
Provide Pipeline parameters, as applicable.
Snap Pack History
Release
Snap Pack Version
Type
Updates
November 2024
main29029
Stable
Updated and certified against the current SnapLogic Platform release.
August 2024
main27765
Stable
Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.
May 2024
437patches27226
-
The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 has impacted the Hadoop Snap Pack, causing the following issue when using the WASB protocol.
Known Issue
When you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13 to 14 minutes to display the following error:
reason=The request failed with error code null and HTTP code 0. , status_code=error
SnapLogic® is actively working with Microsoft® Support to resolve the issue.
Fixed a resource leak issue in Hadoop Snaps that involved too many stale instances of ProxyConnectionManager and significantly impacted memory utilization.
Enhanced the HDFS Writer Snap with the Write empty file checkbox to enable you to write an empty or a 0-byte file to all the supported protocols that are recognized and compatible with the target system or destination.
May 2024
main26341
Stable
The Azure Data Lake Account has been removed from the Hadoop Snap Pack because Microsoft retired the Azure Data Lake Storage Gen1 protocol on February 29, 2024. We recommend replacing your existing Azure Data Lake Accounts (in Binary or Hadoop Snap Packs) with other Azure Accounts.
February 2024
436patches25902
Latest
Fixed a memory management issue in the HDFS Writer, HDFS ZipFile Writer, ORC Writer, and Parquet Writer Snaps, which previously caused out-of-memory errors when multiple Snaps were used in the pipeline. The Snap now conducts a pre-allocation memory check, dynamically adjusting the write buffer size based on available memory resources when writing to ADLS.
February 2024
435patches25410
Latest
Enhanced the AWS S3 Account for Hadoop with an External ID that enables you to access Hadoop resources securely.
February 2024
main25112
Stable
Updated and certified against the current SnapLogic Platform release.
November 2023
435patches23904
Latest
Fixed an issue with the Parquet Writer Snap that displayed an error Failed to write parquet data when the decimal value passed from the second input view exceeded the specified scale.
Fixed an issue with the Parquet Writer Snap, which failed to handle the conversion of BigInt/int64 (larger numbers) after the 4.35 GA release; the Snap now converts them accurately.
November 2023
435patches23780
Latest
Fixed an issue related to error routing to the output view. Also fixed a connection timeout issue.
November 2023
main23721
Stable
Updated and certified against the current SnapLogic Platform release.
August 2023
434patches23173
Latest
Enhanced the Parquet Writer Snap with a Decimal Rounding Mode dropdown list that lets you select the rounding method for decimal values that exceed the required number of decimal places.
August 2023
434patches22662
Latest
Enhanced the Parquet Writer Snap with the support for LocalDate and DateTime. The Snap now shows the schema suggestions for LocalDate and DateTime correctly.
Enhanced the Parquet Reader Snap with the Use datetime types checkbox that supports the LocalDate and DateTime datatypes.
Behavior change:
When you select the Use datetime types checkbox in the Parquet Reader Snap, the Snap displays the LocalDate and DateTime in the output for INT32 (DATE) and INT64 (TIMESTAMP_MILLIS) columns. When you deselect this checkbox, the columns retain the previous datatypes and display string and integer values in the output.
August 2023
main22460
Stable
Updated and certified against the current SnapLogic Platform release.
May 2023
433patches22180
Latest
Introduced the HDFS Delete Snap, which deletes the specified file, group of files, or directory from the supplied path and protocol in the Hadoop Distributed File System (HDFS).
May 2023
433patches21494
Latest
The Hadoop Directory Browser Snap now returns all the output documents as expected after implementing pagination for the ABFS protocol.
May 2023
main21015
Stable
Upgraded with the latest SnapLogic Platform release.
February 2023
432patches20820
Latest
Fixed an authorization issue that occurs with the Parquet Writer Snap when it receives empty document input.
February 2023
432patches20209
Latest
The Apache Commons Compress library has been upgraded to version 1.22.
February 2023
432patches20139
Latest
The Kerberos Account that is available for a subset of snaps in the Hadoop Snap pack now supports a configuration that enables you to read from and write to the Hadoop Distributed File System (HDFS) managed by multiple Hadoop clusters. You can specify the location of the Hadoop configuration files in the Hadoop config directory field. The value in this field overrides the value that is set on the Snaplex system property used for configuring a single cluster.
February 2023
main19844
Stable
Upgraded with the latest SnapLogic Platform release.
November 2022
main18944
Stable
The AWS S3 and S3 Dynamic accounts now support a maximum session duration of an IAM role defined in AWS.
Fixed an issue in the following Snaps that use AWS S3 dynamic account, where the Snaps displayed the security credentials like Access Key, Secret Key, and Security Token in the logs. Now, the security credentials in the logs are blurred for the Snaps that use AWS S3 dynamic account.
Upgraded with the latest SnapLogic Platform release.
4.27 Patch
427patches13769
Latest
Fixed an issue with the Hadoop Directory Browser Snap where the Snap was not listing the files in the given directory for Windows VM.
4.27 Patch
427patches12999
Latest
Enhanced the Parquet Reader Snap with the int96 As Timestamp checkbox, which, when selected, enables the Date Time Format field. You can use this field to specify a date-time format of your choice for int96 data-type fields. The int96 As Timestamp checkbox is available only when you deselect the Use old data format checkbox.
4.27
main12833
Stable
Enhanced the Parquet Writer and Parquet Reader Snaps with Azure SAS URI properties, and Azure Storage Account for Hadoop with SAS URI Auth Type. This enables the Snaps to consider SAS URI given in the settings if the SAS URI is selected in the Auth Type during account configuration.
4.26
426patches12288
Latest
Fixed a memory leak issue when using HDFS protocol in Hadoop Snaps.
4.26
main11181
Stable
Upgraded with the latest SnapLogic Platform release.
4.25 Patch
425patches9975
Latest
Fixed the dependency issue in the Hadoop Parquet Reader Snap while reading from AWS S3. The issue was caused by conflicting definitions for some of the AWS classes (dependencies) in the classpath.
4.25
main9554
Stable
Enhanced the HDFS Reader and HDFS Writer Snaps with the Retry mechanism that includes the following settings:
Number of Retries: Specifies the maximum number of retry attempts when the Snap fails to connect to the Hadoop server.
Retry Interval (seconds): Specifies the minimum number of seconds the Snap must wait before each retry attempt.
4.24 Patch
424patches9262
Latest
Enhanced the AWS S3 Account for Hadoop to support role-based access when you select the IAM role checkbox.
4.24 Patch
424patches8876
Latest
Fixes the missing library error in the Hadoop Snap Pack when running Hadoop Pipelines in the JDK 11 runtime.
4.24
main8556
Stable
Upgraded with the latest SnapLogic Platform release.
4.23 Patch
423patches7440
Latest
Fixes the issue in the HDFS Reader Snap by adding support for reading and writing files larger than 2 GB using the ABFS(S) protocol.
4.23
main7430
Stable
Upgraded with the latest SnapLogic Platform release.
4.22
main6403
Stable
Upgraded with the latest SnapLogic Platform release.
4.21 Patch
hadoop8853
Latest
Updates the Parquet Writer and Parquet Reader Snaps to support the yyyy-MM-dd format for the DATE logical type.
4.21
snapsmrc542
Stable
Upgraded with the latest SnapLogic Platform release.
4.20 Patch
hadoop8776
Latest
Updates the Hadoop Snap Pack to use the latest version of org.xerial.snappy:snappy-java for compression type Snappy, in order to resolve the java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I error.
4.20
snapsmrc535
Stable
Upgraded with the latest SnapLogic Platform release.
4.19 Patch
hadoop8270
Latest
Fixes an issue with the Hadoop Parquet Writer Snap wherein the Snap throws an exception when the input document includes one or all of the following:
Empty lists.
Lists with all null values.
Maps with all null values.
4.19
snapsmrc528
Stable
Upgraded with the latest SnapLogic Platform release.
4.18 Patch
hadoop8033
Latest
Fixed an issue with the Parquet Writer Snap wherein the Snap throws an error when working with WASB protocol.
Added ADLS Gen2 support for ABFS (Azure Blob File System) and ABFSS protocols.
4.17
ALL7402
Latest
Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.
4.17
snapsmrc515
Latest
Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview check box.
4.16
snapsmrc508
Stable
Added a new property, Output for each file written, to handle multiple binary input data in the HDFS Writer Snap.
4.15
snapsmrc500
Stable
Added two new Snaps: HDFS ZipFile Reader and HDFS ZipFile Writer.
Added support for the Data Catalog Snaps in Parquet Reader and Parquet Writer Snaps.
4.14 Patch
hadoop5888
Latest
Fixed an issue wherein the Hadoop Snaps were throwing an exception when a Kerberized account is provided but the Snap is run in a non-Kerberized environment.
4.14
snapsmrc490
Stable
Added the Hadoop Directory Browser Snap, which browses a given directory path in the Hadoop file system using the HDFS protocol and generates a list of all the files in the directory. It also lists subdirectories and their contents.
Added support for the S3 file protocol in the ORC Reader and ORC Writer Snaps.
Added support for reading nested schema in the Parquet Reader Snap.
4.13 Patch
hadoop5318
Latest
Fixed the HDFS Reader/Writer and Parquet Reader/Writer Snaps, wherein Hadoop configuration information does not parse from the client's configuration files.
Fixed the HDFS Reader/Writer and Parquet Reader/Writer Snaps, wherein User Impersonation does not work on Hadooplex.
4.13
snapsmrc486
Stable
KMS encryption support added to AWS S3 account in the Hadoop Snap Pack.
Enhanced the Parquet Reader, Parquet Writer, HDFS Reader, and HDFS Writer Snaps to support WASB and ADLS file protocols.
Added the AWS S3 account support to the Parquet Reader and Writer Snaps.
Added a second input view to the Parquet Reader Snap that, when enabled, accepts the table schema. Supported with AWS S3, Azure Data Lake, and Azure Storage accounts.
4.12 Patch
hadoop5132
Latest
Fixed an issue with the HDFS Reader Snap wherein the pipeline stalls while writing to the output view.
4.12
snapsmrc480
Stable
Upgraded with the latest SnapLogic Platform release.
4.11 Patch
hadoop4275
Latest
Addressed an issue with the Parquet Reader Snap leaking file descriptors (connections to HDFS data nodes). The open file descriptor count now remains stable.
4.11
snapsmrc465
Stable
Added Kerberos support to the standard mode Parquet Reader and Parquet Writer Snaps.
4.10 Patch
hadoop4001
Latest
Added support for the HDFS Writer to write to encryption zones.
4.10 Patch
hadoop3887
Latest
Addressed the suggest issue for the HDFS Reader on Hadooplex.
4.10 Patch
hadoop3851
Latest
ORC Snaps now support reading from and writing to the local file system.
Addressed an issue to bind the Hive Metadata to Parquet Writer Schema at Runtime.
4.10 Patch
hadoop3838
Latest
Made HDFS Snaps work with zone-encrypted HDFS.
4.10
snapsmrc414
Stable
Updated the Parquet Writer Snap with the Partition by property to support writing data into HDFS based on the partition definition in the schema in Standard mode.
Added support for S3 accounts with IAM Roles to the Parquet Reader and Parquet Writer Snaps.
Added Kerberos support for the HDFS Reader/Writer Snaps on Groundplex (including user impersonation).
4.9 Patch
hadoop3339
Latest
Addressed the following issues:
ORC Reader passing, but ORC Writer failing when run on a Cloudplex.
ORC Reader Snap is not routing error to error view.
Intermittent failures with the ORC Writer
4.9.0 Patch
hadoop3020
Latest
Added missing dependency org.iq80.snappy:snappy to Hadoop Snap Pack.
4.9
snapsmrc405
Stable
Upgraded with the latest SnapLogic Platform release.
4.8
snapsmrc398
Stable
Snap-aware error handling policy enabled for Spark mode in Sequence Formatter and Sequence Parser. This ensures the error handling specified on the Snap is used.
4.7.0 Patch
hadoop2343
Latest
Spark Validation: Resolved an issue with validation failing when setting the output file permissions.
4.7
snapsmrc382
Stable
Updated the HDFS Writer and HDFS Reader Snaps with Azure Data Lake account for standard mode pipelines.
HDFS Writer: Spark mode support added to write to a specified directory in an Azure Storage Layer using the wasb file system protocol.
HDFS Reader: Spark mode support added to read a single file or an HDFS directory from an Azure Storage Layer.
4.6
snapsmrc362
Stable
The following Snaps now support error view in Spark mode: HDFS Reader, Sequence Parser.
Resolved an issue in the HDFS Writer Snap that sends the same data to the output and error views.
4.5
snapsmrc344
Stable
HDFS Reader and HDFS Writer Snaps updated to support IAM Roles for Amazon EC2.
Support for Spark mode added to Parquet Reader, Parquet Writer
The HBase Snaps are no longer available as of this release.
4.4.1
Stable
Resolved an issue with Sequence Formatter not working in Spark mode.
Resolved an issue with the HDFS Reader not using the filter set when configuring SparkExec paths.
4.4
Stable
NEW! Parquet Reader and Writer Snaps
NEW! ORC Reader and Writer Snaps
Spark support added to the HDFS Reader, HDFS Writer, Sequence Formatter, and Sequence Parser Snaps.
Behavior change: HDFS Writer in SnapReduce mode now requires the File property to be blank.
4.3.2
Stable
Implemented wasbs:// protocol support in Hadoop Snap Pack.
Resolved an issue with HDFS Reader unable to read all files under a folder (including all files under its subfolders) using the ** filter.