All expression-enabled Snap properties (when the '=' button is selected) can be evaluated from pipeline parameters only, not from input documents. Input documents are the data to be formatted and written to the target files.
The Directory property is not used during pipeline execution or preview; it is used only by the Suggest operation. When you click the Suggest icon, the Snap displays a list of subdirectories under the given directory. It generates the list by applying the value of the Filter property.
Required for standard mode. The filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/". The File property can be a JavaScript expression, which is evaluated with values from the input view document. When you click the Suggest icon, the Snap displays a list of regular files under the directory given in the Directory property. It generates the list by applying the value of the Filter property.
Use Hive tables if your input documents contain complex data types, such as maps and arrays.
Example:
sample.orc
tmp/another.orc
_filename
Default value: [None]
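For illustration, with the expression toggle ('=') enabled, a value such as "tmp/" + _filename + ".orc" would resolve against a pipeline parameter named filename (a hypothetical parameter name; pipeline parameters are referenced in expressions with a leading underscore).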
File action
Required. Select the action to take when the specified file already exists in the directory. Note that the Append file action is supported only for the SFTP, FTP, and FTPS protocols.
Default value: [None]
File permissions for various users
Set the user and desired permissions.
Default value: [None]
Hive Metastore URL
This setting, together with the Database and Table settings, assists in setting the schema. If the data being written has a Hive schema, the Snap can be configured to read that schema instead of requiring you to enter it manually. Set the value to a Hive Metastore URL where the schema is defined.
Default value: [None]
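For illustration, a Hive Metastore URL typically takes the form thrift://metastore-host.example.com:9083, where the hostname is a placeholder for your environment and 9083 is the default Hive Metastore Thrift port.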
Database
The Hive Metastore database where the schema is defined. See the Hive Metastore URL setting for more information.
Table
The table from which the schema in the Hive Metastore's database must be read. See the Hive Metastore URL setting for more information.
Compression
Required. The compression type to be used when writing the file.
Column paths
Paths where the column values appear in the document. This property is required if the Hive Metastore URL property is empty.
Examples:
Column Name: Fun
Column Path: $column_from_input_data
Column Type: string
Default value: [None]
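As a minimal sketch of how this mapping works, assume an input document such as {"column_from_input_data": "hello"}. With the settings above, the Snap evaluates $column_from_input_data against the document and writes "hello" to a string column named Fun in the ORC file.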
Snap execution
Select one of the three modes in which the Snap executes. Available options are:
Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
Disabled: Disables the Snap and all Snaps that are downstream from it.
Troubleshooting
Writing to S3 files with HDFS version CDH 5.8 or later
When running HDFS with CDH 5.8 or later, the Hadoop Snap Pack may fail to write to S3 files. To resolve this, make the following changes in Cloudera Manager:
Go to HDFS configuration.
In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add an entry with the following details (shown as an XML sketch after these steps):
Name: fs.s3a.threads.max
Value: 15
Click Save.
Restart all the nodes.
Under Restart Stale Services, select Re-deploy client configuration.
Click Restart Now.
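For reference, a minimal sketch of the resulting core-site.xml property entry (the value 15 comes from the steps above; tune it for your cluster's workload):

<property>
  <name>fs.s3a.threads.max</name>
  <value>15</value>
</property>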
Temporary Files
During execution, data processing on Snaplex nodes occurs principally in-memory as streaming and is unencrypted. When the datasets being processed exceed the available compute memory, the Snap writes pipeline data to local storage, unencrypted, to optimize performance. These temporary files are deleted when the Snap/Pipeline execution completes. You can configure the location of this temporary data in the Global properties table of the Snaplex's node properties; doing so can also help avoid pipeline errors caused by insufficient disk space. For more information, see Temporary Folder in Configuration Options.
Examples
ORC Writer Writing to an HDFS Instance
Here is an example of an ORC Writer configured to write to a local instance of HDFS. The output is written to /tmp/orc-output. The Hive Metastore used reads the schema from the employee_orc table in the masterdb database. No column paths or compression are used. For an example of the schema, see the documentation on the Schema setting.
ORC Writer Writing to an S3 Instance
Here is an example of an ORC Writer configured to write to an S3 instance. The output is written to /tmp/orc-output. The Hive Metastore used reads the schema from the employee_orc table in the masterdb database. No column paths or compression are used. For an example of the schema, see the documentation on the Schema setting.
Updated and certified against the current SnapLogic Platform release.
August 2024
main27765
Stable
Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.
May 2024
437patches27226
-
The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 has impacted the Hadoop Snap Pack, causing the following issue when using the WASB protocol.
Known Issue
When you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13-14 minutes to display the following error:
reason=The request failed with error code null and HTTP code 0. , status_code=error
SnapLogic® is actively working with Microsoft® Support to resolve the issue.
Fixed a resource leak issue with the following Hadoop Snaps, which involved too many stale instances of ProxyConnectionManager and significantly impacted memory utilization.
Enhanced the HDFS Writer Snap with the Write empty file checkbox, which enables you to write an empty (0-byte) file using any of the supported protocols recognized by and compatible with the target system or destination.
May 2024
main26341
Stable
The Azure Data Lake Account has been removed from the Hadoop Snap Pack because Microsoft retired the Azure Data Lake Storage Gen1 protocol on February 29, 2024. We recommend replacing your existing Azure Data Lake Accounts (in Binary or Hadoop Snap Packs) with other Azure Accounts.
February 2024
436patches25902
Latest
Fixed a memory management issue in the HDFS Writer, HDFS ZipFile Writer, ORC Writer, and Parquet Writer Snaps, which previously caused out-of-memory errors when multiple Snaps were used in the pipeline. The Snap now conducts a pre-allocation memory check, dynamically adjusting the write buffer size based on available memory resources when writing to ADLS.
February 2024
435patches25410
Latest
Enhanced the AWS S3 Account for Hadoop with an External ID that enables you to access Hadoop resources securely.
February 2024
main25112
Stable
Updated and certified against the current SnapLogic Platform release.
November 2023
435patches23904
Latest
Fixed an issue with the Parquet Writer Snap that displayed the error "Failed to write parquet data" when the decimal value passed from the second input view exceeded the specified scale.
Fixed an issue with the Parquet Writer Snap that previously failed to convert BigInt/int64 (larger numbers) correctly after the 4.35 GA release; the Snap now converts them accurately.
November 2023
435patches23780
Latest
Fixed an issue related to error routing to the output view. Also fixed a connection timeout issue.
November 2023
main23721
Stable
Updated and certified against the current SnapLogic Platform release.
August 2023
434patches23173
Latest
Enhanced the Parquet Writer Snap with a Decimal Rounding Mode dropdown list that enables you to select the rounding method for decimal values when a number exceeds the required decimal places.
August 2023
434patches22662
Latest
Enhanced the Parquet Writer Snap with the support for LocalDate and DateTime. The Snap now shows the schema suggestions for LocalDate and DateTime correctly.
Enhanced the Parquet Reader Snap with the Use datetime types checkbox that supports LocalDate and DateTime datatypes.
Behavior change:
When you select the Use datetime types checkbox in the Parquet Reader Snap, the Snap displays the LocalDate and DateTime in the output for INT32 (DATE) and INT64 (TIMESTAMP_MILLIS) columns. When you deselect this checkbox, the columns retain the previous datatypes and display string and integer values in the output.
August 2023
main22460
Stable
Updated and certified against the current SnapLogic Platform release.
May 2023
433patches22180
Latest
Introduced the HDFS Delete Snap, which deletes the specified file, group of files, or directory from the supplied path and protocol in the Hadoop Distributed File System (HDFS).