ORC Writer

Snap type:

Write

Description:

This Snap converts documents into the ORC format and writes the data to HDFS, S3, or the local file system.

  • Expected upstream Snaps: Any Snap with a document output view.
  • Expected downstream Snaps: [None]
  • Expected input: A document.
  • Expected output: [None]

This Snap supports the HDFS (non-Kerberos), ABFS (Azure Data Lake Storage Gen 2), WASB (Azure Storage), and S3 protocols.

Prerequisites:

[None]

Support and limitations:

All expression Snap properties (when the '=' button is enabled) can be evaluated from pipeline parameters only, not from input documents. Input documents contain the data to be formatted and written to the target files.
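
For illustration only: because expression properties are evaluated against pipeline parameters, an expression-enabled Directory value could be assembled from pipeline parameters such as _hostname and _user (hypothetical parameter names, not defined by this Snap):

  • Directory (expression enabled): "hdfs://" + _hostname + ":8020/user/" + _user + "/output/"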

Account: 

The ORC Writer works with the following accounts:

Views:

Input: This Snap has exactly one document input view.

Output: This Snap has at most one document output view.

Error: This Snap has at most one document error view and produces zero or more documents in the view.

Settings

Label

Required. The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Default Value: ORC Writer

Directory



Required. The path to the directory into which the ORC Writer Snap writes the data.

Basic directory URI structure

  • HDFS: hdfs://<hostname>:<port>/
  • S3: s3:///<S3 bucket name>/<file-path>
  • ABFS(S): abfs(s):///filesystem/<path>/
  • ABFS(S): abfs(s)://filesystem@accountname.endpoint/<path>

The Directory property is not used in pipeline execution or preview; it is used only in the Suggest operation. When you click the Suggest icon, the Snap displays a list of subdirectories under the given directory. It generates the list by applying the value of the Filter property.

Example

  • hdfs://ec2-54-198-212-134.compute-1.amazonaws.com:8020/user/john/input/

  • webhdfs://cdh-qa-2.fullsail.Snaplogic.com:50070/user/ec2-user/csv/
  • s3://test-s3-drea/8867_output.json
  • _dirname 

  • file:///home/snaplogic/file.orc
  • abfs:///filesystem2/dir1
  • abfs://filesystem2@snaplogicaccount.dfs.core.windows.net/dir1

Default value: hdfs://<hostname>:<port>/


Filter

A glob pattern applied to the list of files and subdirectories shown by the Suggest operation for the Directory and File properties.

File




Required for standard mode. The filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/". The File property can be a JavaScript expression, which is evaluated with values from the input view document; see the sketch below. When you click the Suggest icon, the Snap displays a list of regular files under the directory specified in the Directory property. It generates the list by applying the value of the Filter property.

Use Hive tables if your input documents contain complex data types, such as maps and arrays.

Example: 

  • sample.orc
  • tmp/another.orc
  • _filename

Default value:  [None]
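
Because the File property can be an expression evaluated against the input view document, the file name can be derived from incoming data. A minimal sketch, assuming each input document contains a region field (a hypothetical field name used only for illustration):

  • File (expression enabled): "output/" + $region + ".orc"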

File action


Required. Select the action to take when the specified file already exists in the directory. Note that the Append file action is supported only for the SFTP, FTP, and FTPS protocols.

Default value: [None]

File permissions for various users

Set the user and desired permissions.

Default value: [None]

Hive Metastore URL


This setting, together with the Database and Table settings, helps define the schema. If the data being written has a Hive schema, the Snap can be configured to read the schema from the Metastore instead of requiring you to enter it manually. Set the value to a Hive Metastore URL where the schema is defined.

Default value: [None]
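
Hive Metastore URLs typically use the Thrift protocol. A hypothetical example (the host name is a placeholder; 9083 is the commonly used Metastore Thrift port):

  • thrift://hivemetastore.example.com:9083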

Database

The Hive Metastore database where the schema is defined. See the Hive Metastore URL setting for more information.

Table

The table in the Hive Metastore database from which the schema must be read. See the Hive Metastore URL setting for more information.

Compression

Required. The compression type to be used when writing the file. 

Column paths


Paths where the column values appear in the document. This property is required if the Hive Metastore URL property is empty.

Examples:

  • Column Name: Fun
  • Column Path: $column_from_input_data
  • Column Type: string

Default value: [None]
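
To illustrate how the example values above fit together, assume an input document like the one below (the field name is hypothetical). The Column Path $column_from_input_data selects the value of the column_from_input_data field from each input document, and the Snap writes that value to an ORC column named Fun of type string:

  • Input document: { "column_from_input_data": "hello" }
  • Resulting ORC column: Fun (string) = "hello"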

Snap execution

Select one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Writing to S3 files with HDFS version CDH 5.8 or later

When running HDFS version CDH 5.8 or later, the Hadoop Snap Pack may fail to write to S3 files. To resolve this, make the following changes in Cloudera Manager (an equivalent core-site.xml property block is shown after the steps):

  1. Go to HDFS configuration.
  2. In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add an entry with the following details:
    • Name: fs.s3a.threads.max
    • Value: 15
  3. Click Save.
  4. Restart all the nodes.
  5. Under Restart Stale Services, select Re-deploy client configuration.
  6. Click Restart Now.
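
For reference, the safety-valve entry above corresponds to a standard Hadoop property in core-site.xml; the generated configuration is equivalent to the following property block:

  <property>
    <name>fs.s3a.threads.max</name>
    <value>15</value>
  </property>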

Temporary Files

During execution, data processing on Snaplex nodes occurs principally in memory as streaming and is unencrypted. When a dataset being processed exceeds the available compute memory, the Snap writes Pipeline data to local storage, unencrypted, to optimize performance. These temporary files are deleted when the Snap/Pipeline execution completes. You can configure the temporary data's location in the Global properties table of the Snaplex's node properties, which can also help avoid Pipeline errors due to the unavailability of disk space. For more information, see Temporary Folder in Configuration Options.

Examples


ORC Writer Writing to an HDFS Instance

Here is an example of an ORC Writer configured to write to a local instance of HDFS. The output is written to /tmp/orc-output. The configured Hive Metastore reads the schema from the employee_orc table in the masterdb database. No column paths or compression are used. For an example of the Schema, see the documentation on the Schema setting.



ORC Writer Writing to an S3 Instance

Here is an example of an ORC Writer configured to write to an S3 instance. The output is written to /tmp/orc-output. The configured Hive Metastore reads the schema from the employee_orc table in the masterdb database. No column paths or compression are used. For an example of the Schema, see the documentation on the Schema setting.



See Also

Read more about ORC at the Apache project's website, https://orc.apache.org/


Snap Pack History
Release | Snap Pack Version | Type | Updates

November 2024 | main29029 | Stable | Updated and certified against the current SnapLogic Platform release.

August 2024 | main27765 | Stable | Upgraded the org.json.json library from v20090211 to v20240303, which is fully backward compatible.

May 2024 | 437patches27226 | - | The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 has impacted the Hadoop Snap Pack, causing the following issue when using the WASB protocol.

Known Issue: When you use invalid credentials for the WASB protocol in Hadoop Snaps (HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, Parquet Writer), the pipeline does not fail immediately; instead, it takes 13-14 minutes to display the following error:

reason=The request failed with error code null and HTTP code 0. , status_code=error

SnapLogic® is actively working with Microsoft® Support to resolve the issue. Learn more about the Azure Storage Library Upgrade.

May 2024 | 437patches27471 | Latest | Fixed a resource leak issue with the following Hadoop Snaps, which involved too many stale instances of ProxyConnectionManager and significantly impacted memory utilization.

May 2024 | 437patches26370 | Latest | Enhanced the HDFS Writer Snap with the Write empty file checkbox to enable you to write an empty or 0-byte file to all the supported protocols that are recognized and compatible with the target system or destination.

May 2024 | main26341 | Stable | The Azure Data Lake Account has been removed from the Hadoop Snap Pack because Microsoft retired the Azure Data Lake Storage Gen1 protocol on February 29, 2024. We recommend replacing your existing Azure Data Lake Accounts (in Binary or Hadoop Snap Packs) with other Azure Accounts.

February 2024 | 436patches25902 | Latest | Fixed a memory management issue in the HDFS Writer, HDFS ZipFile Writer, ORC Writer, and Parquet Writer Snaps, which previously caused out-of-memory errors when multiple Snaps were used in the pipeline. The Snap now conducts a pre-allocation memory check, dynamically adjusting the write buffer size based on available memory resources when writing to ADLS.

February 2024 | 435patches25410 | Latest | Enhanced the AWS S3 Account for Hadoop with an External ID that enables you to access Hadoop resources securely.

February 2024 | main25112 | Stable | Updated and certified against the current SnapLogic Platform release.

November 2023 | 435patches23904 | Latest |
  • Fixed an issue with the Parquet Writer Snap that displayed the error "Failed to write parquet data" when the decimal value passed from the second input view exceeded the specified scale.
  • Fixed an issue with the Parquet Writer Snap that failed to convert BigInt/int64 (larger numbers) correctly after the 4.35 GA; the Snap now converts them accurately.

November 2023 | 435patches23780 | Latest | Fixed an issue related to error routing to the output view. Also fixed a connection timeout issue.

November 2023 | main23721 | Stable | Updated and certified against the current SnapLogic Platform release.

August 2023 | 434patches23173 | Latest | Enhanced the Parquet Writer Snap with a Decimal Rounding Mode dropdown list to enable the rounding method for decimal values when the number exceeds the required decimal places.

August 2023 | 434patches22662 | Latest |
  • Enhanced the Parquet Writer Snap with support for LocalDate and DateTime. The Snap now shows the schema suggestions for LocalDate and DateTime correctly.
  • Enhanced the Parquet Reader Snap with the Use datetime types checkbox, which supports the LocalDate and DateTime datatypes.

Behavior change: When you select the Use datetime types checkbox in the Parquet Reader Snap, the Snap displays LocalDate and DateTime in the output for INT32 (DATE) and INT64 (TIMESTAMP_MILLIS) columns. When you deselect this checkbox, the columns retain the previous datatypes and display string and integer values in the output.

August 2023 | main22460 | Stable | Updated and certified against the current SnapLogic Platform release.

May 2023 | 433patches22180 | Latest | Introduced the HDFS Delete Snap, which deletes the specified file, group of files, or directory from the supplied path and protocol in the Hadoop Distributed File System (HDFS).

May 2023 | 433patches21494 | Latest | The