Hadoop Snap Pack
Overview
Apache™ Hadoop® is an open-source software framework for the storage and processing of large datasets.
Use Snaps in this Snap Pack to read data from and write data to the Hadoop File System (HDFS).
For Hadoop with Kerberos, you must install a few Kerberos client utilities, such as kinit and kdestroy, on the Snaplex.
Additionally, the Hadoop Snaps use Hadoop libraries that invoke a few system programs internally. The SnapLogic Platform does not support the installation of utilities or processes on Cloudplexes.
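As a pre-flight check, a sketch like the following can confirm that the Kerberos client utilities are available on the Groundplex node. Note that klist is an assumption on our part (the list above names kinit and kdestroy "and so on"); adjust the list for your environment.

```shell
# Hypothetical pre-flight check: verify that the Kerberos client utilities
# the Hadoop Snaps rely on are available on the node's PATH.
for util in kinit kdestroy klist; do
  if command -v "$util" >/dev/null 2>&1; then
    echo "$util: found"
  else
    echo "$util: MISSING (install your distribution's Kerberos client package)"
  fi
done
```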
Prerequisites
A Groundplex must be configured as a Hadoop client for this integration to work. The JAR files and property files that must be installed depend on the version and vendor of your Hadoop File System. Refer to your vendor's documentation for more information.
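As an illustrative sanity check, the snippet below confirms that the usual Hadoop client configuration files are in place on the node. The /etc/hadoop/conf path is a common default, not a SnapLogic requirement; your vendor may install the client configuration elsewhere.

```shell
# Hypothetical check: verify that the Hadoop client configuration files are
# present. The directory below is a common default, not a SnapLogic requirement.
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-/etc/hadoop/conf}"
for f in core-site.xml hdfs-site.xml; do
  if [ -f "$HADOOP_CONF_DIR/$f" ]; then
    echo "$f: present"
  else
    echo "$f: missing from $HADOOP_CONF_DIR"
  fi
done
```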
Supported Versions
This Snap Pack is tested against:
CDH 5.8
CDH 5.10
CDH 5.16
CDH 5.16.1
CDH 6.1.1
HDP 2.6.1
HDP 2.6.3
HDP 2.6.3.1
Known Issue
The upgrade of the Azure Storage library from v3.0.0 to v8.3.0 caused the following issue when using the WASB protocol:
When you use invalid credentials for the WASB protocol in the HDFS Reader, HDFS Writer, ORC Reader, Parquet Reader, or Parquet Writer Snaps, the Pipeline does not fail immediately; instead, it takes 13-14 minutes to display this error: reason=The request failed with error code null and HTTP code 0., status_code=error
SnapLogic is actively working with Microsoft Support to resolve the issue.
Customizing the Location of the Temporary Directory
Snaps in the Hadoop Snap Pack briefly save a temporary file in the system while processing and before passing the contents to a downstream Snap. The temporary file is stored in a default location and is automatically deleted after the process is complete.
You can change the location of the temporary file to a custom location by using the global property jcc.jvm_options.
You can use either of the two methods in this section to change the temporary file location.
Modifying the java.io.tmpdir property does not change the location of the temporary file.
Method 1: Specifying the Temporary Location in the SnapLogic Manager
1. In SnapLogic Manager, on the Snaplexes tab, select the name of the applicable Snaplex.
2. In the Update Snaplex dialog, on the Node Properties tab, under Global properties, add the global property jcc.jvm_options = -Dhadoop.tmp.dir=/tmp/syd, where /tmp/syd is the new location in which to save the temporary file.
3. Click Update, then click OK in the Snaplex Update Notice. This sets the new location for the temporary file and restarts the Snaplex with the new property setting.
Method 2: Manually Changing the Location of the Temporary File in Global Properties
You can change the temporary file location from the default to another location by editing the global properties file in your local SnapLogic environment.
1. Access the /etc folder in your SnapLogic installation:
For a Linux installation, enter the command cd $SL_ROOT/etc at the command prompt.
For a Windows installation, enter the command cd %SL_ROOT%\etc at the command prompt.
2. Open the global.properties file and add the following entry: jcc.jvm_options = -Dhadoop.tmp.dir=/tmp/syd/
3. Save the file and restart the Snaplex to apply the new location for the temporary file.
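The steps above can be sketched as a shell snippet for a Linux Groundplex. Assumptions on our part: SL_ROOT points at the SnapLogic install root (defaulted here to /opt/snaplogic), the properties file is etc/global.properties, and the node is restarted with the jcc.sh control script; adjust all of these for your environment.

```shell
# Sketch of Method 2 on a Linux Groundplex (paths and script names are
# assumptions, not an official procedure).
SL_ROOT="${SL_ROOT:-/opt/snaplogic}"
PROPS="$SL_ROOT/etc/global.properties"
ENTRY='jcc.jvm_options = -Dhadoop.tmp.dir=/tmp/syd'

if [ -f "$PROPS" ]; then
  # Append the property only if no hadoop.tmp.dir setting is already present.
  grep -qF 'hadoop.tmp.dir' "$PROPS" || printf '%s\n' "$ENTRY" >> "$PROPS"
  # Restart the node so the JVM picks up the new option.
  "$SL_ROOT/bin/jcc.sh" restart
else
  echo "global.properties not found at $PROPS; check your installation path"
fi
```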
A folder, s3a, is created under the temporary location path to store the temporary file.
Known Issues
After upgrading your Snaplex to the 4.20 GA version, Pipelines with HDFS Reader Snap that use Kerberos authentication might remain in the start state.
ORC Writer/Reader Snaps fail on S3 when using the 4.20 Snaplex with the previous Snap Pack version (hadoop8270 and snapsmrc528), displaying this error: Unable to read input stream, Reason: For input string: "100M".
ORC Reader/Writer and Parquet Reader/Writer Snaps fail in 4.20 when executing a Pipeline on S3 with this error: org.apache.hadoop.fs.StorageStatistics.
© 2017-2025 SnapLogic, Inc.