...
1. Download `hadoop.dll` and `winutils.exe` from https://github.com/cdarlint/winutils/tree/master/hadoop-3.2.2/bin (SnapLogic's Hadoop version is 3.2.2).
2. Create a temporary directory.
3. Place the `hadoop.dll` and `winutils.exe` files in this path: `C:\hadoop\bin`
4. Set the environment variable `HADOOP_HOME` to point to `C:\hadoop`.
5. Add `C:\hadoop\bin` to the `PATH` environment variable.
6. Add the JVM options in the Windows Snaplex:
   `jcc.jvm_options = -Djava.library.path=C:\\hadoop\bin`
   If you already have existing `jvm_options`, append `"-Djava.library.path=C:\\hadoop\bin"` after a space. For example:
   `jcc.jvm_options = -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -Djava.library.path=C:\\hadoop\bin`
7. Restart the JCC for the configuration to take effect.
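Before restarting the JCC, you can sanity-check the setup with a short script. This is an illustrative sketch, not part of SnapLogic, and it assumes the `C:\hadoop` layout from the steps above.

```python
import os
from pathlib import Path

# Verify the Hadoop native-library setup described in the steps above.
hadoop_home = os.environ.get("HADOOP_HOME")
if not hadoop_home:
    raise SystemExit("HADOOP_HOME is not set")

bin_dir = Path(hadoop_home) / "bin"
for name in ("hadoop.dll", "winutils.exe"):
    path = bin_dir / name
    print(f"{path}: {'found' if path.exists() else 'MISSING'}")

# The bin directory must also be on PATH so the JVM can locate hadoop.dll.
on_path = str(bin_dir).lower() in os.environ.get("PATH", "").lower()
print(f"{bin_dir} on PATH: {on_path}")
```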
...
| Field Name | Field Type | Description |
| --- | --- | --- |
| Label* Default Value: Parquet Writer | String | Specify a unique name for the Snap. |
| Directory Default Value: hdfs://<hostname>:<port>/ | String/Expression/Suggestion | Specify the file path to the directory to write data to. The path must be in the following format: The following file storage systems are supported: The Directory property is not used in Pipeline execution or preview; it is used only in the Suggest operation. When you click the Suggest icon, the Snap displays a list of subdirectories under the given directory. It generates the list by applying the value of the File Filter property. |
| File Filter Default Value: * | String/Expression | Specify the glob file pattern. Glob patterns are used to display a list of directories or files when you click the Suggest icon in the Directory or File property. A complete glob pattern is formed by combining the value of the Directory property with the File Filter value; if the Directory value does not end with "/", the Snap appends one so that the File Filter value is applied to the directory specified in the Directory property. See the glob sketch after this table for an illustration. |
| File Default Value: N/A | String/Expression/Suggestion | Specify the filename or a relative path to a file under the directory given in the Directory property. It should not start with a URL separator "/". |
| Hive Metastore URL Default Value: N/A | String/Expression | Specify the URL of the Hive Metastore to assist in setting the schema, along with the Database and Table settings. If the data being written has a Hive schema, the Snap can be configured to read that schema instead of entering it manually. Set the value to the Hive Metastore URL where the schema is defined. |
| Database Default Value: N/A | String/Expression/Suggestion | Specify the Hive Metastore database that holds the schema for the outgoing data. |
| Table Default Value: N/A | String/Expression/Suggestion | Specify the table whose schema should be used for formatting the outgoing data. |
| Fetch Hive Schema at Runtime Default Value: Deselected | Checkbox | Select this checkbox to fetch the schema from the Metastore table before writing. When selected, the Metastore schema is used instead of the schema set in the Snap's Edit Schema property. The Snap fails to write if it cannot connect to the Metastore or if the table does not exist during Pipeline execution. |
| Edit Schema | Button | Specify a valid Parquet schema that describes the data. The schema can be based on a Hive Metastore table schema or generated from suggest data. Save the Pipeline before editing the schema to generate suggest data that assists in specifying the schema based on the schema of incoming documents. If no suggest data is available, an example schema is generated along with documentation; alter one of those schemas to describe the input data. An example schema using the primitive types and some logical types is shown after this table. |
| Compression* | Dropdown list | Select the type of compression to use when writing the file. The available options are: |
| Partition by Default Value: N/A | String/Suggestion | Specify or select the key used to derive the 'Partition by' folder name. All input documents must contain this key name; otherwise, an error document is written to the error view. Refer to the 'Partition by' example below for an illustration. |
| Azure SAS URI Properties | | Shared Access Signatures (SAS) properties of the Azure Storage account. |
| SAS URI | String/Expression | Specify the Shared Access Signatures (SAS) URI to use to access the Azure Storage blob folder specified in the Azure Storage account. You can get a valid SAS URI either from the Shared access signature settings in the Azure Portal or by generating one with the SAS Generator Snap. |
| Decimal Rounding Mode Default Value: Half up Example: Up | Dropdown list | Select the rounding method applied to decimal values when they exceed the required number of decimal places; see the rounding sketch after this table. The available options are: |
| Timestamp Parquet type Default Value: INT96 | Dropdown list | Choose the appropriate Parquet type for your timestamp schema based on the format of the timestamp data. This schema is passed from the second input view. If the timestamp data is in string format, a date-time object, or milliseconds, select one of the INT64 types (Millis, Micros, or Nanos) from the dropdown; see the timestamp sketch after this table. The available options are: |
| Snap Execution | Dropdown list | Select one of the following three modes in which the Snap executes: Validate & Execute: Performs limited execution of the Snap and generates a data preview during Pipeline validation; subsequently performs full execution of the Snap (unlimited records) during Pipeline runtime. |
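As referenced in the File Filter row above, the Directory and File Filter values combine into a single glob pattern. The following Python sketch mirrors that documented behavior; `build_glob` is a hypothetical helper for illustration, not SnapLogic code.

```python
from fnmatch import fnmatch

def build_glob(directory: str, file_filter: str) -> str:
    """Combine the Directory and File Filter values into one glob pattern."""
    # Per the docs, the Snap appends "/" when Directory does not end with one.
    if not directory.endswith("/"):
        directory += "/"
    return directory + file_filter

print(build_glob("hdfs://host:8020/data", "*.parquet"))
# -> hdfs://host:8020/data/*.parquet

# A candidate filename is matched against the filter portion, glob-style:
print(fnmatch("part-0001.parquet", "*.parquet"))  # True
```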
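The Edit Schema row above refers to an example schema. A representative Parquet schema covering the primitive types and a few logical-type annotations looks like the following; this is an illustrative sketch, not the exact text the Snap generates:

```
message document {
    optional boolean booleanField;
    optional int32 int32Field;
    optional int64 int64Field;
    optional int96 int96Field;
    optional float floatField;
    optional double doubleField;
    optional binary binaryField;
    optional fixed_len_byte_array(16) fixedField;
    optional binary stringField (UTF8);
    optional int32 dateField (DATE);
    optional int64 timestampField (TIMESTAMP_MILLIS);
    optional int32 decimalField (DECIMAL(9,2));
}
```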
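The Decimal Rounding Mode names (Half up, Up) appear to correspond to the standard rounding modes; the sketch below uses Python's `decimal` module to show how the default (Half up) differs from Up. Illustrative only, not SnapLogic code.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_UP

TWO_PLACES = Decimal("0.01")  # round to two decimal places

# Half up (the default): ties round away from zero.
print(Decimal("2.345").quantize(TWO_PLACES, rounding=ROUND_HALF_UP))  # 2.35
print(Decimal("2.344").quantize(TWO_PLACES, rounding=ROUND_HALF_UP))  # 2.34

# Up (the example value): always rounds away from zero.
print(Decimal("2.341").quantize(TWO_PLACES, rounding=ROUND_UP))   # 2.35
print(Decimal("-2.341").quantize(TWO_PLACES, rounding=ROUND_UP))  # -2.35
```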
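The Timestamp Parquet type options determine how timestamps are physically encoded in the Parquet file. The sketch below uses pyarrow as an assumed stand-in (the Snap handles this internally) to write the same timestamp as INT64 millis and as legacy INT96.

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({"ts": pd.to_datetime(["2024-01-01T12:00:00"])})
table = pa.Table.from_pandas(df, preserve_index=False)

# INT64 with millisecond precision (the "Millis" option; "us"/"ns"
# correspond to the Micros/Nanos options).
millis = table.cast(pa.schema([("ts", pa.timestamp("ms"))]))
pq.write_table(millis, "timestamp_int64_millis.parquet")

# Legacy INT96 timestamps (the Snap's default) require an explicit flag.
pq.write_table(table, "timestamp_int96.parquet",
               use_deprecated_int96_timestamps=True)
```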
...
| Error | Reason | Resolution |
| --- | --- | --- |
| Unable to connect to the Hive Metastore. | This error occurs when the Parquet Writer Snap is unable to fetch the schema for a Kerberos-enabled Hive Metastore. | Pass the Hive Metastore's schema directly to the Parquet Writer Snap. To do so: |
| Parquet Snaps may not work as expected in the Windows environment. | Because of limitations in the Hadoop library on Windows, Parquet Snaps do not function as expected. | To use the Parquet Writer Snap on a Windows Snaplex, follow the Hadoop library setup steps at the beginning of this section. |
...
...