Overview
You can use this Snap to read document data from its input view and write that data in binary Parquet format to its output view.
Snap Type
The Parquet Formatter Snap is a Format-type Snap that formats incoming document data as binary Parquet output.
Prerequisites
None.
Support for Ultra Pipelines
Works in Ultra Pipelines.
Limitations
None.
Known Issues
None.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | | | Requires document data as input. |
Output | Binary | | | Writes the document data in the binary (Parquet) format to the output. |
Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines. |
Snap Settings
Field Name | Field Type | Field Dependency | Description |
---|---|---|---|
Label* Default Value: Parquet Formatter Example: Hadoop Parquet Formatter | String | None. | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline. |
Edit Schema | Button | None. | Specify a valid Parquet schema that describes the data. |
The schema can be specified based on a Hive Metastore table schema or generated from suggest data. Save the Pipeline before editing the schema so that suggest data can be generated from the schema of incoming documents. If no suggest data is available, an example schema is generated along with documentation. Alter one of those schemas to describe the input data.
The following is an example of a schema using all the primitive types and some logical types:
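The sample schema itself was not preserved in this copy of the article. As an illustrative placeholder (field names are hypothetical, and this shows only a few primitive types plus UTF8 and timestamp logical-type annotations rather than all of them), a Parquet message schema has this shape:

```
message document {
  required int32 id;
  optional binary name (UTF8);
  optional double price;
  optional int64 updated (TIMESTAMP_MILLIS);
  optional boolean active;
}
```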
Compression Default Value: NONE Example: SNAPPY | Dropdown list | Select the type of compression to use when writing the file. The available options include NONE and SNAPPY. Note: Many compression algorithms require both Java and system libraries and fail if the latter are not installed. If you see unexpected errors, ask your system administrator to verify that all the required system libraries are installed; they are typically not installed by default. The system libraries have names such as liblzo2.so.2 or libsnappy.so.1 and are usually located in the /usr/lib/x86_64-linux-gnu directory. |
Decimal rounding mode Default Value: Half up Example: Up | Dropdown list | Select the rounding method to apply to decimal values when they exceed the required number of decimal places. |
Snap execution Default Value: Validate & Execute Example: Validate & Execute | Dropdown list | Select one of the following three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
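The Decimal rounding mode options correspond to standard decimal rounding behavior. A short Python sketch (illustrative only; it is not the Snap's implementation) shows how "Half up" and "Up" treat a value that exceeds two decimal places:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_UP

value = Decimal("2.341")

# "Half up": round to the nearest value; ties round away from zero.
half_up = value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(half_up)  # 2.34

# "Up": always round away from zero when digits are dropped.
up = value.quantize(Decimal("0.01"), rounding=ROUND_UP)
print(up)  # 2.35
```

Here "Half up" keeps 2.341 at 2.34 because the dropped digit (1) is below 5, while "Up" always rounds the magnitude upward to 2.35.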
Troubleshooting
Error | Reason | Resolution |
---|---|---|
Account validation failed. | The Pipeline ended before the batch could complete execution due to a connection error. | Verify that the Refresh token field is configured to handle the inputs properly. If you are not sure when the input data is available, configure this field as zero to keep the connection always open. |
Examples
Excluding Fields from the Input Data Stream
...