Hyper Write
Overview
You can use this Snap to create a Hyper file on the local disk from incoming documents and publish it to Tableau Server/Online without packaging it into a data source.
Support for Ultra Pipelines
This Snap does not work in Ultra Pipelines.
Limitations
To publish Hyper files without packaging them as a data source, the file must contain exactly one table in a single schema.
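Because unpackaged publishing requires exactly one table in one schema, a pre-publish check can catch violations early. A minimal sketch in Python, assuming you can enumerate the extract's schemas and their tables (the `schema_tables` mapping here is hypothetical, not part of the Snap's API):

```python
def can_publish_unpackaged(schema_tables):
    """Return True only if the extract holds exactly one schema
    containing exactly one table (the Hyper Write limitation)."""
    if len(schema_tables) != 1:
        return False
    (tables,) = schema_tables.values()
    return len(tables) == 1

# A single "Extract" schema with one table qualifies:
print(can_publish_unpackaged({"Extract": ["Extract"]}))   # True
# Two tables in the schema do not:
print(can_publish_unpackaged({"Extract": ["t1", "t2"]}))  # False
```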
Known Issues
None.
Snap Views
| Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
|---|---|---|---|---|
| Input | Document | | CSV Parser | Parsed CSV data stream. |
| Output | Document | | JSON Formatter | Creates a Hyper file. |
Snap Settings
| Field Name | Field Type | Description |
|---|---|---|
| Label*<br>Default Value: Hyper Write | String | The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline. |
| Datasource<br>Default Value: N/A | String | Specify the name of the Tableau data source on the server. If left empty, the Snap uses the current date and time as the default value. |
| Project<br>Default Value: N/A | String | Specify the name of the project to which the Hyper file should be published. |
| Schema Name<br>Default Value: Extract | Expression/String | Specify a schema name for the Tableau extract. If left empty, the Snap uses the default schema name, Extract. |
| Overwrite<br>Default Value: Deselected | Checkbox | Select this checkbox to overwrite an existing data source with the same name. |
| Append<br>Default Value: Deselected | Checkbox | Select this checkbox to append data to an existing data source with the same name. |
| Certificate Check<br>Default Value: Selected | Checkbox | Select this checkbox to have the tabcmd process validate the Tableau Server SSL certificate. |
| Snap Execution<br>Default Value: Validate & Execute | Dropdown list | Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
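The Datasource fallback described above (the current date and time when the field is left empty) can be sketched in Python; the exact timestamp format the Snap uses is an assumption here:

```python
from datetime import datetime

def datasource_name(configured: str = "") -> str:
    """Fall back to the current date and time when no Datasource
    name is configured (the format shown is illustrative)."""
    if configured:
        return configured
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

print(datasource_name("Sales2024"))  # configured name wins
print(datasource_name())             # e.g. "2024-05-01 12:30:45"
```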
Example
Creating a Hyper File and Writing It to the Local Database
This example Pipeline demonstrates how to use the Hyper Write Snap to create a Hyper file and write it to the local database.
Initially, we configure the Pipeline with the File Reader Snap to read the 123.csv file from the SLDB.
Upon validation, we see the following binary data in the output preview of the Snap.
Next, we configure the CSV Parser Snap to parse the CSV file.
Upon validation, we see the parsed CSV data in the output preview of the Snap.
Next, we configure the Hyper Write Snap to create a .hyper file and publish it without packaging it into a data source.
Upon validation, we see the following hyper data in the output preview of the Snap.
Next, we configure the JSON Formatter Snap to read the hyper document stream and write the data to output.
Upon validation, we see the following output in the preview of the Snap.
Finally, we configure the File Writer Snap to write the Hyper file to the SLDB.
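The flow above (read file, parse CSV, format documents, write output) can be sketched outside SnapLogic with the Python standard library. The column names and file contents are illustrative, and the Hyper encoding performed by the Snap itself is omitted:

```python
import csv
import io
import json

# Stand-in for File Reader: raw CSV bytes (contents are illustrative).
raw = b"id,name\n1,Alice\n2,Bob\n"

# Stand-in for CSV Parser: binary stream -> one document (dict) per row.
documents = list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))

# Stand-in for JSON Formatter: documents -> a JSON byte stream.
formatted = json.dumps(documents, indent=2).encode("utf-8")

# Stand-in for File Writer: persist the stream locally.
with open("output.json", "wb") as f:
    f.write(formatted)

print(documents)
```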
Downloads
Download and import the Pipeline into SnapLogic.
Configure Snap accounts as applicable.
Provide Pipeline parameters as applicable.
Snap Pack History
See Also
© 2017-2024 SnapLogic, Inc.