Hyper Formatter

Overview

You can use this Snap to convert incoming documents into a Tableau hyper extract file and send the extract to the binary output view.

You can configure multiple input views in this Snap because a hyper file can contain multiple tables. The table names in the hyper file are taken from the View Labels that you configure in Views.

Prerequisites

  • Tableau 10.5 or later.

  • To run Tableau hyper Pipelines in a plex, you must set the environment variable or system property TABLEAU_HYPER_LIBS to the path of the hyper/ directory (excluding the hyper folder) on the node where the JCC is running.

  • Tableau Hyper Snaps use Tableau Hyper API version 21.0.0.12982. You must have the Java Native Access (JNA) and shared libraries of the same API version for the operating system on which you run your Tableau Pipelines. You can download the files for your operating system (Windows, macOS, or Linux) from this link: https://www.tableau.com/support/releases/hyper-api/0.0.12982.
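Before running Tableau hyper Pipelines, it can help to verify that the property above actually points at the extracted libraries. The following is a minimal, hypothetical sanity check (the helper name is ours, not part of the product):

```python
import os

def hyper_libs_configured() -> bool:
    """Return True if TABLEAU_HYPER_LIBS points at an existing directory.

    Hypothetical helper for sanity-checking a node before running
    Tableau hyper Pipelines; the property name comes from the
    prerequisites above.
    """
    path = os.environ.get("TABLEAU_HYPER_LIBS", "")
    return bool(path) and os.path.isdir(path)
```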

Support for Ultra Pipelines

Does not work in Ultra Pipelines

Limitations

None.

Known Issues

None.

Snap Input and Output

Input

  • Type of View: Document

  • Number of Views: Min: 1, Max: ∞

  • Examples of Upstream Snaps: Mapper, Copy, any Parser Snap

  • Description: Data in document format.

Output

  • Type of View: Binary

  • Number of Views: Min: 1, Max: 1

  • Examples of Downstream Snaps: File Writer, S3 File Writer

  • Description: Document in binary format.

Snap Settings

Label*

  • Field Dependency: None.

  • Description: Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.

  • Default Value: Hyper Formatter

  • Example: Hyper Formatter

Schema Name

  • Field Dependency: N/A

  • Description: Specify a schema name for the Tableau extract. If left empty, the Snap uses the default schema name Extract.

  • Default Value: Extract

  • Example: Extract

Snap Execution

  • Field Dependency: N/A

  • Description: Select one of the three modes in which the Snap executes. Available options are:

      • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

      • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

      • Disabled: Disables the Snap and all Snaps that are downstream from it.

  • Default Value: Validate & Execute

  • Example: Execute only

Example

Transforming Data into Hyper Format and Publishing the Datasource

This example Pipeline demonstrates how to transform data into hyper extract format and publish the datasource to the Tableau server. Publishing a datasource to the Tableau server is a three-step process:

  1. Initiate file upload

  2. Append to file upload

  3. Publish datasource

We have three Tableau REST Snaps in this Pipeline, each calling one of these three APIs: initiate_file_upload, append_file_upload, and publish_datasource.
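As an orientation aid, the three operations correspond to three Tableau REST API endpoints. The sketch below builds the URLs only; the server address and API version are illustrative placeholders, not values from this Pipeline:

```python
# Illustrative base URL; replace with your Tableau server and API version.
BASE = "https://my-server/api/3.4"

def initiate_file_upload_url(site_id: str) -> str:
    # A POST here opens an upload session and returns an uploadSessionId.
    return f"{BASE}/sites/{site_id}/fileUploads"

def append_file_upload_url(site_id: str, upload_session_id: str) -> str:
    # A PUT here appends a chunk of the .hyper file to the session.
    return f"{BASE}/sites/{site_id}/fileUploads/{upload_session_id}"

def publish_datasource_url(site_id: str, upload_session_id: str,
                           overwrite: bool = True) -> str:
    # A POST here publishes the uploaded file as a datasource.
    return (f"{BASE}/sites/{site_id}/datasources"
            f"?uploadSessionId={upload_session_id}"
            f"&datasourceType=hyper&overwrite={str(overwrite).lower()}")
```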

Initially, we pass the following parameters in the Pipeline as key-value pairs:

  • projectid: 87473786-9edd-495d-9820-c9ecbc928fe1

  • siteid: b3013847-c198-4a1e-8650-9f43f9d5e9f2

  • hyperfilename: test.hyper

We begin by configuring the JSON Generator Snap to pass product information with the required sub-elements: Category, Product ID, Product Name, and Sub-Category.
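A document emitted by the JSON Generator might look like the following hypothetical sample (the field values are invented for illustration; only the field names come from the configuration above):

```python
# Hypothetical sample document with the sub-elements listed above.
product = {
    "Category": "Furniture",
    "Product ID": "FUR-BO-10001798",
    "Product Name": "Somerset Collection Bookcase",
    "Sub-Category": "Bookcases",
}
```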

Upon validation, we get the product data in the output preview of the Snap.

Next, we configure the Hyper Formatter Snap to format the JSON data output into Tableau hyper format. We configure the views with one input view, Table_name_product. This View Label is used as the table name in the hyper file in the output.

Hyper Formatter Snap Configuration

Hyper Formatter Views


Next, we configure the File Writer Snap to write the transformed hyper format data file, test.hyper, to the SL database. Upon validation, the hyper data file test.hyper is written to the SL database.

File Writer Snap Configuration

File Writer Output


Next, we configure the Mapper Snap to pass inputs for initializing the file upload: the datasource type, siteId, and whether the file should be overwritten.

Upon validation, we get the following data in the output preview of the Snap.

Next, we configure the Tableau REST Snap to initiate the file upload using the initiate_file_upload REST operation.

Upon validation, the Snap establishes an upload session and creates a unique session ID. Since the file is not uploaded in this operation, the value of the $fileSize field is 0. A preview of the output from the Tableau REST Snap is as shown below:
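Outside SnapLogic, the equivalent initiate call can be pictured as a plain HTTP request. The sketch below only constructs the request without sending it; the server URL, API version, and auth token are placeholders:

```python
from urllib.request import Request

# siteid Pipeline parameter from the example above.
site_id = "b3013847-c198-4a1e-8650-9f43f9d5e9f2"

# Sketch (not sent): the initiate_file_upload call as a bare HTTP request.
req = Request(
    f"https://my-server/api/3.4/sites/{site_id}/fileUploads",
    headers={"X-Tableau-Auth": "CREDENTIALS-TOKEN"},  # placeholder token
    method="POST",
)
# The response to this request carries the uploadSessionId; fileSize is 0
# because no bytes have been uploaded yet.
```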

Next, we configure the second Mapper Snap to append the uploaded file. Hence, we pass the fileSize, uploadSessionId, and siteId values to the downstream Tableau REST Snap.

Upon validation, we get the following output (fileSize, uploadSessionId, and siteId).

Next, we configure the Tableau REST Snap to append the file to the Tableau Online site (repository/folder) in which it will be placed. We use the REST operation append_file_upload to append the test.hyper file and provide the target file's name. This operation creates a resource location for the uploaded file (test.hyper).

Upon validation, the file is uploaded in chunks to the site (based on $siteId). In the Snap's output preview we can notice that the session ID of the upload session is the same as that generated in the "Initiate" step.
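The chunked upload described above can be pictured as reading the local hyper file in fixed-size pieces, each piece becoming one append call. This is a minimal sketch; the chunk size is illustrative, not a SnapLogic setting:

```python
def read_in_chunks(path: str, chunk_size: int = 64 * 1024 * 1024):
    """Yield successive chunks of a file, mirroring how append_file_upload
    sends test.hyper to the site in pieces. The 64 MB size is illustrative."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return
            yield chunk
```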

The file is now ready to be published. Next, we configure the Mapper Snap to map the fields from the output of the second Tableau REST Snap for use in the downstream Snap. So, we pass the datasource type, uploadSessionId, siteId, hyperfilename, and projectId, and specify whether to overwrite the file.
The uploadSessionId is an input from the previous REST operation, whereas siteId, hyperfilename, and projectId are hardcoded values in the Pipeline parameters.

Upon validation, we get the following output in the preview of the Snap.

Next, we configure the last Tableau REST Snap to publish the datasource to the Tableau instance. This Snap uses the inputs from the upstream Snap to determine how to publish the file online. We use the REST operation publish_datasource to accomplish this task.
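For context, the publish operation sends a small XML payload naming the datasource and its target project. The sketch below builds such a payload under the assumption that it follows the Tableau REST API's tsRequest shape; the datasource name is illustrative:

```python
import xml.etree.ElementTree as ET

def publish_request_xml(datasource_name: str, project_id: str) -> bytes:
    """Build a tsRequest payload of the kind publish_datasource sends.
    The element shape follows the Tableau REST API; values are illustrative."""
    ts = ET.Element("tsRequest")
    ds = ET.SubElement(ts, "datasource", name=datasource_name)
    ET.SubElement(ds, "project", id=project_id)
    return ET.tostring(ts)

# projectid Pipeline parameter from the example above.
body = publish_request_xml("test", "87473786-9edd-495d-9820-c9ecbc928fe1")
```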

Upon validation, we get the following output in the preview of the Snap.

Upon executing the Pipeline, the hyper extract file test.hyper is uploaded to the Tableau instance and stored under the External Assets group.

Once the file is uploaded to Tableau, we can use this data for visualization of product information in terms of graphs or pie charts.

Download this Pipeline

Downloads

Important Steps to Successfully Reuse Pipelines

  1. Download and import the Pipeline into SnapLogic.

  2. Configure Snap accounts as applicable.

  3. Provide Pipeline parameters as applicable.

File: Example_Tableau_HyperFormatter.slp

Modified: Jul 23, 2021 by Kalpana Malladi

Snap Pack History


See Also