Adobe Experience Platform Write

Overview

Use this Snap to restructure and write input JSON data as JSON or Parquet files into Adobe Experience Platform (AEP).

Prerequisites

  • The dataset into which you want to write the output of the Adobe Experience Platform Write Snap. If you do not have the dataset ready, you must create it first using the Adobe Experience Platform UI or the Adobe Experience Platform S3 Connector Snap.

Snap Input and Output

Input
  • Type of View: Document
  • Number of Views: Min: 1; Max: 1
  • Compatible Upstream and Downstream Snaps: Mapper Snap, Binary to Document Snap, Sort Snap
  • Description: Each input document contains data that you want to write to AEP.

Output
  • Type of View: Document
  • Number of Views: Min: 0; Max: 1
  • Compatible Upstream and Downstream Snaps: Mapper Snap, Join Snap, Union Snap
  • Description: Each output document lists the status and other details associated with the write process.

Snap Settings

Label
  • Data Type: String
  • Description: Required. The name for the Snap. Modify this to be more specific, especially if there is more than one of the same Snap in the Pipeline.
  • Default Value: Adobe Experience Platform Write
  • Example: Write Documents into AEP

Dataset Name
  • Data Type: String
  • Description: Required. The name of the dataset into which you want to write the input data. This is a suggestible field; click the Suggest icon to view dataset suggestions that you can use.
    This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.
    This Snap reads Pipeline parameters, but not values from upstream Snaps. You can, however, add this Snap to Pipelines with other Snaps to execute them in a sequence.
  • Default Value: N/A
  • Example: Adobe Generic Dataset

Schema Type
  • Data Type: String
  • Description: Required. The schema type to use for the file that the Snap writes. Select one of the following options (a minimal sketch illustrating both formats follows this table):
      • Parquet
      • JSON
  • Default Value: Parquet
  • Example: Parquet

Snap Execution
  • Data Type: String
  • Description: Select one of the three modes in which the Snap executes:
      • Validate & Execute: Performs limited execution of the Snap and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
      • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
      • Disabled: Disables the Snap and all Snaps that are downstream from it.
  • Default Value: Execute only
  • Example: Validate & Execute
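
To illustrate the two Schema Type options, the following is a minimal Python sketch, not part of the Snap, that writes the same record once as a JSON file and once as a Parquet file. It assumes the pandas and pyarrow libraries and uses an invented sample record.

```python
import json
import pandas as pd  # assumes pandas with the pyarrow engine installed

# Hypothetical record, standing in for one input document arriving at the Snap.
record = {"customerId": "C-1001", "email": "jdoe@example.com", "purchaseTotal": 129.99}

# Schema Type = JSON: the data is written out as a JSON file.
with open("output.json", "w") as f:
    json.dump(record, f)

# Schema Type = Parquet: the data is written as a columnar Parquet file.
pd.DataFrame([record]).to_parquet("output.parquet", engine="pyarrow")
```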

Limitations

While the Dataset Name field is expression-enabled, its purpose is to read Pipeline parameter values; this Snap does not read values from upstream Snaps. You can, however, add this Snap to Pipelines with other Snaps to execute them in a sequence.

Troubleshooting

Missing property value
  • Reason: The Snap settings are not correct. You may see this error because you did not provide a mandatory value.
  • Resolution: Check to ensure that all mandatory values are provided and valid.

Invalid Snap configuration
  • Reason: Target Dataset Name is null or missing. This typically means that you did not specify the dataset into which the Parquet file must be written, or that the specified dataset name doesn't exist in AEP.
  • Resolution: Ensure that the Dataset Name property has a valid dataset name. Use the drop-down button adjacent to the Dataset Name field to select a valid dataset.

Unable to load the private key for the given alias
  • Reason: File not found on <your SnapLogic instance> at <account location>. This error appears when the account doesn't have valid values.
  • Resolution: Ensure that the keystore path, keystore passphrase, private key alias, and private key passphrase are correct. Also, you must upload the keystore to the Keystore Path that you specify.

Unable to obtain access token
  • Reason: The endpoint account details are incorrect.
  • Resolution: Check your Organization ID, Technical Account ID, Client ID, and Client Secret Key details and try again.
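
The last two errors relate to the JWT-based account that the Snap uses to authenticate with AEP. As background, here is a rough Python sketch of the token exchange that the account performs on your behalf, assuming Adobe's IMS JWT exchange endpoint and the PyJWT and requests libraries; the host, metascope, and credential values shown are placeholders to adapt, not values taken from this Snap Pack.

```python
import time
import jwt       # PyJWT, used to sign the JWT with the private key
import requests

IMS_HOST = "https://ims-na1.adobelogin.com"   # assumption: standard Adobe IMS host
ORG_ID = "<organization-id>"
TECH_ACCOUNT_ID = "<technical-account-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret-key>"
PRIVATE_KEY = open("private.key").read()      # key material from your keystore

# Build and sign the JWT. A wrong alias or passphrase in the account surfaces
# in the Snap as "Unable to load the private key for the given alias".
payload = {
    "exp": int(time.time()) + 300,
    "iss": ORG_ID,
    "sub": TECH_ACCOUNT_ID,
    "aud": f"{IMS_HOST}/c/{CLIENT_ID}",
    f"{IMS_HOST}/s/ent_dataservices_sdk": True,   # placeholder metascope
}
signed_jwt = jwt.encode(payload, PRIVATE_KEY, algorithm="RS256")

# Exchange the JWT for an access token. Incorrect Organization ID, Technical
# Account ID, Client ID, or Client Secret Key values surface as
# "Unable to obtain access token".
resp = requests.post(
    f"{IMS_HOST}/ims/exchange/jwt",
    data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET, "jwt_token": signed_jwt},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```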

Examples

Writing JSON Documents from S3 as Parquet Files in AEP

In this example, you retrieve a JSON document from your AWS S3 account and write it as a Parquet file into a target AEP dataset. To do so, you perform the following tasks:

  1. Read the source file from AWS S3.
  2. Map fields in your source document to elements in your target AEP schema.
  3. Write the restructured data into the target dataset in AEP as Parquet files.

You design the Pipeline as shown below:

Read source data from AWS S3

You open the S3 File Reader Snap and configure it to retrieve the file that you want to write into AEP.

You attach a JSON Parser Snap to the S3 File Reader Snap to parse and view the incoming JSON data.
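
Outside SnapLogic, the equivalent of these two Snaps is a read from S3 followed by a JSON parse. A minimal Python sketch, assuming the boto3 library and a hypothetical bucket and object key:

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key; the S3 File Reader Snap points at the same file.
obj = s3.get_object(Bucket="my-source-bucket", Key="exports/customers.json")

# The JSON Parser Snap performs the equivalent of this step on the binary stream.
document = json.loads(obj["Body"].read())
print(document)
```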

Map fields in your source document to elements in your target AEP schema

You must now prepare the JSON document for ingestion into AEP as a Parquet file. To do so, you use the Mapper Snap.

You configure the Mapper Snap to take all the data coming in from the upstream Snaps as one entity and attach a randomly generated ID and a timestamp as two additional entities. The Mapper output is now ready to be ingested into AEP.
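
Conceptually, the Mapper configuration described above does something like the following Python sketch: it keeps the parsed source data as a single entity and attaches a randomly generated ID and a timestamp. The field names (id, timestamp, data) are illustrative, not the actual target AEP schema.

```python
import uuid
from datetime import datetime, timezone

def restructure(document: dict) -> dict:
    """Wrap the upstream document and attach a random ID and a timestamp."""
    return {
        "id": str(uuid.uuid4()),                              # randomly generated ID
        "timestamp": datetime.now(timezone.utc).isoformat(),  # current timestamp
        "data": document,                                     # source data as one entity
    }

# Example with a hypothetical source record.
restructured = restructure({"customerId": "C-1001", "purchaseTotal": 129.99})
print(restructured)
```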

Write the restructured data into the target dataset in AEP as parquet files

You use the Adobe Experience Platform Write Snap to write the restructured document coming from the upstream Mapper Snap into AEP. You choose the dataset into which the document must be written and execute the Pipeline.

If your account settings are correct, the Snap logs into AEP on your behalf and writes the Parquet file into the target dataset.
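
Behind the scenes, writing a file into an AEP dataset follows Adobe's batch ingestion pattern: create a batch, upload the file, and mark the batch complete. The Snap performs these steps for you; the following Python sketch only illustrates the flow, assuming the standard Batch Ingestion endpoints, placeholder credentials, and a hypothetical dataset ID. Refer to Adobe's Batch Ingestion documentation for the exact API contract.

```python
import requests

INGEST_HOST = "https://platform.adobe.io/data/foundation/import"  # assumption
DATASET_ID = "<target-dataset-id>"          # resolved from the Dataset Name you select
ACCESS_TOKEN = "<access-token>"             # from the JWT exchange shown earlier
CLIENT_ID = "<client-id>"
ORG_ID = "<organization-id>"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "x-api-key": CLIENT_ID,
    "x-gw-ims-org-id": ORG_ID,
}

# 1. Create a batch against the target dataset.
batch = requests.post(
    f"{INGEST_HOST}/batches",
    headers={**headers, "Content-Type": "application/json"},
    json={"datasetId": DATASET_ID, "inputFormat": {"format": "parquet"}},
).json()
batch_id = batch["id"]

# 2. Upload the Parquet file produced from the restructured documents.
with open("output.parquet", "rb") as f:
    requests.put(
        f"{INGEST_HOST}/batches/{batch_id}/datasets/{DATASET_ID}/files/output.parquet",
        headers={**headers, "Content-Type": "application/octet-stream"},
        data=f,
    )

# 3. Signal completion so that AEP promotes the batch into the dataset.
requests.post(f"{INGEST_HOST}/batches/{batch_id}?action=COMPLETE", headers=headers)
```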

Download this Pipeline

Downloads

  • AEP_Write_Example.slp (modified Apr 24, 2020 by Siddharth Menon)

Snap Pack History

4.24 (main8556)

  • No updates made.

4.23 423patches7447

  • Fixes an issue in the Adobe Experience Platform Execute Snap where data was not passed correctly to the downstream Snap, by creating the Map obj = new LinkedHashMap(); object inside the loop. LinkedHashMap is a hash table and linked list implementation of the Map interface, with a predictable iteration order.

4.23 (main7430)

  • Replaces the Adobe Cloud Platform Snap Pack.
  • Introduces the Adobe Experience Platform Read Snap that enables executing SQL queries in the Adobe Experience Platform.
  • Updates the Adobe Experience Platform JWT Account to enable you to create accounts in a Sandbox location during the development phase.
  • Enhances the Adobe Experience Platform Read Snap by adding the Batch IDs, Start Date, and End Date fields to allow filtering batches by ID, start date, and end date with millisecond precision.

4.22 (main6403)

4.21 Patches

4.21 (snapsmrc542)

4.20 (snapsmrc535)

  • No updates made.

4.19 (snapsmrc528)

  • No updates made.

4.18 (snapsmrc523)

  • Enhanced the Snap Pack to support the Map datatype in XSD-based datasets.

4.17 Patch ALL7402

  • Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.

4.17 (snapsmrc515)

  • Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview check box.

4.16 (snapsmrc508)

  • No updates made. Automatic rebuild with a platform release.

4.15 (snapsmrc500)

  • No updates made. Automatic rebuild with a platform release.

4.14 (snapsmrc490) 

  • No updates made. Automatic rebuild with a platform release.

4.13 (snapsmrc486)

  • New! Initial release of the Snap Pack. This includes Adobe Cloud Platform Read and Adobe Cloud Platform Write Snaps.
  • Introduced the basic Adobe Cloud Platform JWT Account.