Adobe Cloud Platform Write


ACP Snap Deprecation Notice

Since the Adobe Cloud Platform (ACP) is deprecated by Adobe, the ACP Snap Pack is being replaced with the Adobe Experience Platform (AEP) Snap Pack starting with the 4.22 GA release. Hence, the ACP Write Snap is deprecated in the 4.22 release. For more information, see the Adobe Experience Platform Write Snap article.

Overview

The Adobe Experience Platform Write Snap creates data sets in an Adobe Experience Platform instance based on a selected schema and data set name, or writes incoming documents into an existing data set. Some of the Snap's properties, namely Dataset name and Dataset ID, are suggestible, making it easier to select the data set into which the incoming documents are written. The Snap also supports batching, which enables you to write a configurable number of documents into a data set at a time. If a batch write is unsuccessful, the Snap retries writing that batch a configurable number of times.

Prerequisites

Access to Adobe Experience Platform.

Configuring Accounts

This Snap uses account references created on the Accounts page of SnapLogic Manager to handle access to this endpoint. See Configuring Adobe Experience Platform Accounts for information on setting up this type of account.

Configuring Views

Input

This Snap has exactly one document input view.

Output

This Snap has at most one document output view.

Error

This Snap has at most one document error view and produces zero or more documents in the view.

Limitations and Known Issues

As of its initial release, the Snap supports only Profile XSD, CSV, and Parquet formats.

Snap Settings


Label

Required. The name for the Snap. Modify this to be more specific, especially if there is more than one of the same Snap in the pipeline.
Dataset schema

This drop-down list lets you select the schema of the data set to be written to. The suggestions in Dataset name and Dataset ID are based on this selection.

The following options are available in this drop-down menu:

  • Profile XSD: If the data set to be written is based on the Profile XSD schema.
  • CSV: If the data set to be written is based on the CSV schema.
  • Parquet: If the data set to be written is based on the Parquet schema.

Default value: Parquet

Recommendations for writing/creating data sets in different schema types

Data sets can be written in CSV and Parquet schema using incoming documents from upstream Snaps.

Profile XSD Schema - Points to remember

  1. Since the Profile XSD schema is predefined in the Adobe Experience Platform, use a Mapper Snap upstream of the Adobe Experience Platform Write Snap to map the data set's values when writing data sets in this schema.
  2. Ensure that all required elements in the Profile XSD schema are configured; otherwise, the Snap throws an error.
  3. Ensure that the data passed to each element in the schema is in the intended data type for the element.
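As an illustration of points 2 and 3, a pre-validation step upstream of the Snap can be sketched as follows. This is a minimal sketch; the field names and types are hypothetical, not actual Profile XSD elements:

```python
# Hypothetical required elements of a predefined schema and their
# expected Python types (illustrative only, not real Profile XSD paths).
REQUIRED_FIELDS = {
    "firstName": str,
    "lastName": str,
    "age": int,
}

def validate(doc: dict) -> list:
    """Return a list of problems; an empty list means the document is valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in doc:
            problems.append(f"missing required element: {field}")
        elif not isinstance(doc[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(doc[field]).__name__}"
            )
    return problems

# An "age" passed as a string violates point 3:
print(validate({"firstName": "Ada", "lastName": "Lovelace", "age": "36"}))
# → ['age: expected int, got str']
```

Running such a check before the Write Snap surfaces schema problems in your own error handling rather than as a Snap failure.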
Dataset name

The name of the data set to be written to. This suggestible field provides the names of all the data sets within the account, based on the selection in the Dataset schema property.

Example: doctest

Default value: [None] 

This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.

Dataset ID

Required. The ID of the data set to be written to. This suggestible field provides the IDs of all the data sets within the account, based on the configuration of the Dataset schema and Dataset name properties. Select the ID of the data set into which the incoming data is to be written.

Example: 5acdb87f9iqdrac201da2e0e9

Default value: [None]

Leave this property blank if a new data set is to be created and enable the Create new dataset property.

This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.

Refresh datasets cache

If selected, the Snap refreshes its cache and shows updated results. This is useful when changes are made to data sets outside of SnapLogic, such as a new data set being created or an existing one being deleted.

Default value: Not selected.

This property is disabled by default to avoid unnecessary API calls, but it is useful when a new data set is added after the Snap's initial configuration. New data sets are not visible in the property's suggestions unless this property is enabled.
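The caching behavior described above can be sketched as follows. This is a simplified model, not the Snap's actual implementation; fetch_datasets() is a hypothetical stand-in for the real list-datasets API call:

```python
# Suggestions are served from a local cache; the endpoint is queried
# only on first use or when a refresh is explicitly requested.
_cache = None

def fetch_datasets() -> list:
    # Placeholder for the actual "list data sets" API call.
    return ["doctest", "profiles_2020"]

def suggest_datasets(refresh: bool = False) -> list:
    global _cache
    if refresh or _cache is None:
        _cache = fetch_datasets()   # hit the API and repopulate the cache
    return _cache                   # otherwise serve the cached suggestions
```

With refresh left at its default of False, data sets created after the cache was populated would not appear until a refresh is requested.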

Create new dataset

Select this property when creating a new data set.

Default value: Not selected.

Pass through

If selected, the input document will be passed through to the output view under the key 'original'. 

Default value: Not selected.
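A minimal sketch of the output document when Pass through is selected, assuming a hypothetical input document and write result (the data set ID is the illustrative one used elsewhere on this page):

```python
# With Pass through enabled, the input document is emitted alongside the
# write result under the key "original".
input_doc = {"firstName": "Ada", "age": 36}
write_result = {"datasetId": "5acdb87f9iqdrac201da2e0e9"}  # illustrative ID

output_doc = dict(write_result)
output_doc["original"] = input_doc   # the input document, passed through verbatim
print(output_doc)
# → {'datasetId': '5acdb87f9iqdrac201da2e0e9', 'original': {'firstName': 'Ada', 'age': 36}}
```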

Settings

Batching-related settings (Batch size, Retry limit, and so on).
Batch size (MB)

The size limit per batch, in megabytes (MB).

Default value: 10

This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.

Batch timeout (milliseconds)

The timeout limit (in milliseconds) within which one batch of data should be written. If a batch exceeds this limit, the Snap proceeds to the next batch and routes the records from the unwritten batch to the Error view (if the Snap's Error view is configured).

Default value: 500

This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.

Retry limit

The number of retries for the Snap to write an unsuccessful batch.

Default value: 0

  • If the value is larger than 0 and a batch write fails, the Snap waits for the time specified in the Retry request interval property and attempts to write the batch again, up to the specified number of retries.

This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.

Retry request interval (milliseconds)

The time interval (in milliseconds) between successive retry attempts.

Default value: 500

This property is expression-enabled. For more information on the expression language, see Understanding Expressions in SnapLogic and Using Expressions. For information on Pipeline Parameters, see Pipeline Properties.
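Taken together, the batching settings above can be sketched as the following loop. This is a simplified model, not the Snap's actual implementation; write_batch() is a hypothetical stand-in for the batch upload, and the batch-timeout handling is omitted for brevity:

```python
import time

BATCH_SIZE_MB = 10        # Batch size (MB), default 10
RETRY_LIMIT = 2           # Retry limit (0 means no retries)
RETRY_INTERVAL_MS = 500   # Retry request interval (milliseconds), default 500

def write_batch(batch: list) -> None:
    pass  # placeholder for the actual batch upload to the platform

def flush_with_retries(batch: list) -> bool:
    """Attempt the batch write, retrying up to RETRY_LIMIT times."""
    for attempt in range(RETRY_LIMIT + 1):
        try:
            write_batch(batch)
            return True
        except Exception:
            if attempt < RETRY_LIMIT:
                time.sleep(RETRY_INTERVAL_MS / 1000)  # wait between attempts
    return False  # caller routes the failed records to the error view

def write_documents(docs: list) -> None:
    batch, size = [], 0
    for doc in docs:                 # docs are serialized documents (bytes)
        batch.append(doc)
        size += len(doc)
        if size >= BATCH_SIZE_MB * 1024 * 1024:  # flush on reaching the limit
            flush_with_retries(batch)
            batch, size = [], 0
    if batch:
        flush_with_retries(batch)    # flush the final partial batch
```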

Snap execution

Select one of the three modes in which the Snap executes. Available options are:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Snap throws an error when the Pipeline is executed in a Cloudplex

To execute Pipelines in a Cloudplex, you must set ALLOW_CLOUDPLEX_PROCESS_CREATION to true in the SnapLogicSecurityManager.java file. Contact SnapLogic Customer Support for help with the setting.


Examples

 Creating a new data set

This pipeline demonstrates how the Adobe Experience Platform Write Snap can be used to create a new data set. In this example, a CSV data set is created.

The CSV data to be created is passed to the Adobe Experience Platform Write Snap through the upstream CSV Generator Snap. The following is the CSV data in the CSV Generator Snap:

Below is the CSV generated from the Snap; this serves as the input document for the Adobe Experience Platform Write Snap:

The Adobe Experience Platform Write Snap is configured as shown below:

When the pipeline is executed, a new data set is created with the provided data set name. The Adobe Experience Platform assigns it a unique data set ID, which can be seen in the Adobe Experience Platform Write Snap's output preview as shown below:


The exported pipeline is available in the Downloads section below.


Creating new data sets - Points to remember

  1. The correct schema should be selected in the Dataset schema property.
  2. The desired name should be provided in the Dataset name property; it need not be unique, since each data set is identified by its data set ID.
  3. The Dataset ID property should be left blank.
  4. The Create new dataset property should be enabled.
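Behind the scenes, creating a data set amounts to a call to the platform's Catalog API. The sketch below only builds such a request for inspection; the endpoint path and header names are assumptions based on Adobe's published Experience Platform API conventions, and the credentials are placeholders:

```python
import json

def build_create_dataset_request(name: str, access_token: str,
                                 api_key: str, org_id: str) -> dict:
    """Assemble (but do not send) a create-dataset request.

    Endpoint and header names are assumptions based on Adobe's
    published API conventions, not taken from the Snap itself.
    """
    return {
        "method": "POST",
        "url": "https://platform.adobe.io/data/foundation/catalog/dataSets",
        "headers": {
            "Authorization": f"Bearer {access_token}",   # placeholder token
            "x-api-key": api_key,                        # placeholder key
            "x-gw-ims-org-id": org_id,                   # placeholder org ID
            "Content-Type": "application/json",
        },
        "body": json.dumps({"name": name}),  # the Dataset name property
    }
```

The Snap and its account handle authentication and request construction for you; this sketch only illustrates why the Dataset ID is left blank, as the platform assigns the ID in its response.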
 Writing into an existing data set

This pipeline demonstrates how the Adobe Experience Platform Write Snap can be used to write data into an existing data set. Data is written into the same data set that was created in the example above.

The CSV data to be written is passed to the Adobe Experience Platform Write Snap through the upstream CSV Generator Snap. The following is the CSV data in the CSV Generator Snap:

Below is the CSV generated from the Snap; this serves as the input document for the Adobe Experience Platform Write Snap:

The Adobe Experience Platform Write Snap is configured as shown below:

When the pipeline is executed, the new data is written into the data set matching the specified data set ID. This ID can be seen in the Adobe Experience Platform Write Snap's output preview as shown below:

Downloads

Snap Pack History


4.24 (main8556)

  • No updates made.

4.23 (423patches7447)

  • Fixes an issue in the Adobe Experience Platform Execute Snap where data was not passed correctly to the downstream Snap, by creating Map obj = new LinkedHashMap(); inside the loop. LinkedHashMap is a hash table and linked list implementation of the Map interface, with a predictable iteration order.

4.23 (main7430)

  • Replaces the Adobe Cloud Platform Snap Pack.
  • Introduces the Adobe Experience Platform Read Snap that enables executing SQL queries in the Adobe Experience Platform.
  • Updates the Adobe Experience Platform JWT Account to enable you to create accounts in a Sandbox location during the development phase.
  • Enhances the Adobe Experience Platform Read Snap by adding the Batch IDs, Start date, and End date fields to allow filtering batches by ID, start date, and end date with millisecond precision.

4.22 (main6403)

4.21 Patches

4.21 (snapsmrc542)

4.20 (snapsmrc535)

  • No updates made.

4.19 (snapsmrc528)

  • No updates made.

4.18 (snapsmrc523)

  • Enhanced the Snap Pack to support the Map datatype in XSD-based datasets.

4.17 Patch ALL7402

  • Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.

4.17 (snapsmrc515)

  • Added the Snap Execution field to all Standard-mode Snaps. In some Snaps, this field replaces the existing Execute during preview check box.

4.16 (snapsmrc508)

  • No updates made. Automatic rebuild with a platform release.

4.15 (snapsmrc500)

  • No updates made. Automatic rebuild with a platform release.

4.14 (snapsmrc490) 

  • No updates made. Automatic rebuild with a platform release.

4.13 (snapsmrc486)

  • New! Initial release of the Snap Pack. This includes Adobe Cloud Platform Read and Adobe Cloud Platform Write Snaps.
  • Introduced the basic Adobe Cloud Platform JWT Account.