Highlights

...

Known Issues

Fixed Issues

AutoSync

<links to be added after the doc is published>

New Features

  • AutoSync now supports the following endpoints:

    • Google BigQuery as a target

    • Marketo as a source

    • CSV or JSON files stored in Amazon S3 as a source

    • CSV files as a source

  • The ability to synchronize data from Salesforce to Snowflake using the SCD2 (Slowly Changing Dimension Type 2) load type (see the sketch after this list)
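
An SCD2 load preserves history: when a source record changes, the current target row is end-dated and a new current row is inserted, so both versions remain queryable. Below is a minimal sketch of the pattern, using an in-memory list in place of a Snowflake table; all names are illustrative, not AutoSync internals.

    from datetime import datetime, timezone

    # Illustrative SCD2 "table": each version of a record is kept as its own row.
    history = [
        {"id": 1, "status": "open", "valid_from": "2023-01-01T00:00:00+00:00",
         "valid_to": None, "is_current": True},
    ]

    def apply_scd2_change(history, record_id, new_status):
        """Close the current row for record_id and append a new current row."""
        now = datetime.now(timezone.utc).isoformat()
        for row in history:
            if row["id"] == record_id and row["is_current"]:
                row["valid_to"] = now        # end-date the old version
                row["is_current"] = False
        history.append({"id": record_id, "status": new_status,
                        "valid_from": now, "valid_to": None, "is_current": True})

    # A status change in the source yields two rows in the target:
    # the expired version and the new current version.
    apply_scd2_change(history, 1, "closed")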

...

  • AutoSync cannot upload a CSV file to the cloud data warehouse if the name includes a special character such as #, $, %, or &.

  • Data pipelines loading to Google BigQuery fail when source file, table, or object names include spaces or special characters. (A renaming workaround is sketched after this list.)

  • AutoSync sometimes cannot clean up the Google Cloud Storage staging area after loading from Salesforce to Google BigQuery. If this occurs, manually remove the files to reclaim the storage (see the cleanup sketch after this list).

  • When you create or edit the synchronization schedule, it is sometimes possible to select a start time in the past. A pipeline scheduled to run once with a past start time will not run unless you start it manually.

  • To use the SCD2 load type for Snowflake, you must modify Snowflake configurations created before the May 2023 release. Because AutoSync automatically sets the timezone to UTC for SCD2 operations, do the following (see the connection sketch after this list):

    • For an Account created in the IIP, add a URL property with the name TIMEZONE and a value of UTC.

    • For credentials saved in AutoSync, delete them and create a new configuration.
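
Until the naming issues above are addressed, a workaround is to rename source files and objects before upload so they contain only letters, digits, underscores, dots, and hyphens. A minimal sketch; the exact character set AutoSync accepts is an assumption, so adjust as needed:

    import re

    def sanitize_name(name: str) -> str:
        """Replace spaces and special characters such as #, $, %, and &
        with underscores before handing the file to AutoSync."""
        return re.sub(r"[^A-Za-z0-9_.-]", "_", name)

    print(sanitize_name("Q3 revenue #final&draft.csv"))  # Q3_revenue__final_draft.csv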
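
To reclaim the staging storage manually, you can delete the leftover objects with the google-cloud-storage client. This is a hedged sketch; the bucket name and prefix are placeholders for your actual staging location:

    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()
    bucket = client.bucket("your-staging-bucket")               # placeholder
    for blob in bucket.list_blobs(prefix="autosync-staging/"):  # placeholder prefix
        blob.delete()  # remove each leftover staging file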
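
The TIMEZONE property pins the Snowflake session to UTC, which is what AutoSync expects for SCD2 operations. Outside the IIP, you can reproduce and verify the same setting with snowflake-connector-python; the connection details below are placeholders:

    import snowflake.connector  # pip install snowflake-connector-python

    conn = snowflake.connector.connect(
        account="your_account",      # placeholder
        user="your_user",            # placeholder
        password="your_password",    # placeholder
        session_parameters={"TIMEZONE": "UTC"},  # the setting AutoSync requires
    )
    cur = conn.cursor()
    cur.execute("SHOW PARAMETERS LIKE 'TIMEZONE'")
    print(cur.fetchone())  # confirm the session timezone is UTC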

Platform

New Features

  • New Public APIs are available for APIM (API Management).

  • Pipeline Cache is a subscription feature that lets you cache reference information in memory within a Pipeline. Looking up values in a database table is costly when the table is queried repeatedly for the same reference fields; Pipeline Cache holds that data in an in-memory store so that other Snaps can look up those keys and values at runtime. Use it when you have data in an external system, such as an object or an ID, that your pipelines need to look up. The pattern is sketched below.
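
The underlying pattern is an in-memory lookup table: load the reference data once, then resolve keys per document instead of re-querying the database. An illustrative Python sketch of the idea; the names are invented, and in the product the mechanism is the Pipeline Cache feature itself:

    # Load reference data once, e.g. the result of a single database query.
    reference = {
        "acct-1": "Acme Corp",
        "acct-2": "Globex",
    }

    incoming = [{"account_id": "acct-2", "amount": 100}]
    for doc in incoming:
        # O(1) in-memory lookup replaces a repeated database round trip.
        doc["account_name"] = reference.get(doc["account_id"], "unknown")

    print(incoming)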

...