...

  • New endpoints: Google BigQuery, Amazon S3 (for CSV and JSON files), Marketo, and CSV files

  • Synchronize with SCD2 (slowly changing dimensions, type 2) between Salesforce and Snowflake

...

  • Improved scheduling options

Known Issues

  • If you save a data pipeline and then edit its name, you can no longer delete the pipeline.

  • AutoSync cannot upload a CSV file to the cloud data warehouse if the name includes a special character, such as #, $, %, or &.

  • Google BigQuery does not support special characters in names. Data pipelines fail when the source file, table, or object names include special characters; whitespace, dashes, and underscores are fine (see the name-check sketch after this list).

  • At times, AutoSync cannot clean up the Google Cloud Storage staging area after loading from Salesforce to Google BigQuery. If this occurs, manually remove the files to reclaim the storage space.

  • In some cases, when you create or edit the synchronization schedule, you can select a start time in the past. If you do this for a data pipeline that is scheduled to run once, it will not run unless you start it manually.

  • To use the SCD2 load type for Snowflake, you must modify Snowflake configurations created before the May 2023 release. Because AutoSync automatically sets the timezone to UTC for SCD2 operations, do the following (a connection-level sketch follows this list):

    • For an Account created in the IIP, add a Uri property parameter with the name TIMEZONE and a value of UTC.

    • For credentials saved in AutoSync, delete them and create a new configuration.
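
As a quick illustration of the naming limitation above, here is a minimal, hedged sketch of a pre-flight name check. The allowed character classes mirror the ones called out in these notes (letters, digits, whitespace, dash, underscore); they are an assumption for illustration, not an exhaustive statement of what BigQuery or AutoSync accepts.

```python
import re

# Allowed characters per the notes above: letters, digits, underscore (\w),
# whitespace (\s), and dash. Illustrative only; not an exhaustive
# specification of what BigQuery or AutoSync accepts.
SAFE_NAME = re.compile(r"^[\w\s-]+$")

def is_safe_name(name: str) -> bool:
    """Return True if a source file, table, or object name avoids
    special characters such as #, $, %, or &."""
    return bool(SAFE_NAME.match(name))

print(is_safe_name("sales orders_2023-05"))  # True
print(is_safe_name("sales#orders$may"))      # False
```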
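For the Snowflake timezone requirement, the IIP Uri property dialog is product-specific, but the underlying setting is Snowflake's TIMEZONE session parameter. Below is a minimal sketch of applying that parameter at the connection level, assuming the snowflake-connector-python package and placeholder credentials.

```python
import snowflake.connector

# Placeholder credentials; substitute your own account details.
conn = snowflake.connector.connect(
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT",
    # Connection-level equivalent of the TIMEZONE=UTC Uri property
    # described above; AutoSync expects UTC for SCD2 operations.
    session_parameters={"TIMEZONE": "UTC"},
)

# Confirm the session-level setting took effect.
with conn.cursor() as cur:
    cur.execute("SHOW PARAMETERS LIKE 'TIMEZONE' IN SESSION")
    print(cur.fetchone())
```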

...