...
AutoSync now supports the following endpoints:
Google BigQuery as a target. Learn more.
Marketo as a source. Learn more.
CSV or JSON files stored in Amazon S3 as a source. Learn more.
CSV file upload from a local or network drive as a source. Learn more.
The ability to change load type for synchronization. The types available depend on what AutoSync supports for the source and endpoint. AutoSync supports loading from Salesforce to Snowflake using SCD2.
To change the load type: After creating a data pipeline, edit it. From the Auto Synchronize tab, select the Load type. Learn more.
For a Salesforce source, AutoSync automatically handles changes to the column precision, scale, or character limit by propagating the changes to the target. Learn more.
...
If you remove or rename a source table or object after using it in a data pipeline, its name will still be visible in the source configuration and you will not be able to de-select it.
For Amazon S3 data sources, you must add a forward slash if you copy the URL from the Amazon S3 console. For example, change s3://my_bucket to s3:///my_bucket.
The first row of CSV files must define the names for the column headings in the target table, with no empty or null values.
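A minimal sketch of a local pre-upload check for this header rule (the function name and approach are illustrative, not part of AutoSync):

```python
import csv

def validate_csv_headers(path):
    """Return (ok, message) after checking that the first row of the CSV
    defines a non-empty name for every column, as AutoSync requires."""
    with open(path, newline="") as f:
        header = next(csv.reader(f), None)
    if not header:
        return False, "file has no header row"
    empty = [i for i, name in enumerate(header) if not name.strip()]
    if empty:
        return False, f"empty column name(s) at position(s) {empty}"
    return True, "ok"
```

Running a check like this before configuring the data pipeline avoids a failed load caused by a missing or blank column heading.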
AutoSync cannot upload a CSV file to the cloud data warehouse if the file name includes single or double quotes or a special character, such as #, $, %, or &.
Google BigQuery does not support special characters in names. Data pipelines fail if the source file, table, or object names include special characters. Whitespace, dashes, and underscores in names are not a problem.
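A small illustrative check for these naming rules. Note that the character set below covers only the examples listed in these notes; BigQuery's actual restrictions may be broader:

```python
import re

# Characters called out as unsupported: single/double quotes, #, $, %, &.
# Whitespace, dash, and underscore are explicitly allowed.
UNSUPPORTED = re.compile(r"""['"#$%&]""")

def name_is_safe(name: str) -> bool:
    """Return True if the file, table, or object name avoids the
    special characters listed in the release notes."""
    return UNSUPPORTED.search(name) is None
```

For example, name_is_safe("my_table-v2 final") passes, while a name containing # or a quote does not.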
Sometimes, AutoSync cannot clean up the Google Cloud Storage staging area after loading from Salesforce to Google BigQuery. If this occurs, manually remove the files to reclaim the storage.
In the IIP AutoSync Manager screen, you can create Accounts to use with AutoSync. For Google BigQuery, the Account Type dropdown allows you to select Google Service Account or Google Service Account JSON, which are not supported by AutoSync.
Sometimes, when you create or edit the synchronization schedule, you can select a start time in the past. If you do this for a data pipeline that is scheduled to run once, it will not run unless you start it manually.
To use the SCD2 load type for Snowflake, you must modify Snowflake configurations created before the May 2023 release. Because AutoSync automatically sets the timezone to UTC for SCD2 operations, do the following:
For an Account created in the IIP, add a Uri property parameter with the name TIMEZONE and a value of UTC.
For credentials saved in AutoSync, delete them and create a new configuration.
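For illustration only, assuming the Uri property is appended to a JDBC-style Snowflake connection string (your account identifier will differ), the added parameter corresponds to:

```
jdbc:snowflake://<account>.snowflakecomputing.com/?TIMEZONE=UTC
```

This forces the session timezone to UTC, which the SCD2 load type requires for consistent history timestamps.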
...
Fixed an issue where the Encrypt Field Snap failed to support an RSA public key to encrypt a message or field. The Snap now supports RSA public-key encryption.
Changes in
...
Behavior
Previously, in a Project, you could use Snaps from a private Snap Pack uploaded to the Project, the Project shared folder, or the global shared folder. Now, if a Snap Pack uploaded to the Project contains different Snaps than the one uploaded to a shared folder, only the Snaps uploaded to the Project are visible. If existing pipelines use Snap Packs that are no longer available because of this change, fix them by uploading the required Snaps to the Project.
...
The UI now uses accessible colors. Some icons are larger and more readable. For example, note the status icons on the summary cards in the Execution overview:
Changes in behavior
For the Execution Overview:
The Executed by column was renamed to Owner.
Invoker is a new optional column that identifies who ran a Triggered Task or manually ran an AutoSync data pipeline. It contains the user ID and their IP address. To add the Invoker column, click in the Search bar and click the Invoker pill:
...