...

  • If you remove or rename a source table or object after using it in a data pipeline, its name will still be visible in the source configuration and you will not be able to de-select it.

  • For Amazon S3 data sources, you must add an extra forward slash after the s3:// prefix if you copy the URL from the Amazon S3 console. For example, change s3://my_bucket to s3:///my_bucket.

  • The first row of a CSV file must define the column names for the target table, with no empty or null values.

  • AutoSync cannot upload a CSV file to the cloud data warehouse if the file name includes single or double quotes or a special character, such as #, $, %, or &.

  • Google BigQuery does not support special characters in names. Data pipelines fail if the source file, table, or object names include special characters; whitespace, dashes, and underscores are acceptable.

  • At times, AutoSync cannot clean up the Google Cloud Storage staging area after loading from Salesforce to Google BigQuery. If this occurs, manually remove the files to reclaim the storage (a scripted cleanup sketch follows this list).

  • In the IIP AutoSync Manager screen, you can create Accounts to use with AutoSync. For Google BigQuery, the Account Type dropdown lists Google Service Account and Google Service Account JSON, which AutoSync does not support.

  • In some cases, when you create or edit the synchronization schedule, you can select a start time in the past. If you do this for a data pipeline that is scheduled to run once, it will not run unless you start it manually.

  • To use the SCD2 load type for Snowflake, you must modify Snowflake configurations created before the May 2023 release. Because AutoSync automatically sets the timezone to UTC for SCD2 operations, do the following:

    • For an Account created in the IIP, add a URI property parameter with the name TIMEZONE and a value of UTC.

    • For credentials saved in AutoSync, delete them and create a new configuration.
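
For the staging cleanup noted above, the following is a minimal sketch using the google-cloud-storage Python client; the bucket name and object prefix are hypothetical placeholders, so substitute the staging location your AutoSync configuration actually uses.

```python
# Minimal sketch: delete leftover AutoSync staging objects from Google Cloud Storage.
# "my-autosync-staging-bucket" and "salesforce/" are hypothetical placeholders.
from google.cloud import storage

def clean_staging(bucket_name: str, prefix: str) -> int:
    """Delete every object under the given prefix and return how many were removed."""
    client = storage.Client()  # uses Application Default Credentials
    deleted = 0
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        blob.delete()
        deleted += 1
    return deleted

if __name__ == "__main__":
    removed = clean_staging("my-autosync-staging-bucket", "salesforce/")
    print(f"Removed {removed} staging object(s)")
```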

Platform

New Features

  • Pipeline Cache is a subscription feature (currently in private beta) that enables you to cache a dataset in memory for reference by other pipelines; Expression Language functions retrieve data from the cached dataset. Looking up values by querying a database table repeatedly for the same reference rows and columns is a costly operation. An in-memory data store instead lets your pipelines look up the data as key/value pairs in other Snaps at runtime. If you have data in an external system that you need to look up, such as an employee object or an identifier field, you can use Pipeline Cache to optimize this type of reference mapping. A minimal sketch of the lookup pattern follows the note below.

Info

This feature is in private beta. Email support@snaplogic.com for an invitation to gain early access.
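
To make the benefit concrete, here is a generic Python sketch of the pattern Pipeline Cache provides: load the reference data once, then resolve lookups from an in-memory key/value store instead of re-querying the database for every record. This illustrates the pattern only, not SnapLogic's API; the table, keys, and field names are hypothetical.

```python
# Generic illustration of the key/value caching pattern described above.
# Table and field names (employees, employee_id) are hypothetical examples.
import sqlite3

def build_employee_cache(conn: sqlite3.Connection) -> dict[str, dict]:
    """Query the reference table once and index the rows by employee_id."""
    rows = conn.execute("SELECT employee_id, name, department FROM employees")
    return {emp_id: {"name": name, "department": dept} for emp_id, name, dept in rows}

def enrich(records: list[dict], cache: dict[str, dict]) -> list[dict]:
    """Resolve each record's employee_id against the in-memory cache (no extra queries)."""
    return [{**rec, **cache.get(rec["employee_id"], {})} for rec in records]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (employee_id TEXT, name TEXT, department TEXT)")
    conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                     [("E1", "Ada", "Engineering"), ("E2", "Lin", "Finance")])
    cache = build_employee_cache(conn)  # one query, then every lookup is in memory
    incoming = [{"employee_id": "E1", "amount": 10}, {"employee_id": "E2", "amount": 7}]
    print(enrich(incoming, cache))
```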

...