...

Snap Pack

Date of Update

Snap Pack Version

Updates

Snowflake

433patches21370

  • Fixed an issue with the Snowflake Bulk Upsert Snap where the output document was missing information about the error records count and the reason for the error.

  • Fixed an issue that caused stored procedure executions to fail in the Snowflake - Multi Execute Snap.

Transform

433patches21336

Fixed an issue with the AutoPrep Snap where dates could potentially be rendered in a currency format because currency format options were displayed for the DOB column.

Salesforce

433patches21367

  • The Salesforce SOQL Snap now honors the selection of the Match Data Type checkbox when the value entered for Batch Size is greater than 50,000.

  • The Salesforce Read Snap now honors the selection of the Match Data Type checkbox if the Use PK chunking if supported checkbox is also selected.

ELT

N/A

Fixed a null pointer exception that caused 5XX errors when you downloaded non-existent query details from the Pipeline Execution Statistics of an ELT (write-type) Snap.

  • This is a Control Plane update only and there is no change in the ELT Snap Pack version.

ML Data Preparation

433patches21247

Fixed an issue with the Match Snap where a null pointer exception was thrown when the second input view had fewer records than the first.

Binary

433patches21291

Fixed an issue with the Multi File Reader Snap where it failed with the error S3 object not found when the Snap found no matching file to read and the Folder/File property value did not end with a forward slash (/).

API Suite

433patches21307

Fixed an issue with the HTTP Client Snap that caused pagination to fail when the next Snap in the pipeline could modify the input document (for example, the Mapper Snap).

Transform

433patches21196

Enhanced the In-Memory Lookup Snap with the following new fields to improve memory management and help reduce the possibility of out-of-memory failures:

  • Minimum memory (MB)

  • Out-of-memory timeout (minutes)

These new fields replace the Maximum memory % field.

Flow

433patches21196

Fixed an issue with the Pipeline Execute Snap where running the Snap with no input view produced a null pointer exception.

Binary

433patches21179

  • Fixed an issue with the File Delete Snap where the Snap failed with a 404 Not Found error when trying to delete files from an Amazon S3 bucket. This issue occurred only with the Identity and Access Management (IAM) role in an Amazon AWS S3 Account.

  • Fixed an issue where Binary Snaps could not handle region information for the Amazon S3 file protocol, which resulted in an error.

Copybook

main131

Fixed a bug in the compatibility mode of the updated COBOL Copybook Parser Snap that caused a working pipeline to fail.

API Suite

433patches21140

  • Fixed an issue affecting the HTTP Client Snap, which caused it to hang for an extended period when the user-agent contained the term java.

  • The HTTP Client Snap is enhanced with the Prevent URL encoding checkbox. This checkbox enables you to control whether the Snap should automatically encode the URL or prevent the URL encoding based on your preference.
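The behavior this checkbox controls can be illustrated with Python's standard library. This is a conceptual sketch of what automatic URL encoding does, not the Snap's implementation:

```python
from urllib.parse import quote

# A URL path containing characters that are reserved or unsafe in URLs.
raw_url_path = "/search?tag=a&b c"

# With automatic encoding, reserved and unsafe characters are percent-encoded.
encoded = quote(raw_url_path, safe="/")
print(encoded)  # /search%3Ftag%3Da%26b%20c

# Preventing encoding sends the path exactly as entered, which matters when
# the URL is already encoded or must reach the server verbatim.
print(raw_url_path)  # /search?tag=a&b c
```

Selecting Prevent URL encoding corresponds to the second case: the Snap passes the URL through unmodified.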

SQL Server

433patches21119

Updated the SQL Server - Bulk Load Snap to preserve empty strings as empty strings and nulls as nulls.

Flow

433patches21040

Provided a fix to ensure that the Data Validator Snap works with "unknown" data types.

Kafka

433patches21070

Fixed an issue with the Kafka Consumer Snap that caused it to skip records in a partition when a pipeline failed without successfully processing any records from that partition. This was an edge case that was only possible with a specific configuration of the Snap and a specific ordering of events during termination of the pipeline.

...

  • If you remove or rename a source table or object after using it in a data pipeline, its name will still be visible in the source configuration and you will not be able to de-select it.

  • For Amazon S3 data sources, you must add a forward slash if you copy the URL from the Amazon S3 console. For example, change s3://my_bucket to s3:///my_bucket.

  • The first row of CSV files must define the names for the column headings in the target table, with no empty or null values.

  • AutoSync cannot upload a CSV file to the cloud data warehouse if the file name includes single or double quotes or a special character, such as #, $, %, or &.

  • Google BigQuery does not support special characters for names. Data pipelines where the source file, table, or object names include special characters will fail. Whitespace, dash, and underscore in the names are not a problem.

  • Sometimes, AutoSync cannot clean up the Google Cloud Storage staging area after loading from Salesforce to Google BigQuery. If this occurs, manually remove the files to reclaim the storage.

  • In the IIP AutoSync Manager screen, you can create Accounts to use with AutoSync. For Google BigQuery, the Account Type dropdown allows you to select Google Service Account or Google Service Account JSON, which are not supported by AutoSync.

  • Sometimes, when you create or edit the synchronization schedule, you can select a start time in the past. If you do this for a data pipeline that is scheduled to run once, it will not run unless you start it manually.

  • Data pipelines that load the Marketo Program Manager object can time out and fail.

  • To use the SCD2 load type for Snowflake, you must modify Snowflake configurations created before the May 2023 release. Because AutoSync automatically sets the timezone to UTC for SCD2 operations, do the following:

    • For an Account created in the IIP, add a Uri property parameter with the name TIMEZONE and a value of UTC.

    • For credentials saved in AutoSync, delete them and create a new configuration.
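Several of the file-name and CSV-header constraints above can be checked before starting a sync. The following is a hedged pre-flight sketch; the function names are illustrative and not part of AutoSync:

```python
import re

# Characters called out in the limitations above: single/double quotes and
# special characters such as #, $, %, or & block CSV upload.
FORBIDDEN_IN_FILENAME = re.compile(r"""['"#$%&]""")

def filename_ok(name: str) -> bool:
    """True if the CSV file name avoids quotes and the special characters above."""
    return FORBIDDEN_IN_FILENAME.search(name) is None

def header_ok(header_row: list) -> bool:
    """True if the first CSV row defines a non-empty name for every column."""
    return all(col is not None and col.strip() for col in header_row)

print(filename_ok("sales_2023.csv"))        # True
print(filename_ok("q4#results.csv"))        # False
print(header_ok(["id", "name", "amount"]))  # True
print(header_ok(["id", "", "amount"]))      # False
```

A check like this catches upload failures locally instead of partway through a scheduled data pipeline.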

Data Automation

Fixed Issues

Fixed a null pointer exception that caused 5XX errors when you downloaded non-existent query details from the Pipeline Execution Statistics of an ELT (write-type) Snap.

Platform

New Features

  • Pipeline Cache is a subscription feature (currently available in private beta) that enables you to cache data in memory for reference in another pipeline. This information can be a dataset, and Expression Language functions can retrieve the data from this dataset. Looking up values based on database rows/columns is a costly operation if the table is queried repeatedly to use the same data as reference fields. An in-memory data store allows your pipelines to look up the data as key/value pairs in other Snaps at runtime. If you have data in an external system that you need to look up (such as an employee object or an identifier field), then you can use Pipeline Cache to optimize the mapping to match the references in this type of operation.
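The lookup pattern that Pipeline Cache optimizes can be illustrated with a plain in-memory key/value store. This is a conceptual sketch in Python, not SnapLogic's API; the dataset and field names are assumptions:

```python
# Conceptual sketch of the key/value lookup pattern Pipeline Cache enables.
# The employee dataset and field names below are illustrative only.

# Instead of querying the database table once per document...
employee_rows = [
    {"employee_id": "E100", "name": "Ada", "department": "Engineering"},
    {"employee_id": "E200", "name": "Grace", "department": "Research"},
]

# ...load the reference data once into an in-memory key/value store.
employee_cache = {row["employee_id"]: row for row in employee_rows}

def enrich(document):
    """Map a reference field onto an incoming document via the cache."""
    match = employee_cache.get(document["employee_id"], {})
    return {**document, "department": match.get("department")}

print(enrich({"employee_id": "E100", "order": 42}))
# {'employee_id': 'E100', 'order': 42, 'department': 'Engineering'}
```

The point of the feature is that the expensive step (querying the table) happens once, while every subsequent document pays only the cost of a key lookup.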

...

  • When creating a new project from a Git repository, you can also create a new branch for the new project.

  • Support for HashiCorp KV Secrets Engine Version 1 is available, in addition to KV Secrets Engine Version 2.

Fixed Issues

  • Fixed an issue where Orgs could not be provisioned.

  • The subscription feature Secrets Management - CyberArk is now displayed correctly on the Manager > Subscriptions page.

...

  • The Bouncy Castle library version is upgraded from bcpg-jdk15on (1.69) to bcpg-jdk18on (1.73) in all our Snap Packs. This upgrade brings the latest security enhancements to the SnapLogic platform.

  • The Generic Database Account now supports the SSH Tunneling connection. You can now encrypt the network connection between your client and the database server, ensuring a highly secure connection.

  • The Hive Snap Pack is Cloudera-certified for Cloudera Data Warehouse (CDW). You can use the Hive Execute Snap to work with CDW clusters through a Generic Hive Database account.

  • The Marketo Bulk Extract Snap works successfully in the non-lineage path in an Ultra task.

  • The Key passphrase field in the Private Key Account of the Binary Snap Pack now supports expressions, allowing dynamic evaluation using pipeline parameters when the expression button is enabled.

  • Snowflake

    • SnapLogic is specified as a partner tag in all requests directed to Snowflake, making it easier for Snowflake to identify the requests coming from SnapLogic.

    • The default JDBC JAR for the Snowflake Snap Pack is upgraded to version 3.13.28 to support the GEOMETRY data type.

Fixed Issues

  • Fixed an issue with the Encrypt Field Snap where the Snap did not support using an RSA public key to encrypt a message or field. The Snap now supports RSA public-key encryption.

...