...
| Snap Pack | Date of Update | Snap Pack Version | Updates |
|---|---|---|---|
| | | 433patches21370 | |
| | | 433patches21336 | Fixed an issue with the AutoPrep Snap where dates could be rendered in a currency format because currency format options were displayed for the DOB column. |
| | | 433patches21367 | |
| | | N/A | Fixed a null pointer exception so that no 5XX errors occur when you download non-existent query details from the Pipeline Execution Statistics of an ELT (write-type) Snap. |
| | | 433patches21247 | Fixed an issue with the Match Snap where a null pointer exception was thrown when the second input view had fewer records than the first. |
| | | 433patches21291 | Fixed an issue with the Multi File Reader Snap where it failed with an error. |
| | | 433patches21307 | Fixed an issue with the HTTP Client Snap that caused pagination to fail when the next Snap in the pipeline could modify the input document (for example, the Mapper Snap). |
| | | 433patches21196 | Enhanced the In-Memory Lookup Snap with new fields that improve memory management and help reduce the possibility of out-of-memory failures. These new fields replace the Maximum memory % field. |
| | | 433patches21196 | Fixed an issue with the Pipeline Execute Snap where running the Snap without an input view produced a null pointer exception. |
| | | 433patches21179 | |
| | | main131 | Addressed a bug in the compatibility mode of the updated COBOL Copybook Parser Snap that caused a working pipeline to fail. |
| | | 433patches21140 | |
| | | 433patches21119 | Updated the SQL Server - Bulk Load Snap to preserve empty strings as empty strings and nulls as nulls. |
| | | 433patches21040 | Fixed the Data Validator Snap to work with "unknown" data types. |
| | | 433patches21070 | Fixed an issue with the Kafka Consumer Snap that caused it to skip records in a partition when a pipeline failed without successfully processing any records from that partition. This was an edge case, possible only with a specific configuration of the Snap and a specific ordering of events during pipeline termination. |
...
If you remove or rename a source table or object after using it in a data pipeline, its name will still be visible in the source configuration and you will not be able to de-select it.
For Amazon S3 data sources, you must add a forward slash if you copy the URL from the Amazon S3 console. For example, change `s3://my_bucket` to `s3:///my_bucket`.
The first row of CSV files must define the names for the column headings in the target table, with no empty or null values.
AutoSync cannot upload a CSV file to the cloud data warehouse if the file name includes single or double quotes or a special character, such as `#`, `$`, `%`, or `&`.
Google BigQuery does not support special characters in names. Data pipelines fail when the source file, table, or object names include special characters. Whitespace, dashes, and underscores in names are not a problem.
Sometimes, AutoSync cannot clean up the Google Cloud Storage staging area after loading from Salesforce to Google BigQuery. If this occurs, manually remove the files to reclaim the storage.
In the IIP AutoSync Manager screen, you can create Accounts to use with AutoSync. For Google BigQuery, the Account Type dropdown allows you to select Google Service Account or Google Service Account JSON, which are not supported by AutoSync.
Sometimes, when you create or edit the synchronization schedule, you can select a start time in the past. If you do this for a data pipeline that is scheduled to run once, it will not run unless you start it manually.
Data pipelines that load the Marketo Program Manager object can time out and fail.
To use the SCD2 load type for Snowflake, you must modify Snowflake configurations created before the May 2023 release. Because AutoSync automatically sets the timezone to UTC for SCD2 operations, do the following:
For an Account created in the IIP, add a Uri property parameter with the name `TIMEZONE` and a value of `UTC` (see the example after these steps).
For credentials saved in AutoSync, delete them and create a new configuration.
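For illustration only, and assuming the standard Snowflake JDBC URL format, the added property corresponds to a connection parameter like the following (the account name is a hypothetical placeholder):

```
jdbc:snowflake://myaccount.snowflakecomputing.com/?TIMEZONE=UTC
```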
Data Automation
Fixed Issues
Fixed a null pointer exception so that no 5XX errors occur when you download non-existent query details from the Pipeline Execution Statistics of an ELT (write-type) Snap.
Platform
New Features
Pipeline Cache is a subscription feature (currently available in private beta) that enables you to cache data in memory for reference by another pipeline. The cached data can be a dataset, and Expression Language functions can retrieve values from it. Looking up values in database rows and columns is costly when the table is queried repeatedly for the same reference data; an in-memory data store instead lets your pipelines look up the data as key/value pairs in other Snaps at runtime. If you need to look up data held in an external system (such as an employee object or an identifier field), you can use Pipeline Cache to optimize that kind of reference mapping.
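Conceptually, the pattern resembles the following Python sketch. It is illustrative only: the `query_db` helper and the field names are hypothetical stand-ins, not the SnapLogic API. The reference data is loaded once into an in-memory map, and each incoming document is then resolved against that map instead of triggering a query per document.

```python
# Illustrative sketch of the lookup pattern Pipeline Cache enables.
# query_db() and the field names are hypothetical placeholders, not SnapLogic APIs.

def query_db(sql):
    # Placeholder for a real database call; returns rows as dicts.
    return [
        {"employee_id": "E100", "name": "Ada"},
        {"employee_id": "E200", "name": "Grace"},
    ]

# Build the key/value store once, instead of querying per incoming document...
employee_cache = {
    row["employee_id"]: row["name"]
    for row in query_db("SELECT employee_id, name FROM employees")
}

# ...then every downstream lookup is an in-memory read at runtime.
incoming_docs = [{"employee_id": "E200", "amount": 42}]
for doc in incoming_docs:
    doc["employee_name"] = employee_cache.get(doc["employee_id"])
    print(doc)  # {'employee_id': 'E200', 'amount': 42, 'employee_name': 'Grace'}
```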
...
When creating a new project from a Git repository, you can also create a new branch for the new project.
Support for HashiCorp KV Secrets Engine Version 1 is available, in addition to KV Secrets Engine Version 2.
Fixed Issues
Fixed an issue where Orgs could not be provisioned.
The subscription feature Secrets Management - CyberArk is now displayed correctly on the Manager > Subscriptions page.
...
The Bouncy Castle library version is upgraded from `bcpg-jdk15on` [1.69] to `bcpg-jdk18on` [1.73] in all our Snap Packs. This upgrade brings the latest security features to the SnapLogic platform.
The Generic Database Account now supports SSH Tunneling connections. You can now encrypt the network connection between your client and the database server, ensuring a highly secure connection.
The Hive Snap Pack is Cloudera-certified for Cloudera Data Warehouse (CDW). You can use the Hive Execute Snap to work with CDW clusters through a Generic Hive Database account.
The Marketo Bulk Extract Snap works successfully in the non-lineage path in an Ultra task.
The Key passphrase field in the Private Key Account of the Binary Snap Pack now supports expressions, allowing dynamic evaluation using pipeline parameters when the expression button is enabled.
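For example (a hypothetical pipeline parameter name; SnapLogic expressions reference pipeline parameters with a leading underscore), enabling the expression button and entering the following evaluates the passphrase from a `key_passphrase` pipeline parameter at runtime:

```
_key_passphrase
```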
Snowflake
SnapLogic is specified as a partner tag in all requests directed to Snowflake, making it easier for Snowflake to identify requests that come from SnapLogic.
The default JDBC JAR for the Snowflake Snap Pack is upgraded to version 3.13.28 to support the `GEOMETRY` data type.
Fixed Issues
Fixed an issue with the Encrypt Field Snap where it failed to support an RSA public key for encrypting a message or field. The Snap now supports RSA public-key encryption.
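For reference, RSA public-key encryption of a small value looks like the following sketch, which uses the widely available Python `cryptography` package. This illustrates the general operation only and does not reflect the Snap's internal implementation:

```python
# Illustrative RSA public-key encryption of a short field value.
# This sketches the general operation, not the Snap's internals.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"secret field value", oaep)  # encrypt with the public key
plaintext = private_key.decrypt(ciphertext, oaep)             # only the private key can decrypt
assert plaintext == b"secret field value"
```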
...