You may see an Upgrade Notice dialog informing you that you will be logged out immediately to complete the update process. Because the SnapLogic platform consists of multiple applications, you are likely to see this message a few times.
We recommend that you upgrade to the latest browser version.
Accounts and Platform Updates
Some accounts may have a fixed time for refresh tokens, such as Google accounts that must be refreshed every hour. If that refresh needs to occur when the platform is down for an update, the refresh does not occur. To prevent these accounts from failing after a new platform deployment, we recommend that you refresh your accounts before the designated downtime. If the platform is down for longer than an hour, you will need to refresh those accounts once the platform is back online.
Introducing Data Catalog as a Service. The catalog service adds a Table asset type that stores data location and schema information for files in an external file system. In conjunction with the Catalog Insert and Catalog Query Snaps, this feature provides data integrators with the tools required to interconnect and analyze enterprise metadata. eXtremeplex users can also use the Catalog Writer and Catalog Reader Snaps (see Big Data Snaps). You can gain visibility into data (table and column structure) stored in the data lake (usually in an AWS S3 account) for building Capture/Ingest/Conform/Refine Pipelines in a single UI.
Enhanced support in Runtime Archive to enable access to logs of all pipeline executions for auditing purposes. Prior to this release, logs for only 100 child pipelines per parent pipeline were archived; this limitation has been removed.
Added High Availability configurations for Triggered Tasks. Users who run their Pipelines via Triggered Tasks can now deploy those tasks to multiple Snaplexes, using one of the Override URLs to achieve fault tolerance in the event of a failure.
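The client-side failover this enables can be sketched as follows. This is a generic illustration, not SnapLogic code: the URLs are hypothetical placeholders, and the fetch function is injected so the retry logic can be shown without real network calls.

```python
# Sketch of client-side failover across multiple Triggered Task URLs.
# The URLs are hypothetical; `fetch` is injected so the failover logic
# can be demonstrated without real network access.

def trigger_with_failover(urls, fetch):
    """Try each task URL in order; return the first successful response."""
    last_error = None
    for url in urls:
        try:
            return fetch(url)
        except Exception as exc:  # e.g. connection refused, timeout
            last_error = exc
    raise RuntimeError(f"all task URLs failed: {last_error}")

# Demo with a fake fetcher: the first Snaplex is "down".
override_urls = [
    "https://snaplex-a.example.com/api/1/rest/slsched/feed/org/proj/task",
    "https://snaplex-b.example.com/api/1/rest/slsched/feed/org/proj/task",
]

def fake_fetch(url):
    if "snaplex-a" in url:
        raise ConnectionError("snaplex-a unreachable")
    return {"status": "started", "url": url}

result = trigger_with_failover(override_urls, fake_fetch)
print(result["status"])  # started
```

In practice the same loop would wrap a real HTTP client call against the task's Override URLs.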
Added the API Usage Detail feature to API Meters in Manager, enabling administrators to monitor API workflows in their Org and avoid exceeding usage limits. You can now track both daily and concurrent API usage, each summarized in a pie graph, along with a table of the top ten API uses.
Enhanced the SnapLogic release process to enable admin users to update their Orgs automatically as soon as a new release is available. Starting with the SnapLogic 4.16 release (around mid-February 2019), you will be able to configure your SnapLogic Org in the Manager tab to update automatically to 4.16 and later. The current process of manually updating all Orgs 35 days after GA will continue.
Enhanced the API download functionality by giving an option to download tasks (APIs) in .yaml or .json formats for viewing in Swagger.
You can now extract enterprise metadata from AWS S3 and store it as Table assets in Manager. Metadata is stored in Tables accessible via projects; alternatively, you can search under Assets in Manager. Once a Table is opened, you can view the partition keys and schema of the table data.
API Meters in Manager enables you to monitor API usage. Clicking Usage Detail displays a table of the top ten most run API calls, as well as pie graphs summarizing the usage. Both daily and concurrent views are available.
Set your version of SnapLogic to upgrade automatically to the next available release via the Auto Update check box. With 4.15, this feature enables an automatic update to the 4.16 release.
Enhanced the Activity Log UI, enabling you to view the number of log entries available, and view log entries in sets of 100.
Pipelines that are already started before a Snap update will continue to run with the older version of the Snaps. Only pipelines started after the new Snaps are deployed will run with the new Snaps.
For Ultra pipelines, the currently running instances will continue to run with the older Snap packs. Editing and saving the task instance in Manager will cause a rolling restart of the Ultra pipeline instances and the new Snap Packs will get picked up.
Customers using SQL Server with Windows authentication and SAP will need to restart their Groundplex and Cloudplex instances.
New Snap Packs are displayed in the Designer only after all the JCCs in an org are upgraded to the latest 4.15 version.
New Snap Packs
SnapLogic Data Science (Machine Learning): Introducing SnapLogic Data Science Snaps that enable you to conduct machine learning operations on your datasets. Using these Snap Packs, you can conduct data cleaning, data modeling, and data analytics. The Data Science solution includes the following:
ML Data Preparation Snap Pack: Perform preparatory operations on datasets such as data type transformation, data cleanup, sampling, shuffling, and scaling. Snaps in this Snap Pack are:
Categorical to Numeric: Convert categorical columns into numeric columns by using integer coding or one hot encoding.
Sample: Generate sample datasets from an input dataset using sampling algorithms.
Scale: Scale values in columns to specified ranges or apply statistical transformations.
Shuffle: Randomize the order of the row data in the dataset.
Type Converter: Determine types of values in columns. There are four supported types: integer, floating point, text, and datetime.
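As an illustration of two of the operations above, here is a minimal Python sketch of one hot encoding and min-max scaling. These are generic sketches of the techniques, not the Snaps' internal implementation.

```python
# Generic illustrations of two ML Data Preparation operations.

def one_hot(values):
    """Categorical to Numeric via one hot encoding (sorted category order)."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

def min_max_scale(values, lo=0.0, hi=1.0):
    """Scale numeric values into the range [lo, hi]."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero for constant columns
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

print(one_hot(["red", "blue", "red"]))  # [[0, 1], [1, 0], [0, 1]]
print(min_max_scale([10, 20, 30]))      # [0.0, 0.5, 1.0]
```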
ML Core Snap Pack: Perform data modeling operations such as model training, cross-validation, and model-based predictions. You can also execute Python scripts remotely. Snaps in this Snap Pack are:
Users can automate reporting activities and creation of EDI transactions using these Snaps in Ultra tasks. Users can also generate analytics information to aid in their decision-making efforts by using the Analytic Snap Packs in conjunction with these Snaps.
Workday Prism: Provides the ability to integrate data in Workday Prism to generate self-service analytical reports that empower HR and finance teams to make business decisions. This Snap Pack includes the following Snap:
Bulk Load Snap: Use this Snap to upload multiple records/files in the SnapLogic pipeline from multiple sources (such as ERP, CRM, databases, and Workday HCM) into Workday Prism, and derive insightful visual analytics on the data.
Catalog Insert: Insert metadata information into metadata catalog tables. You can also use this Snap to further enrich your metadata catalog with additional details, ensuring that you have all the meta-information you need to get actionable insights from your data.
Catalog Query: Query metadata information from the metadata catalog. You can use this Snap to retrieve metadata from tables in your catalog and select the elements that you want to reuse. For example, you could use a combination of the Catalog Query and Catalog Insert Snaps to retrieve metadata from one set of tables and create or enrich other tables in your catalog.
New Snaps in Existing Snap Packs
Snowflake Snap Pack: Added the following Snaps:
Snowflake Multi Execute: Execute multiple Data Manipulation Language (DML) and Data Definition Language (DDL) queries in the Snowflake database.
Snowflake SCD2: Perform Type 2 Slowly Changing Dimension (SCD2) historization of fields in Snowflake tables.
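SCD2 historization keeps prior versions of a changed row instead of overwriting them. A generic Python sketch of the idea (not the Snap's implementation; field names are illustrative):

```python
from datetime import date

def scd2_upsert(history, key, new_row, today):
    """Close the current version of `key` (if changed) and append the new one.

    Each history entry: {"key", "data", "start", "end", "current"}.
    Generic sketch of Type 2 Slowly Changing Dimensions, not the Snap itself.
    """
    for row in history:
        if row["key"] == key and row["current"]:
            if row["data"] == new_row:
                return history  # no change, nothing to historize
            row["end"] = today  # close the old version
            row["current"] = False
    history.append({"key": key, "data": new_row,
                    "start": today, "end": None, "current": True})
    return history

h = []
scd2_upsert(h, "cust1", {"city": "Austin"}, date(2018, 1, 1))
scd2_upsert(h, "cust1", {"city": "Boston"}, date(2018, 6, 1))
print(len(h))  # 2: one closed historical version, one current version
```

The Snap applies this pattern to Snowflake tables, so every prior value of a historized field remains queryable with its validity window.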
Users can leverage these metadata Snaps by just referring to the table names without needing to know the actual file locations.
Hadoop Snap Pack: Added the following Snaps:
HDFS ZipFile Writer: Create and write ZIP files directly into the Hadoop file system. Use this Snap to collect files from various locations, zip them together, and insert them directly into the Hadoop file system in one go.
HDFS ZipFile Reader: Read the contents of ZIP files that are located in the Hadoop file system. This Snap enables you to read all the ZIP files in a Hadoop directory that match specific criteria and make the unzipped files available to downstream Snaps.
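The zip-then-read round trip these two Snaps perform can be sketched with Python's standard zipfile module. An in-memory buffer stands in for the HDFS location; file names and contents are illustrative.

```python
import io
import zipfile

# In-memory stand-in for an HDFS location: zip several "files" together,
# then read them back, mirroring the HDFS ZipFile Writer/Reader round trip.
buffer = io.BytesIO()

# Writer side: collect files and zip them in one pass.
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("logs/app1.log", "line a\nline b\n")
    zf.writestr("logs/app2.log", "line c\n")

# Reader side: list the archive entries and unzip one of them.
buffer.seek(0)
with zipfile.ZipFile(buffer) as zf:
    names = zf.namelist()
    first = zf.read("logs/app1.log").decode("utf-8")

print(names)  # ['logs/app1.log', 'logs/app2.log']
```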
Updated Snap Packs
Anaplan Account: Anaplan is extending the deadline to migrate to Certificate Authority (CA) certificates to December 2019. To support this, on December 8, 2018, Anaplan will update their production certificate. This will replace validation for Anaplan certificates, and existing Anaplan certificates will cease to work once this certificate has been issued. Anaplan still recommends that customers update their integration clients to a version that supports Certificate Authentication 2.0, which uses a CA certificate. To support the Anaplan v2.0 API, we have enhanced the Anaplan Account configuration.
Depending on the authentication method you use, perform the action required with the updated Anaplan Snap Pack, as listed below:

- Basic username and password: No action required.
- CA certificate: Update the Anaplan Account settings and enter relevant information in the External certificate file, External certificate contents, and External private key properties. Click here for details.
- Self-signed Anaplan certificate: Once you upgrade your Snaplexes to 4.15, you must roll back to the previous version of the Anaplan Snap Pack. The Anaplan certificate expires on December 8, 2018. You must download the new certificate and update the Anaplan Snap account with the new certificate.
Anaplan Snap Pack: Added retry capability to the Anaplan Action, Read, Write, and Upload Snaps, allowing you to handle connection timeouts during Pipeline execution.
Binary Snap Pack: Added a new property, Binary Header Properties, to the CSV Formatter Snap. This property can be configured with the binary headers and their values in the incoming document.
REST Snap Pack: Added the ability to upload multiple files using REST Post, Put, and Patch Snaps.
Hive Snap Pack: Added ZooKeeper support to Hive accounts. You can now set up your Hive account using ZooKeeper.
Azure SQL Snap Pack: Added two new properties, Schema source and Use type default, to the PolyBase Bulk Load Snap that let you specify the schema source and handle any missing values in the input documents.
Unable to partition data when the Parquet schema has all the required fields.
Unable to generate a schema from the source when the input columns contain 'null' value in the first document.
REST Post: Unable to support UTF-8 characters in the Filename to be used property; the value renders as question marks instead.
Redshift Multi Execute: SQL binding does not work with Expression OFF.
Teradata FastExport: Special characters appear in the output after table data is exported.
Hive Snap Pack: The Hive Snap Pack does not validate with Apache Hive JDBC v1.2.1 jars or earlier because of a defect in Hive. HDP 2.6.3 and HDP 2.6.1 run on Apache Hive JDBC v1.2.1 jars. To validate Snaps that must work with HDP 2.6.3 and HDP 2.6.1, use JDBC v2.0.0 jars.
Big Data Enhancements
Copy an eXtremeplex: Enables you to reuse an existing configuration when creating a new eXtremeplex in the Manager tab.
Schema Source Options: You now have the option to provide the schema in the Spark SQL 2.x CSV and JSON Parser Snaps via an inferred schema (automatic) or from a Hive Metastore (selected by the user), in addition to entering it manually.
Added external validation functionality to the Hadooplex: Validating Pipelines externally as a regular Spark job enables users to execute multiple Spark Pipelines concurrently, as opposed to a single Pipeline from a single Hadooplex container. Ensure that there are sufficient resources in the Hadoop cluster to run multiple Pipelines concurrently.
New Snaps in Spark 2.x Snap Pack
Spark SQL 2.x Snap Pack: Added the following new Snaps:
Avro Formatter: Enables you to format data input from an upstream Snap in the Avro data format.
Avro Parser: Parse Avro data to pass on to a downstream Snap in a different data format.
JSON Formatter: Format data input from an upstream Snap in the JSON data format.
JSON Parser: Parse JSON data to pass on to a downstream Snap in a different data format.
Catalog Writer: Part of the Data Catalog service, this Snap enables you to format data per the selected format type and insert metadata into the target table path.
Catalog Reader: Queries a table, then reads and parses metadata information to pass to downstream Snaps.
When creating tables using the Catalog Reader and Catalog Writer Snaps, any user can overwrite the contents of the table or add it to any project regardless of permissions.
This section tracks the changes made during the iterative pushes to the UAT server and the GA release. The expected schedule is:
UAT #1, October 19, 2018 (Release updates are published above)
UAT #2, October 26, 2018
UAT #3, November 02, 2018
GA, November 09, 2018 (9 pm PDT)
Fixed an issue with the task timeout for local task execution.
Fixed an issue that causes excessive audit trail logging for user updates.
Fixed an issue with the API meter cleanup.
Fixed an issue with the OAuth token refresh.
Fixed an issue regarding details displaying in Dashboard for Ultra tasks.
Fixed the following in the Node Properties tab (Create/Update Snaplex dialog):
Restart Max Wait property, which, when set to 0, automatically resets to Forever.
HTTPS Port property, wherein an error displays asking for an integer input, even when the user has entered an integer value.
Added an option to overwrite an existing eXtremeplex when copying it to create a new eXtremeplex.
Fixed a script error encountered by Snaps having the File property when specifying the file path.
Fixed an issue with Snowflake and Redshift SCD2 Snaps where the asterisk that indicates a mandatory field is missing.
Fixed an issue with the REST Snap Pack where API call response times are slower than typical or even failing.
Fixed an issue where the Directory Browser Snap is not providing the directory name in the Name column when using the SMB protocol.
Big Data Updates
Fixed an issue wherein if Pipelines on Hadooplex fail, the entire Pipeline turns red, instead of just the Snap that has an issue turning red.
Fixed an issue in Catalog Writer wherein the Table Schema is not displayed in Manager (Table > Show schema) for an eXtreme pipeline.
Added the yarn_suggest_queue property to the Execution Settings tab in the Pipeline properties dialog to enable the queue to validate a Pipeline. If no value is specified, then the default setting is derived from the myresource_queue value in the plex.properties file.
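The fallback described above amounts to a two-level lookup. A minimal sketch, using the property names from the text but with a hypothetical function name:

```python
def resolve_queue(pipeline_props, plex_props):
    """Pick the validation queue: the Pipeline-level yarn_suggest_queue wins;
    otherwise fall back to myresource_queue from plex.properties."""
    return pipeline_props.get("yarn_suggest_queue") or plex_props.get("myresource_queue")

print(resolve_queue({}, {"myresource_queue": "default"}))  # default
print(resolve_queue({"yarn_suggest_queue": "etl"}, {"myresource_queue": "default"}))  # etl
```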
Fixed an issue with the Hadooplex, wherein no Spark jobs are submitted to Hadoop clusters.
Fixed an issue with the Catalog Writer that created an output file after validating a Pipeline, even when the Execute during preview check box is not selected.
Fixed an issue with the Catalog Reader that returns an empty result for a partitioned table created by an eXtreme Pipeline.
Fixed an issue with the Catalog Reader that reads the records of only the last table partition in an eXtreme Pipeline, instead of reading all the records across partitions.
Fixed an issue with the Copy Snap that previews only a single output, even when multiple preview outputs are expected.
Fixed the doc links in the Avro Formatter Snap that did not open the related documentation.
Fixed an issue where existing Pipelines with the HDFS Writer Snap failed for both HDP Kerberos and CDH Kerberos.
Fixed an issue that causes the ORC Writer Snap to fail while validating a Pipeline, displaying a "Could not initialize class" error.
Moved the Catalog Query and Catalog Insert Snaps from the SnapLogic Metadata Catalog to a new Data Catalog Snap Pack.
Fixed an issue that causes private Snap Packs loading from the global shared project to fail during Pipeline validation.
Fixed an OAuth token refresh error with Google BigQuery account, while using the REST Snap Pack.
Fixed an issue in the Data Catalog Snaps (both Standard- and Spark-mode) wherein, when the Table Name field expression (=) is enabled and disabled, the drop-down arrow to select values disappears.
Fixed an issue across multiple Snaps wherein clicking on a Snap to zoom in/out when it is still loading throws various script errors.
Fixed an issue with the Project field in the Add New Pipeline dialog wherein, when selecting a project, special characters display as HTML entity numbers.
Fixed an issue with the File Operation Snap wherein the error message does not route to the Error View for an invalid file path.
Fixed the following Hadoop Snap Pack issues:
HDFS ZipFile Writer/Reader Snaps fail when using unicode characters for file names.
User impersonation fails for an HA-enabled CDH cluster with no account when User Impersonation is enabled.
Unable to define multiple input views; only a single input view is currently possible.
HDFS Writer Snap fails when using the WASB account protocol.
Fixed an issue with the Hive Execute Snap that does not authenticate with Kerberos for Hortonworks Data Platform (HDP) cluster.
Fixed the following Data Science Snap Pack issues:
The Numeric to Categorical Snap now throws an error if the entered bin value is not a positive integer.
The Numeric to Categorical Snap now throws an error if the user provides an unsorted list or duplicated values.
The Categorical to Numeric Snap now throws an error if the profile input view is not connected.
Fixed an initialization error with the Remote Python Script Snap Account.
Fixed an issue in the Snowflake Bulk Load Snap where the Snap would fail when the property Load empty strings is set to true and non-string field values are null.
Big Data Updates
Fixed the following Data Catalog Snap Pack issues:
Catalog Reader Snap cannot read an integer partition from the table created by Standard-mode Pipelines.
Catalog Reader cannot process expression parameters for Partition Key and Value fields for eXtreme Pipelines.
Fixed an issue with eXtreme Pipelines wherein the timestamp data written by Spark is not being read by the Standard-mode Pipelines.
Fixed an issue with eXtreme Pipelines that fail to fetch the pricing data for Spot AWS instances.
Added a configuration option to enable support for Unicode characters in SLFS (SnapLogic File System) file names.
Fixed an issue with the Adobe Cloud Platform wherein the pipelines were failing because of an API update from Adobe. To handle the API update, the pagination limit for fetching datasets is changed from 0 to 20.
Fixed an issue with the JSON Parser/Formatter Spark SQL 2.x Snaps wherein Pipelines ingesting a multiline JSON file fail to validate.
4.15 Dot Releases
SnapLogic dot releases help optimize and continuously improve the platform. This dot release section documents all customer-impacting updates across the SnapLogic platform.
Fixed an issue with pushing Pipeline runtime information to the control plane that causes Pipeline failures.
Fixed an issue with the ForEach Snap and ground trigger passing in maps and arrays as input.
Added new properties, Preserve data types, Connection properties, Read timeout in seconds, and Connection timeout in seconds, to the Worksheet Reader Snap. These properties let you convert input data types to strings and handle connection timeouts. Also fixed issues with header count, column count mismatch, and rendering of columns with no headers.
Added new functionality to the Expression attribute values property in the Delete Table Item, Scan, Update, and Bulk Get Snaps. The property now handles columns that are named after DynamoDB reserved words.
Replaced the Max idle time and Idle connection test period properties with the Max life time and Idle Timeout properties, respectively, in the Account configuration dialog. The new properties fix connection release issues that occur due to default/restricted DB Account settings.