Kafka Snap Pack
Overview
You can use this Snap Pack to access Apache Kafka and Confluent Kafka to produce and consume messages.
Supported Versions
The 4.30 release of this Snap Pack uses the following client libraries:
Apache-Kafka Version 3.1.1
Confluent-Kafka Version 7.1.1
Temporary Files
During execution, data processing on Snaplex nodes occurs principally in memory, as unencrypted streams. When a dataset exceeds the available compute memory, the Snap writes Pipeline data to local storage, also unencrypted, to optimize performance. These temporary files are deleted when the Snap or Pipeline execution completes. You can configure the location of this temporary data in the Global properties table of the Snaplex's node properties, which can also help you avoid Pipeline errors caused by insufficient disk space. For more information, see Temporary Folder in Configuration Options.
Snap Pack History

Each entry below lists the release, the Snap Pack version, and the release type (Stable or Latest), followed by the updates delivered in that version.

August 2023 | main22460 | Stable
- Updated and certified against the current SnapLogic Platform release.

May 2023 | 433patches21070 | Latest
- Fixed an issue with the Kafka Consumer Snap that caused it to skip records in a partition when a pipeline failed without successfully processing any records from that partition. This edge case was possible only with a specific configuration of the Snap combined with a specific ordering of events during termination of the pipeline.

May 2023 | main21015 | Stable
- Upgraded with the latest SnapLogic Platform release.

February 2023 | 432patches20143 | Latest/Stable
- Fixed an issue with the Kafka Producer Snap to avoid an error about delivery.timeout.ms. The Snap now adjusts the value of the Message Publish Timeout setting automatically and displays a warning in the Pipeline Execution Statistics.
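For context on this fix: the error originates in the Apache Kafka producer client itself, which requires delivery.timeout.ms to be at least linger.ms + request.timeout.ms and otherwise refuses to start. The following is a minimal sketch of that relationship using the plain Java client; the broker address is a placeholder, and this illustrates the underlying client behavior rather than the Snap's internal implementation.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class DeliveryTimeoutSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");

            // The Kafka producer enforces:
            //   delivery.timeout.ms >= linger.ms + request.timeout.ms
            // A value below that sum causes a ConfigException when the producer is
            // created -- the kind of error the Snap now avoids by adjusting its
            // Message Publish Timeout setting automatically.
            props.put(ProducerConfig.LINGER_MS_CONFIG, 100);
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30_000);
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 30_100);

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send records here
            }
        }
    }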
February 2023 | main19844 | Stable
- Upgraded with the latest SnapLogic Platform release.

November 2022 | 431patches19770 | Latest
- Fixed a memory issue that occurred when the Kafka Producer Snap was used to produce many records in a long-running pipeline.

November 2022 | 431patches19211 | Latest
- Enhanced the Kafka Consumer Snap with the new Max Poll Interval (ms) field, which lets you specify the time limit between subsequent calls to poll. This field corresponds to the Kafka max.poll.interval.ms property.
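The field maps directly to the standard consumer property shown in this sketch of the plain Java client (broker address, group ID, and topic name are placeholders). If more time than max.poll.interval.ms elapses between two poll calls, the group coordinator considers the consumer failed and rebalances its partitions to other group members.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MaxPollIntervalSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");            // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            // The property behind the Snap's Max Poll Interval (ms) field: if more
            // than five minutes pass between poll() calls, the group coordinator
            // evicts this consumer and triggers a rebalance.
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300_000);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("example-topic")); // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }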
November 2022 | main18944 | Stable
- Upgraded with the latest SnapLogic Platform release.

August 2022 | main17386 | Stable
- Upgraded the Kafka client libraries to Apache-Kafka Version 3.1.1 and Confluent-Kafka Version 7.1.1.
- Optimized the performance of the Snap with support for the LZ4 and ZSTD compression types for produced messages.
- The Kafka account that connects to the Kafka Schema Registry now uses the TLS v1.2 protocol instead of older TLS versions, which have security vulnerabilities.
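These compression types correspond to the standard producer compression.type property. Below is a minimal sketch with the plain Java client (broker address and topic are placeholders); note that zstd requires brokers running Kafka 2.1 or later.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class CompressionSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");

            // Valid values: none, gzip, snappy, lz4, zstd. Compression is applied
            // per batch, so it also reduces network and broker storage usage.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "value")); // placeholder topic
            }
        }
    }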
4.29 Patch | 429patches17109 | Latest
- Enhanced the Kafka Producer Snap with the LZ4 and ZSTD compression types for compressing messages, which optimizes the Snap's performance.
- Fixed an issue with the Kafka Producer Snap where the Snap wrote an output document even when there was an error.
- Fixed an issue with the Kafka account where the expression buttons were enabled by default for the Bootstrap Server and Advanced Property Value fields.
- Upgraded the Kafka client libraries to Apache-Kafka Version 3.1.1 and Confluent-Kafka Version 7.1.1.
- Enhanced the Kafka account that is configured to connect to the Kafka Schema Registry to use the TLS v1.2 protocol instead of TLS versions that have security vulnerabilities.

4.29 | main15993 | Stable
- The Kafka Snap Pack now supports the Azure Event Hubs streaming service. Learn more: Azure Event Hubs configuration in Azure portal for Kafka SSL Account.

4.28 Patch | 428patches14904 | Latest
- Fixed an issue with the Kafka Snaps where the Snaps failed with null pointer exceptions when logging cluster information while the controller's identity was not yet known.

4.28 | main14627 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.27 | main12833 | Stable
- Improved the handling of interrupted or aborted Kafka Snaps to ensure proper clean-up of metrics.
- Optimized the Kafka Producer Snap to initialize the Kafka API only if there is at least one input document.
- The Kafka Producer and Consumer Snaps now create a default Client ID string to identify the specific Snap instance.
- Enhanced the Kafka Producer Snap to support the Avro serialization format with a new target data type, Avro, under Key Serializer and Value Serializer.
- Enhanced the Kafka SSL Account with new fields (Registry Username or Key and Registry Password or Secret) for Schema Registry authentication. The two existing SASL properties, SASL Username and SASL Password, were renamed SASL Username or Key and SASL Password or Secret, respectively.
- Upgraded the Apache Kafka client library from version 2.6.0 to 2.8.0 and the Confluent Kafka client libraries from version 5.2.1 to 6.2.0.
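For reference, Avro serialization against an authenticated Schema Registry corresponds roughly to the Confluent client configuration below. This is a sketch of the underlying client settings, not the Snap's actual wiring; the registry URL and credentials are placeholders, and KafkaAvroSerializer comes from Confluent's kafka-avro-serializer library.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class AvroRegistryConfigSketch {
        public static Properties buildProducerProps() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            // Confluent's Avro serializer registers and fetches schemas in the
            // Schema Registry as records are produced.
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "io.confluent.kafka.serializers.KafkaAvroSerializer");
            // Registry endpoint and credentials -- the client-side analogue of the
            // account's Registry Username or Key / Registry Password or Secret fields.
            props.put("schema.registry.url", "https://registry.example.com"); // placeholder
            props.put("basic.auth.credentials.source", "USER_INFO");
            props.put("basic.auth.user.info", "registryKey:registrySecret");  // placeholder
            return props;
        }

        public static void main(String[] args) {
            System.out.println(buildProducerProps());
        }
    }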
4.26 | main11181 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.25 Patch | 425patches10543 | Latest
- Fixed an issue in the Kafka Producer Snap by removing the validation of the account when the Snap initializes in a Pipeline execution.
- Removed the Confluent prefix from the Snaps and accounts in the Snap Pack. However, the Snap Pack name continues to be Confluent Kafka Snap Pack.

4.25 | main9554 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.24 Patch | 424patches8805 | Latest
- Added the Wait For Full Count check box to the Kafka Consumer Snap to determine how a positive value in the Message Count field is interpreted. Enabled (the default): the Snap continues polling for messages until the specified count is reached. Disabled: if the number of messages currently available is less than the specified count, the Snap consumes the available messages and terminates.
  Known Issue: The Wait For Full Count check box activates only when you provide a positive integer value in the Message Count field; it does not activate when you use an expression for Message Count, even if the value evaluates to a positive number. Workaround: To activate this check box, temporarily replace the Message Count expression with a positive integer, select the desired state for Wait For Full Count, and then restore the original expression in the Message Count field.

4.24 | main8556 | Stable
- Upgraded with the latest SnapLogic Platform release.
4.23 Patch | 423patches7900 | Latest
- Added support for writing and reading record headers. The Kafka Producer Snap has a new Headers table to configure the Key, Value, and Serializer for each header to be written. The Kafka Consumer Snap reads any headers present on the records it consumes and provides two new fields, Default Header Deserializer and Header Deserializers, to configure how header values are deserialized when a header requires a deserializer other than the default.
- Added support for writing and reading each record's timestamp. The Kafka Producer Snap has a new Timestamp field that sets each record's timestamp, which is the number of milliseconds since the epoch (00:00:00 UTC on January 1, 1970). It can be set to an expression that evaluates to a long integer, a string that can be parsed as a long integer, or a date; if you specify no expression, or the value is empty, the timestamp is set to the current time. The Timestamp field is relevant only if the Kafka topic is configured with message.timestamp.type = CreateTime (the default). For more information, see the official Kafka documentation. The Kafka Consumer Snap has a new check box, Include Timestamp, which is disabled by default for backward compatibility; when enabled, the output for each record includes its timestamp in the metadata.
- The Kafka Producer Snap has a new check box, Output Records, to determine the format of each output document when the Snap is configured with an output view. Disabled (the default): the Snap's output includes only the basic metadata (topic, partition, offset) for each record, along with the original input document. Enabled: each output document contains a more complete representation of the record produced, including its key, value, headers, and timestamp.
- The Kafka Consumer Snap has a new field, Output Mode, with two options. One output document per record (the default): every record received from Kafka has a corresponding output document. One output document per batch: preserves the batching of records as received from Kafka; every poll that returns a non-empty set of records results in a single output document containing the list of records as batch, along with batch_size and batch_index. This mode is especially useful when Auto Commit is disabled and Acknowledge Mode is Wait after each batch of records, depending on the nature of the processing between the Kafka Consumer and the Kafka Acknowledge Snaps.
- Removed the Add 1 to Offsets check box from the Kafka Consumer Snap.
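In the underlying Java client, record headers and timestamps look like the following sketch (broker address, topic, and header names are placeholders). The Snap's Headers table and Timestamp field surface these same record attributes.

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class HeadersAndTimestampSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Explicit timestamp in epoch milliseconds; it is stored as the
                // record's timestamp only when the topic uses
                // message.timestamp.type=CreateTime (with LogAppendTime, the broker
                // overwrites it).
                long timestamp = System.currentTimeMillis();
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("example-topic", null, timestamp, "key", "value");
                // Header values are raw bytes with no declared type, which is why
                // the Consumer Snap offers per-header deserializer settings.
                record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));
                producer.send(record);
            }
        }
    }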
4.23 | main7430 | Stable
- Fixed an intermittent issue of the Confluent Kafka Consumer Snap stopping when Message Count is 0 while records are still available. The Snap now stops only if a poll returns no records twice in a row and has partitions assigned before and after each poll.
- Enhanced the Confluent Kafka Consumer Snap with an option to allow reprocessing of records that remain unacknowledged due to acknowledge timeouts. This ensures that all records from Kafka are processed.

4.22 | main6403 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.21 Patch | 421patches6136 | Latest
- Fixed the Confluent Kafka Consumer Snap, which aborted abruptly upon getting a CommitFailedException error.
- Upgraded the Kafka client library from version 2.2 to 2.5.

4.21 Patch | 421patches5862 | Latest
- Enhanced the Confluent Kafka Consumer Snap to significantly reduce the volume of offset-commit requests when using the record acknowledgement mode (Wait after each record), thereby improving performance and stability.

4.21 | snapsmrc542 | Stable
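The commit-volume optimization and the CommitFailedException fix above both concern the consumer's offset-commit path. For context, here is a sketch of manual offset commits with the plain Java client (broker, group, and topic are placeholders): commitSync throws CommitFailedException when the consumer has been evicted from the group, typically because the gap between polls exceeded max.poll.interval.ms, and committing after every record rather than per batch multiplies these round trips.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.CommitFailedException;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");            // placeholder
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");          // commit manually
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("example-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    if (records.isEmpty()) continue;
                    // ... process the batch ...
                    try {
                        // One commit per batch instead of one per record keeps the
                        // number of offset-commit requests low.
                        consumer.commitSync();
                    } catch (CommitFailedException e) {
                        // Thrown when the group rebalanced and this consumer lost its
                        // partitions; the records will be redelivered within the group.
                        break;
                    }
                }
            }
        }
    }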
4.20 Patch | confluentkafka8747 | Latest
- Updated the Confluent Kafka SSL Account settings to make the Truststore filepath and Truststore password properties optional.
- Fixed an issue with the Confluent Kafka Producer Snap wherein the Topic property did not evaluate expressions against input documents.

4.20 | snapsmrc535 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.19 | snaprsmrc528 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.18 Patch | confluentkafka8111, confluentkafka8006, confluentkafka7784, confluentkafka7732 | Latest
- Fixed an exception that occurred in the Confluent Kafka Consumer Snap while consuming log-compacted topics.
- Fixed the Consumer Snap to stop quickly and cleanly on CLOSE and STOP lifecycle events.
- Added support for Kerberos-based authentication to enable connecting to Kerberos-enabled Kafka servers.

4.18 | snapsmrc523 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.17 Patch | confluentkafka7537 | Latest
- Fixed an issue with the Confluent Kafka Producer Snap wherein the Snap failed to produce the upstream Snap's input schema for the Partition Number property in the output preview.

4.17 | ALL7402 | Latest
- Pushed automatic rebuild of the latest version of each Snap Pack to SnapLogic UAT and Elastic servers.

4.17 | snapsmrc515 | Latest

4.16 Patch | confluentkafka7118, confluentkafka6891 | Latest
- Certified Confluent Platform version 5.x.
- Fixed an issue with the Confluent Kafka Consumer Snap wherein some connections were not closed after a Pipeline was aborted.

4.16 | snapsmrc508 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.15 Patch | confluentkafka6446 | Latest
- Added a new property to the Confluent Kafka Producer Snap, Message publish timeout (milliseconds), that lets you specify a timeout value.

4.15 | snapsmrc500 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.14 Patch | confluentkafka5737, confluentkafka5663 | Latest
- Fixed the Confluent Kafka Snaps so that they delete temporary files after Pipeline execution.

4.14 | snapsmrc490 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.13 Patch | confluentkafka5614, confluentkafka5330 | Latest
- Added the ability in the Confluent Kafka Producer Snap to write data to multiple partitions in a round-robin fashion.

4.13 | snapsmrc486 | Stable
- Upgraded with the latest SnapLogic Platform release.

4.12 Patch | confluentkafka5087 | Latest
- Fixed an issue with the Consumer Snap that failed when the specified timeout value was less than the Pipeline execution time.

4.12 | snapsmrc480 | Stable
- Enhanced the performance of the Confluent Kafka Producer Snap in writing incoming documents to the Confluent Kafka cluster.

4.11 Patch | confluentkafka4626 | Latest
- Fixed an issue with the Confluent Kafka Consumer Snap wherein the consumer session did not close properly when a Pipeline was stopped manually, causing the subsequent Pipeline execution to wait for the previous session to die and to consume no messages.

4.11 Patch | confluentkafka4302 | Latest
- Fixed an issue with the Confluent Kafka Consumer Snap that fetched only 500 messages when the message count was set to 0, caused by a change in the driver's default value.

4.11 | snapsmrc465 | Stable
- Updated the Confluent Kafka SSL Account with SCRAM_SASL support (see the configuration sketch after this history).

4.10 | snapsmrc414 | Stable
- Added Confluent v3.2 support for the Confluent Kafka Snap Pack.

4.9 Patch | confluentkafka3156 | Latest
- Fixed an issue regarding account validation failure due to unauthorized topics, and enhanced the error message for empty topic suggestions.

4.9 | snapsmrc405 | Stable
- Addressed an issue with the Confluent Kafka Producer Snap not including the input document schema in the Message Key and Message Value expression properties.

4.8 Patch | confluentkafka2804 | Latest

4.8 | snapsmrc398 | Stable
- New Snap Pack for 4.8, consisting of the Confluent Kafka Consumer and Confluent Kafka Producer Snaps.
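As referenced in the 4.11 entry above, SCRAM_SASL corresponds to the standard Kafka client security settings sketched below (broker address, username, and password are placeholders). The same pattern, with a different mechanism and login module, underlies the Kerberos support added in the 4.18 patch.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;

    public class ScramConfigSketch {
        public static Properties buildSecurityProps() {
            Properties props = new Properties();
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9093"); // placeholder
            // SASL over TLS; SCRAM-SHA-512 is also accepted where the broker supports it.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
            props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.scram.ScramLoginModule required "
                    + "username=\"user\" password=\"secret\";"); // placeholder credentials
            return props;
        }

        public static void main(String[] args) {
            System.out.println(buildSecurityProps());
        }
    }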
© 2017-2023 SnapLogic, Inc.