Overview

You can use the Kafka Acknowledge Snap to notify the Kafka Consumer Snap to commit the offset specified in the metadata of each input document.


  • This Snap should be used only if the Auto commit field in the Consumer Snap is not selected (set to false).
  • This Snap no longer requires a Kafka account.
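For context, acknowledging a Kafka message means committing its offset so that the consumer group does not re-read it after a restart or rebalance. The following sketch shows the general client-side pattern that this Snap handles for you. It uses the confluent-kafka Python client with placeholder broker, topic, and group names; it illustrates the concept only and is not the Snap's internal implementation.

# Conceptual sketch: manual offset commits with auto-commit disabled.
# Broker address, topic, and consumer group are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "CopyGroup1",                # placeholder consumer group
    "enable.auto.commit": False,             # equivalent of Auto commit = false
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["SampleKafkaTopic"])     # placeholder topic

msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print(msg.value())                       # stand-in for downstream processing
    # Only after processing succeeds is the offset committed ("acknowledged").
    consumer.commit(message=msg, asynchronous=False)

consumer.close()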

Prerequisites

None.

Support for Ultra Pipelines

Works in Ultra Pipelines.

Limitations and Known Issues

None.

Snap Input and Output

Input

  • Type of View: Document
  • Number of Views: Min: 1, Max: 1
  • Examples of Upstream and Downstream Snaps: Mapper Snap, Copy Snap
  • Description: Metadata from an upstream Kafka Consumer Snap. The input data schema is as follows:

"metadata": {
   "topic": "xyz",
   "partition": 2,
   "offset": 523,
   "consumer_group": "CopyGroup1",
   "client_id": "17a9bbc7-da8f-45f8-813e-1ebca9b80383",
   "tracker_index": 0,
   "batch_size": 500,
   "batch_index": 1,
   "record_index": 23,
   "auto_commit": false
}


Output

  • Type of View: Document
  • Number of Views: Min: 0, Max: 1
  • Examples of Upstream and Downstream Snaps: Mapper Snap, JSON Formatter
  • Description: Kafka messages that have been processed and acknowledged. If the auto_commit field in the input document's metadata is false and the commit notification is sent successfully to the corresponding Kafka Consumer Snap, the output is similar to the following:

{
  "status": "success",
  "original": {
    "metadata": {
      "consumer_group": "abc",
      "topic": "xyz",
      "partition": 123,
      "offset": 456,
      "auto_commit": false
    }
  }
}

If the auto_commit field is set to true in the input document's metadata, the output is similar to the following:

{
  "status": "Auto-commit is on",
  "original": {
    "metadata": {
      "consumer_group": "abc",
      "topic": "xyz",
      "partition": 123,
      "offset": 456,
      "auto_commit": true
    }
  }
}
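The two outputs above differ only in the status value and the auto_commit flag. The following plain-Python sketch shows the branching these schemas imply, using the field names from the examples above; the acknowledge function and the notify_consumer_snap helper are hypothetical stand-ins, not the Snap's actual code.

# Hypothetical sketch of the branching implied by the two output schemas above.
def notify_consumer_snap(topic, partition, offset):
    # Placeholder for the commit notification the Snap sends to the
    # corresponding Kafka Consumer Snap.
    print(f"commit requested for {topic}[{partition}] at offset {offset}")

def acknowledge(document, metadata_path="metadata"):
    metadata = document[metadata_path]
    if metadata.get("auto_commit"):
        # Nothing to acknowledge: the Consumer Snap commits offsets on its own.
        return {"status": "Auto-commit is on", "original": {"metadata": metadata}}
    notify_consumer_snap(metadata["topic"], metadata["partition"], metadata["offset"])
    return {"status": "success", "original": {"metadata": metadata}}

doc = {"metadata": {"consumer_group": "abc", "topic": "xyz", "partition": 123,
                    "offset": 456, "auto_commit": False}}
print(acknowledge(doc))  # {"status": "success", ...}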


Snap Settings

Label

  • Data Type: String
  • Default Value: Kafka Acknowledge
  • Example: Kafka_Acknowledge

Metadata path

  • Data Type: String
  • Description: Required. Specify the JSON path of the metadata within each input document.
  • Default Value: metadata
  • Example: $metadata

Snap Execution

  • Data Type: Drop-down list
  • Description: Select one of the three modes in which the Snap executes:
      • Validate & Execute. Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.
      • Execute only. Performs full execution of the Snap during Pipeline execution without generating preview data.
      • Disabled. Disables the Snap and all Snaps downstream from it.
  • Default Value: Validate & Execute
  • Example: Validate & Execute
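The Metadata path setting tells the Snap where to find the Consumer Snap's metadata inside each input document. SnapLogic resolves this path with its own expression language; the snippet below only illustrates the idea with a simplified dot-separated lookup (a hypothetical helper, not the platform's path engine).

# Simplified illustration of resolving a path such as "$metadata" against an
# input document. Hypothetical helper only; SnapLogic's JSON-path handling
# is richer than this.
def resolve_path(document, path):
    value = document
    for key in path.lstrip("$").split("."):
        value = value[key]
    return value

doc = {"metadata": {"topic": "xyz", "partition": 2, "offset": 523}}
print(resolve_path(doc, "$metadata"))  # {'topic': 'xyz', 'partition': 2, 'offset': 523}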

Troubleshooting

None.

Example

Acknowledging Messages

This example Pipeline demonstrates how to use the Kafka Acknowledge Snap, together with the Sequence, Kafka Producer, and Kafka Consumer Snaps, to acknowledge messages consumed from a Kafka topic.

First, we use the Sequence Snap to enable the Pipeline to send a large number of documents. We configure the Sequence Snap to send documents numbered 1 through 2500 by setting the Initial value to 1 and the Number of documents to 2500.

We configure the Kafka Producer Snap to send the documents to the Topic named SampleKafkaTopic, with the Partition number set to 0 so that all the messages go to the same partition.

We configure the Kafka Consumer Snap to read the messages from the Topic named SampleKafkaTopic, also with the Partition number set to 0. We set the Message Count to 100, which means the Snap consumes 100 messages and sends them to the output view.

On successful execution of the Pipeline, we can view the consumed and acknowledged messages in the Pipeline Execution statistics. Because the Message Count is set to 100 in the Consumer Snap, the Acknowledge Snap acknowledges the same number of messages.
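The same produce, consume, and acknowledge flow can be pictured at the Kafka client level. The sketch below mirrors the shape of this example Pipeline (2500 produced records, 100 consumed and committed) using the confluent-kafka Python client with placeholder connection details; it is an analogy for the Pipeline, not a replacement for it.

# Client-level analogy of the example Pipeline: produce 2500 records,
# then consume and commit (acknowledge) 100 of them.
# Broker address and consumer group are placeholders.
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
for i in range(1, 2501):                          # Sequence Snap: 1 through 2500
    producer.produce("SampleKafkaTopic", value=str(i), partition=0)
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "AcknowledgeExampleGroup",        # placeholder consumer group
    "enable.auto.commit": False,
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["SampleKafkaTopic"])

consumed = 0
while consumed < 100:                             # Message Count: 100
    msg = consumer.poll(10.0)
    if msg is None or msg.error():
        continue
    consumed += 1
    consumer.commit(message=msg, asynchronous=False)  # acknowledge each record

consumer.close()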

Download the Pipeline.

Downloads

  1. Download and import the Pipeline into SnapLogic.
  2. Configure Snap accounts as applicable.
  3. Provide Pipeline parameters as applicable.


See Also