
Overview

You can use the Kafka Acknowledge Snap to notify the Kafka Consumer Snap to commit an offset at the specified metadata in each input document.


Note
  • This Snap should be used only if the Auto commit field in the Consumer Snap is not selected (set to false).
  • This Snap no longer requires a Kafka account.

Prerequisites

  • A Confluent Kafka server with a valid account.
  • The Kafka Acknowledge Snap must receive metadata from an upstream Snap (for example, a Kafka Consumer Snap) in the Pipeline.

Support for Ultra Pipelines   

Works in Ultra Pipelines.


Limitations and Known Issues

None.

Snap Input and Output

Input

  • Type of View: Document
  • Number of Views: Min: 1; Max: 1
  • Examples of Upstream Snaps: Mapper Snap, Copy Snap
  • Description: Metadata from an upstream Kafka Consumer Snap. The input data schema is as follows:

Code Block
{
  "metadata": {
    "topic": "xyz",
    "partition": 2,
    "offset": 523,
    "consumer_group": "CopyGroup1",
    "client_id": "17a9bbc7-da8f-45f8-813e-1ebca9b80383",
    "tracker_index": 0,
    "batch_size": 500,
    "batch_index": 1,
    "record_index": 23,
    "auto_commit": false
  }
}
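To illustrate the shape of this metadata, the following sketch (plain Python, not SnapLogic internals; the document shape and field names are taken from the schema above) extracts the fields an acknowledgement needs in order to identify the offset to commit:

```python
def extract_commit_info(document):
    """Return the (topic, partition, offset, consumer_group) tuple
    from a document's metadata. Illustrative sketch only."""
    meta = document["metadata"]
    return (meta["topic"], meta["partition"], meta["offset"], meta["consumer_group"])

# A document mirroring the input schema above (abbreviated).
doc = {
    "metadata": {
        "topic": "xyz",
        "partition": 2,
        "offset": 523,
        "consumer_group": "CopyGroup1",
        "auto_commit": False,
    }
}

print(extract_commit_info(doc))  # ('xyz', 2, 523, 'CopyGroup1')
```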


Output

  • Type of View: Document
  • Number of Views: Min: 0; Max: 1
  • Examples of Downstream Snaps: Mapper Snap, JSON Formatter
  • Description: Kafka messages that have been processed and acknowledged. If the Auto commit field is set to false in the input document and a notification is sent successfully to the corresponding Consumer Snap, the output is similar to the following:

Code Block
{
  "status": "success",
  "original": {
    "metadata": {
      "consumer_group": "abc",
      "topic": "xyz",
      "partition": 123,
      "offset": 456,
      "auto_commit": false
    }
  }
}

If the Auto commit field is set to true in the input document, the output is similar to the following:

Code Block
{
  "status": "Auto-commit is on",
  "original": {
    "metadata": {
      "consumer_group": "abc",
      "topic": "xyz",
      "partition": 123,
      "offset": 456,
      "auto_commit": true
    }
  }
}

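The two output shapes can be summarized as a single branching rule: if auto_commit is false, the Snap signals a commit and reports success; otherwise it only reports that auto-commit is on. A minimal illustrative sketch (plain Python; the real Snap also notifies the upstream Consumer Snap, which is omitted here):

```python
def acknowledge(document):
    """Build the Acknowledge Snap's output document for one input document.
    Sketch based on the output schemas shown in this article; not the
    Snap's actual implementation."""
    auto_commit = document["metadata"]["auto_commit"]
    status = "success" if not auto_commit else "Auto-commit is on"
    return {"status": status, "original": document}

doc = {"metadata": {"consumer_group": "abc", "topic": "xyz",
                    "partition": 123, "offset": 456, "auto_commit": False}}
print(acknowledge(doc)["status"])  # success
```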

Snap Settings

Label

  • Data Type: String
  • Description: Specify a unique name for the Snap.
  • Default Value: Kafka Acknowledge
  • Example: Kafka_Acknowledge

Metadata path

  • Data Type: String
  • Description: Required. Specify the JSON path of the metadata within each input document.
  • Default Value: metadata
  • Example: $metadata

Snap Execution

  • Data Type: Drop-down list
  • Description: Select one of the following modes in which the Snap executes:
      • Validate & Execute: Performs limited execution of the Snap and generates a data preview during Pipeline validation, then performs full execution of the Snap (unlimited records) during Pipeline runtime.
      • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.
      • Disabled: Disables the Snap and all Snaps downstream from it.
  • Default Value: Validate & Execute
  • Example: Validate & Execute
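The Metadata path value (for example, $metadata) behaves like a simple JSON path into each input document. The following resolver is a hypothetical sketch for illustration only; SnapLogic's JSON-path support is richer than this:

```python
def resolve_path(document, path):
    """Resolve a simple dotted JSON path such as '$metadata' or '$a.b'.
    Hypothetical helper, not SnapLogic code."""
    node = document
    for key in path.lstrip("$").lstrip(".").split("."):
        if key:  # skip empty segments
            node = node[key]
    return node

doc = {"metadata": {"topic": "xyz", "offset": 523}}
print(resolve_path(doc, "$metadata")["offset"])  # 523
```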

Troubleshooting

None.

Example

Acknowledging Messages

This example Pipeline demonstrates how to use the:

  • Kafka Producer Snap to produce and send messages to a Kafka topic,
  • Kafka Consumer Snap to read messages from a topic, and
  • Kafka Acknowledge Snap to acknowledge the number of messages read (message count).

First, we use the Sequence Snap to enable the Pipeline to send a large number of documents. We set the Initial value to 1 and the Number of documents to 2500, so the Snap sends documents numbered 1 through 2500.

We configure the Kafka Producer Snap to send the documents to the Topic named SampleKafkaTopic and we set the Partition number to 0 to let the broker decide which partition to use.

We configure the Kafka Consumer Snap to read the messages from the Topic named SampleKafkaTopic and we set the Partition number to 0. The message count is set to 100, which means the Snap consumes 100 messages and sends those messages to the output view.

On successful execution of the Pipeline, we can view the consumed and acknowledged messages in the Pipeline Execution statistics. Because the Message count is set to 100 in the Consumer Snap, the Acknowledge Snap acknowledges the same count.
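The flow of this example can be sketched as a small simulation (plain Python; the topic name and counts mirror the example above, but the Snaps themselves are stand-ins, not SnapLogic code):

```python
# Stand-in simulation: Sequence -> Producer -> Consumer (Message count 100)
# -> Acknowledge.

topic = []  # in-memory stand-in for the SampleKafkaTopic topic

# Sequence + Producer: send documents numbered 1 through 2500.
for n in range(1, 2501):
    topic.append({"value": n})

# Consumer: read only the first 100 messages (Message count = 100) and
# attach the metadata the Acknowledge Snap needs.
consumed = [
    {"metadata": {"topic": "SampleKafkaTopic", "partition": 0,
                  "offset": i, "auto_commit": False}}
    for i, _ in enumerate(topic[:100])
]

# Acknowledge: one output document per consumed message.
acknowledged = [{"status": "success", "original": doc} for doc in consumed]
print(len(acknowledged))  # 100
```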



See Also

  • Confluent Kafka Snap Pack