Overview
You can use the Kafka Acknowledge Snap to notify the Kafka Consumer Snap to commit an offset at the specified metadata in each input document.
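To illustrate the commit semantics behind this Snap, here is a minimal Python sketch (not SnapLogic code; the metadata field names are illustrative assumptions, not the Snap's actual schema). In Kafka, the offset committed for a topic-partition is conventionally the highest acknowledged offset plus one:

```python
def offsets_to_commit(docs):
    """Given input documents carrying Kafka offset metadata
    (illustrative field names, not SnapLogic's actual schema),
    return the offset to commit per (topic, partition):
    the highest acknowledged offset + 1, per Kafka convention."""
    commits = {}
    for doc in docs:
        meta = doc["metadata"]
        key = (meta["topic"], meta["partition"])
        next_offset = meta["offset"] + 1
        commits[key] = max(commits.get(key, 0), next_offset)
    return commits

docs = [
    {"metadata": {"topic": "SampleKafkaTopic", "partition": 0, "offset": 41}},
    {"metadata": {"topic": "SampleKafkaTopic", "partition": 0, "offset": 42}},
]
print(offsets_to_commit(docs))  # {('SampleKafkaTopic', 0): 43}
```

Committing offset 43 here means that, after a restart, the consumer resumes from the first unacknowledged message.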
Prerequisites
- A Confluent Kafka server with a valid account.
- An upstream Snap (for example, a Kafka Consumer Snap) that sends metadata to the Kafka Acknowledge Snap in the Pipeline; the Kafka Acknowledge Snap must receive this metadata in its input documents.
Support for Ultra Pipelines
Works in Ultra Pipelines.
Limitations
None.
Known Issues
None.
Snap Input and Output

Input/Output | Type of View | Number of Views | Examples of Upstream and Downstream Snaps | Description
---|---|---|---|---
Input | Document | | | Metadata from an upstream Kafka Consumer Snap. The input data schema is as follows:
Output | Document | | | Processed and acknowledged Kafka messages. If the Auto commit field is set to true in the input document, the output schema is similar to the following:
...
Parameter Name | Data Type | Description | Default Value | Example
---|---|---|---|---
Label | String | | Kafka Acknowledge | Kafka_Acknowledge
Metadata path | String | Required. Specify the JSON path of the metadata within each input document. | metadata | $metadata
Snap Execution | Drop-down list | Select one of the following three modes in which the Snap executes: | Validate & Execute | Validate & Execute
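The Metadata path setting accepts a JSON path such as `$metadata`. As a rough sketch of how such a path selects a value inside a document, here is a minimal Python resolver for simple dotted `$`-paths (an illustration only; SnapLogic's expression language supports far more than this):

```python
def resolve_path(doc, path):
    """Resolve a simple dotted JSON path such as '$metadata' or
    '$metadata.offset' against a document.  Minimal sketch:
    strips the leading '$' and walks the dict key by key."""
    node = doc
    for part in path.lstrip("$").lstrip(".").split("."):
        if part:
            node = node[part]
    return node

doc = {"metadata": {"topic": "SampleKafkaTopic", "offset": 7}}
print(resolve_path(doc, "$metadata.offset"))  # 7
```

With Metadata path set to `$metadata`, the Snap would read the entire `metadata` object from each input document.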
...
First, we use the Sequence Snap to enable the Pipeline to send a large number of documents. We configure the Sequence Snap to generate 2500 documents numbered from 1 through 2500; hence, we set the Initial value to 1 and the Number of documents to 2500.
We configure the Kafka Producer Snap to send the documents to the topic named SampleKafkaTopic, and we set the Partition number to 0 so that all messages are written to partition 0.
We configure the Kafka Consumer Snap to read messages from the topic named SampleKafkaTopic, and we set the Partition number to 0. We set the Message count to 100, so the Snap consumes 100 messages and sends them to its output view.
On successful execution of the Pipeline, we can view the consumed and acknowledged messages in the Pipeline Execution statistics as shown below. Because the Message count is set to 100 in the Consumer Snap, the Acknowledge Snap acknowledges the same number of messages.
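The flow above can be summarized with a toy simulation (plain Python, not SnapLogic code; the function and parameter names are illustrative). The Sequence step generates 2500 numbered documents, the Consumer reads only up to its Message count, and the Acknowledge step acknowledges every consumed message, which is why the two counts match in the execution statistics:

```python
def run_pipeline_sim(total_docs=2500, message_count=100):
    """Toy simulation of the example pipeline:
    Sequence generates numbered documents, the consumer reads up to
    message_count of them, and the acknowledge step acknowledges
    every consumed message, so both counts match."""
    produced = list(range(1, total_docs + 1))   # Sequence: documents 1..2500
    consumed = produced[:message_count]         # Consumer: Message count = 100
    acknowledged = len(consumed)                # Acknowledge: one ack per message
    return len(produced), len(consumed), acknowledged

print(run_pipeline_sim())  # (2500, 100, 100)
```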
...
...
See Also
- Apache Kafka
- Confluent - Schema Registry
- Getting Started with SnapLogic
- Snap Support for Ultra Pipelines
- SnapLogic Product Glossary