Kafka Snap Pack
This page is no longer maintained (Nov 12, 2025). For the most current information, go to https://docs.snaplogic.com/snaps/snaps-data/sp-kafka/sp-kafka-about.html.
Overview
You can use this Snap Pack to produce and consume messages with Apache Kafka and Confluent Kafka.
Supported Versions
The Kafka Snap Pack uses the following client libraries:
- Apache Kafka client, version 3.9.1
- Confluent Kafka client, version 7.9.2
Articles in this Section
- Kafka Consumer
- Kafka Producer
- Configuring Kafka Accounts
- Kafka Account
- Kafka Kerberos Account
- Kafka MSK IAM Account
- Azure Event Hubs configuration in Azure portal for Kafka SSL Account
- Application Configuration in Azure Portal for Kafka OAuth2 Account
- Kafka OAuth2 Account
- Authenticating to Confluent Cloud using OAuth/OIDC with Auth0 as Identity Provider
- Kafka SSL Account
- Kafka Acknowledge
Temporary Files
During execution, Snaps process data on Snaplex nodes primarily in memory, as unencrypted streams. When a dataset exceeds the available memory, the Snap writes Pipeline data to local storage, also unencrypted, to maintain performance. These temporary files are deleted when the Snap or Pipeline execution completes. You can configure the location of this temporary data in the Global properties table of the Snaplex's node properties, which can also help you avoid Pipeline failures caused by insufficient disk space. For more information, see Temporary Folder in Configuration Options.
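The spill-to-disk behavior described above can be sketched generically. This is not SnapLogic's actual implementation; it is a minimal Python illustration of the pattern: buffer records in memory up to a limit, spill to an unencrypted temporary file in a configurable directory (analogous to the Snaplex Temporary Folder setting) when the limit is exceeded, and delete the file when processing completes. The function name, memory limit, and record format are illustrative assumptions.

```python
import os
import tempfile

def process_with_spill(records, memory_limit_bytes, temp_dir=None):
    """Illustrative only: buffer records in memory; once the buffer exceeds
    memory_limit_bytes, spill everything to a temp file in temp_dir
    (roughly analogous to the configurable Snaplex Temporary Folder)."""
    buffer = []
    buffered = 0
    spill_file = None
    try:
        for rec in records:
            data = rec.encode("utf-8")
            if spill_file is None:
                buffer.append(data)
                buffered += len(data)
                if buffered > memory_limit_bytes:
                    # Dataset exceeds available memory: spill the buffered
                    # data to local storage (unencrypted, like the doc notes).
                    spill_file = tempfile.NamedTemporaryFile(
                        mode="wb", dir=temp_dir, delete=False, suffix=".spill")
                    for chunk in buffer:
                        spill_file.write(chunk + b"\n")
                    buffer.clear()
            else:
                spill_file.write(data + b"\n")
        if spill_file is not None:
            spill_file.close()
            with open(spill_file.name, "rb") as f:
                return [line.decode("utf-8") for line in f.read().splitlines()]
        return [chunk.decode("utf-8") for chunk in buffer]
    finally:
        # Temporary files are deleted when execution completes.
        if spill_file is not None:
            os.unlink(spill_file.name)
```

If `temp_dir` points at a volume without enough free space, the spill itself fails, which mirrors why the documentation recommends configuring the temporary data location to avoid out-of-space Pipeline errors.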
© 2017-2025 SnapLogic, Inc.