Overview
There are several factors to consider when deploying a self-managed Snaplex (Groundplex). Most of these considerations stem from IT requirements in your computing environment, but some also depend on your production workload and the types of Pipelines you plan to run in production.
The term Groundplex is used throughout this document to refer specifically to your self-managed Snaplex.
Computing Requirements
The Groundplex must conform to the following minimum specifications:
Resource | Minimum/Recommended | Notes |
---|---|---|
Nodes | Minimum: 1; Recommended: 2 or more | SnapLogic Project and Enterprise platform package nodes can be configured in multiple sizes. We recommend two nodes for high availability. For requirements about clustering nodes, refer to Node Cluster. All nodes within a Snaplex must be of the same size. |
RAM | Minimum: 8 GB | Depending on the size, number, and design of Pipelines, more RAM may be required to maintain an acceptable level of performance. |
CPU | Minimum: 2 cores | All Snaps execute in parallel in their own threads: the more cores available to the Snaplex, the more performant the system. |
Disk | Minimum: 80 GB; Recommended: 100 GB | |
Pipelines consume memory (RAM) as they execute. Some Snaps, such as Sort Snaps, accumulate many documents and therefore consume more memory; the amount of memory used is influenced by the volume and size of the documents being processed. For an optimal sizing analysis based on your requirements, contact your SnapLogic Sales Engineer.
Supported Operating Systems
Groundplexes support the following operating systems:
CentOS (or Red Hat) Linux 6.3 or newer.
Debian and Ubuntu Linux.
Windows Server 2016/2019/2022 with a minimum of 8 GB RAM.
You can also deploy a Groundplex:
On a Docker container
In a Kubernetes Environment
For improved security, the Groundplex machine's timestamp is verified against the timestamp on the SnapLogic Cloud. Running a time service on each Groundplex node ensures that the timestamps stay synchronized.
For Linux, refer to Basic NTP Configuration for more details on setting up an NTP server.
For Windows, refer to Windows Time Service Technical Reference for more information.
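As an illustrative sanity check, clock skew is simply the difference between the local epoch time and a reference epoch. The bash sketch below (GNU coreutils assumed, helper name ours) computes that difference; in practice the reference epoch would come from your NTP source or the control plane rather than being passed in by hand:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report clock skew, in seconds, between this
# node and a reference epoch timestamp (e.g., obtained from an NTP
# server). Positive output means the local clock is ahead.
skew_seconds() {
  local reference_epoch=$1
  local local_epoch
  local_epoch=$(date +%s)
  echo $(( local_epoch - reference_epoch ))
}

# Illustration only: a reference taken 5 seconds in the past.
skew_seconds "$(( $(date +%s) - 5 ))"
```

A persistent non-zero result against a trusted reference is a sign the node's time service needs attention.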
A large clock skew can also affect communication between the FeedMaster and the JCC nodes. The value returned by the Date.now() expression language function might differ between Snaplex nodes, and internal log messages might have skewed timestamps, making issues more difficult to debug.
We apply security updates to the Snaplex via auto-update, except for the base monitor process. To update the Monitor, install the latest RPM or Docker image directly.
CPU Architecture
We support x86 architecture and do not support ARM.
Network Guidelines and Requirements
The following network guidelines and requirements apply to Groundplex deployments:
Network throughput
Network firewall
Network Throughput Guidelines
Groundplexes require connectivity to the SnapLogic Integration Cloud, as well as to the cloud applications used in your Tasks and Pipelines.
To optimize performance, we recommend the following network throughput guidelines:
Direction | Minimum | Recommended |
---|---|---|
Network In | 10 MB/second | 15 MB/second or more |
Network Out | 5 MB/second | 10 MB/second or more |
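As a rough planning aid, these throughput figures translate directly into transfer times. The bash sketch below (illustrative only; integer arithmetic, so treat results as order-of-magnitude estimates) shows the conversion:

```shell
# Illustrative sizing helper: seconds needed to move a data volume
# at a sustained transfer rate. Integer division only, so use the
# result as a rough planning figure, not a benchmark.
transfer_seconds() {
  local size_mb=$1 rate_mb_per_sec=$2
  echo $(( size_mb / rate_mb_per_sec ))
}

# A 600 MB batch at the 10 MB/second inbound minimum:
transfer_seconds 600 10   # prints 60
```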
Network Firewall Requirements
To communicate with the SnapLogic Control Plane, Groundplexes use a combination of HTTP/HTTPS requests and WebSockets communication over the TLS (SSL) tunnel. For this combination to operate effectively, you must configure the firewall to allow the following network communication requirements:
Component | Required | Consequence |
---|---|---|
HTTP outbound, port 443 | Yes | The Snaplex does not function without it |
HTTP HEAD | Desired | Without HEAD support, a full GET request requires more time and bandwidth |
WebSockets (WSS protocol) | Yes | The Snaplex does not function without it |
JCC node: 8090 (HTTP), 8081 (HTTPS) | Yes | Must be available for communication between the nodes in a Snaplex |
FeedMaster: 8090 (HTTP), 8084 (HTTPS), 8089 (message queue) | Yes | Must be available for communication between the nodes in a Snaplex |
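When verifying these firewall rules, it can help to probe outbound connectivity from the Groundplex host directly. The bash sketch below uses the shell's /dev/tcp feature with a timeout; the helper name is ours and the probed host is illustrative, so substitute your own control plane and node addresses:

```shell
# Quick firewall probe: attempt a TCP connection to host:port with a
# 3-second timeout. Requires bash (for /dev/tcp) and coreutils timeout.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} blocked"
  fi
}

# Example probe (host name illustrative; also try your JCC and
# FeedMaster node ports from the table above):
check_port snaplogic.com 443
```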
The nodes of a Snaplex need to communicate among themselves, so each node must be able to resolve the other nodes' host names. This requirement is crucial when you make local calls into the Snaplex nodes to execute Pipelines instead of initiating them through the SnapLogic Platform. SnapLogic load-balances Pipelines, with Tasks passed to the target node.
Communication between the customer-managed Groundplex and the SnapLogic-managed S3 bucket is over HTTPS, with TLS enforced by default. The AWS-provided S3 URL also uses an HTTPS connection, with TLS enforced by default. If direct access from the Groundplex to the SnapLogic AWS S3 bucket is blocked, the connection falls back to routing through the SnapLogic Control Plane, which still uses TLS 1.2.
To successfully implement a Zero Trust policy in any environment, allow the following URLs:
snaplogic.com is required for all users.
snaplogic-prod-sldb.s3.amazonaws.com and s3.amazonaws.com access is required for file operations that use the sldb protocol (for example, FileReader/FileWriter Snaps configured to use the sldb protocol).
Learn more about Snaplex network setup.
Network Guidelines for Snap Usage
In the SnapLogic Platform, Snaps communicate to and from application endpoints. The protocols and ports required for this communication are determined mostly by the endpoints themselves, not by SnapLogic. Cloud and SaaS applications commonly communicate using HTTPS, although older, non-cloud applications might have their own requirements.
For example, the following table shows some of these requirements:
Application | Protocol | Default Port |
---|---|---|
Netezza | TCP | 5480 |
Oracle | TCP | 1521 |
RedShift | TCP | 5439 |
Salesforce | HTTPS | 443 |
Each of these application connections might allow the use of a proxy for the network connection, but it is a configuration option of the application’s connection—not one applied by SnapLogic.
FeedMaster Node Ports
For Ultra Pipelines, the FeedMaster node listens on the following two ports:
8084—The FeedMaster's HTTPS port. Requests for the Pipelines are sent here in addition to some internal requests from the other Groundplex nodes.
8089—The FeedMaster's embedded ActiveMQ broker SSL port. The other Groundplex nodes connect to this port to send and receive messages.
The machine hosting the FeedMaster nodes needs to have those ports open on the local firewall, and the other Groundplex nodes need to allow outbound requests to the FeedMaster nodes on those ports.
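As one way to open these ports on the FeedMaster host's local firewall, the commands below could be used on a RHEL-family system; this is a sketch that assumes firewalld is the active firewall, so adapt it to whatever firewall your environment actually runs:

```shell
# Illustrative only: open the FeedMaster ports with firewalld
# (assumes a RHEL/CentOS host where firewalld manages the firewall).
sudo firewall-cmd --permanent --add-port=8084/tcp   # FeedMaster HTTPS
sudo firewall-cmd --permanent --add-port=8089/tcp   # ActiveMQ broker SSL
sudo firewall-cmd --reload
```

The corresponding outbound rules on the other Groundplex nodes depend on your firewall product and network design.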
Groundplex Name and Associated Nodes
Every Snaplex requires a name, for example, ground-dev or ground-prod. In the SnapLogic Designer, you can choose the Snaplex where Pipelines are executed.
Your nodes are associated with a Groundplex through the Environment variable, for example, dev or prod. When you configure the nodes for your Groundplex, you must set jcc.environment to the Environment value that you provided in the Create Snaplex dialog. You can change this variable in the Update Snaplex dialog.
Per DNS standards, the host name of a system used by a Groundplex cannot contain an underscore (_). Avoid other special characters as well.
After the Snaplex service is started on a node, the service connects to the SnapLogic Cloud service. Runtime logs from the Snaplex are written to the following folder:
Linux:
/opt/snaplogic/run/log
Windows:
c:\opt\snaplogic\run\log
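When troubleshooting, a small helper like the bash sketch below can surface the most recently written file in that log directory; the default path comes from this section, and the helper name is ours (no specific log file names are assumed):

```shell
# Print the most recently modified file name in a log directory.
# Defaults to the Linux Snaplex log path from this section.
latest_log() {
  local dir=${1:-/opt/snaplogic/run/log}
  ls -t "$dir" 2>/dev/null | head -n 1
}

latest_log            # newest runtime log on a Linux node
latest_log /var/log   # or any other directory
```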
The Dashboard shows the currently connected nodes for each Snaplex.
Understanding the Distribution of Data Processing across Snaplex Nodes
When a Pipeline or Task is executed, the work is assigned to one of the JCC nodes in the Snaplex. Pipelines are scheduled across the nodes using a least-loaded algorithm that prioritizes memory usage. If nodes are similarly loaded, the algorithm distributes Pipeline executions across them at random.
To ensure that requests are being shared across JCC nodes, we recommend that you set up a load balancer to distribute the work across JCC nodes in the Snaplex.
Node Cluster
Starting multiple nodes with the JCC service pointing to the same Snaplex configuration automatically forms a node cluster, provided the nodes meet the following requirements:
The nodes need to communicate with each other on the following ports: 8081, 8084, and 8090.
The nodes should have a reliable, low-latency network connection between them.
The nodes should be homogeneous: they should have the same CPU and memory configurations, as well as access to the same network endpoints.
All JCC nodes should be the same size. All FeedMaster nodes should be the same size for load balancing. Worker and FeedMaster nodes can be of different sizes.
Node Diagnostics
Snaplex Diagnostics helps you verify your Snaplex host environment and troubleshoot issues. Each Snaplex node is a JCC instance running on a host, and the node diagnostic test highlights hardware and thread limit requirements. It checks minimum hardware requirements such as RAM and disk storage. The tests are described in the table below. For more information on how to view the node details panel, refer to https://docs.snaplogic.com/monitor/node-details-panel.html
Diagnostic test | Recommended value | Example of current value displayed in the diagnostic test |
---|---|---|
Nodes have insufficient swap space | If no maximum value is present, the system displays 50% of the configured RAM or 8 GB, whichever is greater. | If the value does not meet the recommendation, the current value is displayed in red. |
Max slots | If there is no minimum value, the recommended value is calculated as (configured RAM or 8 GB) * 2000 (max value), rounded to the nearest 500. | Example: if the maximum value is 3840, the current value displayed is 4000. |
Thread limit | Minimum value: 65000 | Displays the thread limit in red if the value is below the recommended value. Example: 4000 |
Max file descriptors | If there is no maximum value and the minimum value is 65000, the recommended value is 65000. | Example: 65535 |
Max JVM heap | Minimum value: 12 GiB; Recommended value: 12 GiB | Example: 12.44 GiB |
RAM configured | Minimum value: 4 GiB; Recommended value: 4 GiB | Example: 2.73% |
RAM available | Flagged when there is more than a 15-minute period per day where memory utilization is > 75%, or average memory utilization is > 60%. | Example: 2.78% |
Disk storage configured | Minimum value: 40 GiB; Recommended value: 100 GiB | Example: 39.98 GiB |
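The thread-limit and file-descriptor rows above can be checked from a shell before the diagnostics flag them. The bash sketch below compares a node's current ulimit values against the 65000 minimum named in the table; the helper name is ours, not SnapLogic's:

```shell
# Compare current process limits against the 65000 minimum that the
# Snaplex diagnostics expect for threads and file descriptors.
check_limit() {
  local name=$1 current=$2 minimum=$3
  if [ "$current" = "unlimited" ] || [ "$current" -ge "$minimum" ]; then
    echo "$name OK ($current)"
  else
    echo "$name LOW ($current < $minimum)"
  fi
}

check_limit "max user processes"    "$(ulimit -u)" 65000
check_limit "open file descriptors" "$(ulimit -n)" 65000
```

A LOW result usually means raising the limits in /etc/security/limits.conf (or the systemd unit) for the Snaplex service user.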
JCC Node Communication Requirements
Each JCC node publishes its IP addresses to the control plane. DNS is not required for communication between nodes. We recommend setting up all the nodes inside a Snaplex in the same network and data center. Communication between JCC nodes in the same Snaplex is required for the following reasons:
The Pipeline Execute Snap communicates directly with neighboring JCC nodes in a Snaplex to start child Pipeline executions and send documents between parent and child Pipelines.
The data displayed in Data Preview is written to and read from neighboring JCC nodes in the Snaplex.
The requests and responses made in Ultra Pipelines are exchanged between a FeedMaster node and all JCC nodes in a Snaplex.
A Ground Triggered Task (invoked from a Groundplex) can be executed on a neighboring JCC node because of load balancing; in that case, the Pipeline prepare request and the bodies of the request and response are transferred between nodes.
Any extra latency or network hops between neighboring JCC nodes can introduce performance and reliability problems.