SnapLogic Project and Enterprise platform package nodes can be configured in the following sizes:
Medium: 2 vCPU and 8GB RAM
Large: 4 vCPU and 16GB RAM
X-Large: 8 vCPU and 32GB RAM
2X-Large: 16 vCPU and 64GB RAM
We recommend two nodes for high availability. For requirements about clustering nodes, see Node Cluster.
All nodes within a Snaplex must be of the same size.
Depending on the size, number, and nature of pipelines, more RAM is required to maintain an acceptable level of performance.
Minimum: 2 cores
All Snaps execute in parallel in their own threads -- the more cores available to the Snaplex, the better the system performs.
Local disk is required for logging and for any Snap that uses local disk for temporary storage (for example, Sort and Join Snaps). For details, see Temporary Folder.
SnapLogic does not in any way restrict the disk size of your Groundplex nodes.
Memory (RAM) is used by the Pipelines to execute. Some Snaps, like Sort Snaps, which accumulate many documents, consume more memory; the amount of memory used is influenced by the volume and size of the documents being processed. For an optimum sizing analysis based on your requirements, contact your SnapLogic Sales Engineer.
The SnapLogic on-premises Snaplex is supported on the following operating systems:
CentOS (or Red Hat) Linux 6.3 or newer.
Debian and Ubuntu Linux.
Windows Server 2012/2016/2019 with a minimum of 8GB RAM.
SnapLogic is sunsetting support for Windows Server 2012. Ensure that you upgrade your Groundplex instances to Windows Server 2016 or 2019.
For improved security, the Groundplex machine timestamp is verified to check if it is in sync with the timestamp on the SnapLogic cloud. Running a time service on the Groundplex node will ensure that the timestamp is always kept in sync.
Large clock skew can also affect communication between the FeedMaster and JCC nodes. The Date.now() expression language function might return different values on different Snaplex nodes, and internal log messages might have skewed timestamps, making it more difficult to debug issues.
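To illustrate the kind of drift a time service prevents, the following minimal Python sketch computes clock skew from the Date header of any HTTPS response. The fixed timestamps are illustrative; in practice you would read the header from a live response to a trusted endpoint:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def clock_skew_seconds(http_date_header, local_now=None):
    """Return local-minus-remote clock skew in seconds.

    `http_date_header` is a Date header value from an HTTP response,
    e.g. "Wed, 01 Jan 2025 00:00:00 GMT".
    """
    remote = parsedate_to_datetime(http_date_header)
    if local_now is None:
        local_now = datetime.now(timezone.utc)
    return (local_now - remote).total_seconds()

# Illustrative values: the local clock is 90 seconds ahead of the server.
skew = clock_skew_seconds(
    "Wed, 01 Jan 2025 00:00:00 GMT",
    local_now=datetime(2025, 1, 1, 0, 1, 30, tzinfo=timezone.utc),
)
```

A time service such as NTP or chrony keeps this value near zero automatically, which is why running one on each Groundplex node is recommended.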
Network Throughput Guidelines
When running, a Groundplex requires connectivity to the SnapLogic Integration Cloud, as well as to the cloud applications used in the pipelines it runs. To optimize performance, we recommend the following network throughput guidelines:
Network In: minimum 10 MB/sec, recommended 15 MB/sec or more; actual needs depend on usage.
Network Out: minimum 5 MB/sec, recommended 10 MB/sec or more; actual needs depend on usage.
Network Firewall Requirements
On-premises Snaplex (Groundplex)
To communicate with the SnapLogic Integration Cloud, the SnapLogic on-premises Snaplex uses a combination of HTTPS requests and WebSockets communication over a TLS (SSL) tunnel. For this combination to operate effectively, the firewall must be configured to meet the following network communication requirements:
HTTPS outbound on port 443: if blocked, the Snaplex does not function.
HTTP HEAD support: without HEAD support, a full GET requires more time and bandwidth, resulting in slower data transfer.
WebSockets (WSS protocol): if blocked, the Snaplex does not function.
Snaps using an HTTP client without proxy support: if outbound traffic must pass through a proxy, use Snaps that support proxy settings.
Snaplex neighbor port (https://hostname:8081): needs to be available for communication between the nodes in a Snaplex; if blocked, nodes report errors such as "Unable to reach Snaplex neighbor".
The nodes of a Snaplex need to communicate among themselves, so it is important that they can each resolve each other's hostnames. This is required when you are making local calls into the Snaplex nodes for the execution of Pipelines rather than going through the SnapLogic Platform. The Pipelines are load balanced by SnapLogic with Tasks passed to the target node.
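As a quick sanity check that each node can resolve its peers, you might run a small script like the following Python sketch on each node. The peer hostnames shown are placeholders; substitute your actual Snaplex node hostnames:

```python
import socket

def resolvable(hostname):
    """True if this node can resolve `hostname` to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Placeholder hostnames; replace with the other nodes in your Snaplex.
peers = ["node1.example.com", "node2.example.com"]
unresolvable = [h for h in peers if not resolvable(h)]
```

Any hostname left in `unresolvable` needs a DNS entry (or a hosts-file entry) before node-to-node pipeline execution will work reliably.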
Communication between the customer-managed Groundplex and the SnapLogic-managed S3 bucket is over HTTPS with TLS enforced by default. The AWS-provided S3 URL also uses an HTTPS connection with TLS enforced by default. If direct access from the Groundplex to the SnapLogic AWS S3 bucket is blocked, the connection falls back to routing through the SnapLogic Control Plane, which still uses TLS 1.2.
In the SnapLogic Platform, the Snaps actually communicate to and from the applications. The protocols and ports required for application communication are mostly determined by the endpoint applications themselves, and not by SnapLogic. It is common for cloud/SaaS applications to communicate using HTTPS, although older applications and non-cloud/SaaS applications might have their own requirements.
Each of these application connections may allow the use of a proxy for the network connection, but that is a configuration option of the application's connection, not one applied by SnapLogic.
The FeedMaster listens on the following two ports:
8084—The FeedMaster's HTTPS port. Requests for pipelines are sent here, as well as some internal requests from the other Groundplex nodes.
8089—The FeedMaster's embedded ActiveMQ broker SSL port. The other Groundplex nodes connect to this port to send and receive messages.
The machine hosting the FeedMaster needs to have those ports open on the local firewall, and the other Groundplex nodes need to allow outbound requests to the FeedMaster on those ports.
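A simple way to verify from another Groundplex node that the FeedMaster ports are reachable is a plain TCP connect check, sketched below in Python. The hostname is a placeholder for your FeedMaster node:

```python
import socket

def port_open(host, port, timeout=3.0):
    """True if a TCP connection to (host, port) succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hostname; run this from another Groundplex node.
# for port in (8084, 8089):
#     state = "reachable" if port_open("fm-node1.example.com", port) else "blocked"
#     print(port, state)
```

If either port reports blocked, check the local firewall on the FeedMaster machine and the outbound rules on the probing node.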
Load Balancer Guidelines
A load balancer facilitates the efficient distribution of network or application traffic between client devices and backend servers. In the SnapLogic environment, a load balancer is for incoming requests to the Snaplex from client applications. This purpose differs from that of an HTTP proxy, which might be required for outbound requests from the Snaplex to the Control Plane or other endpoints. Typically, the HTTP proxy is required when Groundplex nodes are on client servers with a restricted network configuration.
Use Cases for a Load Balancer
You should provision a load balancer for a Snaplex when external client API calls are sent directly to the Snaplex nodes. We recommend a load balancer in the following use cases:
Snaplex-triggered Pipeline executions—Since the Control Plane triggering mechanism imposes additional Org level API limits, we recommend using the Snaplex triggering mechanism for high-volume API usage.
REST requests to Ultra Task Pipelines—For direct API calls to the Snaplex, the requests must pass through a load balancer. Without a load balancer, requests fail when a node goes offline during Snaplex maintenance or upgrades.
API Policies—To apply API policies to APIs or Triggered and Ultra Tasks on a Cloudplex, you must have a load balancer, which SnapLogic provisions.
You can configure the load balancer to run health checks on the nodes, which ensures that a node going offline for maintenance does not receive any new requests.
After configuring the load balancer, you must add the load balancer URL to the Snaplex properties.
Use the following guidelines when configuring the load balancer fields in the Snaplex properties:
If the load balancer points to FeedMaster nodes, configure only the Ultra Task load balancer field; this also enables load balancing for Triggered Task requests.
If the load balancer points to worker (JCC) nodes, configure only the Load balancer field in the Snaplex properties.
Use Cases where a Load Balancer is Not Required
Load balancers are not required for the following types of activities:
Pipeline executions triggered through the Control Plane.
Scheduled Pipelines, Pipeline/account validation, and Pipeline development.
Headless Ultra (since the Ultra Task processing is not driven by REST API calls).
Child Pipeline executions triggered through the Pipeline Execute Snap.
Cloudplex Load Balancer
On a Cloudplex, SnapLogic provisions the load balancer, typically only when the Ultra Task subscription feature is enabled. The Cloudplex load balancer has a snaplogic.io domain endpoint that points to the FeedMaster nodes. A load balancer can be provisioned for both Ultra and Snaplex-triggered executions.
As an Org admin, you must update the Snaplex settings with the load balancer URL after the load balancer is provisioned.
Groundplex Load Balancer Best Practices
For Snaplex instances with FeedMaster nodes, the load balancer should point to the FeedMaster nodes, for example https://fm-node1.example.com:8084 and https://fm-node2.example.com:8084. A FeedMaster node can process both Triggered and Ultra Task requests, whereas a JCC node can process only Triggered Task requests, so it is easier to use the FeedMaster nodes as the load balancer endpoint. Update the Ultra Task load balancer field in the Snaplex settings with the load balancer URL.
You should configure the load balancer to run health checks on the Snaplex nodes against the /healthz URL. A response code of 200 indicates a healthy node; any other response code indicates a health check failure.
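As a sketch of that health-check logic, the following Python snippet probes a node's /healthz endpoint and treats only HTTP 200 as healthy. The node URL is a placeholder:

```python
import urllib.request

def status_is_healthy(status_code):
    """A Snaplex node health check succeeds only on HTTP 200."""
    return status_code == 200

def node_is_healthy(base_url, timeout=5.0):
    """Probe /healthz on the node; any error or non-200 status is unhealthy."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + "/healthz",
                                    timeout=timeout) as resp:
            return status_is_healthy(resp.status)
    except OSError:  # covers URLError, HTTPError, timeouts, refused connections
        return False

# Placeholder URL; point at an actual Snaplex node:
# print(node_is_healthy("https://jcc-node1.example.com:8081"))
```

A load balancer's built-in health checks implement the same rule; the snippet is only meant to make the 200-or-fail semantics concrete.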
The load balancer should perform SSL offloading/termination so that certificate and cipher management can be done on the load balancer without updating the Snaplex nodes. The connection between the client and the load balancer is over HTTPS with your signed certificate. The connection between the load balancer and the Snaplex nodes is also over HTTPS, using the default SnapLogic-generated certificate.
You must set the HTTP request timeout to 900 seconds or higher to allow for long-running requests. This timeout setting is different from the keep-alive timeouts used for connection management, such as the following examples:
The proxy_read_timeout for Nginx.
The ProxyTimeout for Apache.
The idle timeout for AWS ELB.
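For example, assuming Nginx fronts the Snaplex nodes, the 900-second request timeout might be expressed as follows (the upstream name is illustrative, not a SnapLogic-defined value):

```nginx
# Illustrative fragment; "snaplex_nodes" is a placeholder upstream.
location / {
    proxy_pass          https://snaplex_nodes;
    proxy_read_timeout  900s;   # allow long-running pipeline requests
}
```

Equivalent settings for Apache (ProxyTimeout) or an AWS ELB (idle timeout) should likewise be raised to at least 900 seconds.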
The following image from the AWS UI shows a sample health check configuration for the AWS ELB.
Changing Default Ports
If you change the default ports of the JCC and FeedMaster nodes in your Groundplex (8081 for the JCC and 8084 for the FeedMaster), then you must reconfigure your load balancer to use the new port assignments.