
Groundplex Deployment Planning



Overview

There are several factors to consider when deploying a self-managed Snaplex (Groundplex). Most of these considerations stem from IT requirements in your computing environment, but some also depend on your production workload and the types of Pipelines you plan to run in production.

The term Groundplex is used throughout this document to refer specifically to your self-managed Snaplex.

Computing Requirements

The Groundplex must conform to the following minimum specifications:

Nodes (Min/Rec)

Minimum: 1

Recommended: 2 or more nodes

SnapLogic Project and Enterprise platform package nodes can be configured in the following sizes:

  • Medium: 2 vCPU and 8 GB RAM 

  • Large: 4 vCPU and 16 GB RAM 

  • X-Large: 8 vCPU and 32 GB RAM

  • 2 X-Large: 16 vCPU and 64 GB RAM

We recommend two nodes for high availability. For requirements about clustering nodes, refer to Node Cluster.

All nodes within a Snaplex must be of the same size.

RAM (Min)

Minimum: 8 GB

Depending on the size, number, and design of Pipelines, more RAM is required to maintain an acceptable level of performance.

CPU (Min)

Minimum: 2 cores

All Snaps execute in parallel in their own threads, so the more cores available to the Snaplex, the better the system performs.

Disk (Min/Rec)

Minimum: 80 GB

Recommended: 100 GB

  • The recommended disk space applies to both total and available space, assuming that the Groundplex nodes are not running other software.

  • The Snaplex installation uses two directories, SL_ROOT and java.io.tmpdir. Both require their own filesystems.

  • Local disk space is required for logging and for any Snap that uses the local disk for temporary storage (for example, Sort and Join Snaps). For details, refer to Temporary Folder.

  • SnapLogic does not have restrictions on the disk size of your Groundplex nodes.

Pipelines use memory (RAM) to execute. Some Snaps, such as Sort Snaps, accumulate many documents and consume more memory; the amount of memory used depends on the volume and size of the documents being processed. For an optimum sizing analysis based on your requirements, contact your SnapLogic Sales Engineer.
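If you want a quick, scriptable way to confirm that a Linux node meets these minimums, a sketch along the following lines can help. It assumes a Linux /proc/meminfo layout and the default /opt/snaplogic installation path; adjust both for your environment.

    # check_node_specs.py - quick sanity check of Groundplex node sizing (Linux).
    # Thresholds follow the minimums in this section; the SL_ROOT path is an assumption.
    import os
    import shutil

    MIN_CORES = 2       # CPU minimum
    MIN_RAM_GB = 8      # RAM minimum
    MIN_DISK_GB = 80    # disk minimum (100 GB recommended)

    cores = os.cpu_count() or 0

    # MemTotal is reported in kB on Linux.
    with open("/proc/meminfo") as f:
        mem_kb = next(int(line.split()[1]) for line in f if line.startswith("MemTotal"))
    ram_gb = mem_kb / (1024 * 1024)

    # Assumes the default SL_ROOT location; change if you installed elsewhere.
    sl_root = "/opt/snaplogic"
    disk_gb = shutil.disk_usage(sl_root).free / (1024 ** 3)

    print(f"cores={cores} (min {MIN_CORES}), ram={ram_gb:.1f} GB (min {MIN_RAM_GB}), "
          f"free disk at {sl_root}={disk_gb:.1f} GB (min {MIN_DISK_GB})")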

Supported Operating Systems

Groundplexes support the following operating systems:

  • CentOS (or Red Hat) Linux 6.3 or newer.

  • Debian and Ubuntu Linux.

  • Windows Server 2016/2019 with a minimum of 8 GB RAM.

You can also deploy a Groundplex:

  • On a Docker container

  • In a Kubernetes Environment

For improved security, the Groundplex machine's timestamp is verified to confirm that it is synchronized with the timestamp on the SnapLogic Cloud. Running a time service on the Groundplex node keeps the timestamp synchronized.

A large clock skew can also affect communication between the FeedMaster and the JCC nodes. The Date.now() expression language function might return different values on different Snaplex nodes, and internal log messages might have skewed timestamps, making it more difficult to debug issues.
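One way to spot-check skew from a node is to compare the local clock against the Date header returned by the control plane. The following sketch assumes elastic.snaplogic.com as the control plane address and treats a skew larger than 30 seconds (an arbitrary threshold) as worth investigating.

    # check_clock_skew.py - estimate clock skew between this node and the control plane.
    # The URL is an example; use the SnapLogic control plane address for your org/pod.
    import urllib.request
    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime

    URL = "https://elastic.snaplogic.com/"   # assumed control plane endpoint

    with urllib.request.urlopen(URL, timeout=10) as resp:
        server_time = parsedate_to_datetime(resp.headers["Date"])
    local_time = datetime.now(timezone.utc)

    skew = (local_time - server_time).total_seconds()
    print(f"approximate clock skew: {skew:+.1f} seconds")
    if abs(skew) > 30:   # arbitrary threshold for illustration
        print("warning: large clock skew; verify that a time service (for example, chrony or ntpd) is running")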

We apply security updates to the Snaplex via auto-update, except for the base monitor process. To update the Monitor, install the latest RPM or Docker image directly.

CPU Architecture

We support x86 architecture and do not support ARM.

Network Guidelines and Requirements

The following network guidelines and requirements apply to Groundplex deployments:

  • Network throughput

  • Network firewall

Network Throughput Guidelines

Groundplexes require connectivity to the SnapLogic Integration Cloud, as well as to the cloud applications used in your Tasks and Pipelines.

To optimize performance, we recommend the following network throughput guidelines:

Guideline | Minimum | Recommended
Network In | 10 MB/second | 15 MB/second or more
Network Out | 5 MB/second | 10 MB/second or more
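The simplest way to sanity-check these figures is to measure an actual transfer from the node. The sketch below performs a rough, single-stream inbound measurement; TEST_URL is a placeholder, and the result also depends on the remote endpoint and concurrent traffic.

    # measure_throughput.py - rough inbound throughput check from this node.
    # TEST_URL is a placeholder; point it at a reasonably large file you are allowed to download.
    import time
    import urllib.request

    TEST_URL = "https://example.com/100MB.bin"   # hypothetical test file

    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(TEST_URL, timeout=30) as resp:
        while True:
            chunk = resp.read(1024 * 1024)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start

    print(f"downloaded {total / 1e6:.1f} MB in {elapsed:.1f} s "
          f"= {total / 1e6 / elapsed:.1f} MB/second (guideline: 10 MB/second minimum inbound)")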

Network Firewall Requirements

To communicate with the SnapLogic Control Plane, Groundplexes use a combination of HTTP/HTTPS requests and WebSockets communication over a TLS (SSL) tunnel. For this combination to operate effectively, you must configure your firewall to allow the following network communication:

Component | Required | Consequence
HTTP outbound Port 443 | Yes | Does not function
HTTP HEAD | Desired | Without HEAD support, a full GET request requires more time and bandwidth
WebSockets (WSS protocol) | Yes | Does not function
JCC node: 8090 (HTTP), 8081 (HTTPS) | Yes | Unable to reach Snaplex neighbor - https://hostname:8081

The JCC node ports above, as well as the FeedMaster ports 8090 (HTTP), 8084 (HTTPS), and 8089 (Message queue), need to be available for communication between the nodes in a Snaplex. A simple reachability check for these ports is sketched after the notes below.

  • The nodes of a Snaplex need to communicate among themselves, so it is important that each node can resolve the other nodes' host names. This requirement is crucial when you make local calls into the Snaplex nodes to execute Pipelines instead of initiating them through the SnapLogic Platform. SnapLogic load-balances the Pipelines, passing Tasks to the target node.

  • Communication between the customer-managed Groundplex and the SnapLogic-managed S3 bucket is over HTTPS with TLS enforced by default. The AWS-provided S3 URL also uses an HTTPS connection, with TLS enforced by default. If direct access from the Groundplex to the SnapLogic AWS S3 bucket is blocked, the connection falls back to a route through the SnapLogic Control Plane that still uses TLS 1.2.

  • To successfully implement a Zero Trust policy in a Kubernetes environment, allow the SnapLogic domains and endpoints described in the Snaplex network setup documentation.

Learn more about Snaplex network setup.
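To confirm that the firewall rules above are in place, a basic reachability check can be run from a Groundplex node before installation. The hostnames in the following sketch are placeholders; substitute your control plane address and the actual Snaplex node names.

    # check_ports.py - verify that required ports are reachable from this node.
    # Hostnames are placeholders; list your own control plane and Snaplex node names.
    import socket

    CHECKS = [
        ("elastic.snaplogic.com", 443),    # outbound HTTPS/WSS to the control plane (assumed URL)
        ("jcc-node-2.example.com", 8081),  # neighbor JCC HTTPS port (placeholder hostname)
        ("jcc-node-2.example.com", 8090),  # neighbor JCC HTTP port
        ("feedmaster.example.com", 8084),  # FeedMaster HTTPS port
        ("feedmaster.example.com", 8089),  # FeedMaster message queue port
    ]

    for host, port in CHECKS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"OK      {host}:{port}")
        except OSError as err:
            print(f"BLOCKED {host}:{port} ({err})")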

Network Guidelines for Snap Usage

In the SnapLogic Platform, it is the Snaps that actually communicate with the application endpoints. The protocols and ports required for this communication are determined mostly by the endpoints themselves, not by SnapLogic. Cloud and SaaS applications commonly communicate over HTTPS, although older and on-premises applications might have their own requirements.

For example, the following table shows some of these requirements:

Application | Protocol | Default Port
Netezza | TCP | 5480
Oracle | TCP | 1521
RedShift | TCP | 5439
Salesforce | HTTPS | 443

Each of these application connections might allow the use of a proxy for the network connection, but that is a configuration option of the application's connection, not one applied by SnapLogic.
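As an illustration of a per-connection proxy (as opposed to a system-wide setting), the following sketch attaches a hypothetical proxy to a single HTTPS connection; the proxy and endpoint addresses are placeholders.

    # proxy_connection_example.py - illustrates a per-connection proxy, analogous to how an
    # endpoint account can carry its own proxy settings. Hostnames are placeholders.
    import urllib.request

    PROXY = "http://proxy.internal.example.com:3128"   # hypothetical corporate proxy
    ENDPOINT = "https://login.salesforce.com/"         # example HTTPS endpoint (default port 443)

    # The proxy is attached to this opener only, not set process-wide, mirroring the idea
    # that proxy use is an option of the individual application connection.
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"https": PROXY, "http": PROXY})
    )
    with opener.open(ENDPOINT, timeout=10) as resp:
        print(resp.status, resp.reason)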

FeedMaster Node Ports

For Ultra Pipelines, the FeedMaster node listens on the following two ports:

  • 8084—The FeedMaster's HTTPS port. Requests for the Pipelines are sent here in addition to some internal requests from the other Groundplex nodes.

  • 8089—The FeedMaster's embedded ActiveMQ broker SSL port. The other Groundplex nodes connect to this port to send and receive messages.

The machine hosting the FeedMaster nodes needs to have those ports open on the local firewall, and the other Groundplex nodes need to allow outbound requests to the FeedMaster nodes on those ports.

Groundplex Name and Associated Nodes

Every Snaplex requires a name, for example, ground-dev or ground-prod. In the SnapLogic Designer, you can choose the Snaplex where Pipelines are executed.

Your nodes are associated with a Groundplex through the Environment value, for example, dev or prod. When you configure the nodes for your Groundplex, you must set jcc.environment to the Environment value that you provided in the Create Snaplex dialog. You can change this value in the Update Snaplex dialog.

Per DNS standards, the hostname of a system used by a Groundplex cannot contain an underscore (_). Avoid other special characters as well.
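A small script can confirm that a node's configuration matches the Snaplex definition. The properties file path below is an assumption based on a default Linux install; your jcc.environment value may be set elsewhere (for example, in the downloaded Snaplex configuration file), so treat this purely as a sketch.

    # check_node_config.py - confirm the node's environment value and hostname rules.
    # PROPS_FILE is an assumed location; adjust to where jcc.environment is set in your install.
    import socket

    PROPS_FILE = "/opt/snaplogic/etc/global.properties"   # assumed location
    EXPECTED_ENV = "dev"   # the Environment value entered in the Create Snaplex dialog

    env = None
    with open(PROPS_FILE) as f:
        for line in f:
            if line.strip().startswith("jcc.environment"):
                env = line.split("=", 1)[1].strip()

    print(f"jcc.environment={env!r} (expected {EXPECTED_ENV!r})")

    hostname = socket.gethostname()
    if "_" in hostname:
        print(f"warning: hostname {hostname!r} contains an underscore, which violates DNS naming rules")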

After the Snaplex service is started on a node, the service connects to the SnapLogic Cloud service. Runtime logs from the Snaplex are written to the following folder: 

  • Linux: /opt/snaplogic/run/log 

  • Windows: c:\opt\snaplogic\run\log
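To glance at the most recent runtime log on a node without opening the Dashboard, something like the following works on Linux (the directory comes from the paths above).

    # latest_log.py - print the last lines of the newest Snaplex runtime log (Linux path shown).
    import glob
    import os

    LOG_DIR = "/opt/snaplogic/run/log"   # use c:\opt\snaplogic\run\log on Windows

    files = glob.glob(os.path.join(LOG_DIR, "*"))
    if not files:
        raise SystemExit(f"no log files found in {LOG_DIR}")
    newest = max(files, key=os.path.getmtime)
    print(f"--- {newest} ---")
    with open(newest, errors="replace") as f:
        print("".join(f.readlines()[-20:]))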

The Dashboard shows the currently connected nodes for each Snaplex.

Understanding the Distribution of Data Processing across Snaplex Nodes

When a Pipeline or Task is executed, the work is assigned to one of the JCC nodes in the Snaplex. The scheduling of Pipelines across nodes in a Snaplex is based on a least-loaded algorithm that prioritizes memory usage. If nodes are similarly loaded, the algorithm distributes Pipeline executions across them at random.

To ensure that requests are shared across JCC nodes, we recommend that you set up a load balancer to distribute work across the nodes in the Snaplex.
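The selection behavior can be pictured with a small, purely illustrative sketch (this is not the platform's actual implementation): pick the node with the lowest memory usage, and break near-ties randomly.

    # least_loaded.py - conceptual illustration of least-loaded node selection with
    # randomization among similarly loaded nodes. Node names and numbers are made up.
    import random

    # memory_used is a percentage reported per node (hypothetical snapshot).
    nodes = {"node-a": 72.0, "node-b": 41.5, "node-c": 43.0}

    def pick_node(nodes, tolerance=5.0):
        """Pick the node with the lowest memory usage; break near-ties randomly."""
        lowest = min(nodes.values())
        candidates = [n for n, used in nodes.items() if used - lowest <= tolerance]
        return random.choice(candidates)

    print(pick_node(nodes))   # node-b or node-c, since they are similarly loaded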

Node Cluster

Starting multiple nodes with the JCC service pointing to the same Snaplex configuration automatically forms a node cluster, provided the nodes meet these requirements:

  • The nodes need to communicate with each other on the following ports: 8081, 8084, and 8090.

  • The nodes should have a reliable, low-latency network connection between them.

  • The nodes should be homogeneous in that they should have the same CPU and memory configurations, as well as access to the same network endpoints.

  • All JCC nodes should be the same size. All FeedMaster nodes should be the same size for load balancing. Worker and FeedMaster nodes can be of different sizes.

JCC Node Communication Requirements

Each JCC node publishes its IP addresses to the control plane. DNS is not required for communication between nodes. We recommend setting up all the nodes inside a Snaplex in the same network and data center. Communication between JCC nodes in the same Snaplex is required for the following reasons:

  • The Pipeline Execute Snap communicates directly with neighboring JCC nodes in a Snaplex to start child Pipeline executions and send documents between parent and child Pipelines.

  • The data displayed in Data Preview is written to and read from neighboring JCC nodes in the Snaplex.

  • The requests and responses made in Ultra Pipelines are exchanged between a FeedMaster node and all JCC nodes in a Snaplex.

  • A Ground-Triggered Task (invoked from a Groundplex) can be executed on a neighboring JCC node because of load balancing, in which case the Pipeline prepare request and the bodies of the request and response are transferred between nodes.

Any extra latency or network hops between neighboring JCC nodes can introduce performance and reliability problems.
