...

Info

The regular nodes count (jccCount) and Feedmaster nodes count (feedmasterCount) are similar to the minReplicas and maxReplicas fields in that both control the number of Pods needed to run the application. Use jccCount and feedmasterCount to set the Pod count statically; use minReplicas and maxReplicas when autoscaling is enabled.

The YAML files set the default name values for Groundplex nodes. The name snaplogic-snaplex used in the example in this document is for reference only and is not a required naming convention.

Example Helm Chart

Code Block
# Default values for snaplogic-snaplex.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Regular nodes count
jccCount: 1
# Feedmaster nodes count
feedmasterCount: 1
# Docker image of SnapLogic snaplex
image:
  repository: snaplogic/snaplex
  tag: latest
# SnapLogic configuration link
snaplogic_config_link:
# SnapLogic Org admin credential
#snaplogic_secret:
# Enhanced encryption secret
#enhanced_encrypt_secret:
# CPU and memory limits/requests for the nodes
limits:
  memory: 8Gi
  cpu: 2000m
requests:
  memory: 8Gi
  cpu: 2000m
# Default file ulimit and process ulimit
sl_file_ulimit: 8192
sl_process_ulimit: 4096
# Enable/disable startup, liveness and readiness probes
probes:
  enabled: true
# JCC HPA
autoscaling:
  enabled: false
  minReplicas:
  maxReplicas:
  # Average number of queued Snaplex pipelines (e.g. targetPlexQueueSize: 5); leave empty to disable.
  # This metric requires Prometheus and the Prometheus Adapter to be installed.
  targetPlexQueueSize:
  # Average CPU utilization (e.g. targetAvgCPUUtilization: 50 means 50%); leave empty to disable.
  # This metric requires the Kubernetes Metrics Server to be installed.
  targetAvgCPUUtilization:
  # Average memory utilization (e.g. targetAvgMemoryUtilization: 50 means 50%); leave empty to disable.
  # This metric requires the Kubernetes Metrics Server to be installed.
  targetAvgMemoryUtilization:
  # Stabilization window to wait before scaling up; defaults to 0s if empty.
  scaleUpStabilizationWindowSeconds:
  # Stabilization window to wait before scaling down; defaults to 300s if empty.
  scaleDownStabilizationWindowSeconds:
# Grace period (in seconds) after the JCC termination signal before forced shutdown; defaults to 30s if empty.
terminationGracePeriodSeconds: 900
# Enable IPv6 service for DNS routing to pods
enableIPv6: false
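As a hypothetical illustration of the autoscaling section above, the JCC HPA could be turned on with a values override such as the following. The threshold numbers are examples only, not recommendations:

```yaml
# values-autoscaling.yaml -- example override enabling the JCC HPA.
# All numbers below are illustrative; size them to your workload.
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 6
  # Scale on average CPU utilization; requires the Kubernetes Metrics Server.
  targetAvgCPUUtilization: 60
  # Wait 5 minutes before scaling down to avoid replica thrashing.
  scaleDownStabilizationWindowSeconds: 300
```

Such an override file is typically applied with Helm's -f flag, for example: helm upgrade --install snaplogic-snaplex <chart> -f values-autoscaling.yaml.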

...

For Kubernetes-based deployments, users have to build images that install the required utilities in the appropriate locations. Those images can use the official SnapLogic image as the base image. When deploying the Snaplex to Kubernetes, users then deploy with that custom image so the necessary dependencies and utilities are in place.
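As a sketch of this approach, a custom image might look like the following. The base image tag, the assumption of a Debian-based image (apt-get), and the utility package names are all illustrative, not official guidance:

```dockerfile
# Hypothetical custom image: the official Snaplex image as the base,
# plus extra OS utilities that pipelines on this Snaplex need.
FROM snaplogic/snaplex:latest

# Install additional utilities (package names are examples only;
# adjust the package manager to match the actual base image).
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl jq \
    && rm -rf /var/lib/apt/lists/*
```

The resulting image is then referenced in the chart's image.repository and image.tag values in place of the official image.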

Disk sizing guidelines

By default, Kubernetes pods use the disk space of the node they run on, called ephemeral disk. If ephemeral disk space runs low, Kubernetes evicts (that is, restarts) the pod consuming the most disk space on the node, which frees that space. Pods do not retain ephemeral disk space across restarts, so each time a pod restarts, its filesystem is effectively cleared.
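One general Kubernetes safeguard against such evictions (not specific to this chart) is to declare ephemeral-storage requests and limits on the container, so the scheduler accounts for disk space and eviction becomes predictable. The sizes below are illustrative:

```yaml
# Example container resources with ephemeral storage accounted for.
# Sizes are examples; base them on your actual workload.
resources:
  requests:
    # The scheduler only places the pod on nodes with this much free.
    ephemeral-storage: "4Gi"
  limits:
    # The pod becomes eligible for eviction if it exceeds this amount.
    ephemeral-storage: "8Gi"
```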

The amount of ephemeral disk space a Kubernetes worker node needs depends on the workloads running on that node, e.g., the number of Groundplexes on the cluster, the number of other pods, and so on.

If more disk space is needed, a Persistent Volume can be used. In cloud environments like AWS, this is often EBS storage. Persistent volumes can be mounted to pods, and they retain data across restarts.
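For illustration, a PersistentVolumeClaim and its mount might look like the following. The claim name, size, and mount path are hypothetical; in a cloud environment such as AWS, the default StorageClass is typically backed by EBS:

```yaml
# Hypothetical PVC; the name and requested size are examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snaplex-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
---
# In the pod spec, the claim is then mounted so data survives restarts:
# spec:
#   volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: snaplex-data
#   containers:
#     - volumeMounts:
#         - name: data
#           mountPath: /data   # example path only
```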

Best Practices

  • Avoid running other processes in the same container as the JCC, so that the full amount of memory the JCC requested remains available to it.

  • Do not overwrite the global properties options unless you are working with your CSM to customize your Groundplex.

  • Request resources upfront. Requests determine the minimum resources guaranteed to a Container, while limits set the maximum it can consume. Setting requests and limits to the same values ensures stability and predictable resource usage. To do this, set the pod’s request and limit to the same value, as shown in the image below:
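In the chart's values, this practice corresponds to keeping the limits and requests sections identical, as the defaults shown earlier already do:

```yaml
# Requests and limits set to the same values, giving the pod
# predictable resource usage (a Guaranteed QoS class in Kubernetes).
limits:
  memory: 8Gi
  cpu: 2000m
requests:
  memory: 8Gi
  cpu: 2000m
```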

...