Deploying a Groundplex in Kubernetes


Overview

SnapLogic supports Kubernetes orchestration on your Groundplex instances. You can deploy Snaplex nodes in your Kubernetes environment by setting up a Helm Chart that defines the node configuration for discoverability in the Kubernetes environment. This article explains how you can deploy and configure SnapLogic Snaplex nodes in a Kubernetes environment and contains an attached Helm Chart that you can use.

The autoscaling solution described in the article Deploying a Groundplex in Kubernetes with Elastic Scaling is no longer available. Instead, you can configure your Groundplex to autoscale through Kubernetes-based metrics. Consult your CSM to learn more.

Workflow

Prerequisites

Downloading the Configuration File from SnapLogic Manager

  1. Open an existing Snaplex in the Org:

    1. Navigate to the target Snaplex in Manager.

    2. Click the Snaplex name to display the Update Snaplex dialog.
      Alternatively, if no Snaplex exists yet, Create a Snaplex.

  2. On the Downloads tab, click to copy the Configuration link. Paste this link into your Helm Chart.

     

  3. Click Cancel to exit the dialog.

  4. Because the configuration link has an expiration, do the following to ensure that the Kubernetes pods continue to run:

    1. Delete all query string parameters from the Configuration Link URL.
      In the following example URL, delete everything from the question mark to the end:
      https://elastic.snaplogic.com/api/1/rest/plex/config/PlatformQA/shared/Ground_Triggered?expires=1613086219&user_id=testuser22@snaplogic.com&_sl_authproxy_key=1BN...

    2. Set the parameter snaplogic_secret in the Helm chart YAML file to the name of the Kubernetes secret you create, as described in the Deploying the Helm Chart section.
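Removing the query string can also be scripted. The following is a minimal shell sketch using a hypothetical configuration link; the parameter expansion removes everything from the first question mark onward:

```shell
# Hypothetical configuration link copied from the Downloads tab.
config_link="https://elastic.snaplogic.com/api/1/rest/plex/config/MyOrg/shared/MySnaplex?expires=1613086219&user_id=user@example.com"

# Strip everything from the first '?' to the end, leaving a
# non-expiring configuration link for the Helm Chart.
echo "${config_link%%\?*}"
# → https://elastic.snaplogic.com/api/1/rest/plex/config/MyOrg/shared/MySnaplex
```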

For a Zero Trust Kubernetes installation, allow outbound TCP port 443 and WebSocket connections to the following sites:
https://elastic.snaplogic.com
https://tcp.elastic.snaplogic.com
https://snaplogic-prod-sldb.s3.amazonaws.com
https://s3.amazonaws.com

Running the Snaplex with Org Credentials

You can associate Org admin credentials with the SnapLogic secret created when enabling enhanced encryption. Doing so makes it easier to share the Snaplex service with the users in your Org. Both the credentials for the SnapLogic Org admin and the enhanced encryption secret are in JSON format as key/value pairs.

  1. Encode the value for each key.
    NOTE: Kubernetes stores Secret data values as Base64-encoded strings, so use base64 to encode the values.

     

  2. To create the SnapLogic secret:

    1. Create the YAML file with the following two keys: username and password.

      Example YAML File

      apiVersion: v1
      kind: Secret
      metadata:
        name: mysecret
      type: Opaque
      data:
        username: <base64 username>
        password: <base64 password>


      TIP: Run the following command to encode your username/password into the text of the secret:
      $ echo -n "snaplogic_username_or_password" | base64

      IMPORTANT: If your password includes any of the following characters, you must escape the character with a backslash (\) in the string that you pass to the encoder:
      \ (backslash)
      $ (dollar sign)
      ' (apostrophe or single-quote)
      ` (backtick)
      " (double-quotes)
      & (ampersand)
      | (pipe symbol)
      ; (semicolon)
      ! (exclamation mark)
      For example, if your password is mypa$$word, pass the string mypa\$\$word to the Base64 encoder.

    2. Run the following command:

      $ kubectl apply -f snaplogic_secret.yaml

     

  3. (Optional) If Enhanced Encryption is enabled for your Org, create the Enhanced Encryption secret by running the following commands:
    $ kubectl create secret generic enhanced-encryption-secret --from-file=keystore_jks --from-file=keystore_pass
    $ kubectl apply -f enhanced_encryption_secret.yaml

  4. After the secret is created, delete the YAML file, which is no longer needed.
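As a minimal sketch, the Base64 encoding from step 1 can be checked from the command line. The credentials below are hypothetical; the second command uses the escaped mypa$$word example from above:

```shell
# Base64-encode hypothetical credentials for the Secret's data fields.
echo -n "admin@example.com" | base64    # → YWRtaW5AZXhhbXBsZS5jb20=

# Special characters such as $ must be escaped with a backslash:
echo -n "mypa\$\$word" | base64         # → bXlwYSQkd29yZA==
```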

See the Kubernetes documentation regarding the management of the secret.

You can now deploy the Helm Chart.

Deploying the Helm Chart

About the Helm Chart

The Helm Chart defines the configuration values for your Groundplex nodes in your Kubernetes environment.

You can download the Helm Chart package which contains the following:

  • values.yaml - The configurable values for the chart. Edit this file to define your deployment.

  • templates folder - Template files that Helm renders into Kubernetes manifests based on the values you provide.

  • Chart.yaml - Metadata about the chart itself, such as its name and version.
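After you extract the package, the layout looks roughly like this (the contents of the templates folder are illustrative and may differ):

```
snaplogic-snaplex/
├── Chart.yaml
├── values.yaml
└── templates/
    └── (template files rendered into Kubernetes manifests)
```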

Helm Chart Fields

The following list describes each field parameter in the values.yaml file:

  1. Regular nodes count. Specifies the number of JCC Nodes to deploy.

  2. FeedMaster nodes count. Specifies the number of FeedMaster nodes to deploy.

  3. Docker image of the SnapLogic Snaplex. Specifies the repository where the image resides and the tag indicating the image version. While you can pin a specific Snaplex version, we recommend latest to get the most recently released SnapLogic build.

  4. SnapLogic configuration link. Specifies the link to the SnapLogic JCC configuration file (also known as .slpropz).

  5. SnapLogic Org admin credential. Specifies the secret (an encoded username and password) to authenticate the deployment.

  6. Enhanced encryption secret. Specifies the secret that holds the keystore and keystore password used for enhanced encryption, available to the user only.

  7. CPU and memory limits for the nodes. Specifies the requests and upper limits for the CPU and memory resources. You can set only the upper limits; the lower limits are system-defined and cannot be modified.

  8. Default file ulimit and process ulimit. Specifies the maximum number of open file descriptors and processes on a node.

  9. Probes. Enables or disables the startup, liveness, and readiness probes that monitor the SnapLogic app.

  10. Autoscaling. Sets autoscaling properties. Disabled (enabled: false) by default. Contact your CSM for SnapLogic recommendations for this setting.

  11. Termination Grace Period Seconds. The time allowed for a node to shut down gracefully before it is forcibly terminated.

  12. IPv6. Disabled (enableIPv6: false) by default. You can enable IPv6 for your connections if your infrastructure supports it.

Example Helm Chart

# Default values for snaplogic-snaplex.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# Regular nodes count
jccCount: 1

# Feedmaster nodes count
feedmasterCount: 1

# Docker image of SnapLogic snaplex
image:
  repository: snaplogic/snaplex
  tag: latest

# SnapLogic configuration link
snaplogic_config_link:

# SnapLogic Org admin credential
#snaplogic_secret:

# Enhanced encryption secret
#enhanced_encrypt_secret:

# CPU and memory limits/requests for the nodes
limits:
  memory: 8Gi
  cpu: 2000m
requests:
  memory: 8Gi
  cpu: 2000m

# Default file ulimit and process ulimit
sl_file_ulimit: 8192
sl_process_ulimit: 4096

# Enable/disable startup, liveness and readiness probes
probes:
  enabled: true

# JCC HPA
autoscaling:
  enabled: false
  minReplicas:
  maxReplicas:
  # Average count of Snaplex queued pipelines (e.g. targetPlexQueueSize: 5), leave empty to disable.
  # To enable this metric, Prometheus and Prometheus-Adapter must be installed.
  targetPlexQueueSize:
  # Average CPU utilization (e.g. targetAvgCPUUtilization: 50 means 50%), leave empty to disable.
  # To enable this metric, the Kubernetes Metrics Server must be installed.
  targetAvgCPUUtilization:
  # Average memory utilization (e.g. targetAvgMemoryUtilization: 50 means 50%), leave empty to disable.
  # To enable this metric, the Kubernetes Metrics Server must be installed.
  targetAvgMemoryUtilization:
  # Window to wait while scaling up. Default is 0s if empty.
  scaleUpStabilizationWindowSeconds:
  # Window to wait while scaling down. Default is 300s if empty.
  scaleDownStabilizationWindowSeconds:

# Grace period seconds after JCC termination signal before force shutdown, default is 30s if empty.
terminationGracePeriodSeconds: 900

# Enable IPv6 service for DNS routing to pods
enableIPv6: false

Steps

  1. Configure the preceding parameters in the Helm Chart and name the file values.yaml.

  2. From a console where Helm is installed, run the following command:

    $ helm install --name snaplogic <helm_chart_folder>
    Where <helm_chart_folder> is the folder extracted from the Helm Chart zip file, which you can download from this document.
    NOTE: The --name flag applies to Helm 2. With Helm 3, pass the release name as the first argument instead: helm install snaplogic <helm_chart_folder>.

  3. Run the helm list command to check the status of the deployment.
    A successful deployment reports the release with a deployed status; a release whose resources are still being created reports a pending status.
Once you deploy your Helm Chart, you can deploy a load balancer.

Configuring IPv6

To set up IPv6 on Kubernetes:

  1. In the values.yaml file of the Helm Chart, set the value enableIPv6: true.

  2. Set the global property jcc.k8s_subdomain_service in this format: <Helm Release Name>-snaplogic-snaplex-ipv6.
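Putting both settings together, assuming a hypothetical Helm release named myrelease:

```
# In the values.yaml file of the Helm Chart:
enableIPv6: true

# Global property, following the format above:
jcc.k8s_subdomain_service = myrelease-snaplogic-snaplex-ipv6
```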

Deploying a Load Balancer

To add load balancers to your JCC and FeedMaster nodes:

  1. From the console, run the kubectl get svc command to list the services and their assigned ports.

  2. In SnapLogic Manager, navigate to the target Project folder, then click the target Snaplex; the Update Snaplex dialog appears. 

  3. On the Settings tab of the Update Snaplex dialog, enter the corresponding values in the following fields:

    • Load balancer. Enter the protocol and port number of the Snaplex JCC node. See PORT(s) associated with snaplogic-snaplex-regular.

    • Ultra load balancer. Enter the protocol and port number of the FeedMaster node. See PORT(s) associated with snaplogic-snaplex-feed.

  4. Review the information, then click Update.

Once your Snaplex and FeedMaster nodes are deployed, you can start designing and running Pipelines and Tasks.

 

Best Practices

  • Avoid running processes on the same pod as the JCC node, so that the JCC can have the maximum amount of memory available on that pod.

  • Do not overwrite the global.properties file unless working with your CSM to customize your Groundplex.

  • Request resources upfront. Do this by setting the pod's request and limit to the same value.
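The last practice maps to the limits and requests section of the values.yaml file; a sketch using the defaults from the example Helm Chart, with requests and limits set to the same values:

```yaml
# Identical requests and limits reserve the pod's resources upfront.
limits:
  memory: 8Gi
  cpu: 2000m
requests:
  memory: 8Gi
  cpu: 2000m
```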

Downloads

Download and extract the following files, using the values.yaml file as the basis for your Helm Chart.

  File: helm_chart_v2 (1).zip (ZIP Archive)
  Modified: Oct 12, 2023 by John Brinckwirth

 



See Also