...
- Import the SnapLogic dashboard template into your Grafana analytics from the following source:
https://grafana.com/grafana/dashboards/14363
The page describes multiple methods for importing the template.
- In the custom_values.yaml file for the Helm Chart, set the Enabled field to true.
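As a sketch, the enable flag in custom_values.yaml might look like the following; the surrounding key name is an assumption, so match it to the keys that already exist in your chart's file:

```yaml
# Sketch only -- the "grafana" key name is an assumption;
# use the key names present in your chart's custom_values.yaml.
grafana:
  enabled: true
```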
Access the Grafana UI through the IP addresses under the EXTERNAL-IP column from Step 2 of Installing the Prometheus Application.
Info: You can also use the kubectl top command to monitor the CPU and memory usage of the pods.
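For example, assuming the Snaplex pods run in the snaplogic namespace used elsewhere in this guide:

```shell
# Show current CPU and memory usage for the Snaplex pods.
# Requires the Kubernetes Metrics Server to be installed in the cluster.
kubectl top pods -n snaplogic
```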
...
- Enter the following values into your Helm Chart:
- CPU and memory requests for the nodes.
- HPA (Horizontal Pod Autoscaler) enabled flag.
- minReplicas. The lower limit for the number of replicas to which the autoscaler can scale down. The value defaults to 1 pod and cannot be set to 0.
- maxReplicas. The upper limit for the number of pods that can be set by the autoscaler. This value cannot be smaller than the value set for minReplicas.
- targetPlexQueueSize. The target value for the average count of queued Snaplex Pipelines. Leave this field empty to disable this metric.
- Example: targetPlexQueueSize: 5
Where 5 is the number of queued Pipelines.
- targetAvgCPUUtilization. The target value for average CPU utilization. Leave this field empty to disable this metric.
- Example: targetAvgCPUUtilization: 50
Where 50 means 50%.
- targetAvgMemoryUtilization. The target value for average memory utilization. Leave this field empty to disable this metric.
- Example: targetAvgMemoryUtilization: 50
Where 50 means 50%.
- scaleUpStabilizationWindowSeconds. The stabilization window, in seconds, that the autoscaler waits before scaling up. If you leave this field empty, the default value is 0.
- scaleDownStabilizationWindowSeconds. The stabilization window, in seconds, that the autoscaler waits before scaling down. If you leave this field empty, the default value is 300.
- terminationGracePeriodSeconds. The grace period, in seconds, between the Snaplex node termination signal and a forced shutdown. If you leave this field empty, the default value is 30 seconds.
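Putting the values above together, a custom_values.yaml fragment might look like the following sketch. The field names are the ones documented above; the nesting and the resources/autoscaling key names are assumptions, so match them to your chart's file:

```yaml
# Sketch only -- the "resources" and "autoscaling" key names and the nesting
# are assumptions; the individual field names are documented above.
resources:
  requests:
    cpu: "2000m"
    memory: "8Gi"
autoscaling:
  enabled: true
  minReplicas: 1                  # defaults to 1; cannot be set to 0
  maxReplicas: 5                  # cannot be smaller than minReplicas
  targetPlexQueueSize: 5          # leave empty to disable this metric
  targetAvgCPUUtilization: 50     # percent; leave empty to disable
  targetAvgMemoryUtilization: 50  # percent; leave empty to disable
  scaleUpStabilizationWindowSeconds: 0
  scaleDownStabilizationWindowSeconds: 300
terminationGracePeriodSeconds: 30
```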
- Install the Helm Chart by running the following command, where <helm_chart_name> is the release name:
$ helm install <helm_chart_name> <helm_chart_folder> -n snaplogic
- Verify the Helm Chart installation by running the following command.
$ kubectl get all -n snaplogic
NAME                                                                 READY   STATUS    RESTARTS   AGE
pod/<helm_chart_name>-snaplogic-snaplex-feedmaster-84ff4f48c-7rtpd   1/1     Running   0          12s
pod/<helm_chart_name>-snaplogic-snaplex-jcc-66ddddcb76-ttdwz         1/1     Running   0          12s

NAME                                                  TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/<helm_chart_name>-snaplogic-snaplex-feed      NodePort   10.100.83.252    <none>        8084:30456/TCP   13s
service/<helm_chart_name>-snaplogic-snaplex-regular   NodePort   10.100.140.124   <none>        8081:30182/TCP   13s

NAME                                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/<helm_chart_name>-snaplogic-snaplex-feedmaster   1/1     1            1           13s
deployment.apps/<helm_chart_name>-snaplogic-snaplex-jcc          1/1     1            1           13s

NAME                                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/<helm_chart_name>-snaplogic-snaplex-feedmaster-84ff4f48c   1         1         1       13s
replicaset.apps/<helm_chart_name>-snaplogic-snaplex-jcc-66ddddcb76         1         1         1       13s

NAME                                                                          REFERENCE                                            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/<helm_chart_name>-snaplogic-snaplex-hpa   Deployment/<helm_chart_name>-snaplogic-snaplex-jcc   10%/50%   1         5         0          13s
...