Manage Groundplex Nodes in Your Snaplex


Overview

You can add nodes to your Self-managed Snaplex and update their configuration files. Learn more about customizing your Snaplex configuration.

Adding Nodes to your Groundplex

After you create a Self-managed Snaplex, you can add nodes to your existing Groundplex by using the following procedure to install the Snaplex package and update the Snaplex properties file:

  1. Download and install the appropriate package for your operating system from the Downloads tab.

    The Downloads tab is not visible until after a Snaplex is created.

  2. After installing the software, download the Snaplex configuration file, place it in the /opt/snaplogic/etc directory, and make sure that the file name ends with slpropz. The download link is valid for one hour.

  3. To migrate existing Groundplex nodes to use the slpropz configuration file, make sure that the Node Properties and Node Proxies settings match what you have configured in your global.properties file.

  4. Download the slpropz configuration file, place it in the /opt/snaplogic/etc directory, and restart the JCC service (see the sketch after this procedure).

If the Groundplex fails to restart after a new configuration is installed, it reverts to the last working configuration.
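
The following is a minimal sketch of steps 2 through 4 for a Linux node. The download URL and file name are placeholders for the link you copy from the Downloads tab (remember that it expires after one hour), and the jcc.sh restart command is an assumption about how the JCC service is managed on your node; adjust both for your environment.

    # Minimal sketch: place a downloaded slpropz file and restart the JCC service.
    import shutil
    import subprocess
    import urllib.request
    from pathlib import Path

    # Hypothetical values: copy the real, short-lived link from the Downloads tab.
    SLPROPZ_URL = "https://example.invalid/my-snaplex.slpropz"
    TARGET = Path("/opt/snaplogic/etc/my-snaplex.slpropz")  # keep the slpropz suffix
    JCC_RESTART = ["/opt/snaplogic/bin/jcc.sh", "restart"]  # restart command assumed

    def install_slpropz(url: str, target: Path) -> None:
        """Download the Snaplex configuration file into /opt/snaplogic/etc."""
        with urllib.request.urlopen(url) as response, open(target, "wb") as out:
            shutil.copyfileobj(response, out)

    if __name__ == "__main__":
        install_slpropz(SLPROPZ_URL, TARGET)
        # Restart the JCC so that it picks up the new configuration.
        subprocess.run(JCC_RESTART, check=True)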

Best Practices

  • When you upgrade a Windows-based Groundplex to use the slpropz configuration file, update the Monitor process by running jcc.bat update_service. If the Monitor process is not updated, the maximum heap space for the JCC process can be set incorrectly.

  • If you are unable to create an SLDB file with international language characters (such as æ, Æ, Ø) in the file name, set the jcc.use_lease_urls property in the Snaplex Global Properties to False. This workaround applies to all UTF-8 characters and therefore supports all global languages.

  • By default, if the first run of an Ultra Task fails, SnapLogic attempts to run the Task a total of five times. To change the maximum number of retries for Ultra Tasks on a specific Snaplex, set the ultra.max_redelivery_count parameter in Global Properties in Snaplex Update to the number of times you want a failed Ultra Task to run (see the sketch after this list).

  • For critical workloads in your production environment, we recommend a minimum of two worker nodes (and two FeedMaster nodes for Ultra Tasks) to avoid service disruption during Snaplex upgrades.
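
If your nodes are still configured through a global.properties file, a quick check like the one below can show which of these properties are already set before you change them in Manager. This is a minimal sketch: only the property keys come from this article; the file location and the simple key=value format are assumptions about your installation.

    # Minimal sketch: report the Snaplex properties mentioned in the list above.
    from pathlib import Path

    PROPERTIES_FILE = Path("/opt/snaplogic/etc/global.properties")  # assumed location
    KEYS_OF_INTEREST = ("jcc.use_lease_urls", "ultra.max_redelivery_count")

    def read_properties(path: Path) -> dict:
        """Parse a simple key=value properties file, skipping comments and blanks."""
        props = {}
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
        return props

    if __name__ == "__main__":
        props = read_properties(PROPERTIES_FILE)
        for key in KEYS_OF_INTEREST:
            print(f"{key} = {props.get(key, '<not set>')}")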

Migrating Older Snaplex Nodes

If you have a Snaplex running with a global.properties file in the $SL_ROOT/etc folder, consider migrating the Snaplex to the slpropz configuration mechanism.

We recommend that you manually update your global.properties file instead of overwriting it. Running the Groundplex with the global.properties file requires the following tasks for any configuration change (a sketch of this workflow follows the list):

  1. Manually update the global.properties file.

  2. Copy the updated configuration to all the nodes.

  3. Restart all the nodes.
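
A minimal sketch of that manual workflow is shown below. The node hostnames are hypothetical, and SSH access, the install path, and the jcc.sh restart command are assumptions about your environment; migrating to slpropz, as described next, removes the need for this kind of script.

    # Minimal sketch of the manual workflow: push an updated global.properties
    # to every node and restart the JCC on each one.
    import subprocess

    NODES = ["groundplex-node-1", "groundplex-node-2"]      # hypothetical hostnames
    LOCAL_CONFIG = "global.properties"                      # the file you edited (step 1)
    REMOTE_CONFIG = "/opt/snaplogic/etc/global.properties"  # assumed install path

    for node in NODES:
        # Step 2: copy the updated configuration to the node.
        subprocess.run(["scp", LOCAL_CONFIG, f"{node}:{REMOTE_CONFIG}"], check=True)
        # Step 3: restart the node so the change takes effect (command assumed).
        subprocess.run(["ssh", node, "/opt/snaplogic/bin/jcc.sh restart"], check=True)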

We recommend that you migrate to the new slpropz mechanism for defining configuration files.

To migrate the Snaplex configuration to the slpropz mechanism:

  1. In Manager, update the Snaplex properties by adding any custom configurations that remain in the global.properties file to the Snaplex properties.

    If you do not complete this step, the configuration changes applied to the nodes are not available when the nodes run with slpropz.

     

  2. Navigate to the Manager > Update Snaplex > Downloads tab, and download the slpropz file into the $SL_ROOT/etc folder.

  3. Back up the global.properties and keys.properties files, and then remove them from the $SL_ROOT/etc folder (see the sketch after these steps).

  4. Restart the JCC process on all the Groundplex nodes through the Public API. Alternatively, as an Org admin, you can restart each JCC node through the Dashboard by clicking the target Groundplex node and selecting Node Restart.
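
The following is a minimal sketch of step 3 on a single node. It assumes that $SL_ROOT defaults to /opt/snaplogic and uses an invented, timestamped backup folder; adjust both for your installation. Afterwards, restart the JCC as described in step 4.

    # Minimal sketch for step 3: back up global.properties and keys.properties,
    # then remove them from $SL_ROOT/etc so the node starts from the slpropz file.
    import os
    import shutil
    import time
    from pathlib import Path

    SL_ROOT = Path(os.environ.get("SL_ROOT", "/opt/snaplogic"))  # default assumed
    ETC_DIR = SL_ROOT / "etc"
    BACKUP_DIR = ETC_DIR / ("backup-" + time.strftime("%Y%m%d%H%M%S"))

    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for name in ("global.properties", "keys.properties"):
        source = ETC_DIR / name
        if source.exists():
            # Moving the file both backs it up and removes it from $SL_ROOT/etc.
            shutil.move(str(source), str(BACKUP_DIR / name))
            print(f"Moved {source} to {BACKUP_DIR / name}")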

The Snaplex nodes now run with the new slpropz mechanism, which allows remote configuration updates.

Use Case: Display of Pipelines After a JCC Node Is Terminated

This use case describes how pipelines are displayed on the Dashboard after a JCC node has been terminated.

If multiple JCC nodes are running, a task with several instances is divided among them. For example, an Ultra Task with nine instances is split into three instances on each of three JCC nodes. However, if one of the nodes crashes, the JCC state is not updated in the SLDB (Service Level DataBase). As a result, the instances on that node remain in the RUNNING state, creating zombie instances on the Dashboard.

The zombie instances remain visible on the Dashboard for up to eight hours, or until they reach the maximum heartbeat limit specified in the cleanup pipelines method. After the cleanup process completes, the instances are no longer visible on the Dashboard.

If the node crash is resolved within the eight-hour limit, the instances are cleaned up automatically and are no longer visible on the Dashboard.
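
The following illustrative sketch models the cleanup rule described above: an instance is treated as a zombie when it is still marked RUNNING but its node's last heartbeat is older than the limit (shown here as eight hours). The data structures and the find_zombie_instances helper are invented for illustration; they are not a SnapLogic API.

    # Illustrative model of the zombie-instance rule described above.
    from datetime import datetime, timedelta

    MAX_HEARTBEAT_AGE = timedelta(hours=8)  # the eight-hour limit from this article

    def find_zombie_instances(instances, node_heartbeats, now):
        """Return RUNNING instances whose node heartbeat has expired."""
        return [
            inst for inst in instances
            if inst["state"] == "RUNNING"
            and now - node_heartbeats[inst["node"]] > MAX_HEARTBEAT_AGE
        ]

    # Example: nine Ultra Task instances split three per node; node-2 has crashed.
    now = datetime.utcnow()
    heartbeats = {
        "node-1": now,
        "node-2": now - timedelta(hours=9),  # stale heartbeat from the crashed node
        "node-3": now,
    }
    instances = [
        {"id": i, "node": f"node-{(i % 3) + 1}", "state": "RUNNING"} for i in range(9)
    ]
    print(find_zombie_instances(instances, heartbeats, now))  # the three instances on node-2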