Using Long-running Triggered Tasks with an Enterprise Scheduler

Enterprise job schedulers that execute long-running Triggered Task Pipelines can report timeout or lost client connection errors. To help solve this issue, we provide a script that can run on any server with network access to the Triggered Task URL.

Scheduler errors occur when network configuration limits the idle time for REST calls and a Triggered Task does not finish within that limit. For example, many network configurations limit the idle time for REST calls to one hour. With long-running Pipelines, such as for overnight batch processing, the synchronous call invoking the Triggered Task might not return within an hour. This can cause the scheduler to report errors even if the Pipeline is still running and eventually completes successfully.

Instead of invoking the Triggered Task directly, you can set up a scheduler to execute the Pipeline from the script. The script starts Pipeline execution and polls continuously for completion status. This keeps the connection alive and prevents the scheduler from reporting an error.
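The keep-alive behavior amounts to a simple poll loop. The following is an illustration only, not the actual script: get_status is a stub standing in for the script's Public Monitoring API call, and it is rigged to report completion on the third poll.

```shell
#!/bin/sh
# Illustrative sketch of the script's poll loop (not the real script).
# get_status stands in for a Public Monitoring API call; this stub
# reports "Completed" on the third poll.
get_status() {
  if [ "$1" -ge 3 ]; then
    echo "Completed"
  else
    echo "Started"
  fi
}

INTERVAL=0    # the real script defaults to 30 seconds between polls
POLL_COUNT=0
STATE="Started"

# Poll until the Pipeline run reaches a terminal state.
while [ "$STATE" != "Completed" ] && [ "$STATE" != "Failed" ]; do
  sleep "$INTERVAL"
  POLL_COUNT=$((POLL_COUNT + 1))
  STATE=$(get_status "$POLL_COUNT")
done

echo "Final state: $STATE after $POLL_COUNT polls"
```

Because the script issues a fresh request on every poll rather than holding one synchronous call open, the connection never sits idle long enough to trip the network's REST timeout.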


Prerequisites

  • A user or service account with read and execute access for the project that contains the Triggered Task.

  • The URL for the Triggered Task, including any Pipeline parameters and their values.

  • Read and execute permissions on the node(s) that will run the Triggered Task.

  • For use on Windows servers, a Cygwin shell with curl.


Install the Script

Install the script on any server where the scheduler has access to it, and configure it with your SnapLogic credentials:

  1. Download the compressed script file.

  2. Extract the compressed file and save the script to the appropriate location.
    For example, use /usr/local/bin on Linux machines and %systemroot%\System32\Repl\Imports\Scripts on Windows.

  3. From the directory containing the script, run it with the --config option in a terminal window.

  4. Enter the SnapLogic user name and password.
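For example, the configuration step might look like the following. The script name snaplogic_exec.sh is an assumption here (it matches the snaplogic_exec.credentials file the option creates); substitute the actual file name you downloaded.

```shell
# Hypothetical example -- the script name snaplogic_exec.sh is an assumption.
cd /usr/local/bin
./snaplogic_exec.sh --config
# The script prompts for the SnapLogic user name and password and
# stores them in snaplogic_exec.credentials.
```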

Next, configure the Pipeline to work with the script.

Configure the Pipeline

To configure the Pipeline to work with the script, add a Mapper Snap that returns the Runtime ID (RUUID) value when the Triggered Task runs. The script will monitor execution using the RUUID.

  1. In SnapLogic Designer, open the Pipeline that you want the script to execute.

  2. Add a Mapper Snap to the canvas.

  3. In the Settings tab, enter Map return in the Label field.

  4. In the Mapping table, add the Expression pipe.ruuid with the Target path ruuid:

  5. Select the Views tab.

  6. Click - (minus) to remove the input view.

  7. Leave the default output view, output0, as is, with Type document:

  8. Save and close the Mapper Snap.

  9. The Pipeline should look similar to the following (before validation):
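With this configuration, the Triggered Task's response contains the Mapper's single output document, which holds only the runtime ID. The shape is similar to the following; the value and the exact response envelope shown here are illustrative:

```json
[
  {
    "ruuid": "example-runtime-id-0000"
  }
]
```

The script reads this ruuid value and uses it to poll the Public Monitoring API for the run's completion status.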

Next, set up the scheduler to run the script.

Run the Script from the Scheduler

After installing the script and configuring the Pipeline, invoke the script from the scheduler. The invocation must include your Org name and the Triggered Task URL. For example: --org <my_org> "<task_path>?param1=<param1-value>&param2=<param2-value>"
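A scheduler job entry might look like the following. The script name, Org, and task URL below are placeholder assumptions; substitute your own script name, Org, project path, and Pipeline parameters.

```shell
# Hypothetical scheduler command -- script name and task URL are assumptions.
# Quote the URL so the shell does not interpret ? and &.
/usr/local/bin/snaplogic_exec.sh --org MyOrg \
  "https://elastic.snaplogic.com/api/1/rest/slsched/feed/MyOrg/projects/MyProject/MyTask?param1=value1&param2=value2"
```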

The following list describes the command options.

--config
    Creates or updates a snaplogic_exec.credentials file to store the credentials used by the Public Monitoring API. Run the script with this option before invoking it to run the Triggered Task.

--org
    Specifies the SnapLogic Org containing the Pipeline to execute. (Required)

Interval option
    Specifies the number of seconds between Public Monitoring API calls. (Default: 30)

Retry option
    Specifies the maximum number of times to retry the Public Monitoring API call before terminating the script.

--verbose
    Enables more detailed messages during script execution.

--debug
    Enables debug messages during script execution.

-help or -usage
    Outputs parameters and command options to the terminal window.

Using the script should enable you to execute long-running Triggered Tasks from an enterprise scheduler without causing timeout or lost connection errors.


If you encounter issues with the script, run it with the --debug and --verbose options. These options display messages as commands execute and report all responses.