Tasks are a way to execute your pipelines using a fixed schedule or by accessing a URL. The URL method of execution can also feed data into a pipeline and receive the output of the pipeline. From the Tasks page, you can view what tasks are already available and create new ones. Since the pipelines executed by tasks will be run in an unattended mode, you can receive notifications of activity by specifying a comma-separated list of email addresses.

Info

If you are on the Tasks page within a project, the Run Policy column displays the type of each task. This column is not available on the project-level page.


Hover over the task name to open the context menu. From there, you can view the task's activity log to see when it was created or modified.

Running a Pipeline from a URL

To create a URL that can be used to execute a pipeline:

  1. Select the Triggered (External URL) option when creating or updating a task.
    The option Do not start a new execution if one is already active lets you prevent a pipeline from running again while a previous execution of that pipeline is still in progress. If you call the triggered task while it is already running, you will receive a 409 error.
  2. After saving the task, select Details from the context menu for that task to see the Details page.

  3. Click Execute for the URL to run the pipeline (you may be prompted to log in).

    Note

    If the pipeline is set to run in an on-premises Snaplex, there will be two URLs listed: one that is reachable from the cloud and one that is reachable from the Snaplex machine. The On-Premise URL should be used when streaming data to/from a pipeline running in an on-premises Snaplex.

    For information on API Throttles of Triggered tasks, see Settings.

Using Bearer Tokens with Triggered Tasks

When you are creating a triggered task, the Bearer token field is pre-populated with a token. This token can be used with either a cloud or on-premises trigger, typically with a service account user.

Clearing this field disables token authentication; authentication must then be done with a user name and password. Note that with Ultra tasks, an empty Bearer token means no authentication.
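A client passes the token in the standard Authorization header. Here is a minimal sketch using Python's standard library; the token value and task URL are placeholders, not real endpoints:

```python
from urllib import request

token = "abc123"  # placeholder; copy the real token from the task's settings
req = request.Request(
    "https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/jobs/my-task"
)
# Bearer authentication is just this one header on the trigger request.
req.add_header("Authorization", "Bearer " + token)
# request.urlopen(req) would then trigger the task (not executed here).
```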

Passing Pipeline Arguments

If you would like to pass arguments to the pipeline, you can append a query string to the URL with the key/value pairs of the parameters defined in the pipeline properties. For example, if you created a task named "pipeline-with-args" that executed a pipeline that had the parameters "Age" and "Name", the end of the URL would look something like:

Code Block
pipeline-with-args?Age=32&Name=Bob


Note

Be careful to URL-encode the arguments that you are using in the query string.
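As a sketch of what proper encoding looks like, using Python's standard library (the parameter values here are illustrative):

```python
from urllib.parse import urlencode

# Hypothetical pipeline parameters; values containing spaces or special
# characters must be encoded before being appended to the task URL.
params = {"Age": "32", "Name": "Bob Smith"}
query = urlencode(params)  # encodes the space in "Bob Smith"

# The encoded query string is then appended to the task URL:
#   .../pipeline-with-args?Age=32&Name=Bob+Smith
```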

Request Pipeline Arguments

Information from the HTTP request to the trigger URL will automatically be passed as pipeline arguments when the pipeline is executed. The following arguments are available and have names similar to what you would find in a CGI script:

  • PATH_INFO - The path elements after the Task part of the URL.
  • REMOTE_USER - The name of the user that invoked this request.
  • REMOTE_ADDR - The IP address of the host that invoked this request.
  • REQUEST_METHOD - The method used to invoke this request.


The following HTTP headers from the request may also be available:

  • CONTENT_TYPE
  • HTTP_ACCEPT
  • HTTP_ACCEPT_ENCODING
  • HTTP_DATE
  • HTTP_USER_AGENT
  • HTTP_REFERER
Note

When referencing these arguments, they must be prefixed with an underscore like any other pipeline parameter.  When designing a pipeline, you might find it easier to explicitly add these pipeline parameters with a default value.

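The header-to-argument names above follow the usual CGI naming convention. A minimal sketch of that convention (the function is illustrative, not part of SnapLogic):

```python
def cgi_name(header):
    """Map an HTTP header name to its CGI-style argument name:
    upper-case, hyphens become underscores, HTTP_ prefix added
    (Content-Type is the conventional exception)."""
    name = header.upper().replace("-", "_")
    return name if name == "CONTENT_TYPE" else "HTTP_" + name

cgi_name("User-Agent")    # HTTP_USER_AGENT
cgi_name("Content-Type")  # CONTENT_TYPE
```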

Semantic URL

Pipeline parameters can be sent to triggered tasks in the semantic URL form. These values will be rendered into a variable named PATH_INFO, which must be defined as a parameter in the pipeline itself, as in:

param1/value1

param2/value2

PATH_INFO/path

 

The triggered URL can then be appended with the new values as in:

Code Block
triggered-url/newvariable1/newvariable2?param1=value1&param2=value2


where the pipeline parameters are:

param1=value1
param2=value2
PATH_INFO=newvariable1/newvariable2
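As a sketch of how the pieces of such a URL map onto parameters (the host and task name are illustrative; the parsing below just mirrors how the path suffix and query string are separated):

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical triggered-task URL with both a semantic path suffix
# and a query string appended.
base = "https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/jobs/my-task"
url = base + "/newvariable1/newvariable2?param1=value1&param2=value2"

parts = urlsplit(url)
# Everything after the task's own path becomes PATH_INFO.
path_info = parts.path[len(urlsplit(base).path):].lstrip("/")
# Query-string pairs arrive as ordinary pipeline parameters.
params = {k: v[0] for k, v in parse_qs(parts.query).items()}
```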

 

Viewing the Output of a Pipeline

When executing a pipeline using a task URL, the output of the pipeline can be included as the response to the HTTP request.  Creating a pipeline that can be used in this manner is done by leaving one output view unlinked in the pipeline.  For example, a pipeline that consists solely of a File Reader Snap would return the contents of the file when the task URL was requested.  This feature works with both document-oriented and binary views.  If it is a document view, then the response will consist of a JSON-encoded array of each document read from the view.  If the view is a binary stream, then the first binary stream will be returned.  A formatter, like XML or CSV, can be used to transform the output of a document view into a binary view if the default JSON-encoding does not work for you.

Note

Only one unlinked output view is supported; if more than one unlinked view is found, an error will be returned when you request the task URL.


Note

The On-premises URL should be used when receiving data from a pipeline running in an on-premises Snaplex. 

POSTing Input to a Pipeline

If you would like to send a document to a pipeline, you can leave one input view unlinked and then do an HTTP POST to the task URL.  The content of the POST request will then be turned into a document and fed into the pipeline.  If the input view accepts documents, then the content of the POST request should be a JSON-encoded object or an array-of-objects.  If it is a single object, that value will be fed into the pipeline.  If it is an array of objects, each object will be separately fed into the pipeline as documents.  If the input view accepts binary streams, a single stream containing the raw contents of the POST request will be fed into the view.
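The object versus array-of-objects behavior can be sketched as follows (an illustrative helper, not SnapLogic code):

```python
import json

def to_documents(post_body):
    """Mimic how a JSON POST body becomes pipeline documents:
    a single object yields one document, an array yields one
    document per element."""
    data = json.loads(post_body)
    return data if isinstance(data, list) else [data]

to_documents('{ "name" : "Bob" }')        # one document
to_documents('[{ "a": 1 }, { "a": 2 }]')  # two documents
```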

Note

Only one unlinked input view is supported; if more than one unlinked view is found, an error will be returned when you request the task URL.

 


Note

The On-premises URL should be used when sending data to a pipeline running in an on-premises Snaplex.

 

Authentication

The external URL created for a task requires you to be authenticated with the SnapLogic servers.  When using a web browser to access the URL, you might not have to provide credentials since you might already be logged in to the designer.  Using other methods to request the task URL will require you to provide credentials using HTTP Basic Authentication. 
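For clients that do not handle Basic Authentication for you, the Authorization header is simply the base64 encoding of user:password. A minimal sketch with Python's standard library (the credentials and URL are placeholders):

```python
import base64
from urllib import request

user, password = "user@example.com", "mypassword"  # placeholders
token = base64.b64encode(f"{user}:{password}".encode()).decode()

req = request.Request(
    "https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/jobs/test-reader"
)
req.add_header("Authorization", "Basic " + token)
# request.urlopen(req) would then execute the task (not run here).
```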

Command-line Examples

This section covers some examples of using the cURL command-line tool to access task URLs.  Note that these are only examples of the syntax and are not functioning URLs.
 

To execute a pipeline that takes no input and has an unlinked output view that will write documents: 

Code Block
$ curl -u 'user@example.com:mypassword' https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/jobs/test-reader
{ "msg" : "Hello, World!" }
 

To execute a pipeline that takes a parameter and has an unlinked output view that writes a document:
 

Code Block
$ curl -u 'user@example.com:mypassword' https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/jobs/test-hello?Name=John
{ "msg" : "Hello, John!" }

 To execute a pipeline that accepts a JSON document and transforms it:

 
Code Block
$ curl -u 'user@example.com:mypassword' --data-binary '{ "name" : "Bob" }' --header "Content-Type:application/json" https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/jobs/test-transformer
{ "msg" : "Hello, Bob!" }


Debugging Mode

To make it easier to debug triggered executions of pipelines, you can check the Debug option in the task configuration dialog to record the next five invocations of the task. When enabled, the headers and content from the HTTP request and response will be saved in files on SLFS for each invocation. For example, when using a triggered task as a callback URL for another cloud service, enabling debugging will allow you to view the response that is being sent to the other cloud service. You can view the trace files in the Task Details page. The Open Trace Directory link will open the directory containing all of the trace files for this task.  You can also search for the files for a particular invocation by clicking the View Trace Files link in the details for that invocation in the execution history.

 

The Task Details page will display how many debug invocations are left next to the run count. After the five debug invocations have been done, the task will automatically revert to normal behavior. If you need to continue debugging, you can go back to the task configuration dialog and re-enable debug-mode. 

Note

Task debugging is not yet available for triggered tasks invoked through the on-premises URL.

Running an On-Premises Pipeline from a URL

If you are running a pipeline in an on-premises Snaplex and streaming data to or from the pipeline, you must use the "On-premises URL" to execute the pipeline; the "Cloud URL" will not work. The "On-premises URL" refers to the Groundplex machine from which you can perform the request. The request must be made on the Groundplex machine itself, since the service only listens for connections from the local machine. If you would like to allow requests from other hosts, you can configure the machine's firewall rules or install a proxy to forward requests to the Groundplex service.
 

Additional Setup for Multiple Groundplexes

If you have multiple Groundplexes installed, the triggered pipeline can be run on any of the Groundplexes. Because of this load-balancing functionality, the Groundplex used to trigger a task may not be the one that actually runs the pipeline. Therefore, you will need to configure the Groundplex machines to accept requests from the other Groundplexes so that data can be streamed back to the client.

Synchronous and Asynchronous Execution of Triggered Tasks

When a task is triggered in SnapLogic using the URL, the request is synchronous by default and responds with a "200" status when the task completes. However, if the pipeline has a single unterminated Snap providing output, then the output from that Snap is provided as the response to the request. This lets you make a pipeline "asynchronous", in that a response is written back to the caller before the pipeline finishes.

 

For example, consider a pipeline in which every output view is connected. When you trigger the task, for example from a browser, you will get the following response on completion:
 

{ http_status_code: 200 }

 

 

It will wait until the pipeline has completed before displaying this message, which may take some time. 


If you want to make the pipeline "asynchronous" (or at least a little more so), you can add a Snap with an unterminated output view, such as a JSON Generator that produces a short status message. Now when you trigger the pipeline, the output from that Snap will be returned almost as soon as the pipeline execution starts:

{ msg: "Started Ok", http_status_code: 200 }
 

The net effect is that the pipeline execution becomes "pseudo-asynchronous".

Enabling and Disabling Pipeline Tasks

If you have tasks that you want to temporarily disable, then re-enable at a later point, select the checkbox next to the task name, then click Disable. To re-enable the task, select the checkbox and click Enable.
This feature is only available at the Tasks page under the project. It is not available at the project level.

Moving Tasks

Note that if you move tasks and their associated pipelines to a different project,  you will need to re-select the appropriate pipeline within the task. Otherwise, the task will fail as it will still be pointing to the version of the pipeline in the previous location. 

