


    Overview

    Use this Snap to run multiple child Pipelines through one parent Pipeline. 

    Note
    To execute a SnapLogic Pipeline that is exposed as a REST service, use the REST Get Snap instead.


    Image Added

    Key Features

    The Pipeline Execute Snap enables you to do the following:

    • Structure complex Pipelines into smaller segments through child Pipelines.
    • Initiate parallel data processing using the Pooling option.
    • Orchestrate data processing across nodes, within the Snaplex or across Snaplexes.
    • Distribute global values through Pipeline parameters across a set of child Pipeline Snaps.

    Supported Modes for Pipelines

    • Standard mode (default): In standard mode, a new child Pipeline execution is started for each input document. If you set the pool size to n (the default is 1), then up to n concurrent child Pipelines can run; each child Pipeline processes one document from the parent and then completes.
    • Reuse mode: In reuse mode, child Pipeline executions are started once and each instance can process multiple input documents from the parent. If you set the pool size to n (default 1), then n child Pipelines are started and they process the input documents in a streaming manner. The child Pipeline must have an unlinked input view to be used in reuse mode.
      Reuse mode is more performant, but it requires the child Pipeline to be a streaming Pipeline. See the example after this list for a comparison of the two modes.

      Info
      titleReplaces Deprecated Snaps

      This Snap replaces the ForEach and Task Execute Snaps, as well as the Nested Pipeline mechanism.


    • Resumable Child Pipeline: Resumable Pipelines do not support the Pipeline Execute Snap; however, a standard-mode Pipeline can use the Pipeline Execute Snap to call a Resumable-mode Pipeline. If the child Pipeline is a Resumable Pipeline, then the batch size cannot be greater than one.

    • Works in Ultra Task Pipelines with the following exception: reusing a runtime on another Snaplex is not supported. See Snap Support for Ultra Pipelines.
      Without reuse enabled, the one-in, one-out requirement of Ultra Pipelines means that batching is not supported. A runtime check fails the parent Pipeline if the batch size is set to greater than 1, similar to the behavior of DB Insert and other Snaps in Ultra mode.

    • ELT Mode: A Pipeline can use the Pipeline Execute Snap to call another Pipeline with ELT Snaps only in the standard mode.
    • Pooling Enabled: The pool size and batch size can both be set to greater than one, in which case the input documents are distributed across the child Pipelines in a round-robin fashion. This spreads the processing across the children in parallel, which helps when the child Pipeline makes slow external calls. The limitation of this option is that document order is not maintained.
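
      For example, with 100 input documents and Pool Size set to 10: in standard mode, the Snap starts up to 10 concurrent child executions at a time and 100 child executions in total, each processing one document; in reuse mode, the Snap starts 10 long-lived child executions that process the 100 documents between them in a streaming manner.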

    Prerequisites

    None.

    Limitations

    • If there are not enough Snaplex nodes to execute the Pipeline on the Snaplex, then the Snap waits until Snaplex resources are available. When this situation occurs, a message to that effect appears in the execution statistics dialog.

    • Because a large number of Pipeline runtimes can be generated by this Snap, only the last 100 completed child Pipeline runs are saved for inspection in the Dashboard.

    • The Pipeline Execute Snap cannot exceed a nesting depth of 64 child Pipelines; deeper executions begin to fail.
    • Child Pipelines do not display data preview details while running. You can view the data preview for a child Pipeline after the Pipeline Execute Snap completes execution in the parent Pipeline.
    • Ultra Pipelines do not support batching. 

    • Unlike the Group By N Snap, when you configure the Batch size field, the Pipeline Execute Snap processes documents one by one and transfers each to the child Pipeline as soon as the parent Pipeline receives it. When the batch is complete or the input stream ends, the child Pipeline's input view is closed.


    Snap Views

    Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description
    Input 

    Binary or Document


    • Min:0
    • Max:1
    • Mapper Snap
    • Copy Snap
    The document or binary data to send to the child Pipeline.
    Output

    Binary or Document

    • Min:0
    • Max:1
    • Mapper Snap
    • Router Snap
    • If the child Pipeline has an unlinked output, then documents or binary data from that view are passed out of this view.
    • If the child Pipeline does not have an unlinked view, then output document or binary data is generated for successful runtimes with the run_id field.
    • Unsuccessful Pipeline runs write a document to the error view. If Reuse executions to process documents is unselected, then the original input document or binary data is added to the output document.
    Note
    titleFor Pool Size Usage

    If the value in the Pool Size field is greater than one, then the order of records in the output is not guaranteed to be the same as that of the input.


    Error

    Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the Pipeline by choosing one of the following options from the When errors occur list under the Views tab:

    • Stop Pipeline Execution: Stops the current Pipeline execution when the Snap encounters an error.

    • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.

    • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

    Learn more about Error handling in Pipelines.

    Snap Settings

    Info
    • Asterisk ( * ): Indicates a mandatory field.

    • Suggestion icon: Indicates a list that is dynamically populated based on the configuration.

    • Expression icon: Indicates whether the value is an expression (if enabled) or a static value (if disabled). Learn more about Using Expressions in SnapLogic.

    • Add icon: Indicates that you can add fields in the field set.

    • Remove icon: Indicates that you can remove fields from the field set.


    Field Name | Field Type | Description

    Label*


    Default Value: Pipeline Execute
    Example: ExecuteCustomerUpdatePipeline

    String

    The name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your Pipeline.


    Pipeline*


    Default Value: N/A
    Example: NetSuite-Create-Credit-Memo

    String/Expression

    Specify the absolute or relative (expression-based) path of the child Pipeline to run. If you only specify the Pipeline name, the Snap searches for the Pipeline in the following folders in this order:

    • Current project
    • Shared project-space
    • Shared folder of the Org

    You can specify the absolute path to the target project using the Org/project_space/project notation.

    You can also dynamically choose the Pipeline to run by enabling expressions for this field and entering an expression. For example, to run all of the Pipelines in a project, you can connect the SnapLogic List Snap to this Snap to retrieve the list of Pipelines in the project and run each one, as sketched below.
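
    A minimal sketch of such an expression, assuming the upstream Snap supplies the child Pipeline name in a $name field (the Org, project space, and project names are hypothetical):

    "MyOrg/MyProjectSpace/MyProject/" + $name

    If the expression yields only a Pipeline name, the Snap searches the current project, the shared project space, and the Org's shared folder, in that order, as described above.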

    Execute On


    Default Value: SNAPLEX_WITH_PATH
    Example: groundplex4-West

    Dropdown list

    Select one of the following Snaplex options to specify the target Snaplex for the child Pipeline:

    • SNAPLEX_WITH_PATH. Runs the child Pipeline on a user-specified Snaplex. When you choose this option, the Snaplex Path field appears.
    • LOCAL_NODE. Runs the child Pipeline on the same node as the parent Pipeline.
    • LOCAL_SNAPLEX. Runs the child Pipeline on one of the available nodes in the same Snaplex as the parent Pipeline.




    Info
    titleBest Practices

    When choosing where to run the child Pipeline, consider the LOCAL_NODE option first. With this selection, the child Pipeline runs on the same Snaplex node as the parent Pipeline. Because no network traffic is required to start the child Pipeline on another node, execution is faster and more reliable.

    With the LOCAL_NODE and LOCAL_SNAPLEX options, you do not need to set an Org-specific property, which makes the Pipeline easier to import into another Org.

    However, if you do need to run the child Pipeline on a different Snaplex than the parent Pipeline, then select SNAPLEX_WITH_PATH and enter the target Snaplex name.

    In this case, you can take advantage of the following strategies:

    • Use a relative path for the Snaplex name and try to use the same Snaplex names in your development, QA, stage, and production Orgs.
      For example, if the Snaplex property is set to cloud, the Pipeline runs on the Snaplex named cloud in the shared directory of the current Org.
    • Use a Pipeline parameter or an expression library to specify the path to the Snaplex.
    • For complicated setups, you might want to create an expression library file that contains the details of the environment and import it into the Pipeline, as sketched below. In this model, each Org has its own version of the library with the appropriate settings for the environment, and the Pipeline can then be moved from Org to Org without needing to be updated.
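
    A minimal sketch of this approach, assuming a hypothetical expression library file named env.expr that is imported in the Pipeline properties under the label env (each Org keeps its own copy of the file):

    {
      plexPath: "/MyOrg/shared/cloud"
    }

    With the library imported, the Snaplex Path field can be set to the expression lib.env.plexPath, so the same Pipeline resolves a different Snaplex in each Org.
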
    Note
    titleUltra Pipelines

    We do not recommend using the pipe.plexPath setting for Ultra Pipelines, since requests should be processed quickly and sending an execution to another node can be resource-intensive and thus costly.




    Snaplex Path


    Default Value: N/A
    Example: DevPlex-1

    String/Expression

    Appears when you select the SNAPLEX_WITH_PATH option for Execute On.

    Enter the name of the Snaplex on which you want the child Pipeline to run, or click the Suggestion icon to select from the list of Snaplex instances available in your Org.

    Note

    If you do not provide a Snaplex Path, then by default, the child Pipeline runs on the same node as the parent Pipeline. 


    Execution Label


    Default Value: N/A
    Example: NetSuite-Create-Credit-Memo

    String/Expression

    Specify the label to display in the Pipeline view of the Dashboard. You can use this field to differentiate one Pipeline execution from another.



    Pipeline Parameters






    Use this field set to define the Pipeline Parameters for the Pipeline selected in the Pipeline field. When you select Reuse executions to process documents, you cannot change parameter values from one Pipeline invocation to the next.

    Parameter Name

    Default Value: N/A
    Example: Postal_Code

    Suggestion

    Enter the name of the parameter. You can select the Pipeline Parameters defined for the Pipeline selected in the Pipeline field.



    Parameter Value

    Default Value: N/A
    Example: 94402

    String/Expression

    Enter the value for the Pipeline Parameter, which can be an expression based on incoming documents or a constant. 

    If you configure the value as an expression based on the input, then each incoming document or binary data is evaluated against that expression when invoking the Pipeline. The result of the expression is JSON-encoded if it is not a string. The child Pipeline then needs to use the JSON.parse() expression to decode the parameter value.

    When Reuse executions to process documents is enabled, the parameter values cannot change from one invocation to the next.
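
    A minimal sketch, assuming a hypothetical Pipeline parameter named ADDRESS and input documents that contain an $address object:

    Parameter name:  ADDRESS
    Parameter value: $address   (with the expression toggle enabled)

    Because $address is not a string, the evaluated value is JSON-encoded before it is passed to the child Pipeline. Inside the child Pipeline, for example in a Mapper Snap, the value can be decoded with:

    JSON.parse(_ADDRESS)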

    Reuse executions to process documents


    Default Value: Deselected

    Checkbox

    Select this option to start a child Pipeline and pass multiple inputs to the Pipeline. Reusable executions continue to live until all of the input documents to this Snap have been fully processed.

    Info
    • When you select this option and the Pipeline parameters use expressions, the expressions are evaluated using the first document. The parameter value in the child Pipeline does not change across documents.
    • If you do not select this option, then a new Pipeline execution is created for each input document.  
    • The Reuse mode does not support the Batch Size option.


    Batch size*


    Default Value: 1
    Example: 2

    Integer

    Enter the number of documents in each batch. If Batch size is set to N, then N input documents are sent to each child Pipeline execution that is started. After N documents, the child Pipeline's input view is closed until the child Pipeline completes its execution. The output of the child Pipeline (one or more documents) passes to the Pipeline Execute Snap's output view. A new child Pipeline execution is started after the previous one completes.

    Note
    • The Batch Size field is not displayed if Reuse executions to process documents is selected.
    • This feature is incompatible with reusable executions.



    Info
    titleBackward Compatibility

    For existing Pipeline Execute Snap instances, null and zero values for this property are treated as a Batch size of 1. The behavior of existing Pipelines does not change; the child Pipeline receives one document per execution.
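
    For example (assuming Reuse executions to process documents is deselected and Pool Size is 1): with Batch size set to 2 and 10 input documents, the Snap starts 5 child Pipeline executions one after another, each receiving 2 documents before its input view is closed.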



    Pool Size


    Default Value: 1
    Example: 4

    Integer/Expression

    Specify an execution pool size to process multiple input documents or binary data concurrently. When the pool size is greater than one, the Snap starts Pipeline executions as needed, up to the specified pool size.

    When you select Reuse executions to process documents, the Snap starts a new execution only if all executions are busy working on documents or binary data and the total number of executions is below the pool size.
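
    For example, with Pool Size set to 4 and Reuse executions to process documents deselected, up to 4 child Pipeline executions run concurrently; if Batch size is also greater than one, the input documents are distributed across those executions in a round-robin fashion, and the output order is not guaranteed to match the input order.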

    Timeout


    Default Value: N/A
    Example: 10

    Integer/Expression

    Enter the maximum number of seconds for which the Snap must wait for the child Pipeline to complete its execution. If the child Pipeline does not complete before the timeout, the execution stops and is marked as failed.

    Retry limit


    Default Value: N/A
    Example: 3

    Integer/Expression

    Enter the maximum number of retry attempts that the Snap must make in case of a network failure. If the child Pipeline does not execute successfully, an error document is written to the error view.

    Note

    This feature is incompatible with reusable executions.


    Retry interval


    Default Value: N/A
    Example: 10

    String/Expression

    Enter the minimum number of seconds for which the Snap must wait between two successive retry requests. A retry happens only when the previous attempt resulted in an error.

    Snap Execution


    Default Value: Validate & Execute
    Example: Execute only

    Dropdown list

    Select how the Snap must execute. The available options are Validate & Execute, Execute only, and Disabled.




    Guidelines for Child Pipelines

    • The child Pipeline must not have unlinked binary input or output views.
    • When you enter an expression in the Pipeline field, the Snap must contact the SnapLogic cloud servers to load the Pipeline runtime information. Also, if Reuse executions to process documents is enabled, the result of the expression cannot change between documents.

    • When reusing Pipeline executions, there must be one unlinked input view and zero or one unlinked output view.
    • If you do not enable Reuse executions to process documents, then use at most one unlinked input view and one unlinked output view.
    • If the child Pipeline has an unlinked input view, make sure the Pipeline Execute Snap has an input view because the input document or binary data is fed into the child Pipeline.
    • If you rename the child Pipeline, then you must manually update the reference to it in the Pipeline field; otherwise, the connection between child and parent Pipeline breaks. 
    • The child Pipeline is executed in preview mode when the Pipeline containing the Pipeline Execute Snap is saved. Consequently, any Snaps marked not to execute in preview mode are not executed and the child Pipeline only processes 50 documents. 
    • You cannot have a Pipeline call itself: recursion is not supported.

    Schema Propagation Guidelines

    • If the child Pipeline has a Snap that supports schema suggest, such as the JSON Formatter Snap or MySQL Insert Snap, then the schema is back propagated to the parent Pipeline. This configuration is useful in mapping input values in the parent Pipeline to the corresponding fields in the child Pipeline using a Mapper Snap in the parent Pipeline. 
    • For the schema suggest to work, the parent Pipeline must be validated first; only then is the schema from the child Pipeline visible in the parent Pipeline.
    • The Pipeline Execute Snap is capable of propagating schema in both directions: upstream as well as downstream. See the Propagate Schema Backward and Forward example below.

      Info
      • When selecting a node to run a child Pipeline, memory is prioritized over slots. That is, irrespective of the node on which the parent Pipeline runs, the child Pipeline runs on a node that is within 10% memory utilization.

      • Difference between how a local node and a Snaplex run the child Pipeline:

        • SNAPLEX_WITH_PATH: Runs the child Pipeline on a user-specified Snaplex. When you select this option, the Snaplex Path field appears.

        • LOCAL_NODE: Runs the child Pipeline on the same node as the parent Pipeline.

        • LOCAL_SNAPLEX: Runs the child Pipeline on one of the available nodes in the same Snaplex as the parent Pipeline.


    Ultra Mode Compatibility

    • If you selected Reuse executions to process documents for this Snap in an Ultra Pipeline, then the Snaps in the child Pipeline must also be Ultra-compatible.
    • If you need to use Snaps that are not Ultra-compatible in an Ultra Pipeline, you can create a child Pipeline with those Snaps and use a Pipeline Execute Snap with Reuse executions to process documents disabled to invoke that Pipeline. Because the child Pipeline is executed for every input document, the Ultra Pipeline restrictions do not apply to it.
    • For example, if you want to run an SQL Select operation on a table that would return more than one document, you can put a Select Snap followed by a Group By N Snap with the group size set to zero in a child Pipeline, as sketched below. In this configuration, the child Pipeline performs the select operation during execution, and then the Group By Snap gathers all of the outputs into a single document or as binary data. That single output document or binary data can then be used as the output of the Ultra Pipeline.
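
    A minimal sketch of such a child Pipeline, assuming a MySQL Select Snap (any Select Snap works the same way):

    (unlinked input view) → MySQL Select → Group By N (Group size = 0) → (unlinked output view, returned to the parent)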



    Creating a Child Pipeline by Accessing Pipelines in the Pipeline Catalog in Designer

    You can browse the Pipeline Catalog for the target child Pipeline, and then drag and drop it onto the Canvas. SnapLogic Designer automatically adds the child Pipeline using a Pipeline Execute Snap.
    Image Added

    Likewise, you can preview a child Pipeline by hovering over a Pipeline Execute Snap while the parent Pipeline is open on the Designer Canvas.
    Image Added


    Returning Child Pipeline Output to the Parent Pipeline

    A common use case for the Pipeline Execute Snap is to run a child Pipeline whose output is immediately returned to the parent Pipeline for further processing. You can achieve this return with the following Pipeline design for the child Pipeline.

    Image Added

    In this example, the document in the child Pipeline is sent to the parent Pipeline through output1 of the Copy Snap. Any unconnected output view is returned to the parent Pipeline, so you can use any Snap with an unconnected output view in this way, as sketched below.
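
    A minimal sketch of this child Pipeline design (the File Writer branch is hypothetical; any connected downstream Snaps can follow output0):

    (unlinked input view) → Copy Snap → output0 → File Writer (hypothetical downstream Snap)
                                      → output1 (unconnected) → returned to the parent Pipeline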


    Execution States

    When a Pipeline Execute Snap activates its child Pipeline, you can view the status as it executes on the SnapLogic Designer canvas.

    A child Pipeline also reports its execution state to the parent Pipeline. In the Studio Execution overview and the Dashboard, you can hover over the status shown in the Status column for a Pipeline that contains a Pipeline Execute Snap, and the following messages are displayed:

    • If the parent Pipeline has the status Failed shown in the Status column, then the following message is displayed: One of the child Pipelines Failed.

    • If the parent Pipeline has the status Completed with Errors shown in the Status column, the following message is displayed: One of the child Pipelines completed with Errors.

    These execution state messages apply even when the child Pipeline does not appear on the Dashboard because of child Pipeline execution limits.

    The Studio Execution overview does not include the Completed with Warnings status as a searchable status.


    Examples

    Run a Child Pipeline Multiple Times

    The PE_Multiple_Executions project demonstrates how you can configure the Pipeline Execute Snap to execute a child Pipeline multiple times. The project contains the following Pipelines:

    • PE_Multiple_Executions_Child: A simple child Pipeline that writes out a document with a static string and the number of input documents received by the Snap.
    • PE_Multiple_Executions_NoReuse_Parent: A parent Pipeline that executes the PE_Multiple_Executions_Child Pipeline five times. You can save the Pipeline to examine the output documents. Note that the output contains a copy of the original document and the $inCount field is always set to one because the Pipeline was executed separately five times.


    • PE_Multiple_Executions_Reuse_Parent: A parent Pipeline that executes the PE_Multiple_Executions_Child Pipeline once and feeds the child Pipeline execution five documents. You can save the Pipeline to examine the output documents. Note that the output does not contain a copy of the original document and the $inCount field goes up for each document since the same Snap instance is being used to process each document.
    • PE_Multiple_Executions_UltraSplitAggregate_Parent: A parent Pipeline that is an example of using Snaps that are not Ultra-compatible in an Ultra Pipeline. This Pipeline can be turned into an Ultra Pipeline by removing the JSON Generator Snap at the head of the Pipeline and creating an Ultra Task.
    • PE_Multiple_Executions_UltraSplitAggregate_Child: A child Pipeline that splits an array field in the input document and sums the values of the $num field in the resulting documents.

    Propagate a Schema Backward 

    The project, PE_Backward_Schema_Propagation_Contacts, demonstrates the schema suggest feature of the Pipeline Execute Snap. It contains the following files:

    • PE_Backward_Schema_Propagation_Contacts_Parent
    • PE_Backward_Schema_Propagation_Contacts_Child
    • contact.schema (Schema file)
    • test.json (Output file)


    The parent Pipeline is shown below:

    Image Added

    The child Pipeline is as shown below:

    Image Added

    The Pipeline Execute Snap is configured as:

    Image Added

    The following schema is provided in the JSON Formatter Snap. It has three properties: $firsName, $lastName, and $age. This schema is back-propagated to the parent Pipeline.

    Image Added
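
    A minimal sketch of what a schema with these three properties could look like (hypothetical; the actual contact.schema file is included in the project download):

    {
      "type": "object",
      "properties": {
        "firsName": { "type": "string" },
        "lastName": { "type": "string" },
        "age": { "type": "integer" }
      }
    }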

    The parent Pipeline must be validated in order for the child Pipeline's schema to be back-propagated to the parent Pipeline. Below is the Mapper Snap in the parent Pipeline: 

    Image Added

    Notice that the Target Schema section shows the three properties of the schema in the child Pipeline:

    Image Added
    Image Added


    Upon execution, the data passed into the Mapper Snap is written to the test.json file by the child Pipeline. The exported project is available in the Downloads section below.

    Propagate Schema Backward and Forward

    The project, PE_Backward_Forward_Schema_Propagation, demonstrates the Pipeline Execute Snap's capability of propagating schema in both directions – upstream as well as downstream. It contains the following Pipelines:
    • PE_Backward_Forward_Schema_Propagation_Parent
    • PE_Backward_Forward_Schema_Propagation_Child


    The parent Pipeline is as shown below:
    Image Added
    The Pipeline Execute Snap is configured to call the Pipeline schema-child. This child Pipeline consists of a Mapper Snap that is configured as shown below:
    Image Added
    The Mapper Snaps upstream and downstream of the Pipeline Execute Snap: Mapper_InputSchemaPropagation, and  Mapper_TargetSchemaPropagation are configured as shown below:
    Image Added

     Image Added

    When the Pipeline is executed, data propagation takes place between the parent and child Pipeline:
    • The string expression $foo is propagated from the child Pipeline to the Pipeline Execute Snap.
    • The Pipeline Execute Snap propagates it to the upstream Mapper Snap (Mapper_InputSchemaPropagation), as visible in the Target Schema section. Here it is assigned the value 123.
    • This value is passed from the Mapper Snap to the Pipeline Execute Snap, which internally passes it to the child Pipeline. Here $foo is mapped to $bar. $baz is another string expression in the child Pipeline (assigned the value 2).
    • $bar and $baz are propagated to the Pipeline Execute Snap and propagated forward to the downstream Mapper Snap (Mapper_TargetSchemaPropagation). This can be seen in the Input Schema section of the Mapper Snap.


    Migrating from Legacy Nested Pipelines

    The Pipeline Execute Snap can replace some uses of the Nested Pipeline mechanism as well as the ForEach and Task Execute Snaps. Currently, this Snap only supports child Pipelines with unlinked document views (binary views are not supported). If these limitations are not a problem for your use case, read on to find out how to transition to this Snap and the advantages of doing so.

    Nested Pipeline

    Converting a Nested Pipeline requires the child Pipeline to be adapted so that it has no more than a single unlinked document input view and no more than a single unlinked document output view. If the child Pipeline can be made compatible, you can use this Snap by dropping it on the canvas and selecting the child Pipeline in the Pipeline property. You should also enable the Reuse executions to process documents property to preserve the existing execution semantics of Nested Pipelines. The advantages of using this Snap over Nested Pipelines are:

    • Multiple executions can be started to process documents in parallel.
    • The Pipeline to execute can be determined dynamically by an expression.
    • The original document or binary data is attached to any output documents or binary data if reuse is not enabled.

    ForEach

    Converting a ForEach Snap to the Pipeline Execute Snap is straightforward since the two have a similar set of properties. Select the Pipeline you would like to run and populate the parameter listing. The advantages of using this Snap over the ForEach Snap are:

    • Documents or binary data fed into the Pipeline Execute Snap can be sent to the child Pipeline execution through an unlinked input view in the child.
    • Documents or binary data sent out of an unlinked output view in the child Pipeline execution are written out of the Pipeline Execute Snap's output view.
    • The execution label can be changed.
    • The Pipeline to execute can be determined dynamically by an expression.
    • Executing a Pipeline does not require communication with the cloud servers. 

    Task Execute

    Converting a Task Execute Snap to the Pipeline Execute Snap is also straightforward since the properties are similar. To start, you only need to select the Pipeline you would like to use; you no longer have to create a Triggered Task. If the Batch Size property in the Task Execute Snap is set to one, do not enable the Reuse property; if the Batch Size was greater than one, enable Reuse. The Pipeline parameters should be the same between the Snaps. The advantages of using this Snap over the Task Execute Snap are:

    • A task does not need to be created.
    • Multiple executions can be started to process documents in parallel.
    • There is no limit on the number of documents that can be processed by a single execution (that is, no batch size).
    • The execution label can be changed.
    • The Pipeline to execute can be determined dynamically by an expression.
    • The original document will be attached to any output documents if reuse is not enabled.
    • Executing a Pipeline does not require communication with the cloud servers. 

    "Auto" Router

    Converting a Pipeline that uses the Router Snap in "auto" mode can be done by moving the duplicated portions of the Pipeline into a new Pipeline and then calling that Pipeline using a Pipeline Execute Snap. After refactoring the Pipeline, you can adjust the Pool Size of the Pipeline Execute Snap to control how many operations are done in parallel. The advantages of using this Snap over an "auto" Router are:

    • De-duplication of Snaps in the Pipeline.
    • Adjusting the level of parallelism is trivially done by changing the Pool Size value.

    Downloads

    Note
    titleImportant Steps to Successfully Reuse Pipelines
    1. Download the zip files, extract the files, and import the Pipelines into SnapLogic.
    2. Configure Snap accounts as applicable.
    3. Provide Pipeline parameters as applicable.

