Best Practices

This page describes common best practices for pipeline design and development, pipeline management, and administration.

General

 Know how to clear your browser cache.

If you experience odd behavior for no apparent reason, clear your browser cache before you log in to the SnapLogic Elastic Integration Platform. See the documentation for your browser for instructions.

 Platform Upgrades and Accounts

Some accounts, such as Google accounts, have a fixed lifetime for refresh tokens and must be refreshed at set intervals (for example, every hour). If that refresh is due while the platform is down for an update, the refresh does not occur. To prevent these accounts from failing after a new platform deployment, refresh your accounts before the designated downtime.
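
If you also manage OAuth credentials outside of SnapLogic, you can refresh a token programmatically before a maintenance window. A minimal Python sketch against Google's standard OAuth2 token endpoint; the client ID, secret, and refresh token are hypothetical placeholders:

    import requests

    # Hypothetical placeholder credentials -- substitute your own.
    CLIENT_ID = "my-client-id.apps.googleusercontent.com"
    CLIENT_SECRET = "my-client-secret"
    REFRESH_TOKEN = "my-refresh-token"

    # Google's standard OAuth2 token endpoint.
    resp = requests.post(
        "https://oauth2.googleapis.com/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "refresh_token": REFRESH_TOKEN,
            "grant_type": "refresh_token",
        },
        timeout=30,
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]  # typically valid for ~1 hour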

Video: SnapLogic Best Practices: Scaling Your Integrations

Org Environments


 Create dedicated Orgs for your activities.

Use separate Orgs for production, development, and testing activities.

Do not use the same Org for the following activities:

  • Testing and production
  • Development and production (even when you have no separate testing Org)


 Limit UAT usage to testing features before the GA of a quarterly release.

Every quarterly release is available a week early on UAT. 

Use UAT only for testing release features during the two-week window of the release. We do not recommend running ongoing tests or experiments on UAT because the version might change suddenly outside of that window.


 Do not use the Files feature in Manager as a file system or storage.

Use a cloud storage provider to store production data. Do not use File Assets as a file source or destination in production pipelines. When you configure File Reader and File Writer Snaps, set the file path to a cloud provider or external file system.

Only use sldb for the following:

  • JAR files
  • JDBC driver files
  • Expression libraries (.expr)
  • Account-related configuration files



Pipeline Design and Development

 Use a naming convention.

Standardize on a naming convention for pipelines and apply it consistently across all pipelines and resources. Adopt a standard that fits your organization. Pipeline names should ideally indicate the execution level (main execution pipeline or sub/helper pipeline), the integration (typically the names of the endpoints, if applicable), and the operation (which can include the type of data processed or any data conversion). Because pipelines are listed alphabetically within a project, name them strategically: as a rule of thumb, order the parts of a name from the highest-level identifier to the lowest, most concrete identifier. This is especially useful where pipelines are nested within one another. For example, for a SalesForce-to-SQL integration, you could have “Main SalesForce to SQL”, “Sub SalesForce to SQL Transfer Customers”, and “Sub SalesForce to SQL Transfer Leads”.

Unless otherwise noted, the names of assets and projects are limited to UTF-8 alphanumeric characters and these punctuation characters: !"#$%&'()*+,-.:;<=>?@[\]^_`{|}~.

Recommended Pipeline Naming Conventions

  • <verb><Business function>, such as getCustomerOrder
  • <verb><Business function>_<optional from system>_<optional to system>, such as getCustomerOrder_SFDC_SQL
  • Prepend child pipelines with the same characters, such as sub_ or z_
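
If you want to enforce these conventions automatically (for example, in a review script over exported pipelines), a small validator can help. A minimal Python sketch; the convention pattern is an assumption based on the examples above, not a SnapLogic rule:

    import re

    # Allowed characters per the platform rule above: UTF-8 alphanumerics
    # plus this punctuation set.
    PUNCT = "!\"#$%&'()*+,-.:;<=>?@[\\]^_`{|}~"
    ALLOWED = re.compile(r"^[\w" + re.escape(PUNCT) + r"]+$")

    # Hypothetical convention: <verb><BusinessFunction>[_<from>][_<to>],
    # with child pipelines prefixed with "sub_".
    CONVENTION = re.compile(r"^(sub_)?[a-z]+[A-Z]\w*(_[A-Za-z0-9]+){0,2}$")

    def check_name(name: str) -> bool:
        return bool(ALLOWED.match(name) and CONVENTION.match(name))

    for name in ("getCustomerOrder", "getCustomerOrder_SFDC_SQL",
                 "sub_getCustomerOrder", "bad name!"):
        print(name, check_name(name))
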
 Do not assume that data preview provides a complete representation of the data.

Data preview is limited to the first 50 records pulled in. All subsequent data previews down the pipeline work only with that initial preview data set, so your actual output may differ from what you see in the preview data.

 Disable Pipeline Validation for your production Orgs.

Because Pipeline Validation is intended for pipeline development and testing, we recommend disabling it in Manager > Settings for your production Orgs. You can still run Pipeline Validation manually for individual pipelines in Designer.

 Avoid large pipelines triggered by events.

When a pipeline is called in response to an event, the caller must wait for the response until the entire pipeline completes. If the pipeline is large and takes a long time to complete, the caller may time out and report a failure even though the pipeline is still running and processing data. Pipelines called in response to HTTP events should not process the data themselves; they should report whether the data was “accepted”, and leave the processing to a separate asynchronous or scheduled pipeline.
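
To illustrate the caller side of this pattern, the sketch below posts a payload to a triggered task and treats a quick 2xx response as “accepted”, leaving the heavy processing to a separate scheduled pipeline. The URL and token are hypothetical placeholders:

    import requests

    # Hypothetical triggered-task URL and bearer token -- substitute your own.
    TASK_URL = ("https://elastic.snaplogic.com/api/1/rest/slsched/feed/"
                "MyOrg/MyProject/AcceptOrders")
    TOKEN = "my-task-token"

    resp = requests.post(
        TASK_URL,
        json={"order_id": 12345, "status": "new"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,  # the receiving pipeline should acknowledge well within this
    )

    # The pipeline only acknowledges receipt; a scheduled pipeline does the work.
    if resp.ok:
        print("accepted")
    else:
        print("rejected:", resp.status_code, resp.text)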

 Do not schedule a chain reaction.

When possible, split a large pipeline into smaller pieces and schedule the individual pipelines independently. Distribute executions across the schedule to avoid a chain reaction of resource contention.
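
For example, instead of one monolithic nightly run, you might stagger the pieces with offset schedules. The pipeline names and cron expressions below are purely illustrative:

    # Illustrative staggered schedules for pieces of a former monolith.
    # The offsets keep the runs from competing for Snaplex resources.
    SCHEDULES = {
        "Sub Extract Customers": "0 1 * * *",   # 01:00 daily
        "Sub Extract Orders":    "30 1 * * *",  # 01:30 daily
        "Sub Transform Orders":  "0 2 * * *",   # 02:00 daily
        "Sub Load Warehouse":    "0 3 * * *",   # 03:00 daily
    }

    for pipeline, cron in SCHEDULES.items():
        print(f"{pipeline}: {cron}")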

 Know how to configure an Email Sender Snap to avoid sending hundreds of individual emails.

If you configure an Email Sender Snap with an input view and connect it to receive data from upstream Snaps, one email is sent for each document processed through the pipeline unless you set the Email type to HTML table. The HTML table format embeds the data in the email body up to the Batch size limit, sending as many emails as needed to batch through the documents. Alternatively, to send the details as an attachment, do not add an input view. Instead, place the Snap unconnected on the workspace, write the data to a file, and have the Email Sender Snap attach that file to the email.

 If your pipeline fails, retry the validation.

If a pipeline fails for an unknown reason, click Save after any modifications, then click Retry before running the pipeline again. This clears the cached data and gathers new preview data based on the latest pipeline configuration.

Additional design tips:

  • For scheduled pipelines, close all open-ended Snaps (remove open output and error views).
  • Start Ultra pipelines with listener Snaps, such as JMS Consumer.
  • Select Ignore empty stream in the JSON Formatter Snap to prevent generating empty output when no input data is provided.

Pipeline Management

 Rename Snaps when you place them in your pipeline.

Giving each Snap in your pipeline a unique name makes it easier to identify the correct log information for that Snap in the runtime logs, especially if you use multiples of the same Snap.

 Maintain pipeline versions.

Accidental deletions or serious blunders in a pipeline can cost days of lost work. General guidelines for pipeline backups include exporting pipelines after significant milestones (major changes, a new release), renaming the pipeline file (.slp) to indicate the event, and storing the exported pipelines in a repository. For example, you can back up assets such as pipelines, files, accounts, and tasks to GitHub repositories using your SnapLogic account. For more information, see SnapLogic - GitHub Integration.
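
One lightweight way to follow this guideline is a script that date-stamps exported .slp files and commits them to a repository. A minimal Python sketch; the directory layout and the use of plain git are assumptions, not a SnapLogic feature:

    import shutil
    import subprocess
    from datetime import date
    from pathlib import Path

    EXPORT_DIR = Path("exports")        # where exported pipelines were saved
    BACKUP_DIR = Path("pipeline-repo")  # a local git clone used as the archive

    # Copy each exported pipeline with a date-stamped name, then commit.
    for slp in EXPORT_DIR.glob("*.slp"):
        stamped = BACKUP_DIR / f"{slp.stem}_{date.today():%Y%m%d}.slp"
        shutil.copy2(slp, stamped)

    subprocess.run(["git", "-C", str(BACKUP_DIR), "add", "-A"], check=True)
    subprocess.run(
        ["git", "-C", str(BACKUP_DIR), "commit", "-m", "Pipeline backup"],
        check=True,  # fails if there is nothing new to commit
    )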

 Set up notifications for pipeline events.

When pipeline tasks are configured to schedule pipeline runs or to allow triggered runs, you can have notifications sent when the task starts, completes, fails, or stops.

Tasks

 Triggered Tasks: General Information
  • Pipelines configured as triggered tasks can expose a maximum of one unconnected output view.

  • The Task Execute Snap times out at the platform after 15 minutes, irrespective of whether the pipeline is active or idle.

 Triggered Tasks using the Cloud URL
  • If the execution time of the task exceeds 15 minutes, the platform times out the request and returns an HTTP 504 (see the sketch after this list). This limit is enforced globally by the platform and cannot be modified.

  • By default, the remote request waits until the pipeline execution is complete. Upon completion, the platform returns a response document containing the HTTP code of the pipeline exit status. If the pipeline fails during execution, additional system statistics may be returned in the same response document.

    • If the pipeline exposes an unconnected output view, the documents generated by that view override the default response document.
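
The sketch below shows one way a client might call a cloud URL and distinguish the platform's 15-minute timeout from a normal response document. The URL and token are hypothetical placeholders:

    import requests

    # Hypothetical cloud URL and bearer token -- substitute your own.
    TASK_URL = ("https://elastic.snaplogic.com/api/1/rest/slsched/feed/"
                "MyOrg/MyProject/NightlyLoad")
    TOKEN = "my-task-token"

    try:
        # Allow slightly more than the platform's 900-second limit so the
        # platform timeout, not the client timeout, is what we observe.
        resp = requests.get(
            TASK_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=960,
        )
        if resp.status_code == 504:
            # The platform timed out the request; the pipeline may still run.
            print("timed out at the platform; check the Dashboard for the run")
        else:
            resp.raise_for_status()
            print("response document:", resp.json())
    except requests.exceptions.Timeout:
        print("client-side timeout")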

 Triggered Tasks using an On-Premises URL
  • If the Snaplex node on which the task is running is patched to mrc205 or higher, there is no platform-enforced restriction on execution time.

    • If the Snaplex node is not patched to at least mrc205, the task may fail after 10 minutes. The 10-minute timeout for the local URL (pre-mrc205) applies only to pipelines that are not active; if the pipeline output view continuously streams results, the timeout does not apply.

  • By default, the remote request will return asynchronously after starting the pipeline. The platform will not return a default response document.

    • If the pipeline exposes an unconnected output view, the remote request will wait until the pipeline execution is complete. The documents generated by the view will become the response to the request.

Administration

 Do not share credentials.

Multiple people logging in with the same credentials can lead to someone unintentionally modifying your work.

 Do not use an admin user for development.

Create a separate user login for each developer. By default, a project is created for each developer, and you can grant either full access or only read and execute permissions on other projects. Using the admin user would give everyone access to all projects.

 Create Accounts in individual projects, not in the Shared project.

Accounts store credentials to access other applications. Unless it is an account you know everyone in your organization needs, do not save it in the Shared project. Instead, create projects for specific applications and store the Account in that project.

 Name your Groundplex appropriately.

Groundplex names should follow DNS standards. Avoid underscores and special characters in Groundplex names.
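
As a quick check, the sketch below tests whether a name is a valid DNS label (per RFC 1035: starts with a letter, ends with a letter or digit, contains only letters, digits, and hyphens, and is at most 63 characters). The example names are hypothetical:

    import re

    # RFC 1035 label: a letter, then letters/digits/hyphens,
    # ending with a letter or digit; 63 characters maximum.
    DNS_LABEL = re.compile(r"^[A-Za-z]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

    for name in ("prod-groundplex-01", "dev_plex", "plex!", "a"):
        print(name, bool(DNS_LABEL.match(name)))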

