In this Page
This page describes common best practices for pipeline design and development, pipeline management, and administration.
If you experience odd behavior for no apparent reason, clear your browser cache before you log in to the latest SnapLogic Elastic Integration Platform. See your browser's documentation for instructions on clearing the cache.
Some accounts, such as Google accounts, have a fixed lifetime for refresh tokens and must be refreshed every hour. If that refresh is due while the platform is down for an update, it does not occur. To prevent these accounts from failing after a new platform deployment, refresh them before the designated downtime.
Use separate Orgs for production, development, and testing activities. Do not mix these activities in the same Org.
Every quarterly release is available on UAT a week early. Use UAT only for testing release features during the two-week window of the release. We do not recommend running ongoing tests or experiments on UAT because the platform version can change suddenly outside of that window.
Use a cloud storage provider to store production data. Do not use File Assets as a file source or destination in production pipelines. When you configure File Reader and File Writer Snaps, set the file path to a cloud provider or external file system. Only use File Assets during development and testing.
Recommended Pipeline Naming Conventions
Standardize on a naming convention for pipelines and apply it consistently across all pipelines and resources. Adopt a standard that fits your organization. Pipeline names should ideally indicate the execution level (Main execution pipeline or Sub helper pipeline), the integration name (typically the names of the endpoints, if applicable), and the operation (which can include the type of data processed or the data conversion, if applicable). Because projects list pipelines alphabetically, it pays to name them strategically: as a rule of thumb, order the name from the highest-level identifier to the lowest, most concrete identifier. This is especially useful when pipelines are nested within one another. For example, for a SalesForce-to-SQL integration you could have “Main SalesForce to SQL”, “Sub SalesForce to SQL Transfer Customers”, and “Sub SalesForce to SQL Transfer Leads”.
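A convention like the one above can be checked mechanically. The following sketch is illustrative only: the regular expression encodes the hypothetical pattern "Main|Sub <Source> to <Target> [Operation]" from the example names, not an official SnapLogic rule.

```python
import re

# Hypothetical convention: "<Level> <Source> to <Target>[ <Operation>]",
# where Level is "Main" or "Sub". Pattern and names are illustrative only.
PIPELINE_NAME = re.compile(r"^(Main|Sub) \S+ to \S+( .+)?$")

def check_name(name: str) -> bool:
    """Return True if the pipeline name follows the example convention."""
    return bool(PIPELINE_NAME.match(name))

names = [
    "Main SalesForce to SQL",
    "Sub SalesForce to SQL Transfer Customers",
    "salesforce_sql_v2",  # does not follow the convention
]
for n in names:
    print(f"{n}: {'OK' if check_name(n) else 'rename suggested'}")
```

A check like this could run in a CI step against exported pipeline names to keep a project's listing consistent.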
Data preview is limited to the first 50 records pulled in. All subsequent data previews down the pipeline work only with that initial preview data set, so your actual resulting data may differ from what you see in the preview.
Because Pipeline Validation is intended for pipeline development and testing, we recommend disabling it in Manager > Settings for your production Orgs. You can still run Pipeline Validation manually for individual pipelines in Designer.
When a pipeline is called in response to an event, the caller must wait until the entire pipeline completes. If the pipeline is large and takes a long time to complete, the caller may time out and record a failure even though the pipeline is still running and processing data. Pipelines called in response to HTTP events should not process the data themselves; they should report whether the data was accepted and leave the processing to another asynchronous or scheduled pipeline.
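The accept-then-process pattern described above can be sketched in plain Python: a fast synchronous handler only validates and enqueues the payload, while a separate worker (standing in for an asynchronous or scheduled pipeline) does the slow processing later. The handler and field names here are hypothetical, not a SnapLogic API.

```python
# Sketch of the accept-then-process pattern. The synchronous handler returns
# a status immediately; heavy work happens on a background worker.
import queue
import threading
import time

work_queue: "queue.Queue[dict]" = queue.Queue()

def handle_trigger(payload: dict) -> dict:
    """Fast path: validate, enqueue, and acknowledge. No heavy work here."""
    if "id" not in payload:
        return {"status": "rejected", "reason": "missing id"}
    work_queue.put(payload)
    return {"status": "accepted", "id": payload["id"]}

def worker() -> None:
    """Slow path: drain the queue and do the real processing."""
    while True:
        item = work_queue.get()
        time.sleep(0.01)  # stand-in for long-running processing
        print(f"processed {item['id']}")
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
print(handle_trigger({"id": 42}))  # the caller gets an answer immediately
work_queue.join()                  # only so this demo prints its output
```

The key point is that the caller's timeout clock stops as soon as `handle_trigger` returns, regardless of how long the queued processing takes.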
When possible, split a large pipeline into smaller pieces and schedule the individual pipelines independently. Distribute executions across the timeline so that resource usage is spread out and one run does not set off a chain reaction.
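One simple way to spread scheduled runs is to stagger their start minutes evenly across the hour instead of starting everything at minute 0. This helper is a sketch under that assumption; the pipeline names are placeholders.

```python
# Illustrative helper: spread N scheduled pipelines evenly across an hour
# so they do not all start at minute 0 and compete for resources.
def staggered_minutes(pipeline_names: list[str], interval: int = 60) -> dict[str, int]:
    """Assign each pipeline a start minute, evenly spaced over `interval`."""
    step = interval // max(len(pipeline_names), 1)
    return {name: i * step for i, name in enumerate(pipeline_names)}

schedules = staggered_minutes(["Extract", "Transform", "Load"])
for name, minute in schedules.items():
    print(f"{name}: run at minute {minute} of each hour")
```

With three pipelines and a 60-minute interval, they start 20 minutes apart rather than simultaneously.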
If you configure an Email Sender Snap with an input view and connect it to receive data from upstream Snaps, one email is sent for each document processed through the pipeline unless you use HTML table as the Email type. The HTML table format embeds the data in the email body up to the Batch size limit, sending as many emails as needed to work through the documents. Alternatively, if you only want to send the details as an attachment, do not add an input view. Instead, place the Snap unconnected on the workspace, write the data to a file, and have the Email Sender Snap attach that file to the email.
If a pipeline fails for an unknown reason, click Save after any modifications, then click Retry before running the pipeline again; this clears the cached data and gathers new preview data based on the latest pipeline configuration. For scheduled pipelines, close all open-ended Snaps (remove open output and error views). Start Ultra pipelines with listener Snaps, such as JMS Consumer. Select Ignore empty stream in the JSON Formatter Snap to prevent generating empty output when no input data is provided.
Give each Snap in your pipeline a unique name so that you can easily identify the correct log information for that Snap in the runtime logs, especially when you use multiple instances of the same Snap.
Accidentally deleting a pipeline or making a serious mistake in one can cost days of lost work. General guidelines for pipeline backups include exporting pipelines after significant milestones (major changes, a new release), renaming the pipeline file (.slp) to indicate the event, and storing the exported pipelines in a repository. For example, you can back up assets such as pipelines, files, accounts, and tasks to GitHub repositories using your SnapLogic account. For more information, see SnapLogic - GitHub Integration.
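The renaming step above can be automated when you export pipelines in bulk. This sketch assumes a simple, hypothetical naming scheme of `<name>_<date>_<milestone>.slp`; the file name and milestone label are placeholders.

```python
# Sketch: tag an exported pipeline file (.slp) with a date and milestone
# label before committing it to a backup repository. Names are illustrative.
from datetime import date
from pathlib import Path

def backup_name(slp_path: Path, milestone: str) -> Path:
    """Return a path like 'Main SalesForce to SQL_2024-05-01_release-1.2.slp'."""
    stamp = date.today().isoformat()
    return slp_path.with_name(f"{slp_path.stem}_{stamp}_{milestone}{slp_path.suffix}")

print(backup_name(Path("Main SalesForce to SQL.slp"), "release-1.2"))
```

Embedding the date and milestone in the file name makes it easy to find the right export later, even outside a version-controlled repository.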
When pipeline tasks are configured to schedule pipeline runs or allow them to be triggered, you can have notifications sent when the task starts, completes, fails, or stops.
Multiple people logging in with the same credentials can lead to someone unintentionally modifying your work.
Create a separate user login for each developer. By default, a project is created for each of them, and you can also grant them either full access or read-and-execute-only permissions on other projects. Note that the admin user has access to all projects.
Accounts store credentials to access other applications. Unless it is an account you know everyone in your organization needs, do not save it in the Shared project. Instead, create projects for specific applications and store the Account in that project. |
Groundplex names should follow DNS standards. Avoid underscores and special characters in Groundplex names.
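A DNS-safe name can be checked against the RFC 1123 hostname-label rules: letters, digits, and hyphens only, starting and ending with an alphanumeric character, at most 63 characters. The following sketch applies those rules to candidate Groundplex names; the names themselves are made up.

```python
import re

# RFC 1123 hostname label: letters, digits, and hyphens; must start and end
# with an alphanumeric character; at most 63 characters. Underscores are invalid.
DNS_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_dns_safe(name: str) -> bool:
    """Return True if the name is a valid DNS hostname label."""
    return bool(DNS_LABEL.match(name))

for candidate in ["prod-groundplex-01", "prod_groundplex", "-bad-start"]:
    print(f"{candidate}: {'OK' if is_dns_safe(candidate) else 'invalid'}")
```

Validating names this way before creating a Groundplex avoids renaming it later to satisfy DNS-based tooling.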