Overview
Activity Log Notifications (also known as Notifications or Alerts) let Org admins configure alert routing for specific event types within their Organizations. You must have Org admin permissions to manage alerts.
Info: SnapLogic supports the Slack messaging app for platform communications, enabling you as an Org admin to add Slack channels and recipients for your SnapLogic notifications. For details on adding Slack communications to your Org, see SnapLogic Notifications through Slack.
You can create notifications that alert users when the following events occur.
- ACL: Changes to the Access Control List (ACL) for a project.
- API: Threshold violation for API usage.
- Asset: Changes to assets.
- Dist: Changes in the distribution version of Snaplex instances.
- Group: Changes to groups (creation, deletion, and update).
- Session: Session start and end times for users.
- Snaplex: Changes in Snaplex state, size, or condition.
- Snaplex Node: Threshold violations for performance.
- Task: Threshold violations for performance and reliability.
- User: Changes to SnapLogic login credentials and access.
Managing Notifications
As an Org admin, you can create notification rules, view and delete notifications, and use alerts to track Snaplex congestion.
Creating Notification Rules
- Navigate to SnapLogic Manager > Settings, scroll down the page to Alerts/Activity Log Notifications, and click Settings.
- In the Settings > Notifications page, click Create Notification to display the Create Notification Rule dialog.
- In the Create Notification Rule dialog, click Event Type to display the drop-down list. See Event Type Parameters for a list of the event types and associated fields.
- Select the target Event Type and enter information for the displayed fields.
- For notification recipients, enter the recipient's email address or Slack channel. To send a direct message, enter the Slack username.
- For specific parameters for each event type, see Event Type Parameters.
- Click Save to exit.
You can go to Settings > Notifications to verify that your notification was created.
Viewing Alert Information
- Log in to SnapLogic and click the Manager tab.
The Organization Settings page appears, displaying the Settings controls.
- In the left pane, click Alerts to display all the alerts in the Org.
- To filter the Open and Closed alerts, click the corresponding tabs.
- To filter the alerts by time period, enter the target Start and End dates.
- (Optional) Click the download icon to download the alerts as a CSV file.
Updating Notifications
- Navigate to SnapLogic Manager > Settings, scroll down the page to Alerts/Activity Log Notifications, and click Settings.
- In the Settings > Notifications page, click any notification to display the Edit Notification Rule dialog.
- Edit the notification as required and click Save. You can make the following changes:
  - Event Type. Change to any event type.
  - Projects to notify. Select the target project folders.
  - Email. Add or remove email addresses.
  - Slack Recipients. Add or remove Slack channels and usernames.
Deleting a Notification
- Log in to SnapLogic and click the Manager tab.
The Organization Settings page appears, displaying the Settings controls.
- Scroll down the page to Alerts/Activity Log Notifications and click Settings. The Settings > Notifications page displays the list of alerts.
- Click the Delete Notification icon next to the alert that you want to delete. The alert disappears from the list.
Use Alerts to Track Snaplex Congestion
Snaplex alerts are generated in the following scenarios:
- One or more Pipelines are in the Queued state on a Snaplex for more than:
  - 75% of their respective Time To Live (TTL), or
  - the maximum time that SnapLogic attempts to execute a Pipeline.
- The daily API usage exceeds 75% of the limit.
- The concurrent API usage exceeds 75% of the limit.
Always investigate the cause of a Snaplex congestion alert. Frequent queuing alerts indicate that either the Snaplex needs more capacity or the running Pipelines need a redesign.
Info: A scheduled job runs every 5 minutes to monitor the queued Pipelines. If it detects any Pipeline above the 75% of TTL threshold at that instant, it generates an Open Alert. However, if an Open Alert already exists for that Snaplex, a new Alert is not generated and the previous Open Alert remains in the same state. After generating an Open Alert, the scheduled job continues to monitor the Pipelines that have been queued for more than 75% of their respective TTLs. If no Pipelines are above the threshold for that Snaplex, the Alert is resolved and its state changes to Closed.
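The following Python sketch illustrates only the queued-Pipeline alert lifecycle described in the note above; it is not SnapLogic's implementation, and the class, function, and return values are hypothetical placeholders:

from dataclasses import dataclass
from typing import List

@dataclass
class QueuedPipeline:
    queued_seconds: float   # time the Pipeline has spent in the Queued state
    ttl_seconds: float      # the Pipeline's Time To Live (TTL)

def evaluate_snaplex(queued: List[QueuedPipeline], has_open_alert: bool) -> str:
    """One monitoring pass for a Snaplex (the platform runs a pass every 5 minutes)."""
    congested = any(p.queued_seconds > 0.75 * p.ttl_seconds for p in queued)
    if congested:
        # A new Open alert is generated only if one does not already exist.
        return "keep existing Open alert" if has_open_alert else "generate Open alert"
    # No Pipelines above the 75% threshold: an existing Open alert is resolved (Closed).
    return "close alert" if has_open_alert else "no action"

# Example: a Pipeline queued for 480 seconds of its 600-second TTL (80%) triggers an Open alert.
print(evaluate_snaplex([QueuedPipeline(480, 600)], has_open_alert=False))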
Event Type Parameters
Use the following table to understand the event types and corresponding parameters for creating notification rules in different scenarios.
Info: In Manager, you can configure notifications to receive an email or a Slack message when a node in a Snaplex fails or crashes. When you set a notification rule, a Snaplex node crash option is available under Snaplex. To create, update, or manage notifications, refer to Manage Notifications. You can also view the node fail/crash information on the Manager > Alerts tab. A red triangle indicates a new alert; if the alert is older than an hour, the status is indicated by a gray triangle. To view alerts, refer to Viewing Alert Information.
Snaplex Node Notification Threshold Watermarks
You can adjust the behavior of the alerting system with threshold watermarks. In some scenarios, an alert is raised against a CPU, memory, or disk resource that is overutilized, but not directly by Pipeline executions. For example, a disk alert might be triggered by files accumulating in a /temp directory mounted on the Snaplex node; in this case, the Org admin would put the Snaplex in maintenance mode and DevOps might clear the space in the directory. Threshold watermarks enhance the reporting accuracy of Snaplex node alerts and can be used to reduce the frequency of alerts.
For memory and CPU usage, the alert is removed when the usage falls below the lower watermark, which by default is 40% of the threshold (LOW_WATERMARK = 0.4). For example, if you set a threshold of 80% for memory usage and an open alert is addressed by a user action, the alert is removed from the Dashboard when memory usage drops below 0.4 * 80 = 32%. Similarly, for disk alerts, the high watermark is 40% above the threshold: if the free space on the disk extends beyond 40% of the threshold, the alert is removed.
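As a minimal worked example of the memory calculation above (the 80% threshold and the default LOW_WATERMARK factor of 0.4 come from the text; the function name is hypothetical):

# Worked example of the default lower-watermark calculation for memory/CPU alerts.
LOW_WATERMARK = 0.4   # default lower-watermark factor

def alert_clear_level(threshold_pct: float) -> float:
    """Usage level (percent) below which an open memory/CPU alert is removed."""
    return LOW_WATERMARK * threshold_pct

print(alert_clear_level(80))   # 0.4 * 80 = 32.0, so the alert clears once usage drops below 32%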
You can configure the following threshold watermarks by enabling a feature flag for your Org.
- com.snaplogic.cc.service.UsageMonitorServiceImpl.LOW_WATERMARK = 0.4
- com.snaplogic.cc.service.UsageMonitorServiceImpl.HIGH_WATERMARK = 1.04
Enabling the Threshold Watermark Feature Flag
1. Obtain your Org ID by navigating to Manager > Settings. You can find your Org ID below Organization Id at the top of the Settings page. Alternatively, you can run the following API call:
https://elastic.snaplogic.com/api/1/rest/asset/<org_name>
The API call requires authentication. In the returned response, the Org ID is the value of the org_snode_id key.
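For example, a minimal Python sketch of that call (it assumes basic authentication with your Org admin credentials and that the requests library is installed; the placeholder values must be replaced):

# Minimal sketch: fetch the Org metadata and locate org_snode_id (the Org ID).
import requests

ORG_NAME = "<org_name>"   # replace with your Org name

resp = requests.get(
    f"https://elastic.snaplogic.com/api/1/rest/asset/{ORG_NAME}",
    auth=("<org_admin_username>", "<password>"),   # basic authentication
)
resp.raise_for_status()
print(resp.json())   # the Org ID is the value of the org_snode_id key in the response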
2. Check whether you have any feature flags already set for your Org.
Important: If you have other existing feature flags set for your Org, they must be appended to the request in Step 3. Contact SnapLogic Support for questions about current feature flags.
Make the following request from a browser:
https://elastic.snaplogic.com/api/1/rest/admin/snappack/org-dist/<org_snode_id>
Sample response:
{
  "response_map": {
    "overrides": {},
    "_id": "52e99318640a9a03d8681d0d",
    "flag_overrides": {
      <Existing_feature_flags>
    },
    "dist_id": "latest"
  },
  "http_status_code": 200
}
Where <Existing_feature_flags> are the feature flags that are already enabled in the Org. Make note of any existing feature flags for Step 3.
3. Apply the following feature flags to the Org, appending any existing feature flags from Step 2:
com.snaplogic.cc.service.UsageMonitorServiceImpl.LOW_WATERMARK = 0.4
com.snaplogic.cc.service.UsageMonitorServiceImpl.HIGH_WATERMARK = 1.04
Sample curl command for enabling the threshold watermarks:
curl -u <org_admin_username> -H 'Content-Type: application/json' --data-binary '{"flag_overrides": { "com.snaplogic.cc.service.UsageMonitorServiceImpl.LOW_WATERMARK = 0.4":"true", "com.snaplogic.cc.service.UsageMonitorServiceImpl.HIGH_WATERMARK = 1.04":"true", <Existing_feature_flags> }}' https://elastic.snaplogic.com/api/1/rest/admin/snappack/org-dist/<org_snode_id>
Where:
- org_admin_username is the username of the Org admin.
- org_snode_id is the Org ID (for example, 52e44318640a9a03d8681d0d).
- <Existing_feature_flags> indicates any existing feature flags from Step 2.
4. To verify that the threshold watermark feature flags are enabled for your Org, make the following request in a browser:
https://elastic.snaplogic.com/api/1/rest/admin/snappack/org-dist/<org_snode_id>
Navigate to the Dashboard to monitor the resource utilization of your Snaplex nodes against the alert reporting.