Server & API
The Alerta API receives alerts from multiple sources, correlates, de-duplicates or suppresses them, and makes the alerts available via a RESTful JSON API.
Alerts can be intercepted as they are received to modify, enhance or reject them using pre-receive hooks. Alerts can also be used to trigger actions in other systems, using post-receive hooks after the alert has been processed, or following an operator action or alert status change, for bi-directional integration.
There are several integrations with popular monitoring tools available and webhooks can be used to trivially integrate with AWS CloudWatch, Prometheus, Grafana, PagerDuty and many more.
Event Processing
Alerta comes out-of-the-box with key features designed to reduce the burden of alert management. When an event is received it is processed in the following way:
1. all plugin pre-receive hooks are run in listed order; an alert is immediately rejected if any plugin raises a RejectException or RateLimit exception
2. the alert is checked against any active blackout periods and suppressed if any match
3. the alert is checked for duplication; if it is a duplicate, the duplicate count is increased and repeat is set to True
4. the alert is checked for correlation; if it correlates, the severity and/or status etc. are changed
5. if the alert is neither a duplicate nor correlated, a new alert is created
6. all plugin post-receive hooks are run in listed order
7. any tags or attributes changed in post-receive hooks are persisted
Each of the above actions is explained in more detail in the following sections.
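The overall flow can also be summarised in code. The sketch below is purely illustrative, assuming dictionary-based alerts and stand-in plugin and blackout objects; it mirrors the numbered steps above rather than the actual server internals.

```python
# Illustrative sketch of the receive pipeline above, assuming dict-based
# alerts and stand-in plugin/blackout objects; not the actual server code.

db = {}  # existing alerts keyed by (environment, resource, event)

def process_alert(alert, plugins, blackouts):
    # step 1: pre-receive hooks run in listed order; a plugin may raise
    # RejectException or RateLimit to stop processing
    for plugin in plugins:
        alert = plugin.pre_receive(alert)

    # step 2: suppress the alert if it matches an active blackout period
    if any(blackout.matches(alert) for blackout in blackouts):
        alert['status'] = 'blackout'
        return alert

    key = (alert['environment'], alert['resource'], alert['event'])
    existing = db.get(key)

    if existing and existing['severity'] == alert['severity']:
        # step 3: duplicate - bump duplicateCount and set the repeat flag
        existing['duplicateCount'] += 1
        existing['repeat'] = True
        alert = existing
    elif existing:
        # step 4: correlated - update severity, status etc. on the existing alert
        existing['severity'] = alert['severity']
        alert = existing
    else:
        # step 5: neither duplicate nor correlated - create a new alert
        alert['duplicateCount'] = 0
        alert['repeat'] = False
        db[key] = alert

    # steps 6 and 7: post-receive hooks run in listed order and any
    # changed tags or attributes are persisted
    for plugin in plugins:
        alert = plugin.post_receive(alert) or alert
    db[key] = alert

    return alert
```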
Plugins
Plugins are small Python scripts that can run either before or after an alert is saved to the database, or before an operator action or status change update. This is achieved by registering pre-receive hooks for transformers, post-receive hooks for external notification, and status change hooks for bi-directional integration.
Transformers
Using pre-receive hooks, plugins provide the ability to transform raw alert data from sources before alerts are created. For example, alerts can be normalised to ensure they all have specific attributes or tags or only have a specific value from a range of allowed values. This is demonstrated in the reject plugin that enforces an alert policy.
Plugins can also be used to enhance alerts – like the Geo location plugin
which adds location data to alerts based on the remote IP address of the client,
or the generic enhance plugin which adds a customer attribute based on
information contained in the alert.
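As a sketch of what a transformer looks like, the plugin below assumes the PluginBase interface and RejectException used by the bundled server plugins; the policy and tag values are purely illustrative.

```python
# Sketch of a pre-receive transformer plugin, assuming the PluginBase
# interface used by the bundled Alerta server plugins; the allowed
# environments and tag value are illustrative only.

from alerta.exceptions import RejectException
from alerta.plugins import PluginBase

ALLOWED_ENVIRONMENTS = ['Production', 'Development']

class NormaliseAlert(PluginBase):

    def pre_receive(self, alert, **kwargs):
        # enforce a simple alert policy, like the reject plugin does
        if alert.environment not in ALLOWED_ENVIRONMENTS:
            raise RejectException('unknown environment: %s' % alert.environment)
        # enhance the alert, eg. tag the region the alert came from
        alert.tags.append('region:eu-west-1')
        return alert

    def post_receive(self, alert, **kwargs):
        return

    def status_change(self, alert, status, text, **kwargs):
        return
```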
External Notification
Using post-receive hooks, plugin integrations can be used to provide downstream systems with alerts in realtime for external notification. For example, pushing alerts onto an AWS SNS topic, AMQP queue, logging to a Logstash/Kibana stack, or sending notifications to HipChat, Slack or Twilio and many more.
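A post-receive notification plugin might look something like the sketch below, which forwards new (non-repeated) alerts to a chat webhook; the webhook URL is a placeholder and the PluginBase interface is assumed to match the bundled plugins.

```python
# Sketch of a post-receive notification plugin; WEBHOOK_URL is a placeholder
# and the PluginBase interface is assumed to match the bundled plugins.

import requests
from alerta.plugins import PluginBase

WEBHOOK_URL = 'https://chat.example.com/hooks/alerts'  # placeholder

class NotifyWebhook(PluginBase):

    def pre_receive(self, alert, **kwargs):
        return alert

    def post_receive(self, alert, **kwargs):
        # skip repeated (de-duplicated) alerts to avoid notification noise
        if alert.repeat:
            return
        requests.post(WEBHOOK_URL, json={
            'text': '%s: %s alert on %s' % (alert.environment, alert.severity, alert.resource)
        }, timeout=2)

    def status_change(self, alert, status, text, **kwargs):
        return
```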
Operator Actions
Actions taken against alerts can be used as triggers for further integrations with external systems.
TBC
Bi-directional Integration
Using status change hooks, plugins can be used to complete a two-way integration with an external system. That is, an external system like Prometheus Alertmanager that generates alerts which are forwarded to Alerta can be updated when the status of an alert changes in Alerta.
For example, if an operator “acknowledges” a Prometheus alert in the Alerta web UI then a status change hook could silence the corresponding alert in Alertmanager. This requires that external systems provide enough information in the alert created in Alerta for that alert to be uniquely identified at a later date.
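A hedged sketch of such a status change hook is shown below; the "externalId" attribute and the silence_in_alertmanager() helper are hypothetical, standing in for whatever identifier and API call the external system actually provides.

```python
# Sketch of a status-change hook that silences the corresponding alert in an
# external system when an operator acknowledges it in Alerta; the externalId
# attribute and the silence_in_alertmanager() helper are hypothetical.

from alerta.plugins import PluginBase

def silence_in_alertmanager(external_id):
    # placeholder for a call to the external system's API
    pass

class SilenceUpstream(PluginBase):

    def pre_receive(self, alert, **kwargs):
        return alert

    def post_receive(self, alert, **kwargs):
        return

    def status_change(self, alert, status, text, **kwargs):
        external_id = alert.attributes.get('externalId')  # set by the source system
        if status == 'ack' and external_id:
            silence_in_alertmanager(external_id)
        return alert
```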
More information about bi-directional integration and real-world examples for Telegram, Zabbix, Prometheus and many others can be found on the Integrations & Plugins page.
Blackout Periods
An alert that is received during a blackout period is suppressed. That is, it is received by Alerta and a 202 Accepted status code is returned, but even though the alert has been accepted it won't be processed.
Alerta defines many different alert attributes that can be used to group alerts and it is these attributes that can be used to define blackout rules. For example, to suppress alerts from an entire environment, service or group, or a combination of these. However, it is possible to define blackout rules based only on resource and event attributes for situations that require that level of granularity.
Tags can also be used to define a blackout rule which should allow a lot of
flexibility because tags can be added at source, using the alerta CLI, or
using a plugin. Note that one or more tags can be required to match an alert
for the suppression to apply.
In summary, blackout rules can be any of:
- an entire environment eg. environment=Production
- a particular resource eg. resource=host55
- an entire service eg. service=Web
- every occurrence of a specific event eg. event=DiskFull
- a group of events eg. group=Syslog
- a specific event for a resource eg. resource=host55 and event=DiskFull
- all events that have a specific set of tags eg. tags=[ blackout, london ]
Note that an environment is always required to be defined for a blackout rule.
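For illustration, a blackout rule for a specific event on a resource could be created via the API along the lines of the sketch below; the endpoint URL and API key are placeholders, and the field names mirror the rule types listed above.

```python
# Sketch of creating a blackout rule via the API; the endpoint and API key
# are placeholders and an admin-scoped key is assumed.

import requests

ALERTA_API = 'http://localhost:8080'          # placeholder
HEADERS = {'Authorization': 'Key demo-key'}   # placeholder API key

# suppress DiskFull events on host55 in Production for one hour
blackout = {
    'environment': 'Production',   # an environment is always required
    'resource': 'host55',
    'event': 'DiskFull',
    'duration': 3600               # seconds
}

r = requests.post(f'{ALERTA_API}/blackout', json=blackout, headers=HEADERS)
r.raise_for_status()
print(r.json())
```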
De-Duplication
When an alert with the same environment-resource-event
combination is received with the same severity, the alert
is de-duplicated.
This means that information from the de-duplicated alert is used to
update key attributes of the existing alert (like duplicateCount,
repeat flag, value, text and lastReceiveTime) and the
new alert is not shown.
Alerts are sorted in the Alerta web UI by lastReceiveTime by default
so that the most recent alerts will be displayed at the top regardless
of whether they were new alerts or de-duplicated alerts.
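A quick way to see de-duplication in action is to submit the same alert twice, as in the sketch below; the endpoint, API key and exact response shape are assumptions.

```python
# Sketch showing de-duplication: the same environment-resource-event
# combination with the same severity updates the existing alert rather than
# creating a new one. Endpoint, API key and response shape are assumptions.

import requests

ALERTA_API = 'http://localhost:8080'
HEADERS = {'Authorization': 'Key demo-key'}

alert = {
    'environment': 'Production',
    'resource': 'host55',
    'event': 'DiskFull',
    'severity': 'major',
    'service': ['Web'],
    'text': 'disk usage above 90%'
}

first = requests.post(f'{ALERTA_API}/alert', json=alert, headers=HEADERS).json()
second = requests.post(f'{ALERTA_API}/alert', json=alert, headers=HEADERS).json()

# both submissions should resolve to the same alert id
print(first['id'] == second['id'])
# if the response includes the stored alert, duplicateCount will have increased
print(second.get('alert', {}).get('duplicateCount'))
```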
Simple Correlation
Alerta implements what we call “simple correlation”, as opposed to complex correlation, which is much more involved. Simple correlation, in combination with de-duplication, provides straightforward and effective ways to reduce the burden of managing an alert console.
With Alerta, there are two ways alerts can be correlated, namely:
- When an alert with the same environment-resource-event combination is received with a different severity, then the alert is correlated.
- When an alert with the same environment-resource combination is received with an event in the correlate list of related events, with any severity, then the alert is correlated.
In both cases, this means that information from the correlated alert is
used to update key attributes of the existing alert (like severity,
event, value, text and lastReceiveTime) and the new alert
is not shown.
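The sketch below illustrates the second case using a correlate list; the endpoint and API key are placeholders, and the NodeUp alert should update (and clear) the earlier NodeDown alert rather than create a second one.

```python
# Sketch showing simple correlation via a 'correlate' list of related events;
# endpoint and API key are placeholders.

import requests

ALERTA_API = 'http://localhost:8080'
HEADERS = {'Authorization': 'Key demo-key'}

correlate = ['NodeUp', 'NodeDown']  # related events for this resource

down = {
    'environment': 'Production', 'resource': 'host55',
    'event': 'NodeDown', 'severity': 'major',
    'service': ['Web'], 'correlate': correlate, 'text': 'ping failed'
}
up = {
    'environment': 'Production', 'resource': 'host55',
    'event': 'NodeUp', 'severity': 'normal',
    'service': ['Web'], 'correlate': correlate, 'text': 'ping ok'
}

first = requests.post(f'{ALERTA_API}/alert', json=down, headers=HEADERS).json()
second = requests.post(f'{ALERTA_API}/alert', json=up, headers=HEADERS).json()

# the NodeUp alert correlates with the earlier NodeDown alert, so both
# submissions should resolve to the same alert id
print(first['id'] == second['id'])
```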
State-based Browser
Alerta is called state-based because it will automatically change the alert status based on the current and previous severity of alerts and subsequent user actions.
The Alerta API will:
- only show the most recent state of any alert
- change the status of an alert to closed if a normal, ok or cleared severity is received
- change the status of a closed alert to open if the event reoccurs
- change the status of an acknowledged alert to open if the new severity is higher than the current severity
- update the severity and other key attributes of an alert when a more recent alert is received (see correlation and de-duplication)
- update the trendIndication attribute based on previousSeverity and current severity with either moreSevere, lessSevere or noChange
- update the history log following a severity or status change (see alert history)
All of these automatic actions combine to ensure that important alerts are given the priority they deserve.
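The rules above can be summarised in a small sketch. The severity ranking shown is illustrative, based on the default severity map where lower numbers are more severe; this is not the actual server implementation.

```python
# Minimal sketch of the state rules listed above; severity ranking follows
# the default severity map (lower number = more severe) and is illustrative.

SEVERITY_RANK = {
    'critical': 1, 'major': 2, 'minor': 3, 'warning': 4,
    'normal': 5, 'ok': 5, 'cleared': 5,
}

def next_status(current_status, current_severity, new_severity):
    if new_severity in ('normal', 'ok', 'cleared'):
        return 'closed'     # clearing severity closes the alert
    if current_status == 'closed':
        return 'open'       # the event has reoccurred
    if current_status == 'ack' and \
            SEVERITY_RANK.get(new_severity, 5) < SEVERITY_RANK.get(current_severity, 5):
        return 'open'       # escalated above the acknowledged severity
    return current_status

def trend_indication(previous_severity, severity):
    prev, new = SEVERITY_RANK.get(previous_severity, 5), SEVERITY_RANK.get(severity, 5)
    if new < prev:
        return 'moreSevere'
    if new > prev:
        return 'lessSevere'
    return 'noChange'
```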
Note
To take full advantage of the state-based browser it is recommended to
implement the timeout of expired alerts using the House Keeping
script.
Alert History
Whenever an alert status or severity changes, that change is recorded in the alert history log. This allows operations staff to follow the lifecycle of a particular alert, if necessary.
The alert history is visible in the Alert Details page of any alert and also
by using the alerta command-line tool history sub-command.
For example, it will show whether an alert status change happened as a result of operator (external) action or an automatic correlation (auto) action.
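As a sketch, the history log for a single alert can also be read back via the API; the endpoint, API key, alert id and exact history field names below are assumptions.

```python
# Sketch of reading the history log for a single alert via the API; the
# endpoint, API key, alert id and history field names are assumptions.

import requests

ALERTA_API = 'http://localhost:8080'
HEADERS = {'Authorization': 'Key demo-key'}
alert_id = 'replace-with-a-real-alert-id'  # placeholder

alert = requests.get(f'{ALERTA_API}/alert/{alert_id}', headers=HEADERS).json()['alert']

for entry in alert.get('history', []):
    # each entry records when the change happened and whether it was a
    # severity or status change
    print(entry.get('updateTime'), entry.get('type'), entry.get('text'))
```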
Heartbeats
An Alerta heartbeat is a periodic HTTP request sent to the Alerta API to indicate normal operation of the origin of the heartbeat.
They can be used to ensure components of the Alerta monitoring system are
operating normally or sent from any other source. As well as an origin
they include a timeout in seconds (after which they will be considered stale),
and optional tags and attributes.
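Sending a heartbeat is a simple POST to the API, as in the sketch below; the endpoint, API key and origin are placeholders, and support for the attributes field may depend on the Alerta version.

```python
# Sketch of sending a heartbeat via the API; endpoint, API key and origin
# are placeholders, and the attributes field may depend on the version.

import requests

ALERTA_API = 'http://localhost:8080'
HEADERS = {'Authorization': 'Key demo-key'}

heartbeat = {
    'origin': 'cron/nightly-backup',   # what the heartbeat is reporting on
    'timeout': 7200,                   # seconds before the heartbeat is stale
    'tags': ['backup'],
    'attributes': {'environment': 'Production'}
}

r = requests.post(f'{ALERTA_API}/heartbeat', json=heartbeat, headers=HEADERS)
r.raise_for_status()
```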
They are visible in the Alerta console (on the Heartbeats page) and via the alerta command-line tool, which provides a heartbeat sub-command to send them and a heartbeats sub-command to view them.
Alerts can be generated from stale or slow heartbeats
using alerta heartbeats --alert. For more information about generating
alerts from heartbeats see the heartbeats command
reference.