Server & API
Alerts can be intercepted as they are received to modify, enhance or reject them using pre-receive hooks. Alerts can also be used to trigger actions in other systems after the alert has been processed using post-receive hooks or following an operator action or alert status change for bi-directional integration.
Alerta comes out-of-the-box with key features designed to reduce the burden of alert management. When an event is received it is processed in the following way:
- all plugin pre-receive hooks are run in listed order; an alert is immediately rejected if any plugin raises a `RejectException`
- the alert is checked against any active blackout periods and suppressed if any match
- the alert is checked for duplicates; if it is a duplicate, the duplicate count is increased and `repeat` is set to `True`
- the alert is checked for correlation; if it correlates, the severity and/or status are changed accordingly
- if the alert is neither a duplicate nor correlated, a new alert is created
- all plugin post-receive hooks are run in listed order
- any tags or attributes changed in post-receive hooks are persisted
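The processing order above can be sketched in plain Python. This is an illustrative model only, assuming plain-dict alerts; the function, the `RejectError` stand-in and the hook/blackout interfaces are hypothetical, not Alerta's actual internals.

```python
class RejectError(Exception):
    """Stand-in for a plugin rejecting an alert during pre-receive."""

def process_alert(alert, pre_hooks, post_hooks, blackouts, db):
    """Run an incoming alert dict through the receive pipeline.

    `db` maps an environment-resource-event key to the stored alert.
    All names here are illustrative, not Alerta's real implementation.
    """
    # 1. pre-receive hooks run in listed order; any may reject the alert
    for hook in pre_hooks:
        alert = hook(alert)            # hooks may transform the alert
        if alert is None:
            raise RejectError('rejected by pre-receive hook')

    # 2. suppress the alert if any active blackout period matches
    if any(match(alert) for match in blackouts):
        alert['status'] = 'blackout'
        return alert

    key = (alert['environment'], alert['resource'], alert['event'])
    existing = db.get(key)

    # 3. duplicate: same key with the same severity
    if existing and existing['severity'] == alert['severity']:
        existing['duplicateCount'] += 1
        existing['repeat'] = True
    # 4. correlated: same key with a different severity
    elif existing:
        existing['severity'] = alert['severity']
        existing['repeat'] = False
    # 5. otherwise create a new alert
    else:
        alert.update(duplicateCount=0, repeat=False, status='open')
        db[key] = alert
        existing = alert

    # 6. post-receive hooks run in listed order (e.g. notifications)
    for hook in post_hooks:
        hook(existing)
    return existing
```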
Each of the above actions is explained in more detail in the following sections.
Plugins are small Python scripts that run either before or after an alert is saved to the database, or before an operator action or status change update. This is achieved by registering pre-receive hooks for transformers, post-receive hooks for external notification, and status change hooks for bi-directional integration.
Using pre-receive hooks, plugins provide the ability to transform raw alert data from sources before alerts are created. For example, alerts can be normalised to ensure they all have specific attributes or tags or only have a specific value from a range of allowed values. This is demonstrated in the reject plugin that enforces an alert policy.
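A pre-receive transformer in the spirit of the reject plugin can be sketched as below. The specific policy (an allowed set of environments and a mandatory service) and the names `ALLOWED_ENVIRONMENTS` and `enforce_policy` are assumptions for illustration, not the shipped plugin's code.

```python
# Hypothetical alert policy, loosely modelled on the reject plugin.
ALLOWED_ENVIRONMENTS = {'Production', 'Development'}

def enforce_policy(alert):
    """Normalise and police a raw alert dict before it is created.

    Returning None stands in for rejecting the alert; a real Alerta
    plugin would raise an exception from its pre_receive() hook.
    """
    if alert.get('environment') not in ALLOWED_ENVIRONMENTS:
        return None                    # reject: unknown environment
    if not alert.get('service'):
        return None                    # reject: no service defined
    # normalise: ensure every accepted alert carries a known tag
    alert.setdefault('tags', []).append('policy:checked')
    return alert
```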
Plugins can also be used to enhance alerts, like the Geo location plugin which adds location data to alerts based on the remote IP address of the client, or the generic enhance plugin which adds a `customer` attribute based on information contained in the alert.
Using post-receive hooks, plugin integrations can be used to provide downstream systems with alerts in realtime for external notification. For example, pushing alerts onto an AWS SNS topic, AMQP queue, logging to a Logstash/Kibana stack, or sending notifications to HipChat, Slack or Twilio and many more.
Actions taken against alerts can be used as triggers for further integrations with external systems.
Using status change hooks, plugins can be used to complete a two way integration with an external system. That is, an external system like Prometheus Alertmanager that generates alerts that are forwarded to Alerta can be updated when the status of an alert changes in Alerta.
For example, if an operator “acknowledges” a Prometheus alert in the Alerta web UI then a status change hook could silence the corresponding alert in Alertmanager. This requires that external systems provide enough information in the alert created in Alerta for that alert to be uniquely identified at a later date.
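A status change hook of this kind can be sketched as follows. The `externalId` attribute, the module-level `silenced` list (standing in for calls to the external system's API, e.g. Alertmanager's silences endpoint) and the simplified hook signature are all illustrative assumptions, not Alerta's actual plugin API.

```python
# Records the calls a real hook would make to the external system.
silenced = []

def status_change(alert, status, text):
    """React to an operator status change on an alert.

    If the alert carries enough identifying information (here a
    hypothetical 'externalId' attribute set at creation time), an
    acknowledgement in Alerta can silence the upstream alert too.
    """
    external_id = alert.get('attributes', {}).get('externalId')
    if status == 'ack' and external_id:
        # a real plugin would call the upstream silences API here
        silenced.append((external_id, text))
    return alert, status, text
```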
More information about bi-directional integration and real-world examples for Telegram, Zabbix, Prometheus and many others can be found on the Integrations & Plugins page.
An alert that is received during a blackout period is suppressed. That is, it is received by Alerta and a `202 Accepted` status code is returned, but even though the alert has been accepted, it won't be processed.
Alerta defines many different alert attributes that can be used to group alerts and it is these attributes that can be used to define blackout rules. For example, to suppress alerts from an entire environment, service or group, or a combination of these. However, it is possible to define blackout rules based only on resource and event attributes for situations that require that level of granularity.
Tags can also be used to define a blackout rule, which allows a lot of flexibility because tags can be added at source, using the `alerta` CLI, or using a plugin. Note that one or more tags can be required to match an alert for the suppression to apply.
In summary, blackout rules can be any of:

- an entire environment eg. `environment=Production`
- a particular resource eg. `resource=host55`
- an entire service eg. `service=Web`
- every occurrence of a specific event eg. `event=NodeDown`
- a group of events eg. `group=Network`
- a specific event for a resource eg. `resource=host55` and `event=DiskFull`
- all events that have a specific set of tags eg. `tags=[blackout, london]`
Note that an `environment` is always required to be defined for a blackout rule.
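The blackout rule types above can be sketched as a single matcher. This is a simplified model (rules as dicts, `service` treated as a plain string) and `is_blacked_out` is a hypothetical name, not Alerta's implementation.

```python
def is_blacked_out(alert, rule):
    """Return True if `alert` matches the blackout `rule`.

    An environment is always required; any other field present on the
    rule must also match, and every rule tag must be on the alert.
    """
    if alert['environment'] != rule['environment']:   # always required
        return False
    for field in ('resource', 'service', 'event', 'group'):
        if field in rule and alert.get(field) != rule[field]:
            return False
    # one or more tags can be required to match for suppression
    if not set(rule.get('tags', [])).issubset(alert.get('tags', [])):
        return False
    return True
```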
When an alert with the same `environment`-`resource`-`event` combination is received with the same `severity`, the alert is de-duplicated. This means that information from the de-duplicated alert is used to update key attributes of the existing alert (like `lastReceiveTime`) and the new alert is not shown.
Alerts are sorted in the Alerta web UI by `lastReceiveTime` by default, so that the most recent alerts are displayed at the top regardless of whether they were new alerts or de-duplicated alerts.
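De-duplication can be sketched as an upsert keyed on the attributes above. The `deduplicate` helper and the dict-based store are illustrative assumptions, not Alerta's storage layer.

```python
from datetime import datetime, timezone

def deduplicate(db, incoming):
    """Store `incoming` or fold it into a matching existing alert.

    Duplicates share environment, resource, event and severity; the
    stored alert's count and lastReceiveTime are refreshed instead of
    a new alert being shown.
    """
    key = (incoming['environment'], incoming['resource'],
           incoming['event'], incoming['severity'])
    existing = db.get(key)
    if existing is None:
        incoming.update(duplicateCount=0, repeat=False)
        db[key] = incoming
        return incoming
    # duplicate: bump the count and refresh key attributes
    existing['duplicateCount'] += 1
    existing['repeat'] = True
    existing['lastReceiveTime'] = datetime.now(timezone.utc)
    return existing
```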
Alerta implements what we call “simple correlation” – as opposed to complex correlation which is much more involved. Simple correlation, in combination with de-duplication, provides straightforward and effective ways to reduce the burden of managing an alert console.
With Alerta, there are two ways alerts can be correlated, namely:
When an alert with the same `environment`-`resource`-`event` combination is received with a different `severity`, then the alert is correlated.
When an alert with the same `environment`-`resource` combination is received with an `event` in the `correlate` list of related events, with any severity, then the alert is correlated.
In both cases, this means that information from the correlated alert is used to update key attributes of the existing alert (like `lastReceiveTime`) and the new alert is not shown.
Alerta is called state-based because it will automatically change the alert status based on the current and previous severity of alerts and subsequent user actions.
The Alerta API will:

- only show the most recent state of any alert
- change the status of an alert to `closed` if a `normal` severity alert is received
- change the status of a `closed` alert to `open` if the event reoccurs
- change the status of an `ack`'ed alert to `open` if the new severity is higher than the current severity
- set the `trendIndication` attribute based on whether the alert is more or less severe than the previous alert
- append to the `history` log following a `severity` or `status` change (see alert history)
All of these automatic actions combine to ensure that important alerts are given the priority they deserve.
To take full advantage of the state-based browser it is recommended to implement the timeout of expired alerts using the House Keeping feature.
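The automatic status changes listed above can be sketched as a transition function. The severity ranking below (lower number = more severe) and the `next_status` helper are simplified assumptions, not Alerta's configurable severity map.

```python
# Illustrative severity ranking: lower rank means more severe.
SEVERITY_RANK = {'critical': 1, 'major': 2, 'minor': 3,
                 'warning': 4, 'normal': 5}

def next_status(current_status, current_severity, new_severity):
    """Compute the automatic status change for a correlated alert."""
    if new_severity == 'normal':
        return 'closed'            # a normal severity alert closes it
    if current_status == 'closed':
        return 'open'              # the event has reoccurred
    if current_status == 'ack' and \
            SEVERITY_RANK[new_severity] < SEVERITY_RANK[current_severity]:
        return 'open'              # escalated above the ack'ed severity
    return current_status
```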
Whenever an alert status or severity changes, that change is recorded in the alert history log. This allows operations staff to follow the lifecycle of a particular alert, if necessary.
The alert history is visible in the Alert Details page of any alert, and also by using the `alerta` command-line tool. For example, it will show whether an alert status change happened as a result of an operator (`external`) action or an automatic correlation (`auto`) action.
An Alerta heartbeat is a periodic HTTP request sent to the Alerta API to indicate normal operation of the origin of the heartbeat.
They can be used to ensure components of the Alerta monitoring system are operating normally, or they can be sent from any other source. As well as an `origin`, they include a `timeout` in seconds (after which they will be considered stale).
They are visible in the Alerta console (Heartbeats page) and via the command-line tool, using the `alerta heartbeat` sub-command to send them and the `alerta heartbeats` sub-command to view them.
Alerts can be generated from stale or slow heartbeats using `alerta heartbeats --alert`. For more information about generating alerts from heartbeats see the heartbeats command.
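The staleness check behind this can be sketched as below. The `is_stale` helper and the dict fields are illustrative assumptions mirroring the `receiveTime` and `timeout` described above, not Alerta's implementation.

```python
from datetime import datetime, timedelta, timezone

def is_stale(heartbeat, now=None):
    """Return True if a heartbeat has not been refreshed in time.

    `heartbeat` holds a timezone-aware `receiveTime` datetime and a
    `timeout` in seconds; older than the timeout means stale.
    """
    now = now or datetime.now(timezone.utc)
    age = now - heartbeat['receiveTime']
    return age > timedelta(seconds=heartbeat['timeout'])
```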