
Handling Prometheus Alerts with n8n Workflows

Introduction

Prometheus Alertmanager is powerful, but its routing configuration can be complex. Using n8n as an alert handler gives you more flexibility - route based on any criteria, enrich alerts with additional data, and integrate with any notification system.

Download the Workflow
Get the ready-to-use workflow from our n8n Workflow Gallery.

Prerequisites

  • An n8n instance with public webhook URL
  • Prometheus with Alertmanager configured
  • PagerDuty account (for critical alerts)
  • Slack workspace (for warnings)

Workflow Overview

Alertmanager → n8n Webhook → Split Alerts → Route by Severity → PagerDuty/Slack/Email

Step-by-Step Setup

Step 1: Configure Alertmanager

Add n8n as a webhook receiver in alertmanager.yml:

receivers:
  - name: 'n8n-handler'
    webhook_configs:
      - url: 'https://your-n8n-instance/webhook/alertmanager'
        send_resolved: true

route:
  receiver: 'n8n-handler'
  group_by: ['alertname', 'severity']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
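
Before reloading, you can sanity-check the file with amtool check-config alertmanager.yml; Alertmanager picks up the change after a SIGHUP or a POST to its /-/reload endpoint.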

Step 2: Create the Webhook Endpoint

  1. Add a Webhook node in n8n
  2. Set HTTP Method to POST
  3. Set path to alertmanager
  4. Response Mode: On Received
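
Once the workflow is active, the production webhook URL takes the form https://your-n8n-instance/webhook/alertmanager, which is what the url in Step 1 points at (the test URL uses /webhook-test/ instead).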

Alertmanager sends payloads like:

{
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "HighMemoryUsage",
        "severity": "critical",
        "instance": "server-01"
      },
      "annotations": {
        "summary": "Memory usage above 90%",
        "description": "Server server-01 memory at 95%"
      },
      "startsAt": "2025-01-15T10:00:00Z"
    }
  ]
}
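
Depending on your n8n version and webhook settings, this payload may arrive nested under $json.body rather than at the top level; if so, adjust the expressions in the following steps accordingly (for example {{ $json.body.alerts }}).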

Step 3: Split Alerts

Add a Split In Batches node to process each alert individually:

  • Batch Size: 1
  • Input: {{ $json.alerts }}
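
If you prefer a Code node over Split In Batches, here is a minimal sketch that emits one item per alert (assuming the payload arrives at the top level as shown above):

// Code node, "Run Once for All Items": emit one n8n item per Alertmanager alert
const payload = $input.first().json;            // the webhook payload from Step 2
const alerts = payload.alerts || [];
return alerts.map(alert => ({ json: alert }));  // each alert becomes its own item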

Step 4: Route by Severity

Add a Switch node based on severity:

Condition 1 - Critical:

{{ $json.labels.severity }} equals "critical"

→ Route to PagerDuty

Condition 2 - Warning:

{{ $json.labels.severity }} equals "warning"

→ Route to Slack

Default: → Route to Email

Step 5: Send to PagerDuty

For critical alerts, add a PagerDuty node:

{
  "routing_key": "YOUR_PAGERDUTY_ROUTING_KEY",
  "event_action": "{{ $json.status === 'firing' ? 'trigger' : 'resolve' }}",
  "dedup_key": "{{ $json.labels.alertname }}-{{ $json.labels.instance }}",
  "payload": {
    "summary": "{{ $json.annotations.summary }}",
    "severity": "critical",
    "source": "{{ $json.labels.instance }}",
    "custom_details": {
      "alertname": "{{ $json.labels.alertname }}",
      "description": "{{ $json.annotations.description }}"
    }
  }
}
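
Because event_action switches to resolve when Alertmanager reports the alert as resolved, the matching dedup_key lets PagerDuty automatically close the incident it opened when the alert first fired.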

Step 6: Send to Slack

For warnings, add a Slack node:

Channel: #alerts

Message:

{{ $json.status === "firing" ? "🔥" : "✅" }} *{{ $json.labels.alertname }}*

{{ $json.annotations.summary }}

*Instance:* {{ $json.labels.instance }}
*Severity:* {{ $json.labels.severity }}
*Status:* {{ $json.status }}

Step 7: Send Email (Optional)

For low-severity alerts:

To: ops-team@company.com
Subject: [{{ $json.labels.severity }}] {{ $json.labels.alertname }}
Body: Alert details
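
A minimal body template, reusing the same expressions as the Slack message (field names follow the sample payload above):

{{ $json.annotations.summary }}

Instance: {{ $json.labels.instance }}
Severity: {{ $json.labels.severity }}
Status: {{ $json.status }}
Started: {{ $json.startsAt }}
Description: {{ $json.annotations.description }}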

Advanced Routing

Route by Team

Different teams own different services:

// Code node: map the alert's service label to the owning team's Slack channel
const service = $json.labels.service;
const teamChannels = {
  'api': '#api-alerts',
  'frontend': '#frontend-alerts',
  'database': '#dba-alerts'
};
const channel = teamChannels[service] || '#general-alerts';
return { json: { ...$json, channel } };
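
A Slack node placed after this Code node can then read the target channel with the expression {{ $json.channel }}.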

Business Hours Routing

Reduce noise outside business hours:

// Code node: pick a route name; note that getHours() uses the n8n server's timezone
const hour = new Date().getHours();
const isBusinessHours = hour >= 9 && hour < 18;

let route;
if ($json.labels.severity === 'critical') {
  route = 'pagerduty'; // Always page for critical
} else if (isBusinessHours) {
  route = 'slack';
} else {
  route = 'email'; // Batch for morning review
}
return { json: { ...$json, route } };
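
As with the team routing above, a Switch node after this Code node can branch on {{ $json.route }} to reach the PagerDuty, Slack, or email paths.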

Alert Enrichment

Add context from other sources:

// Code node: enrich the alert with server metadata from a CMDB
// (URL is illustrative; fetch must be available in your Code node environment)
const response = await fetch(`https://cmdb/api/servers/${$json.labels.instance}`);
const data = await response.json();

return {
  json: {
    ...$json,
    owner: data.owner,
    runbook: data.runbook_url,
    environment: data.environment
  }
};
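
The enriched owner, runbook, and environment fields can then be referenced in later notification templates, for example *Runbook:* {{ $json.runbook }} in the Slack message.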

Handling Resolved Alerts

Alertmanager sends status: resolved when alerts clear:

if ($json.status === 'resolved') {
  // Send resolution notification
  // Or auto-resolve PagerDuty incident
}
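
The send_resolved: true setting from Step 1 controls whether these resolved notifications are delivered to n8n at all.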

Troubleshooting

Alerts Not Arriving

  • Check Alertmanager logs for webhook errors
  • Verify n8n webhook URL is accessible
  • Test with curl: curl -X POST your-webhook-url -H 'Content-Type: application/json' -d '{"test": true}'

Duplicate Alerts

  • Use dedup_key in PagerDuty
  • Implement deduplication in n8n with a cache
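
The group_wait, group_interval, and repeat_interval values from Step 1 also control how often Alertmanager re-sends the same alert group, so tune them before adding extra logic in n8n.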

Missing Labels

  • Check your Prometheus alerting rules
  • Ensure labels are propagated through Alertmanager

Best Practices

  1. Use deduplication - Prevent alert storms
  2. Include runbook links - Speed up resolution
  3. Set appropriate severities - Reserve critical for true emergencies
  4. Test your routing - Send test alerts through Alertmanager's API (POST to /api/v2/alerts) and confirm they reach the right branch
  5. Monitor your monitoring - Alert if n8n webhook fails

Conclusion

Using n8n to handle Prometheus alerts gives you unlimited flexibility in routing, enrichment, and notification. You can adapt quickly to changing requirements without modifying Alertmanager configuration.

Download the complete workflow from our n8n Workflow Gallery.