Handling Prometheus Alerts with n8n Workflows
Introduction
Prometheus Alertmanager is powerful, but its routing configuration can be complex. Using n8n as an alert handler gives you more flexibility - route based on any criteria, enrich alerts with additional data, and integrate with any notification system.
Prerequisites
- An n8n instance with public webhook URL
- Prometheus with Alertmanager configured
- PagerDuty account (for critical alerts)
- Slack workspace (for warnings)
Workflow Overview
Alertmanager → n8n Webhook → Split Alerts → Route by Severity → PagerDuty/Slack/Email
Step-by-Step Setup
Step 1: Configure Alertmanager
Add n8n as a webhook receiver in alertmanager.yml:
receivers:
  - name: 'n8n-handler'
    webhook_configs:
      - url: 'https://your-n8n-instance/webhook/alertmanager'
        send_resolved: true
route:
  receiver: 'n8n-handler'
  group_by: ['alertname', 'severity']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
Step 2: Create the Webhook Endpoint
- Add a Webhook node in n8n
- Set HTTP Method to POST
- Set path to alertmanager
- Response Mode: On Received
Alertmanager sends payloads like:
{
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "HighMemoryUsage",
        "severity": "critical",
        "instance": "server-01"
      },
      "annotations": {
        "summary": "Memory usage above 90%",
        "description": "Server server-01 memory at 95%"
      },
      "startsAt": "2025-01-15T10:00:00Z"
    }
  ]
}
Step 3: Split Alerts
Add a Split In Batches node to process each alert individually:
- Batch Size: 1
- Input: {{ $json.alerts }}
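Alternatively, a Code node can do the same fan-out. A minimal sketch, assuming it runs in "Run Once for All Items" mode directly after the Webhook node:
// Turn the alerts array from the Alertmanager payload into one n8n item
// per alert, so downstream nodes can reference fields as $json.labels etc.
const alerts = $input.first().json.alerts || [];
return alerts.map(alert => ({ json: alert }));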
Step 4: Route by Severity
Add a Switch node based on severity:
Condition 1 - Critical:
{{ $json.labels.severity }} equals "critical" → Route to PagerDuty
Condition 2 - Warning:
{{ $json.labels.severity }} equals "warning" → Route to Slack
Default: → Route to Email
Step 5: Send to PagerDuty
For critical alerts, add a PagerDuty node:
{
  "routing_key": "YOUR_PAGERDUTY_ROUTING_KEY",
  "event_action": $json.status === "firing" ? "trigger" : "resolve",
  "dedup_key": $json.labels.alertname + "-" + $json.labels.instance,
  "payload": {
    "summary": $json.annotations.summary,
    "severity": "critical",
    "source": $json.labels.instance,
    "custom_details": {
      "alertname": $json.labels.alertname,
      "description": $json.annotations.description
    }
  }
}
Step 6: Send to Slack
For warnings, add a Slack node:
Channel: #alerts
Message:
{{ $json.status === "firing" ? "🔥" : "✅" }} *{{ $json.labels.alertname }}*
{{ $json.annotations.summary }}
*Instance:* {{ $json.labels.instance }}
*Severity:* {{ $json.labels.severity }}
*Status:* {{ $json.status }}
Step 7: Send Email (Optional)
For low-severity alerts:
To: ops-team@company.com
Subject: [{{ $json.labels.severity }}] {{ $json.labels.alertname }}
Body: alert details (see the template below)
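A body template along these lines keeps the message self-contained; it uses the same expression syntax as the Slack message and the field names from the Step 2 sample payload:
{{ $json.annotations.summary }}
{{ $json.annotations.description }}
Instance: {{ $json.labels.instance }}
Severity: {{ $json.labels.severity }}
Started at: {{ $json.startsAt }}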
Advanced Routing
Route by Team
Different teams own different services:
const service = $json.labels.service;
const teamChannels = {
  'api': '#api-alerts',
  'frontend': '#frontend-alerts',
  'database': '#dba-alerts'
};
return teamChannels[service] || '#general-alerts';
Business Hours Routing
Reduce noise outside business hours:
const hour = new Date().getHours(); // uses the n8n server's local timezone
const isBusinessHours = hour >= 9 && hour < 18;
if ($json.labels.severity === 'critical') {
  return 'pagerduty'; // Always page for critical
} else if (isBusinessHours) {
  return 'slack';
} else {
  return 'email'; // Batch for morning review
}
Alert Enrichment
Add context from other sources:
// Fetch server info from CMDB
const serverInfo = await fetch(`https://cmdb/api/servers/${$json.labels.instance}`);
const data = await serverInfo.json();
return {
  ...$json,
  owner: data.owner,
  runbook: data.runbook_url,
  environment: data.environment
};
Handling Resolved Alerts
When an alert clears, Alertmanager sends the same payload again with status: resolved (because send_resolved: true is set in Step 1):
if ($json.status === 'resolved') {
  // Send a resolution notification,
  // or auto-resolve the PagerDuty incident. The Step 5 payload already
  // handles this: event_action becomes "resolve" and the matching
  // dedup_key closes the incident that was opened when the alert fired.
}
Troubleshooting
Alerts Not Arriving
- Check Alertmanager logs for webhook errors
- Verify n8n webhook URL is accessible
- Test with curl, sending a payload shaped like the Step 2 sample:
curl -X POST https://your-n8n-instance/webhook/alertmanager -H 'Content-Type: application/json' -d '{"status": "firing", "alerts": []}'
Duplicate Alerts
- Use dedup_key in PagerDuty
- Implement deduplication in n8n with a cache (see the sketch below)
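A minimal sketch of the cache approach, using a Code node and n8n workflow static data (assumptions: a single n8n instance, a five-minute window, "Run Once for All Items" mode; note that static data only persists for active workflow executions, not manual test runs):
// Drop alerts already seen within the last five minutes.
const cache = $getWorkflowStaticData('global');
cache.seen = cache.seen || {};
const ttlMs = 5 * 60 * 1000;
const now = Date.now();
const fresh = [];
for (const item of $input.all()) {
  const alert = item.json;
  const key = `${alert.labels.alertname}-${alert.labels.instance}-${alert.status}`;
  if (cache.seen[key] && now - cache.seen[key] < ttlMs) {
    continue; // duplicate within the window: drop it
  }
  cache.seen[key] = now;
  fresh.push(item);
}
return fresh;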
Missing Labels
- Check your Prometheus alerting rules (an example rule follows this list)
- Ensure labels are propagated through Alertmanager
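For reference, a minimal alerting rule that sets the labels the routing above relies on (the metric, expression, and threshold are illustrative, not a recommendation):
groups:
  - name: example
    rules:
      - alert: HighMemoryUsage
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
        for: 5m
        labels:
          severity: critical
          service: api # picked up by the team-routing snippet above
        annotations:
          summary: "Memory usage above 90%"
          description: "Server {{ $labels.instance }} memory usage is above 90%"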
Best Practices
- Use deduplication - Prevent alert storms
- Include runbook links - Speed up resolution
- Set appropriate severities - Reserve critical for true emergencies
- Test your routing - Send a test alert through the Alertmanager API or amtool and confirm it reaches the right channel
- Monitor your monitoring - Alert if the n8n webhook fails (see the example rule below)
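One way to cover the last point is to alert on Alertmanager's own delivery metrics, so a broken n8n webhook gets noticed. A sketch (adjust the threshold and labels to your setup, and route this particular alert through a receiver that does not depend on the n8n webhook):
- alert: AlertmanagerWebhookDeliveryFailing
  expr: rate(alertmanager_notifications_failed_total{integration="webhook"}[5m]) > 0
  for: 10m
  labels:
    severity: warning
  annotations:
    summary: "Alertmanager is failing to deliver webhook notifications"
    description: "Check that the n8n webhook URL is reachable and the workflow is active."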
Conclusion
Using n8n to handle Prometheus alerts gives you unlimited flexibility in routing, enrichment, and notification. You can adapt quickly to changing requirements without modifying Alertmanager configuration.
Download the complete workflow from our n8n Workflow Gallery.