Handling Prometheus Alerts with n8n Workflows

Prometheus Alertmanager is powerful, but its routing configuration can get complex. Using n8n as an alert handler gives you more flexibility: route on any criteria, enrich alerts with additional data, and integrate with any notification system.

Download the Workflow
Get the ready-to-use workflow from our n8n Workflow Gallery.
To follow along, you'll need:

  • An n8n instance with a public webhook URL
  • Prometheus with Alertmanager configured
  • A PagerDuty account (for critical alerts)
  • A Slack workspace (for warnings)

At a high level, the workflow looks like this:

Alertmanager → n8n Webhook → Split Alerts → Route by Severity → PagerDuty/Slack/Email

Add n8n as a webhook receiver in alertmanager.yml:

receivers:
  - name: 'n8n-handler'
    webhook_configs:
      - url: 'https://your-n8n-instance/webhook/alertmanager'
        send_resolved: true

route:
  receiver: 'n8n-handler'
  group_by: ['alertname', 'severity']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h

Next, create the webhook trigger in n8n:

  1. Add a Webhook node
  2. Set HTTP Method to POST
  3. Set the path to alertmanager
  4. Set Response Mode to On Received

Alertmanager sends payloads like:

{
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alertname": "HighMemoryUsage",
        "severity": "critical",
        "instance": "server-01"
      },
      "annotations": {
        "summary": "Memory usage above 90%",
        "description": "Server server-01 memory at 95%"
      },
      "startsAt": "2025-01-15T10:00:00Z"
    }
  ]
}
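
To check the webhook end to end before pointing Alertmanager at it, you can POST a payload shaped like the one above yourself. A minimal sketch, assuming Node 18+ for the global fetch; the URL is a placeholder for your own webhook path:

// test-webhook.mjs: send a sample Alertmanager-style payload to the n8n webhook.
const payload = {
  status: 'firing',
  alerts: [
    {
      status: 'firing',
      labels: { alertname: 'HighMemoryUsage', severity: 'critical', instance: 'server-01' },
      annotations: { summary: 'Memory usage above 90%', description: 'Server server-01 memory at 95%' },
      startsAt: new Date().toISOString(),
    },
  ],
};

const res = await fetch('https://your-n8n-instance/webhook/alertmanager', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
});

console.log(res.status, await res.text());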

Each webhook call can carry several grouped alerts, so add a Split In Batches node to process each alert individually:

  • Batch Size: 1
  • Input: {{ $json.alerts }}
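
If you prefer to do the fan-out in code instead, a Code node can turn the single webhook item into one item per alert. A minimal sketch, assuming the node runs in "Run Once for All Items" mode and the payload has the shape shown above:

// Code node: emit one item per alert so downstream nodes see $json.labels, $json.annotations, etc.
const alerts = $input.first().json.alerts || [];

return alerts.map(alert => ({ json: alert }));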

Add a Switch node based on severity:

Condition 1 - Critical:

{{ $json.labels.severity }} equals "critical"

→ Route to PagerDuty

Condition 2 - Warning:

{{ $json.labels.severity }} equals "warning"

→ Route to Slack

Default: → Route to Email

For critical alerts, add a PagerDuty node:

{
  "routing_key": "YOUR_PAGERDUTY_ROUTING_KEY",
  "event_action": "{{ $json.status === 'firing' ? 'trigger' : 'resolve' }}",
  "dedup_key": "{{ $json.labels.alertname }}-{{ $json.labels.instance }}",
  "payload": {
    "summary": "{{ $json.annotations.summary }}",
    "severity": "critical",
    "source": "{{ $json.labels.instance }}",
    "custom_details": {
      "alertname": "{{ $json.labels.alertname }}",
      "description": "{{ $json.annotations.description }}"
    }
  }
}
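
If you would rather call PagerDuty through a plain HTTP Request node, the same Events API v2 payload can be assembled in a Code node first. A sketch, assuming the incoming items are individual alerts and using the same routing key placeholder as above:

// Code node: build a PagerDuty Events API v2 event for each alert item.
return $input.all().map(item => {
  const alert = item.json;
  return {
    json: {
      routing_key: 'YOUR_PAGERDUTY_ROUTING_KEY',
      event_action: alert.status === 'firing' ? 'trigger' : 'resolve',
      dedup_key: `${alert.labels.alertname}-${alert.labels.instance}`,
      payload: {
        summary: alert.annotations.summary,
        severity: 'critical',
        source: alert.labels.instance,
        custom_details: {
          alertname: alert.labels.alertname,
          description: alert.annotations.description,
        },
      },
    },
  };
});

An HTTP Request node can then POST each item's JSON to https://events.pagerduty.com/v2/enqueue.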

For warnings, add a Slack node:

Channel: #alerts

Message:

{{ $json.status === "firing" ? "🔥" : "✅" }} *{{ $json.labels.alertname }}*

{{ $json.annotations.summary }}

*Instance:* {{ $json.labels.instance }}
*Severity:* {{ $json.labels.severity }}
*Status:* {{ $json.status }}

For low-severity alerts, add a Send Email node:

To: ops-team@company.com
Subject: [{{ $json.labels.severity }}] {{ $json.labels.alertname }}
Body: {{ $json.annotations.summary }} ({{ $json.annotations.description }})

If different teams own different services, pick the Slack channel from the alert's service label:

const service = $json.labels.service;
const teamChannels = {
  'api': '#api-alerts',
  'frontend': '#frontend-alerts',
  'database': '#dba-alerts'
};
return teamChannels[service] || '#general-alerts';
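
If this runs in a Code node rather than an expression, attach the channel to the item so the Slack node can reference it. A minimal sketch; the slackChannel field name is arbitrary:

// Code node: tag each alert with the owning team's Slack channel.
const teamChannels = {
  api: '#api-alerts',
  frontend: '#frontend-alerts',
  database: '#dba-alerts',
};

return $input.all().map(item => ({
  json: {
    ...item.json,
    slackChannel: teamChannels[item.json.labels?.service] || '#general-alerts',
  },
}));

The Slack node's Channel field can then be set to {{ $json.slackChannel }}.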

Reduce noise outside business hours:

const hour = new Date().getHours();
const isBusinessHours = hour >= 9 && hour < 18;

if ($json.labels.severity === 'critical') {
  return 'pagerduty'; // Always page for critical
} else if (isBusinessHours) {
  return 'slack';
} else {
  return 'email'; // Batch for morning review
}

Add context from other sources:

// Fetch server info from CMDB
const serverInfo = await fetch(`https://cmdb/api/servers/${$json.labels.instance}`);
const data = await serverInfo.json();

return {
  ...$json,
  owner: data.owner,
  runbook: data.runbook_url,
  environment: data.environment
};

Alertmanager sends status: resolved when alerts clear:

if ($json.status === 'resolved') {
  // Send resolution notification
  // Or auto-resolve PagerDuty incident
}
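
A sketch of the resolve branch for PagerDuty, following the same fetch-from-a-Code-node pattern as the enrichment example above (whether fetch is available in the Code node depends on your n8n version and settings). It reuses the dedup_key that was sent when the alert fired:

// Code node (Run Once for Each Item): close the matching PagerDuty incident when the alert clears.
if ($json.status === 'resolved') {
  await fetch('https://events.pagerduty.com/v2/enqueue', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      routing_key: 'YOUR_PAGERDUTY_ROUTING_KEY', // same key used to trigger
      event_action: 'resolve',
      dedup_key: `${$json.labels.alertname}-${$json.labels.instance}`,
    }),
  });
}

return $input.item;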

If alerts aren't reaching n8n:

  • Check Alertmanager logs for webhook errors
  • Verify the n8n webhook URL is reachable from Alertmanager
  • Test with curl: curl -X POST your-webhook-url -d '{"test": true}'

If you're getting duplicate notifications:

  • Use dedup_key in PagerDuty
  • Implement deduplication in n8n with a cache (see the sketch below)

If labels are missing from alerts:

  • Check your Prometheus alerting rules
  • Ensure labels are propagated through Alertmanager
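
One way to implement that cache is n8n's workflow static data. A minimal sketch; the 15-minute suppression window is arbitrary, and note that static data only persists for active (production) executions, not manual test runs:

// Code node: drop alerts that were already notified within the suppression window.
const staticData = $getWorkflowStaticData('global');
staticData.seen = staticData.seen || {};

const windowMs = 15 * 60 * 1000;
const now = Date.now();

return $input.all().filter(item => {
  const key = `${item.json.labels.alertname}-${item.json.labels.instance}-${item.json.status}`;
  if (staticData.seen[key] && now - staticData.seen[key] < windowMs) {
    return false; // duplicate within the window, drop it
  }
  staticData.seen[key] = now;
  return true;
});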

A few best practices:

  1. Use deduplication - Prevent alert storms
  2. Include runbook links - Speed up resolution
  3. Set appropriate severities - Reserve critical for true emergencies
  4. Test your routing - Send test alerts through the Alertmanager API
  5. Monitor your monitoring - Alert if the n8n webhook fails

Using n8n to handle Prometheus alerts gives you far more flexibility in routing, enrichment, and notification, and lets you adapt to changing requirements without repeatedly editing Alertmanager's routing configuration.

Download the complete workflow from our n8n Workflow Gallery.