
n8n Workflow Auditor

A reusable auditing tool for n8n that automatically analyses workflows for security issues, performance risks, AI usage, error handling gaps, and naming clarity.

It generates a Markdown or PDF report that can be consumed directly, published (e.g., GitHub Gist, Slack, Confluence), or enriched by an LLM for narrative summarisation.

👀 Example Output


🌟 Features

  • Security Analysis

    • Detects leaked secrets, API keys, bearer tokens, and weak passwords
    • Flags company identifiers configured in the Configuration node
    • Reports on credential usage (hardcoded, OAuth, predefined, generic)
  • Performance Analysis

    • Identifies bottlenecks (slow nodes vs workflow average)
    • Flags unstable nodes (high runtime variance)
    • Detects disabled and never-executed nodes
    • Reports per-node execution stats (avg runtime, stdev, samples, dominant share)
  • AI/LLM Usage Tracking

    • Audits all AI nodes (@n8n/n8n-nodes-langchain)
    • Extracts model, preview of prompts, token estimates, execution times
    • Provides counts: active, disabled, and total AI nodes
  • Error Handling Analysis

    • Flags nodes with potential runtime failures (APIs, DB queries, LLM calls, custom code)
    • Adds per-node recommendations (retry on failure, continue on error, fallback models, polling)
    • Includes global golden rules for error handling (Error Workflows, retries, notifications, failover strategies)
  • Node Naming Audit

    • Detects nodes left with default names (e.g., Node1, Node2) or unclear short names
    • Recommends descriptive, consistent naming to improve maintainability and debugging
  • Reporting

    • Outputs Markdown (audit_report.md)
    • Optionally generates PDF (audit_report.pdf)
    • Can publish reports as GitHub Gists (default) or send to Slack/Confluence/etc.
    • Supports AI summarisation for narrative reports (via OpenAI)
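
A minimal sketch of the bottleneck/instability heuristics listed above, using the documented default thresholds. The input shape (a map of node names to runtimes in milliseconds) is an assumption for illustration, not the auditor's internal format:

```javascript
// Flags a node as a bottleneck when its average runtime exceeds the
// workflow-wide average by the bottleneckThreshold multiplier (default 1.5),
// and as unstable when stdev/avg exceeds unstableThreshold (default 0.5).
const BOTTLENECK_THRESHOLD = 1.5;
const UNSTABLE_THRESHOLD = 0.5;

function analyseNodes(nodeRuntimes) {
  // nodeRuntimes: { [nodeName]: number[] } — runtimes in ms, one per execution
  const stats = Object.entries(nodeRuntimes).map(([name, runs]) => {
    const avg = runs.reduce((a, b) => a + b, 0) / runs.length;
    const variance = runs.reduce((a, b) => a + (b - avg) ** 2, 0) / runs.length;
    return { name, avg, stdev: Math.sqrt(variance), samples: runs.length };
  });
  const workflowAvg = stats.reduce((a, s) => a + s.avg, 0) / stats.length;
  return stats.map((s) => ({
    ...s,
    isBottleneck: s.avg > workflowAvg * BOTTLENECK_THRESHOLD,
    isUnstable: s.avg > 0 && s.stdev / s.avg > UNSTABLE_THRESHOLD,
  }));
}

const report = analyseNodes({
  'HTTP Request': [1200, 1300, 1250], // slow but consistent
  'Set Fields': [20, 25, 22],         // fast
});
```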

⚙️ Setup

View visual setup guide here.

1. Import Workflow

Import the provided n8n Workflow Auditor.json file into your n8n instance as its own workflow.

2. Configure Settings

Open the Configuration node and adjust:

  • maxExecutions → number of past runs to analyse (default: 5)
  • bottleneckThreshold → multiplier over the workflow average used to flag bottlenecks (default: 1.5; rarely needs changing)
  • unstableThreshold → stdev/avg ratio used to flag instability (default: 0.5; rarely needs changing)
  • shouldAuditSubflows → include subflows (true/false)
  • companyIdentifiers → comma-separated sensitive terms you want flagged (e.g. AcmeCorp,ProjectX)
  • shouldUseAIReporting → true = LLM summarisation · false = scripted Markdown
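
For reference, the settings above might look like this once filled in (illustrative values only):

```json
{
  "maxExecutions": 5,
  "bottleneckThreshold": 1.5,
  "unstableThreshold": 0.5,
  "shouldAuditSubflows": false,
  "companyIdentifiers": "AcmeCorp,ProjectX",
  "shouldUseAIReporting": false
}
```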

3. Connect Credentials

Ensure the following credentials are configured in n8n:

  • n8nApi → access your instance settings and generate an API key (used by Get Executions)
  • openAiApi (or another LLM) → for AI summarisation (if enabled)
  • httpCustomAuth + githubApi → for publishing reports as GitHub Gists (optional)

(You can replace this step with Slack, Confluence, or ConvertAPI.)

4. Select Target Workflow

In the Get Latest Successful Executions of Workflow node, set the workflowId of the workflow you want to audit.
Optionally configure Get Latest Successful Executions of Subworkflows if analysing subflows.

5. Run the Audit

Trigger the workflow manually (Test workflow) to run the audit:

  1. Fetches the last N executions
  2. Runs security, performance, and AI audits
  3. Generates a Markdown/PDF report

6. View Reports

  • Default: uploaded as a private GitHub Gist
  • Alternative: sent to Slack/Confluence/ConvertAPI (replace output nodes)
  • PDF generation is included if enabled in the final stage

📜 Report Contents

Each report includes:

  • Glossary of audit terms
  • Global Summary (workflows analysed, issues by severity)
  • Per-workflow breakdown:
    • Security Findings (tables of issues)
    • Performance Findings (slow nodes, bottlenecks, disabled/unused)
    • API & Credential Usage (node + credential type)
    • AI Audit Findings (model, prompt preview, token estimate, runtime)
    • Error Handling Findings (node-level risks + actionable fixes)
    • Recommendations
  • Overall Recommendations (security, performance, AI risks)

⚖️ Scaling Considerations

Running the n8n Workflow Auditor involves fetching past executions, parsing node data, and generating large Markdown/AI reports. This can be resource intensive depending on how many executions you analyse and the size of your workflows.

Instance Sizing (if using AWS)

  • t2.micro / t2.small (1 GB RAM, 1 vCPU) → Not recommended. Workflows with many executions or AI summarisation will likely crash or hang due to memory/CPU limits.
  • t3.small (2 GB RAM, 2 vCPU burstable) → Minimum viable size for small audits.
  • t3.medium (4 GB RAM, 2 vCPU) → Recommended for production use with AI summarisation enabled.
  • t3.large+ (8 GB RAM, 2+ vCPU) → For teams auditing multiple workflows or large histories.

Execution Limits

  • Use the maxExecutions setting in the Configuration node to cap how many runs are analysed (default: 5).
  • Best practice is 20–50 executions per audit. Avoid loading hundreds or thousands at once.

Memory Optimisation

  • Drop unnecessary fields (avoid storing full execution logs).
  • Aggregate metadata only (node name, runtime, errors).
  • Slice arrays (executions.slice(0, N)) when iterating in custom code.
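
The three tips above can be combined in one pass: keep only per-node metadata and slice the execution list before iterating. The execution object shape here is simplified for illustration; real n8n execution payloads are nested more deeply:

```javascript
// Reduce each execution to lightweight metadata (node name, runtime,
// error flag) and cap how many executions are processed at all.
function summariseExecutions(executions, maxExecutions) {
  return executions.slice(0, maxExecutions).map((exec) => ({
    id: exec.id,
    startedAt: exec.startedAt,
    nodes: (exec.nodes || []).map((n) => ({
      name: n.name,
      runtimeMs: n.runtimeMs,
      hasError: Boolean(n.error),
    })), // full logs/payloads are deliberately dropped here
  }));
}

const lean = summariseExecutions(
  [
    {
      id: '101',
      startedAt: '2025-09-01T10:00:00Z',
      nodes: [{ name: 'HTTP Request', runtimeMs: 840, error: null, data: '...large payload...' }],
    },
    { id: '102', startedAt: '2025-09-01T11:00:00Z', nodes: [] },
    { id: '103', startedAt: '2025-09-01T12:00:00Z', nodes: [] },
  ],
  2 // keep only the two most recent executions
);
```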

AI Summarisation

  • AI prompts can grow large when embedding full JSON results.
  • Keep JSON payloads lean (remove fields not needed for reporting).
  • If using AI reporting, ensure at least t3.medium (4 GB RAM) to prevent out-of-memory crashes.
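
One way to keep prompts bounded is a rough token estimate before calling the model. The ~4-characters-per-token ratio below is a common approximation for English text, not an exact tokenizer, and fitToTokenBudget is a hypothetical helper, not part of the workflow:

```javascript
// Estimate tokens from character count (~4 chars/token for English text)
// and crudely truncate the serialised payload if it exceeds the budget.
function fitToTokenBudget(payload, maxTokens) {
  const json = JSON.stringify(payload);
  const estimatedTokens = Math.ceil(json.length / 4);
  if (estimatedTokens <= maxTokens) return json;
  // Truncation may cut mid-field; acceptable for a summarisation prompt,
  // but trimming unneeded fields first (as advised above) is preferable.
  return json.slice(0, maxTokens * 4);
}

const prompt = fitToTokenBudget({ findings: ['leaked key in HTTP Request'] }, 2000);
```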

Stability Enhancements

  • Enable swap space on smaller instances as a buffer (2 GB recommended).
  • Separate heavy audits into dedicated workflows, isolating them from production automations.
  • Consider offloading summarisation (AI Writer) to a serverless function or bigger host if costs allow.
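
For the swap-space suggestion above, one common way to add a 2 GB swap file on a Linux host (standard Linux commands requiring sudo; not part of the workflow itself):

```shell
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```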

🛠️ Troubleshooting

Running audits can sometimes stress small EC2 instances or misconfigured n8n environments.
Here are common issues and fixes:

1. Workflow Crashes / EC2 Instance Freezes

Symptom: The whole n8n process or EC2 instance stops responding during an audit.
Cause: Out-of-memory (OOM) kill on small instances (t2.micro, t2.small). The auditor loads too many executions into memory.
Fix:

  • Upgrade to at least t3.small (2 GB RAM) or t3.medium (4 GB RAM).
  • Limit executions in Configuration → maxExecutions (20–50 recommended).
  • Add swap space (2 GB) for stability buffer.

2. Workflow Hangs at AI Writer

Symptom: The audit gets stuck when generating the AI summary.
Cause: Prompt too large (full JSON > 100s of KB) or instance CPU throttling.
Fix:

  • Reduce execution history (maxExecutions).
  • Trim unneeded fields from JSON before passing to AI.
  • Ensure sufficient memory (4 GB+).

3. Database Bloat

Symptom: SQLite/Postgres DB grows very large, queries slow down.
Cause: n8n stores full workflow execution logs, including audit payloads.
Fix:

  • Use SQLite vacuum or Postgres VACUUM FULL.
  • Configure n8n to prune old executions via environment variables:

```shell
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336                # maximum age in hours (336 h = 14 days)
EXECUTIONS_DATA_PRUNE_HARD_DELETE=true
```

🔮 Roadmap

  • Add inline PDF charts for runtime distribution
  • Expand AI audit with cost estimation (tokens × pricing)
  • Extend destinations: Jira, Notion, Google Drive
  • Add time-of-day performance analysis to identify when performance dips (morning, afternoon, or evening)
  • Identify nested subflows (subflows within subflows); currently only the main workflow and its direct subflows are audited
  • Improve report formatting for both scripted and AI-generated reports
