From de2c4896a3fe87ee1acb1decbf6330e073565f94 Mon Sep 17 00:00:00 2001
From: Stephan Reugels
Date: Wed, 3 Apr 2024 23:52:44 +0200
Subject: [PATCH] Update for exabgp v4

---
 README.md | 41 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a86cabc..fa54d08 100644
--- a/README.md
+++ b/README.md
@@ -24,13 +24,43 @@ Implementing the blocklists as a BGP feed that is then Null-routed on your route
 * The blocklists you want to subscribe to
 * The interval to refresh things (don't make it less than 30 minutes)
 * The proper route announcement and withdrawal syntax for your setup
+* Install Go (e.g. the `golang-go` package on Debian/Ubuntu)
 * Compile the `blocklist` application: `go build blocklist.go`
 * Install and configure [ExaBGP](https://github.com/Exa-Networks/exabgp)
 * Get it peering with your router
 * Have it use the `blocklist` application to provide routes
+  * [optional] If using a huge number of prefixes, set `exabgp.api.ack` in `/etc/exabgp.env` to `false`
 * Fire it up
 
-### Example `exabgp` Config File
+### Example `exabgp` v4+ Config File
+```
+process droproutes {
+    run /wherever/you/put/the/application/blocklist;
+    encoder text;
+}
+
+template {
+    neighbor AS65332 {
+        router-id 192.168.1.1;
+        local-as 65332;
+        local-address 192.168.1.2;
+        peer-as 65256;
+        family {
+            ipv4 unicast;
+            ipv4 multicast;
+        }
+        api {
+            processes [ droproutes ];
+        }
+    }
+}
+
+neighbor 192.168.1.1 {
+    inherit AS65332;
+}
+```
+
+### Example `exabgp` v3 Config File
 
 ```
 group AS65332 {
@@ -57,6 +87,15 @@ group AS65332 {
 }
 ```
 
+## Troubleshooting
+
+### ExaBGP v4+ crashing with >10,000 prefixes
+* Make sure you set `exabgp.api.ack` in `/etc/exabgp.env` to `false`:
+```
+[exabgp.api]
+ack = false
+```
+
 ## Motivation
 
 While [exabgp-edgerouter](https://github.com/infowolfe/exabgp-edgerouter) provided the functionality I wanted, its performance was not ideal as the blocklists grew in size. A list of approximately 2,000 entries took about 90 seconds to process, deduplicate, and consolidate into CIDR blocks. When I increased the lists I wanted to follow to ones composed of approximately 45,000 entries, the script was still running 90 minutes later. That wasn't going to work, so I rewrote the algorithm to be more efficient. A 45,000-entry list is now processed in under a second.
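
In the v4 config above, the `process droproutes` block launches the feeder application, and ExaBGP exchanges commands with it over stdin/stdout using the text encoder, one API command per line. The sketch below illustrates what such a feeder can look like; it is a minimal, hypothetical stand-in rather than the actual `blocklist.go` — the example prefixes, the `next-hop self` choice, and the 30-minute refresh interval are all assumptions for illustration.

```
package main

import (
	"bufio"
	"fmt"
	"os"
	"time"
)

func main() {
	out := bufio.NewWriter(os.Stdout)

	// Placeholder prefixes standing in for a fetched, deduplicated,
	// CIDR-consolidated blocklist (illustrative values only).
	prefixes := []string{"192.0.2.0/24", "198.51.100.0/24"}

	for {
		for _, p := range prefixes {
			// ExaBGP's text API takes one command per line, e.g.
			// "announce route <prefix> next-hop self".
			fmt.Fprintf(out, "announce route %s next-hop self\n", p)
		}
		out.Flush()

		// Per the checklist above, don't refresh more often than
		// every 30 minutes.
		time.Sleep(30 * time.Minute)
	}
}
```

Prefixes that drop off a blocklist would be retracted with a matching `withdraw route <prefix> ...` command. Note that with `exabgp.api.ack` at its default of `true`, ExaBGP writes an acknowledgement (`done` or `error`) back to the process for every command, so a feeder that never reads those replies can stall once the pipe buffer fills — which is likely the failure mode the Troubleshooting section's `ack = false` setting avoids.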