FAQ
The following are general questions related to Gatekeeper. For general questions about DDoS, see our answers on Quora.
How does Gatekeeper compare to a CDN solution?

Gatekeeper addresses infrastructure-layer DDoS attacks, while CDNs address application-layer DDoS attacks. Infrastructure-layer attacks target all layers of a protocol stack except the application layer; they include the typical SYN/ICMP/UDP floods and DNS/NTP/Memcached amplifications as well as advanced attacks such as carpet bombing, catch-22 attacks, Coremelt, and Crossfire. Application-layer attacks, in contrast, include HTTP floods, Slowloris, cache-busting attacks, CDN bypass, and the exploitation of any critical (e.g., login) or costly (e.g., search) part of the application.
More than 90% of all DDoS attacks are infrastructure-layer attacks (see, for example, the figure "Distribution of DDoS attacks by type" in the report Kaspersky DDoS attacks in Q4 2019). Moreover, if the goal is to protect an application that is not built on top of HTTP, such as email, games, and DNS, a CDN does not help even against the less than 10% of attacks that target the application layer.
Protecting against application-layer attacks is not affordable without a solution for infrastructure-layer attacks. The memory footprint of a connection alone can be more than 100x that of a flow in Gatekeeper. For example, with web pages averaging ~2MB in size, the memory footprint to sustain a TCP or QUIC connection can easily reach 25KB, which is 100x the 256B required for a flow in Gatekeeper (see Section 6.2 of our technical report). More can be said on the cost of CPU, latency, packets per second, and manageability, but this information is not publicly available. The vital need for a solution for infrastructure-layer attacks is recognized in the industry through the growing number of tools that CDNs and other companies employ to deal with infrastructure attacks (see Chapter 10 of the book Building Secure and Reliable Systems and some of the tools employed in production).
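A quick back-of-the-envelope check of the 100x figure above, using the numbers from the technical report:

```lua
-- Rough check of the memory-footprint comparison above.
local connection_bytes = 25 * 1024  -- ~25KB to sustain one TCP/QUIC connection
local flow_bytes = 256              -- 256B per flow in Gatekeeper
print(connection_bytes / flow_bytes)  -- 25600 / 256 = 100
```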
To summarize, the choice between Gatekeeper and a CDN solution is a choice of which class of attacks one is working to stop: infrastructure-layer or application-layer attacks. For some, both Gatekeeper and a CDN solution may be required. But even if Gatekeeper is not employed, some solution for infrastructure-layer attacks is still needed.
Can I deploy Gatekeeper at a single vantage point?

If you have enough bandwidth to withstand the attacks against you and enough Gatekeeper servers to process that bandwidth, the answer is yes. The typical limitations of this setup are that (1) adding bandwidth or servers can become more expensive than adding another vantage point, and (2) having a single vantage point constrains what a policy can do. For example, having multiple vantage points enables policies to filter spoofed source addresses when those packets come in through the wrong vantage point. There are sophisticated attacks that, as far as we know, can only be mitigated with multiple vantage points; see our technical report Circumventing Crossfire Attacks via Limited-Access Cloud Paths for details. Nevertheless, bootstrapping a Gatekeeper deployment with a single vantage point before going bigger is a tested, winning strategy.
Does Gatekeeper support IPv6?

Gatekeeper supports both IPv4 and IPv6 networks.
What is a flow?

A flow is defined as the pair of source and destination IP addresses. All policy decisions are enforced over flows.
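As a conceptual sketch (not Gatekeeper's internal representation), a flow key carries nothing beyond the two addresses, so every packet exchanged between the same pair of hosts falls under one policy decision:

```lua
-- Conceptual sketch only; the addresses are placeholders.
-- All packets from 198.51.100.7 to 203.0.113.10 belong to this one flow,
-- regardless of transport protocol or port numbers.
local flow = { src_ip = "198.51.100.7", dst_ip = "203.0.113.10" }
```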
What is a vantage point?

Vantage points (VPs) are locations that support the deployment of Gatekeeper servers. Besides basic hardware requirements, these locations must provide BGP speakers to announce the protected network prefixes and private links between the VP and the protected destination. The private links make Gatekeeper servers the entry points of traffic toward the protected destinations, and they can be implemented with a number of technologies, including regular tunnels. Typical VPs are Internet exchanges, points of presence, peering-link locations, and (some) cloud providers; not all cloud providers support BGP announcements.
How should I allocate the memory of Gatekeeper and Grantor servers?

Gatekeeper and Grantor servers are designed to process as many packets as possible on the hardware on which they are installed. Because they exhaust the resources of their hardware, we recommend assigning each server a single role, that is, each server is either a Gatekeeper server or a Grantor server, and nothing else. Under this assumption, we have found that reserving 16GB of memory for the operating system and dedicating all the remaining memory to huge pages works fine.
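For instance, one common way to reserve huge pages is at boot via kernel parameters. The sketch below assumes a server with 128GB of RAM and 1GB huge pages (see the answers below for why 1GB pages are recommended); the numbers are illustrative only:

```
# Hypothetical kernel command-line addition (e.g., appended to
# GRUB_CMDLINE_LINUX in /etc/default/grub): reserve 112 1GB huge pages
# on a 128GB server, leaving 16GB for the operating system.
default_hugepagesz=1G hugepagesz=1G hugepages=112
```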
How do I address the following log entry "GATEKEEPER: net: there are only X network ports available to DPDK/Gatekeeper, but configuration is using Y ports"?
The number "Y" in the log entry should always be two on a Gatekeeper server, and one on a Grantor server. If it is not the case, there is an error in your configuration files. The number "X" is always less than the number "Y" in this log entry, and the likely cause for this issue is that at least one of the network interfaces has not been bound to DPDK. You find how to do it in section Configure Network Adapters of file README.md.
If none of the previous solutions works and your server has more than one processor socket, also known as a NUMA node, you may have an unbalanced allocation of huge pages per NUMA node. You can easily verify this with the command cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages. You might need to replace hugepages-1048576kB in the command with the proper size of your huge pages; we recommend setting up your servers to use the largest huge pages possible, typically 1GB. On a properly set up server, the output of this command should include as many lines as the number of sockets in the server, each line should have a single number, and all numbers should be equal or differ by at most one. See information on how to configure huge pages if you need to make changes.
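For example, on a hypothetical two-socket server with 1GB huge pages and a balanced allocation, the output would look like this:

```
$ cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
56
56
```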
If you have properly set up huge pages on your server and it still shows an unbalanced allocation, the RAM modules are unevenly attached to the memory banks of your server. While this is a rare problem, the solution is simple: open your server and evenly split the memory modules between the sockets. If splitting the modules is not possible, you will need either to buy more memory modules or to exchange the ones you have, so that the total memory is evenly split between the sockets.
How do I address the following log entry "GATEKEEPER CPS: mnl_cb_run: cannot bring KNI X up: No such device"?
The name "X" depends on how you configured Gatekeeper or Grantor, but it's typically "kni_front" or "kni_back". Due to the nature of this issue, you may not see the log entry above, but another log entry that states that a KNI interface is not found; the "No such device" part of the log entry.
The root cause of this issue is that systemd (or another daemon) is renaming the KNI interfaces, so the kernel, Gatekeeper, and the routing daemon end up with different names for the same interface. Very likely, systemd is configured to rename an interface based only on its MAC address; this is done in production to have friendly interface names for use in scripts. However, a KNI interface in Gatekeeper has the same MAC address as its corresponding physical interface.
The solution is to have systemd identify physical interfaces by their PCI addresses instead of their MAC addresses.
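As a minimal sketch, assuming systemd-udevd handles the renaming, a .link file can match on the PCI path rather than the MAC address, so it never applies to the KNI interface; the file name, PCI address, and interface name below are placeholders:

```
# Hypothetical file /etc/systemd/network/10-front.link
[Match]
Path=pci-0000:01:00.0

[Link]
Name=eth-front
```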
Why do Gatekeeper servers have two network interfaces?

If Gatekeeper servers had a single network interface, a Gatekeeper server under a DDoS attack might not receive policy decisions coming from Grantor servers due to the saturation of the interface.
How do Gatekeeper servers learn the addresses of Grantor servers?

This information is passed to Gatekeeper servers dynamically to quickly accommodate network changes. An example is found in the file lua/examples/example_of_dynamic_config_request.lua (search for GK_FWD_GRANTOR); there is an example for IPv4 and another for IPv6. More information can be found here. One can use the shell command gkctl to send requests to Gatekeeper and Grantor servers.
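As an illustration, a dynamic configuration request is a Lua script submitted to a running server; the invocation below is a hypothetical sketch, so check gkctl's own documentation for the exact options of your version:

```
# Hypothetical invocation: send the example request above to the local
# Gatekeeper server.
gkctl lua/examples/example_of_dynamic_config_request.lua
```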
Is the installation of Gatekeeper servers different from that of Grantor servers?

The packages that install Gatekeeper servers and Grantor servers are the same. The binary gatekeeper can run as a Gatekeeper server or as a Grantor server; this choice is made in the configuration file main_config.lua found in the folder /etc/gatekeeper.
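The sketch below shows the kind of switch involved; the variable name is an assumption based on the stock configuration file and may differ across releases, so check your installed copy of main_config.lua:

```lua
-- Excerpt-style sketch of /etc/gatekeeper/main_config.lua
-- (variable name may vary by release).
local gatekeeper_server = true  -- true: run as a Gatekeeper server;
                                -- false: run as a Grantor server
```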
Can Gatekeeper and Grantor run on the same server?

Currently, no. The milestone Minimal deployments intends to change this answer to yes, but it has been dormant in favor of other milestones.
Can I set Bird up in any way I need?

Yes. In fact, there is no restriction on how you set Bird up. The patch we added to Bird only switches it from talking to the kernel to talking to Gatekeeper, so that Gatekeeper receives the computed routing table.
How can Bird establish BGP sessions if the network interfaces are not available to the operating system?

Although the interfaces of Gatekeeper and Grantor servers are not available to the operating system, the Control Plane Support (CPS) block creates network interfaces in the kernel that mirror the front and back networks. Thus, you can set Bird up over these interfaces to establish BGP sessions with routers. These interfaces are called KNI interfaces.
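For example, a minimal BIRD 2 sketch of a BGP session running over the front KNI interface could look like the following; the ASNs, addresses, and prefix are placeholders, and the protected prefix must exist in BIRD's routing table (e.g., via a static route) for the export to take effect:

```
# Hypothetical BIRD 2 configuration fragment. The session runs over the
# kernel-visible KNI interface (typically kni_front).
protocol bgp upstream {
    local 203.0.113.2 as 64512;      # address assigned to kni_front
    neighbor 203.0.113.1 as 64511;   # upstream router at the vantage point
    ipv4 {
        import all;
        export where net = 198.51.100.0/24;  # announce the protected prefix
    };
}
```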
I am noticing odd behavior with the Linux interfaces created by Gatekeeper for the BGP speaker. Is this a bug?
If using the Control Plane Support (CPS) block, Gatekeeper will create Linux interfaces (only one for Grantor) using the DPDK KNI library. These interfaces are used to relay control plane information such as BGP packets to a BGP speaker application. Gatekeeper handles the lifecycle and configuration of these interfaces, so you should not configure them directly with applications like ethtool or iproute2; in fact, doing so can create issues for Gatekeeper. NetworkManager is known to attempt to configure these interfaces, which can cause their IP addresses to be dropped.
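If NetworkManager is interfering, one standard remedy is to mark the KNI interfaces as unmanaged; the drop-in below is a sketch using the typical interface names, so adjust them to your configuration:

```
# Hypothetical drop-in, e.g. /etc/NetworkManager/conf.d/99-gatekeeper.conf:
# tell NetworkManager to leave Gatekeeper's KNI interfaces alone.
[keyfile]
unmanaged-devices=interface-name:kni_front;interface-name:kni_back
```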