\section{Introduction}
\label{sec:intro}
From early papers on the topic (e.g.,~\cite{rcp-case,rcp,4d,ethane}), we learn that the primary promises of SDNs were that $i)$ centralized control plane computation can eliminate the ill effects of distributed computation (e.g., looping packets), and $ii)$ separating the control and data planes simplifies the task of configuring the data plane in a manner that satisfies diverse policy concerns.
For example, to eliminate oscillations and loops that can occur in certain iBGP architectures, the Routing Control Platform (RCP)~\cite{rcp-case,rcp} proposed a centralized control plane architecture that directly configured the data plane of routers in an autonomous system.
4D aimed to simplify network management~\cite{4d}. It observed that the ``data plane needs to implement, in addition to next-hop forwarding, functions such as tunneling, access control, address translation, and queuing.'' In today's networks, this ``requires complex arrangements of commands to tag routes, filter routes, and configure multiple interacting routing processes, all the while ensuring that no router is asked to handle more routes and packet filters than it has resources to cope with.''
Based on this observation, it argued for centrally computing data plane state in a way that satisfies all such concerns.
Similarly, ETHANE reasoned that, for simplified management of enterprise networks, ``policy should determine the path that packets follow''~\cite{ethane}. It then argued for SDNs because the requirements of network management ``are complex and require strong consistency, making it quite hard to compute in a distributed manner.'' These promises have led to SDNs garnering a lot of attention from both researchers and practitioners.
However, as we gain more experience with this paradigm, a more nuanced story is emerging. Researchers have shown that, even with SDNs, packets can take paths that do not comply with policy~\cite{safeupdate} and that traffic exceeding capacity can arrive at a link~\cite{swan}. What explains this gap between the promise and these inconsistencies? The root cause is that the promises apply to the eventual behavior of the network, {\em after} data plane state has been changed, whereas the inconsistencies emerge {\em during} data plane state changes.
Since changes to data plane state, in response to failures, load shifts, and policy changes, are an essential part of an operational network, these inconsistencies are here to stay. Successful use of SDNs therefore requires not only methods to compute policy-compliant data plane state but also methods to change that state in a way that maintains desired consistency properties.
This paper takes a broad view of this aspect of SDNs. The key to consistently updating a network is carefully coordinating rule updates across multiple switches. Atomically updating the rules of multiple switches is difficult, and uncoordinated updates can lead to inconsistencies such as packet loops and drops. Thus, when a rule can be safely updated at a switch depends on which rules are present at other switches, and in some cases these dependencies are circular.
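To make the coordination problem concrete, consider a minimal sketch (in Python, with a hypothetical two-switch topology that is not taken from this paper) in which an update reverses the path through switches \texttt{s1} and \texttt{s2}: updating \texttt{s2} before \texttt{s1} creates a transient forwarding loop, whereas the opposite order is safe, so \texttt{s2}'s update depends on \texttt{s1}'s.
\begin{verbatim}
# Minimal sketch (hypothetical topology, for illustration only).
# Old path: s1 -> s2 -> dst.  New path: s2 -> s1 -> dst.

def forward(rules, start, max_hops=10):
    # Follow next-hop rules from `start`; return the path taken.
    path, node = [start], start
    while node != "dst" and len(path) <= max_hops:
        node = rules[node]
        path.append(node)
    return path

old = {"s1": "s2", "s2": "dst"}
new = {"s1": "dst", "s2": "s1"}

# Unsafe order: s2 applies its new rule while s1 still has its old one.
print(forward({"s1": old["s1"], "s2": new["s2"]}, "s1"))
# -> ['s1', 's2', 's1', 's2', ...]: a transient loop

# Safe order: s1 updates first; packets reach dst in every interleaving.
print(forward({"s1": new["s1"], "s2": old["s2"]}, "s1"))
# -> ['s1', 'dst']
\end{verbatim}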
By analyzing a spectrum of possible consistency properties and the dependency structures they induce, we find an inherent trade-off: dependency structures are simple for weaker consistency properties and more intricate for stronger ones. For instance, simply ensuring that no packet is dropped induces no dependencies among switches, but ensuring that no packet sees a mix of old and new rules~\cite{safeupdate} makes the rules at a switch depend on those at every other switch in the network.
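The contrast between these two extremes can be sketched as dependency edges over per-switch updates (hypothetical switch names, assuming a two-phase, version-stamping realization of per-packet consistency in the style of~\cite{safeupdate}): drop-freedom imposes no ordering, while per-packet consistency forces every ingress update to wait for all internal updates.
\begin{verbatim}
# Sketch (hypothetical switch names; assumes version-stamped updates).
internal = ["s2", "s3", "s4"]   # switches on internal hops
ingress  = ["s1"]               # switches where packets enter

# An edge (u, v) means: update at u must complete before update at v.
drop_free_deps  = []            # no ordering constraints among switches
per_packet_deps = [(sw, ing) for sw in internal for ing in ingress]
print(per_packet_deps)          # [('s2','s1'), ('s3','s1'), ('s4','s1')]
\end{verbatim}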
We also take a closer look at a basic consistency property---no packet loops---and develop two new update algorithms that induce less intricate dependency structures than currently known algorithms~\cite{safeupdate}. One of our algorithms induces provably minimal dependencies.
Motivated by our analysis, we sketch a general architecture for consistent network updates. It separates the two concerns that are primary during network updates, consistency and speed, into two modules. Given the consistency property of interest, the first module computes a correct plan for updating the network. This plan is represented as a directed acyclic graph (DAG) in which nodes are rule updates and edges represent dependencies among them. The second module is responsible for quickly applying the plan based on the properties of the network, e.g., the time individual switches take to apply updates and the distances between the controller and the switches. Preliminary experimental results highlight the value of this architecture as well as open challenges.
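To illustrate the second module, here is a minimal sketch (hypothetical interfaces, not the implementation evaluated later) that applies a plan by traversing the dependency DAG in topological order, issuing each rule update once all of its dependencies have been acknowledged.
\begin{verbatim}
# Minimal sketch of plan application (hypothetical interfaces).
from collections import defaultdict, deque

def apply_plan(updates, deps, send):
    # updates: rule-update ids; deps: (before, after) edges of the DAG;
    # send(u): push update u to its switch and wait for acknowledgment.
    indeg = {u: 0 for u in updates}
    succ = defaultdict(list)
    for a, b in deps:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(u for u, d in indeg.items() if d == 0)
    while ready:
        u = ready.popleft()
        send(u)              # in practice, ready updates are issued in parallel
        for v in succ[u]:    # an update becomes ready once all of its
            indeg[v] -= 1    # dependencies have completed
            if indeg[v] == 0:
                ready.append(v)

# Example: the loop-free order from the two-switch sketch above.
apply_plan(["upd_s1", "upd_s2"], [("upd_s1", "upd_s2")],
           send=lambda u: print("applying", u))
\end{verbatim}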