[Computer Networks course reference material] A DoS-limiting Network Architecture (PDF, 12 pages)

A DoS-limiting Network Architecture

Xiaowei Yang (University of California, Irvine), xwy@ics.uci.edu
David Wetherall (University of Washington), djw@cs.washington.edu
Thomas Anderson (University of Washington), tom@cs.washington.edu

ABSTRACT

We present the design and evaluation of TVA, a network architecture that limits the impact of Denial of Service (DoS) floods from the outset. Our work builds on earlier work on capabilities in which senders obtain short-term authorizations from receivers that they stamp on their packets. We address the full range of possible attacks against communication between pairs of hosts, including spoofed packet floods, network and host bottlenecks, and router state exhaustion. We use simulation to show that attack traffic can only degrade legitimate traffic to a limited extent, significantly outperforming previously proposed DoS solutions. We use a modified Linux kernel implementation to argue that our design can run on gigabit links using only inexpensive off-the-shelf hardware. Our design is also suitable for transition into practice, providing incremental benefit for incremental deployment.

Categories and Subject Descriptors

C.2.6 [Computer-Communication Networks]: Internetworking

General Terms

Security, Design

Keywords

Denial-of-Service, Internet

1. INTRODUCTION

Denial of Service (DoS) attacks have become an increasing threat to the reliability of the Internet. An early study showed that DoS attacks occurred at a rate of nearly 4000 attacks per week [18]. In 2001, a DoS attack [4] was able to take down seven of the thirteen DNS root servers. And more recently, DoS attacks have been used for online extortion [5].

The importance of the problem has led to a raft of proposed solutions.
Researchers have advocated filtering to prevent the use of spoofed source addresses [8], traceback to locate the source of the disrupting packets [20, 22, 21, 24], overlay-based filtering [12, 1, 14] to protect approaches to servers, pushback of traffic filters into the network [16, 10, 3], address isolation to distinguish client and server traffic [9], and capabilities to control incoming bandwidth [2, 25].

Each of these proposals has merit and provides techniques that can help address the DoS problem. Yet we argue that each addresses only an aspect of DoS rather than the overall problem. In contrast, our goal is to provide a comprehensive solution to the DoS problem. We require that a DoS-limiting network architecture ensure that any two legitimate nodes be able to effectively communicate despite the arbitrary behavior of k attacking hosts. We limit ourselves to open networks, such as the Internet, where the communicating hosts are not known in advance; this rules out statically configured networks that, for example, only permit predetermined legitimate hosts to send packets to each other.

Our solution is the Traffic Validation Architecture (TVA¹). TVA is based on the notion of capabilities that we advocated in earlier work [2] and which were subsequently refined by Yaar et al. [25]. Our attraction to capabilities is that they cut to the heart of the DoS problem by allowing destinations to control the packets they receive.

(Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGCOMM'05, August 22-26, 2005, Philadelphia, Pennsylvania, USA. Copyright 2005 ACM 1-59593-009-4/05/0008 ...$5.00.)
However, capabilities are currently little understood at a detailed level or leave many important questions unanswered. A key contribution of our work is the careful design and evaluation of a more complete capability-based network architecture. TVA counters a broader set of possible attacks, including those that flood the setup channel, that exhaust router state, that consume network bandwidth, and so forth.

We have also designed TVA to be practical in three key respects. First, we bound both the computation and state needed to process capabilities. We report on an implementation that suggests our design will be able to operate at gigabit speeds with commodity hardware. Second, we have designed our system to be incrementally deployable in the current Internet. This can be done by placing inline packet processing boxes at trust boundaries and points of congestion, and upgrading collections of hosts to take advantage of them. No changes to Internet routing or legacy routers are needed, and no cross-provider relationships are required. Third, our design provides a spectrum of solutions that can be mixed and matched to some extent. Our intent is to see how far it is possible to go towards limiting DoS with a practical implementation, but we are pragmatic enough to realize that others may apply a different cost-benefit tradeoff.

The remainder of this paper discusses our work in more detail. We discuss related work in Section 2, motivating our approach in Section 3. Section 4 describes a concrete implementation of our architecture and illustrates its behavior. Sections 5, 6 and 7 evaluate our approach using a combination of simulation, a kernel implementation, and analysis. Section 8 discusses deployment issues. Section 9 summarizes our work.

¹ The name TVA is inspired by the Tennessee Valley Authority, which operates a large-scale network of dams to control flood damage, saving more than $200 million annually.
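As a toy illustration of the capability idea sketched above (our own simplification, not TVA's wire format, which the paper defines later): the receiver mints a short-term token bound to the sender, receiver, and an expiry time, and the network can later verify the stamp without trusting hosts or keeping per-flow state. The names `grant_capability`, `verify`, and the single `SHARED_SECRET` are assumptions for this sketch; the real design uses per-router secrets.

```python
# Toy sketch of the capability idea: the receiver mints a short-term token
# bound to (sender, receiver, expiry); the network later checks the stamp
# without trusting hosts. A single shared secret stands in for the
# per-router secrets of the real design (our simplification, not TVA's
# actual packet format).
import hmac
import hashlib
import time

SHARED_SECRET = b"stand-in for per-router secrets"  # assumption for the sketch

def grant_capability(src: str, dst: str, expiry: int) -> bytes:
    """Receiver side: mint a token binding sender, receiver, and expiry."""
    msg = f"{src}|{dst}|{expiry}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).digest()[:8]

def verify(src: str, dst: str, expiry: int, token: bytes) -> bool:
    """Router side: recompute and compare the stamp; drop expired tokens."""
    if expiry < time.time():
        return False
    expected = grant_capability(src, dst, expiry)
    return hmac.compare_digest(expected, token)
```

A sender that obtains such a token would stamp it on subsequent packets; any verifier holding the secret can then validate the stamp with constant computation and no per-flow state.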


2. BACKGROUND

Our design leverages previous proposals to address DoS attacks. We discuss this work, using attack scenarios to illustrate its strengths and weaknesses and to motivate our approach.

Early work in the area of DoS sought to make all sources identifiable, e.g., ingress filtering [8] discards packets with widely spoofed addresses at the edge of the network, and traceback uses routers to create state so that receivers can reconstruct the path of unwanted traffic [20, 21, 22]. This is a key step, but it is insufficient as a complete solution. For example, an attacker might flood a link between a source and a destination. She might then hide her tracks by using the IP TTL field to cause the attack packets to be dropped after they have done their damage but before they arrive at the destination. The network then simply appears to be broken from the point of view of the source and the destination.

One might think that fair queuing [6] would help by ensuring that each flow gets its fair share of the bottleneck bandwidth. Yet even if we assume ingress filtering (so there are no spoofed source addresses), k hosts attacking a destination limit a good connection to 1/k of the bandwidth. This applies even if the destination knows it does not want any of the attack traffic. The problem is worse if fair queuing is performed across source and destination address pairs. Then, an attacker in control of k well-positioned hosts can create a large number of flows to limit the useful traffic to only 1/k² of the congested link. For example, 30 well-placed hosts could cut a gigabit link to only a megabit or so of usable traffic.

A different tack is for the network to limit communication to previously established patterns, e.g., by giving legitimate hosts an authenticator off-line that permits them to send to specific destinations. SOS [12] and Mayday [1] take this approach.
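The bandwidth arithmetic above can be checked with a short calculation. The function names and the "+1" terms (counting the one legitimate flow alongside the attackers') are our illustrative simplification:

```python
# Illustrative arithmetic for the fair-queuing discussion above.
# With per-source fair queuing, k attackers leave a single good flow
# roughly 1/k of the bottleneck. With queuing per (source, destination)
# pair, k attackers can create ~k^2 flows, leaving the good flow ~1/k^2.

def share_per_source(link_bps: float, k_attackers: int) -> float:
    """Bandwidth for one legitimate flow when each source gets a fair share."""
    return link_bps / (k_attackers + 1)

def share_per_pair(link_bps: float, k_attackers: int) -> float:
    """Bandwidth when k well-positioned hosts create ~k^2 source/dest pairs."""
    return link_bps / (k_attackers ** 2 + 1)

gigabit = 1e9
print(share_per_source(gigabit, 30))  # ~32 Mbps: roughly 1/k of the link
print(share_per_pair(gigabit, 30))    # ~1.1 Mbps: "a megabit or so"
```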
Our goal is to design an open network, one where any two hosts can communicate without prior arrangement. Our work can thus be seen as automating the provision of authenticators.

An insidious aspect of the Internet model is that receivers have no control over the resources consumed on their behalf: a host can receive (and have to pay for!) a repetitive stream of packets regardless of whether they are desired. One response is to install packet filters at routers upstream from the destination to cause unwanted packets to be dropped in the network before they consume the resources of the destination, e.g., pushback [16, 10] and more recently AITF [3]. Unfortunately, these filters will block some legitimate traffic from the receiver because there is no clean way to discriminate attack traffic from other traffic, given that attackers can manufacture packets with contents of their choosing. Our work can be seen as a robust implementation of network filtering.

In earlier work, we proposed the approach of putting a capability, or token, into each data packet to demonstrate that the packet was requested by the receiver [2]. Communication takes two steps: first, the sender requests permission to send; after verifying the sender is good, the receiver provides it with a token. When included in a packet, this token allows the network to verify that the packet was authorized by the receiver. By itself, this does not prevent attacks against the initial request packet, the router state or computation needed to verify the packet, and so forth. For example, in our initial work we used a separate overlay for transmitting the request packets; an attack against this channel would disrupt hosts that had not yet established a capability to send.

In SIFF, Yaar et al. refine the capability approach to eliminate the separate overlay channel for request packets and per-flow state.
Instead, routers stamp packets with a key that reaches the receiver and is returned to authorize the sender, which uses it on subsequent packets [25]. This is reminiscent of work in robust admission control [15]. We adopt this approach, with some enhancements motivated by weaknesses of the SIFF proposal. First, in SIFF, router stamps are embedded in normal IP packets, which requires each router stamp to be extremely short (2 bits), and thus potentially discoverable by brute-force attack. We show how to combine the security of long stamps with the efficiency of short stamps. Second, initial request packets are forwarded with low priority. This allows attacking hosts to establish "approved" connections purely amongst themselves and flood a path and prevent any further connections from being established along its congested links. We address this through a more careful treatment of request packets. Finally, routers allow all copies of packets with a valid stamp through because they have no per-flow state. Thus, an attacker that is incorrectly granted a capability by a receiver can flood the receiver at an arbitrary rate until the permission expires. This is problematic because a typical Web server will only know after a connection starts whether the traffic is legitimate, and given the timeout constants suggested in [25], even a small rate of incorrect decisions would allow DoS attacks to succeed. Our approach is to provide fine-grained control over how many packets can be sent based on a single authorization.

In summary, existing proposed DoS solutions have a number of good ideas but are incomplete. Our goal is to provide a more comprehensive solution. We further restrict ourselves to solutions that might be practically implemented in today's technology, e.g., with limited state at each router and a reasonable amount of computation at each hop.

3. TVA DESIGN OVERVIEW

In this section, we motivate the key components of our design.
Later in Section 4, we describe the protocol and sketch its common case of operation. The overall goal of TVA is to strictly limit the impact of packet floods so that two hosts can communicate despite attacks by other hosts. To achieve this, we start with standard IP forwarding and routing. We then extend hosts and routers with the handling described below, conceptually at the IP level. For simplicity of exposition, we consider a network in which all routers and hosts run our protocol. However, our design only requires upgrades at network locations that are trust boundaries or that experience congestion.

3.1 Packets with Capabilities

To prevent a destination from losing connectivity because of a flood of unwanted packets, the network must discard those packets before they reach a congested link. Otherwise the damage has already been done. This in turn requires that routers have a means of identifying wanted packets and providing them with preferential service. To cleanly accomplish this, we require that each packet carry information that each router can check to determine whether the packet is wanted by the destination. We refer to this explicit information as a capability [2].

Capabilities have significant potential benefits compared to other schemes that describe unwanted packets using implicit features [16, 10]. They do not require a difficult inference problem to be solved, are precise since attackers cannot spoof them, and are not foiled by end-to-end encryption. However, to be viable as a solution, capabilities must meet several implied requirements. First, they must be granted by the destination to the sender, so that they can be stamped on packets. This raises an obvious bootstrap issue, which we address shortly. Second, capabilities must be unforgeable and not readily transferable across senders or destinations. This is to prevent attackers from stealing or sharing valid capabilities.
Third, routers must be able to verify capabilities without trusting hosts. This ensures malicious hosts cannot spoof capabilities. Fourth, capabilities must expire so that a destination can cut off a sender from whom it no longer wants to receive packets. Finally, to be practical, capabilities must add little overhead in the common case. The rest of our design is geared towards meeting these requirements.

Figure 1: A sender obtaining initial capabilities by (1) sending a request to the destination, to which routers add pre-capabilities; and (2) receiving a response, to which the destination added capabilities.

3.2 Bootstrapping Capabilities

In our design, capabilities are initially obtained using request packets that do not have capabilities. These requests are sent from a sender to a destination, e.g., as part of a TCP SYN packet. The destination then returns capabilities to the sender if it chooses to authorize the sender for further packets, e.g., piggybacked on the TCP SYN/ACK response. This is shown in Figure 1 for a single direction of transfer; each direction is handled independently, though requests and responses in different directions can be combined in one packet. Once the sender has capabilities, the communication is bootstrapped in the sense that the sender can send further packets with capabilities that routers can validate.

Ignoring legacy issues for the moment, we expect the number of packets without associated capabilities to be small in most settings. This is because one capability covers all connections between two hosts, and new capabilities for a long transfer can be obtained using the current capability before it expires. Nonetheless, it is crucial that the initial request channel not open an avenue for DoS attacks, either by flooding a destination or blocking the requests of legitimate senders. The first issue is straightforward to address: we rate-limit requests at all network locations so that they cannot consume all of the bandwidth. Request packets should comprise only a small fraction of bandwidth.
Even with 250 bytes of request for a 10KB flow, request traffic is 2.5% of the bandwidth. This allows us to rate-limit request traffic to be no more than 5% of the capacity of each link, with the added margin for bursts.

It is more challenging to prevent requests from attackers from overwhelming requests from legitimate clients. Ideally, we would like to use per-source fair queuing to ensure that no source can overwhelm others, regardless of how many different destinations it contacts. However, this is problematic because source addresses may be spoofed, but per-source fair queuing requires an authenticated source identifier. One possibility is ingress filtering, but we discarded it as too fragile because a single unprotected ingress allows remote spoofing. Another possibility is to sign packets using a public key infrastructure, but we discarded it as too much of a deployment hurdle.

Instead, we build a path identifier analogous to Pi [24] and use it as an approximate source locator. Each router at the ingress of a trust boundary, e.g., AS edge, tags the request with a small (16 bit) value derived from its incoming interface that is likely to be unique across the trust boundary, e.g., a pseudo-random hash. This tag identifies the upstream party. Routers not at trust boundaries do not tag requests as the upstream has already tagged. The tags act as an identifier for a network path. We then fair-queue requests using the most recent tag to identify a queue. This is shown in Figure 2. As a better approximation, earlier tags can be used within a given queue to break ties.

Figure 2: Queue management at a capability router. There are three types of traffic: requests that are rate-limited; regular packets with associated capabilities that receive preferential forwarding; and legacy traffic that competes for any remaining bandwidth.
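To make the tagging and queuing step concrete, the following is a minimal Python sketch of how an ingress router might derive its 16-bit tag and how a downstream router might select a request queue. The hash construction, function names, and per-router secret are our own assumptions for illustration; the paper only requires that the tag be a pseudo-random value derived from the incoming interface.

```python
import hashlib

TAG_BITS = 16

def interface_tag(router_secret: bytes, incoming_interface: int) -> int:
    """Derive a 16-bit pseudo-random tag for an ingress interface
    (illustrative: SHA-256 over the secret and interface index, truncated)."""
    digest = hashlib.sha256(
        router_secret + incoming_interface.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:2], "big")  # truncate to 16 bits

def tag_request(path_ids: list, router_secret: bytes, iface: int,
                at_trust_boundary: bool) -> list:
    """Routers at a trust boundary prepend their tag; interior routers
    leave the path-identifier list untouched."""
    if at_trust_boundary:
        return [interface_tag(router_secret, iface)] + path_ids
    return path_ids

def queue_for_request(path_ids: list, num_queues: int = 1 << TAG_BITS) -> int:
    """Fair-queue requests by the most recent (outermost) tag."""
    return path_ids[0] % num_queues
```

Because the tag space is only 16 bits, the number of request queues is bounded, which is what keeps the fair-queuing cost fixed regardless of attacker behavior.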
Queuing based on a path identifier has two benefits. First, the number of queues is bounded because the tag space is bounded. This in turn bounds the complexity of fair queuing, ensuring that we will not exhaust router resources. Second, the scheme offers defense-in-depth because each trust domain such as an AS places the most trust in the domain that precedes it. Therefore, an attacker at the edge of the network or a compromised router who writes arbitrary tags can at most cause queue contention at the next downstream trust domain (AS). One consequence of this is that senders that share the same path identifier share fate, localizing the impact of an attack and providing an incentive for improved local security.

3.3 Destination Policies

The next question we consider is how a destination can determine whether to authorize a request. This is a matter of policy, and it depends on the role the destination plays in the network. We consider two extreme cases of a client and a public server to argue that simple policies can be effective.

A client may act in a way that by default allows it to contact any server but not otherwise be contacted, as is done by firewalls and NAT boxes today. To do this, it accepts incoming requests if they match outgoing requests it has already made and refuses them otherwise. Note that the client can readily do this because capabilities are added to existing packets rather than carried as separate packets. For example, a client can accept a request on a TCP SYN/ACK that matches its earlier request on a TCP SYN.

A public server may initially grant all requests with a default number of bytes and timeout, using the path identifier to fairly serve different sources when the load is high. If any of the senders misbehave, by sending unexpected packets or floods, that sender can be temporarily blacklisted and its capability will soon expire.
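The default-deny client policy above behaves like a stateful firewall. A minimal sketch of that matching logic, under our own simplifying assumption that outgoing requests are tracked by destination address and port:

```python
class ClientRequestPolicy:
    """Default-deny client policy: authorize an incoming request only if it
    matches an outgoing request this host already made (like a firewall/NAT).
    The (ip, port) matching granularity is an illustrative assumption."""

    def __init__(self):
        self.outstanding = set()  # (peer_ip, peer_port) pairs we contacted

    def sent_request(self, dst_ip, dst_port):
        """Record an outgoing request, e.g., carried on a TCP SYN."""
        self.outstanding.add((dst_ip, dst_port))

    def authorize(self, src_ip, src_port):
        """Accept an incoming request, e.g., on a TCP SYN/ACK, only if it
        matches an earlier outgoing request; refuse otherwise."""
        return (src_ip, src_port) in self.outstanding
```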
This blacklisting is possible because the handshake involved in the capability exchange weakly authenticates that the source address corresponds to a real host. The result is that misbehaving senders are quickly contained. More sophisticated policies may be based on HTTP cookies that identify returning customers, CAPTCHAs that distinguish zombies from real users [11], and so forth.


Figure 3: Format of capabilities. A pre-capability (added by routers) is a timestamp (8 bits) plus hash(src IP, dest IP, time, secret) (56 bits); a capability (held by hosts) is a timestamp (8 bits) plus hash(pre-capability, N, T) (56 bits).

3.4 Unforgeable Capabilities

Having provided a bootstrap mechanism and policy, we turn our attention to the form of capabilities themselves. Our key requirement is that an attacker can neither forge a capability, nor make use of a capability that they steal or transfer from another party. We also need capabilities to expire.

We use cryptography to bind each capability to a specific network path, including source and destination IP addresses, at a specific time. Each router that forwards a request packet generates its own pre-capability and attaches it to the packet. This pre-capability is shown in Figure 3. It consists of a local router timestamp and a cryptographic hash of that timestamp plus the source and destination IP addresses and a slowly-changing secret known only to the router. Observe that each router can verify for itself that a purported pre-capability attached to a packet is valid by re-computing the hash, since the router knows all of the inputs, but it is cryptographically hard for other parties to forge the pre-capability without knowing the router secret. Each router changes its secret at twice the rate of the timestamp rollover, and only uses the current or the previous secret to validate capabilities. This ensures that a pre-capability expires within at most the timestamp rollover period, and each pre-capability is valid for about the same time period regardless of when it is issued. The high-order bit of the timestamp indicates whether the current or the previous router secret should be used for validation. This trick allows a router to try only one secret even if the router changed its secret right after issuing a pre-capability.
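The pre-capability construction and its verification can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: we stand in SHA-256 (truncated to 56 bits) for the unspecified hash, and for simplicity the verifier tries both secrets rather than using the timestamp's high-order bit to pick one directly.

```python
import hashlib

def precapability(src_ip: bytes, dst_ip: bytes, timestamp: int,
                  secret: bytes) -> bytes:
    """Pre-capability: 8-bit timestamp || 56-bit hash(time, src, dst, secret).
    SHA-256 truncated to 7 bytes stands in for the paper's hash."""
    ts = bytes([timestamp & 0xFF])
    digest = hashlib.sha256(ts + src_ip + dst_ip + secret).digest()
    return ts + digest[:7]  # 8 bytes total: 8 + 56 bits

def verify_precapability(pcap: bytes, src_ip: bytes, dst_ip: bytes,
                         secrets) -> bool:
    """A router validates by re-computing the hash with its current or
    previous secret. (In the paper, the timestamp's high-order bit tells
    the router which of the two secrets to try, so only one hash is needed.)"""
    ts = pcap[0]
    return any(pcap == precapability(src_ip, dst_ip, ts, s) for s in secrets)
```

Note how the binding works: changing the source address, destination address, timestamp, or secret changes the hash, so a capability issued to one path cannot be replayed on another.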
The destination thus receives an ordered list of pre-capabilities that corresponds to a specific network path with fixed source and destination IP endpoints. It is this correspondence that prevents an attacker from successfully using capabilities issued to another party: it cannot generally arrange to send packets with a specific source and destination IP address through a specific sequence of routers unless it is co-located with the source. In the latter case, the attacker is indistinguishable from the source as far as the network is concerned, and shares its fate in the same manner as for requests. (And other, more devastating attacks are possible if local security is breached.) Thus we reduce remote exploitation to the problem of local security.

If the destination wishes to authorize the request, it returns an ordered list of capabilities to the sender via a packet sent in the reverse direction. Conceptually, the pre-capabilities we have described could directly serve as these capabilities. However, we process them further to provide greater control, as is described next.

Figure 4: Bound on the bytes of a capability with caching.

3.5 Fine-Grained Capabilities

Even effective policies will sometimes make the wrong decision and the receiver will authorize traffic that ultimately is not wanted. For example, with our blacklist server policy an attacker will be authorized at least once, and with our client policy the server that a client accesses may prove to be malicious. If authorizations were binary, attackers whose requests were granted would be able to arbitrarily flood the destination until their capabilities expire. This problem would allow even a very small rate of false authorizations to deny service. This argues for a very short expiration period, yet protocol dynamics such as TCP timeouts place a lower bound on what is reasonable.
To tackle this problem, we design fine-grained capabilities that grant the right to send up to N bytes along a path within the next T seconds, e.g., 100KB in 10 seconds.² That is, we limit the amount of data as well as the period of validity. The form of these capabilities is shown in Figure 3. The destination converts the pre-capabilities it receives from routers to full capabilities by hashing them with N and T. Each destination can choose N and T (within limits) for each request, using any method from simple defaults to models of prior behavior. It is these full capabilities, along with N and T, that are returned to authorize the sender. For longer flows, the sender should renew these capabilities before they reach their limits.

With this scheme, routers verify their portion of the capabilities by re-computing the hashes much as before, except that now two hashes are required instead of one. The routers now perform two further checks, one for N and one for T. First, routers check that their local time is no greater than the router timestamp plus T to ensure that the capability has not expired. This requires that T be at most one half of the largest router timestamp so that two time values can be unambiguously compared under a modulo clock. The replay of very old capabilities for which the local router clock has wrapped is handled as before by periodically changing the router secret. Second, routers check that the capability will not be used for more than N bytes. This check is conceptually simple, but it requires state and raises the concern that attackers may exhaust router state. We deal with this concern next.

3.6 Bounded Router State

We wish to ensure that attackers cannot exhaust router memory to bypass capability limits. This is especially a concern given that we are counting the bytes sent with a capability and colluding attackers may create many authorized connections across a target link.
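The conversion to a full capability and the modulo-clock expiry check can be sketched as follows. This is a hedged illustration: the hash is again SHA-256 truncated to 56 bits, the field widths chosen for N and T are our own, and we assume the 8-bit timestamp counts in the same time units as T.

```python
import hashlib

TS_MOD = 256  # the 8-bit router timestamp wraps modulo 256

def full_capability(pre_cap: bytes, n_limit: int, t_seconds: int) -> bytes:
    """The destination folds the limits into the capability:
    timestamp || 56-bit hash(pre-capability, N, T)."""
    digest = hashlib.sha256(
        pre_cap + n_limit.to_bytes(2, "big") + t_seconds.to_bytes(1, "big")
    ).digest()
    return pre_cap[0:1] + digest[:7]

def not_expired(local_time: int, cap_timestamp: int, t_seconds: int) -> bool:
    """Expiry check under a modulo clock: valid iff local time is within T
    of the capability timestamp. Unambiguous only because T <= TS_MOD / 2."""
    assert t_seconds <= TS_MOD // 2
    return (local_time - cap_timestamp) % TS_MOD <= t_seconds
```

The modulo comparison is why T must be at most half the timestamp range: with larger T, a freshly issued capability and a long-expired one would be indistinguishable after the clock wraps.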
To handle this problem, we design an algorithm that bounds the bytes sent using a capability while using only a fixed amount of router state no matter how attackers behave. In the worst case, a capability may be used to send 2N bytes in T seconds. The same capability will still be precisely limited to N bytes if there is no memory pressure.

The high level idea of the algorithm is to make a router keep state only for flows (a flow is defined on a sender-to-destination basis) with valid capabilities that send faster than N/T. The router does not need to keep state for other authorized flows because they will

² An alternative would be to build rapid capability revocation. We believe this to be a less tractable problem.


not send more than N bytes before their capabilities expire in T seconds. We track flows via their rates by using the rate N/T to convert bytes to equivalent units of time, as we describe next.

When a router receives a packet with a valid capability for which it does not have state, it begins to track byte counts for the capability and also associates a minimal time-to-live (ttl) with the state. The ttl is set to the time equivalent value of the packet: L * T/N seconds (with L being the packet length). This ttl is decremented as time passes (but our implementation simply sets an expiration time of now + ttl) and incremented as subsequent packets are charged to the capability. When the ttl reaches zero, it is permissible for the router to reclaim the state for use with a new capability.

We now show that this scheme bounds the number of bytes sent using a capability. Referring to Figure 4, suppose that the router created the capability at time ts and it expires at time ts + T. Further suppose that the router creates state for the capability at time t1 > ts, and reclaims the state when its ttl reaches zero at time t2 < ts + T. Then by the definition of the ttl, the capability must have been used for at most (t2 - t1)/T * N bytes from t1 to t2. This may occur more than once, but regardless of how many times it occurs, the time intervals can total to no more than T seconds. Thus the total bytes used for the capability must be at most T/T * N = N bytes. If a capability has state created at a time immediately preceding ts + T, then up to N bytes can be sent at a rate faster than N/T. Therefore, at most N + N = 2N bytes can be sent before the capability is expired.

This scheme requires only fixed memory to avoid reclaiming state with non-zero ttl values, as required above. Suppose the capacity of the input link is C. To have state at time t, a capability must be used to send faster than N/T before t.
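The per-capability state and ttl accounting described above can be sketched as a small class. This is an illustrative Python sketch under the paper's rules (class and method names are ours); as in the paper's implementation, the ttl is represented as an absolute expiration time rather than a decrementing counter.

```python
class CapabilityState:
    """Per-capability router state: a byte count plus a ttl that converts
    bytes to time at the rate N/T (each packet of length L adds L*T/N)."""

    def __init__(self, n_bytes, t_seconds, now):
        self.n, self.t = n_bytes, t_seconds
        self.bytes_sent = 0
        self.expiry = now  # ttl of zero, expressed as an absolute time

    def charge(self, pkt_len, now):
        """Charge a packet against the capability; False if it would
        exceed the N-byte limit."""
        if self.bytes_sent + pkt_len > self.n:
            return False
        self.bytes_sent += pkt_len
        # extend the ttl by the packet's time-equivalent, L * T / N
        base = max(self.expiry, now)
        self.expiry = base + pkt_len * self.t / self.n
        return True

    def reclaimable(self, now):
        """Once the ttl has drained to zero, the slot may be reused."""
        return now >= self.expiry
```

A flow sending at exactly N/T keeps its ttl pinned near zero but alive; slower flows let the ttl drain, so their state can be reclaimed, which is precisely what yields the 2N worst-case bound.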
Otherwise, the ttl associated with the state will reach zero and the state may be reclaimed. There can be at most C/(N/T) such capabilities. We require that the minimum N/T rate be greater than an architectural constraint (N/T)min. This bounds the state a router needs to C/(N/T)min records. As an example, if the minimum sending rate is 4K bytes in 10 seconds, a router with a gigabit input line will only need 312,500 records. If each record requires 100 bytes, then a line card with 32MB of memory will never run out of state.

3.7 Efficient Capabilities

We want capabilities to be bandwidth efficient as well as secure. Yet these properties are in conflict, since security benefits from long capabilities (i.e., a long key length) while efficiency benefits from short ones (i.e., less overhead). To reconcile these factors, we observe that most bytes reside in long flows for which the same capability is used repeatedly on packets of the flow. Thus we use long capabilities (64 bits per router) to ensure security, and cache capabilities at routers so that they can subsequently be omitted for bandwidth efficiency. We believe that this is a better tradeoff than short capabilities that are always present, e.g., SIFF uses 2 bits per router. Short capabilities are vulnerable to a brute force attack if the behavior of individual routers can be inferred, e.g., from bandwidth effects, and do not provide effective protection with a limited initial deployment.

In our design, when a sender obtains new capabilities from a receiver, it chooses a random flow nonce and includes it together with the list of capabilities in its packets. When a router receives a packet with a valid capability it caches the capability relevant information and flow nonce, and initializes a byte counter and ttl as previously described. Subsequent packets can then carry the flow nonce and omit the list of capabilities.
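The state bound is easy to verify numerically. The arithmetic below reproduces the example from the text (gigabit input line, minimum rate of 4K bytes per 10 seconds, 100-byte records), reading "4K" as 4000 bytes:

```python
# Router state bound: at most C / (N/T)_min capabilities can keep live state.
link_capacity_bytes_per_sec = 10**9 // 8   # C: gigabit line, in bytes/sec
min_rate_bytes_per_sec = 4000 / 10         # (N/T)_min: 4K bytes in 10 seconds
max_records = link_capacity_bytes_per_sec / min_rate_bytes_per_sec
memory_needed = max_records * 100          # bytes, at 100 bytes per record
```

This gives 312,500 records and about 31.25MB of state, which fits in the 32MB line card cited in the text.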
Observe that path MTU discovery is likely unaffected because the larger packet is the first one sent to a destination. Routers look up a packet that omits its capabilities using its source and destination IP addresses, and compare the cached flow nonce with that in the packet. A match indicates that a router has validated the capabilities of the flow in previous packets. The packets are then subject to byte limit and expiration time checking as before.

For this scheme to work well, senders must know when routers will evict their capabilities from the cache. To do so, hosts model router cache eviction based on knowledge of the capability parameters and how many packets have used the capability and when. By the construction of our algorithm, eviction should be rare for high-rate flows, and it is only these flows that need to remain in cache to achieve overall bandwidth efficiency. This modeling can either be conservative, based on later reverse path knowledge of which packets reached the destination,³ or optimistic, assuming that loss is infrequent. In the occasional case that routers do not have the needed capabilities in cache, the packets will be demoted to legacy packets rather than lost, as we describe next.

3.8 Route Changes and Failures

To be robust, our design must accommodate route changes and failures such as router restarts. The difficulty this presents is that a packet may arrive at a router that has no associated capability state, either because none was set up or because the cache state or router secret has been lost. This situation should be infrequent, but we can still minimize its disruption. First, we demote such packets to be the same priority as legacy traffic (which has no associated capabilities) by changing a bit in the capability header. They are likely to reach the destination in normal operation when there is little congestion.
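The fast-path lookup for packets that omit their capability list can be sketched as a small cache keyed by the address pair. This is an illustrative Python sketch with hypothetical names; the key point it captures is that a nonce mismatch or cache miss does not drop the packet, it merely means the packet cannot be validated from cache (and is demoted, per Section 3.8).

```python
class RouterFastPath:
    """Cache keyed by (src, dst); packets carrying only a flow nonce are
    matched against the nonce cached when capabilities were last validated."""

    def __init__(self):
        self.cache = {}  # (src, dst) -> (flow_nonce, per_capability_state)

    def install(self, src, dst, nonce, state):
        """Called after a full capability list has been validated."""
        self.cache[(src, dst)] = (nonce, state)

    def lookup(self, src, dst, nonce):
        """Return cached state on a nonce match; None means the packet
        must be treated as demoted/legacy, not dropped."""
        entry = self.cache.get((src, dst))
        if entry is None or entry[0] != nonce:
            return None
        return entry[1]
```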
The destination then echoes demotion events to the sender by setting a bit in the capability header of the next message sent on the reverse channel. This tells the sender that it must re-acquire capabilities.

3.9 Balancing Authorized Traffic

Capabilities ensure that only authorized traffic will compete for the bandwidth to reach a destination, but we remain vulnerable to floods of authorized traffic: a pair of colluding attackers can authorize high-rate transfers between themselves and disrupt other authorized traffic that shares the bottleneck. This would allow, for example, a compromised insider to authorize floods on an access link by outside attackers.

We must arbitrate between authorized traffic to mitigate this attack. Since we do not know which authorized flows are malicious, if any, we simply seek to give each capability a reasonable share of the network bandwidth. To do this we use fair-queuing based on the authorizing destination IP address. This is shown in Figure 2. Users will now get a decreasing share of bandwidth as the network becomes busier in terms of users (either due to legitimate usage or colluding attackers), but they will be little affected unless the number of attackers is much larger than the number of legitimate users.

Note that we could queue on the source address (if source addresses can be trusted) or other flow definitions involving prefixes. The best choice is a matter of AS policy that likely depends on whether the source or destination is a direct customer of the AS, e.g., the source might be used when the packet is in the sender ISP's network and vice versa.

One important consideration is that we limit the number of queues to bound the implementation complexity of fair queuing. To do this, we again fall back on our router state bound, and fair-queue over the flows that have their capabilities in cache.
In this manner, the high-rate flows that send more rapidly than N/T will fairly

³ We ignore for the present the layering issues involved in using transport knowledge instead of building more mechanism.

share the bandwidth. These are the flows that we care most about limiting. The low-rate flows will effectively receive FIFO service with drops depending on the timing of arrivals. This does not guarantee fairness but is adequate in that it prevents starvation. An alternative approach would have been to hash the flows to a fixed number of queues in the manner of stochastic fair queuing [23]. However, we believe our scheme has the potential to prevent attackers from using deliberate hash collisions to crowd out legitimate users.

3.10 Short, Slow or Asymmetric Flows

TVA is designed to run with low overhead for long, fast flows that have a reverse channel. Short or slow connections will experience a higher relative overhead, and in the extreme may require a capability exchange for each packet. However, several factors suggest that TVA is workable even in this regime. First, the effect on aggregate efficiency is likely to be small given that most bytes belong to long flows. Second, and perhaps more importantly, our design does not introduce added latency in the form of handshakes, because capabilities are carried on existing packets, e.g., a request may be bundled with a TCP SYN and the capability returned on the TCP SYN/ACK. Third, short flows are less likely because flows are defined on a sender to a destination IP address basis. Thus all TCP connections or DNS exchanges between a pair of hosts can take place using a single capability.

TVA will have its lowest relative efficiency when all flows near a host are short, e.g., at the root DNS servers. Here, the portion of request bandwidth must be increased. TVA will then provide benefits by fair-queuing requests from different regions of the network.

Truly unidirectional flows would also require capability-only packets in the reverse direction. Fortunately, even media streaming protocols typically use some reverse channel communications.
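The per-host-pair reuse described above, where one capability covers all TCP connections or DNS exchanges between two hosts, amounts to a sender-side cache keyed by destination IP. The sketch below is our own illustration; the field names and expiry model are assumptions, not the paper's implementation.

```python
import time

class SenderCapabilityCache:
    """Sketch: a sender caches capabilities per destination IP so every
    connection to that host (TCP, DNS, ...) reuses one authorization until
    it expires. Names and the expiry model are our assumptions."""

    def __init__(self):
        self.by_dst = {}   # dst ip -> (capability list, expiry timestamp)

    def store(self, dst, caps, lifetime_sec, now=None):
        now = time.time() if now is None else now
        self.by_dst[dst] = (caps, now + lifetime_sec)

    def lookup(self, dst, now=None):
        """Return cached capabilities for dst, or None if absent/expired
        (in which case the sender must issue a new request)."""
        now = time.time() if now is None else now
        entry = self.by_dst.get(dst)
        if entry is None or now >= entry[1]:
            self.by_dst.pop(dst, None)
            return None
        return entry[0]
```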
Finally, we have not addressed IP multicast as it already requires some form of authorization action from the receiver. It would be interesting to see whether we can provide stronger protection in this setting by using capabilities.

4. TVA PROTOCOL

In this section, we describe TVA in terms of how hosts and routers process packets and provide a more detailed view of the common case for data transfer. We consider attacks more systematically in the following sections. We ignore legacy concerns for the moment, returning to them in Section 8.

There are three elements in our protocol: packets that carry capability information; hosts that act as senders and destinations; and routers that process capability information. We describe each in turn.

4.1 Packets with Capabilities

Other than legacy traffic, all packets carry a capability header that extends the behavior of IP. We implement this as a shim layer above IP, piggybacking capability information on normal packets so that there are no separate capability packets.

There are two types of packets from the standpoint of capabilities: request packets and regular packets. They share an identifying capability header and are shown in Figure 5. Request packets carry a list of blank capabilities and path identifiers that are filled in by routers as requests travel towards destinations. Regular packets have two formats: packets that carry both a flow nonce and a list of valid capabilities, and packets that carry only a flow nonce. (Recall that a flow is defined by a source and a destination IP address.) A regular packet with a list of capabilities may be used to request a new set of capabilities. We refer to such packets as renewal packets.
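The packet variants just described map onto the 4-bit type field of the capability header shown in Figure 5. The bit values below are taken from the figure; the function and dictionary names are our own, for illustration only.

```python
# Bit layout of the 4-bit type field, per Figure 5:
# 1xxx = demoted, x1xx = return info present, low two bits = packet kind.
DEMOTED = 0b1000
RETURN_INFO = 0b0100

KINDS = {
    0b00: "request",
    0b01: "regular with capabilities",
    0b10: "regular with nonce only",
    0b11: "renewal",
}

def parse_type_field(type_field):
    """Decode one 4-bit type field value into its flags and packet kind."""
    return {
        "demoted": bool(type_field & DEMOTED),
        "return_info": bool(type_field & RETURN_INFO),
        "kind": KINDS[type_field & 0b11],
    }
```

For example, a value of 1010 denotes a regular packet carrying only a flow nonce that some router has demoted.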
If a regular packet does not pass the capability check, it may be demoted to low priority traffic that is treated as legacy traffic. Such packets are called demoted packets.

[Figure 5: Types of capability packets. Return information is present if the return bit in the common header is set. Sizes are in bits. The units for N are KB; the units for T are seconds. The common header (16 bits) carries version (4), type (4), and upper protocol (8); type bits: 1xxx demoted, x1xx return info, xx00 request, xx01 regular w/ capabilities, xx10 regular w/ nonce only, xx11 renewal. A request header adds capability num (8), capability ptr (8), and per-router path-ids (16) with blank capabilities (64); a regular/renewal header adds a flow nonce (48), capability num (8), capability ptr (8), N (10), T (6), and cached capabilities (64 each). Return info carries a return type (8): 00000001 denotes a demotion notification; 0000001x denotes that an 8-bit capability num field, N, T, and a list of return capabilities follow.]

We use the lowest two bits of the type field in the capability header to indicate the type and the format of packets: request packet, regular packet with a flow nonce only, regular packet with both a flow nonce and a list of capabilities, and renewal packet. One bit in the type field is used by routers to indicate that the packet has been demoted. The remaining bit indicates whether there is also return information being carried in the reverse direction to a sender. This information follows the capability payload. It may be a list of capabilities granted by the destination or a demote notification.

Each capability is as described in Section 3: a 64 bit value, broken down into 8 bits of router timestamp in seconds (a modulo 256 clock), and 56 bits of a keyed hash.

4.2 Senders and Destinations

To send to a destination for which it has no valid capabilities, a sender must first send a request. A request will typically be combined with the first packet a sender sends, such as a TCP SYN.
When a destination receives the request, it must decide whether to grant or refuse the transfer. We described some simple policies in Section 3.3; there is also an issue we have not tackled of how to

express policies within the socket API. If the destination chooses to authorize the transfer, it sends a response with capabilities back to the sender, again combined with another packet, such as a TCP SYN/ACK. This SYN/ACK will also carry a request for the reverse direction. The reverse setup occurs in exactly the same manner as the forward setup, and we omit its description. To refuse the transfer, the destination may instead return an empty capability list, again combined with a packet such as a TCP RST.

Once the sender receives capabilities, the remainder of the transfer is straightforward. The sender sends data packets, initially with capabilities, and models capability expiration and cache expiration at routers to conservatively determine when routers will have their capabilities in cache, and when to renew the capabilities. In the common case, the flow nonce and capabilities are cached at every router. This enables the source to transmit most packets with only the flow nonce.

The destination simply implements a capability granting policy and does not need to model router behavior. It also echoes any demote signals to the sender, so that the sender may repair the path.

4.3 Routers

Routers route and forward packets as required by IP and additionally process packets according to the capability information that they carry. At a high level, routers share the capacity of each outgoing link between three classes of traffic. This is shown in Figure 2. Request packets, which do not have valid capabilities, are guaranteed access to a small, fixed fraction of the link (5% is our default) and are rate-limited not to exceed this amount. Regular packets with associated capabilities may use the remainder of the capacity. Legacy traffic is treated as the lowest priority, obtaining bandwidth that is not needed for either requests or regular packets in the traditional FIFO manner.
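The three-class link sharing just described can be sketched as a simple fluid allocation. This is our own model for illustration, not the paper's packet-level scheduler: requests are capped at their fixed fraction, regular authorized traffic may use the remainder, and legacy traffic receives only the leftover.

```python
def share_link(link_bps, request_demand, regular_demand, legacy_demand,
               request_fraction=0.05):
    """Sketch of TVA's three-class link sharing: requests are rate-limited
    to a fixed fraction (5% by default), regular authorized packets may use
    the rest, and legacy traffic gets only the leftover bandwidth."""
    request = min(request_demand, link_bps * request_fraction)  # capped slice
    regular = min(regular_demand, link_bps - request)           # remainder
    legacy = min(legacy_demand, link_bps - request - regular)   # lowest priority
    return request, regular, legacy
```

On a 10 Mb/s link, a 2 Mb/s request flood is clipped to 0.5 Mb/s, authorized traffic takes what it needs of the rest, and legacy traffic absorbs only the slack.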
To process a request, the router adds a pre-capability to the end of the list and adds a new path identifier if it is at a trust boundary. The pre-capability is computed as the local timestamp concatenated with the hash of a router secret, the current, local router time in seconds using its modulo 256 clock, and the source and destination IP addresses of the packet. This is shown in Figure 3. The path identifier is a constant that identifies the ingress to the trust domain, either with high likelihood using pseudo-random functions or with configuration information. Requests are fair-queued for onward transmission using the most recent path identifiers.

To process a regular packet, routers check that the packet is authorized, update the cached information and packet as needed, and schedule the packet for forwarding. First, the router tries to locate an entry for the flow using the source and the destination IP address from the packet. An entry will exist if the router has received a valid regular packet from that flow in the recent past. The cache entry stores the valid capability, the flow nonce, the authorized bytes to send (N), the valid time (T), and the ttl and byte count as described in Section 3.6.

If there is a cached entry for the flow, the router compares the flow nonce to the packet. If there is a match, it further checks and updates the byte count and the ttl, and then fair queues the packet as described below. If the flow nonce does not match and a list of capabilities are present, this could be the first packet with a renewed capability, and so the capability is checked and if valid, replaced in the cache entry. Equivalently, if there is not a cached entry for the flow, the capability is checked, and a cache entry is allocated if it is valid. If the packet has a valid capability and is a renewal packet, a fresh pre-capability is minted and placed in the packet.
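The capability computation just described, an 8-bit timestamp concatenated with a 56-bit keyed hash, together with the checks a router applies when validating (hash match, byte count within N, current time within timestamp + T), can be sketched as follows. This is an illustration under stated assumptions: the paper specifies the fields covered but not the hash function or byte packing, so HMAC-SHA256 and the layout here are our choices, and all names are invented.

```python
import hashlib
import hmac

def mint_capability(secret, ts, src_ip, dst_ip, n_kb, t_sec):
    """Sketch of a 64-bit capability: 8-bit timestamp (modulo-256 seconds)
    followed by 56 bits of a keyed hash over (timestamp, src, dst, N, T)."""
    msg = (bytes([ts % 256]) + src_ip + dst_ip
           + n_kb.to_bytes(2, "big") + t_sec.to_bytes(1, "big"))
    keyed56 = int.from_bytes(hmac.new(secret, msg, hashlib.sha256).digest()[:7], "big")
    return ((ts % 256) << 56) | keyed56     # timestamp || truncated keyed hash

def validate_capability(secret, cap, src_ip, dst_ip, n_kb, t_sec,
                        now_sec, bytes_sent, pkt_len):
    """Apply the router's checks: recomputed hash matches, byte count stays
    within N, and the current time is within timestamp + T."""
    ts = cap >> 56                          # recover the router timestamp
    if cap != mint_capability(secret, ts, src_ip, dst_ip, n_kb, t_sec):
        return False                        # keyed hash mismatch
    if bytes_sent + pkt_len > n_kb * 1024:
        return False                        # would exceed authorized bytes N
    if (now_sec - ts) % 256 > t_sec:
        return False                        # past expiration on the mod-256 clock
    return True
```

Note that only a router holding the secret can mint a value that passes validation, which is what lets routers verify capabilities without per-flow setup state.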
A router validates a capability using the information in the packet (the source and destination addresses, N, and T) plus the router's secret. It recomputes the two hash functions to check whether they match the capability value. The router also checks that the byte count does not exceed N and that the current time does not exceed the expiration time (of timestamp + T), and updates the entry's ttl. Any packet with a valid capability or flow nonce is scheduled using fair queuing. Our scheme does this across flows cached at the router using destination addresses by default.

If neither the packet's flow nonce nor capability is valid, then the packet is marked as demoted and queued along with legacy packets. Figure 6 shows the pseudo-code for how a capability router processes a packet.

    if (pkt->protocol == TVA) {
        isValid = false;
        if (isRTS(pkt)) {                       /* request pkt */
            insertPrecapPi(pkt);                /* insert pre-capability and path id */
            enqueRts(pkt);                      /* per path-identifier queue */
        } else {                                /* regular pkt */
            entry = lookup(pkt);
            if (entry) {                        /* has entry */
                if (pkt->nonce == entry->nonce) {
                    /* check byte count, expiration; update entry */
                    isValid = updateEntry(entry, pkt);
                } else if (validateCap(pkt)) {  /* compare the two hashes */
                    /* first renewed pkt: replace and check entry */
                    isValid = replaceEntry(entry, pkt);
                }
            } else {                            /* no entry */
                if (validateCap(pkt)) {
                    isValid = createEntry(pkt); /* create and check entry */
                }
            }
            if (isValid) {
                if (isRenewal(pkt)) {           /* renewal pkt */
                    renewPkt(pkt);              /* insert fresh pre-capability */
                }
                enqueRegular(pkt);              /* per-destination queue */
            } else {
                demote(pkt);
                enqueLegacy(pkt);
            }
        }
    } else {
        enqueLegacy(pkt);
    }

    Figure 6: How a capability router processes a packet.

5. SIMULATION RESULTS

In this section, we use ns to simulate TVA, SIFF, pushback and the legacy Internet to see how well TVA limits the impact of DoS floods. TVA is as described in the previous sections, except that we rate-limit capability requests to 1% of the link capacity, down from our default of 5%, to stress our design.
SIFF is implemented as described in [25]. It treats capability requests as legacy traffic, does not limit the number of times a capability is used to forward traffic, and does not balance authorized traffic sent to different destinations. Pushback is implemented as described in [16]. It recursively pushes destination-based network filters backwards across the incoming link that contributes most of the flood.

For each scheme, we set up fixed length transfers between legitimate users, and a destination under various attacks. We then measure: i) the average fraction of completed transfers, and ii) the average time of the transfers that complete. These metrics are useful because a successful DoS attack will cause heavy loss that will

both slow legitimate transfers and eventually cause the applications to abort them.

[Figure 7: Simulation topology. 10 legitimate users, 1~100 attackers, and a colluder connect over 10ms links to a 10Mb, 10ms bottleneck leading to the destination.]

We simulate the dumbbell topology shown in Figure 7. The bottleneck link in our simulation is provisioned to give a legitimate user a nominal throughput of 1Mb/s over a bottleneck link with a nominal capacity of 10Mb/s. The RTT is 60ms. Each attacker floods at the rate of a legitimate user, 1Mb/s, and we vary the intensity of the attacks from 1/10 of the bottleneck bandwidth to 10 times the bottleneck bandwidth by varying the number of attackers from 1 to 100. We use these relatively low rates to speed the simulation, since the key variables are the ratios between attacker, legitimate user, and the bottleneck bandwidth, given that there is a bandwidth-delay product sufficient to avoid small window effects.

Each legitimate user sends a 20KB file a thousand times using TCP, the next transfer starting after the previous one completes or aborts due to excessive loss. Capability requests are piggybacked on TCP SYNs. To provide a fair comparison for other schemes, we modify TCP to have a more aggressive connection establishment algorithm. Specifically, the timeout for TCP SYNs is fixed at one second (without the normal exponential backoff) and up to eight retransmissions are performed. Without this change, SIFF suffers disproportionately because it treats SYN packets with capability requests as legacy traffic, and therefore its performance under overload will be dominated by long TCP timeouts. Similarly, we set the TCP data exchange to abort the connection if its retransmission timeout for a regular data packet exceeds 64 seconds, or it has transmitted the same packet more than 10 times.

We note that TCP inefficiencies limit the effective throughput of a legitimate user to be no more than 533Kb/s in our scenario, given the transfer of 20KB with a 60ms RTT.
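The 533Kb/s figure follows from a simple slow-start model, one RTT for the handshake and a congestion window that doubles each RTT from one segment. The paper does not spell out this calculation, so the model below is our reconstruction under those assumptions.

```python
import math

def tcp_transfer_time(file_bytes, mss=1500, rtt=0.060):
    """Rough slow-start model (our assumption): one RTT for the SYN
    handshake, then the window doubles each RTT starting from one segment."""
    pkts = math.ceil(file_bytes / mss)
    rtts, sent, cwnd = 1, 0, 1      # 1 RTT charged for connection setup
    while sent < pkts:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts * rtt

t = tcp_transfer_time(20 * 1000)                 # 20KB file: 5 RTTs = 0.3s
throughput_kbps = 20 * 1000 * 8 / t / 1000       # about 533 Kb/s, matching the text
```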
This implies that there is virtually no bandwidth contention with a pool of 10 legitimate users – the contention effects we see come directly from massed attackers.

5.1 Legacy Packet Floods

The first scenario we consider is that of each attacker flooding the destination with legacy traffic at 1Mb/s. Figure 8 shows the fraction of completions and the average completion time for TVA in comparison with SIFF, pushback, and the current Internet.

[Figure 8: Legacy traffic flooding does not increase the file transfer time or decrease the fraction of completed transfers with TVA. With SIFF, file transfer time increases and the fraction of completed transfers drops when the intensity of attack increases; with pushback, as the number of attackers increases, pushback poorly isolates attack and legitimate traffic, and the transfer time increases and the fraction of completed transfers drops sharply; with today's Internet, the transfer time increases sharply, and the fraction of completion quickly approaches zero.]

We see that TVA maintains the fraction of completions near 100% and the average completion time remains small as the intensity of attacks increases. That is, our design strictly limits the impact of legacy traffic floods. This is because we treat legacy traffic with lower priority than request traffic.

SIFF treats both legacy and request packets as equally low priority traffic. Therefore, when the intensity of legacy traffic exceeds the bottleneck bandwidth, a legitimate user's request packets begin to suffer losses. We see this in Figure 8 when the number of attackers is greater than 10 and the fraction of completions drops and the completion time increases.
When the aggregate attack bandwidth Ba is greater than the bottleneck bandwidth Bl, the packet loss rate p is approximately (Ba - Bl)/Ba. Once a request packet gets through, a sender's subsequent packets are authorized packets and are treated with higher priority. So the probability that a file transfer completes with SIFF is equal to the probability that a request gets through within nine tries, i.e., (1 - p^9). When the number of attackers is 100, p is 90%, giving a completion rate of 1 - 0.9^9 ≈ 0.61. This is consistent with our results. Similarly, the average time for a transfer to complete with up to 9 tries is:

    T_avg = (Σ_{i=1}^{9} i · p^(i-1) · (1 - p)) / (1 - p^9)

When there are 100 attackers, this time is 4.05 seconds, which is again consistent with our results.

Pushback performs well until the number of attackers is large, at which stage it provides poor isolation between attack traffic and legitimate traffic. This is because attack traffic becomes harder to identify as the number of attackers increases, since each incoming link contributes a small fraction of the overall attack. As can be


seen in Figure 8, when the number of attackers is less than 40, pushback is able to cut off a significant portion of the attack traffic: the file transfer time increases by less than 2 seconds, and the fraction of completed transfers remains at 100%. However, pushback becomes markedly less effective as the number of attackers increases further. The file transfer time increases significantly, and the fraction of completed transfers drops sharply. TVA could be viewed as an alternative pushback scheme that is always activated and uses capabilities as a robust signature to separate attack traffic and legitimate traffic.

With the Internet, legitimate traffic and attack traffic are treated alike. Therefore, every packet from a legitimate user encounters a loss rate of p. The probability that a file transfer of n packets gets through, with each packet delivered within a fixed number of retransmissions k, is (1 - p^k)^n. This probability decreases polynomially as the drop rate p increases and exponentially as the number of packets n (or the file size) increases. This explains the results we see in Figure 8: the fraction of completed transfers quickly approaches zero as the number of attackers increases. The few transfers that do complete have completion times of hundreds of seconds, and are outside the y-axis range of Figure 8.

[Figure 9: Request packet flooding does not increase the transfer time or decrease the fraction of completed transfers with TVA.]

5.2 Request Packet Floods

The next scenario we consider is that of each attacker flooding the destination with request packets at 1Mb/s. In this attack, we assume the destination is able to distinguish requests from legitimate users from those from attackers.
With TVA, request packets are rate limited and will not reduce the available capacity for authorized packets. Requests from attackers and legitimate users are queued separately, so that excessive requests from attackers will be dropped without causing requests from legitimate users to be dropped. We see this in Figure 9: the fraction of completions does not drop and the transfer time does not increase. In contrast, the results for SIFF are similar to those for legacy packet floods, as SIFF treats both requests and legacy traffic as low priority traffic. Both pushback and the legacy Internet treat RTS traffic as regular data traffic. The results for them are the same as those for the legacy traffic attack.

5.3 Authorized Packet Floods

Strategic attackers will realize that it is more effective to collude when paths can be found that share the bottleneck link with the destination. The colluders grant capabilities to requests from attackers, allowing the attackers to send authorized traffic at their maximum rate. Figure 10 shows the results under this attack.

[Figure 10: With TVA, the per-destination queue ensures that the destination and the colluder equally share the access link bandwidth. The transfer time slightly increases (not noticeable from the figure) as a result of reduced bandwidth, and the fraction of completions remains 100%.]

Because TVA allocates bandwidth approximately fairly among all destinations and allows destinations to use fine-grained capabilities to control how much bandwidth to allocate to a sender, this attack causes bandwidth to be fairly allocated between the colluder and the destination. The transfer time slightly increases (from 0.31 seconds to 0.33 seconds, not noticeable from the figure) as a result of reduced bandwidth, and all transfers complete.
If the number of colluders that share a bottleneck link with the destination increases, the destination gets a decreased share of the bandwidth. Each legitimate user will get a lesser share of the bandwidth, but no user will be starved.
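As an aside, the two closed-form completion models used in this section's analysis (the SIFF request-retry model from Section 5.1 and the per-packet Internet model) are easy to check numerically. The sketch below is illustrative only: the Internet-case packet count n and retransmission limit k are plausible values for a 20KB transfer, not figures taken from the text.

```python
# Loss rate at an overloaded bottleneck: p = (Ba - Bl)/Ba.
# SIFF completes if one of 9 request tries succeeds: 1 - p^9.
# The Internet case needs all n packets through within k tries each:
# (1 - p^k)^n.

def loss_rate(attack_mbps, bottleneck_mbps=10.0):
    return max(0.0, (attack_mbps - bottleneck_mbps) / attack_mbps)

def siff_completion(p, tries=9):
    return 1 - p ** tries

def internet_completion(p, n, k):
    return (1 - p ** k) ** n

p = loss_rate(100.0)      # 100 attackers at 1Mb/s -> p = 0.9
siff = siff_completion(p) # ~0.61, matching the rate quoted in 5.1
# Assumed: n = 14 packets for a 20KB file (1460-byte MSS), k = 10 tries.
inet = internet_completion(p, n=14, k=10)
print(round(p, 2), round(siff, 2), inet)  # inet is well under 1%
```

The contrast between the two expressions is the point: one successful request authorizes the whole transfer under SIFF, while the Internet case pays the loss rate on every packet.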


Under the same attack with SIFF, legitimate users are completely starved when the intensity of the attack exceeds the bottleneck bandwidth. Again, this is because the request packets are treated with low priority and are dropped in favor of the authorized attack traffic. We see in Figure 10 that the request completion rate drops sharply to zero when the attacking bandwidth reaches the bottleneck bandwidth of 10Mb/s. The very few transfers that complete do so only because of traffic fluctuations, and suffer a sharp increase in the average transfer time.

Both pushback and the legacy Internet treat request traffic and authorized traffic as regular traffic. Thus, the results for each scheme under an authorized traffic attack are similar to those under a legacy traffic attack. Pushback will identify the flow aggregate destined to the colluder as the offending aggregate that causes most packet drops, but it fails to rate limit the aggregate to its fair share of bandwidth.

[Figure 11: Attackers can only cause temporary damage if a destination stops renewing their capabilities. TVA uses a fine-grained capability to limit the impact of authorizing an attacker to a smaller amount of attack traffic compared to SIFF, even assuming SIFF has a rapidly changing router secret that expires every 3 seconds.]

5.4 Imprecise Authorization Policies

Finally, we consider the impact of imprecise policies, when a destination sometimes authorizes attackers because it cannot reliably distinguish between legitimate users and attackers at the time that it receives a request. In the extreme that the destination cannot differentiate attackers from users at all, it must grant them equally.
However, if the destination is able to differentiate likely attack requests, even imprecisely, TVA is still able to limit the damage of DoS floods. To see this, we simulate the simple authorization policy described in Section 3.3: a destination initially grants all requests, but stops renewing capabilities for senders that misbehave by flooding traffic. We set the destination to grant an initial capability of 32KB in 10 seconds. This allows an attacker to flood at a rate of 1Mb/s, but for only 32KB until the capability expires. The destination does not renew capabilities because of the attack. Figure 11 shows how the transfer time changes for TVA with this policy as an attack commences. There are two attacks: a high intensity one in which all 100 attackers attack simultaneously, and a low intensity one in which the 100 attackers divide into 10 groups that flood one after the other, each group starting as the previous one finishes. We see that both attacks are effective for less than 5 seconds, causing temporary congestion and increasing the transfer time of some connections by about 2 seconds.

Figure 11 also shows the results for SIFF under the same attacks. In SIFF, the expiration of a capability depends on changing a router secret: even if the destination determines that the sender is misbehaving, it is powerless to revoke the authorization beforehand. This suggests that rapid secret turnover is needed, but there are practical limitations on how quickly the secret can be changed, e.g., the lifetime of a router secret should be longer than a small multiple of TCP timeouts. In our experiment, we assume SIFF can expire its capabilities every three seconds. By contrast, TVA expires its router secret every 128 seconds.

    Packet type                       Processing time
    Request                           460 ns
    Regular with a cached entry       33 ns
    Regular without a cached entry    1486 ns
    Renewal with a cached entry       439 ns
    Renewal without a cached entry    1821 ns

Table 1: Processing overhead of different types of packets.
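The temporary nature of the damage in Figure 11 follows from simple arithmetic: at the attack rate used here, the initial capability grant is exhausted in well under a second. A quick check, with values from the policy above:

```python
# How long a 32KB initial capability sustains a 1Mb/s flood before
# the router stops forwarding the attacker's authorized packets.
CAPABILITY_BYTES = 32 * 1024  # initial grant: 32KB over 10 seconds
FLOOD_RATE_BPS = 1_000_000    # attacker floods at 1Mb/s

flood_seconds = CAPABILITY_BYTES * 8 / FLOOD_RATE_BPS
print(f"{flood_seconds:.2f}s")  # about 0.26s of authorized attack traffic
```

So each unrenewed capability buys an attacker only about a quarter second of authorized flooding, which is why the congestion in Figure 11 is brief even with 100 attackers.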
We see that both attacks have a much more pronounced effect on SIFF. The high intensity attack increases the transfer time by 4 seconds, and the low intensity attack lasts for 30 seconds. In each attack period of three seconds, all legitimate requests are blocked until the next transition. As a result, the transfer time jumps to more than three seconds.

6. IMPLEMENTATION

We prototyped TVA using the Linux netfilter framework [19] running on commodity hardware. We implemented the host portion of the protocol as a user-space proxy, as this allows legacy applications to run without modification. We implemented router capability processing as a kernel module using the AES-hash as the first hash function (for pre-capabilities) and SHA1 as the second hash function [17] (for capabilities).

The purpose of this effort was to check the completeness of our design and to understand the processing costs of capabilities. We did not consider the processing costs of fair queuing. In our experiment, we set up a router using a dual-processor 3.2GHz Pentium Xeon machine running a Linux 2.6.8 kernel. It used the native Linux forwarding module. We then used a kernel packet generator to generate different types of packets and sent them through the router, modifying the code to force the desired execution path. For each run, our load generator machines sent one million packets of each type to the router. We recorded the average number of instruction cycles for the router to process each type of packet, averaging the results over five experiments.

Table 1 shows the results of this experiment, with cycles converted to time. In normal operation, the most common type of packet is a regular packet with an entry at a router. The processing
