Internet Indirection Infrastructure

Ion Stoica, Daniel Adkins, Shelley Zhuang, Scott Shenker†, Sonesh Surana
University of California, Berkeley
{istoica, dadkins, shelleyz, sonesh}@cs.berkeley.edu

ABSTRACT
Attempts to generalize the Internet's point-to-point communication abstraction to provide services like multicast, anycast, and mobility have faced challenging technical problems and deployment barriers. To ease the deployment of such services, this paper proposes an overlay-based Internet Indirection Infrastructure (i3) that offers a rendezvous-based communication abstraction. Instead of explicitly sending a packet to a destination, each packet is associated with an identifier; this identifier is then used by the receiver to obtain delivery of the packet. This level of indirection decouples the act of sending from the act of receiving, and allows i3 to efficiently support a wide variety of fundamental communication services. To demonstrate the feasibility of this approach, we have designed and built a prototype based on the Chord lookup protocol.

Categories and Subject Descriptors
H.4.3 [Information Systems]: Communication

General Terms
Design

Keywords
Indirection, Abstraction, Scalable, Internet, Architecture

1. INTRODUCTION
The original Internet architecture was designed to provide unicast point-to-point communication between fixed locations. In this basic service, the sending host knows the IP address of the receiver and the job of IP routing and forwarding is simply to deliver packets

This research was sponsored by NSF under grant numbers Career Award ANI-0133811 and ITR Award ANI-0085879. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NSF, or the U.S. government.
†ICSI Center for Internet Research (ICIR), Berkeley, shenker@icsi.berkeley.edu

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SIGCOMM '02 Pittsburgh, Pennsylvania USA
Copyright 2002 ACM X-XXXXX-XX-X/XX/XX ...$5.00.

to the (fixed) location of the desired IP address. The simplicity of this point-to-point communication abstraction contributed greatly to the scalability and efficiency of the Internet.

However, many applications would benefit from more general communication abstractions, such as multicast, anycast, and host mobility. In these abstractions, the sending host no longer knows the identity of the receiving hosts (multicast and anycast) and the location of the receiving host need not be fixed (mobility). Thus, there is a significant and fundamental mismatch between the original point-to-point abstraction and these more general ones. All attempts to implement these more general abstractions have relied on a layer of indirection that decouples the sending hosts from the receiving hosts; for example, senders send to a group address (multicast or anycast) or a home agent (mobility), and the IP layer of the network is responsible for delivering the packet to the appropriate location(s).

Although these more general abstractions would undoubtedly bring significant benefit to end-users, it remains unclear how to achieve them. These abstractions have proven difficult to implement scalably at the IP layer [4, 13, 27]. Moreover, deploying additional functionality at the IP layer requires a level of community-wide consensus and commitment that is hard to achieve.
In short, implementing these more general abstractions at the IP layer poses difficult technical problems and major deployment barriers.

In response, many researchers have turned to application-layer solutions (either end-host or overlay mechanisms) to support these abstractions [4, 15, 24]. While these proposals achieve the desired functionality, they do so in a very disjointed fashion in that solutions for one service are not solutions for other services; e.g., proposals for application-layer multicast don't address mobility, and vice-versa. As a result, many similar and largely redundant mechanisms are required to achieve these various goals. In addition, if overlay solutions are used, adding a new abstraction requires the deployment of an entirely new overlay infrastructure.

In this paper, we propose a single new overlay network that serves as a general-purpose Internet Indirection Infrastructure (i3). i3 offers a powerful and flexible rendezvous-based communication abstraction; applications can easily implement a variety of communication services, such as multicast, anycast, and mobility, on top of this communication abstraction. Our approach provides a general overlay service that avoids both the technical and deployment challenges inherent in IP-layer solutions and the redundancy and lack of synergy in more traditional application-layer approaches. We thus hope to combine the generality of IP-layer solutions with the deployability of overlay solutions.

The paper is organized as follows. In Sections 2 and 3 we provide an overview of the i3 architecture and then a general discussion on how i3 might be used in applications. Section 4 covers ad-
ditional aspects of the design such as scalability and efficient routing. Section 5 describes some simulation results on i3 performance along with a discussion on an initial implementation. Related work is discussed in Section 6, followed by a discussion on future work in Section 7. We conclude with a summary in Section 8.

[Figure 1: (a) i3's API: send packet, insert trigger, remove trigger. Example illustrating communication between two nodes. (b) The receiver R inserts trigger (id, R). (c) The sender sends packet (id, data).]

2. i3 OVERVIEW
In this section we present an overview of i3. We start with the basic service model and communication abstraction, then briefly describe i3's design.

2.1 Service Model
The purpose of i3 is to provide indirection; that is, it decouples the act of sending from the act of receiving. The i3 service model is simple: sources send packets to a logical identifier, and receivers express interest in packets sent to an identifier. Delivery is best-effort like in today's Internet, with no guarantees about packet delivery.

This service model is similar to that of IP multicast. The crucial difference is that the i3 equivalent of an IP multicast join is more flexible. IP multicast offers a receiver a binary decision of whether or not to receive packets sent to that group (this can be indicated on a per-source basis). It is up to the multicast infrastructure to build efficient delivery trees. The i3 equivalent of a join is inserting a trigger. This operation is more flexible than an IP multicast join as it allows receivers to control the routing of the packet. This provides two advantages.
First, it allows them to create, at the application level, services such as mobility, anycast, and service composition out of this basic service model. Thus, this one simple service model can be used to support a wide variety of application-level communication abstractions, alleviating the need for many parallel and redundant overlay infrastructures. Second, the infrastructure can give responsibility for efficient tree construction to the end-hosts. This allows the infrastructure to remain simple, robust, and scalable.

2.2 Rendezvous-Based Communication
The service model is instantiated as a rendezvous-based communication abstraction. In their simplest form, packets are pairs (id, data) where id is an m-bit identifier and data consists of a payload (typically a normal IP packet payload). Receivers use triggers to indicate their interest in packets. In the simplest form, triggers are pairs (id, addr), where id represents the trigger identifier, and addr represents a node's address which consists of an IP address and a port number. A trigger (id, addr) indicates that all packets with an identifier id should be forwarded (at the IP level) by the i3 infrastructure to the node identified by addr. More specifically, the rendezvous-based communication abstraction exports three basic primitives as shown in Figure 1(a).

Figure 1(b) illustrates the communication between two nodes, where receiver R wants to receive packets sent to id. The receiver inserts the trigger (id, R) into the network. When a packet is sent to identifier id, the trigger causes it to be forwarded via IP to R. Thus, much as in IP multicast, the identifier id represents a logical rendezvous between the sender's packets and the receiver's trigger. This level of indirection decouples the sender from the receiver. The senders need neither be aware of the number of receivers nor their location.
Similarly, receivers need not be aware of the number or location of senders.

The above description is the simplest form of the abstraction. We now describe a generalization that allows inexact matching between identifiers. (A second generalization that replaces identifiers with a stack of identifiers is described in Section 2.5.) We assume identifiers are m bits long and that there is some exact-match threshold k with k < m. We then say that an identifier id_t in a trigger matches an identifier id in a packet if and only if

(a) id and id_t have a prefix match of at least k bits, and
(b) there is no trigger with an identifier that has a longer prefix match with id.

In other words, a trigger identifier id_t matches a packet identifier id if and only if id_t is a longest prefix match (among all other trigger identifiers) and this prefix match is at least as long as the exact-match threshold k. The value k is chosen to be large enough so that the probability that two randomly chosen identifiers match is negligible.¹ This allows end-hosts to choose the identifiers independently with negligible chance of collision.

2.3 Overview of the Design
We now briefly describe the infrastructure that supports this rendezvous communication abstraction; a more in-depth description follows in Section 4. i3 is an overlay network which consists of a set of servers that store triggers and forward packets (using IP) between i3 nodes and to end-hosts. Identifiers and triggers have meaning only in this i3 overlay.

One of the main challenges in implementing i3 is to efficiently match the identifiers in the packets to those in triggers. This is done by mapping each identifier to a unique i3 node (server); at any given time there is a single i3 node responsible for a given id. When a trigger (id, addr) is inserted, it is stored on the i3 node responsible for id.
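To make the inexact matching rule concrete, the following is a minimal Python sketch (ours, not the paper's implementation) of how a single server could resolve a packet identifier against its stored trigger identifiers using the longest-prefix rule with exact-match threshold k. Identifiers are modeled as bit strings; the helper names are hypothetical.

```python
def prefix_len(a: str, b: str) -> int:
    """Length of the common bit prefix of two identifiers (bit strings)."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def match_triggers(packet_id: str, trigger_ids: list[str], k: int) -> list[str]:
    """Return the trigger identifiers matching packet_id: those sharing the
    longest common prefix with packet_id, provided it is at least k bits."""
    best = max((prefix_len(packet_id, t) for t in trigger_ids), default=0)
    if best < k:
        return []  # no trigger clears the exact-match threshold
    return [t for t in trigger_ids if prefix_len(packet_id, t) == best]

# Example with m = 8, k = 4: two triggers share the 4-bit prefix "1010",
# but only the longer prefix match receives the packet.
triggers = ["10100001", "10101111", "01110000"]
print(match_triggers("10101100", triggers, k=4))  # → ['10101111']
```

Because all identifiers agreeing in the first k bits are stored on the same server, this comparison never needs to span multiple nodes.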
When a packet is sent to id it is routed by i3 to the node responsible for id; there it is matched against any triggers for that id and forwarded (using IP) to all hosts interested in packets sent to that identifier. To facilitate inexact matching, we require that all id's that agree in the first k bits be stored on the same i3 server. The longest prefix match required for inexact matching can then be executed at a single node (where it can be done efficiently).

Note that packets are not stored in i3; they are only forwarded. i3 provides a best-effort service like today's Internet. i3 implements neither reliability nor ordered delivery on top of IP. End-hosts use

¹In our implementation we choose m = 256 and k = 128.
[Figure 2: Communication abstractions provided by i3. (a) Mobility: The change of the receiver's address from R to R' is transparent to the sender. (b) Multicast: Every packet (id, data) is forwarded to each receiver R_i that inserts the trigger (id, R_i). (c) Anycast: The packet matches the trigger of receiver R2. id_p|id_s denotes an identifier of size m, where id_p represents the prefix of the k most significant bits, and id_s represents the suffix of the m-k least significant bits.]

periodic refreshing to maintain their triggers in i3. Hosts contact an i3 node when sending i3 packets or inserting triggers. This i3 node then forwards these packets or triggers to the i3 node responsible for the associated identifiers. Thus, hosts need only know one i3 node in order to use the i3 infrastructure.

2.4 Communication Primitives Provided by i3
We now describe how i3 can be used by applications to achieve the more general communication abstractions of mobility, multicast, and anycast.

2.4.1 Mobility
The form of mobility addressed here is when a host (e.g., a laptop) is assigned a new address when it moves from one location to another.
A mobile host that changes its address from R to R' as a result of moving from one subnet to another can preserve the end-to-end connectivity by simply updating each of its existing triggers from (id, R) to (id, R'), as shown in Figure 2(a). The sending host need not be aware of the mobile host's current location or address. Furthermore, since each packet is routed based on its identifier to the server that stores its trigger, no additional operation needs to be invoked when the sender moves. Thus, i3 can maintain end-to-end connectivity even when both end-points move simultaneously.

With any scheme that supports mobility, efficiency is a major concern [25]. With i3, applications can use two techniques to achieve efficiency. First, the address of the server storing the trigger is cached at the sender, and thus subsequent packets are forwarded directly to that server via IP. This way, most packets are forwarded through only one i3 server in the overlay network. Second, to alleviate the triangle routing problem due to the trigger being stored at a server far away, end-hosts can use off-line heuristics to choose triggers that are stored at i3 servers close to themselves (see Section 4.5 for details).

2.4.2 Multicast
Creating a multicast group is equivalent to having all members of the group register triggers with the same identifier id. As a result, any packet that matches id is forwarded to all members of the group as shown in Figure 2(b). We discuss how to make this approach scalable in Section 3.4.

Note that unlike IP multicast, with i3 there is no difference between unicast or multicast packets, in either sending or receiving. Such an interface gives maximum flexibility to the application. An application can switch on-the-fly from unicast to multicast by simply having more hosts maintain triggers with the same identifier. For example, in a telephony application this would allow multiple parties to seamlessly join a two-party conversation.
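The mobility and multicast primitives can be illustrated with a toy Python model (ours, not from the paper) of a trigger table: a dict from identifier to receiver addresses. Mobility is a trigger update; multicast is several triggers sharing one identifier. For brevity this sketch uses exact-match identifiers only, and the function names merely mirror the API labels in Figure 1(a).

```python
# Toy model of an i3 trigger table: exact-match identifiers only,
# addresses modeled as plain strings.
triggers: dict[str, list[str]] = {}

def insert_trigger(id_: str, addr: str) -> None:
    triggers.setdefault(id_, []).append(addr)

def remove_trigger(id_: str, addr: str) -> None:
    triggers[id_].remove(addr)

def send_packet(id_: str, data: str) -> list[tuple[str, str]]:
    """Return the (addr, data) deliveries i3 would make via IP."""
    return [(addr, data) for addr in triggers.get(id_, [])]

# Mobility: receiver moves from address R to R' and updates its trigger;
# the sender keeps sending to the same identifier.
insert_trigger("id1", "R")
remove_trigger("id1", "R")
insert_trigger("id1", "R'")
print(send_packet("id1", "data"))  # delivered to R'; sender unchanged

# Multicast: another receiver inserts a trigger with the same identifier.
insert_trigger("id1", "R2")
print(send_packet("id1", "data"))  # now delivered to both R' and R2
```

The sender's side is identical in both runs, which is the point: switching from unicast to multicast requires no change at the sender.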
In contrast, with IP, an application has to at least change the IP destination address in order to switch from unicast to multicast.

2.4.3 Anycast
Anycast ensures that a packet is delivered to exactly one receiver in a group, if any. Anycast enables server selection, a basic building block for many of today's applications. To achieve this with i3, all hosts in an anycast group maintain triggers which are identical in the k most significant bits. These k bits play the role of the anycast group identifier. To send a packet to an anycast group, a sender uses an identifier whose k-bit prefix matches the anycast group identifier. The packet is then delivered to the member of the group whose trigger identifier best matches the packet identifier according to the longest prefix matching rule (see Figure 2(c)). Section 3.3 gives two examples of how end-hosts can use the last m-k bits of the identifier to encode their preferences.

2.5 Stack of Identifiers
In this section, we describe a second generalization of i3, which replaces identifiers with identifier stacks. An identifier stack is a list of identifiers that takes the form (id_1, id_2, id_3, ..., id_k) where id_i is either an identifier or an address. Packets p and triggers t are thus of the form:

- Packet p = (id_stack, data)
- Trigger t = (id, id_stack)
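One plausible reading of this generalization, sketched in Python under our own assumed semantics (the precise forwarding rules are given later in Section 3): when the head of a packet's identifier stack matches a trigger, the head is popped and the trigger's identifier stack is prepended in its place.

```python
# Sketch of identifier-stack rewriting: when a packet's head identifier
# matches trigger t = (id, stack), the head is replaced by t's stack.
def rewrite(packet_stack: list[str], trigger: tuple[str, list[str]]) -> list[str]:
    tid, tstack = trigger
    head, rest = packet_stack[0], packet_stack[1:]
    assert head == tid, "trigger only applies when it matches the head"
    return tstack + rest  # prepend trigger's stack in place of the head

# A packet with stack (id1, id2, id3) meets trigger (id1, (a, b)):
print(rewrite(["id1", "id2", "id3"], ("id1", ["a", "b"])))  # → ['a', 'b', 'id2', 'id3']
```

The rewritten packet is then forwarded based on the new head of its stack, which gives triggers the ability to redirect packets to further identifiers rather than only to addresses.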
i3_recv(p)  // upon receiving packet p
  id = head(p.id_stack);  // get head of p's stack
  // is local server responsible for id's best match?
  if (isMatchLocal(id) == FALSE)
    i3_forward(p);  // matching trigger stored elsewhere
    return;
  pop(p.id_stack);  // pop id from p's stack...
  t_set = get_matches(id);  // get all triggers matching id
  if (t_set == ∅)
    if (p.id_stack == ∅)
      drop(p);  // nowhere else to forward
    else
      i3_forward(p);
  while (t_set != ∅)  // forward packet to each matching trigger
    t = get_trigger(t_set);
    p1 = copy(p);  // create new packet to send
    // ... add t's stack at head of p1's stack
    prepend(t.id_stack, p1.id_stack);
    i3_forward(p1);

i3_forward(p)  // send/forward packet p
  id = head(p.id_stack);  // get head of p's stack
  if (type(id) == IP_ADDR_TYPE)
    IP_send(id, p);  // id is an IP address; send p to id via IP
  else
    forward(p);  // forward p via overlay network

Figure 3: Pseudo-code of the receiving and forward operations executed by an i3 server.

The generalized form of packets allows a source to send a packet to a series of identifiers, much as in source routing. The generalized form of triggers allows a trigger to send a packet to another identifier rather than to an address. This extension allows for much greater flexibility. To illustrate this point, in Sections 3.1, 3.2, and 4.3, we discuss how identifier stacks can be used to provide service composition, implement heterogeneous multicast, and increase i3's robustness, respectively.

A packet p is always forwarded based on the first identifier id in its identifier stack until it reaches the server that is responsible for storing the matching trigger(s) for p. Consider a packet p with an identifier stack (id1, id2, id3). If there is no trigger in i3 whose identifier matches id1, id1 is popped from the stack. The process is repeated until an identifier in p's identifier stack matches a trigger t. If no such trigger is found, packet p is dropped. If, on the other hand, there is a trigger t whose identifier matches id1, then id1 is replaced by t's identifier stack. In particular, if t's identifier stack is (x, y), then p's identifier stack becomes (x, y, id2, id3). If id1 is an IP address, p is sent via IP to that address, and the rest of p's identifier stack, i.e., (id2, id3), is forwarded to the application. The semantics of id2 and id3 are in general application-specific. However, in this paper we consider only examples in which the application is expected to use these identifiers to forward the packet after it has processed it. Thus, an application that receives a packet with identifier stack (id2, id3) is expected to send another packet with the same identifier stack (id2, id3). As shown in the next section, this allows i3 to provide support for service composition.

Figure 3 shows the pseudo-code of the receiving and forwarding operations executed by an i3 node. Upon receiving a packet p, a server first checks whether it is responsible for storing the trigger matching packet p. If not, the server forwards the packet at the i3 level. If yes, the code returns the set of triggers that match p. For each matching trigger t, the identifier stack of the trigger is prepended to p's identifier stack. The packet p is then forwarded based on the first identifier in its stack.

Figure 4: (a) Service composition: The sender (S) specifies that packets should be transcoded at server T before being delivered to the destination (R). (b) Heterogeneous multicast: Receiver R1 specifies that it wants to receive H.263 data, while R2 specifies that it wants to receive MPEG data. The sender sends MPEG data.

3. USING i3

In this section we present a few examples of how i3 can be used. We discuss service composition, heterogeneous multicast, server selection, and large scale multicast. In the remainder of the paper, we say that packet p matches trigger t if the first identifier of p's identifier stack matches t's identifier.

3.1 Service Composition

Some applications may require third parties to process the data before it reaches the destination [10]. An example is a wireless application protocol (WAP) gateway translating HTML web pages to WML for wireless devices [35]. WML is a lightweight version of HTML designed to run on wireless devices with small screens and limited capabilities. In this case, the server can forward the web page to a third-party server T that implements the HTML-WML transcoding, which in turn processes the data and sends it to the destination via WAP.

In general, data might need to be transformed by a series of third-party servers before it reaches the destination. In today's Internet, the application needs to know the set of servers that perform transcoding and then explicitly forward data packets via these servers. With i3, this functionality can be easily implemented by using a stack of identifiers. Figure 4(a) shows how data packets containing HTML information can be redirected to the transcoder, and thus arrive at the receiver containing WML information. The sender associates with each data packet the stack (id_HTML-WML, id), where id represents the flow identifier. As a result, the data packet is routed first to the server which performs the transcoding. Next, the server inserts packet (id, data) into i3, which delivers it to the receiver.
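To make the forwarding semantics concrete, here is a runnable sketch. It is our own simplification, not the paper's implementation: i3 is collapsed to a single server (isMatchLocal always holds) with exact identifier matching, and all names and identifiers are illustrative. It replays the service-composition example of Figure 4(a).

```python
# Single-server sketch of the i3_recv / i3_forward operations of Figure 3.
delivered = []     # (receiver, remaining id_stack, data) handed up via "IP"
triggers = {}      # identifier -> list of identifier stacks

def is_addr(x):
    """Addresses are modeled as ('addr', name) tuples."""
    return isinstance(x, tuple) and x[0] == "addr"

def insert_trigger(tid, id_stack):
    triggers.setdefault(tid, []).append(list(id_stack))

def i3_forward(id_stack, data):
    head = id_stack[0]
    if is_addr(head):
        # head is an IP address: deliver via IP; the rest of the
        # stack is passed to the application.
        delivered.append((head[1], id_stack[1:], data))
    else:
        i3_recv(id_stack, data)      # forward via the overlay

def i3_recv(id_stack, data):
    head, rest = id_stack[0], id_stack[1:]
    matches = triggers.get(head, [])
    if not matches:
        if rest:
            i3_forward(rest, data)   # no match: pop head, keep trying
        return                       # empty stack: drop the packet
    for t_stack in matches:          # one copy per matching trigger,
        i3_forward(t_stack + rest, data)   # trigger stack prepended

# Service composition (Figure 4(a)): route HTML through transcoder T.
insert_trigger("id_HTML-WML", [("addr", "T")])
insert_trigger("id", [("addr", "R")])
i3_forward(["id_HTML-WML", "id"], "html")
# T received the data with remaining stack ["id"]; after transcoding
# it re-sends the packet with that same stack, reaching R:
i3_forward(["id"], "wml")
```

After the two sends, `delivered` holds first the HTML packet at T (with remaining stack `["id"]`) and then the transcoded WML packet at R.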
3.2 Heterogeneous Multicast

Figure 4(b) shows a more complex scenario in which an MPEG video stream is played back by one H.263 receiver and one MPEG receiver. To provide this functionality, we use the ability of the receiver, instead of the sender (see Section 2.5), to control the transformations performed on data packets. In particular, the H.263 receiver R1 inserts trigger (id, (id_MPEG-H.263, R1)), and the sender sends packets (id, data). Each packet matches R1's trigger, and as a result the packet's identifier id is replaced by the trigger's stack (id_MPEG-H.263, R1). Next, the packet is forwarded to the MPEG-H.263 transcoder, and then directly to receiver R1. In contrast, an MPEG receiver R2 only needs to maintain a trigger (id, R2) in i3. This way, receivers with different display capabilities can subscribe to the same multicast group. Another useful application is to have the receiver insist that all data go through a firewall first before reaching it.

3.3 Server Selection

i3 provides good support for basic server selection through the use of the last m - k bits of the identifiers to encode application preferences.(2) To illustrate this point, consider two examples.

In the first example, assume that there are several web servers and the goal is to balance the client requests among these servers. This goal can be achieved by setting the m - k least significant bits of both trigger and packet identifiers to random values. If servers have different capacities, then each server can insert a number of triggers proportional to its capacity. Finally, one can devise an adaptive algorithm in which each server varies the number of triggers as a function of its current load.

In the second example, consider the goal of selecting a server that is close to the client in terms of latency. To achieve this goal, each server can use the last m - k bits of its trigger identifiers to encode its location, and the client can use the last m - k bits in the packets' identifier to encode its own location. In the simplest case, the location of an end-host (i.e., server or client) can be the zip code of the place where the end-host is located; the longest prefix matching procedure used by i3 would then result in the packet being forwarded to a server that is relatively close to the client.(3)

3.4 Large Scale Multicast

The multicast abstraction presented in Section 2.4.2 assumes that all members of a multicast group insert triggers with identical identifiers. Since triggers with identical identifiers are stored at the same i3 server, that server is responsible for forwarding each multicast packet to every member of the multicast group. This solution obviously does not scale to large multicast groups.

One approach to address this problem is to build a hierarchy of triggers, where each member Ri of a multicast group idg replaces its trigger (idg, Ri) by a chain of triggers (idg, x1), (x1, x2), ..., (xi, Ri). This substitution is transparent to the sender: a packet (idg, data) will still reach Ri via the chain of triggers. Figure 5 shows an example of a multicast tree with seven receivers in which no more than three triggers have the same identifier. This hierarchy of triggers can be constructed and maintained either cooperatively by the members of the multicast group, or by a third party provider. In [18], we present an efficient distributed algorithm in which the

(2) Recall that identifiers are m bits long and that k is the exact-matching threshold.
(3) Here we assume that nodes that are geographically close to each other are also close in terms of network distances, which is not always true. One could instead use latency based encoding, much as in [20].
Figure 5: Example of a scalable multicast tree with bounded degree by using chains of triggers.

receivers of the multicast group construct and maintain the hierarchy of triggers.

4. ADDITIONAL DESIGN AND PERFORMANCE ISSUES

In this section we discuss some additional i3 design and performance issues. The i3 design was intended to be (among other properties) robust, self-organizing, efficient, secure, scalable, incrementally deployable, and compatible with legacy applications. In this section we discuss these issues and some details of the design that are relevant to them.

Before addressing these issues, we first review our basic design. i3 is organized as an overlay network in which every node (server) stores a subset of triggers. In the basic design, at any moment of time, a trigger is stored at only one server. Each end-host knows about one or more i3 servers. When a host wants to send a packet (id, data), it forwards the packet to one of the servers it knows. If the contacted server doesn't store the trigger matching (id, data), the packet is forwarded via IP to another server. This process continues until the packet reaches the server that stores the matching trigger. The packet is then sent to the destination via IP.

4.1 Properties of the Overlay

The performance of i3 depends greatly on the nature of the underlying overlay network. In particular, we need an overlay network that exhibits the following desirable properties:

- Robustness: With a high probability, the overlay network remains connected even in the face of massive server and communication failures.

- Scalability: The overlay network can handle the traffic generated by millions of end-hosts and applications.

- Efficiency: Routing a packet to the server that stores the packet's best matching trigger involves a small number of servers.

- Stability: The mapping between triggers and servers is relatively stable over time, that is, it is unlikely to change during
the duration of a flow. This property allows end-hosts to optimize their performance by choosing triggers that are stored on nearby servers.

Figure 6: Routing information (finger tables) maintained by the Chord nodes.

To implement i3 we have chosen the Chord lookup protocol [26], which is known to satisfy the above properties. Chord uses an m-bit circular identifier space where 0 follows 2^m - 1. Each server is assigned a unique identifier in this space. In the original Chord protocol, each identifier id is mapped on the server with the closest identifier that follows id on the identifier circle. This server is called the successor of id and is denoted by successor(id). Figure 6 shows an example in which there are three nodes and m = 3. Server 2 is responsible for identifiers 0, 1, and 2; server 5 is responsible for 3, 4, and 5; and server 7 is responsible for 6 and 7.

To implement the routing operation, each server maintains a routing table of size m. The i-th entry in the routing table of server n contains the first server that follows n + 2^(i-1), i.e., successor(n + 2^(i-1)). This server is called the i-th finger of n. Note that the first finger is the same as the successor server. Upon receiving a packet with identifier id, server n checks whether id lies between itself and its successor. If yes, the server forwards the packet to its successor, which should store the packet's trigger. If not, n sends the packet to the closest server (finger) in its routing table that precedes id. In this way, we are guaranteed that the distance to id in the identifier space is halved at each step. As a result, it takes O(log N) hops to route a packet to the server storing the best matching trigger for the packet, irrespective of the starting point of the packet, where N is the number of i3 servers in the system.
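The m = 3 example above can be checked with a few lines of code. This is a centralized stand-in for the distributed protocol, useful only to verify the successor and finger definitions; it is not how Chord nodes actually compute anything.

```python
# Successor and finger-table computation for the m = 3 Chord example
# in the text, with nodes 2, 5 and 7 on a circle of size 2**3 = 8.
M = 3
NODES = sorted([2, 5, 7])

def successor(i):
    """First node at or after identifier i on the circle
    (wrapping around past 2**M - 1 back to 0)."""
    for n in NODES:
        if n >= i:
            return n
    return NODES[0]          # wrap around the identifier circle

def finger_table(n):
    """The i-th finger of node n is successor(n + 2**(i-1))."""
    return [successor((n + 2**(i - 1)) % 2**M) for i in range(1, M + 1)]

print(successor(4))          # -> 5: server 5 stores identifiers 3, 4, 5
print(finger_table(2))       # -> [5, 5, 7]
```

Running it reproduces the tables of Figure 6: node 2's fingers are (5, 5, 7), node 5's are (7, 7, 2), and node 7's are (2, 2, 5).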
In the current implementation, we assume that all identifiers that share the same ❉-bit prefix are stored on the same server. A simple way to achieve this is to set the last ❇ ✪ ❉ bits of every node identifier to zero. As a result, finding the best matching trigger reduces to performing a longest prefix matching operation locally. While ☎✝✆ is implemented on top of Chord, in principle ☎✝✆ can use any of the recently proposed P2P lookup systemssuch as CAN [22], Pastry [23] and Tapestry [12]. 4.2 Public and Private Triggers Before discussing ☎✝✆’s properties, we introduce an important technique that allows applications to use ☎✝✆ more securely and effi- ciently. With this technique applications make a distinction between two types of triggers: public and private. This distinction is made only at the application level: ☎✝✆ itself doesn’t differentiate between private and public triggers. The main use of a public trigger is to allow an end-host to contact another end-host. The identifier of a public trigger is known by all end-hosts in the system. An example is a web server that maintains a public trigger to allow any client to contact it. A public trigger can be defined as a the hash of the host’s DNS name, of a web address, or of the public key associated with a web server. Public triggers are long lived, typically days or months. In contrast, private triggers are chosen by a small number of end-hosts and they are short lived. Typically, private triggers exist only during the duration of a flow. To illustrate the difference between public and private triggers, consider a client ✏ accessing a web server ✑ that maintains a public trigger ✷ ☎❅✸✣☎✒✔✓ ✹✕✑✻ . First, client ✏ chooses a private trigger identifier ☎❅✸★✷ , inserts trigger ✷ ☎❅✸★✷✘✹✕✏✻ into ☎✝✆, and sends ☎❅✸✩✷ to the web server via the server’s public trigger ✷ ☎❅✸✣☎✒✔✓ ✹✕✑✻ . 
Once contacted, server B selects a private identifier id_b, inserts the associated trigger (id_b, B) into i3, and sends its private trigger identifier id_b to client A via A's private trigger (id_a, A). The client and the server then use both private triggers to communicate. Once the communication terminates, the private triggers are destroyed. Sections 4.5 and 4.10 discuss how private triggers can be used to increase the routing efficiency and the communication security. Next, we discuss i3's properties in more detail.

4.3 Robustness

i3 inherits much of the robustness properties of the overlay itself, in that routing of packets within i3 is fairly robust against i3 node failures. In addition, end-hosts use periodic refreshing to maintain their triggers in i3. This soft-state approach allows for a simple and efficient implementation and frees the i3 infrastructure from having to recover lost state when nodes fail. If a trigger is lost, for example as a result of an i3 server failure, the trigger will be reinserted, possibly at another server, the next time the end-host refreshes it.

One potential problem with this approach is that although the triggers are eventually reinserted, the time during which they are unavailable due to server failures may be too large for some applications. There are at least two solutions to address this problem. The first solution does not require i3-level changes. The idea is to have each receiver R maintain a backup trigger (id_backup, R) in addition to the primary trigger (id, R), and have the sender send packets with the identifier stack (id, id_backup). If the server storing the primary trigger fails, the packet will then be forwarded via the backup trigger to R.
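A minimal sketch of this identifier-stack fallback, assuming a toy in-memory trigger table; the trigger-to-server mapping (`server_of`, `srv1`, `srv2`) is invented for illustration:

```python
# Toy model of the primary/backup trigger fallback from Section 4.3.
# The dict stands in for i3; server names are hypothetical.
triggers = {"id": "R", "id_backup": "R"}     # both point to receiver R
failed_servers = set()

def server_of(ident):
    """Hypothetical mapping of triggers to the i3 servers storing them."""
    return "srv1" if ident == "id" else "srv2"

def deliver(id_stack):
    """Try each identifier on the packet's stack until one resolves."""
    for ident in id_stack:
        if server_of(ident) not in failed_servers and ident in triggers:
            return triggers[ident]
    return None                              # no live matching trigger

assert deliver(["id", "id_backup"]) == "R"   # normal case: primary trigger
failed_servers.add("srv1")                   # server holding (id, R) fails
assert deliver(["id", "id_backup"]) == "R"   # packet falls back to backup
```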
4 Note that to accommodate the case when the packet is required to match every trigger in its identifier stack (see Section 3.2), we use a flag in the packet header, which, if set, causes the packet to be dropped if the identifier at the head of its stack doesn't find a match.

The second solution is to have the overlay network itself replicate the triggers and manage the replicas. In the case of Chord, the natural replication policy is to replicate a trigger on the immediate successor of the server responsible for that trigger [5]. Finally, note that when an end-host fails, its triggers are automatically deleted from i3 after they time out.

4.4 Self-Organizing

i3 is an overlay infrastructure that may grow to large sizes. Thus, it is important that it not require extensive manual configuration or human intervention. The Chord overlay network is self-configuring, in that nodes joining the i3 infrastructure use a simple bootstrapping mechanism (see [26]) to find out about at least one existing i3 node, and then contact that node to join the i3 infrastructure. Similarly, end-hosts wishing to use i3 can locate at least one i3 server using a similar bootstrapping technique; knowledge of a single i3 server is all that's needed to fully utilize the i3 infrastructure.

(Here we implicitly assume that the primary and backup triggers are stored on different servers. The receiver can ensure that this is the case with high probability by choosing id_backup = 2^m - id.)

4.5 Routing Efficiency

As with any network system, efficient routing is important to the overall efficiency of i3. While i3 tries to route each packet efficiently to the server storing the best matching trigger, the routing in an overlay network such as i3 is typically far less efficient than routing the packet directly via IP. To alleviate this problem, the sender caches the i3 server's IP address. In particular, each data and trigger packet carries in its header a refreshing flag f. When a packet reaches an i3 server, the server checks whether it stores the best matching trigger for the packet. If not, it sets the flag f in the packet header before forwarding it. When a packet reaches the server storing the best matching trigger, the server checks flag f in the packet header, and, if f is set, it returns its IP address back to the original sender. In turn, the sender caches this address and uses it to send the subsequent packets with the same identifier. The sender can periodically set the refreshing flag f as a keep-alive message with the cached server responsible for this trigger.

Note that the optimization of caching the server S which stores the receiver's trigger does not undermine the system robustness. If the trigger moves to another server S' (e.g., as the result of a new server joining the system), i3 will simply route the subsequent packets from S to S'. When the first packet reaches S', the sender will replace S with S' in its cache. If the cached server fails, the client simply uses another known i3 server to communicate. This is the same fall-back mechanism as in the unoptimized case when the client uses only one i3 server to communicate with all the other clients.
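The refreshing-flag optimization can be sketched roughly as follows; the packet dict, route lists, and server names are illustrative stand-ins for the real protocol:

```python
# Rough sketch of the refreshing flag f: the first packet travels the
# overlay, the best-match server returns its address, and the sender
# caches it for subsequent packets. Server names are illustrative.
def forward(packet, route):
    """route lists the i3 servers visited; only the last stores the
    best matching trigger."""
    if len(route) > 1:
        packet["f"] = True                    # set by a non-best-match server
    last = route[-1]
    return last if packet.get("f") else None  # best-match server replies

class Sender:
    def __init__(self):
        self.cache = {}                       # identifier -> cached server

    def send(self, ident, overlay_route):
        if ident in self.cache:
            return [self.cache[ident]]        # one direct IP hop
        reply = forward({"id": ident, "f": False}, overlay_route)
        if reply is not None:
            self.cache[ident] = reply
        return overlay_route                  # first packet: overlay path

s = Sender()
assert s.send("id_r", ["srv3", "srv7", "srv12"]) == ["srv3", "srv7", "srv12"]
assert s.send("id_r", ["srv3", "srv7", "srv12"]) == ["srv12"]
```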
Actually, the fact that the client caches the i3 server storing the receiver's trigger can help reduce the recovery time. When the sender notices that the server has failed, it can inform the receiver to reinsert the trigger immediately. Note that this solution assumes that the sender and receiver can communicate via alternate triggers that are not stored at the same i3 server.

While caching the server storing the receiver's trigger reduces the number of i3 hops, we still need to deal with the triangle routing problem. That is, if the sender and the receiver are close by, but the server storing the trigger is far away, the routing can be inefficient. For example, if the sender and the receiver are both in Berkeley and the server storing the receiver's trigger is in London, each packet will be forwarded to London before being delivered back to Berkeley!

One solution to this problem is to have the receivers choose their private triggers such that they are located on nearby servers. This would ensure that packets won't take a long detour before reaching their destination. If an end-host knows the identifiers of the nearby i3 servers, then it can easily choose triggers with identifiers that map on these servers. In general, each end-host can sample the identifier space to find ranges of identifiers that are stored at nearby servers. To find these ranges, a node A can insert random triggers (id, A) into i3, and then estimate the RTT to the server that stores the trigger by simply sending packets, (id, dummy), to itself. Note that since we assume that the mapping of triggers onto servers is relatively stable over time, this operation can be done off-line. We evaluate this approach by simulation in Section 5.1.

4.6 Avoiding Hot-Spots

Consider the problem of a large number of clients that try to contact a popular trigger, such as CNN's trigger. This may cause the server storing this trigger to overload.
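The sampling heuristic can be sketched as below; `rtt_to_server` is a hypothetical callback standing in for inserting a trigger and timing the (id, dummy) probe packets:

```python
# Sketch of the off-line sampling heuristic from Section 4.5: probe k
# random identifiers and keep the one whose server answers fastest.
import random

def pick_nearby_identifier(k, rtt_to_server, m=256):
    """rtt_to_server(id) stands in for inserting (id, A) and timing
    (id, dummy) packets sent to oneself."""
    samples = [random.getrandbits(m) for _ in range(k)]
    return min(samples, key=rtt_to_server)

# Example with a made-up RTT function: a lower value means a closer server.
random.seed(1)
best = pick_nearby_identifier(16, lambda ident: ident % 1000, m=16)
```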
The classical solution to this problem is to use caching. When the rate of the packets matching a trigger t exceeds a certain threshold, the server S storing the trigger pushes a copy of t to another server. This process can continue recursively until the load is spread out. The decision of where to push the trigger is subject to two constraints. First, S should push the trigger to the server most likely to route the packets matching that trigger. Second, S should try to minimize the state it needs to maintain; S at least needs to know the servers to which it has already pushed triggers in order to forward refresh messages for these triggers (otherwise the triggers will expire). With Chord, one simple way to address these problems is to always push the triggers to the predecessor server.

If there are more triggers that share the same k-bit prefix with a popular trigger t, all these triggers need to be cached together with t. Otherwise, if the identifier of a packet matches the identifier of a cached trigger t, we cannot be sure that t is indeed the best matching trigger for the packet.

Figure 7: Heterogeneous multicast application. Refer to Figure 4(b) for data forwarding in i3.

4.7 Scalability

Since typically each flow is required to maintain two triggers (one for each end-point), the number of triggers stored in i3 is of the order of the number of flows plus the number of end-hosts. At first sight, this would be equivalent to a network in which each router maintains per-flow state. Fortunately, this is not the case. While the state of a flow is maintained by each router along its path, a trigger is stored at only one node at a time. Thus, if there are n triggers and N servers, each server will store n/N triggers on the average.
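The trigger-pushing scheme of Section 4.6 can be sketched as follows; the threshold, the 32-bit identifiers, and the `Server` class are invented for illustration:

```python
# Sketch of hot-spot relief (Section 4.6): once a trigger's packet count
# crosses a threshold, push it, together with every stored trigger
# sharing its k-bit prefix, to the Chord predecessor. All constants
# here are illustrative.
THRESHOLD = 1000                              # packets, hypothetical

def shares_prefix(a, b, k, m=32):
    """True if identifiers a and b agree on their first k of m bits."""
    return (a >> (m - k)) == (b >> (m - k))

class Server:
    def __init__(self, triggers, predecessor=None):
        self.triggers = dict(triggers)        # identifier -> destination
        self.count = dict.fromkeys(self.triggers, 0)
        self.predecessor = predecessor
        self.pushed = set()                   # replicas needing refreshes

    def on_packet(self, ident, k=8):
        self.count[ident] += 1
        if self.count[ident] > THRESHOLD and ident not in self.pushed:
            group = {t: d for t, d in self.triggers.items()
                     if shares_prefix(t, ident, k)}
            self.predecessor.triggers.update(group)   # push hot prefix group
            self.pushed.update(group)

pred = Server({})
srv = Server({0xAB0000FF: "cnn", 0xAB0000EE: "x", 0x110000FF: "y"}, pred)
for _ in range(THRESHOLD + 1):
    srv.on_packet(0xAB0000FF)
```

After the loop, the popular trigger and the other trigger sharing its 8-bit prefix have been copied to the predecessor, while the unrelated trigger stays put.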
This also suggests that i3 can be easily upgraded by simply adding more servers to the network. One interesting point to note is that these nodes do not need to be placed at specific locations in the network.

4.8 Incremental Deployment

Since i3 is designed as an overlay network, i3 is incrementally deployable. At the limit, i3 may consist of only one node that stores all triggers. Adding more servers to the system does not require any system configuration. A new server simply joins the i3 system using the Chord protocol, and becomes automatically responsible for an interval in the identifier space. When triggers with identifiers in that interval are refreshed/inserted they will be stored at the new server. In this way, the addition of a new server is also transparent to the end-hosts.

4.9 Legacy Applications

The packet delivery service implemented by i3 is best-effort, which allows existing UDP-based applications to work over i3 easily. The end-host runs an i3 proxy that translates between the applications' UDP packets and i3 packets, and inserts/refreshes triggers
on behalf of the applications. The applications do not need to be modified, and are unaware of the i3 proxy. Packets are intercepted and translated by the i3 proxy transparently. As a proof of concept, we have implemented the heterogeneous multicast application presented in Section 3.2 over i3. The sender sends an MPEG stream, and one receiver plays it back with an MPEG player (mpeg_play) and the other with an H.263 player (tmndec), as shown in Figure 7. In [38], we present a solution using i3 to provide mobility support for TCP-based legacy applications.

4.10 Security

Unlike IP, where an end-host can only send and receive packets, in i3 end-hosts are also responsible for maintaining the routing information through triggers. While this allows flexibility for applications, it also (and unfortunately) creates new opportunities for malicious users. We now discuss several security issues and how i3 addresses them.

We emphasize that our main goal here is not to design a bulletproof system. Instead, our goal is to design simple and efficient solutions that make i3 not worse, and in many cases better, than today's Internet. The solutions outlined in this section should be viewed as a starting point towards more sophisticated and better security solutions that we will develop in the future.

4.10.1 Eavesdropping

Recall that the key to enabling multicast functionality is to allow multiple triggers with the same identifier. Unfortunately, a malicious user that knows a host's trigger can use this flexibility to eavesdrop the traffic towards that host by simply inserting a trigger with the same identifier and its own address. In addressing this problem, we consider two cases: (a) private and (b) public triggers (see Section 4.2).

Private triggers are secretly chosen by the application end-points and are not supposed to be revealed to the outside world. The length of the trigger's identifier makes it difficult for a third party to use a brute force attack.
While other application constraints, such as storing a trigger at a nearby server, can limit the identifier choice, the identifier is long enough (i.e., 256 bits) that the application can always reserve a reasonably large number of bits that are randomly chosen. Assuming that an application chooses 128 random bits in the trigger's identifier, it will take an attacker 2^127 probes on the average to guess the identifier. Even in the face of a distributed attack by, say, one million hosts, it will take about 2^127 / 2^20 = 2^107 probes per host to guess a private trigger. We note that the technique of using random identifiers as probabilistic secure capabilities was previously used in [28, 37].

Furthermore, end-points can periodically change the private triggers associated with a flow. Another alternative would be for the receiver to associate multiple private triggers to the same flow, and the sender to send packets randomly to one of these private triggers. The alternative left to a malicious user is to intercept all private triggers. However, this is equivalent to eavesdropping at the IP level or taking control of the i3 server storing the trigger, which makes i3 no worse than IP.

With i3, a public trigger is known by all users in the system, and thus anyone can eavesdrop the traffic to such a trigger. To alleviate this problem, end-hosts can use the public triggers to choose a pair of private triggers, and then use these private triggers to exchange the actual data. To keep the private triggers secret, one can use public key cryptography to exchange them. To initiate a connection, a host A encrypts its private trigger id_a under the public key of a receiver B, and then sends it to B via B's public trigger. B decrypts A's private trigger id_a, then chooses its own private trigger id_b, and sends this trigger back to A over A's private trigger id_a. Since the sender's trigger is encrypted, a malicious user cannot impersonate B.
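As a sanity check of the brute-force estimate above (128 random bits, with one million hosts treated as roughly 2^20):

```python
# Sanity check of the brute-force estimate: guessing 128 random bits
# takes 2**127 probes on average; ~2**20 (about a million) attacking
# hosts still need roughly 2**107 probes each.
expected_probes = 2 ** 128 // 2      # average over the 2**128-id space
hosts = 2 ** 20                      # ~1.05 million attacking hosts
assert expected_probes == 2 ** 127
assert expected_probes // hosts == 2 ** 107
```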
4.10.2 Trigger Hijacking

A malicious user can isolate a host by removing its public trigger. Similarly, a malicious user in a multicast group can remove other members from the group by deleting their triggers. While removing a trigger also requires specifying the IP address of the trigger, this address is, in general, not hard to obtain.

One possibility to guard against this attack is to add another level of indirection. Consider a server S that wants to advertise a public trigger with identifier id_p. Instead of inserting the trigger (id_p, S), the server can insert two triggers, (id_p, x) and (x, S), where x is an identifier known only by S. Since a malicious user has to know x in order to remove either of the two triggers, this simple technique provides effective protection against this type of attack. To avoid performance penalties, the receiver can choose x such that both (id_p, x) and (x, S) are stored at the same server. With the current implementation this can be easily achieved by having id_p and x share the same k-bit prefix.

4.10.3 DoS Attacks

The fact that i3 gives end-hosts control over routing opens new possibilities for DoS attacks. We consider two types of attacks: (a) attacks on end-hosts, and (b) attacks on the infrastructure. In the former case, a malicious user can insert a hierarchy of triggers (see Figure 5) in which all triggers on the last level point to the victim. Sending a single packet to the trigger at the root of the hierarchy will cause the packet to be replicated, with all replicas sent to the victim. This way an attacker can mount a large scale DoS attack by simply leveraging the i3 infrastructure. In the latter case, a malicious user can create trigger loops, for instance by connecting the leaves of a trigger hierarchy to its root. In this case, each packet sent to the root will be exponentially replicated! To alleviate these attacks, i3 uses three techniques: 1.
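A toy sketch of the two-trigger defense; the dict-based trigger store and the removal rule are illustrative, and `secrets.token_hex` simply stands in for choosing an identifier known only to S:

```python
# Toy model of Section 4.10.2: the public identifier id_p points to a
# secret identifier x, and only (x, S) names the server. Removing a
# trigger requires naming both of its fields.
import secrets

triggers = {}

def insert(ident, target):
    triggers[ident] = target

def remove(ident, target):
    """i3-style removal: succeeds only if both fields match."""
    if triggers.get(ident) == target:
        del triggers[ident]
        return True
    return False

x = secrets.token_hex(16)       # known only to the server S
insert("id_p", x)               # public trigger -> secret identifier
insert(x, "S")                  # secret identifier -> server address

# An attacker knows id_p and S's address, but the actual trigger is
# (id_p, x), so the removal attempt fails and the host stays reachable.
assert remove("id_p", "S") is False
assert triggers["id_p"] == x and triggers[x] == "S"
```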
Challenges: i3 implicitly assumes that a trigger that points to an end-host R is inserted by the end-host itself. An i3 server can easily verify this assumption by sending a challenge to R the first time the trigger is inserted. The challenge consists of a random nonce that is expected to be returned by the receiver. If the receiver fails to answer the challenge, the trigger is removed. As a result, an attacker cannot use a hierarchy of triggers to mount a DoS attack (as described above), since the leaf triggers will be removed as soon as the server detects that the victim hasn't inserted them.

2. Resource allocation: Each server uses Fair Queueing [7] to allocate resources amongst the triggers it stores. This way, the damage inflicted by an attacker is only proportional to the number of triggers it maintains. An attacker cannot simply use a hierarchy of triggers with loops to exponentially increase its traffic. As soon as each trigger reaches its fair share, the excess packets will be dropped. While this technique doesn't solve the problem, it gives i3 time to detect and to eventually break the cycles. To increase protection, each server can also put a bound on the number of triggers that can be inserted by a particular end-host. This will preclude a malicious end-host from monopolizing a server.

3. Loop detection: When a trigger that doesn't point to an IP address is inserted, the server checks whether the new trigger creates a loop. A simple procedure is to send a special packet with a random nonce. If the packet returns back to the server, the trigger is simply removed. To increase the robustness, the server can invoke this procedure periodically after such a trigger is inserted. Another possibility to detect loops more efficiently would be to use a Bloom filter to encode the set of i3 servers along the packet's path, as proposed in the Icarus framework [34].

5 Note that an attacker can still count the number of connection requests to B. However, this information is of very limited use, if any, to the attacker. If, in the future, it turns out that this is unacceptable for some applications, then other security mechanisms such as public trigger authentication will need to be used.

4.11 Anonymity

Point-to-point communication networks such as the Internet provide limited support for anonymity. Packets usually carry the destination and the source addresses, which makes it relatively easy for an eavesdropper to learn the sender and the receiver identities. In contrast, with i3, eavesdropping the traffic of a sender will not reveal the identity of the receiver, and eavesdropping the traffic of a receiver will not reveal the sender's identity. The level of anonymity can be further enhanced by using chains of triggers or stacks of identifiers to route packets.

5. SIMULATION RESULTS

In this section, we evaluate the routing efficiency of i3 by simulation. One of the main challenges in providing efficient routing is that triggers may be stored on servers far away from the end-hosts that use them. However, we show that simple heuristics can significantly enhance i3's performance. The metric we use to evaluate these heuristics is the ratio of the inter-node latency on the i3 network to the inter-node latency on the underlying IP network. This is called the latency stretch.

The simulator is based on the Chord protocol and uses iterative style routing [26]. We assume that node identifiers are randomly distributed. This assumption is consistent with the way the identifiers are chosen in other lookup systems such as CAN [22] and Pastry [23]. As an alternative to the random node identifiers discussed in [26], we have also used space filling curves to map a d-dimensional geometric space, which was shown to approximate the Internet latency well [20], onto the one-dimensional Chord identifier space; these results do not show significant gains compared to the heuristics presented in this section, so we omit them.

We use two network topologies: a power-law random graph topology generated with the INET topology generator [16] with 5000 nodes, where the delay of each link is uniformly distributed in the interval [5, 100) ms, and a transit-stub topology with 5000 nodes, where link delays are 100 ms for intra-transit domain links, 10 ms for transit-stub links, and 1 ms for intra-stub domain links. The i3 servers are randomly assigned to the network nodes.

5.1 End-to-End Latency

Consider a source A that sends packets to a receiver R via trigger (id, R). As discussed in Section 4.5, once the first packet reaches the server S storing the trigger (id, R), A caches S and sends all subsequent packets directly to S. As a result, the packets will be routed via IP from A to S and then from S to R. The obvious question is how efficient routing through S is, as compared to routing directly from A to R. Section 4.5 presents a simple heuristic in which a receiver R samples the identifier space to find an identifier id_c that is stored at a nearby server. Then R inserts trigger (id_c, R).

Figure 8: The 90th percentile latency stretch vs. number of samples for PLRG and transit-stub with 5000 nodes.

Figure 8 plots the 90th percentile latency stretch versus the number of samples k in a system with 16,384 i3 servers. Each point represents the 90th percentile over 1000 measurements. For each measurement, we randomly choose a sender and a receiver. In each case, the receiver generates k triggers with random identifiers. Among these triggers, the receiver retains the trigger that is stored at the closest server. Then we sum the shortest path latency from the sender to S and from S to the receiver, and divide it by the shortest path latency from the sender to the receiver, to obtain the latency stretch. Sampling the space of identifiers greatly lowers the stretch. While increasing the number of samples decreases the stretch further, the improvement appears to saturate rapidly, indicating that in practice just 16-32 samples should suffice. The receiver does not need to search for a close identifier every time a connection is opened; in practice, an end-host can sample the space periodically and maintain a pool of identifiers which it can reuse.

5.2 Proximity Routing in i3

While Section 5.1 evaluates the end-to-end latency experienced by data packets after the sender caches the server storing the receiver's trigger t, in this section we evaluate the latency incurred by the sender's first packet that matches trigger t. This packet is routed through the overlay network until it reaches the server storing t. While Chord ensures that the overlay route length is only O(log N), where N is the number of i3 servers, the routing latency can be quite large. This is because server identifiers are randomly chosen, and therefore servers close in the identifier space can be very far away in the underlying network. To alleviate this problem, we consider the following heuristics:

Closest finger replica: In addition to each finger, a server maintains the r - 1 immediate successors of that finger. Thus, each node maintains references to about r log_2 N other nodes for routing purposes. To route a packet, a server selects the closest node in terms of network distance amongst (a) the finger with the largest identifier preceding the packet's
0 5 10 15 20 25 30 35 1 1.5 2 2.5 3 3.5 4 4.5 Number of Samples 90th Percentile Latency Stretch power law random graph transit−stub Figure 8: The 90th percentile latency stretch vs. number of samples for PLRG and transit-stub with 5000 nodes. nopolizing a server’s resources. 3. Loop detection: When a trigger that doesn’t point to an IP address is inserted, the server checks whether the new trigger doesn’t create a loop. A simple procedure is to send a special packet with a random nonce. If the packet returns back to the server, the trigger is simply removed. To increase the robustness, the server can invoke this procedure periodically after such a trigger is inserted. Another possibility to detect loops more efficiently would be to use a Bloom filter to encode the set of ☎✝✆ servers along the packet’s path, as proposed in the Icarus framework [34]. 4.11 Anonymity Point-to-point communication networks such as the Internet provide limited support for anonymity. Packets usually carry the destination and the source addresses, which makes it relatively easy for an eavesdropper to learn the sender and the receiver identities. In contrast, with ☎✝✆, eavesdropping the traffic of a sender will not reveal the identity of the receiver, and eavesdropping the traf- fic of a receiver will not reveal the sender’s identity. The level of anonymity can be further enhanced by using chain of triggers or stack of identifiers to route packets. 5. SIMULATION RESULTS In this section, we evaluate the routing efficiency of ☎✝✆ by simulation. One of the main challenges in providing efficient routing is that end-hosts have little control over the location of their triggers. However, we show that simple heuristics can significantly enhance ☎ ✆’s performance. The metric we use to evaluate these heuristics is the ratio of the inter-node latency on the ☎ ✆ network to the inter-node latency on the underlying IP network. This is called the latency stretch. 
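The latency-stretch metric and the receiver's identifier-sampling heuristic from Section 4.5 can be sketched as follows. This is a hypothetical illustration, not the authors' simulator: the latency table, `lookup` function, and names are stand-ins.

```python
import random

def latency_stretch(lat, sender, receiver, server):
    # stretch = (sender->server + server->receiver) / (sender->receiver),
    # where lat[x][y] is the shortest-path IP latency between x and y.
    return (lat[sender][server] + lat[server][receiver]) / lat[sender][receiver]

def sample_trigger_server(lat, receiver, lookup, id_space, k):
    # The receiver generates k random identifiers and keeps the one whose
    # responsible server (found via a Chord-style lookup) is closest to it
    # in network latency.
    best = None
    for _ in range(k):
        server = lookup(random.randrange(id_space))
        if best is None or lat[receiver][server] < lat[receiver][best]:
            best = server
    return best
```

With more samples `k`, the chosen server tends to be closer to the receiver, which is why the stretch in Figure 8 drops as `k` grows.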
The simulator is based on the Chord protocol and uses iterative-style routing [26]. We assume that node identifiers are randomly distributed. This assumption is consistent with the way identifiers are chosen in other lookup systems such as CAN [22] and Pastry [23]. As discussed in [26], using random node identifiers increases system robustness and load-balancing. (Footnote 6: We have also experimented with identifiers that have location semantics. In particular, we have used space-filling curves, such as the Hilbert curve, to map a d-dimensional geometric space, which was shown to approximate the Internet latency well [20], onto the one-dimensional Chord identifier space. However, the preliminary results do not show significant gains as compared to the heuristics presented in this section, so we omit their presentation here.) We consider the following network topologies in our simulations:

- A power-law random graph topology generated with the INET topology generator [16] with 5000 nodes, where the delay of each link is uniformly distributed in the interval [5, 100) ms. The i3 servers are randomly assigned to the network nodes.

- A transit-stub topology generated with the GT-ITM topology generator [11] with 5000 nodes, where link latencies are 100 ms for intra-transit domain links, 10 ms for transit-stub links, and 1 ms for intra-stub domain links. i3 servers are randomly assigned to stub nodes.

5.1 End-to-End Latency

Consider a source A that sends packets to a receiver R via trigger (id, R). As discussed in Section 4.5, once the first packet reaches the server S storing the trigger (id, R), A caches S and sends all subsequent packets directly to S. As a result, the packets are routed via IP from A to S and then from S to R. The obvious question is how efficient routing through S is compared to routing directly from A to R. Section 4.5 presents a simple heuristic in which a receiver R samples the identifier space to find an identifier id_e that is stored at a nearby server. Then R inserts trigger (id_e, R).

Figure 8 plots the 90th percentile latency stretch versus the number of samples k in a system with 16,384 i3 servers. Each point represents the 90th percentile over 1000 measurements. For each measurement, we randomly choose a sender and a receiver. In each case, the receiver generates k triggers with random identifiers and retains the trigger that is stored at the closest server. We then sum the shortest-path latency from the sender to S and from S to the receiver, and divide it by the shortest-path latency from the sender to the receiver to obtain the latency stretch. Sampling the space of identifiers greatly lowers the stretch. While increasing the number of samples decreases the stretch further, the improvement appears to saturate rapidly, indicating that in practice just 16-32 samples should suffice. The receiver does not need to search for a close identifier every time a connection is opened; in practice, an end-host can sample the space periodically and maintain a pool of identifiers which it can reuse.

5.2 Proximity Routing in i3

While Section 5.1 evaluates the end-to-end latency experienced by data packets after the sender caches the server storing the receiver's trigger t, in this section we evaluate the latency incurred by the sender's first packet that matches trigger t. This packet is routed through the overlay network until it reaches the server storing t. While Chord ensures that the overlay route length is only O(log N), where N is the number of i3 servers, the routing latency can be quite large. This is because server identifiers are randomly chosen, and therefore servers close in the identifier space can be very far away in the underlying network. To alleviate this problem, we consider two simple heuristics:

- Closest finger replica: In addition to each finger, a server maintains the r-1 immediate successors of that finger. Thus, each node maintains references to about r log_2 N other nodes for routing purposes. To route a packet, a server selects the closest node in terms of network distance amongst (1) the finger with the largest identifier preceding the packet's identifier and (2) the r-1 immediate successors of that finger. This heuristic was originally proposed in [5].

- Closest finger set: Each server s chooses log_b N fingers as successor(s + b^i), where i < log_b N and b < 2. To route a packet, server s considers only the closest log_2 N fingers in terms of network distance among all its log_b N fingers.

[Figure 9: The 90th percentile latency stretch in the case of (a) a power-law random network topology with 5000 nodes, and (b) a transit-stub topology with 5000 nodes. The i3 servers are randomly assigned to all nodes in case (a), and only to the stub nodes in case (b).]

Figure 9 plots the 90th percentile latency stretch as a function of i3's size for the baseline Chord protocol and the two heuristics. The number of replicas r is 10, and b is chosen such that log_b N = r log_2 N. Thus, with both heuristics, a server considers roughly the same number of routing entries. We vary the number of i3 servers from 2^10 to 2^17, and in each case we average routing latencies over 1000 routing queries. In all cases the i3 server identifiers are randomly generated.

As shown in Figure 9, both heuristics can reduce the 90th percentile latency stretch by up to 2-3 times as compared to the default Chord protocol. In practice, we choose the "closest finger set" heuristic. While this heuristic achieves latency stretch comparable to "closest finger replica", it is easier to implement and does not require increasing the routing table size.
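The "closest finger set" selection can be sketched as below, assuming per-finger RTT measurements are available. The function names and data layout are hypothetical, and b = 2^(1/r) is one concrete choice satisfying log_b N = r log_2 N (since log_b N = log_2 N / log_2 b = r log_2 N); the paper does not prescribe this exact code.

```python
import math

def num_fingers(n_servers, r):
    # With base b = 2**(1/r), log_b N = r * log2(N), so the finger-set
    # heuristic samples roughly as many candidates as the replica
    # heuristic keeps routing entries.
    b = 2 ** (1.0 / r)
    return round(math.log(n_servers, b))

def closest_finger_set(finger_rtts, n_servers):
    # finger_rtts: {finger_node: measured RTT}, sampled at base b < 2.
    # Keep only the log2(N) fingers closest in network distance.
    keep = max(1, int(math.log2(n_servers)))
    return sorted(finger_rtts, key=finger_rtts.get)[:keep]
```

For example, with N = 2^16 servers and r = 10 replicas, a server samples about 160 fingers but routes using only the 16 nearest ones.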
The only change in the Chord protocol is to sample the identifier space using base b instead of 2, and to store only the closest log_2 N fingers among the nodes sampled so far.

5.3 Implementation and Experiments

We have implemented a bare-bones version of i3 using the Chord protocol. The control protocol used to maintain the overlay network is fully asynchronous and is implemented on top of UDP. The implementation uses 256-bit (m = 256) identifiers and assumes that the matching procedure requires exact matching on the 128 most significant bits (k = 128). This choice makes it very unlikely that a packet will erroneously match a trigger, and at the same time gives applications up to 128 bits to encode application-specific information such as the host location (see Section 2.4.3). For simplicity, in the current implementation we assume that all triggers that share the first 128 bits are stored on the same server. In theory, this allows us to use any of the proposed lookup algorithms that perform exact matching.

Both insert-trigger requests and data packets share a common header of 48 bytes. In addition, data packets can carry a stack of up to four triggers (this feature isn't used in the experiments). Triggers need to be updated every 30 seconds or they will expire. The control protocol to maintain the overlay network is minimal. Each server performs stabilization every 30 seconds (see [26]). During every stabilization period, all servers generate approximately N log N control messages. Since in our experiments the number of servers N is on the order of tens, we neglect the overhead due to the control protocol.

The testbed used for all of our experiments was a cluster of Pentium III 700 MHz machines running Linux. We ran tests on systems of up to 32 nodes, with each node running on its own processor. The nodes communicated over a shared 1 Gbps Ethernet. For time measurements, we use the Pentium timestamp counter (TSC).
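The exact-match rule (match on the k = 128 most significant bits of an m = 256-bit identifier) together with hash-table storage and 30-second expiry can be illustrated as follows. This is a schematic sketch, not the actual i3 implementation; all names are invented for illustration.

```python
M, K = 256, 128          # m = 256-bit identifiers, exact match on top k = 128 bits
TRIGGER_LIFETIME = 30.0  # seconds; triggers expire unless refreshed

def trigger_key(ident):
    # Matching uses only the k most significant bits; the remaining
    # m - k bits are free for application-specific data such as location.
    return ident >> (M - K)

class TriggerStore:
    # Hash-table storage: inserting a trigger is one table lookup plus one
    # allocation, so the cost is independent of how many triggers are stored.
    def __init__(self):
        self.table = {}

    def insert(self, ident, addr, now):
        self.table[trigger_key(ident)] = (addr, now + TRIGGER_LIFETIME)

    def match(self, ident, now):
        entry = self.table.get(trigger_key(ident))
        if entry is None or entry[1] < now:
            return None  # no such trigger, or it expired without a refresh
        return entry[0]
```

Two identifiers that agree on their top 128 bits match the same trigger, while the low 128 bits are ignored by the lookup, mirroring the k = 128 choice described above.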
This method gives very accurate wall-clock times, but it sometimes includes interrupts and context switches as well. For this reason, the high extremes in the data are unreliable.

5.4 Performance

In this section, we present the overhead of the main operations performed by i3. Since these results are based on a very preliminary implementation, they should be seen as a proof of feasibility and not as a proof of efficiency. Other Chord-related performance metrics, such as the route length and system robustness, are presented in [5].

Trigger insertion: We consider the overhead of handling an insert-trigger request locally, as opposed to forwarding a request to another server. Triggers are maintained in a hash table, so the time is practically independent of the number of triggers. Inserting a trigger involves just a hash table lookup and a memory allocation. The average and the standard deviation of the trigger insertion operation over 10,000 insertions are 12.5 µsec and 7.12 µsec, respectively. This is mostly the time it takes the operating system to process the packet and to hand it to the application. By comparison, memory allocation time is just 0.25 µsec on the test machine. Note that since each trigger is updated every 30 sec, a server would