3.2 Heterogeneous Multicast

Figure 4(b) shows a more complex scenario in which an MPEG video stream is played back by one H.263 receiver and one MPEG receiver. To provide this functionality, we use the ability of the receiver, instead of the sender (see Section 2.5), to control the transformations performed on data packets. In particular, the H.263 receiver R1 inserts the trigger (id, (id_MPEG-H.263, R1)), and the sender sends packets (id, data). Each packet matches R1's trigger, and as a result the packet's identifier id is replaced by the trigger's stack (id_MPEG-H.263, R1). Next, the packet is forwarded to the MPEG-H.263 transcoder, and then directly to receiver R1. In contrast, an MPEG receiver R2 only needs to maintain a trigger (id, R2) in i3. This way, receivers with different display capabilities can subscribe to the same multicast group. Another useful application is to have the receiver insist that all data go through a firewall first before reaching it.
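As a rough illustration of this receiver-driven transformation, the following Python sketch models triggers whose target can be either a plain address or a stack of identifiers; the Trigger class, the deliveries helper, and the string identifiers are invented for readability and are not part of i3's actual interface.

# Illustrative sketch, not i3's real API: a trigger maps an identifier either
# to a receiver address or to a stack of identifiers. When a packet's
# identifier matches a trigger, the identifier is replaced by the trigger's
# target, so the receiver (not the sender) decides whether the packet passes
# through a transcoder first.

from dataclasses import dataclass
from typing import List, Union

Identifier = str   # human-readable stand-ins for i3's m-bit identifiers
Address = str      # e.g. "R1", "R2", or "T" (the MPEG-H.263 transcoder)

@dataclass
class Trigger:
    id: Identifier
    target: Union[Address, List[Identifier]]

def deliveries(packet_id: Identifier, triggers: List[Trigger]) -> List[List[str]]:
    """For each trigger matching packet_id, return the stack that replaces it."""
    matched = []
    for t in triggers:
        if t.id == packet_id:
            stack = t.target if isinstance(t.target, list) else [t.target]
            matched.append(list(stack))
    return matched

triggers = [
    Trigger("id", ["id_MPEG-H.263", "R1"]),  # H.263 receiver R1: transcode first
    Trigger("id", "R2"),                     # MPEG receiver R2: deliver directly
    Trigger("id_MPEG-H.263", "T"),           # the transcoder T's own trigger
]

for stack in deliveries("id", triggers):
    print(stack)
# ['id_MPEG-H.263', 'R1']  -> matched next by T's trigger, transcoded, sent to R1
# ['R2']                   -> the MPEG stream is delivered unmodified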
3.3 Server Selection

i3 provides good support for basic server selection through the use of the last m - k bits of the identifiers to encode application preferences.² To illustrate this point, consider two examples.

In the first example, assume that there are several web servers and the goal is to balance the client requests among these servers. This goal can be achieved by setting the m - k least significant bits of both trigger and packet identifiers to random values. If servers have different capacities, then each server can insert a number of triggers proportional to its capacity. Finally, one can devise an adaptive algorithm in which each server varies the number of triggers as a function of its current load.

In the second example, consider the goal of selecting a server that is close to the client in terms of latency. To achieve this goal, each server can use the last m - k bits of its trigger identifiers to encode its location, and the client can use the last m - k bits of the packets' identifier to encode its own location. In the simplest case, the location of an end-host (i.e., server or client) can be the zip code of the place where the end-host is located; the longest prefix matching procedure used by i3 would then result in the packet being forwarded to a server that is relatively close to the client.³

² Recall that identifiers are m bits long and that k is the exact-matching threshold.

³ Here we assume that nodes that are geographically close to each other are also close in terms of network distances, which is not always true. One could instead use latency-based encoding, much as in [20].
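The following Python sketch shows how the last m - k bits might drive both of these selection policies. The matching rule coded here (exact match on the first k bits, then longest common prefix) follows the description above, but the concrete parameters (m = 32, k = 16), the location codes, and the server names are placeholders chosen only to keep the example small.

import random

M, K = 32, 16   # assumed identifier length and exact-match threshold

def common_prefix_len(a: int, b: int, bits: int = M) -> int:
    """Number of leading bits on which two identifiers agree."""
    diff = a ^ b
    return bits if diff == 0 else bits - diff.bit_length()

def select(packet_id: int, trigger_ids: list) -> int:
    """Exact match on the first K bits, then longest-prefix match on the rest."""
    candidates = [t for t in trigger_ids if common_prefix_len(t, packet_id) >= K]
    return max(candidates, key=lambda t: common_prefix_len(t, packet_id))

service = 0xABCD << (M - K)   # the k-bit prefix shared by all servers and clients

# Load balancing: servers insert triggers with random low-order bits, and the
# client also randomizes the low-order bits of its packet identifiers.
servers = {service | random.getrandbits(M - K): f"server{i}" for i in range(4)}
packet_id = service | random.getrandbits(M - K)
print(servers[select(packet_id, list(servers))])      # a quasi-random server

# Proximity: encode a location code (e.g. derived from a zip code) instead.
located = {service | 0x9411: "server-SF", service | 0x1000: "server-NY"}
client_packet = service | 0x9410                      # a client near the SF server
print(located[select(client_packet, list(located))])  # -> "server-SF"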
3.4 Large Scale Multicast

The multicast abstraction presented in Section 2.4.2 assumes that all members of a multicast group insert triggers with identical identifiers. Since triggers with identical identifiers are stored at the same i3 server, that server is responsible for forwarding each multicast packet to every member of the multicast group. This solution obviously does not scale to large multicast groups.

One approach to address this problem is to build a hierarchy of triggers, where each member Ri of a multicast group idg replaces its trigger (idg, Ri) by a chain of triggers (idg, x1), (x1, x2), ..., (xi, Ri). This substitution is transparent to the sender: a packet (idg, data) will still reach Ri via the chain of triggers. Figure 5 shows an example of a multicast tree with seven receivers in which no more than three triggers have the same identifier. This hierarchy of triggers can be constructed and maintained either cooperatively by the members of the multicast group, or by a third party provider. In [18], we present an efficient distributed algorithm in which the receivers of the multicast group construct and maintain the hierarchy of triggers.

Figure 5: Example of a scalable multicast tree with bounded degree by using chains of triggers.
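The Python sketch below gives one concrete, simplified way to derive such a hierarchy: it rewrites a flat member list into a tree of triggers in which no identifier heads more than a fixed number of triggers. It is a centralized construction written for clarity, not the distributed, receiver-maintained algorithm of [18], and the identifier names (idg, x1, x2, ...) are made up to mirror Figure 5.

from itertools import count

def build_tree(head, receivers, degree=3, fresh=None):
    """Return (id, target) triggers forming a tree with fan-out at most degree."""
    if fresh is None:
        fresh = (f"x{i}" for i in count(1))    # supply of new intermediate identifiers
    if len(receivers) <= degree:
        return [(head, r) for r in receivers]  # few enough receivers: attach directly
    triggers = []
    step = -(-len(receivers) // degree)        # ceiling division: receivers per subtree
    for i in range(0, len(receivers), step):
        part = receivers[i:i + step]
        if len(part) == 1:
            triggers.append((head, part[0]))   # a lone receiver attaches directly
        else:
            child = next(fresh)                # intermediate identifier for this subtree
            triggers.append((head, child))
            triggers.extend(build_tree(child, part, degree, fresh))
    return triggers

for trigger in build_tree("idg", [f"R{i}" for i in range(1, 8)]):
    print(trigger)
# ('idg', 'x1'), ('x1', 'R1'), ..., ('idg', 'R7'): every receiver is reached through
# a chain of triggers rooted at idg, and no identifier heads more than three triggers.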
4. ADDITIONAL DESIGN AND PERFORMANCE ISSUES

In this section we discuss some additional i3 design and performance issues. The i3 design was intended to be (among other properties) robust, self-organizing, efficient, secure, scalable, incrementally deployable, and compatible with legacy applications. In this section we discuss these issues and some details of the design that are relevant to them.

Before addressing these issues, we first review our basic design. i3 is organized as an overlay network in which every node (server) stores a subset of triggers. In the basic design, at any moment of time, a trigger is stored at only one server. Each end-host knows about one or more i3 servers. When a host wants to send a packet (id, data), it forwards the packet to one of the servers it knows. If the contacted server doesn't store the trigger matching (id, data), the packet is forwarded via IP to another server. This process continues until the packet reaches the server that stores the matching trigger. The packet is then sent to the destination via IP.

4.1 Properties of the Overlay

The performance of i3 depends greatly on the nature of the underlying overlay network. In particular, we need an overlay network that exhibits the following desirable properties:

• Robustness: With a high probability, the overlay network remains connected even in the face of massive server and communication failures.

• Scalability: The overlay network can handle the traffic generated by millions of end-hosts and applications.

• Efficiency: Routing a packet to the server that stores the packet's best matching trigger involves a small number of servers.

• Stability: The mapping between triggers and servers is relatively stable over time, that is, it is unlikely to change during