Similarly, end-hosts wishing to use i3 can locate at least one i3 server using a similar bootstrapping technique; knowledge of a single i3 server is all that's needed to fully utilize the i3 infrastructure.

4.5 Routing Efficiency

As with any network system, efficient routing is important to the overall efficiency of i3. While i3 tries to route each packet efficiently to the server storing the best matching trigger, the routing in an overlay network such as i3 is typically far less efficient than routing the packet directly via IP. To alleviate this problem, the sender caches the i3 server's IP address. In particular, each data and trigger packet carries in its header a refreshing flag f. When a packet reaches an i3 server, the server checks whether it stores the best matching trigger for the packet. If not, it sets the flag f in the packet header before forwarding it. When a packet reaches the server storing the best matching trigger, the server checks flag f in the packet header, and, if f is set, it returns its IP address back to the original sender. In turn, the sender caches this address and uses it to send the subsequent packets with the same identifier. The sender can periodically set the refreshing flag f as a keep-alive message with the cached server responsible for this trigger.

Note that the optimization of caching the server s which stores the receiver's trigger does not undermine the system robustness. If the trigger moves to another server s' (e.g., as the result of a new server joining the system), i3 will simply route the subsequent packets from s to s'.
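The flag-f handshake can be sketched as follows. This is a hedged, minimal simulation, not the actual i3 implementation: the `Packet` and `I3Server` structures, the `ring` lookup dictionary, and the `sender_cache` argument are all assumptions made for illustration.

```python
# Minimal sketch of the refreshing-flag (f) optimization, assuming a
# simplified packet format and server structure (not the real i3 code).

from dataclasses import dataclass

@dataclass
class Packet:
    ident: int                  # i3 identifier the packet is sent to
    data: bytes
    refresh_flag: bool = False  # the flag f from the text

class I3Server:
    def __init__(self, ip, ring):
        self.ip = ip
        self.ring = ring        # ident -> server storing the best match

    def handle(self, packet, sender_cache):
        """Route a packet; teach the sender the responsible server's IP."""
        responsible = self.ring[packet.ident]
        if responsible is not self:
            # Not the best-matching server: set f and forward.
            packet.refresh_flag = True
            return responsible.handle(packet, sender_cache)
        if packet.refresh_flag:
            # The best-matching server sees f set: report our IP so the
            # sender can address subsequent packets here directly.
            sender_cache[packet.ident] = self.ip
        return self.ip

# Usage: the first packet enters at a "wrong" server, and the sender
# learns the address of the server responsible for identifier 42.
ring = {}
a, b = I3Server("10.0.0.1", ring), I3Server("10.0.0.2", ring)
ring[42] = b
cache = {}
a.handle(Packet(42, b"hello"), cache)   # cache[42] becomes "10.0.0.2"
```

Note that a packet sent directly to the responsible server never has f set, so the server returns no address and the sender's cache stays warm only via the periodic keep-alive refreshes described above.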
When the first packet reaches s', the sender will replace s with s' in its cache. If the cached server fails, the client simply uses another known i3 server to communicate. This is the same fall-back mechanism as in the unoptimized case, when the client uses only one i3 server to communicate with all the other clients. Actually, the fact that the client caches the i3 server storing the receiver's trigger can help reduce the recovery time: when the sender notices that the server has failed, it can inform the receiver to reinsert the trigger immediately. Note that this solution assumes that the sender and the receiver can communicate via alternate triggers that are not stored at the same i3 server.

While caching the server storing the receiver's trigger reduces the number of i3 hops, we still need to deal with the triangle routing problem. That is, if the sender and the receiver are close by, but the server storing the trigger is far away, the routing can be inefficient. For example, if the sender and the receiver are both in Berkeley and the server storing the receiver's trigger is in London, each packet will be forwarded to London before being delivered back to Berkeley!

One solution to this problem is to have the receivers choose their private triggers such that they are located on nearby servers. This would ensure that packets won't take a long detour before reaching their destination. If an end-host knows the identifiers of the nearby i3 servers, then it can easily choose triggers with identifiers that map on these servers. In general, each end-host can sample the identifier space to find ranges of identifiers that are stored at nearby servers. To find these ranges, a node R can insert random triggers (id, R) into i3, and then estimate the RTT to the server that stores the trigger by simply sending packets (id, dummy) to itself.
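The sampling procedure can be sketched as below. This is a simulation under stated assumptions: `rtt_to_server` stands in for the real insert-a-trigger-then-ping-yourself measurement, and the identifier-space size and sample count are arbitrary illustrative values.

```python
# Hedged sketch of sampling the identifier space for nearby servers.
# rtt_to_server(ident) stands in for inserting a random trigger
# (ident, R) and timing a (ident, dummy) packet sent to oneself.

import random

def sample_nearby_ids(rtt_to_server, id_space=256, samples=32, seed=1):
    """Return sampled (ident, rtt) pairs, nearest server first."""
    rng = random.Random(seed)
    measurements = []
    for _ in range(samples):
        ident = rng.randrange(id_space)   # pick a random identifier
        rtt = rtt_to_server(ident)        # estimated RTT to its server
        measurements.append((ident, rtt))
    return sorted(measurements, key=lambda m: m[1])

# Toy latency model: pretend low identifiers map to nearby servers.
nearby = sample_nearby_ids(lambda i: 5.0 if i < 64 else 80.0)
```

The receiver would then pick its private triggers from the identifier ranges at the front of the sorted list; since the trigger-to-server mapping is assumed stable, this sampling can run off-line.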
Note that since we assume that the mapping of triggers onto servers is relatively stable over time, this operation can be done off-line. We evaluate this approach by simulation in Section 5.1.

4.6 Avoiding Hot-Spots

Consider the problem of a large number of clients that try to contact a popular trigger, such as CNN's trigger. This may cause the server storing this trigger to overload. The classical solution to this problem is to use caching. When the rate of the packets matching a trigger t exceeds a certain threshold, the server S storing the trigger pushes a copy of t to another server. This process can continue recursively until the load is spread out. The decision of where to push the trigger is subject to two constraints. First, S should push the trigger to the server most likely to route the packets matching that trigger. Second, S should try to minimize the state it needs to maintain; S at least needs to know the servers to which it has already pushed triggers in order to forward refresh messages for these triggers (otherwise the triggers will expire). With Chord, one simple way to address these problems is to always push the triggers to the predecessor server.

If there are other triggers that share the same k-bit prefix with a popular trigger t, all these triggers need to be cached together with t. Otherwise, if the identifier of a packet matches the identifier of a cached trigger t, we cannot be sure that t is indeed the best matching trigger for the packet.

[Figure 7: Heterogeneous multicast application, showing an MPEG sender (S), an MPEG-H.263 transcoder (T), and receivers R1 and R2. Refer to Figure 4(b) for data forwarding in i3.]

4.7 Scalability

Since typically each flow is required to maintain two triggers (one for each end-point), the number of triggers stored in i3 is of the order of the number of flows plus the number of end-hosts.
At first sight, this would be equivalent to a network in which each router maintains per-flow state. Fortunately, this is not the case. While the state of a flow is maintained by each router along its path, a trigger is stored at only one node at a time. Thus, if there are n triggers and N servers, each server will store n/N triggers on the average. This also suggests that i3 can be easily upgraded by simply adding more servers to the network. One interesting point to note is that these nodes do not need to be placed at specific locations in the network.

4.8 Incremental Deployment

Since i3 is designed as an overlay network, i3 is incrementally deployable. At the limit, i3 may consist of only one node that stores all triggers. Adding more servers to the system does not require any system configuration. A new server simply joins the i3 system using the Chord protocol, and becomes automatically responsible for an interval in the identifier space. When triggers with identifiers in that interval are refreshed/inserted, they will be stored at the new server. In this way, the addition of a new server is also transparent to the end-hosts.

4.9 Legacy Applications

The packet delivery service implemented by i3 is best-effort, which allows existing UDP-based applications to work over i3 easily. The end-host runs an i3 proxy that translates between the applications' UDP packets and i3 packets, and inserts/refreshes triggers
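The proxy-side translation can be sketched as follows. The 256-bit identifier length matches i3's identifiers, but the wire format here (the identifier simply prepended to the UDP payload) is an assumption made for illustration, not the actual i3 packet layout.

```python
# Sketch of the proxy translation for legacy UDP applications.
# ASSUMPTION: an i3 packet is modeled as identifier || payload; the
# real i3 header (stacks, flags, etc.) is richer than this.

ID_LEN = 32  # 256-bit i3 identifier, in bytes

def udp_to_i3(ident: bytes, udp_payload: bytes) -> bytes:
    """Wrap an application's UDP payload into an i3 packet (id, data)."""
    if len(ident) != ID_LEN:
        raise ValueError("i3 identifiers are 256 bits")
    return ident + udp_payload

def i3_to_udp(i3_packet: bytes) -> tuple[bytes, bytes]:
    """Split an i3 packet back into (identifier, original UDP payload)."""
    return i3_packet[:ID_LEN], i3_packet[ID_LEN:]
```

On the receive side, the proxy would also insert and periodically refresh the trigger (id, R) on the application's behalf, so the application itself never sees i3.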