...monopolizing a server's resources.

3. Loop detection: When a trigger that doesn't point to an IP address is inserted, the server checks that the new trigger does not create a loop. A simple procedure is to send a special packet with a random nonce; if the packet returns to the server, the trigger is simply removed. To increase robustness, the server can invoke this procedure periodically after such a trigger is inserted. Another possibility for detecting loops more efficiently would be to use a Bloom filter to encode the set of i3 servers along the packet's path, as proposed in the Icarus framework [34].
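As a rough illustration of the nonce-probe check in item 3, the sketch below emulates the walk that such a probe would take through a chain of triggers. This is a simplified local model, not the i3 implementation: the in-memory trigger table, the points_to_ip helper, and the hop limit are assumptions made for this sketch, and a real server would send an actual probe packet through the overlay rather than walking a local table.

    # Minimal sketch (Python). Triggers that point to another identifier are
    # modeled as int -> int entries; triggers that point to an end host are
    # int -> str (an IP address). All names here are illustrative.
    triggers = {}  # trigger identifier -> target (another identifier or an IP string)

    def points_to_ip(target):
        return isinstance(target, str)  # in this sketch, only IP targets are strings

    def creates_loop(new_id, new_target, max_hops=32):
        """Emulate the nonce probe: follow the chain of triggers starting at the new
        trigger's target; if the walk comes back to new_id, inserting it closes a loop."""
        current, hops = new_target, 0
        while hops < max_hops and not points_to_ip(current):
            if current == new_id:        # the "probe" returned to the inserting server
                return True
            if current not in triggers:  # chain ends at an identifier with no trigger
                return False
            current = triggers[current]
            hops += 1
        return False

    def insert_trigger(trig_id, target):
        """Insert a trigger, rejecting it if it would create a loop (item 3 above)."""
        if not points_to_ip(target) and creates_loop(trig_id, target):
            return False
        triggers[trig_id] = target
        return True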
4.11 Anonymity

Point-to-point communication networks such as the Internet provide limited support for anonymity. Packets usually carry the destination and the source addresses, which makes it relatively easy for an eavesdropper to learn the identities of the sender and the receiver. In contrast, with i3, eavesdropping on the traffic of a sender will not reveal the identity of the receiver, and eavesdropping on the traffic of a receiver will not reveal the sender's identity. The level of anonymity can be further enhanced by using a chain of triggers or a stack of identifiers to route packets.

5. SIMULATION RESULTS

In this section, we evaluate the routing efficiency of i3 by simulation. One of the main challenges in providing efficient routing is that end-hosts have little control over the location of their triggers. However, we show that simple heuristics can significantly improve i3's performance. The metric we use to evaluate these heuristics is the ratio of the inter-node latency on the i3 network to the inter-node latency on the underlying IP network, called the latency stretch.

The simulator is based on the Chord protocol and uses iterative style routing [26]. We assume that node identifiers are randomly distributed. This assumption is consistent with the way identifiers are chosen in other lookup systems such as CAN [22] and Pastry [23]. As discussed in [26], using random node identifiers increases system robustness and load-balancing. (We have also experimented with identifiers that have location semantics; in particular, we used space-filling curves, such as the Hilbert curve, to map a d-dimensional geometric space, which was shown to approximate Internet latency well [20], onto the one-dimensional Chord identifier space. However, the preliminary results do not show significant gains over the heuristics presented in this section, so we omit them here.) We consider the following network topologies in our simulations:

- A power-law random graph topology generated with the INET topology generator [16] with 5000 nodes, where the delay of each link is uniformly distributed in the interval [5, 100) ms. The i3 servers are randomly assigned to the network nodes.

- A transit-stub topology generated with the GT-ITM topology generator [11] with 5000 nodes, where link latencies are 100 ms for intra-transit domain links, 10 ms for transit-stub links, and 1 ms for intra-stub domain links. The i3 servers are randomly assigned to stub nodes.

5.1 End-to-End Latency

Consider a source A that sends packets to a receiver R via trigger (id, R). As discussed in Section 4.5, once the first packet reaches the server S storing the trigger (id, R), A caches S and sends all subsequent packets directly to S. As a result, the packets are routed via IP from A to S and then from S to R. The obvious question is how efficient routing through S is compared to routing directly from A to R. Section 4.5 presents a simple heuristic in which a receiver R samples the identifier space to find an identifier id_e that is stored at a nearby server; R then inserts the trigger (id_e, R).

[Figure 8: The 90th percentile latency stretch vs. the number of samples, for the power-law random graph (PLRG) and transit-stub topologies with 5000 nodes.]

Figure 8 plots the 90th percentile latency stretch versus the number of samples k in a system with 16,384 i3 servers. Each point represents the 90th percentile over 1000 measurements. For each measurement, we randomly choose a sender and a receiver. In each case, the receiver generates k triggers with random identifiers and retains the trigger that is stored at the closest server. We then sum the shortest-path latency from the sender to S and from S to the receiver, and divide it by the shortest-path latency from the sender to the receiver to obtain the latency stretch. Sampling the identifier space greatly lowers the stretch. While increasing the number of samples decreases the stretch further, the improvement saturates rapidly, indicating that in practice just 16-32 samples should suffice. The receiver does not need to search for a close identifier every time a connection is opened; in practice, an end-host can sample the space periodically and maintain a pool of identifiers that it can reuse.
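The stretch metric and the sampling heuristic above can be summarized in a short sketch. This is an illustration only: lookup_server and measure_rtt are assumed helper functions standing in for a Chord lookup and an RTT probe, and the 256-bit identifier length is an assumption, not something stated in this section.

    import random

    ID_BITS = 256  # assumed identifier length for this sketch

    def latency_stretch(lat_sender_to_server, lat_server_to_receiver, lat_sender_to_receiver):
        """Latency stretch as defined above: overlay path latency over direct IP latency."""
        return (lat_sender_to_server + lat_server_to_receiver) / lat_sender_to_receiver

    def pick_nearby_identifier(k, lookup_server, measure_rtt):
        """Sample k random identifiers and keep the one whose responsible server
        is closest to this end-host (the heuristic evaluated in Figure 8)."""
        best_id, best_rtt = None, float("inf")
        for _ in range(k):
            candidate = random.getrandbits(ID_BITS)  # random point in the identifier space
            server = lookup_server(candidate)        # server that would store a trigger with this id
            rtt = measure_rtt(server)                # network distance to that server
            if rtt < best_rtt:
                best_id, best_rtt = candidate, rtt
        return best_id

    # As the text notes, an end-host would run this only occasionally, e.g.
    # pick_nearby_identifier(32, lookup_server, measure_rtt), and keep a small
    # pool of nearby identifiers to reuse when opening new connections.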
5.2 Proximity Routing in i3

While Section 5.1 evaluates the end-to-end latency experienced by data packets after the sender caches the server storing the receiver's trigger t, in this section we evaluate the latency incurred by the sender's first packet that matches trigger t. This packet is routed through the overlay network until it reaches the server storing t. While Chord ensures that the overlay route length is only O(log N), where N is the number of i3 servers, the routing latency can be quite large. This is because server identifiers are randomly chosen, so servers that are close in the identifier space can be very far apart in the underlying network. To alleviate this problem, we consider two simple heuristics:

- Closest finger replica: In addition to each finger, a server maintains the r-1 immediate successors of that finger. Thus, each node maintains references to about r log2 N other nodes for routing purposes. To route a packet, a server selects the closest node, in terms of network distance, among (1) the finger with the largest identifier preceding the packet's identifier, and (2) the r-1 immediate successors of that finger.
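A rough sketch of this next-hop selection follows. The Node and FingerEntry structures, the per-node RTT field, and the ring size are assumptions made for illustration; the text above only specifies the selection rule itself.

    from dataclasses import dataclass, field
    from typing import List, Optional

    ID_SPACE = 2 ** 256  # assumed size of the circular identifier space

    @dataclass
    class Node:
        node_id: int
        rtt_ms: float  # measured network distance from this server to the node

    @dataclass
    class FingerEntry:
        finger: Node                                          # regular Chord finger
        replicas: List[Node] = field(default_factory=list)    # its r-1 immediate successors

    def clockwise_gap(from_id, to_id):
        """Distance from from_id to to_id going clockwise around the identifier ring."""
        return (to_id - from_id) % ID_SPACE

    def next_hop(packet_id, finger_table: List[FingerEntry]) -> Optional[Node]:
        """Pick the next hop for a packet addressed to packet_id.

        Step 1 (standard Chord): find the finger with the largest identifier
        preceding packet_id, i.e. the finger with the smallest positive clockwise
        gap to packet_id. Step 2 (proximity heuristic): among that finger and its
        r-1 replicas, choose the node with the smallest network distance."""
        best_entry, best_gap = None, None
        for entry in finger_table:
            gap = clockwise_gap(entry.finger.node_id, packet_id)
            if gap == 0:
                continue  # finger id equals packet id; resolved by trigger lookup, not forwarding
            if best_gap is None or gap < best_gap:
                best_entry, best_gap = entry, gap
        if best_entry is None:
            return None
        candidates = [best_entry.finger] + best_entry.replicas
        return min(candidates, key=lambda node: node.rtt_ms)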