Figure 10: Per packet forwarding overhead as a function of payload packet size. In this case, the i3 header size is 48 bytes.

be able to maintain up to 30 x 80,000 = 2.4 x 10^6 triggers.

Data packet forwarding: Figure 10 plots the overhead of forwarding a data packet to its final destination. This involves looking up the matching trigger and forwarding the packet to its destination addresses. Since we didn't enable multicast, in our experiments there was never more than one address. Like trigger insertion, packet forwarding consists of a hash table lookup. In addition, this measurement includes the time to send the data packet. Packet forwarding time, in our experiments, increases roughly linearly with the packet size. This indicates that as packet size increases, memory copy operations and pushing the bits through the network dominate processing time.

i3 routing: Figure 11 plots the overhead of routing a packet to another i3 node. This differs from data packet forwarding in that we route the packet using a node's finger table rather than its trigger table. This occurs when a data packet's trigger is stored on some other node. The most costly operation here is a linear finger table lookup, as evidenced by the graph. There are two reasons for this seemingly poor behavior. First, we augment the finger table with a cache containing the most recent servers that have sent control or data packets. Since in our experiments this cache is large enough to store all servers in the system, the number of nodes used to route a packet (i.e., the fingers plus the cached nodes) increases roughly linearly with the number of nodes in the system. Second, the finger table data structure in our implementation is a list. In a more polished implementation, a more efficient data structure is clearly needed to significantly improve the performance.
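To make the two measured code paths concrete, the sketch below (in Python, purely illustrative, not the paper's actual implementation) shows the shape of the per-packet work described above: a hash-table lookup of the matching trigger followed by the send, and, when no local trigger matches, a linear scan over the finger table and the cache of recently seen servers to pick the next hop. The names (I3Node, send_to, closest_preceding) and the modular-distance heuristic are assumptions made for illustration.

```python
# Minimal sketch of an i3 server's per-packet handling (illustrative only).
from dataclasses import dataclass, field

ID_BITS = 256  # i3 identifiers are 256 bits long


def send_to(addr, payload: bytes) -> None:
    """Stand-in for the UDP send that the real server performs."""
    print(f"send {len(payload)} bytes to {addr}")


@dataclass
class I3Node:
    node_id: int
    triggers: dict = field(default_factory=dict)  # id -> list of destination addresses
    fingers: list = field(default_factory=list)   # [(node_id, addr)] kept as a plain list,
                                                  # mirroring the linear lookup noted above
    cache: dict = field(default_factory=dict)     # recently seen servers: node_id -> addr

    def handle_data_packet(self, pkt_id: int, payload: bytes) -> None:
        # Data packet forwarding: a single hash-table lookup plus the send itself.
        addrs = self.triggers.get(pkt_id)
        if addrs:
            for addr in addrs:   # no multicast in the experiments, so one address
                send_to(addr, payload)
            return
        # i3 routing: the matching trigger is stored on some other node, so
        # forward toward it using the finger table plus the cache.
        next_hop = self.closest_preceding(pkt_id)
        if next_hop is not None:
            send_to(next_hop, payload)

    def closest_preceding(self, target: int):
        """Linear scan over fingers and cached servers for the node whose id
        most closely precedes the target on the identifier circle."""
        best_addr, best_dist = None, None
        for nid, addr in self.fingers + list(self.cache.items()):
            dist = (target - nid) % (1 << ID_BITS)  # modular distance to the target
            if best_dist is None or dist < best_dist:
                best_addr, best_dist = addr, dist
        return best_addr


if __name__ == "__main__":
    node = I3Node(node_id=1)
    node.triggers[42] = ["10.0.0.5:5000"]
    node.fingers.append((7, "10.0.0.9:5000"))
    node.handle_data_packet(42, b"hello")  # matches the local trigger
    node.handle_data_packet(99, b"hello")  # no local trigger: routed via fingers/cache
```

Note how the next-hop selection scans every known server; this linear scan over fingers plus cached entries is why the routing cost in Figure 11 grows with the number of nodes in the system.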
Throughput: Finally, we ran some experiments to see the maximum rate at which a node can process data packets. Ideally, this should be the inverse of overhead. To test throughput, a single node is bombarded with more packets than it can reasonably handle. We measure the time it takes for 100,000 packets to emerge from the node to determine throughput. Not surprisingly, as packet payload increases, throughput in packets decreases. In addition, we calculate the data throughput from the user perspective. Only the payload data is considered; headers are overhead to users. The user throughput in Mbps increases as the packet payload increases because the overhead for headers and processing is roughly the same for both small and large payloads (for example, at a 1,200-byte payload, 26,164 pkts/sec x 1,200 bytes x 8 bits is roughly 251 Mbps).

Figure 11: Per packet routing overhead as a function of the number of i3 nodes in the system. The packet payload size is zero.

Payload Size    Avg. Throughput (std. dev.)    Avg. Throughput
(bytes)         (pkts/sec)                     (payload Mbps)
0               35,753 (2,406)                 0
200             33,130 (3,035)                 53.00
400             28,511 (1,648)                 91.23
600             28,300 (595)                   135.84
800             27,842 (1,028)                 178.18
1,000           27,060 (1,127)                 216.48
1,200           26,164 (1,138)                 251.16
1,400           23,339 (1,946)                 261.39

Figure 12: The throughput of the data packet forwarding.

6. RELATED WORK

The rendezvous-based communication is similar in spirit to the tuple space work in distributed systems [2, 14, 36]. A tuple space is a shared memory that can be accessed by any node in the system. Nodes communicate by inserting tuples and retrieving them from a tuple space, rather than by point-to-point communication. Tuples are more general than data packets. A tuple consists of arbitrary typed fields and values, while a packet consists of just an identifier and a data payload. In addition, tuples are guaranteed to be stored until they are explicitly removed. Unfortunately, the added expressiveness and stronger guarantees of tuple spaces make them very hard to implement efficiently on a large scale. Finally, with tuple spaces, a node has to explicitly ask for each data packet. This interface is not effective for high-speed communications.

i3's communication paradigm is similar to the publish-subscribe-notify (PSN) model. The PSN model itself already exists in many proprietary forms in commercial systems [29, 31]. While the matching operations employed by these systems are typically much more powerful than the longest prefix matching used by i3, it is not clear how scalable these systems are. In addition, these systems don't provide support for service composition.

Active Networks aim to support rapid development and deployment of new network applications by downloading and executing customized programs in the network [33]. i3 provides an alternative design that, while not as general and flexible as Active Networks, is able to realize a variety of basic communication services without the need for mobile code or any heavyweight protocols.

i3 is similar to many naming systems. This should come as no surprise, as identifiers can be viewed as semantic-less names. One future research direction is to use i3 as a unifying framework to