To examine all claims, we present wide-area measurements of a synthetic workload on CoralCDN nodes running on PlanetLab, an internationally deployed test bed. We use such an experimental setup because traditional tests for CDNs or web servers are not interesting in evaluating CoralCDN: (1) Client-side traces generally measure the cacheability of data and client latencies. However, we are mainly interested in how well the system handles load spikes. (2) Benchmark tests such as SPECweb99 measure the web server's throughput on disk-bound access patterns, while CoralCDN is designed to reduce load on off-the-shelf web servers that are network-bound.

The basic structure of the experiments is as follows. First, on 166 PlanetLab machines geographically distributed mainly over North America and Europe, we launch a Coral daemon, as well as a dnssrv and CoralProxy. For experiments referred to as multi-level, we configure a three-level hierarchy by setting the clustering RTT threshold of level 1 to 60 msec and level 2 to 20 msec. Experiments referred to as single-level use only the level-0 global cluster. No objects are evicted from CoralProxy caches during these experiments. For simplicity, all nodes are seeded with the same well-known host. The network is allowed to stabilize for 30 minutes.[5]

[5] The stabilization time could be made shorter by reducing the clustering period (5 minutes). Additionally, in real applications, clustering is in fact a simpler task, as new nodes would immediately join nearby large clusters as they join the pre-established system. In our setup, clusters develop from an initial network comprised entirely of singletons.
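To make the clustering configuration concrete, the following minimal sketch shows the level-selection rule these RTT thresholds imply; the function and threshold table are illustrative stand-ins, not Coral's actual interface.

```python
# Hypothetical sketch of how a node might pick the highest cluster level
# it can share with a peer, given the thresholds used in the experiment:
# level 0 is the global cluster (no RTT bound), level 1 requires <= 60
# msec, and level 2 requires <= 20 msec. Names here are illustrative.

RTT_THRESHOLDS_MSEC = {0: float("inf"), 1: 60.0, 2: 20.0}

def highest_shared_level(rtt_msec: float) -> int:
    """Return the highest cluster level whose RTT threshold this peer meets."""
    level = 0
    for lvl in sorted(RTT_THRESHOLDS_MSEC):
        if rtt_msec <= RTT_THRESHOLDS_MSEC[lvl]:
            level = lvl
    return level

# A 15 msec peer qualifies for a level-2 cluster; an 80 msec peer only
# for the global (level-0) cluster.
assert highest_shared_level(15.0) == 2
assert highest_shared_level(45.0) == 1
assert highest_shared_level(80.0) == 0
```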
Second, we run an unmodified Apache web server sitting behind a DSL line with 384 Kbit/sec upstream bandwidth, serving 12 different 41 KB files, representing groups of three embedded images referenced by four web pages.

Third, we launch client processes on each machine that, after an additional random delay between 0 and 180 seconds for asynchrony, begin making HTTP GET requests to Coralized URLs. Each client generates requests for the group of three files, corresponding to a randomly selected web page, for a period of 30 minutes. While we recognize that web traffic generally has a Zipf distribution, we are attempting merely to simulate a flash crowd to a popular web page with multiple, large, embedded images (i.e., the Slashdot effect). With 166 clients, we are generating 99.6 requests/sec, resulting in a cumulative download rate of approximately 32,800 Kbit/sec. This rate is almost two orders of magnitude greater than the origin web server could handle.
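As a sanity check, the stated load figures can be reproduced with back-of-the-envelope arithmetic. The per-client pacing below (one three-file page every five seconds) is our inference from the stated totals, not a number given in the text.

```python
# Back-of-the-envelope check of the stated load, using only figures
# from the text. 99.6 req/s across 166 clients works out to 0.6 req/s
# per client, i.e., one 3-file page every 5 seconds (our inference).

clients = 166
reqs_per_sec = 99.6                # stated aggregate request rate
file_size_kbit = 41 * 8            # each file is 41 KB = 328 Kbit
dsl_upstream_kbit = 384            # origin server's upstream capacity

per_client = reqs_per_sec / clients            # 0.6 req/s per client
download_kbit = reqs_per_sec * file_size_kbit  # ~32,669 Kbit/s (text rounds
                                               # to ~32,800)
overload = download_kbit / dsl_upstream_kbit   # ~85x the DSL uplink, i.e.,
                                               # "almost two orders of
                                               # magnitude"

print(f"{per_client:.1f} req/s per client")
print(f"{download_kbit:,.0f} Kbit/s aggregate, {overload:.0f}x origin capacity")
```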
Note that this rate was chosen synthetically and in no way suggests a maximum system throughput. For Experiment 4 (Section 6.4), we do not run any such clients. Instead, Coral nodes generate requests at very high rates, all for the same key, to examine how the DSHT indexing infrastructure prevents nodes close to a target ID from becoming overloaded.

[Figure 4: The number of client accesses to CoralProxies and the origin HTTP server. CoralProxy accesses are reported relative to the cluster level from which data was fetched, and do not include requests handled through local caches. Axes: Requests / Minute vs. Time (sec); series: level 2, level 1, level 0, origin server.]

6.1 Server Load

Figure 4 plots the number of requests per minute that could not be handled by a CoralProxy's local cache. During the initial minute, 15 requests hit the origin web server (for 12 unique files). The 3 redundant lookups are due to the simultaneity at which requests are generated; subsequently, requests are handled either through CoralCDN's wide-area cooperative cache or through a proxy's local cache, supporting our hypothesis that CoralCDN can migrate load off of a web server.

During this first minute, equal numbers of requests were handled by the level-1 and level-2 cluster caches. However, as the files propagated into CoralProxy caches, requests were quickly resolved within the faster level-2 clusters. Within 8-10 minutes, the files became replicated at nearly every server, so few client requests went further than the proxies' local caches. Repeated runs of this experiment yielded some variance in the relative magnitudes of the initial spikes in requests to different levels, although the number of origin server hits remained consistent.
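The progression visible in Figure 4 reflects the order in which a proxy searches for content on a local-cache miss: fastest clusters first, the origin only as a last resort. The sketch below illustrates that control flow under assumed names; it is not CoralCDN's implementation.

```python
# Illustrative control flow for a proxy handling a request: local cache
# first, then progressively wider (slower) clusters, and the origin
# server only as a last resort. All names (cluster objects, fetch_origin)
# are hypothetical stand-ins for Coral's actual machinery.

def handle_request(url, local_cache, clusters, fetch_origin):
    """clusters is ordered fastest-first: [level2, level1, level0]."""
    if url in local_cache:
        return local_cache[url]        # hit: not counted in Figure 4
    for cluster in clusters:
        body = cluster.lookup(url)     # DSHT lookup for a cached copy
        if body is not None:
            local_cache[url] = body    # replicate locally, so later
            return body                # requests stop at this proxy
    body = fetch_origin(url)           # only the first few requests
    local_cache[url] = body            # ever reach the origin server
    return body
```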
6.2 Client Latency

Figure 5 shows the end-to-end latency for a client to fetch a file from CoralCDN, following the steps given in Section 2.2. The top graph shows the latency across all PlanetLab nodes used in the experiment; the bottom graph only includes data from the clients located on 5 nodes in Asia (Hong Kong (2), Taiwan, Japan, and the Philippines). Because most nodes are located in the U.S. or Europe, the performance benefit of clustering is much more pronounced on the graph of Asian nodes.

Recall that this end-to-end latency includes the time for the client to make a DNS request and to connect to the discovered proxy.
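For reference, a rough sketch of how such an end-to-end measurement decomposes on the client side (DNS resolution of the Coralized name, TCP connect, then the transfer); the hostname and timing breakdown are illustrative, not the instrumentation actually used in the experiment.

```python
# Rough client-side decomposition of end-to-end fetch latency: DNS
# resolution of the Coralized name, TCP connect to the returned proxy,
# then the full HTTP transfer. The example hostname is illustrative.

import socket
import time
from urllib.request import urlopen

def timed_fetch(host, url, port=8090):
    t0 = time.monotonic()
    addr = socket.gethostbyname(host)   # DNS step (dnssrv redirection)
    t1 = time.monotonic()
    sock = socket.create_connection((addr, port), timeout=10)
    t2 = time.monotonic()
    sock.close()
    body = urlopen(url, timeout=30).read()   # full transfer (urlopen opens
    t3 = time.monotonic()                    # its own, second connection)
    return {"dns": t1 - t0, "connect": t2 - t1,
            "total": t3 - t0, "bytes": len(body)}

# e.g., timed_fetch("example.com.nyud.net",
#                   "http://example.com.nyud.net:8090/img1.jpg")
```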