[Fig. 11 (figure not reproduced here): three panels plotting the number of PMs used by RP, QUEUE, and RB against the number of VMs, from 100 to 1000.]
Fig. 11. Packing results. The common settings are: ρ = 0.01, d = 16, pon = 0.01, poff = 0.09, and Cj ∈ [80, 100]. (a) Rb = Re, Rb and Re ∈ [2, 20]. (b) Rb > Re, Rb ∈ [12, 20], Re ∈ [2, 10]. (c) Rb < Re, Rb ∈ [2, 10], Re ∈ [12, 20].

[Fig. 12 (figure not reproduced here): three panels plotting the average COR of QUEUE and RB against the number of VMs, together with the capacity overflow ratio threshold, for (a) Rb = Re, (b) Rb > Re, and (c) Rb < Re.]
Fig. 12. Comparison results of QUEUE and RB with respect to capacity overflow ratio (COR).

7.2 Simulation Results

We first evaluate the computation cost of our algorithm briefly, and then quantify the reduction in the number of running PMs, as well as compare the runtime performance with two commonly used packing strategies.

To investigate the performance of our algorithm in various settings, three kinds of workload patterns are used for each experiment: Rb = Re, Rb > Re, and Rb < Re, which denote workloads with normal spike size, small spike size, and large spike size, respectively. It will be observed later that the workload pattern of the VMs does affect the packing result, the number of active PMs, and the number of migrations.

According to the results in Section 5.3, the time complexity of QUEUE is O(d^4 + n log n + mn). In Fig. 10, we present the experimental computation cost of QUEUE with reasonable d and n values. We see that our algorithm incurs very little overhead with moderate n and d values; the cost variation with respect to n is not even distinguishable at the millisecond level.

To evaluate the consolidation performance of QUEUE in different settings, we then choose Rb and Re uniformly at random from a certain range for each VM. We repeat the experiments multiple times for convergence.
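To make the experimental setup concrete, the sketch below shows one way such bursty VM workloads could be generated. It is only an illustration under our assumptions: we take the two-state on/off spike model discussed in Section 3, with a VM demanding Rb outside spikes and Rb + Re during a spike, and the function and parameter names (generate_workload, num_slots, etc.) are ours, not the paper's.

    import random

    def generate_workload(num_vms, rb_range, re_range, p_on, p_off, num_slots, seed=None):
        # Illustrative sketch: Rb and Re are drawn uniformly from the given ranges
        # (as in the experiments); in each slot a spike starts with probability
        # p_on and, once started, ends with probability p_off. Demand is assumed
        # to be Rb outside spikes and Rb + Re during a spike.
        rng = random.Random(seed)
        traces = []
        for _ in range(num_vms):
            rb = rng.uniform(*rb_range)
            re = rng.uniform(*re_range)
            in_spike = False
            demand = []
            for _ in range(num_slots):
                if in_spike:
                    in_spike = rng.random() >= p_off   # spike persists w.p. 1 - p_off
                else:
                    in_spike = rng.random() < p_on     # spike starts w.p. p_on
                demand.append(rb + re if in_spike else rb)
            traces.append(demand)
        return traces

    # Example: the setting of Figs. 11(a)/12(a): Rb, Re ~ U[2, 20], p_on = 0.01, p_off = 0.09.
    traces = generate_workload(num_vms=1000, rb_range=(2, 20), re_range=(2, 20),
                               p_on=0.01, p_off=0.09, num_slots=10000, seed=1)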
The capacity overflow ratio (COR) is used here as the metric of runtime performance. Since FFD by Rp never incurs capacity violations, it is not included in the performance assessment.

Figs. 11 and 12 show the packing results and the COR results, respectively. The common settings for the three subfigures are as follows: ρ = 0.01, d = 16, pon = 0.01, poff = 0.09, and Cj ∈ [80, 100]. As we discussed in Section 3, pon indicates the frequency of spike occurrence. For a bursty workload, spikes usually occur with low frequency and short duration; therefore, we choose pon = 0.01 and poff = 0.09. Workload patterns are distinguished by setting different ranges for Rb and Re. For Figs. 11(a) and 12(a), Rb = Re, with Rb and Re ∈ [2, 20]; for Figs. 11(b) and 12(b), Rb > Re, with Rb ∈ [12, 20] and Re ∈ [2, 10]; and for Figs. 11(c) and 12(c), Rb < Re, with Rb ∈ [2, 10] and Re ∈ [12, 20].

We see that QUEUE significantly reduces the number of PMs used compared with FFD by Rp (denoted as RP). When Rb < Re, QUEUE uses 45% fewer PMs than RP, while the reductions for Rb = Re and Rb > Re are 30% and 18%, respectively. FFD by Rb (denoted as RB) uses even fewer PMs, but its runtime performance is disastrous according to Fig. 12: the COR of RB is unacceptably high. With larger spike sizes (Rb < Re), the packing result of QUEUE is better, because more PMs are saved compared with RP and fewer additional PMs (for live migrations) are used compared with RB (see Fig. 11(c)). At the same time, with larger spike sizes, the average COR of QUEUE is slightly higher, but it is still bounded by ρ (see Fig. 12(c)). The case of smaller spike sizes shows the opposite results.

We note that in each experiment there are very few PMs with CORs slightly higher than ρ. This is because a Markov chain needs some time to reach its stationary distribution. Although we did not theoretically evaluate whether the chain constructed in Section 5 is rapidly mixing, in our experiments we find that the time the chain takes to reach its stationary distribution is very short.
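As a rough numerical illustration of why this settling time is short (our own back-of-the-envelope check, limited to the per-VM on/off spike chain of Section 3, not the configuration chain of Section 5): with pon = 0.01 and poff = 0.09, the stationary probability of being in a spike is pon/(pon + poff) = 0.1, the mean spike length is 1/poff ≈ 11 slots, and the deviation from stationarity decays by a factor of |1 − pon − poff| = 0.9 per slot, falling below 1% of its initial value after about 44 slots. The snippet below computes these quantities.

    import math

    # Two-state on/off spike chain with the experimental values (Section 3 model).
    p_on, p_off = 0.01, 0.09

    pi_spike = p_on / (p_on + p_off)        # stationary spike probability = 0.1
    mean_spike_len = 1.0 / p_off            # mean spike duration ~ 11.1 slots

    # The chain's second eigenvalue 1 - p_on - p_off governs geometric convergence
    # to stationarity; count the slots until the deviation drops below 1%.
    decay = abs(1.0 - p_on - p_off)         # = 0.9
    slots_to_1_percent = math.ceil(math.log(0.01) / math.log(decay))   # = 44

    print(pi_spike, mean_spike_len, slots_to_1_percent)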
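For completeness, the final sketch shows how the COR reported in Fig. 12 could be measured in such a simulation. We assume here, as our own reading rather than the paper's exact definition, that a PM's COR is the fraction of time slots in which the total demand of the VMs placed on it exceeds its capacity Cj; the measured value is then compared against the threshold ρ = 0.01.

    def capacity_overflow_ratio(vm_traces, capacity):
        # Fraction of time slots in which the PM's aggregate demand exceeds its
        # capacity (assumed definition of COR, for illustration only).
        num_slots = len(vm_traces[0])
        overflow = sum(1 for t in range(num_slots)
                       if sum(trace[t] for trace in vm_traces) > capacity)
        return overflow / num_slots

    # Tiny hand-made example: two VMs on one PM with capacity Cj = 30. The slot
    # sums are 22, 22, 34, 22, so the capacity is exceeded in 1 of 4 slots and
    # COR = 0.25. Traces from the generate_workload sketch above plug in the same way.
    vm_traces = [[10, 10, 22, 10],
                 [12, 12, 12, 12]]
    print(capacity_overflow_ratio(vm_traces, capacity=30))   # -> 0.25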