$P'(C_i) = 1 - (1 - P(C_i))^n$   (16)

According to Equation 13, the size of the cluster can be calculated as follows:

$n = \left\lceil \ln(1 - P'(C_i)) \,/\, \ln(1 - P(C_i)) \right\rceil$   (17)

Leveraging the domain information for the resource component, the HA cluster pattern can be generated and configured into the deployment topology.
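To make the sizing rule concrete, the following is a minimal sketch (not taken from the paper) of how Equations 16 and 17 can be evaluated, reading $P(C_i)$ as the availability of a single node and $P'(C_i)$ as the availability required of the cluster; the function name and the example values are illustrative assumptions.

```python
import math

def cluster_size(p_single: float, p_target: float) -> int:
    """Smallest n such that a cluster of n identical nodes reaches the
    target availability: 1 - (1 - p_single)**n >= p_target (Equation 16),
    i.e. n = ceil(ln(1 - p_target) / ln(1 - p_single)) (Equation 17)."""
    if not (0.0 < p_single < 1.0 and 0.0 < p_target < 1.0):
        raise ValueError("availabilities must lie strictly between 0 and 1")
    return max(1, math.ceil(math.log(1.0 - p_target) / math.log(1.0 - p_single)))

# Example: nodes with 99% availability; a cluster of 3 reaches 99.999%.
print(cluster_size(0.99, 0.99999))  # -> 3
```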
D. Computational Complexity

In this section, we compare the computational complexity of the exhaustive iteration method with that of our weak-point analysis methodology.

Assume that there exist n candidate IT resources that may need to be HA enhanced. We set the upper bound for the cluster size of any resource to k (this bound is necessary for the iteration method, but not for our methodology). For the exhaustive iteration method, the computational complexity of reaching the optimal solution is $k \cdot k \cdots k$ (n factors), i.e., $O(k^n)$. For our Lagrange multiplier-based method, the solution is calculated by solving the set of equations in Equation 15; the computational complexity is bound only by the number of variables in those equations and is therefore polynomial, $O(n^m)$, where m is a constant. As a result, our method scales better and has a much lower computational complexity than the exhaustive iteration method when the number of candidate IT resources is large.
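To see where the $O(k^n)$ term comes from, here is a minimal brute-force sketch of the exhaustive iteration baseline; the `satisfies` and `cost` callbacks are placeholders for the workflow availability constraints and the optimization objective, and are assumptions rather than the paper's definitions.

```python
from itertools import product

def exhaustive_search(n, k_max, satisfies, cost):
    """Try every assignment of a cluster size in 1..k_max to each of the
    n candidate resources: k_max**n combinations in total, hence O(k^n)."""
    best, best_cost = None, float("inf")
    for sizes in product(range(1, k_max + 1), repeat=n):
        if satisfies(sizes):
            c = cost(sizes)
            if c < best_cost:
                best, best_cost = sizes, c
    return best  # optimal under the given constraints, or None
```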
Furthermore, we use a weight-based optimization mechanism to reduce the calculation complexity for very large-scale deployment topologies. Since the number of candidate IT resources for availability enhancement can be extremely large in such topologies, it is useful to have a way to reduce the number of candidate IT resources, in order to simplify the calculations required by Equation 15.

The principle of our weight-based optimization mechanism is to select a subset of the IT resources, based on weights, for use in our weak-point analysis methodology. We exploit the fact that enhancing the availability of the resources involved in more workflows with critical availability requirements will yield a better overall HA enhancement for the workflows. To this end, we propose the following mechanism to select the relevant IT resources.

The weight for resource $C_j$ is defined as

$W(C_j) = \sum_{i=1}^{n} R_{i,j} \cdot P_i$   (18)

where $R_{i,j}$ denotes the integer value defined in the workflow-resource relationship matrix, and $P_i$ denotes the availability requirement of workflow $W_i$. The priority list of IT resources can then be determined according to their weights: the resources that support more workflows, and more availability-critical workflows, are given higher weights. According to the priority list, the top q resources can be selected to calculate the HA enhancement recommendation; the calculated result is a near-optimal solution only for the q candidate resources taken into consideration, but the computation complexity is reduced according to the selected number q (i.e., the calculation is based only on the selected IT resources).
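As an illustration, the weighting and pruning step could be realized as follows; this is a minimal sketch assuming the relationship matrix R is given as a list of rows, and the function names are hypothetical.

```python
def resource_weights(R, P):
    """Equation 18: W(C_j) = sum_i R[i][j] * P[i], where R is the
    workflow-resource relationship matrix and P[i] is the availability
    requirement of workflow W_i."""
    n_workflows, n_resources = len(R), len(R[0])
    return [sum(R[i][j] * P[i] for i in range(n_workflows))
            for j in range(n_resources)]

def top_q_resources(R, P, q):
    """Priority list: keep only the q highest-weighted resources as
    candidates for the weak-point analysis."""
    weights = resource_weights(R, P)
    ranked = sorted(range(len(weights)), key=lambda j: weights[j], reverse=True)
    return ranked[:q]

# Example: two workflows (requirements 0.999 and 0.99) over three resources.
R = [[1, 1, 0],
     [0, 1, 1]]
P = [0.999, 0.99]
print(top_q_resources(R, P, q=2))  # -> [1, 0], i.e. C_2 and C_1 are kept
```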
This weight-based optimization mechanism provides a flexible trade-off between quality and performance. For very large-scale deployment topologies, adapting to the dynamics of the environment usually requires making decisions quickly to achieve business agility. In this scenario, weight-based optimization becomes important and useful: it quickly generates a suboptimal enhancement recommendation, rather than delivering the optimal solution too late because of computation time. Moreover, in our experimental evaluation, we found that this performance optimization not only reduced the computation complexity but also generated the same result as the original weak-point analysis methodology when the top 60% of the IT resources were selected according to their weight. The reason could be that the weight calculation properly indicates the importance of each resource.

E. Practical Considerations

In some cases of HA enhancement, substituting an IT resource with one that has inherently better availability characteristics may be more appropriate than using a cluster or a failover solution. For example, rather than applying a hot-standby solution to an instance of DB2 running on an x86 platform, it may be preferable to replace it with an instance of DB2 running on zOS on a mainframe.

Based on this observation, we propose an algorithm for selecting alternative IT resources. Here we abstract our availability weak-point analysis methodology into a function WeakPointAnalysis(). As shown in Algorithm 1, we first generate all possible resource lists and the relevant utility functions according to the various candidate resource types specified by the user. Next, we use WeakPointAnalysis() to calculate a solution for each of these lists. This enables us to choose the best solution among all candidates.

As we saw in Section III.D, the exhaustive iteration method has exponential computational complexity, in contrast to the polynomial computational complexity of our weak-point analysis methodology. On the other hand, the exhaustive iteration method finds the optimal solution, whereas our methodology calculates a near-optimal solution. There is therefore a trade-off between the two methods. The exhaustive iteration method is preferable when its computational cost is low: when the topology contains only a limited number of resources and their maximum cluster size is not too large, we should use the exhaustive iteration method to make sure we obtain the optimal solution; but when the topology contains many resources and/or the cluster size is large, our Lagrange multiplier-based method is more efficient. Based on this remark, we propose the algorithm COMB (see Algorithm 2).
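The decision rule behind COMB can be sketched as follows; the size budget, parameter names, and the two callbacks are illustrative assumptions, and this is not the paper's Algorithm 2 itself.

```python
def comb(resources, k_max, exhaustive_search, weak_point_analysis, budget=10**6):
    """Use the exhaustive iteration method only when the search space
    k_max**n is small enough to enumerate within the given budget;
    otherwise fall back to the Lagrange multiplier-based weak-point
    analysis, which returns a near-optimal solution."""
    n = len(resources)
    if k_max ** n <= budget:
        return exhaustive_search(resources, k_max)   # optimal solution
    return weak_point_analysis(resources)            # near-optimal solution
```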