
Fundamentals of Computer Science (《计算机基础》) course teaching resource (reference paper): An Organizational Evolutionary Algorithm for Numerical Optimization



An Organizational Evolutionary Algorithm for Numerical Optimization

Jing Liu, Member, IEEE, Weicai Zhong, Member, IEEE, and Licheng Jiao, Senior Member, IEEE

Abstract—Taking inspiration from the interacting process among organizations in human societies, this paper designs a kind of structured population and corresponding evolutionary operators to form a novel algorithm, the Organizational Evolutionary Algorithm (OEA), for solving both unconstrained and constrained optimization problems. In OEA, a population consists of organizations, and an organization consists of individuals. All evolutionary operators are designed to simulate the interaction among organizations. In experiments, 15 unconstrained functions, 13 constrained functions, and 4 engineering design problems are used to validate the performance of OEA, and thorough comparisons are made between OEA and existing approaches. The results show that OEA obtains good performance in both solution quality and computational cost. Moreover, for the constrained problems, the good performance is obtained by incorporating only two simple constraint-handling techniques into OEA. Furthermore, systematic analyses have been made on all parameters of OEA. The results show that OEA is quite robust and easy to use.

Index Terms—Evolutionary algorithms, organization, numerical optimization, constrained optimization problems.

NOTATION LIST
f(x): Objective function
ψ(x): Objective function with penalty term for constrained optimization problems
S: Search space
F: Feasible region
n: Dimension of the search space
x, y, z, r, q: Real-valued vectors in the search space
x_i, y_i, z_i, r_i, q_i: The ith components of the vectors x, y, z, r, q
$\underline{x}$: Vector of the lower bound of the search space
$\underline{x}_i$: The ith component of the vector $\underline{x}$
$\overline{x}$: Vector of the upper bound of the search space
$\overline{x}_i$: The ith component of the vector $\overline{x}$
g_j(x): Constraints
m: Number of constraints
P_t: Population in the tth generation

Manuscript received August 10, 2005. This work is supported by the National Natural Science Foundation of China under Grant 60502043. The authors are with the Institute of Intelligent Information Processing, Xidian University, Xi'an, 710071, China (neouma@163.com).

I. INTRODUCTION

Evolutionary algorithms (EAs) [1], based on an analogy to natural evolution, have recently gained increasing interest. They are suitable for solving complex or ill-defined problems and have been successfully applied to the fields of numerical optimization, constraint satisfaction problems, data mining, neural networks, and many other engineering problems [2]-[10]. Numerical optimization problems arise in almost every field of science, engineering, and business, and can usually be divided into two types, namely, unconstrained optimization problems (UCOPs) and constrained optimization problems (COPs). COPs are usually referred to as general nonlinear programming problems. This paper proposes a new EA for solving both UCOPs and COPs.

A. Proposed Approach

Traditionally, populations in EAs are simple non-ordered sets of individuals. The individuals that will generate offspring are usually selected from all individuals according to their fitness, so the global fitness distribution of a population must be determined. The main consequence of this design is that the gene flow inside the population is much higher than in a real-world situation, which often leads to premature genetic convergence. In fact, real natural selection only occurs in a local environment, and each individual can only interact with those around it. That is, in some phase, natural evolution is just a kind of local phenomenon. The information can be shared globally only after a process of diffusion. Therefore, several studies tackled this problem by developing structured populations, such as cellular genetic algorithms [11], multinational evolutionary algorithms [12], patchwork models [13], MAGA [7], MAEA-CSPs [8], and so on.

In economics, R. H. Coase explained the sizing and formation of organizations within the framework of transaction costs [14]. The basic idea is that an organization exists because it reduces the overhead transaction costs associated with exchanging goods and services. This concept was introduced into learning classifier systems based on genetic algorithms by Wilcox in 1995 [15], with the emphasis on an autonomous mechanism that uses transaction costs to form appropriately sized organizations within a classifier. Actually, in the real world, to achieve their purposes, organizations compete or cooperate with others so that they can gain more resources. As a result, the resources will be reasonably distributed among


all organizations little by little. This process plays an important role in human societies. We think such a process can be viewed as a kind of optimization, and it is an interesting new idea for designing EAs.

Taking inspiration from the aforementioned process, this paper proposes a new framework for evolutionary optimization, named the Organizational Evolutionary Algorithm (OEA). In OEA, organizations are composed of members, and a population is composed of organizations, so that a structured population results. On the basis of such a structured population, all evolutionary operations are performed on organizations. Therefore, three evolutionary operators are developed for organizations. These operators are composed of several crossover and mutation operations in common use so as to prove the effectiveness of the new framework. The experimental results on 15 UCOPs, 13 COPs, and 4 well-studied engineering design problems show that OEA obtains good results.

Differently from [15], OEA does not put emphasis on forming appropriately sized organizations, but on simulating the interaction among organizations. Therefore, in OEA, evolutionary operators are not directly performed on individuals, but on organizations instead. As a result, there is no global selection at all, so the global fitness distribution is not required. An organization interacts with others so that the information can be diffused. Obviously, such a kind of population is closer to the real evolutionary mechanism in nature than the traditional population.

In [10], we proposed an organizational coevolutionary algorithm for classification (OCEC). But the organizations in OCEC are different from the ones in OEA. OCEC is developed for solving classification problems in data mining. Although the organizations in both OCEC and OEA are composed of members, the members in OCEC stand for the examples in the training set, while the members in OEA stand for real-valued vectors in the search space. Moreover, the fitness in OCEC is designed to manifest the different importance of the attributes, while the fitness in OEA is the value of the objective function. Of course, their similarity is that both OCEC and OEA simulate the interaction among organizations.

B. Related Work

Some aspects of OEA are similar to existing approaches. The first is an analogy between OEA and similar EAs. The second is the constraint-handling techniques. In this subsection, we briefly discuss the similarities and differences between OEA and the existing approaches in these aspects.

Since the organizations in OEA consist of members, and the members are similar to the individuals used in traditional EAs, one might argue that the organizations are similar to the subpopulations in coarse-grained parallel EAs (CGP-EAs) or multi-island EAs [16]. CGP-EAs are a kind of distributed EA, and are a parallel implementation of EAs. In CGP-EAs, evolution occurs in multiple parallel subpopulations, each running a local EA and evolving independently, with occasional "migrations" of selected individuals among subpopulations. There are several differences between OEA and CGP-EAs. First, the number of subpopulations and the size of each subpopulation in CGP-EAs are fixed during the evolutionary process, whereas the number of organizations and the size of each organization in OEA change from generation to generation.

Second, the main interaction among subpopulations is copying better-performing or random individuals to neighboring subpopulations at regular intervals, whereas the interaction among organizations is realized by the evolutionary operators. That is, OEA can be viewed as using additional operators in the migration process. Third, each subpopulation runs a local EA, while no evolutionary operations occur within the organizations in the current form of OEA.

However, from another viewpoint, OEA can indeed be viewed as a kind of CGP-EA. Since CGP-EAs are just parallel techniques, they often present the same problems as traditional EAs. But OEA takes inspiration from simulating the interacting process among organizations in human societies, and the experimental results show that OEA obtains good performance for both UCOPs and COPs on a wide range of benchmark functions. Therefore, OEA can be viewed as a successful development of CGP-EAs.

Since EAs are unconstrained search techniques, incorporating constraints into the fitness function of an EA is still an open research area. There is a considerable amount of research regarding mechanisms that allow EAs to deal with constraints.

The most common constraint-handling method in EAs is to use penalties. In these methods, a penalty term is added to the objective function. The penalty can be static, which is only related to the degree of constraint violation, or dynamic, which is related to both the degree of constraint violation and the generation number. The weakness of penalty methods is that they often require several parameters. There is also some literature proposing self-adaptive penalty methods; for example, [17] used a two-stage penalty to measure the infeasibility, and no parameter was needed. Due to their simplicity and ease of implementation, penalty methods are the most common methods used in solving real-world problems. Therefore, we incorporate a simplified version of the static penalty method into OEA, which only uses one parameter.

The use of rules based on feasibility is another class of constraint-handling methods. Reference [18] used three simple rules to deal with the constraints, and incorporated these rules into the self-adaptation mechanism of an evolution strategy. Since this method has proved to be effective in solving a wide range of COPs, and no parameter is used, we also incorporate the three rules into OEA, and make a comparison between OEA and SMES [18] in Subsection IV.C.

Reference [19] introduced a stochastic ranking method in which the objective function values are used for ranking the solutions in the infeasible region of the search space. A probability parameter is used to determine the likelihood of two individuals in the infeasible space being compared with each other. This method has also proved to be effective in solving a wide range of COPs, so a comparison is made between OEA


and this method in Subsection IV.B.

Reference [20] used Pareto ranking to deal with constraints. Although this method can reduce the number of additional inputs required for constraint handling, the computational time involved in nondominated sorting was increased. Reference [20] used four well-studied engineering design problems to evaluate the performance of the proposed method, so a comparison is also made between OEA and SCA [20] in Subsection IV.D.

There are many other constraint-handling methods making use of different techniques, such as GENOCOP [21], GENOCOP II [22], GENOCOP III [23], the microgenetic algorithm [24], varying fitness functions [25], the homomorphous mappings method [26], the coevolutionary augmented Lagrangian method [27], the methods based on multiobjective algorithms [28], [29], and so on. More detailed descriptions of constraint handling in EAs were given in [30], [31].

II. ORGANIZATIONAL EVOLUTIONARY ALGORITHM

A. Problem Definition

A UCOP can be formulated as solving the objective function

minimize $f(x)$, $x = (x_1, x_2, \ldots, x_n) \in S$  (1)

where $S \subseteq R^n$ defines the search space, which is an n-dimensional space bounded by the parametric constraints $\underline{x}_i \le x_i \le \overline{x}_i$, i = 1, 2, ..., n. Thus, $S = [\underline{x}, \overline{x}]$, where $\underline{x} = (\underline{x}_1, \underline{x}_2, \ldots, \underline{x}_n)$ and $\overline{x} = (\overline{x}_1, \overline{x}_2, \ldots, \overline{x}_n)$.

A COP can be formulated as solving the objective function

minimize $f(x)$, $x = (x_1, x_2, \ldots, x_n) \in S \cap F$  (2)

where S is the same as in (1), and the feasible region F is

$F = \{x \in R^n \mid g_j(x) \le 0,\ j = 1, 2, \ldots, m\}$  (3)

where $g_j(x)$, j = 1, 2, ..., m, are the constraints.

B. Constraints Handling

The main purpose of this paper is to propose OEA and illustrate the effectiveness of OEA's search ability, so two kinds of simple constraint-handling techniques are used.

The first technique uses a simplified version of the static penalty method [32]; that is, COPs are transformed into UCOPs by adding a very simple penalty term, as in (4),

$\psi(x) = f(x) + A \sum_{j=1}^{m} \max\{0, g_j(x)\}$  (4)

where $A \in R$ is a penalty coefficient and needs to be tuned for different problems. In the following text, this technique is labeled as CHp.

The second technique has been widely used in EAs for COPs [18], [33], [34], and has also been extended to multiobjective optimization problems [35], [36], [37]. This technique uses a comparison mechanism based on the following rules:
1) Between two feasible solutions, the one with the larger fitness value wins;
2) If one solution is feasible and the other is infeasible, the feasible solution wins;
3) If both solutions are infeasible, the one with the smaller sum of constraint violations is preferred.
This technique is not problem-dependent, and no parameter needs to be tuned. In the following text, this technique is labeled as CHc.
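To make the two techniques concrete, here is a minimal Python sketch (the helper names total_violation and penalized_objective are ours, not the paper's); the toy usage at the end assumes a two-dimensional objective with a single constraint:

def total_violation(x, constraints):
    """Sum of constraint violations used by CHc (rule 3): each constraint
    g_j(x) <= 0 contributes max{0, g_j(x)}."""
    return sum(max(0.0, g(x)) for g in constraints)

def penalized_objective(x, f, constraints, A):
    """CHp, eq. (4): fold the violations into the objective with a single
    penalty coefficient A, turning the COP into a UCOP."""
    return f(x) + A * total_violation(x, constraints)

# Toy usage: minimize f(x) = x0^2 + x1^2 subject to g(x) = 1 - x0 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = [lambda x: 1.0 - x[0]]
print(penalized_objective([0.5, 0.0], f, g, A=100.0))   # 0.25 + 100*0.5 = 50.25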
C. Organization Definition

In OEA, an organization consists of several members, and a member is a solution of the objective function. Since a UCOP can be viewed as a COP that already satisfies the constraints, we define a Member for both problems as follows.

Definition 1: A Member is a real-valued vector that belongs to the search space, namely, Member ∈ S. There are two measures, $Member^F$ and $Member^V$, to evaluate the member's quality. $Member^F$ denotes the fitness and

$Member^F = \begin{cases} -f(Member) & \text{for UCOPs} \\ -f(Member) & \text{for COPs with CHc} \\ -\psi(Member) & \text{for COPs with CHp} \end{cases}$  (5)

$Member^V$ denotes the sum of constraint violations and

$Member^V = \begin{cases} 0 & \text{for UCOPs} \\ 0 & \text{for COPs with CHp} \\ \sum_{j=1}^{m} \max\{0, g_j(Member)\} & \text{for COPs with CHc} \end{cases}$  (6)

Thus, the purpose of OEA is to maximize $Member^F$. When comparing the qualities of two members, both $Member^F$ and $Member^V$ should be considered, so members are compared according to Definition 2.

Definition 2: If two members, $Member_1$ and $Member_2$, satisfy (7), (8), or (9), then $Member_1$ is better than $Member_2$, which is labeled as $Member_1 \succ Member_2$:

$(Member_1^V = 0)$ and $(Member_2^V = 0)$ and $(Member_1^F > Member_2^F)$  (7)

$(Member_1^V = 0)$ and $(Member_2^V > 0)$  (8)

$(Member_1^V > 0)$ and $(Member_2^V > 0)$ and $(Member_1^V < Member_2^V)$  (9)

Definition 2 realizes the three rules of CHc. By Definitions 1 and 2, OEA can use a uniform framework to deal with both UCOPs and COPs under the two different constraint-handling techniques. On the basis of Definitions 1 and 2, an Organization is defined as follows.

Definition 3: An Organization, org, is a set of members, and the best member is called the Leader, labeled as $Leader_{org}$; that is, an organization and its leader satisfy (10) and (11), where |org| indicates the cardinality of org.

$org = \{Member_1, Member_2, \ldots, Member_{|org|}\}$ and $org \ne \emptyset$  (10)

$\forall\, Member \in org,\ \neg(Leader_{org} \prec Member)$  (11)

When two organizations are compared, only the two leaders are considered. Namely, $org_1$ is better than $org_2$ if $Leader_{org_1} \succ Leader_{org_2}$, which is labeled as $org_1 \succ org_2$.
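As an illustrative sketch only (the names are ours), Definitions 1-3 could be rendered in Python as follows, with a member represented by its pair ($Member^F$, $Member^V$):

def evaluate_member(x, f, constraints=(), mode="UCOP", A=1.0):
    """Return (Member^F, Member^V) of eqs. (5)-(6) for a solution vector x."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    if mode == "CHp":
        return -(f(x) + A * violation), 0.0   # eq. (4) folded into the fitness
    if mode == "CHc":
        return -f(x), violation               # violation kept as a separate measure
    return -f(x), 0.0                         # UCOP

def is_better(m1, m2):
    """Definition 2: compare two (F, V) pairs with rules (7)-(9)."""
    F1, V1 = m1
    F2, V2 = m2
    if V1 == 0 and V2 == 0:
        return F1 > F2          # (7) both feasible: larger fitness wins
    if V1 == 0 and V2 > 0:
        return True             # (8) feasible beats infeasible
    if V1 > 0 and V2 > 0:
        return V1 < V2          # (9) both infeasible: smaller violation wins
    return False

def leader(org_measures):
    """Definition 3: the leader is a member no other member is better than."""
    best = org_measures[0]
    for m in org_measures[1:]:
        if is_better(m, best):
            best = m
    return best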


D. Evolutionary Operators for Organizations

In the real-world situation, there is severe competition among organizations, and the strong ones always annex the weak ones. In OEA, the strength of an organization is manifested in the fitness of its leader, so the purpose of each organization is to increase the leader's fitness as much as possible. To achieve this purpose, each organization must interact with others. On the basis of such interactions, three evolutionary operators are designed, that is, the splitting operator, the annexing operator, and the cooperating operator.

Splitting operator: In human societies, an organization whose size is too large is usually split into several small organizations so that they can be easily managed. In OEA, if most of the members belong to the same organization, the evolutionary operations would be disabled. So when the size of an organization exceeds a limit, this organization must be split. Let MaxOS be the parameter controlling the maximum size of an organization, with MaxOS > 1. If a parent organization, org_p, satisfies (12), then org_p will be split into two child organizations, org_c1 and org_c2:

$(|org_p| > Max_{OS})$ or $\left((|org_p| \le Max_{OS})\ \text{and}\ \left(U(0,1) < \frac{|org_p|}{N_o}\right)\right)$  (12)

where U(·,·) is a uniform random number generator and N_o is the number of organizations in the whole population. To keep the randomicity and a small difference between the sizes of org_c1 and org_c2, about half of the members are first randomly selected from org_p to form org_c1, and the remainder forms org_c2. For more clarity, Fig. 1 illustrates the operations in this operator. As can be seen, without loss of generality, let the ith member be the best and M be a random integer close to |org_p|/2. Therefore, the ith member is the Leader of org_p; M members are randomly selected from org_p to form org_c1, and the other |org_p|−M members form org_c2. After org_c1 and org_c2 are generated, the best members in org_c1 and org_c2 are selected to be their leaders, that is, the jth member in org_c1 and the kth member in org_c2 are selected.

Annexing operator: This operator realizes the competition between two organizations. The better organization is the winner, and the other is the loser. The winner will annex the loser to form a larger organization. The members of the winner can directly go into the new organization, while those of the loser must die. But the members of the loser may still carry useful information, so they interact with the leader of the winner to generate new members.

Two parent organizations, org_p1 = {x_1, x_2, …, x_M} and org_p2 = {y_1, y_2, …, y_N}, are randomly selected from the current generation. Without loss of generality, let org_p1 ≻ org_p2. Thus, org_p1 will annex org_p2 to generate a child organization, org_c = {z_1, z_2, …, z_M, z_{M+1}, z_{M+2}, …, z_{M+N}}, where z_i = x_i, i = 1, 2, …, M. Let AS ∈ (0, 1) be a predefined parameter. Then, if U_j(0, 1) < AS, the new member z_{j+M}, j = 1, 2, …, N, is generated by Annexing Strategy 1 (AnStr1); otherwise it is generated by Annexing Strategy 2 (AnStr2). The subscript j in U_j(0, 1) indicates that the random number is generated anew for each value of j. AnStr1 and AnStr2 are given by (13) and (14), respectively. Given that the leader of org_p1 is (x_1, x_2, …, x_n) and the new members are r_j = (r_{j,1}, r_{j,2}, …, r_{j,n}), j = 1, 2, …, N:

In AnStr1, r_j, j = 1, 2, …, N, are determined by (13),

$r_{j,k} = \begin{cases} \underline{x}_k, & \beta_k < \underline{x}_k \\ \overline{x}_k, & \beta_k > \overline{x}_k \\ \beta_k, & \text{otherwise} \end{cases}, \qquad \beta_k = x_k + \alpha_k \times (x_k - y_{j,k}), \quad k = 1, 2, \ldots, n$  (13)

where α_k is a uniformly distributed random number over [0, 1] and is generated anew for each value of k.

In AnStr2, r_j, j = 1, 2, …, N, are determined by (14),

$r_{j,k} = \begin{cases} \underline{x}_k + \beta_k \times (\overline{x}_k - \underline{x}_k), & \alpha < \frac{1}{n} \\ x_k, & \text{otherwise} \end{cases}, \quad k = 1, 2, \ldots, n$  (14)

where α and β_k are uniformly distributed random numbers over [0, 1], and β_k is generated anew for each value of k.

After r_j, j = 1, 2, …, N, are generated, z_{j+M}, j = 1, 2, …, N, are determined by (15),

$z_{j+M} = \begin{cases} r_j, & (r_j \succ y_j)\ \text{or}\ \left(U(0, 1) < \exp\left(r_j^F - y_j^F\right)\right) \\ y_j, & \text{otherwise} \end{cases}$  (15)

As can be seen, when r_j is better than y_j, r_j gets into org_c so as to improve the quality of org_c. When r_j is worse than y_j, in order to maintain the diversity, r_j gets into org_c only with a certain probability. For more clarity, Fig. 2 illustrates the operations in this operator. org_c has (M+N) members, where Member_i, i = 1, 2, …, M, come from org_p1 and Member_i, i = M+1, M+2, …, M+N, are generated by the leader of org_p1 and the members of org_p2 together. After org_c is generated, its leader is also selected. In this case, the leader is the ith member (the leader of org_p1) or the best member among Member_{M+1}, Member_{M+2}, …, and Member_{M+N}.
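A rough Python sketch of the splitting and annexing operators follows. It is illustrative only: the function names are ours, an organization is a plain list of member vectors, fitness and is_better stand for the measures of Definitions 1 and 2 applied to vectors, and the clamping in an_str1 and the per-component test in an_str2 reflect our reading of (13) and (14):

import math, random

def split_organization(org):
    """Splitting operator: cut an organization (of at least two members)
    into two child organizations of roughly equal size."""
    shuffled = random.sample(org, len(org))      # shuffled copy of the members
    m = len(org) // 2
    return shuffled[:m], shuffled[m:]

def an_str1(x_leader, y, lower, upper):
    """AnStr1, eq. (13): heuristic crossover pulled toward the winner's leader,
    with each component clamped back into [lower_k, upper_k]."""
    r = []
    for k in range(len(x_leader)):
        beta = x_leader[k] + random.random() * (x_leader[k] - y[k])
        r.append(min(max(beta, lower[k]), upper[k]))
    return r

def an_str2(x_leader, lower, upper):
    """AnStr2, eq. (14): reset a component uniformly with probability 1/n,
    otherwise copy it from the leader (the test is applied per component here;
    eq. (14) can also be read with a single alpha per new member)."""
    n = len(x_leader)
    return [lower[k] + random.random() * (upper[k] - lower[k])
            if random.random() < 1.0 / n else x_leader[k] for k in range(n)]

def annex(org_winner, org_loser, lower, upper, fitness, is_better, AS=0.8):
    """Annexing operator: the winner keeps its members; every loser member y is
    confronted with a new member generated by AnStr1 (prob. AS) or AnStr2, and
    the acceptance rule of eq. (15) decides which of the two enters the child."""
    x_leader = org_winner[0]
    for m in org_winner[1:]:
        if is_better(m, x_leader):
            x_leader = m
    child = list(org_winner)
    for y in org_loser:
        r = an_str1(x_leader, y, lower, upper) if random.random() < AS \
            else an_str2(x_leader, lower, upper)
        if is_better(r, y) or random.random() < math.exp(fitness(r) - fitness(y)):
            child.append(r)          # eq. (15): accept the new member
        else:
            child.append(y)          # keep the old member for diversity
    return child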


Cooperating operator: This operator realizes the cooperation between two organizations. The leaders of the organizations interact with each other to generate two new members. Then, in each organization, a member that is worse than the new member is replaced.

Two parent organizations, org_p1 = {x_1, x_2, …, x_M} and org_p2 = {y_1, y_2, …, y_N}, are randomly selected from the current generation. They will cooperate with each other to generate two child organizations, org_c1 and org_c2. Let CS ∈ (0, 1) be a predefined parameter. Then, if U(0, 1) < CS, org_c1 and org_c2 are generated by Cooperating Strategy 1 (CoStr1); otherwise they are generated by Cooperating Strategy 2 (CoStr2), where CoStr1 and CoStr2 are given by (16) and (17), respectively. Given that the leader of org_p1 is (x_1, x_2, …, x_n), the leader of org_p2 is (y_1, y_2, …, y_n), and the two new members are q = (q_1, q_2, …, q_n) and r = (r_1, r_2, …, r_n):

In CoStr1, q and r are determined by (16),

$\begin{cases} q_k = \alpha_k \times x_k + (1 - \alpha_k) \times y_k \\ r_k = (1 - \alpha_k) \times x_k + \alpha_k \times y_k \end{cases}, \quad k = 1, 2, \ldots, n$  (16)

where α_k is the same as in (13).

In CoStr2, q and r are determined by (17),

$\begin{aligned} q &= (x_1, x_2, \ldots, x_{i_1-1}, y_{i_1}, y_{i_1+1}, \ldots, y_{i_2}, x_{i_2+1}, \ldots, x_n) \\ r &= (y_1, y_2, \ldots, y_{i_1-1}, x_{i_1}, x_{i_1+1}, \ldots, x_{i_2}, y_{i_2+1}, \ldots, y_n) \end{aligned}$  (17)

where 1 < i_1 < n, 1 < i_2 < n, and i_1 < i_2.

After q and r are generated, org_c1 and org_c2 are determined by (18) and (19), respectively,

$org_{c1} = \begin{cases} \{x_1, \ldots, x_{i-1}, q, x_{i+1}, \ldots, x_M\}, & \exists\, x_i \in org_{p1},\ x_i \prec q \\ org_{p1}, & \text{otherwise} \end{cases}$  (18)

$org_{c2} = \begin{cases} \{y_1, \ldots, y_{j-1}, r, y_{j+1}, \ldots, y_N\}, & \exists\, y_j \in org_{p2},\ y_j \prec r \\ org_{p2}, & \text{otherwise} \end{cases}$  (19)

For more clarity, Fig. 3 illustrates the operations in this operator. The numbers of members of org_c1 and org_c2 are the same as those of org_p1 and org_p2, respectively. The kth member in org_c1 and the lth member in org_c2 are determined by the leaders of org_p1 and org_p2 together, and the others are the same as those of org_p1 and org_p2. After org_c1 and org_c2 are generated, their leaders are also selected. In this case, the leader of org_c1 is Member_k or the leader of org_p1, and the leader of org_c2 is Member_l or the leader of org_p2.

In fact, AnStr1 is the heuristic crossover [38], AnStr2 is a kind of mutation, CoStr1 is the arithmetical crossover, and CoStr2 is the discrete crossover [31]. The three operators play different roles in OEA. The splitting operator restricts the size of an organization and makes some organizations go directly into the next generation. This manner is in favor of maintaining the diversity of the population. Both AnStr1 and AnStr2 make use of the information of a leader and have the ability of local search. On the other hand, since the leader is the best member in an organization, the annexing operator can search around each leader. This is equivalent to performing a hill-climbing operator. As a result, the annexing operator can give more chances to the promising members. The cooperating operator improves the qualities of the members by increasing the interaction between two leaders, so it is equivalent to performing a global search.
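A matching sketch of the cooperating operator, with the same conventions and caveats as the sketch above (names are ours; is_better is the comparison of Definition 2 applied to member vectors):

import random

def co_str1(x_leader, y_leader):
    """CoStr1, eq. (16): arithmetical crossover of the two leaders."""
    q, r = [], []
    for xk, yk in zip(x_leader, y_leader):
        a = random.random()
        q.append(a * xk + (1 - a) * yk)
        r.append((1 - a) * xk + a * yk)
    return q, r

def co_str2(x_leader, y_leader):
    """CoStr2, eq. (17): two-point discrete crossover of the two leaders."""
    n = len(x_leader)
    i1, i2 = sorted(random.sample(range(1, n), 2))
    q = x_leader[:i1] + y_leader[i1:i2 + 1] + x_leader[i2 + 1:]
    r = y_leader[:i1] + x_leader[i1:i2 + 1] + y_leader[i2 + 1:]
    return q, r

def pick_leader(org, is_better):
    """Definition 3: a member that no other member of org is better than."""
    best = org[0]
    for m in org[1:]:
        if is_better(m, best):
            best = m
    return best

def replace_worse(org, new_member, is_better):
    """Eqs. (18)-(19): swap the new member in for one member it beats, if any."""
    child = list(org)
    for i, member in enumerate(child):
        if is_better(new_member, member):
            child[i] = new_member
            break
    return child

def cooperate(org_p1, org_p2, is_better, CS=0.6):
    """Cooperating operator: the two leaders produce q and r, which then replace
    one worse member in each parent organization."""
    leader1, leader2 = pick_leader(org_p1, is_better), pick_leader(org_p2, is_better)
    q, r = co_str1(leader1, leader2) if random.random() < CS else co_str2(leader1, leader2)
    return replace_worse(org_p1, q, is_better), replace_worse(org_p2, r, is_better)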
E. Implementation of OEA

In OEA, the population consists of organizations, and the above three operators must be performed on organizations in a reasonable way so as to obtain high-quality members. In OEA, the size of each organization in the population is first checked in each generation. If the size of an organization satisfies (12), then the splitting operator is performed on this organization. Next, two parent organizations are randomly selected from the population, and the annexing operator or the cooperating operator is performed on them with the same probability. This process continues until the number of organizations in the population is less than 2. The population evolves generation by generation. Finally, the best member in the whole population is output as the result. The details are shown in ALGORITHM 1.

ALGORITHM 1  Organizational Evolutionary Algorithm
01: begin
02:   Initialize population P0 with No organizations, each organization having one member;
03:   t ← 0;
04:   while (the termination criteria are not reached) do
05:   begin
06:     For each organization in Pt, if it satisfies (12), perform the splitting operator on it, delete it from Pt, and add the child organizations into Pt+1;
07:     while (the number of organizations in Pt is greater than 1) do
08:     begin
09:       Randomly select two parent organizations, orgp1 and orgp2, from Pt;
10:       Perform the annexing operator or the cooperating operator on orgp1 and orgp2 with the same probability;
11:       Delete orgp1 and orgp2 from Pt, and add the child organizations into Pt+1;
12:     end;
13:     Move the organization remaining in Pt to Pt+1;
14:     t ← t+1;
15:   end;
16:   Output the best member in Pt;
17: end.

In the initialization, each organization has only one member, and the population has No organizations in total. During the evolutionary process, under the effects of the annexing operator and the splitting operator, the organizations with only one member grow into ones with multiple members. Although the number of organizations varies from generation to generation, the number of members in the whole population remains constant.
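For illustration, the control flow of ALGORITHM 1 could be written as the following Python sketch. It is a sketch only: new_member, splittable, split, annex, cooperate, and pick_best are assumed to be supplied by the caller (for example, wrappers around the operator sketches above with their extra arguments bound), and an organization is simply a list of members:

import random

def oea(new_member, splittable, split, annex, cooperate, pick_best, No, generations):
    """Minimal rendering of ALGORITHM 1: maintain a population of organizations,
    split the qualifying ones, and let random pairs annex or cooperate."""
    population = [[new_member()] for _ in range(No)]    # No organizations of one member each
    for _ in range(generations):
        next_population, remaining = [], []
        for org in population:                          # line 06: splitting operator
            if splittable(org):
                next_population.extend(split(org))
            else:
                remaining.append(org)
        random.shuffle(remaining)
        while len(remaining) > 1:                       # lines 07-12: annex or cooperate
            org_p1, org_p2 = remaining.pop(), remaining.pop()
            if random.random() < 0.5:
                next_population.append(annex(org_p1, org_p2))
            else:
                next_population.extend(cooperate(org_p1, org_p2))
        next_population.extend(remaining)               # line 13: move the leftover organization
        population = next_population
    return pick_best(member for org in population for member in org)   # line 16

For a UCOP, for instance, new_member would sample uniformly from $[\underline{x}, \overline{x}]$, splittable would implement condition (12) with MaxOS and No, and pick_best would return the member with the largest $Member^F$.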


III. EXPERIMENTAL STUDIES ON UNCONSTRAINED OPTIMIZATION PROBLEMS

In this section, 15 benchmark functions (F01-F15) are used to test the performance of OEA in solving UCOPs. F01-F13 are f1-f13 in [39], and F14 and F15 are f7 and f9 in [40]. The problem dimension is set to 30 for F01-F13 and 100 for F14 and F15. In this manner, these functions have so many local minima that they are challenging enough for performance evaluation.

Because FEP [39] and OGA/Q [40] obtained good performances in solving UCOPs, comparisons are made among FEP, OGA/Q, and OEA. Since the termination criteria used in [39] and [40] differ significantly, to establish a fair comparison basis, we implemented FEP and OGA/Q and ran the three algorithms in the same environment. The parameters of FEP and OGA/Q are set according to [39] and [40], respectively, and those of OEA are set as follows: No=150, MaxOS=20, AS=0.8, and CS=0.6. Since the number of function evaluations (NFE) of OEA and OGA/Q differs in each generation, FEP, OGA/Q, and OEA are terminated when the NFE is larger than 300 000. The experimental results of FEP, OGA/Q, and OEA are obtained over 50 independent trials. All three algorithms are implemented in Delphi 5 on a 2.4 GHz Pentium PC with 1 GB RAM, and the operating system is Windows XP.

A. Experimental Results of OEA

Table I summarizes the experimental results of OEA, which include the best, the mean, the standard deviation, and the worst function value (FV) found. Both the mean running time and the mean NFE are given. Table I also shows the global optimum for each function. What should be noted is that the global optimum for F14 given in [40] is -99.2784, but the best FV found by OEA is -99.5643815.

As can be seen, the best, the mean, and the worst FVs of all functions other than F05 are equal or very close to the global optima, and the standard deviations are relatively small. The mean running times of the functions other than F12 and F14 are between 0.5 and 2 seconds. Especially, those of F01, F02, and F04-F07 are shorter than one second. Although that of F14 is 6.570 s, this is mainly due to the higher computational cost required by the function itself.

B. Comparison between OEA and FEP

The comparison is shown in Table II. As can be seen, the mean FVs of OEA are better than those of FEP on all functions other than F06. For F06, both OEA and FEP find the exact global optimum. On the other hand, since the NFE of OEA and FEP are similar and the mean running times of OEA are significantly shorter than those of FEP on all functions, FEP definitely needs more computational cost than OEA. Although FEP is simple and has no complicated computation, FEP needs to generate normally and Cauchy distributed random numbers, which costs considerable computation time.

C. Comparison between OEA and OGA/Q

The comparison is shown in Table III. Because OGA/Q used orthogonal design to generate the initial population, when the global optima are at the middle point of the search space, such as for F01-F04, F06, F07, and F09-F11, the global optima always exist in the initial population of OGA/Q. Therefore, the comparison between OEA and OGA/Q is only made on F05, F08, and F12-F15.

As can be seen, the mean FVs of OEA are better than those of OGA/Q on all the 6 functions. Especially for F14, the mean FV of OEA is -99.5024042, while that of OGA/Q is only -93.4804189, which is even much worse than the worst FV (-99.4393027) of OEA. On the other hand, since the NFE of OEA and OGA/Q are similar, and the mean running times of OEA are shorter than those of OGA/Q on all the 6 functions, OGA/Q needs more computational cost than OEA. OGA/Q applies orthogonal design to select offspring each time the orthogonal crossover is performed, which increases the computational cost.
IV. EXPERIMENTAL STUDIES ON CONSTRAINED OPTIMIZATION PROBLEMS

In this section, 13 benchmark functions (G01-G13) and 4 well-studied engineering design problems (Welded Beam Design, Spring Design, Speed Reducer Design, and Three-Bar Truss Design) are used to validate the performance of OEA in solving COPs. These functions are described in [19] and [20]. The equality constraints have been converted into inequality constraints, |h(x)|-δ≤0, using the degree of violation δ=10^-4, the same as that of [19]. Since RY [19] is a recently proposed constraint handling technique that obtained a good performance, we first implemented RY and made a thorough comparison between RY and OEA on both G01-G13 and the 4 engineering design problems. Then, comparisons were made between SMES [18] and OEA on G01-G13 and between SCA [20] and OEA on the 4 engineering design problems. The parameters of RY are set according to [19], and those of OEA are set as follows: No=1500; MaxOS, AS, and CS are the same as those in Section III. Both RY and OEA are terminated when the NFE exceeds 240 000 for G01-G13 and 24 000 for the 4 engineering design problems. The experimental results of RY and OEA are obtained over 50 independent trials. The running environment is the same as that of Section III.

A. Experimental Results of OEA

G01-G13 are solved by OEA with two kinds of constraint handling techniques. The penalty coefficient A in (4) adopted by each function is shown in (20), and Table IV summarizes the experimental results of OEA, which include the best, the median, the mean, the standard deviation, and the worst FV found. Both the mean running time and the mean NFE are given. Both OEA with CHp and CHc provide 100% feasible solutions for all functions.

A_G01 = 0.5,   A_G02 = 100,    A_G03 = 10^5,   A_G04 = 10^4,
A_G05 = 10,    A_G06 = 5000,   A_G07 = 1000,   A_G08 = 1000,        (20)
A_G09 = 500,   A_G10 = 5×10^6, A_G11 = 10,     A_G12 = 100,
A_G13 = 0.05
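To make the penalty-based handling CHp concrete, the sketch below shows one common way such a penalized fitness can be computed. The exact form of (4) is given earlier in the paper, so this is only a hedged illustration of the idea with hypothetical constraint functions, not the authors' implementation: inequality constraints g(x) ≤ 0 contribute max(0, g(x)) to the violation, each equality constraint is first relaxed to |h(x)| - δ ≤ 0 with δ = 10^-4 as described above, and the total violation is weighted by the per-problem coefficient A from (20).

    def penalized_fv(f, x, ineq=(), eq=(), A=100.0, delta=1e-4):
        # f: objective to minimize; ineq: callables g with g(x) <= 0 when feasible;
        # eq: callables h with h(x) = 0 when feasible (relaxed through delta).
        # A is the penalty coefficient, chosen per problem as in (20).
        violation = sum(max(0.0, g(x)) for g in ineq)
        violation += sum(max(0.0, abs(h(x)) - delta) for h in eq)
        return f(x) + A * violation

    # Hypothetical example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1 and x0 = x1.
    fv = penalized_fv(lambda x: x[0]**2 + x[1]**2,
                      [0.4, 0.5],
                      ineq=[lambda x: 1.0 - x[0] - x[1]],  # 1 - x0 - x1 <= 0
                      eq=[lambda x: x[0] - x[1]])          # x0 - x1 = 0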

As described in Table IV, both OEA with CHp and CHc have consistently found the global optima in all trials for G03, G04, G06, G08, G11, and G12. For G01, G05, G10, and G13, OEA with CHp outperforms OEA with CHc in terms of all four criteria. For G02, G09, and G07, OEA with CHc outperforms OEA with CHp in terms of some criteria. In terms of computational cost, the mean running times of all functions other than G02 and G12 are shorter than 0.5 second.

The 4 engineering design problems are solved by OEA with CHc only. Table V summarizes the experimental results. OEA provides 100% feasible solutions for all functions. As described in Table V, the standard deviations of the 4 problems are relatively small, so the performance of OEA is very stable in solving these problems. Moreover, the mean running times of all 4 functions are shorter than 50 milliseconds, which illustrates that the computational cost of OEA is very low.

On the whole, OEA with CHp outperforms OEA with CHc. However, the penalty coefficient in OEA with CHp needs to be tuned for different problems, which is inconvenient in real applications. In contrast, although the performance of OEA with CHc is not as good as that of OEA with CHp, the solution quality of OEA with CHc is still competitive with that of OEA with CHp. Moreover, no parameter needs to be tuned in OEA with CHc.

B. Comparison between OEA and RY

The comparison on G01-G13 is shown in Table VI. As can be seen, for G03, G04, G08, G11, and G12, both OEA and RY have consistently found the global optima in all trials. For G02, G06, and G09, both OEA with CHp and CHc outperform RY in terms of all four criteria. For the other functions, OEA outperforms RY in terms of some criteria while being outperformed by RY in terms of others. For G05, OEA with CHp and RY find the same best solution, which is even better than the global optimum. This is a consequence of using inequalities to approximate equalities, although a very small δ is used. In terms of computational cost, the mean running times of OEA are significantly shorter than those of RY on all functions, so RY clearly needs much more computational cost than OEA.

The comparison on the 4 engineering design problems is shown in Table VII. As can be seen, except that the best FV of RY for Welded Beam Design is better than that of OEA, OEA outperforms RY on the other functions in terms of all four criteria. Moreover, the mean running times of OEA are still much shorter than those of RY.

To summarize, both OEA with CHp and CHc are competitive with RY, which illustrates that OEA has an effective search ability, and that OEA incorporating simple constraint handling techniques can obtain good performance in solving COPs.

C. Comparison between OEA and SMES on G01-G13

The comparison is shown in Table VIII, where the results of SMES are obtained from [18]. As can be seen, under the same NFE, the performance of OEA is competitive with that of SMES.

D. Comparison between OEA and SCA on the 4 Engineering Design Problems

The comparison is shown in Table IX, where the results of SCA are obtained from [20]. The NFE and the solution vector for the best FV are also compared. As can be seen, OEA significantly outperforms SCA for the 4 problems in terms of all four criteria.

V. PARAMETER ANALYSES OF OEA

There are 4 parameters in OEA, namely, No, AS, CS, and MaxOS. To study the sensitivity of OEA to these parameters, a series of empirical experiments are carried out on OEA with different values of No, AS, CS, and MaxOS.

A. Effects of No on the Performance of OEA

In general, the fitness landscape of COPs is more complicated than that of UCOPs, so that algorithms have a high probability of being trapped in local optima. Therefore, many researchers focus on developing new methods to handle the constraints.
However, the results in Tables IV-IX indicate that OEA with a simple penalty term can obtain very good results. We think this benefits mainly from the large population, that is, No=1500, and from the search mechanism of OEA. A common opinion among researchers is that a large population reduces the chance of the algorithm being trapped in local optima, but may also reduce the convergence rate. Therefore, to study the effects of No on the performance of OEA with CHp, No is increased from 150 to 1500 in steps of 450. Here, MaxOS=20, AS=0.8, and CS=0.6. Fig. 4 shows the evolutionary process of the mean best solutions over 50 trials for the 4 more difficult functions, G01, G05, G06, and G10.

As can be seen, for the 4 functions, 'No=150' displays a faster convergence rate in the beginning, but it is quickly overtaken by 'No=600', 'No=1050', and 'No=1500'. As a result, 'No=150' is obviously worse than the other three over the whole evolutionary process. In order to compare 'No=600', 'No=1050', and 'No=1500' further, we magnify them in the top-right part of each graph.

On the whole, the evolutionary process of a small population is much more easily trapped in local optima than that of a large population, and OEA with a large population can still converge fast. We think this benefits mainly from the search mechanism of OEA. OEA searches around each leader by the annexing operator, which is equivalent to performing a hill-climbing operator on the individual. As a result, once a promising individual appears in the population, it can quickly climb to the crest with a high probability.

B. Effects of AS and CS on the Performance of OEA

The benchmark functions, F01-F15 and G01-G13, can be divided into four groups, that is, unconstrained unimodal functions, unconstrained multimodal functions, functions with equality constraints, and functions with inequality constraints. Two functions are therefore chosen from each group for use in this subsection, and AS and CS are analyzed using the criterion of the Average Success Ratio.

Definition 4: If a trial satisfies (21), then the trial is called a success; otherwise, a failure,

|F_best - F*| < ε·|F*|,  if F* ≠ 0
|F_best| < ε,            if F* = 0                         (21)

where F_best denotes the best solution found by the trial, and F*

denotes the global optimum. Suppose M_S out of M trials succeed; then the Average Success Ratio is given as R_as = M_S / M.

The smaller ε is, the better the solution is. Here, ε is set to 10^-3 for G13 and 10^-5 for the 7 other functions. No is set to 150 for F02, F05, F08, and F10, and to 1500 for G04, G08, G11, and G13. MaxOS is set to 20 for all 8 functions. For G04, G08, G11, and G13, OEA with CHp is used. The experiments are carried out as follows: AS and CS are increased from 0 to 1 in steps of 0.05, giving 21×21=441 groups of parameters. 50 trials are carried out for each selected function at each group of parameters, and the results are shown in Fig. 5.

As can be seen, for different functions, the R_as values change with AS and CS in different manners. When AS is smaller than 0.1, the R_as values of the 8 functions are small. When AS is larger than 0.9, the R_as values of F05, F08, and F10 are also small. On the other hand, even in the case of ε=10^-5, the R_as values of F02, F08, F10, G04, and G08 still reach 1, and the R_as values of F05, G11, and G13 also reach high values at many groups of parameters.

On the whole, Fig. 5 shows that AS has a larger effect on the performance of OEA than CS. The performance of OEA is not very sensitive to AS and CS, and OEA can perform well in a wide range of values for AS and CS. Since AS and CS control the probability of using AnStr1 and AnStr2 in the annexing operator and CoStr1 and CoStr2 in the cooperating operator, the results also show that combining the two strategies in the two operators can make OEA perform better. Therefore, it is better to select AS and CS from 0.2-0.9.

C. Effects of MaxOS on the Performance of OEA

The functions, the criterion, and the parameter settings of ε and No used in this subsection are the same as those of the above subsection. AS is set to 0.8, and CS is set to 0.6. For G04, G08, G11, and G13, OEA with CHp is also used. The experiments are carried out as follows: MaxOS is increased from 5 to 150 in steps of 5 for the unconstrained functions and from 50 to 1500 in steps of 50 for the constrained functions. 50 trials are carried out for each function at each group of parameters.

The results show that the R_as values of F08, F10, G04, and G08 reach 1 at each sampled value of MaxOS. For the 4 other functions, MaxOS has a larger effect on the performance of OEA, and the results are shown in Fig. 6. As can be seen, for F02 and G11, the R_as values are larger than 0.8 at each sampled value of MaxOS, and MaxOS does not affect them noticeably. For F05, the R_as value is larger than 0.6 only when MaxOS is in the range 10-35. For G13, the R_as value is larger than 0.6 only when MaxOS is smaller than 100. In order to investigate the effects of MaxOS on G11 and G13 further, MaxOS is increased from 5 to 100 in steps of 5, and the results are shown in the top-right corner of Fig. 6(b). For G13, the R_as value is larger when MaxOS is smaller than 40.

On the whole, all results show that the performance of OEA is not very sensitive to MaxOS, and OEA can perform well when MaxOS is in the range 5-40. MaxOS restricts the size of an organization that can be annexed. Thus, selecting MaxOS from this range can prevent OEA from doing too many searches around the same leader.
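To make Definition 4 and the Average Success Ratio concrete, the following sketch computes R_as for one parameter setting; run_oea is a hypothetical stand-in for a single OEA trial that returns the best FV found, so this illustrates the criterion rather than reproducing the authors' code.

    def is_success(f_best, f_star, eps):
        # Success criterion of (21): relative error when F* != 0, absolute otherwise.
        if f_star != 0:
            return abs(f_best - f_star) < eps * abs(f_star)
        return abs(f_best) < eps

    def average_success_ratio(run_oea, f_star, eps=1e-5, m_trials=50):
        # R_as = M_S / M over m_trials independent trials.
        m_success = sum(is_success(run_oea(), f_star, eps) for _ in range(m_trials))
        return m_success / m_trials

    # The AS/CS study sweeps both parameters from 0 to 1 in steps of 0.05,
    # i.e. 21 x 21 = 441 settings, computing R_as at each one:
    grid = [(a / 20.0, c / 20.0) for a in range(21) for c in range(21)]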
VI. CONCLUSION

A new numerical optimization algorithm, OEA, has been proposed in this paper. The experimental results in Tables I-III indicate that OEA outperforms the two well-known algorithms, FEP and OGA/Q, in solving UCOPs. OEA can find better solutions using a lower computational cost. Furthermore, the standard deviation of OEA is small, which indicates that OEA has a more stable performance. The experimental results in Tables IV-IX indicate that both OEA with CHp and CHc obtain good performances on a wide range of benchmark functions in solving COPs, and the performance is competitive with the recently published approaches RY, SMES, and SCA.

Systematic analyses have been made on the 4 parameters of OEA. To summarize, a large population can reduce the probability of OEA being trapped in local optima; at the same time, the search mechanism of OEA enables even a large population to converge quickly. The performance of OEA is not very sensitive to the parameters AS, CS, and MaxOS. Although the best values of AS, CS, and MaxOS are different for different functions, OEA performs stably over a wide range of values for AS, CS, and MaxOS, which shows that OEA is quite robust and easy to use.

On the whole, OEA obtains a good performance for both UCOPs and COPs. This benefits mainly from two aspects: the structured population and the three evolutionary operators.

In the structured population of OEA, the individuals that can generate offspring are not selected from the whole population, but from an organization. This guarantees the diversity as well as the quality of the selected individuals.

The annexing operator is equivalent to performing a local search. The cooperating operator is equivalent to performing a global search. The splitting operator controls the size of each organization so that the computational cost is reasonably distributed between local search and global search. In such a manner, once a promising individual appears in the population, it has a high probability of generating better individuals.

It should also be noted that OEA obtains good performance in solving COPs by incorporating only two simple existing constraint handling techniques. This also illustrates that OEA has an effective search mechanism. However, from another viewpoint, the performance of OEA on COPs is still worse than that of some state-of-the-art algorithms, such as the algorithm based on evolution strategies and differential variation [41]. Therefore, improving the performance of OEA by developing novel constraint handling techniques is our future work.

ACKNOWLEDGMENT

The authors are grateful to the reviewers for their helpful comments and valuable suggestions.

REFERENCES [1] T. Bäck, U. Hammel, and H.-P. Schwefel, “Evolutionary computation: Comments on the history and current state,” IEEE Trans. Evol. Comput., vol.1, no.1, pp.3-17, Apr. 1997. [2] D. E. Goldberg, Genetic Algorithms in Search, Optimization & Machine Learning, Addison-Wesley: Reading, MA, 1989. [3] Z. Michalewicz, Genetic Algorithms + Data Structures=Evolution Programs, 3rd Revised and Extended ed. New York: Springer-Verlag, 1996. [4] D. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, 2nd ed. New York: Wiley, 1999. [5] M. Mitchell, An Introduction to Genetic Algorithms, Reprint ed. Cambridge, MA: MIT Press, 1998. [6] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford, U.K.: Oxford Univ. Press, 1996. [7] W. Zhong, J. Liu, M. Xue, and L. Jiao, “A multiagent genetic algorithm for global numerical optimization,” IEEE Trans. Syst., Man, and Cybern. B, vol. 34, no. 2, pp.1128-1141, Apr. 2004. [8] J. Liu, W. Zhong, and L. Jiao, “A multiagent evolutionary algorithm for constraint satisfaction problems,” IEEE Trans. Syst., Man, and Cybern. B, vol. 36, no. 1, pp. 54-73, Feb. 2006. [9] J. Liu, W. Zhong, and L. Jiao, “Comments on ‘the 1993 DIMACS Graph Coloring Challenge’ and ‘Energy Function-Based Approaches to Graph Coloring’,” IEEE Trans. Neural Networks, vol. 17, no. 2, pp.533, Mar. 2006. [10] L. Jiao, J. Liu, and W. Zhong, “An organizational coevolutionary algorithm for classification,” IEEE Trans. Evol. Comput., vol. 10, no. 1, pp.67-80, Feb. 2006. [11] D. Whitley, “Cellular genetic algorithms,” in Proc. Fifth Int. Conference on Genetic Algorithms, R. K. Belew and L. B. Booker, Eds. San Mateo, CA: Morgan Kaufmann, pp.658, 1993. [12] R. K. Ursem, “Multinational evolutionary algorithms,” in Proc. of the Congress of Evolutionary Computation, P. J. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and A. Zalzala, Eds., vol. 3, IEEE Press, pp.1633-1640, 1999. [13] T. Krink, B. H. Mayoh, and Z. Michalewicz, “A PATCHWORK model for evolutionary algorithms with structured and variable size populations,” in Proc. of the Genetic and Evolutionary Computation Conference, W. Banzhaf, J. Daida, A. E. Eiben, M. H. Garzon, V. Honavar, M. Jakiela, and R. E. Smith, Eds., vol. 2, Morgan Kaufmann, pp.1321-1328, 1999. [14] R. H. Coase, The Firm, the Market, and the Law, Chicago: University of Chicago Press, 1988. [15] J. R. Wilcox, Organizational Learning within a Learning Classifier System, Master thesis, University of Illinois, 1995. Also Tech. Report No. 95003 IlliGAL. [16] J. P. Cohoon, S. U. Hegde, W. N. Martin, and D. S. Richards, “Punctuated equilibria: a parallel genetic algorithm,” in Proc. of 2nd Int. Conf. on Genetic Algorithms, J. J. Gerfenstette, ed., Cambridge, MA, USA, pp.148-154, 1987. [17] R. Farmani and J. A. Wright, “Self-adaptive fitness formulation for constrained optimization,” IEEE Trans. Evol. Comput., vol. 7, no. 5, pp.445-455, Oct. 2003. [18] E. M. Montes, C. A. C. Coello, “A simple multimembered evolution strategy to solve constrained optimization problems,” IEEE Trans. Evol. Comput., vol. 9, no. 1, pp.1-17, Feb. 2005. [19] T. P. Runarsson and X. Yao, “Stochastic ranking for constrained evolutionary optimization,” IEEE Trans. Evol. Comput., vol. 4, no. 4, pp.284-294, Sept. 2000. [20] T. Ray and K. M. Liew, “Society and civilization: An optimization algorithm based on the simulation of social behavior,” IEEE Trans. Evol. Comput., vol. 7, no.
4, pp.386-396, Aug. 2003. [21] Z. Michalewicz and C. Z. Janikow, “Handling constraints in genetic algorithms,” in Proc. Int. Conf. Genetic Algorithms, vol. 4, pp.151-157, 1991. [22] Z. Michalewicz and N. F. Attia, “Evolutionary optimization of constrained problems,” in Proc. 3rd Annu. Conf. Evolutionary Programming, A. V. Sebald and L. J. Fogel, Eds., River Edge, NJ, pp.98-108, 1994. [23] Z. Michalewicz, G. Nazhiyath, and M. Michalewicz “A note on usefulness of geometrical crossover for numerical optimization problems,” in Proc. 5th Annual Conf. Evolutionary Programming, San Diego, CA, pp.305-312, 1996. [24] S. A. Kazarlis, S. E. Papadakis, J. B. Theocharis, and V. Petridis, “Microgenetic algorithms as generalized hill-climbing operators for GA optimization,” IEEE Trans. Evol. Comput., vol. 5, no. 3, pp.204-217, Jun. 2001 [25] V. Petridis, S. Kazarlis, and A. Bakirtzis, “Varying fitness functions in genetic algorithm constrained optimization: The cutting stock and unit commitment problems,” IEEE Trans. Syst., Man, Cybern. B, vol. 28, no. 5, pp.629-640, Oct. 1998. [26] S. Koziel and Z. Michalewicz, “Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization,” Evol. Comput., vol.7, no. 1, pp.19-44, 1999. [27] M. J. Tahk and B. C. Sun, “Coevolutionary augmented lagrangian methods for constrained optimization,” IEEE Trans. Evol. Comput., vol. 4, no. 2, pp.114-124, Jul. 2000. [28] P. D. Surry and N. J. Radcliffe, “The COMOGA method: Constrained optimization by multi-objective genetic algorithms,” Control Cybern., vol. 26, no. 3, pp. 391-412, 1997. [29] C. A. C. Coello, “Treating constraints as objectives for single-objective evolutionary optimization,” Eng. Opt., vol. 32, no. 3, pp.275-308, 2000. [30] C. A. C. Coello, “Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp.1245-1287, 2002. [31] Z. Michalewicz and M. Schoenauer, “Evolutionary algorithm for constrained parameter optimization problems,” Evol. Comput., vol. 4, no. 1, pp.1-32, Feb. 1996. [32] A. Homaifar, S. H. Y. Lai, and X. Qi, “Constrained optimization via genetic algorithms,” Simulation, vol. 62, no. 4, pp.242-254, 1994. [33] K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2/4, pp.311-338, 2000. [34] F. Jiménez and J. L. Verdegay, “Evolutionary techniques for constrained optimization problems,” in Proc. 7th Eur. Congr. Intelligent Techniques and Soft Computing (EUFIT'99), H.-J. Zimmermann, Ed., Aachen, Germany, Springer-Verlag, 1999. [35] K. Deb, Multi-Objective Optimization using Evolutionary Algorithms, John Wiley and Sons, LTD. 2001. [36] F. Jiménez, A. F. Gomez-Skarmeta, G. Sanchez, and K. Deb, “An evolutionary algorithm for constrained multi-objective optimization,” in: Proc. IEEE World Congress on Evolutionary Computation, 2002. [37] C. A. Coello, D. V. Veldhuizen, and G. V. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers, New York, 2002. [38] A. H. Wright, “Genetic algorithms for real parameter optimization,” Foundations of Genetic Algorithms, G. J. E. Rawlins Eds. Morgan Kaufmann, San Mateo, CA, pp.205-218, 1991. [39] X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Trans. Evol. Comput., vol. 3, no. 3, pp.82-102, Jul. 1999. [40] Y. W. Leung and Y. 
Wang, “An orthogonal genetic algorithm with quantization for global numerical optimization,” IEEE Trans. Evol. Comput., vol. 5, no. 1, pp.41-53, Feb. 2001. [41] T. P. Runarsson and X. Yao, “Search biases in constrained evolutionary optimization,” IEEE Trans. Systems, Man, and Cybernetics - Part C, vol. 35, no. 2, pp.233-243, May. 2005. Jing Liu (M’06) was born in Xi’an, China. She received the B.S. degree in computer science and technology from Xidian University, Xi’an, China, in 2000, and received the Ph.D. degree in circuits and systems from the Institute of Intelligent Information Processing of Xidian University in 2004. Now she is an associate professor in Xidian University. Her research interests include evolutionary computation, multiagent systems, and data mining

Weicai Zhong (M’06) was born in Jiangxi, China. He received the B.S. degree in computer science and technology from Xidian University, Xi’an, China, in 2000, and received the Ph.D. degree in pattern recognition and intelligent information system from the Institute of Intelligent Information Processing of Xidian University in 2004. Now he is a Postdoctoral Fellow in Xidian University. His research interests include evolutionary computation, data mining, and statistical learning.

Licheng Jiao (SM’89) was born in Shaanxi, China, on Oct. 15, 1959. He received the B.S. degree from Shanghai Jiaotong University, Shanghai, China, in 1982. He received the M.S. and Ph.D. degrees from Xi’an Jiaotong University, Xi’an, China, in 1984 and 1990, respectively. From 1984 to 1986, he was an Assistant Professor in the Civil Aviation Institute of China, Tianjin, China. During 1990 and 1991, he was a Postdoctoral Fellow in the National Key Lab for Radar Signal Processing, Xidian University, Xi’an, China. Now he is the Dean of the electronic engineering school and the director of the Institute of Intelligent Information Processing of Xidian University. His current research interests include signal and image processing, nonlinear circuit and systems theory, learning theory and algorithms, optimization problems, wavelet theory, and data mining. He is the author of three books: Theory of Neural Network Systems (Xi’an, China: Xidian Univ. Press, 1990), Theory and Application on Nonlinear Transformation Functions (Xi’an, China: Xidian Univ. Press, 1992), and Applications and Implementations of Neural Networks (Xi’an, China: Xidian Univ. Press, 1996). He is the author or coauthor of more than 150 scientific papers.

Fig. 1. Splitting operator
Fig. 2. Annexing operator
Fig. 3. Cooperating operator
Fig. 4. The evolutionary process of the mean best solutions over 50 trials for G01, G05, G06, and G10
