SMCB-E-03172004-0141.R2 1

A Multiagent Evolutionary Algorithm for Constraint Satisfaction Problems

Weicai Zhong, Jing Liu, and Licheng Jiao, Senior Member, IEEE
Abstract—With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and non-permutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of general encoding methods, minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs on non-permutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queen problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs on permutation CSPs. The scalability of MAEA-CSPs with respect to n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds solutions within only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.

Index Terms—Constraint satisfaction problems, evolutionary algorithms, graph coloring problems, job-shop scheduling problems, multiagent systems, n-queen problems.

I. INTRODUCTION

A large number of problems coming from artificial intelligence as well as other areas of computer science and engineering can be stated as Constraint Satisfaction Problems (CSPs) [1]-[3].
A CSP has three components: variables, values, and constraints. The purpose is to find an assignment of values to variables such that all constraints are satisfied. Many methods have been proposed to deal with CSPs, among which backtracking and generate-and-test are well-known traditional approaches [1].

Manuscript received March 17, 2004. This work is supported by the National Natural Science Foundation of China under Grants 60133010 and 60372045, and the "863" project under Grant 2002AA135080. The authors are with the Institute of Intelligent Information Processing, Xidian University, Xi'an, 710071, China. Jing Liu, corresponding author, phone: 86-029-88202661; fax: 86-029-88201023; e-mail: neouma@163.com.

Although the backtracking method eliminates a subspace from the Cartesian product of all the variable domains, the computational complexity for solving most nontrivial problems remains exponential. Many studies have been conducted to investigate various ways of improving the backtracking method, such as consistency techniques [4], [5] and the dependency-directed backtracking scheme [6], [7]. Nevertheless, it is still unable to solve nontrivial large-scale CSPs in a reasonable runtime.

For the generate-and-test method, one of the most popular ideas is to employ a local search technique [8]-[10]. Such a method generates an initial configuration and then incrementally uses "repair" or "hill climbing" to move the inconsistent configuration to a neighboring configuration having the best or a better evaluation value among the neighbors, until a solution is found. Related to the idea of local search, other heuristics have also been developed, such as the min-conflicts heuristic [11], [12] and GSAT [13]. In addition, local search techniques are often combined with Evolutionary Algorithms (EAs), another popular kind of method for solving CSPs.

Historically, CSPs have been approached from many angles by EAs.
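The repair loop described above can be sketched compactly. The following Python sketch of the generic min-conflicts heuristic [11], [12] is ours, not the paper's algorithm (function and parameter names are illustrative): it repeatedly picks a conflicted variable and moves it to the value with the fewest conflicts.

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10000, rng=random):
    """Generic min-conflicts local search (a sketch, not the paper's method).
    `conflicts(assign, var, val)` returns the number of constraints violated
    by setting `var` to `val` under the assignment `assign`."""
    # Generate an initial (random) complete configuration.
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(assign, v, assign[v]) > 0]
        if not conflicted:
            return assign                 # all constraints satisfied
        var = rng.choice(conflicted)      # "repair" one conflicted variable
        assign[var] = min(domains[var], key=lambda d: conflicts(assign, var, d))
    return None                           # budget exhausted without a solution
```

For instance, graph 3-coloring can be solved by letting `conflicts` count the neighbors that share the chosen color.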
In this field, the method used to treat the constraints is very important. The constraints can be handled directly, indirectly, or by mixed methods [14]. Among the available methods, some put the emphasis on the usage of heuristics, such as ARC-GA [15], [16], COE-H GA [17], [18], Glass-Box [19], and H-GA [20], [21], whereas others handle the constraints by fitness function adaptation, such as CCS [22], [23], MID [24]-[26], and SAW [27], [28]. These methods deal with CSPs by EAs from different angles and have boosted the development of this field. Reference [29] made an extensive performance comparison of all these EAs on a systematically generated test suite.

Agent-based computation has been studied for several years in the field of distributed artificial intelligence [30], [31] and has been widely used in other branches of computer science [32]-[35]. Problem solving is an area with which many multiagent-based applications are concerned. It includes distributed solutions to problems, solving distributed problems, and distributed techniques for problem solving [30], [31]. Reference [33] introduced an application of distributed techniques for solving CSPs, in which 7000-queen problems were solved by an energy-based multiagent model. In [34], multiagent systems and genetic algorithms (GAs) are integrated to solve global numerical optimization problems, and the method can find high-quality solutions at a low
computational cost even for functions with 10 000 dimensions. All these results show that both agents and EAs have a high potential in solving complex and ill-defined problems.

In this paper, with the intrinsic properties of CSPs in mind, we divide CSPs into two types, namely, permutation CSPs and non-permutation CSPs. According to the different characteristics of the two types of problems, several behaviors are designed for agents. Furthermore, all such behaviors are controlled by means of evolution, so that a new algorithm, the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs), results. In MAEA-CSPs, all agents live in a latticelike environment. Making use of the designed behaviors, MAEA-CSPs realizes the ability of agents to sense and act on the environment in which they live. During the process of interacting with the environment and the other agents, each agent increases its energy as much as possible, so that MAEA-CSPs can find solutions. Experimental results show that MAEA-CSPs provides good performance.

From another viewpoint, since MAEA-CSPs uses a lattice-based population, it can also be considered a kind of fine-grained parallel GA. Fine-grained parallel GAs are a kind of parallel implementation of GAs [36]-[45], and have been applied to many problems, such as combinatorial optimization [46], function optimization [47], [48], and scheduling problems [49]. But since they are just parallel techniques, they often present the same problem of premature convergence as traditional GAs [50]. MAEA-CSPs makes use of the ability of agents to sense and act on the environment, and puts the emphasis on designing behaviors for agents. The experimental results show that MAEA-CSPs achieves good performance for various CSPs, that is, binary CSPs, graph coloring problems (GCPs), n-queen problems, and job-shop scheduling problems (JSPs). Therefore, MAEA-CSPs is a successful development of fine-grained parallel GAs.
The rest of this paper is organized as follows: Section II describes the agents designed for CSPs. Section III describes the implementation of MAEA-CSPs and analyzes its space complexity and convergence. Section IV uses binary CSPs and GCPs to investigate the performance of MAEA-CSPs on non-permutation CSPs, whereas Section V uses n-queen problems and JSPs to investigate the performance on permutation CSPs. Finally, conclusions are presented in Section VI.

II. CONSTRAINT SATISFACTION AGENTS

According to [31] and [33], an agent is a physical or virtual entity essentially having the following properties: (a) it is able to live and act in the environment; (b) it is able to sense its local environment; (c) it is driven by certain purposes; and (d) it has some reactive behaviors. Multiagent systems are computational systems in which several agents interact or work together in order to achieve purposes. As can be seen, the meaning of an agent is very comprehensive, and what an agent represents differs from problem to problem. In general, four elements should be defined when multiagent systems are used to solve problems. The first is the meaning and the purpose of each agent. The second is the environment in which all agents live. The third, since each agent has only local perceptivity, is its local environment. The last is the behaviors that each agent can take to achieve its purpose.

A. Constraint satisfaction problems

A CSP has three components:

1) A finite set of variables, x = {x1, x2, ..., xn};

2) A domain set D, containing a finite and discrete domain for each variable:

   D = {D1, D2, ..., Dn},  xi ∈ Di = {d1, d2, ..., d|Di|},  i = 1, 2, ..., n     (1)

   where |·| stands for the number of elements in the set;

3) A constraint set, C = {C1(x^1), C2(x^2), ..., Cm(x^m)}, where x^i, i = 1, 2, ..., m, is a subset of x, and Ci(x^i) denotes the values that the variables in x^i cannot take simultaneously.
For example, given a constraint C({x1, x2}) = ⟨d1, d2⟩, when x1 = d1, d2 cannot be assigned to x2, and when x2 = d2, d1 cannot be assigned to x1.

Thus, the search space of a CSP, S, is the Cartesian product of the n sets of finite domains, namely, S = D1×D2×...×Dn. A solution for a CSP, s = ⟨s1, s2, ..., sn⟩ ∈ S, is an assignment to all variables such that the values satisfy all constraints. Here is a simple example:

Example 1: A CSP is described as follows:

  x = {x1, x2, x3},
  D = {D1, D2, D3},  Di = {1, 2, 3},  i = 1, 2, 3,
  C = { C1({x1, x2}) = ⟨1, 3⟩,   C2({x1, x2}) = ⟨3, 3⟩,
        C3({x1, x3}) = ⟨2, 1⟩,   C4({x1, x3}) = ⟨2, 3⟩,
        C5({x1, x3}) = ⟨3, 1⟩,   C6({x1, x3}) = ⟨3, 3⟩,
        C7({x2, x3}) = ⟨1, 1⟩,   C8({x2, x3}) = ⟨1, 2⟩,
        C9({x2, x3}) = ⟨1, 3⟩,   C10({x2, x3}) = ⟨2, 1⟩,
        C11({x2, x3}) = ⟨3, 1⟩ }     (2)

All solutions for this CSP are ⟨1, 2, 2⟩, ⟨1, 2, 3⟩, ⟨2, 2, 2⟩, ⟨2, 3, 2⟩, ⟨3, 2, 2⟩.

One may be interested in finding one solution, all solutions, or in proving that no solution exists. In the last case, one may want to find a partial solution optimizing certain criteria, for example, satisfying as many constraints as possible. We restrict our discussion to finding one solution.

B. Definition of constraint satisfaction agents

An agent used to solve CSPs is represented as a constraint satisfaction agent (CSAgent), and is defined as follows:

Definition 1: A constraint satisfaction agent, a, represents an element in the search space, S, with energy equal to

  ∀a ∈ S,  Energy(a) = -Σ_{i=1}^{m} χ(a, Ci)     (3)

where χ(a, Ci) = 1 if a violates Ci, and 0 otherwise. The purpose of each CSAgent is to maximize its energy by means of the behaviors it can take.
As can be seen, the energy of each CSAgent is the negative of the number of violated constraints. Therefore, the higher the energy is, the better the CSAgent is. When the energy of a CSAgent increases to 0, the CSAgent becomes a solution.

The domain of each variable is finite and discrete, so its elements can be numbered by natural numbers; that is to say, the domain D = {d1, d2, ..., d|D|} is equivalent to D = {1, 2, ..., |D|}. When all domains are transformed to sets of natural numbers, the solutions of some CSPs take on specific characteristics. On this basis, CSPs can be divided into two types:

Definition 2: Let S be the set of solutions for a CSP and all domain sets be represented by sets of natural numbers. If ∀s ∈ S is a permutation of 1, 2, ..., n, the CSP is defined as a permutation CSP; otherwise it is defined as a non-permutation CSP.

For a non-permutation CSP, the search space is still S. Since permutation CSPs are a special case of non-permutation CSPs, their search space can be reduced to the set of all permutations of 1, 2, ..., n, that is,

  S^P = {P1, P2, ..., Pn!},  where Pi = ⟨Pi1, Pi2, ..., Pin⟩, 1 ≤ i ≤ n!,
        Pij ∈ {1, 2, ..., n}, 1 ≤ j ≤ n,
        and ∀k, l ∈ {1, 2, ..., n}, (k ≠ l) ⇒ (Pik ≠ Pil)     (4)

When one uses EAs to solve problems, the search space must be encoded such that individuals can be represented in a uniform encoding. For example, GAs usually encode the search space by binary coding; thus, each individual is a bit sequence. For permutation CSPs, it is natural to represent each individual as a permutation of 1, 2, ..., n. The search space is S^P, with size n!.

For non-permutation CSPs, the most straightforward encoding method is to represent each individual as an element of S, which is what most existing algorithms use. Under this method, the size of the search space is |D1|×|D2|×...×|Dn|. Meanwhile, some methods use a permutation coding with a corresponding decoder.
For example, in [27], [28], each individual is represented by a permutation of the variables, and the permutation is transformed to a partial instantiation by a simple decoder that considers the variables in the order they occur in the permutation and assigns the first possible domain value to each variable. If no value is possible without introducing a constraint violation, the variable is left uninstantiated. In what follows, this decoder is labeled Decoder1, and the set of all permutations of the variables is labeled S_x^P. In fact,

  ∀⟨x_{P1}, x_{P2}, ..., x_{Pn}⟩ ∈ S_x^P  ↔  ⟨P1, P2, ..., Pn⟩ ∈ S^P     (5)

Therefore, S_x^P is equivalent to S^P, and the size of the search space is also n!. Decoder1 uses a greedy algorithm, so a serious problem exists: for some CSPs, Decoder1 cannot decode any permutation to a solution. As a result, algorithms based on Decoder1 may not find solutions at all. Here is a simple example:

Example 2: The CSP is given in Example 1, with the following S_x^P:

  S_x^P = { ⟨x1, x2, x3⟩, ⟨x1, x3, x2⟩, ⟨x2, x1, x3⟩,
            ⟨x2, x3, x1⟩, ⟨x3, x1, x2⟩, ⟨x3, x2, x1⟩ }     (6)

According to Decoder1, each element in S_x^P can be transformed to a partial instantiation of the variables, namely,

  ⟨x1, x2, x3⟩ → ⟨1, 1, *⟩,   ⟨x1, x3, x2⟩ → ⟨1, 1, *⟩,
  ⟨x2, x1, x3⟩ → ⟨1, 1, *⟩,   ⟨x2, x3, x1⟩ → ⟨1, *, 1⟩,
  ⟨x3, x1, x2⟩ → ⟨1, 1, *⟩,   ⟨x3, x2, x1⟩ → ⟨1, *, 1⟩     (7)

where "*" indicates that the corresponding variable is left uninstantiated (values are listed in permutation order). As can be seen, no element in S_x^P can be transformed to a solution for the CSP by Decoder1.

For CSAgents, we want to design an encoding method that not only covers the solutions of any CSP, but also maintains a manageable search space. Therefore, on the basis of Decoder1, we propose minimum conflict encoding (MCE). In MCE, each CSAgent is represented not only as an element of S_x^P, but also as an element of S, so that some behaviors can be designed to deal with the values of the variables directly.
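Decoder1's failure on Example 2 is easy to reproduce mechanically. The sketch below is our code (the constraint representation, with ((i, j), (a, b)) forbidding the pair x_{i+1}=a, x_{j+1}=b, is illustrative): it assigns each variable, in permutation order, the first domain value that conflicts with no previously assigned variable.

```python
from itertools import permutations

# Example 1's constraints: ((i, j), (a, b)) forbids x_{i+1}=a together with x_{j+1}=b.
EX1 = [((0, 1), (1, 3)), ((0, 1), (3, 3)),
       ((0, 2), (2, 1)), ((0, 2), (2, 3)), ((0, 2), (3, 1)), ((0, 2), (3, 3)),
       ((1, 2), (1, 1)), ((1, 2), (1, 2)), ((1, 2), (1, 3)),
       ((1, 2), (2, 1)), ((1, 2), (3, 1))]

def violates(assign, var, val, constraints):
    """True if var=val forms a forbidden pair with an already-assigned variable."""
    return any((var == i and val == a and assign.get(j) == b)
               or (var == j and val == b and assign.get(i) == a)
               for (i, j), (a, b) in constraints)

def decoder1(perm, domains, constraints):
    """Greedy first-fit decoder; None marks an uninstantiated variable."""
    assign = {}
    for var in perm:
        assign[var] = next((val for val in domains[var]
                            if not violates(assign, var, val, constraints)), None)
    return assign
```

Running it over all six permutations reproduces (7): every decoding leaves one variable uninstantiated.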
As a result, each CSAgent must record some information, and is represented by the following structure:

CSAgent = Record
    P:  A permutation, P ∈ S^P for permutation CSPs, and P ∈ S_x^P for non-permutation CSPs;
    V:  V ∈ S, the value of each variable; for non-permutation CSPs only;
    E:  The energy of the CSAgent, E = Energy(P) for permutation CSPs, and E = Energy(V) for non-permutation CSPs;
    SL: The flag for the self-learning behavior, which will be defined later. If SL is True, the self-learning behavior can be performed on the CSAgent; otherwise, it cannot;
End.

In the following text, CSAgent(·) is used to denote the corresponding component of the above structure. With MCE, each CSAgent includes both a permutation and an assignment to all variables. In fact, the assignment is what we need, so minimum conflict decoding (MCD) is designed to correspond to MCE; it transforms P into V. The main idea of MCD is to assign to each variable the value violating the minimum number of constraints. The variables are considered in the order they occur in the permutation, and only the constraints between the variable and those previously assigned are taken into account. After MCD is performed, no variable is left uninstantiated. The details of MCD are shown in Algorithm 1.

Algorithm 1 Minimum conflict decoding
Input: CSAgent: the CSAgent to be decoded;
       Pos: the position to start decoding;
Output: CSAgent(V);

Let CSAgent(P)=〈x_{P1}, x_{P2}, …, x_{Pn}〉, CSAgent(V)=〈v1, v2, …, vn〉, and

  Conflicts(v_i) = Σ_{j=1}^{m} χ(v_i, C_j)   (8)

where χ(v_i, C_j) = 1 if v_i violates C_j, and 0 otherwise. Conflicts(v_i) only considers the variables already assigned values. MinC and MinV are the current minimum number of conflicts and the corresponding value.

begin
  if (Pos=1) then
  begin
    v_{P1} := 1;
    i := 2;
  end
  else i := Pos;
  repeat
    v_{Pi} := 1;
    MinC := Conflicts(v_{Pi});
    MinV := 1;
    j := 2;
    repeat
      v_{Pi} := j;
      if (Conflicts(v_{Pi}) < MinC) then
      begin
        MinC := Conflicts(v_{Pi});
        MinV := j;
      end;
      j := j+1;
    until (j > |D_{Pi}|);
    v_{Pi} := MinV;
    i := i+1;
  until (i > n);
end.

Two kinds of behaviors are designed for CSAgents, namely, behaviors performed on P (P-behaviors) and behaviors performed on V (V-behaviors). When V-behaviors are performed on a CSAgent, its energy can be updated directly. But when P-behaviors are performed on a CSAgent, it must be decoded by MCD before the energy is updated. If MCD always started decoding from the first variable in the permutation, the information generated by V-behaviors would be lost. Therefore, we set a parameter, Pos, for MCD, which is the first position changed by P-behaviors. Thus, the values of the variables before Pos are left untouched, so that some information generated by V-behaviors can be preserved. For a new CSAgent, Pos is set to 1. MCD(CSAgent, Pos) denotes that MCD is performed on CSAgent.

In fact, Algorithm 1 is just a general implementation of the idea of MCD, and can be applied to all kinds of CSPs. However, for specific CSPs, the idea of MCD can be combined with knowledge of the problem to obtain a more efficient implementation. For example, the degree of each vertex can be used when dealing with GCPs; the details will be shown in Section IV.B.1.
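The decoding loop of Algorithm 1 can be sketched in executable form. The following Python sketch is not the authors' implementation; the `violates` callback and the data layout (0-based variable indices, 1-based domain values) are our illustrative assumptions:

```python
from typing import Callable, List

def mcd(perm: List[int], values: List[int], domain_sizes: List[int],
        violates: Callable[[int, int, List[int], List[int]], int],
        pos: int = 1) -> List[int]:
    """Minimum conflict decoding (sketch of Algorithm 1).

    perm         -- a permutation of variable indices 0..n-1
    values       -- current assignment; entries before position pos are kept
    domain_sizes -- |D_i| for each variable i
    violates(var, val, assigned, values) -- number of constraints violated by
        assigning val to var, counted only against already-assigned variables
    pos          -- 1-based position in the permutation to start decoding from
    """
    n = len(perm)
    start = pos - 1
    if start == 0:
        values[perm[0]] = 1      # the first variable gets an arbitrary value
        start = 1
    assigned = perm[:start]      # variables already holding values
    for i in range(start, n):
        var = perm[i]
        # pick the value with the minimum number of conflicts against the
        # variables assigned so far (ties broken toward the smaller value)
        best_val, best_conf = 1, violates(var, 1, assigned, values)
        for val in range(2, domain_sizes[var] + 1):
            c = violates(var, val, assigned, values)
            if c < best_conf:
                best_val, best_conf = val, c
        values[var] = best_val
        assigned.append(var)
    return values
```

For graph coloring, for instance, `violates` would count the already-colored neighbors that share the candidate color; the values before `pos` are preserved, mirroring how Pos protects information generated by V-behaviors.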
C. Environment of constraint satisfaction agents

In order to realize the local perceptivity of agents, the environment is organized as a latticelike structure, similar to our previous work in [34].

Definition 3: All CSAgents live in a latticelike environment, L, which is called an agent lattice. The size of L is Lsize×Lsize, where Lsize is an integer. Each CSAgent is fixed on a lattice-point and can only interact with its neighbors. Suppose that the CSAgent located at (i, j) is represented as Li,j, i, j=1, 2, …, Lsize; then the neighbors of Li,j, Neighborsi,j, are defined as follows:

  Neighborsi,j = { L_{i′,j}, L_{i,j′}, L_{i″,j}, L_{i,j″} }   (9)

where
  i′ = i−1 if i ≠ 1, and i′ = Lsize if i = 1;
  j′ = j−1 if j ≠ 1, and j′ = Lsize if j = 1;
  i″ = i+1 if i ≠ Lsize, and i″ = 1 if i = Lsize;
  j″ = j+1 if j ≠ Lsize, and j″ = 1 if j = Lsize.

The agent lattice can be represented as the one in Fig. 1. Each circle represents a CSAgent, the data represent its position in the lattice, and two CSAgents can interact with each other if and only if there is a line connecting them.

In traditional EAs, the individuals used to generate offspring are usually selected from all individuals according to their fitness. Therefore, the global fitness distribution of the population must be determined. But in nature, global selection does not exist, and the global fitness distribution cannot be determined either. In fact, real natural selection occurs only in a local environment, and each individual can only interact with those around it. That is, in each phase, natural evolution is just a kind of local phenomenon. Information can be shared globally only after a process of diffusion.

In the aforementioned agent lattice, to achieve their purposes, CSAgents compete with others so that they can gain more resources. Since each CSAgent can only sense its local environment, the behaviors can only take place between a CSAgent and its neighbors.
There is no global selection at all, so the global fitness distribution is not required. A CSAgent interacts with its neighbors so that information is transferred to them. In such a manner, information is diffused over the whole agent lattice. As can be seen, the model of the agent lattice is closer to the real evolutionary mechanism in nature than the population model of traditional EAs.
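The wrap-around neighborhood of Definition 3 amounts to indexing on a torus. A minimal Python sketch (the function name is ours, and 0-based indices are used instead of the 1-based indices of Eq. (9)):

```python
def neighbors(i: int, j: int, lsize: int):
    """Return the four torus neighbors of lattice point (i, j), 0-based.

    Mirrors Eq. (9): each coordinate wraps around at the lattice edge,
    so every CSAgent has exactly four neighbors.
    """
    return [((i - 1) % lsize, j),   # up (row 0 wraps to row lsize-1)
            (i, (j - 1) % lsize),   # left
            ((i + 1) % lsize, j),   # down
            (i, (j + 1) % lsize)]   # right
```

For example, `neighbors(0, 0, 3)` returns the four points `(2, 0)`, `(0, 2)`, `(1, 0)`, and `(0, 1)`, so interaction is purely local and no global fitness ranking is ever needed.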
D. Behaviors of constraint satisfaction agents

The purpose of an algorithm for solving CSPs is to find solutions at as low a computational cost as possible, so the computational cost can be considered as the resource of the environment in which all CSAgents live. Since the resources are limited and the behaviors of the CSAgents are driven by their purposes, a CSAgent will compete with others to gain more resources. On this basis, three behaviors are designed for CSAgents to realize their purposes, that is, the competitive behavior, the self-learning behavior, and the mutation behavior. The former two belong to P-behaviors, whereas the last belongs to V-behaviors.

Competitive behavior: In this behavior, the energy of a CSAgent is compared with those of its neighbors. The CSAgent can survive if its energy is maximum; otherwise the CSAgent must die, and a child of the neighbor with maximum energy will take up the lattice-point.

Suppose that the competitive behavior is performed on the CSAgent located at (i, j), Li,j, and that Maxi,j is the CSAgent with maximum energy among the neighbors of Li,j; namely, Maxi,j∈Neighborsi,j and ∀CSAgent∈Neighborsi,j, CSAgent(E) ≤ Maxi,j(E). If Li,j(E) < Maxi,j(E), then Maxi,j generates a child CSAgent, Childi,j, to replace Li,j, by the method shown in Algorithm 2; otherwise Li,j is left untouched.

Algorithm 2 Competitive behavior
Input: Maxi,j: Maxi,j(P)=〈m1, m2, …, mn〉 for permutation CSPs, and Maxi,j(P)=〈x_{m1}, x_{m2}, …, x_{mn}〉 for non-permutation CSPs;
       pc: a predefined parameter, a real number between 0 and 1;
Output: Childi,j: Childi,j(P)=〈c1, c2, …, cn〉 for permutation CSPs, and Childi,j(P)=〈x_{c1}, x_{c2}, …, x_{cn}〉 for non-permutation CSPs;

Swap(x, y) exchanges the values of x and y. U(0, 1) is a uniform random number between 0 and 1. Random(n, i) is a random integer among 1, 2, …, n that is not equal to i. Min(i, j) is the smaller of i and j.
begin
  Childi,j(P) := Maxi,j(P);
  i := 1;
  Pos := n+1;
  repeat
    if (U(0, 1) < pc) then
    begin
      l := Random(n, i);
      if (non-permutation CSPs) then
      begin
        Swap(x_{ci}, x_{cl});
        if (Min(l, i) < Pos) then Pos := Min(l, i);
      end
      else Swap(c_i, c_l);
    end;
    i := i+1;
  until (i > n);
  if (non-permutation CSPs) then MCD(Childi,j, Pos);
  Childi,j(SL) := True;
end.

In fact, Childi,j is generated by exchanging a small part of Maxi,j, which is equivalent to performing a local search around Maxi,j. The purpose of the competitive behavior is to eliminate the CSAgents with low energy and give more chances to the potentially good CSAgents.

Self-learning behavior: As is well known, integrating local search with EAs can improve performance. Moreover, agents have knowledge relating to the problems. Therefore, inspired by the min-conflicts heuristic [11], [12] and taking into account the encoding methods of the CSAgents, we design the self-learning behavior. Suppose that this behavior is performed on Li,j. The details are shown in Algorithm 3.

Algorithm 3 Self-learning behavior
Input: Li,j: Li,j(P)=〈m1, m2, …, mn〉 for permutation CSPs, and Li,j(P)=〈x_{m1}, x_{m2}, …, x_{mn}〉 for non-permutation CSPs; Li,j(V)=〈v1, v2, …, vn〉;
Output: Li,j;

begin
  repeat
    Repeat := False;
    k := 1;
    Iteration := 1;
    while (k ≤ n) do
    begin
      if (Conflicts(v_{mk}) ≠ 0) then
      begin
        Energyold := Li,j(E);
        l := Random(n, k);
        if (non-permutation CSPs) then
        begin
          Swap(x_{mk}, x_{ml});
          MCD(Li,j, Min(k, l));
        end
        else Swap(m_k, m_l);
        Energynew := Li,j(E);
        if (Energynew ≤ Energyold) then
        begin
          Swap(m_k, m_l) (Swap(x_{mk}, x_{ml}));
          if (non-permutation CSPs) then MCD(Li,j, Min(k, l));
        end
        else Repeat := True;
        if (Iteration < n−1) then Iteration := Iteration+1
        else
        begin
          Iteration := 1;
          k := k+1;
        end;
      end
      else k := k+1;
    end;
  until (Repeat=True);
  Li,j(SL) := False;
end.

The purpose of Algorithm 3 is to find swaps for the components violating constraints in the permutation, such that the energy of Li,j increases after a swap is performed. For a component, the algorithm iteratively performs a swap until no constraint is violated or the predefined iteration limit, Iteration=(n−1), is reached. Then, the algorithm goes on to deal with the next component. Iteration prevents the algorithm from repeating infinitely. After the self-learning behavior has been performed on a CSAgent, the probability that the energy of the CSAgent can be increased by this behavior again is very low; thus Li,j(SL) is set to False in the last step.

Although the self-learning behavior is inspired by the min-conflicts heuristic [11], [12], they differ in essence. First, the min-conflicts heuristic operates on the values of the variables directly, whereas the self-learning behavior operates on the permutation; thus the main operation of the self-learning behavior is swapping. Second, the min-conflicts heuristic tries to find a value that minimizes the number of conflicts, whereas the self-learning behavior just improves the energy of the CSAgent as much as possible. Since the self-learning behavior operates on a permutation, an example of the process will be given in Section V.A.1 for permutation CSPs.

Mutation behavior: This behavior is similar to the mutation operator used in traditional GAs. Its function is to assist the behaviors above: it can enlarge the search area so as to make up for the disadvantage of the decoding method. Suppose that the mutation behavior is performed on Li,j, and Li,j(V)=〈v1, v2, …, vn〉. Then, the following operation is performed on Li,j.
  if U_k(0, 1) < pm, then v_k ← Random(|D_k|)   (10)

where k=1, 2, …, n, and U_k(0, 1) represents a new random number for each k. pm is a predefined parameter, a real number between 0 and 1. Random(|D_k|) is a random integer in 1, 2, …, |D_k|.

III. MULTIAGENT EVOLUTIONARY ALGORITHM FOR CONSTRAINT SATISFACTION PROBLEMS

A. Implementation of MAEA-CSPs

To solve CSPs, all CSAgents must adopt the three behaviors in an orderly manner. Here these behaviors are controlled by means of evolution, so that the agent lattice can evolve generation by generation. At each generation, the competitive behavior is performed on each CSAgent first. As a result, the CSAgents with low energy are cleaned out of the agent lattice, so that there is more space for the CSAgents with higher energy to develop. Then, the self-learning behavior and the mutation behavior are performed according to the type of the problem and the state of the CSAgent. In order to reduce the computational cost, these two behaviors are only performed on the best CSAgent in the current agent lattice. The process is iterated until a CSAgent with zero energy is found or the maximum computational cost is reached.

Algorithm 4 Multiagent evolutionary algorithm for constraint satisfaction problems
Input: EvaluationMax: the maximum number of evaluations for the energy;
       Lsize: the scale of the agent lattice;
       pc: the parameter used in the competitive behavior;
       pm: the parameter used in the mutation behavior, for non-permutation CSPs only;
Output: a solution or an approximate solution;

L^t represents the agent lattice in the tth generation. CSAgent^t_Best is the best CSAgent in L^0, L^1, …, L^t, and CSAgent^t_tBest is the best CSAgent in L^t.
begin
  for i := 1 to Lsize do
    for j := 1 to Lsize do
    begin
      Generate a permutation randomly and assign it to L^0_{i,j}(P);
      if (non-permutation CSPs) then MCD(L^0_{i,j}, 1);
      Compute L^0_{i,j}(E);
      L^0_{i,j}(SL) := True;
    end;
  Evaluations := Lsize × Lsize;
  Update CSAgent^0_Best;
  t := 0;
  repeat
    for i := 1 to Lsize do
      for j := 1 to Lsize do
      begin
        if (L^t_{i,j} wins in the competitive behavior) then L^{t+1}_{i,j} := L^t_{i,j}
        else L^{t+1}_{i,j} := Childi,j (generated according to Algorithm 2);
        Compute L^{t+1}_{i,j}(E);
        Evaluations := Evaluations+1;
      end;
    Update CSAgent^{t+1}_{(t+1)Best};
    if (CSAgent^{t+1}_{(t+1)Best}(SL) = True) then
      Perform the self-learning behavior on CSAgent^{t+1}_{(t+1)Best}
    else if (non-permutation CSPs) then
    begin
      Perform the mutation behavior on CSAgent^{t+1}_{(t+1)Best};
      Compute CSAgent^{t+1}_{(t+1)Best}(E);
      Evaluations := Evaluations+1;
    end;
    if (CSAgent^{t+1}_{(t+1)Best}(E) < CSAgent^t_Best(E)) then
    begin
      CSAgent^{t+1}_Best := CSAgent^t_Best;
      CSAgent^{t+1}_Random := CSAgent^t_Best (CSAgent^{t+1}_Random is randomly selected from L^{t+1} and is different from CSAgent^{t+1}_{(t+1)Best});
    end
    else CSAgent^{t+1}_Best := CSAgent^{t+1}_{(t+1)Best};
    t := t+1;
  until (CSAgent^t_Best(E) = 0) or (Evaluations ≥ EvaluationMax);
end.

B. Space complexity of MAEA-CSPs

When dealing with a large-scale problem, the memory required by an algorithm must be taken into account. For example, although the method proposed in [33] obtained good performance, it needs to store an n×n lattice to record the number of collisions in each grid, so its space complexity is O(n²). Thus, even if each grid is recorded by a 4-byte integer, about 38,147 MB of memory is still required for a 10⁵-queen problem. Therefore, the method proposed in [33] has resource requirements exceeding those currently available on PC computers.

Theorem 1: The space complexity of multiagent evolutionary algorithm for constraint satisfaction problems is O(n).

Proof: The main contribution to the space complexity is the storage for the agent lattices in the current and the next generation and for the best CSAgent. So the number of CSAgents recorded is

  Num_CSAgent = 2 × Lsize × Lsize + 1   (11)

For non-permutation CSPs, P, V, E, and SL should be recorded for each CSAgent. For permutation CSPs, P, E, and SL should be recorded for each CSAgent.
So the number of space units each CSAgent requires is

  Units_CSAgent = 2n+2 for non-permutation CSPs, and n+2 for permutation CSPs   (12)

The total number of space units required by MAEA-CSPs is

  Num_Units = Num_CSAgent × Units_CSAgent
            = (4L²size+2)n + (4L²size+2) for non-permutation CSPs, and
              (2L²size+1)n + (4L²size+2) for permutation CSPs   (13)

Since (2L²size+1) is independent of n, the space complexity of multiagent evolutionary algorithm for constraint satisfaction problems is O(n). ∎

Theorem 1 shows that the space required by MAEA-CSPs increases linearly with the scale of the problem. If a unit requires 4 bytes and Lsize is set to 3, MAEA-CSPs requires about 725 MB for a 10⁷-queen problem, since n-queen problems belong to permutation CSPs.

C. Convergence of MAEA-CSPs

In practical cases, one is interested in knowing the likelihood of an algorithm finding solutions for CSPs. Most existing algorithms based on local search do not guarantee that solutions can be found; when they are trapped in local optima, they simply restart. MAEA-CSPs starts from a randomly generated agent lattice and evolves the agent lattice generation by generation with the three behaviors. It has no restarting step. Can MAEA-CSPs find solutions?

The search space of non-permutation CSPs with n variables has |D1|×|D2|×…×|Dn| elements, and that of permutation CSPs with n variables has n! elements. Let E = {Energy(a) | a∈S}. Since the energy is the negative of the number of constraints not satisfied by a CSAgent and the number of all constraints is m, |E| is equal to (m+1) and E can be represented as E = {−m, −m+1, …, −2, −1, 0}. This immediately gives us the opportunity to partition S into a collection of nonempty subsets {S^i | i=−m, −m+1, …, −2, −1, 0}, where

  S^i = { a | a∈S and Energy(a) = i }   (14)

  Σ_{i=−m}^{0} |S^i| = |S|;  S^i ≠ ∅;  S^i ∩ S^j = ∅, ∀i ≠ j;  ∪_{i=−m}^{0} S^i = S   (15)

Obviously, any CSAgent belonging to S⁰ is a solution.
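The memory estimate of Theorem 1 can be reproduced by evaluating Eqs. (11)-(13) directly. The sketch below is ours (the function name is not from the paper), assuming 4-byte units and Lsize=3 as in the text:

```python
def maeacsps_memory_bytes(n: int, lsize: int = 3, permutation: bool = True,
                          bytes_per_unit: int = 4) -> int:
    """Estimate MAEA-CSPs memory from Eqs. (11)-(13).

    Two agent lattices plus the best CSAgent are stored (Eq. 11);
    each CSAgent needs n+2 units for permutation CSPs and 2n+2 units
    for non-permutation CSPs (Eq. 12).
    """
    num_agents = 2 * lsize * lsize + 1                # Eq. (11)
    units = (n + 2) if permutation else (2 * n + 2)   # Eq. (12)
    return num_agents * units * bytes_per_unit        # Eq. (13), in bytes

# A 10^7-queen instance, treated as a permutation CSP:
mb = maeacsps_memory_bytes(10**7) / 2**20   # ≈ 725 MB
```

The linear growth in n is immediate from the expression, which is the substance of Theorem 1.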
Suppose that the energy of an agent lattice, L, is equal to Energy(L), which is determined by

  Energy(L) = max{ Li,j(E) | i, j = 1, 2, …, Lsize }   (16)

Let L stand for the set of all agent lattices. Thus, ∀L∈L, −m ≤ Energy(L) ≤ 0. Therefore, L can be partitioned into a collection of nonempty subsets {L^i | i=−m, −m+1, …, −2, −1, 0}, where

  L^i = { L | L∈L and Energy(L) = i }   (17)

  Σ_{i=−m}^{0} |L^i| = |L|;  L^i ≠ ∅;  L^i ∩ L^j = ∅, ∀i ≠ j;  ∪_{i=−m}^{0} L^i = L   (18)

L⁰ consists of all agent lattices with zero energy.

Let L^i_j, i=−m, −m+1, …, −2, −1, 0, j=1, 2, …, |L^i|, stand for the jth agent lattice in L^i. In any generation, the three behaviors
transform the agent lattice L^i_j into another one, L^k_l, and this process can be viewed as a transition from L^i_j to L^k_l. Let p_{ij.kl} be the probability of transition from L^i_j to L^k_l, p_{ij.k} be the probability of transition from L^i_j to any agent lattice in L^k, and p_{i.k} be the probability of transition from any agent lattice in L^i to any agent lattice in L^k. It is obvious that

  p_{ij.k} = Σ_{l=1}^{|L^k|} p_{ij.kl};  Σ_{k=−m}^{0} p_{ij.k} = 1;  p_{i.k} ≥ p_{ij.k}   (19)

On the basis of the concepts above and [51], [52], the convergence of MAEA-CSPs is proved.

Theorem 2: [53] Let P′: n′×n′ be a reducible stochastic matrix, which means that by applying the same permutations to rows and columns, P′ can be brought into the form [C 0; R T], where C: m′×m′ is a primitive stochastic matrix and R, T ≠ 0. Then

  P′^∞ = lim_{k→∞} P′^k = lim_{k→∞} [C^k 0; Σ_{i=0}^{k−1} T^i R C^{k−i}  T^k] = [C^∞ 0; R^∞ 0]   (20)

is a stable stochastic matrix with P′^∞ = 1′p^∞, where p^∞ = p^0 P′^∞ is unique regardless of the initial distribution, and p^∞ satisfies: p^∞_i > 0 for 1 ≤ i ≤ m′ and p^∞_i = 0 for m′ < i ≤ n′.

Theorem 3: Let P=〈P1, P2, …, Pn〉, Q=〈Q1, Q2, …, Qn〉 ∈ SP, and P ≠ Q. Let p_{Q→P} be the probability of transforming Q into P by Algorithm 2. Define Diff_{Q→P} = { Q_i | i ∈ {1, 2, …, n} and P_i ≠ Q_i } and Same_{Q→P} = { Q_i | i ∈ {1, 2, …, n} and P_i = Q_i }. Then

  p_{Q→P} ≥ (1−pc)^{|Same_{Q→P}|+1} · (pc · 1/(n−1))^{|Diff_{Q→P}|−1} > 0   (21)

Proof: When performing Algorithm 2 on Q, the probability that the elements in Same_{Q→P} are left untouched is

  p_Same = (1−pc)^{|Same_{Q→P}|}   (22)

For each element in Diff_{Q→P}, if U(0, 1) is smaller than pc, it is swapped with another element in Q. If, for every Q_i ∈ Diff_{Q→P}, the swap operation selects Q_l ∈ Diff_{Q→P} with Q_l = P_i, then Same_{Q→P} is left untouched. Such swaps can make the last two elements in Diff_{Q→P} become identical with the corresponding ones in P by one swap, so the maximum number of swap operations is |Diff_{Q→P}|−1. Then,

  p_{Q→P} ≥ p_Same · (1−pc) · (pc · 1/(n−1))^{|Diff_{Q→P}|−1} = (1−pc)^{|Same_{Q→P}|+1} · (pc · 1/(n−1))^{|Diff_{Q→P}|−1} > 0   (23)  ∎

Theorem 4: In multiagent evolutionary algorithm for constraint satisfaction problems, ∀i, p_{i.k} > 0 for k ≥ i, and p_{i.k} = 0 for k < i.

Proof: Let the agent lattice in the tth generation be labeled as L^t, let CSAgent^t_tBest be the best CSAgent in L^t, and let CSAgent^t_tBest(E) = i. Then, (24) can be obtained according to Algorithm 4:

  Energy(L^{t+1}) ≥ Energy(L^t)  ⇒  ∀k < i, p_{i.k} = 0   (24)

For permutation CSPs, since CSAgent^t_tBest is the best CSAgent in L^t, it must win over its neighbors in the competitive behavior, and will generate a child according to Algorithm 2. According to theorem 3, the probability of generating a given CSAgent′ from CSAgent^t_tBest is

  p_{CSAgent^t_tBest → CSAgent′} ≥ (1−pc)^{|Same|+1} · (pc · 1/(n−1))^{|Diff|−1} > 0   (25)

where Same and Diff are defined as in Theorem 3 with respect to CSAgent^t_tBest and CSAgent′. For non-permutation CSPs, the self-learning behavior or the mutation behavior is performed on CSAgent^t_tBest. Obviously, p_{CSAgent^t_tBest → CSAgent′} > 0 if the self-learning behavior is performed on CSAgent^t_tBest. If the mutation behavior is performed on CSAgent^t_tBest, suppose that there are n₁ variables in CSAgent^t_tBest(V) that are different from the corresponding ones in CSAgent′(V).
Then, the probability of generating CSAgent′ from t CSA tBest gent by the mutation behavior is: 1 1 t (1 ) 0 tBest n nn m m p pp − → ′ = ⋅− > CSAgent CSAgent (26) So the probability of transition from Lij to any agent lattice in Lk is . t 0 tBest ij k p p → ′ ≥ > CSAgent CSAgent (27) Therefore, ∀k≥i, pi.k≥pij.k>0. It follows from this theorem that there is always a positive probability to transit from an agent lattice to the one with identical or higher energy and a zero probability to the one with lower energy. Thus, once MAEA-CSPs enters L0 , it will never escape. Theorem 5: Multiagent evolutionary algorithm for constraint satisfaction problems converges to the global optimum as time tends to infinity. Proof: It is clear that one can consider each Li , i∈E, as a state in a homogeneous finite Markov chain. According to theorem 4, the transition matrix of the Markov chain can be written as follows: 0.0 -1.0 -1.-1 - .0 - .-1 - .- 0 0 0 m m mm p p p pp p ′ = 0 C P = R T (28) Obviously, R=(p-1.0 p-2.0 … p-m.0) T >0, T≠0, C=(p0.0)=(1)≠0. According to theorem 2, P′ ∞ is given by, 1 =0 lim lim k k k - i ki k k k i ∞ ∞ →∞ →∞ − ∞ ′ ′ = = ∑ 0 0 0 C C P= P T RC T R (29) where C∞=(1), R∞=(1 1 … 1)T . Thus, P′∞ is a stable stochastic
SMCB-E-03172004-0141.R2 9 matrix,and evaluations to solution (AES),which has been widely used to 10. 0 measure the computational cost of EAs.For a given algorithm,it 10 年 0 is defined as the average number of evaluations of successful 三 (30)runs.Consequently.when the SR=0.the AES is undefined.Note also,when the number of successful runs is relatively low,the 10.0 AES is not statistically reliable.Therefore,it is important to Therefore, consider the SR and AES together to make a clear interpretation lim Pr(Energv(L)=0)=1 (31) of the results.Among the three measures,the most important is the success rate since we ultimately want to solve problems.The where Pr stands for the probability.This implies that multiagent second is the AES,in the case of comparable SR figures,we evolutionary algorithm for constraint satisfaction problems prefer the faster EA.The ME is only used as a third measure as converges to the global optimum as time tends to infinity. it might suggest preferences between EAs failed. 
IV.EXPERIMENTAL STUDIES ON NON-PERMUTATION CONSTRAINT A.I Comparison between MAEA-CSPs and the existing SATISFACTION PROBLEMS algorithms Reference [54]indicates that any CSP can be equivalently Reference [29]has made a comparison among 11 available transformed to a binary CSP.Since binary CSPs belong to algorithms,and the results show that the best four algorithms are non-permutation CSPs,they are used to test the performance of H-GA.120].211.H-GA.3[201.[21l.SAW2I.[28l,and MAEA-CSPs in the section.Systematic analyses are also made Glass-Box [19].Therefore.we reimplemented the four to check the effect of the parameters,pe and p Afterwards, algorithms according to the descriptions and parameters given MAEA-CSPs is used to solve a more practical case,graph in [29],and made a comparison between them and coloring problems.Four parameters must be assigned before MAEA-CSPs.The results of the four algorithms are consistent MAEA-CSPs is used to solve problems.LL is equivalent with those of [29].The comparison results are shown in Fig.2 to the population size in traditional EAs,so Lis generally In order to facilitate other researchers to make a comparison selected from 3 to 10,and is set to 5 in the following between MAEA-CSPs and their algorithms,the results of experiments.Pe and pim control the competitive behavior and the MAEA-CSPs are also shown in Table I. mutation behavior,and are set to 0.2 and 0.05.Evaluation is As can be seen,the SR of MAEA-CSPs is the highest among set to 10%for binary CSPs,and 10 for GCPs.All experiments the five algorithms with 100%SR for p=0.24-0.26.Since the are executed on a 2.4-GHz Pentium IV PC with 1G RAM. 
ME of SAW is much larger than those of the other algorithms, the ME of SAW and MAEA-CSPs is compared in top-right of A.Binary constraint satisfaction problems Fig.2(b).Fig.2(b)shows that the ME of MAEA-CSPs is also the The experiments are conducted on the test suite [78]used to best among the five algorithms.Since the AES is not statistically compare the performances of 11 available algorithms in [29]. reliable when the SR is very low,only the AES of the instances The suite consists of 250 solvable problem instances.Each whose SR is larger than 10%is plotted in Fig.2(c).All the SR of instance has 20 variables,and the domain set is D={DD=(1, the five algorithms for p=0.30-0.33 is very low.But the ME of 2,..20)and 1ss20).The 250 instances are divided into 10 MAEA-CSPs is small,and only 1 to 3 constraints are not groups according to their difficulty,p=0.24,0.25,0.26,0.27, satisfied.To summarize,MAEA-CSPs outperforms the four 0.28,0.29,0.30,0.31,0.32,0.33},and each group has25 other algorithms. instances.The higher the value of p,the more difficult the instance is.We perform 10 independent runs on each of the 25 A.2 Parameter analyses of MAEA-CSPs instances belonging to a given p value,amounting to 250 data In this experiment,pe and p are increased from 0.05 to 1 in points used for calculating our measures. 
steps of 0.05,with 20x20=400 groups of parameters obtained The method used to measure the performance of MAEA-CSPs with the 400 groups of parameters is used to solve MAEA-CSPs is similar to that of [29],namely,the effectiveness the 10 groups of instances.According to the SR,the 10 groups and the efficiency are considered.The effectiveness is measured of instances can be divided into 3 classes.The first class by the success rate(SR)and the mean error (ME).The success includes the instances with low p values,that is,p=0.24-0.26 rate is the percentage of runs finding a solution.Since all Since the SR for this class is higher than 90%for most of the instances in the test suite are solvable,the highest value of the parameters and the ME is very low,the graphs for the SR and SR is 100%.The error is defined for a single run as the number the AES are shown in Fig.3.The second class includes the of constraints not satisfied by the best CSAgent in the agent instances with p varying from 0.27 to 0.29.Since the SR for this lattice when the run terminates.For any given set of runs,the class is lower,the graphs for the SR and the ME are shown in ME is the average of these error values.This measure provides Fig.4.The last class includes the instances with high p values. information on the quality of partial solutions.This can be a Since the instances in this class are very difficult,only the useful way of comparing algorithms having equal SR values graphs for the ME are shown in Fig.5. lower than 100%. As can be seen,pe has a larger effect on the performance of The efficiency is measured by the average number of MAEA-CSPs.For the first class.the SR is higher than 90%
SMCB-E-03172004-0141.R2 9 matrix, and 10 0 10 0 10 0 ∞ ′ P = (30) Therefore, lim { ( ) 0} 1 t t Pr Energy →∞ L = = (31) where Pr stands for the probability. This implies that multiagent evolutionary algorithm for constraint satisfaction problems converges to the global optimum as time tends to infinity. IV. EXPERIMENTAL STUDIES ON NON-PERMUTATION CONSTRAINT SATISFACTION PROBLEMS Reference [54] indicates that any CSP can be equivalently transformed to a binary CSP. Since binary CSPs belong to non-permutation CSPs, they are used to test the performance of MAEA-CSPs in the section. Systematic analyses are also made to check the effect of the parameters, pc and pm. Afterwards, MAEA-CSPs is used to solve a more practical case, graph coloring problems. Four parameters must be assigned before MAEA-CSPs is used to solve problems. Lsize×Lsize is equivalent to the population size in traditional EAs, so Lsize is generally selected from 3 to 10, and is set to 5 in the following experiments. pc and pm control the competitive behavior and the mutation behavior, and are set to 0.2 and 0.05. EvaluationMax is set to 105 for binary CSPs, and 104 for GCPs. All experiments are executed on a 2.4-GHz Pentium IV PC with 1G RAM. A. Binary constraint satisfaction problems The experiments are conducted on the test suite [78] used to compare the performances of 11 available algorithms in [29]. The suite consists of 250 solvable problem instances. Each instance has 20 variables, and the domain set is D={Di | Di={1, 2, …, 20} and 1≤i≤20}. The 250 instances are divided into 10 groups according to their difficulty, p={0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.30, 0.31, 0.32, 0.33}, and each group has 25 instances. The higher the value of p, the more difficult the instance is. We perform 10 independent runs on each of the 25 instances belonging to a given p value, amounting to 250 data points used for calculating our measures. 
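The exact generation model of the test suite is that of [78]; purely as a rough illustration (the generator below, its parameter names, and the split of p into density and tightness are our assumptions, not the suite's actual construction), a random binary CSP in the same spirit can be sketched as follows:

```python
# Illustrative random binary CSP generator (NOT the generator of [78]):
# n variables with domain {1..d}; each variable pair is constrained with
# probability `density`, and each value pair of a constrained variable
# pair is forbidden with probability `tightness`.
import itertools
import random

def random_binary_csp(n, d, density, tightness, seed=0):
    rng = random.Random(seed)
    forbidden = {}  # (i, j) -> set of forbidden (d_i, d_j) value pairs
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < density:
            forbidden[(i, j)] = {(a, b)
                                 for a in range(1, d + 1)
                                 for b in range(1, d + 1)
                                 if rng.random() < tightness}
    return forbidden

def conflicts(forbidden, assignment):
    """Number of binary constraints violated by a full assignment."""
    return sum(1 for (i, j), pairs in forbidden.items()
               if (assignment[i], assignment[j]) in pairs)

csp = random_binary_csp(n=20, d=20, density=0.3, tightness=0.26)
print(conflicts(csp, [1] * 20))
```

The `conflicts` count corresponds to the error measure used below: the number of constraints left unsatisfied by an assignment.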
The method used to measure the performance of MAEA-CSPs is similar to that of [29], namely, the effectiveness and the efficiency are considered. The effectiveness is measured by the success rate (SR) and the mean error (ME). The success rate is the percentage of runs finding a solution. Since all instances in the test suite are solvable, the highest value of the SR is 100%. The error is defined for a single run as the number of constraints not satisfied by the best CSAgent in the agent lattice when the run terminates. For any given set of runs, the ME is the average of these error values. This measure provides information on the quality of partial solutions, and can be a useful way of comparing algorithms having equal SR values lower than 100%.

The efficiency is measured by the average number of evaluations to solution (AES), which has been widely used to measure the computational cost of EAs. For a given algorithm, it is defined as the average number of evaluations of successful runs. Consequently, when SR=0, the AES is undefined. Note also that when the number of successful runs is relatively low, the AES is not statistically reliable. Therefore, it is important to consider the SR and AES together to make a clear interpretation of the results. Among the three measures, the most important is the success rate, since we ultimately want to solve problems. The second is the AES: in the case of comparable SR figures, we prefer the faster EA. The ME is only used as a third measure, as it might suggest preferences between EAs that have failed.

A.1 Comparison between MAEA-CSPs and the existing algorithms

Reference [29] has made a comparison among 11 available algorithms, and the results show that the best four algorithms are H-GA.1 [20], [21], H-GA.3 [20], [21], SAW [27], [28], and Glass-Box [19]. Therefore, we reimplemented the four algorithms according to the descriptions and parameters given in [29], and made a comparison between them and MAEA-CSPs.
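The three measures above can be stated concretely. The sketch below (the run records are hypothetical; this is not the paper's code) computes SR, ME, and AES from a list of per-run records:

```python
# Sketch: computing SR, ME, and AES as defined above.
# Each run record is a (solved, error, evaluations) tuple; `error` is the
# number of unsatisfied constraints at termination.

def summarize(runs):
    """Return (SR, ME, AES) for a list of (solved, error, evaluations) runs."""
    total = len(runs)
    successes = [r for r in runs if r[0]]
    sr = len(successes) / total              # fraction of runs finding a solution
    me = sum(r[1] for r in runs) / total     # mean error over ALL runs
    # AES averages evaluations over successful runs only; undefined when SR = 0
    aes = (sum(r[2] for r in successes) / len(successes)) if successes else None
    return sr, me, aes

runs = [(True, 0, 1200), (True, 0, 800), (False, 2, 10000), (False, 3, 10000)]
print(summarize(runs))  # (0.5, 1.25, 1000.0)
```

Note how AES ignores the failed runs entirely, which is why it becomes unreliable when only a handful of runs succeed.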
The results of the four algorithms are consistent with those of [29]. The comparison results are shown in Fig. 2. In order to facilitate other researchers in making comparisons between MAEA-CSPs and their algorithms, the results of MAEA-CSPs are also shown in Table I.

As can be seen, the SR of MAEA-CSPs is the highest among the five algorithms, with a 100% SR for p=0.24-0.26. Since the ME of SAW is much larger than those of the other algorithms, the ME of SAW and MAEA-CSPs is compared in the top-right of Fig. 2(b). Fig. 2(b) shows that the ME of MAEA-CSPs is also the best among the five algorithms. Since the AES is not statistically reliable when the SR is very low, only the AES of the instances whose SR is larger than 10% is plotted in Fig. 2(c). The SRs of all five algorithms for p=0.30-0.33 are very low, but the ME of MAEA-CSPs is small, with only 1 to 3 constraints left unsatisfied. To summarize, MAEA-CSPs outperforms the four other algorithms.

A.2 Parameter analyses of MAEA-CSPs

In this experiment, p_c and p_m are increased from 0.05 to 1 in steps of 0.05, so that 20×20=400 groups of parameters are obtained. MAEA-CSPs with the 400 groups of parameters is used to solve the 10 groups of instances. According to the SR, the 10 groups of instances can be divided into 3 classes. The first class includes the instances with low p values, that is, p=0.24-0.26. Since the SR for this class is higher than 90% for most of the parameters and the ME is very low, the graphs for the SR and the AES are shown in Fig. 3. The second class includes the instances with p varying from 0.27 to 0.29. Since the SR for this class is lower, the graphs for the SR and the ME are shown in Fig. 4. The last class includes the instances with high p values. Since the instances in this class are very difficult, only the graphs for the ME are shown in Fig. 5.

As can be seen, p_c has a larger effect on the performance of MAEA-CSPs. For the first class, the SR is higher than 90%
when p_c is larger than 0.1. Although the AES increases with p_c when p_c is larger than 0.2, the AES is smaller when p_c is in 0.1-0.3. For the second and the last class, the results are similar, namely, when p_c is in 0.1-0.3, the SR is higher and the ME is smaller. Although p_m does not obviously affect the performance of MAEA-CSPs, Fig. 4 and Fig. 5 show that the SR is slightly higher and the ME is slightly smaller when p_m is small.

To summarize, it is better to choose p_c from 0.1-0.3 and p_m from 0.05-0.3. In addition, although the performance of MAEA-CSPs with the above parameters is better, MAEA-CSPs still performs stably when p_c is larger than 0.2. This shows that the performance of MAEA-CSPs is not sensitive to the parameters, and that MAEA-CSPs is quite robust and easy to use.

B. Graph coloring problems

Many problems of practical interest can be modeled as graph coloring problems. The purpose is to color each vertex of a graph with a certain color, such that no two vertices joined by an edge have the same color. Let G={V, E} be an undirected graph, where V={V1, V2, …, Vn} is the set of vertices, and E={e_{i,j} | V_i and V_j are connected} is the set of edges. An m-coloring problem can be described as follows when each vertex is represented as a variable, and the domain corresponds to the given set of m colors:

x = 〈x1, x2, …, xn〉, x_i represents V_i ∈ V,
D = {D1, D2, …, Dn}, D_i = {1, 2, …, m}, i = 1, 2, …, n,
C = { C(〈x_i, x_j〉 = 〈d, d〉) | ∀i, j ∈ {1, 2, …, n}, ∀d ∈ {1, 2, …, m}, e_{i,j} ∈ E }   (32)

In what follows, MCD for GCPs is described first, and then MAEA-CSPs is tested on the official benchmarks from the DIMACS graph coloring challenge [79]. Finally, a comparison is made between MAEA-CSPs and two recent algorithms.
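In this formulation, each constraint forbids one monochromatic value pair on one edge, so the number of violated constraints is simply the number of monochromatic edges. A minimal sketch (the toy graph and function name are ours, for illustration only):

```python
# Sketch: counting violated constraints of the m-coloring CSP above,
# i.e. edges whose two endpoints received the same color.

def violated(edges, coloring):
    """Number of edges whose endpoints share a color."""
    return sum(1 for (i, j) in edges if coloring[i] == coloring[j])

# A 4-cycle 0-1-2-3-0; it is 2-colorable.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(violated(edges, {0: 1, 1: 2, 2: 1, 3: 2}))  # 0: a proper 2-coloring
print(violated(edges, {0: 1, 1: 1, 2: 1, 3: 2}))  # 2: edges (0,1) and (1,2) clash
```

This count is exactly the quantity "c" reported in the experiments below.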
B.1 Minimum conflict decoding for graph coloring problems

Since the constraints only occur between two connected vertices, and the degree of a vertex is in general far smaller than |V|, we can record the degree of each vertex and the list of vertices connected to it in advance, and only check these lists when performing MCD.

Algorithm 5 Minimum conflict decoding for graph coloring problems
Input: CSAgent: the CSAgent to be decoded;
       Pos: the position to start decoding;
Output: CSAgent(V);
Let CSAgent(P)=〈x_{P1}, x_{P2}, …, x_{Pn}〉, CSAgent(V)=〈v1, v2, …, vn〉, degree_{Pi} stand for the degree of vertex x_{Pi}, and vertex_{Pi}^j for the jth vertex connected to x_{Pi}.

begin
  if (Pos=1) then
  begin
    v_{P1} := 1;
    i := 2;
  end
  else i := Pos;
  repeat
    for j:=1 to m do Conflicts_j := 0;
    for j:=1 to degree_{Pi} do
      if (vertex_{Pi}^j has been assigned a value) then
      begin
        Color := the color of vertex_{Pi}^j;
        Conflicts_Color := Conflicts_Color + 1;
      end;
    Color := j, where Conflicts_j is minimum among Conflicts_1, Conflicts_2, …, Conflicts_m;
    v_{Pi} := Color;
    i := i+1;
  until (i>n);
end.

In Algorithm 5, the second "for" loop only considers the vertices connected to x_{Pi} that have previously been assigned values. After this loop scans once, the number of already-colored neighbors using each color has been computed. This makes the algorithm more efficient.

B.2 Experimental results of MAEA-CSPs

Presently, the DIMACS challenge has 79 problems. These problems include various graphs, such as random graphs (DSJCx.y), geometric random graphs (DSJRx.y), "quasirandom" graphs (flatx_y_0), and graphs from real-life applications (fpsol2.i.x, inithxi.x, mulsol.i.1, and zeroin.i.1). To investigate the performance of MAEA-CSPs extensively, all 79 problems are considered, and the results averaged over 10 independent runs are shown in Table II. In the DIMACS challenge, some problems counted each edge once whereas others counted it twice. Since the graphs are undirected, |E| in Table II counts each edge once.
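The greedy step of Algorithm 5 can be sketched in Python as follows; this is a minimal illustration (the toy graph and variable names are ours, and the Pos restart logic of the original is omitted, with ties broken by the lowest color index):

```python
# Sketch of Algorithm 5: greedy minimum-conflict decoding for graph coloring.
# `perm` plays the role of CSAgent(P) (the decoding order) and `adj` holds
# the precomputed adjacency lists mentioned above.

def mcd(perm, adj, m):
    """Color the vertices in the order given by perm with colors 1..m,
    each vertex taking a color least used among its already-colored
    neighbors; returns the color vector v."""
    v = {perm[0]: 1}                   # the first vertex gets color 1
    for x in perm[1:]:
        conflicts = [0] * (m + 1)      # conflicts[c]: colored neighbors using c
        for nb in adj[x]:
            if nb in v:
                conflicts[v[nb]] += 1
        v[x] = min(range(1, m + 1), key=lambda c: conflicts[c])
    return v

# A 4-cycle 0-1-2-3-0 decoded in natural order with m=2 colors.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(mcd([0, 1, 2, 3], adj, 2))  # {0: 1, 1: 2, 2: 1, 3: 2}: a proper 2-coloring
```

As in the paper's version, each vertex only inspects its own adjacency list, so one decoding pass costs time proportional to the sum of the degrees, i.e. O(|E|), rather than O(|V|^2).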
The value of m is set to the number of optimal colors when it is known, and to the smallest value we could find in the literature (values in parentheses) when it is unknown. In Table II, "c" stands for the number of constraints not satisfied. Since each constraint corresponds to an edge, "c" is the number of edges whose two vertices have the same color. As can be seen, MAEA-CSPs finds exact solutions in all 10 runs for 33 out of the 79 problems, and c for these problems is 0. Moreover, the running times for these problems are very short. "(1-c/|E|)×100" indicates the percentage of edges colored properly. Among the 79 problems, there are 66 problems for which MAEA-CSPs colors more than 99% of the edges properly, 9 problems for which it colors 98-99% of the edges properly, 2 problems for which it colors 97-98% of the edges properly, and 2 problems for which it colors 94-95% of