GRAPH THEORY 5

if they are not independent) and since the random variable $|Y|$ can be written as a sum of $n$ indicator random variables $\chi_v$ ($v \in V$), where $\chi_v = 1$ if $v \in Y$ and $\chi_v = 0$ otherwise, we conclude that the expected value of $|X| + |Y|$ is at most $np + n(1-p)^{\delta+1}$. Consequently, there is at least one choice of $X \subseteq V$ such that $|X| + |Y_X| \le np + n(1-p)^{\delta+1}$. The set $U = X \cup Y_X$ is clearly a dominating set of $G$ whose cardinality is at most this size.

The above argument works for any $p \in [0,1]$. To optimize the result we use elementary calculus.
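The two-step construction in the proof, a random set $X$ followed by the alteration $Y_X$, can be sketched in code. This is an illustrative sketch, not from the text: the adjacency-dict graph representation and the helper names `random_dominating_set` and `is_dominating` are assumptions made for the example.

```python
import random

def random_dominating_set(adj, p, rng=None):
    """Sketch of the proof's construction for a graph given as an
    adjacency dict {v: set of neighbors}.

    Step 1: put each vertex into X independently with probability p.
    Step 2 (the alteration): Y_X collects every vertex that is neither
    in X nor adjacent to X; adding it makes U = X ∪ Y_X dominating.
    """
    rng = rng or random.Random(0)
    X = {v for v in adj if rng.random() < p}
    Y_X = {v for v in adj
           if v not in X and not (adj[v] & X)}  # undominated by X
    return X | Y_X

def is_dominating(adj, U):
    """Every vertex is in U or has a neighbor in U."""
    return all(v in U or adj[v] & U for v in adj)

# Tiny example: the 4-cycle 0-1-2-3-0, minimum degree delta = 2.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
U = random_dominating_set(cycle, p=0.5)
assert is_dominating(cycle, U)
```

Whatever $X$ the random step produces, the alteration step repairs it, which is exactly why the bound holds for *some* choice of $X$: the expectation bounds the size of the best outcome.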
For convenience we bound $1 - p \le e^{-p}$ (this holds for all nonnegative $p$ and is a fairly close bound when $p$ is small) to give the simpler bound

$$|U| \le np + ne^{-p(\delta+1)}.$$

Take the derivative of the right-hand side with respect to $p$ and set it equal to zero. The right-hand side is minimized at

$$p = \frac{\ln(\delta+1)}{\delta+1}.$$

Formally, we set $p$ equal to this value in the first line of the proof. We now have $|U| \le n[1 + \ln(\delta+1)]/(\delta+1)$, as claimed. ∎

Three simple but important ideas are incorporated in the last proof. The first is the linearity of expectation; many applications of this simple, yet powerful principle appear in Chapter 2. The second is perhaps more subtle and is an example of the "alteration" principle that is discussed in Chapter 3. The random choice did not supply the required dominating set $U$ immediately; it only supplied the set $X$, which has to be altered a little (by adding to it the set $Y_X$) to provide the required dominating set. The third involves the optimal choice of $p$. One often wants to make a random choice but is not certain what probability $p$ should be used. The idea is to carry out the proof with $p$ as a parameter, giving a result that is a function of $p$. At the end, that $p$ is selected which gives the optimal result. There is here yet a fourth idea that might be called asymptotic calculus. We wanted the asymptotics of $\min np + n(1-p)^{\delta+1}$, where $p$ ranges over $[0,1]$. The actual minimum $p = 1 - (\delta+1)^{-1/\delta}$ is difficult to deal with, and in many similar cases precise minima are impossible to find in closed form. Rather, we give away a little bit, bounding $1 - p \le e^{-p}$, yielding a clean bound. A good part of the art of the probabilistic method lies in finding suboptimal but clean bounds. Did we give away too much in this case? The answer depends on the emphasis for the original question. For $\delta = 3$ our rough bound gives $|U| \le 0.596n$ while the more precise calculation gives $|U| \le 0.496n$, perhaps a substantial difference.
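The "give away a little" step can be checked numerically. The sketch below works per vertex (i.e. with $n = 1$, a normalization assumed for the example): it evaluates the clean closed-form bound $[1+\ln(\delta+1)]/(\delta+1)$ and grid-minimizes the exact objective $p + (1-p)^{\delta+1}$ to confirm the clean bound is only a slight overestimate. The function names and the grid resolution are arbitrary choices for illustration.

```python
from math import log

def rough_bound(delta):
    """Clean per-vertex bound: minimize p + e^{-p(delta+1)} at
    p = ln(delta+1)/(delta+1), giving (1 + ln(delta+1))/(delta+1)."""
    return (1 + log(delta + 1)) / (delta + 1)

def exact_min(delta, steps=100_000):
    """Grid-minimize the exact objective p + (1-p)^(delta+1) over [0,1]."""
    return min(k / steps + (1 - k / steps) ** (delta + 1)
               for k in range(steps + 1))

delta = 3
print(round(rough_bound(delta), 3))            # → 0.597
print(exact_min(delta) <= rough_bound(delta))  # → True: gave away only a little
```

For $\delta = 3$ the clean bound is $(1+\ln 4)/4 \approx 0.597$, matching the $0.596n$-scale figure in the text, and the grid minimum of the exact objective lies below it, so the substitution $1-p \le e^{-p}$ costs only a modest constant factor while yielding a closed form.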
For $\delta$ large both methods give asymptotically $n \ln \delta / \delta$. It can easily be deduced from the results in Alon (1990b) that the bound in Theorem 1.2.2 is nearly optimal. A non-probabilistic, algorithmic proof of this theorem can be obtained by choosing the vertices for the dominating set one by one, where in each step a vertex that covers the maximum number of yet-uncovered vertices is picked. Indeed, for each vertex $v$ denote by $C(v)$ the set consisting of $v$ together with all its neighbors. Suppose that during the process of picking vertices