GRAPH THEORY

Pr[v ∈ Y] = Pr[v and its neighbors are not in X] ≤ (1 − p)^(δ+1).

Since the expected value of a sum of random variables is the sum of their expectations (even if they are not independent), and since the random variable |Y| can be written as a sum of n indicator random variables χ_v (v ∈ V), where χ_v = 1 if v ∈ Y and χ_v = 0 otherwise, we conclude that the expected value of |X| + |Y| is at most np + n(1 − p)^(δ+1). Consequently, there is at least one choice of X ⊆ V such that |X| + |Y_X| ≤ np + n(1 − p)^(δ+1). The set U = X ∪ Y_X is clearly a dominating set of G whose cardinality is at most this size.

The above argument works for any p ∈ [0, 1]. To optimize the result we use elementary calculus. For convenience, we bound 1 − p ≤ e^(−p) (this holds for all nonnegative p and is a fairly close bound when p is small) to give the simpler bound

|U| ≤ np + ne^(−p(δ+1)).

Take the derivative of the right-hand side with respect to p and set it equal to zero. The right-hand side is minimized at

p = ln(δ + 1) / (δ + 1).

Formally, we set p equal to this value in the first line of the proof. We now have

|U| ≤ n (1 + ln(δ + 1)) / (δ + 1),

as claimed. ◾

Three simple but important ideas are incorporated in the last proof. The first is the linearity of expectation; many applications of this simple, yet powerful principle appear in Chapter 2.
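The two-step construction used in the proof can be sketched in code. This is a minimal illustration, not from the text: the function name `random_dominating_set` and the adjacency-dict representation of the graph are my own choices.

```python
import math
import random

def random_dominating_set(adj, delta, seed=0):
    """Sketch of the proof's construction. `adj` maps each vertex to its
    set of neighbors; every degree is assumed to be at least `delta`."""
    rng = random.Random(seed)
    p = math.log(delta + 1) / (delta + 1)  # the optimal p from the proof
    # Step 1: put each vertex into X independently with probability p.
    X = {v for v in adj if rng.random() < p}
    # Step 2 (alteration): Y_X is the set of vertices dominated neither by
    # themselves nor by a neighbor, i.e., with no vertex of X in their
    # closed neighborhood.
    Y = {v for v in adj if v not in X and not (adj[v] & X)}
    return X | Y  # U = X ∪ Y_X is a dominating set by construction

# Example: the 5-cycle, which has minimum degree delta = 2.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
U = random_dominating_set(C5, delta=2)
# Every vertex is in U or has a neighbor in U:
assert all(v in U or (C5[v] & U) for v in C5)
```

By construction any vertex outside U has a neighbor in X ⊆ U, so U is always dominating; only its size is random, and the proof shows its expected size is at most n(1 + ln(δ + 1))/(δ + 1).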
The second is perhaps more subtle and is an example of the "alteration" principle that is discussed in Chapter 3. The random choice did not supply the required dominating set U immediately; it only supplied the set X, which has to be altered a little (by adding to it the set Y_X) to provide the required dominating set. The third involves the optimal choice of p. One often wants to make a random choice but is not certain what probability p should be used. The idea is to carry out the proof with p as a parameter, giving a result that is a function of p. At the end, that p is selected which gives the optimal result. Here, there is yet a fourth idea that might be called asymptotic calculus. We want the asymptotics of min np + n(1 − p)^(δ+1), where p ranges over [0, 1]. The actual minimum p = 1 − (δ + 1)^(−1/δ) is difficult to deal with, and in many similar cases precise minima are impossible to find in closed form. Rather, we give away a little bit, bounding 1 − p ≤ e^(−p), yielding a clean bound. A good part of the art of the probabilistic method lies in finding suboptimal but clean bounds. Did we give away too much in this case? The answer depends on the emphasis for the original question. For δ = 3, our rough bound gives |U| ≤ 0.596n, while the more precise calculation gives |U| ≤ 0.496n, perhaps a substantial difference. For δ large, both methods give asymptotically n ln δ/δ. It can easily be deduced from the results in Alon (1990b) that the bound in Theorem 1.2.2 is nearly optimal. A non-probabilistic, algorithmic proof of this theorem can be obtained by choosing the vertices for the dominating set one by
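The calculus step above can be checked numerically. This is a sketch under my own naming (`g` and `p_star` are not from the text): it verifies that p = ln(δ + 1)/(δ + 1) minimizes the per-vertex form of the simplified bound, p + e^(−p(δ+1)), and that the minimum value is (1 + ln(δ + 1))/(δ + 1), which for δ = 3 is the rough bound ≈ 0.596.

```python
import math

def g(p, delta):
    """Per-vertex form of the simplified bound: p + e**(-p(delta+1))."""
    return p + math.exp(-p * (delta + 1))

def p_star(delta):
    """Root of dg/dp = 1 - (delta+1) * e**(-p(delta+1)) = 0."""
    return math.log(delta + 1) / (delta + 1)

delta = 3
best = p_star(delta)      # ln(4)/4, about 0.347
bound = g(best, delta)    # (1 + ln(4))/4, about 0.5966

# g is convex (its second derivative is positive), so a grid search
# over [0, 1] should find nothing better than p_star:
assert all(g(k / 1000, delta) >= bound - 1e-12 for k in range(1001))
```

Substituting p* back gives g(p*) = ln(δ + 1)/(δ + 1) + e^(−ln(δ + 1)) = (1 + ln(δ + 1))/(δ + 1), matching the bound stated in the proof.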