Let $G=(V,E)$ be any graph. A set $S \subseteq V$ is called independent if no edge $e \in E$ has both endpoints in $S$. Denote by $\alpha(G)$ the independence number of $G$, that is, the size of a largest independent set of $G$. Let $k_0$ be defined as follows:
\[
k_0 = k_0(n,p) = \max\left\{ k \in \mathbb{N} : \binom{n}{k}(1-p)^{\binom{k}{2}} \ge n^4 \right\}.
\]
It is well known that $k_0$ is well defined (for all $n$ sufficiently large and provided that $p \le 1-\varepsilon$ for some $\varepsilon > 0$) and that $k_0 \sim 2\log_b(np)$ with $b = 1/(1-p)$. The following result was proved in [8].

Theorem 4 ([8]). Suppose $n^{-2/5}\log^{6/5} n \ll p \le 1-\varepsilon$ for some constant $\varepsilon > 0$. Let $G \in \mathcal{G}(n,p)$. Then
\[
\Pr\big(\alpha(G) < k_0\big) = \exp\left( -\Omega\left( \frac{n^2}{k_0^4\, p} \right) \right).
\]
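To make the definition of $k_0$ concrete, here is a minimal numerical sketch of our own (not taken from the paper or from [8]): it evaluates $\binom{n}{k}(1-p)^{\binom{k}{2}}$ in logarithmic form to avoid overflow, returns the largest $k$ for which this quantity is at least $n^4$ as in the definition above, and compares the result with the asymptotic value $2\log_b(np)$. The parameter choices in the driver are arbitrary.

```python
from math import lgamma, log


def log_binom(n: int, k: int) -> float:
    """Natural logarithm of the binomial coefficient C(n, k)."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)


def k0(n: int, p: float) -> int:
    """Largest k with C(n, k) * (1 - p)^C(k, 2) >= n^4.

    Returns 0 if no such k exists; the text guarantees existence only
    for n sufficiently large and p <= 1 - eps.
    """
    threshold = 4 * log(n)
    best = 0
    for k in range(1, n + 1):
        if log_binom(n, k) + (k * (k - 1) / 2) * log(1 - p) >= threshold:
            best = k
    return best


if __name__ == "__main__":
    n, p = 10**5, 0.1            # arbitrary illustrative values
    b = 1 / (1 - p)
    print("k0          =", k0(n, p))
    print("2 log_b(np) =", 2 * log(n * p) / log(b))
```

At moderate $n$ the two printed values agree only to first order; the relative gap shrinks slowly, since the correction term is of order $\log\log(np)/\log(np)$, so rough agreement is all one should expect here.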
We obtain the following immediate corollary that will be useful to deal with dense random graphs. In fact, this corollary is the bottleneck of the argument that prevents us from extending Theorem 1 to sparser graphs.

Corollary 5. Let $\varepsilon > 0$ be any constant and let $\omega = \omega(n) = o(\log n)$ be any function tending to infinity as $n \to \infty$. Suppose that
\[
\omega (\log\log n)^{1/3} (\log n)^2 n^{-1/3} \le p \le 1-\varepsilon.
\]
Let $G \in \mathcal{G}(n,p)$. Then, a.a.s. every set $S \subseteq V(G)$ with $|S| = s \ge s_0 := n/(\omega \log^2 n)$ contains an independent set of size $k_0 = k_0(s,p) \sim k_0(n,p)$.

Proof. Fix a set $S \subseteq V(G)$ with $|S| = s \ge n/(\omega \log^2 n)$. First, let us note that
\[
k_0 = k_0(s,p) \sim 2\log_b(sp) = 2\log_b(np)\left(1 - O\left(\frac{\log\log n}{\log n}\right)\right) \sim k_0(n,p).
\]
Moreover, since $n/(\omega\log^2 n) \le s \le n$, we can easily verify that $p$ satisfies $s^{-2/5}\log^{6/5} s \ll p \le 1-\varepsilon$ (with room to spare in the lower bound). It follows immediately from Theorem 4 that the probability of not having an independent set of size $k_0$ in $S$ is at most
\[
\exp\left(-\Omega\left(\frac{s^2}{k_0^4\, p}\right)\right)
= \exp\left(-\Omega\left(\frac{s^2 p^3}{\log^4 n}\right)\right)
= \exp\left(-\Omega\left(\frac{s n p^3}{\omega \log^6 n}\right)\right)
= \exp\left(-\Omega\left(s\omega^2 \log\log n\right)\right).
\]
On the other hand, the number of sets of size $s$ can be bounded as follows:
\[
\binom{n}{s} \le \left(\frac{ne}{s}\right)^s = \exp\big(s\log(ne/s)\big) \le \exp\big(3s\log\log n\big),
\]
since $\log(ne/s) \le \log(e\omega\log^2 n) = (2+o(1))\log\log n$. Hence, the expected number of sets of size $s$ for which the desired property fails is $\exp(-\Omega(s\omega^2\log\log n))$. Summing over all $s \ge s_0$, the expected number of all such sets is $\exp(-\Omega(s_0\omega^2\log\log n)) = o(1)$. The claim holds by Markov's inequality. □

Given a graph $G$ and some fixed ordering of its vertices (for example, we may assume that it is the natural ordering $1,2,\dots,n$), the greedy colouring algorithm proceeds by scanning the vertices of $G$ in the given order and assigning the first available colour to the current vertex (a short sketch of this procedure is given after Lemma 6 below). We will use the following lemma due to [9] (see also [5]). However, since we use it in a slightly different setting, we point out a few small differences in the argument by providing a sketch of the proof. In fact, this lemma is the bottleneck of the argument that prevents us from extending Theorem 2 to sparser graphs.

Lemma 6. Let $\omega = \omega(n) := \log\log n$. Given any constant $0 < \varepsilon < 1$, let $G \in \mathcal{G}(n,p)$ with $\log^{2+\varepsilon} n / n =: p_0 \le p = o(1/\log n)$. Then, a.a.s. every subgraph of $G$ of size $u \ge u_0 := n/(\omega\log^2 n)$ has an independent set of size at least $\varepsilon(1-\varepsilon)\log(np)/(3p)$.
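For concreteness, the following is a minimal sketch of our own (not taken from [9] or [5]) of the greedy colouring procedure described before Lemma 6: vertices are scanned in the given order, and each vertex receives the smallest colour not already used on its previously coloured neighbours. The adjacency-list representation and the use of the integers $0,1,2,\dots$ as colours are our conventions for the illustration.

```python
def greedy_colouring(order, adj):
    """Colour vertices in the given order; each vertex gets the smallest
    non-negative integer colour not used on its already-coloured neighbours."""
    colour = {}
    for v in order:
        # Colours already taken by neighbours of v that precede v in the order.
        taken = {colour[u] for u in adj.get(v, []) if u in colour}
        c = 0
        while c in taken:   # first available colour
            c += 1
        colour[v] = c
    return colour


if __name__ == "__main__":
    # A path 1-2-3-4 with the natural ordering 1, 2, 3, 4: two colours suffice.
    adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(greedy_colouring([1, 2, 3, 4], adj))   # {1: 0, 2: 1, 3: 0, 4: 1}
```

The number of colours this procedure uses depends on the vertex ordering; in the setting above the ordering is fixed in advance and the analysis is carried out over the randomness of $G \in \mathcal{G}(n,p)$.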