possible cardinality subject to the condition that every two strings in C are at a large Hamming distance.

Definition 1.5 (the rate and distance of a dictionary). Let C ⊆ {0, 1}^n be a dictionary. Its rate and (normalized) distance are defined by:

    R = log |C| / n,        δ = min_{c_1 ≠ c_2 ∈ C} d_H(c_1, c_2) / n.

As we saw before, the distance of a dictionary controls its power to overcome noise. A code's rate measures its efficiency in channel utilization.
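To make Definition 1.5 concrete, both quantities can be computed by brute force. The following sketch is our own illustration in Python (the function name is hypothetical, not from the text). For the 3-bit repetition code {000, 111} it gives R = 1/3 and δ = 1.

    import math
    from itertools import combinations

    def rate_and_distance(C):
        # C is a list of equal-length binary strings (the dictionary).
        n = len(C[0])
        rate = math.log2(len(C)) / n
        # Minimum Hamming distance over all pairs of distinct codewords.
        min_dist = min(sum(a != b for a, b in zip(c1, c2))
                       for c1, c2 in combinations(C, 2))
        return rate, min_dist / n

    print(rate_and_distance(["000", "111"]))  # (0.333..., 1.0)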
At this point we can refine the problem and ask:

Problem 1.6 (refined communication problem). Is it possible to design arbitrarily large dictionaries {C_k} of size |C_k| = 2^k, with R(C_k) ≥ R_0 and δ(C_k) ≥ δ_0 for some absolute constants R_0, δ_0 > 0? Moreover, can we make these codes explicit and efficiently encodable and decodable?

This problem and its relatives (optimizing the code's parameters, and the algorithms' efficiency, in this and other error models and communication settings) are the subject of Coding Theory, a rich and active field initiated by Shannon's work (see e.g. [MS77a, MS77b] and [vL99] for the general theory and Sudan's notes [Sud00] for complexity theoretic aspects of the field). It took over 20 years of research until even the basic Problem 1.6 was resolved, but below we present a simple solution to this problem using expander graphs. However, before we do that, let us present our third motivating problem.

1.1.3. Deterministic error amplification for RP. The field of probabilistic algorithms burst into existence within Theoretical Computer Science, with the fast primality tests of Rabin [Rab80] and of Solovay and Strassen [SS77]. Given a k-bit integer x and a string r of k random bits, these algorithms efficiently compute a boolean valued function f(x, r) with the following property. If x is prime, then f(x, r) = 1 for all choices of r. Otherwise, if x is composite, f(x, r) = 1 with probability smaller than 1/2 over a randomly chosen r. If f = 1, the algorithm declares x a prime, and otherwise declares it to be composite. It never fails on primes, and for every composite x its probability of failure is at most 1/2.
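As an illustration of such a test, here is a sketch of one round of the Solovay-Strassen algorithm (our reconstruction in Python, not code from the text; the witness-selection step is a slight simplification). For odd x it derives a witness a from the random bits r and checks Euler's criterion a^((x-1)/2) ≡ (a/x) (mod x), where (a/x) is the Jacobi symbol: every prime passes, while a composite passes with probability below 1/2.

    def jacobi(a, n):
        # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity.
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def f(x, r):
        # One round of Solovay-Strassen; r is the k-bit random string,
        # read as an integer and mapped to a witness a in [2, x-1].
        if x < 4 or x % 2 == 0:
            return 1 if x in (2, 3) else 0
        a = 2 + r % (x - 2)
        j = jacobi(a, x)
        return 1 if j != 0 and pow(a, (x - 1) // 2, x) == j % x else 0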
The error bound 1/2 may not be very satisfactory, and one would like to reduce it to some desired level. A very simple way to reduce our failure probability is to apply the same algorithm repeatedly with new randomly chosen r's. Repeating it (say) d times will reduce the probability of error to below 2^{-d}. On the other hand, the running time and the number of random bits used increase by a factor of d.
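In code, the d-fold repetition is immediate (a sketch assuming the single-round test f from the previous block): declare x prime only if all d independent rounds do.

    import random

    def amplified(x, d, k):
        # d independent rounds, using d*k fresh random bits in total.
        # Primes always pass; a composite passes with probability < 2**(-d).
        return 1 if all(f(x, random.getrandbits(k)) for _ in range(d)) else 0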
Is there a way to reduce the error "deterministically" without using more random bits, or at least using fewer random bits than the obvious procedure above? We will see several answers to this question in these notes, and this section contains an initial advance on the problem. The importance of minimizing the number of random bits may not be evident, but we can assure the reader that it is a basic theoretical problem and, moreover, that getting your hands on good random bits is a nontrivial practical problem.

The above-mentioned primality testing algorithms belong to the class RP of Randomized Polynomial-Time algorithms. It is in this general setting that we