Lecture 12. A glimpse of computational complexity

This year's offering of the complexity course focuses on concrete models, and we'll use this last lecture to say something about various models defined by Turing machines. The standard (and recent) references are the textbooks by Arora and Barak [AB09] and Goldreich [Gol08]. For automata and Turing machines, a good textbook is [Sip12]. Due to the vast amount of material, we cannot provide proofs for most theorems, though we'll sketch some key proof ideas.

1. Time and space

TIME(t(n)): languages that can be decided in time t(n) on a deterministic Turing machine.

Theorem (Time Hierarchy). TIME(t(n)) ⊊ TIME(ω(t(n) log t(n))) for time-constructible t(n).
(Proof idea: diagonalization.)

P = ⋃_{c≥1} TIME(n^c): languages that can be decided in polynomial time on a deterministic Turing machine.
⚫ Examples: Matching, MaxFlow, LP, PRIMES, …

EXP = ⋃_{c≥1} TIME(2^{n^c}): languages that can be decided in exponential time on a deterministic Turing machine.

SPACE(s(n)): languages that can be decided in space s(n) on a deterministic Turing machine.

PSPACE = ⋃_{c≥1} SPACE(n^c): languages that can be decided in polynomial space on a deterministic Turing machine.

Theorem. TQBF is PSPACE-complete, where TQBF = {x : Q_1 y_1 Q_2 y_2 … Q_k y_k, φ(x, y_1, …, y_k) = 1} for some φ ∈ P, quantifiers Q_i ∈ {∃, ∀}, and polynomially many quantified strings y_1, …, y_k of polynomial length.
(Proof idea: recursively use ∃ to guess the middle configuration of the computation and use ∀ to verify both halves.)
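The membership half of the theorem, TQBF ∈ PSPACE, already shows the flavor of this recursion: evaluate the quantifiers one at a time and reuse the same space for both branches. Below is a minimal Python sketch, under the simplifying assumptions that each quantified block y_i is a single bit and that φ is given as a black-box predicate; the name eval_qbf and this interface are illustrative, not from the lecture.

    def eval_qbf(quantifiers, phi, assignment=()):
        """Evaluate a fully quantified Boolean formula.

        quantifiers : list of 'E' (exists) / 'A' (forall), one per bit y_1 .. y_k
        phi         : a (polynomial-time) predicate on a k-tuple of 0/1 values
        """
        if not quantifiers:
            return phi(assignment)
        rest = quantifiers[1:]
        branches = (eval_qbf(rest, phi, assignment + (b,)) for b in (0, 1))
        return any(branches) if quantifiers[0] == 'E' else all(branches)

    # Example: ∃y1 ∀y2 (y1 ∨ y2) is true (take y1 = 1).
    print(eval_qbf(['E', 'A'], lambda ys: bool(ys[0] or ys[1])))  # True

The recursion depth equals the number of quantifiers and each frame stores only the partial assignment, so the whole evaluation uses polynomial space; the hardness half of the theorem uses the middle-configuration trick in the proof idea above.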
EXPSPACE = ⋃_{c≥1} SPACE(2^{n^c}): languages that can be decided in exponential space on a deterministic Turing machine.

L = SPACE(O(log n)): languages that can be decided in logarithmic space on a deterministic Turing machine.

2. Nondeterminism, Alternation, Advice and Randomness

NTIME(t(n)): languages that can be decided in time t(n) on a nondeterministic Turing machine.

NP = ⋃_{c≥1} NTIME(n^c):
⚫ Languages that can be decided in polynomial time on a nondeterministic Turing machine.
⚫ Languages that can be verified in polynomial time on a deterministic Turing machine, given a polynomial-length certificate.

NP-complete: problems in NP to which every other NP problem can be reduced.
⚫ SAT, Clique, HamiltonianCycle, TSP, SubsetSum, IP (Integer Programming), …

While most natural problems in NP are either in P or NP-complete, there are also problems in between.

Theorem (Ladner). If P ≠ NP, then there are languages in NP that are neither in P nor NP-complete.

There are also specific problems not known to be in P or to be NP-complete. Some examples: Polynomial Identity Testing (for which a simple randomized test is sketched below), Graph Isomorphism, Factoring, DiscreteLog.

One can also define NEXP, the class of languages decidable in exponential time on a nondeterministic Turing machine. This class is of course very large.
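To give a feel for why Polynomial Identity Testing resists classification, here is a rough sketch of the standard randomized test in the Schwartz-Zippel style; the function name probably_equal, the black-box interface for the two polynomials, and the parameter choices are all made up for illustration.

    import random

    def probably_equal(p, q, num_vars, degree_bound, trials=50):
        """Randomized identity test for two polynomials given as black boxes.

        Evaluate both at a random point drawn from a set S much larger than the
        total degree.  If p != q as polynomials, a single trial detects this with
        probability >= 1 - degree_bound/|S| (Schwartz-Zippel); if p == q we never
        answer 'different'.
        """
        field_size = 100 * degree_bound            # |S| = 100 * degree bound
        for _ in range(trials):
            point = [random.randrange(field_size) for _ in range(num_vars)]
            if p(point) != q(point):
                return False                       # certainly different
        return True                                # equal with high probability

    # (x + y)^2 and x^2 + 2xy + y^2 define the same polynomial.
    print(probably_equal(lambda v: (v[0] + v[1]) ** 2,
                         lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2,
                         num_vars=2, degree_bound=2))   # True

One-sided error of this kind places the problem in co-RP ⊆ BPP, yet no deterministic polynomial-time test is known.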
Inside the smaller class PSPACE, one can define more classes. Above NP, for instance, more classes arise by allowing additional (alternating) quantifiers:

Σ_2^p = {x : ∃y_1 ∀y_2, φ(x, y_1, y_2) = 1} for some φ ∈ P.
Π_2^p = {x : ∀y_1 ∃y_2, φ(x, y_1, y_2) = 1} for some φ ∈ P.

Roughly speaking, they sit one level above NP and co-NP. One can define Σ_k^p and Π_k^p for larger k. Taking the union over all k (independent of the input size), we get the class PH (the Polynomial Hierarchy). The classes Σ_k^p and Π_k^p are usually called the k-th level of PH, and the first level contains NP and co-NP. It's not hard to see from the definitions that Σ_k^p ⊆ Π_{k+1}^p and Π_k^p ⊆ Σ_{k+1}^p. While each level has complete problems, PH itself does not, unless it collapses to some level.

Most researchers believe that PH does not collapse, and use this as a computational assumption. A basic fact is that if Σ_k^p = Π_k^p, then PH = Σ_k^p = Π_k^p, so PH collapses to the k-th level. For example, one can show that if Graph Isomorphism is NP-complete, then PH collapses to the second level.

Advice is a string given to a Turing machine. It can depend on the input size, but not on the input itself. We use TIME(t(n))/l(n) to denote the class of languages decided by a t(n)-time Turing machine augmented with an l(n)-bit advice string for inputs of size n. The class P/poly coincides with the class of languages decidable by polynomial-size circuits. The following theorem says that it is unlikely that polynomial advice is this powerful for NP.

Theorem (Karp-Lipton). NP ⊆ P/poly ⇒ PH = Σ_2^p.
(Proof idea: NP ⊆ P/poly gives a polynomial-size circuit family deciding SAT; a Σ_2^p machine can guess such a circuit with ∃ and use it to eliminate the inner ∃ quantifier of a Π_2^p statement, showing Π_2^p ⊆ Σ_2^p.)

Randomness plays an important role in algorithm design. A language L is in BPP if it can be decided with bounded error, say 1/3, by a polynomial-time probabilistic Turing machine. Though many people believe that BPP = P, we can't even rule out the possibility that BPP = NEXP. What we do know includes the facts that BPP ⊆ P/poly and BPP ⊆ Σ_2^p ∩ Π_2^p.

(Proof idea of BPP ⊆ P/poly: reduce the error to exponentially small by repetition (sketched below) and then, by a union bound, find a single random string that is correct for all inputs of a given size; use this string as the advice. Proof idea of BPP ⊆ Σ_2^p ∩ Π_2^p: reduce the error to exponentially small and then prove that x ∈ L iff there exists a set of shifts of the random string such that at least one of them makes the machine accept.)
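Both proof ideas begin with the same amplification step: run the BPP machine on independent random strings and take a majority vote, which by a Chernoff bound drives the error down to 2^(-Ω(repetitions)). A hedged Python sketch, with a toy stand-in for the machine (the names amplify and noisy_even are invented for this example):

    import random
    from collections import Counter

    def amplify(bpp_machine, x, repetitions=101):
        """Majority vote over independent runs of a bounded-error algorithm.

        If each run is correct with probability >= 2/3, a Chernoff bound makes
        the majority wrong with probability at most 2^{-Omega(repetitions)}.
        """
        votes = Counter(bpp_machine(x) for _ in range(repetitions))
        return votes.most_common(1)[0][0]

    # Toy stand-in for a BPP machine: decides "x is even" correctly w.p. 3/4.
    def noisy_even(x):
        correct = (x % 2 == 0)
        return correct if random.random() < 0.75 else not correct

    print(amplify(noisy_even, 10))   # True, except with tiny probability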
3. Interaction

Interactive proof system: NP plus interaction.

IP: languages decidable by an interactive proof system with a polynomial-time Verifier.

Theorem. IP = PSPACE.
(Proof idea: arithmetization — replace Boolean formulas by low-degree polynomials over a large field and verify the resulting identities interactively; a small example of arithmetization appears at the end of this section.)

A special type of interactive proof system is one in which the Verifier sends only public coins. These are called AM[k], where k is the number of rounds. Let AM = ⋃_{k≥1} AM[k].

Theorem. AM[k] = AM[k+1] for any constant k. Thus AM = AM[2].
(Proof idea: switch the first two rounds, so that the Verifier sends its public coins first. If the original soundness error is small enough, the switch preserves decent soundness.)

One can also define IP[k] as the class of languages decidable by k-round interactive proof systems.

Theorem. IP[k] ⊆ AM[k+2]. Thus all interactive proofs can be made public-coin.
(Proof idea: the key tool is a protocol that efficiently certifies the approximate size of a set.)

One can also define one-round IP, which gives the class MA. It is like NP, except that the Verifier is a BPP machine (a bounded-error, polynomial-time probabilistic machine).

Another important one-round variant is PCP[r(n), q(n)], in which the Verifier uses at most r(n) random bits and makes at most q(n) queries to the proof.

Theorem (PCP Theorem). PCP[O(log n), O(1)] = NP.

The PCP theorem has direct implications for approximation algorithms (it is used to show impossibility results) and many other connections to other areas of complexity theory.

One can also consider multiple provers, who are allowed to agree on a strategy beforehand but cannot communicate with each other during the protocol. We use MIP to denote the class of languages decidable by such multi-prover systems.

Theorem. MIP = NEXP.
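As a small illustration of the arithmetization step behind IP = PSPACE (only the very first step of the proof, not the interactive protocol itself), each Boolean connective is replaced by a polynomial that agrees with it on 0/1 inputs; the helper names below are invented for this sketch.

    # Arithmetization of Boolean connectives: NOT x -> 1 - x, x AND y -> x*y,
    # x OR y -> 1 - (1 - x)(1 - y).
    def arith_not(x): return 1 - x
    def arith_and(x, y): return x * y
    def arith_or(x, y): return 1 - (1 - x) * (1 - y)

    # Check agreement with the Boolean formula (NOT x) OR (x AND y) on {0,1}^2.
    for x in (0, 1):
        for y in (0, 1):
            boolean = (not x) or (x and y)
            poly = arith_or(arith_not(x), arith_and(x, y))
            assert poly == int(bool(boolean))
    print("arithmetization agrees with the formula on all Boolean inputs")

Over a large finite field the Verifier can then probe the resulting low-degree polynomial at random points, which is what the interactive protocol exploits.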
References

[AB09] Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach, Cambridge University Press, 2009.
[Gol08] Oded Goldreich, Computational Complexity: A Conceptual Perspective, Cambridge University Press, 2008.
[Sip12] Michael Sipser, Introduction to the Theory of Computation, 3rd edition, Course Technology, 2012.