About this book

Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP = PSPACE and the PCP Theorems) and their implications for the field of approximation algorithms; Shor's algorithm to factor integers using a quantum computer; an understanding of why current approaches to the famous P versus NP question will not be successful; a theory of derandomization and pseudorandomness based upon computational hardness; and beautiful constructions of pseudorandom objects such as extractors and expanders.

This book aims to describe such recent achievements of complexity theory in the context of the classical results. It is intended both to serve as a textbook and as a reference for self-study. This means it must simultaneously cater to many audiences, and it is carefully designed with that goal in mind. Throughout the book we explain the context in which a certain notion is useful, and why things are defined in a certain way. Examples and solved exercises accompany key definitions. We assume essentially no computational background and very minimal mathematical background, which we review in Appendix A.

We have also provided a web site for this book at http://www.cs.princeton.edu/theory/complexity/ with related auxiliary material. This includes web chapters on automata and computability theory, detailed teaching plans for courses based on this book, a draft of all the book's chapters, and links to other online resources covering related topics.
The book is divided into three parts:

Part I: Basic complexity classes. This volume provides a broad introduction to the field. Starting from the definition of Turing machines and the basic notions of computability theory, this volume covers the basic time and space complexity classes, and also includes a few more modern topics such as probabilistic algorithms, interactive proofs and cryptography.

Part II: Lower bounds on concrete computational models. This part describes lower bounds on the resources required to solve algorithmic tasks on concrete models such as circuits, decision trees, etc. Such models may at first sight seem very different from Turing machines, but looking deeper one finds interesting interconnections.

Part III: Advanced topics. This part is largely devoted to developments since the late 1980s. It includes average case complexity, derandomization and pseudorandomness, the PCP theorem and hardness of approximation, proof complexity and quantum computing.

Almost every chapter in the book can be read in isolation (though we recommend reading Chapters 1, 2 and 7 before reading later chapters). This is important because the book is aimed

Web draft 2007-01-08 21:59. Complexity Theory: A Modern Approach. © 2006 Sanjeev Arora and Boaz Barak. References and attributions are still incomplete. DRAFT