xvi Preface

flexible as possible so that readers who have no interest in learning one or two of the APIs can still read the remaining material with little effort. Thus, the chapters on the three APIs are largely independent of each other: they can be read in any order, and one or two of these chapters can be bypassed. This independence has a cost: it was necessary to repeat some of the material in these chapters. Of course, repeated material can simply be scanned or skipped.

Readers with no prior experience with parallel computing should read Chapter 1 first. It attempts to provide a relatively nontechnical explanation of why parallel systems have come to dominate the computer landscape. The chapter also provides a short introduction to parallel systems and parallel programming.

Chapter 2 provides some technical background in computer hardware and software. Much of the material on hardware can be scanned before proceeding to the API chapters. Chapters 3, 4, and 5 are the introductions to programming with MPI, Pthreads, and OpenMP, respectively.

In Chapter 6 we develop two longer programs: a parallel n-body solver and a parallel tree search. Both programs are developed using all three APIs. Chapter 7 provides a brief list of pointers to additional information on various aspects of parallel computing.

We use the C programming language for developing our programs because all three APIs have C-language interfaces, and, since C is such a small language, it is a relatively easy language to learn, especially for C++ and Java programmers, since they are already familiar with C's control structures.
Classroom Use

This text grew out of a lower-division undergraduate course at the University of San Francisco. The course fulfills a requirement for the computer science major, and it also fulfills a prerequisite for the undergraduate operating systems course. The only prerequisites for the course are either a grade of "B" or better in a one-semester introduction to computer science or a "C" or better in a two-semester introduction to computer science. The course begins with a four-week introduction to C programming. Since most students have already written Java programs, the bulk of what is covered is devoted to the use of pointers in C.¹ The remainder of the course provides introductions to programming in MPI, Pthreads, and OpenMP.

We cover most of the material in Chapters 1, 3, 4, and 5, and parts of the material in Chapters 2 and 6. The background in Chapter 2 is introduced as the need arises. For example, before discussing cache coherence issues in OpenMP (Chapter 5), we cover the material on caches in Chapter 2.

The coursework consists of weekly homework assignments, five programming assignments, a couple of midterms, and a final exam. The homework usually involves

¹ Interestingly, a number of students have said that they found the use of C pointers more difficult than MPI programming.