Advanced Numerical Analysis (High-Performance Computing / Parallel Computing), course teaching resource (reference material): MPI - A Message-Passing Interface Standard Version 4.0

MPI: A Message-Passing Interface Standard
Version 4.0

Message Passing Interface Forum

June 9, 2021

This document describes the Message-Passing Interface (MPI) standard, version 4.0. The MPI standard includes point-to-point message-passing, collective communications, group and communicator concepts, process topologies, environmental management, process creation and management, one-sided communications, extended collective operations, external interfaces, I/O, some miscellaneous topics, and multiple tool interfaces. Language bindings for C and Fortran are defined.

Historically, the evolution of the standard is from MPI-1.0 (May 5, 1994) to MPI-1.1 (June 12, 1995) to MPI-1.2 (July 18, 1997), with several clarifications and additions and published as part of the MPI-2 document, to MPI-2.0 (July 18, 1997), with new functionality, to MPI-1.3 (May 30, 2008), combining for historical reasons the documents 1.1 and 1.2 and some errata documents to one combined document, and to MPI-2.1 (June 23, 2008), combining the previous documents. Version MPI-2.2 (September 4, 2009) added additional clarifications and seven new routines. Version MPI-3.0 (September 21, 2012) was an extension of MPI-2.2. Version MPI-3.1 (June 4, 2015) added clarifications and minor extensions to MPI-3.0. Version MPI-4.0 (June 9, 2021) adds significant new features to MPI-3.1.

Comments. Please send comments on MPI to the MPI Forum as follows:

1. Subscribe to https://lists.mpi-forum.org/mailman/listinfo/mpi-comments

2. Send your comment to: mpi-comments@lists.mpi-forum.org, together with the version of the MPI standard and the page and line numbers on which you are commenting. Only use the official versions.

Your comment will be forwarded to MPI Forum committee members for consideration. Messages sent from an unsubscribed e-mail address will not be considered.

©1993, 1994, 1995, 1996, 1997, 2008, 2009, 2012, 2015, 2021 University of Tennessee, Knoxville, Tennessee. Permission to copy without fee all or part of this material is granted, provided the University of Tennessee copyright notice and the title of this document appear, and notice is given that copying is by permission of the University of Tennessee.
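The bindings themselves are specified in the chapters that follow; purely as orientation, the sketch below shows a minimal C program using the point-to-point interface with the World Model initialization. The value sent and the ranks involved are illustrative choices, not taken from the standard, and error checking is omitted.

/* Minimal point-to-point example using the C bindings and the World Model.
 * Build with an MPI compiler wrapper (e.g., mpicc hello.c -o hello) and run
 * with a launcher (e.g., mpiexec -n 2 ./hello). Error checking is omitted.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);                  /* initialize the MPI library */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process       */

    if (rank == 0) {
        value = 42;
        /* blocking standard-mode send of one int to rank 1, tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive of one int from rank 0, tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();                          /* shut down the MPI library */
    return 0;
}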

Version 4.0: June 9, 2021. This version of the MPI-4 Standard is a major update and includes significant new functionality. The largest changes are the addition of large-count versions of many routines to address the limitations of using an int or INTEGER for the count parameter, persistent collectives, partitioned communications, an alternative way to initialize MPI, application info assertions, and improvements to the definitions of error handling. In addition, there are a number of smaller improvements and corrections.

Version 3.1: June 4, 2015. This document contains mostly corrections and clarifications to the MPI-3.0 document. The largest change is a correction to the Fortran bindings introduced in MPI-3.0. Additionally, new functions added include routines to manipulate MPI_Aint values in a portable manner, nonblocking collective I/O routines, and routines to get the index value by name for MPI_T performance and control variables.

Version 3.0: September 21, 2012. Coincident with the development of MPI-2.2, the MPI Forum began discussions of a major extension to MPI. This document contains the MPI-3 Standard. This version of the MPI-3 standard contains significant extensions to MPI functionality, including nonblocking collectives, new one-sided communication operations, and Fortran 2008 bindings. Unlike MPI-2.2, this standard is considered a major update to the MPI standard. As with previous versions, new features have been adopted only when there were compelling needs for the users. Some features, however, may have more than a minor impact on existing MPI implementations.

Version 2.2: September 4, 2009. This document contains mostly corrections and clarifications to the MPI-2.1 document. A few extensions have been added; however all correct MPI-2.1 programs are correct MPI-2.2 programs. New features were adopted only when there were compelling needs for users, open source implementations, and minor impact on existing MPI implementations.

Version 2.1: June 23, 2008. This document combines the previous documents MPI-1.3 (May 30, 2008) and MPI-2.0 (July 18, 1997). Certain parts of MPI-2.0, such as some sections of Chapter 4, Miscellany, and Chapter 7, Extended Collective Operations, have been merged into the chapters of MPI-1.3. Additional errata and clarifications collected by the MPI Forum are also included in this document.

Version 1.3: May 30, 2008. This document combines the previous documents MPI-1.1 (June 12, 1995) and the MPI-1.2 chapter in MPI-2 (July 18, 1997). Additional errata collected by the MPI Forum referring to MPI-1.1 and MPI-1.2 are also included in this document.

Version 2.0: July 18, 1997. Beginning after the release of MPI-1.1, the MPI Forum began meeting to consider corrections and extensions. MPI-2 has been focused on process creation and management, one-sided communications, extended collective communications, external interfaces and parallel I/O. A miscellany chapter discusses items that do not fit elsewhere, in particular language interoperability.

Version 1.2: July 18, 1997. The MPI-2 Forum introduced MPI-1.2 as Chapter 3 in the standard “MPI-2: Extensions to the Message-Passing Interface”, July 18, 1997. This section contains clarifications and minor corrections to Version 1.1 of the MPI Standard. The only new function in MPI-1.2 is one for identifying to which version of the MPI Standard the implementation conforms. There are small differences between MPI-1 and MPI-1.1. There are very few differences between MPI-1.1 and MPI-1.2, but large differences between MPI-1.2 and MPI-2.

Version 1.1: June, 1995. Beginning in March, 1995, the Message-Passing Interface Forum reconvened to correct errors and make clarifications in the MPI document of May 5, 1994, referred to below as Version 1.0. These discussions resulted in Version 1.1. The changes from Version 1.0 are minor. A version of this document with all changes marked is available.

Version 1.0: May, 1994. The Message-Passing Interface Forum, with participation from over 40 organizations, has been meeting since January 1993 to discuss and define a set of library interface standards for message passing. The Message-Passing Interface Forum is not sanctioned or supported by any official standards organization.

The goal of the Message-Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs. As such the interface should establish a practical, portable, efficient, and flexible standard for message-passing.

This is the final report, Version 1.0, of the Message-Passing Interface Forum. This document contains all the technical features proposed for the interface. This copy of the draft was processed by LaTeX on May 5, 1994.
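The Version 4.0 entry above mentions large-count versions of many routines, added because an int or INTEGER count cannot describe buffers with more than roughly 2^31 elements. As a hedged sketch of what this looks like in C (the large-count procedures take an MPI_Count count and carry a "_c" suffix, e.g. MPI_Send_c; the buffer size, datatype, tag, and destination rank below are illustrative assumptions, not values from the standard):

/* Sketch: sending more elements than an int count can describe, using the
 * MPI-4.0 large-count C binding MPI_Send_c (count has type MPI_Count).
 * Buffer size, tag, and destination rank are illustrative; error checking
 * and allocation-failure handling are omitted.
 */
#include <mpi.h>
#include <stdlib.h>

void send_large(MPI_Comm comm, int dest)
{
    MPI_Count n = (MPI_Count)3000000000LL;   /* more elements than INT_MAX fits */
    char *buf = malloc((size_t)n);           /* assume a 64-bit system and that */
                                             /* the allocation succeeds         */

    /* The classic MPI_Send takes an int count; the _c variant widens it. */
    MPI_Send_c(buf, n, MPI_CHAR, dest, 0, comm);

    free(buf);
}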

Contents

List of Figures   xviii
List of Tables   xx
Acknowledgments   xxii

1 Introduction to MPI   1
  1.1 Overview and Goals   1
  1.2 Background of MPI-1.0   2
  1.3 Background of MPI-1.1, MPI-1.2, and MPI-2.0   2
  1.4 Background of MPI-1.3 and MPI-2.1   3
  1.5 Background of MPI-2.2   4
  1.6 Background of MPI-3.0   4
  1.7 Background of MPI-3.1   4
  1.8 Background of MPI-4.0   5
  1.9 Who Should Use This Standard?   5
  1.10 What Platforms Are Targets for Implementation?   5
  1.11 What Is Included in the Standard?   5
  1.12 What Is Not Included in the Standard?   6
  1.13 Organization of This Document   6

2 MPI Terms and Conventions   11
  2.1 Document Notation   11
  2.2 Naming Conventions   11
  2.3 Procedure Specification   12
  2.4 Semantic Terms   13
    2.4.1 MPI Operations   13
    2.4.2 MPI Procedures   15
    2.4.3 MPI Datatypes   17
  2.5 Datatypes   18
    2.5.1 Opaque Objects   18
    2.5.2 Array Arguments   20
    2.5.3 State   20
    2.5.4 Named Constants   20
    2.5.5 Choice   21
    2.5.6 Absolute Addresses and Relative Address Displacements   22
    2.5.7 File Offsets   22
    2.5.8 Counts   22
  2.6 Language Binding   23

    2.6.1 Deprecated and Removed Interfaces   23
    2.6.2 Fortran Binding Issues   24
    2.6.3 C Binding Issues   25
    2.6.4 Functions and Macros   26
  2.7 Processes   26
  2.8 Error Handling   26
  2.9 Implementation Issues   28
    2.9.1 Independence of Basic Runtime Routines   28
    2.9.2 Interaction with Signals   29
  2.10 Examples   29

3 Point-to-Point Communication   31
  3.1 Introduction   31
  3.2 Blocking Send and Receive Operations   32
    3.2.1 Blocking Send   32
    3.2.2 Message Data   33
    3.2.3 Message Envelope   35
    3.2.4 Blocking Receive   36
    3.2.5 Return Status   38
    3.2.6 Passing MPI_STATUS_IGNORE for Status   41
    3.2.7 Blocking Send-Receive   42
  3.3 Datatype Matching and Data Conversion   45
    3.3.1 Type Matching Rules   45
      Type MPI_CHARACTER   47
    3.3.2 Data Conversion   48
  3.4 Communication Modes   49
  3.5 Semantics of Point-to-Point Communication   54
  3.6 Buffer Allocation and Usage   57
    3.6.1 Model Implementation of Buffered Mode   60
  3.7 Nonblocking Communication   60
    3.7.1 Communication Request Objects   62
    3.7.2 Communication Initiation   62
    3.7.3 Communication Completion   70
    3.7.4 Semantics of Nonblocking Communications   74
    3.7.5 Multiple Completions   75
    3.7.6 Non-Destructive Test of status   83
  3.8 Probe and Cancel   84
    3.8.1 Probe   84
    3.8.2 Matching Probe   87
    3.8.3 Matched Receives   90
    3.8.4 Cancel   92
  3.9 Persistent Communication Requests   94
  3.10 Null Processes   101

4 Partitioned Point-to-Point Communication   103
  4.1 Introduction   103
  4.2 Semantics of Partitioned Point-to-Point Communication   104
    4.2.1 Communication Initialization and Starting with Partitioning   106

    4.2.2 Communication Completion under Partitioning   110
    4.2.3 Semantics of Communications in Partitioned Mode   111
  4.3 Partitioned Communication Examples   112
    4.3.1 Partition Communication with Threads/Tasks Using OpenMP 4.0 or later   112
    4.3.2 Send-only Partitioning Example with Tasks and OpenMP version 4.0 or later   113
    4.3.3 Send and Receive Partitioning Example with OpenMP version 4.0 or later   115

5 Datatypes   119
  5.1 Derived Datatypes   119
    5.1.1 Type Constructors with Explicit Addresses   121
    5.1.2 Datatype Constructors   121
    5.1.3 Subarray Datatype Constructor   133
    5.1.4 Distributed Array Datatype Constructor   135
    5.1.5 Address and Size Functions   141
    5.1.6 Lower-Bound and Upper-Bound Markers   144
    5.1.7 Extent and Bounds of Datatypes   147
    5.1.8 True Extent of Datatypes   149
    5.1.9 Commit and Free   150
    5.1.10 Duplicating a Datatype   152
    5.1.11 Use of General Datatypes in Communication   153
    5.1.12 Correct Use of Addresses   156
    5.1.13 Decoding a Datatype   157
    5.1.14 Examples   165
  5.2 Pack and Unpack   174
  5.3 Canonical MPI_PACK and MPI_UNPACK   182

6 Collective Communication   187
  6.1 Introduction and Overview   187
  6.2 Communicator Argument   190
    6.2.1 Specifics for Intra-Communicator Collective Operations   190
    6.2.2 Applying Collective Operations to Inter-Communicators   191
    6.2.3 Specifics for Inter-Communicator Collective Operations   192
  6.3 Barrier Synchronization   194
  6.4 Broadcast   194
    6.4.1 Example using MPI_BCAST   195
  6.5 Gather   196
    6.5.1 Examples using MPI_GATHER, MPI_GATHERV   200
  6.6 Scatter   206
    6.6.1 Examples using MPI_SCATTER, MPI_SCATTERV   210
  6.7 Gather-to-all   213
    6.7.1 Example using MPI_ALLGATHER   216
  6.8 All-to-All Scatter/Gather   217
  6.9 Global Reduction Operations   223
    6.9.1 Reduce   224
    6.9.2 Predefined Reduction Operations   226

    6.9.3 Signed Characters and Reductions   229
    6.9.4 MINLOC and MAXLOC   229
    6.9.5 User-Defined Reduction Operations   233
      Example of User-Defined Reduce   237
    6.9.6 All-Reduce   238
    6.9.7 Process-Local Reduction   240
  6.10 Reduce-Scatter   242
    6.10.1 MPI_REDUCE_SCATTER_BLOCK   242
    6.10.2 MPI_REDUCE_SCATTER   244
  6.11 Scan   245
    6.11.1 Inclusive Scan   246
    6.11.2 Exclusive Scan   247
    6.11.3 Example using MPI_SCAN   248
  6.12 Nonblocking Collective Operations   250
    6.12.1 Nonblocking Barrier Synchronization   252
    6.12.2 Nonblocking Broadcast   253
      Example using MPI_IBCAST   253
    6.12.3 Nonblocking Gather   254
    6.12.4 Nonblocking Scatter   257
    6.12.5 Nonblocking Gather-to-all   260
    6.12.6 Nonblocking All-to-All Scatter/Gather   263
    6.12.7 Nonblocking Reduce   269
    6.12.8 Nonblocking All-Reduce   270
    6.12.9 Nonblocking Reduce-Scatter with Equal Blocks   271
    6.12.10 Nonblocking Reduce-Scatter   273
    6.12.11 Nonblocking Inclusive Scan   274
    6.12.12 Nonblocking Exclusive Scan   275
  6.13 Persistent Collective Operations   276
    6.13.1 Persistent Barrier Synchronization   277
    6.13.2 Persistent Broadcast   278
    6.13.3 Persistent Gather   279
    6.13.4 Persistent Scatter   283
    6.13.5 Persistent Gather-to-all   286
    6.13.6 Persistent All-to-All Scatter/Gather   289
    6.13.7 Persistent Reduce   294
    6.13.8 Persistent All-Reduce   295
    6.13.9 Persistent Reduce-Scatter with Equal Blocks   296
    6.13.10 Persistent Reduce-Scatter   298
    6.13.11 Persistent Inclusive Scan   299
    6.13.12 Persistent Exclusive Scan   300
  6.14 Correctness   301

7 Groups, Contexts, Communicators, and Caching   311
  7.1 Introduction   311
    7.1.1 Features Needed to Support Libraries   311
    7.1.2 MPI’s Support for Libraries   312
  7.2 Basic Concepts   314
    7.2.1 Groups   314

    7.2.2 Contexts   314
    7.2.3 Intra-Communicators   315
    7.2.4 Predefined Intra-Communicators   315
  7.3 Group Management   316
    7.3.1 Group Accessors   316
    7.3.2 Group Constructors   318
    7.3.3 Group Destructors   324
  7.4 Communicator Management   325
    7.4.1 Communicator Accessors   325
    7.4.2 Communicator Constructors   327
    7.4.3 Communicator Destructors   345
    7.4.4 Communicator Info   345
  7.5 Motivating Examples   348
    7.5.1 Current Practice #1   348
    7.5.2 Current Practice #2   349
    7.5.3 (Approximate) Current Practice #3   349
    7.5.4 Communication Safety Example   350
    7.5.5 Library Example #1   351
    7.5.6 Library Example #2   353
  7.6 Inter-Communication   355
    7.6.1 Inter-Communicator Accessors   357
    7.6.2 Inter-Communicator Operations   358
    7.6.3 Inter-Communication Examples   362
      Example 1: Three-Group “Pipeline”   362
      Example 2: Three-Group “Ring”   363
  7.7 Caching   365
    7.7.1 Functionality   365
    7.7.2 Communicators   366
    7.7.3 Windows   372
    7.7.4 Datatypes   376
    7.7.5 Error Class for Invalid Keyval   379
    7.7.6 Attributes Example   379
  7.8 Naming Objects   381
  7.9 Formalizing the Loosely Synchronous Model   386
    7.9.1 Basic Statements   386
    7.9.2 Models of Execution   386
      Static Communicator Allocation   387
      Dynamic Communicator Allocation   387
      The General Case   387

8 Process Topologies   389
  8.1 Introduction   389
  8.2 Virtual Topologies   390
  8.3 Embedding in MPI   390
  8.4 Overview of the Functions   391
  8.5 Topology Constructors   392
    8.5.1 Cartesian Constructor   392
    8.5.2 Cartesian Convenience Function: MPI_DIMS_CREATE   393

    8.5.3 Graph Constructor   394
    8.5.4 Distributed Graph Constructor   396
    8.5.5 Topology Inquiry Functions   403
    8.5.6 Cartesian Shift Coordinates   412
    8.5.7 Partitioning of Cartesian Structures   413
    8.5.8 Low-Level Topology Functions   414
  8.6 Neighborhood Collective Communication   416
    8.6.1 Neighborhood Gather   417
    8.6.2 Neighbor Alltoall   422
  8.7 Nonblocking Neighborhood Communication   429
    8.7.1 Nonblocking Neighborhood Gather   429
    8.7.2 Nonblocking Neighborhood Alltoall   432
  8.8 Persistent Neighborhood Communication   437
    8.8.1 Persistent Neighborhood Gather   438
    8.8.2 Persistent Neighborhood Alltoall   441
  8.9 An Application Example   445

9 MPI Environmental Management   451
  9.1 Implementation Information   451
    9.1.1 Version Inquiries   451
    9.1.2 Environmental Inquiries   453
      Tag Values   453
      Host Rank   453
      IO Rank   454
      Clock Synchronization   454
      Inquire Processor Name   454
  9.2 Memory Allocation   455
  9.3 Error Handling   458
    9.3.1 Error Handlers for Communicators   461
    9.3.2 Error Handlers for Windows   463
    9.3.3 Error Handlers for Files   465
    9.3.4 Error Handlers for Sessions   466
    9.3.5 Freeing Errorhandlers and Retrieving Error Strings   468
  9.4 Error Codes and Classes   469
  9.5 Error Classes, Error Codes, and Error Handlers   473
  9.6 Timers and Synchronization   477

10 The Info Object   479

11 Process Initialization, Creation, and Management   487
  11.1 Introduction   487
  11.2 The World Model   488
    11.2.1 Starting MPI Processes   488
    11.2.2 Finalizing MPI   494
    11.2.3 Determining Whether MPI Has Been Initialized When Using the World Model   497
    11.2.4 Allowing User Functions at MPI Finalization   498
  11.3 The Sessions Model   499
