
Hong Kong Baptist University: Parallel I/O (PPT lecture notes)

Format: PPT, 81 pages, 476 KB


Parallel I/O


Objectives
• The material covered to this point discussed how multiple processes can share data stored in separate memory spaces (see Section 1.1 - Parallel Architectures). This is achieved by sending messages between processes.
• Parallel I/O covers the issue of how data are distributed among I/O devices. While the memory subsystem can differ from machine to machine, the logical methods of accessing memory are generally common, i.e., the same model should apply from machine to machine.
• Parallel I/O is complicated in that both the physical and logical configurations often differ from machine to machine.


Objectives
• MPI-2 is the first version of MPI to include routines for handling parallel I/O. As such, much of the material in this chapter is applicable only if the system you are working on has an implementation of MPI that includes parallel I/O.
• This section introduces the general concepts of parallel I/O, with a focus on MPI-2 file I/O.
• The material is organized to meet the following three goals:
1. Learn fundamental concepts that define parallel I/O
2. Understand how parallel I/O is different from traditional, serial I/O
3. Gain experience with the basic MPI-2 I/O function calls
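The basic MPI-2 call sequence (MPI_File_open, MPI_File_set_view, MPI_File_write, MPI_File_close) requires an MPI installation to run. As a stand-in, the pure-Python sketch below simulates the core idea: several "ranks" each write a disjoint block of one shared file at a rank-dependent offset, which is the access pattern an MPI-2 file view expresses declaratively. The helper name `write_as_rank` and the loop over ranks are illustrative only; in real MPI the ranks run concurrently as separate processes.

```python
import os
import tempfile

def write_as_rank(path, rank, payload):
    """Simulate one MPI rank writing its block of a shared file.

    Each rank seeks to rank * len(payload) and writes there, so the
    blocks land in rank order without overlapping -- the pattern an
    MPI-2 file view describes declaratively.
    """
    with open(path, "r+b") as f:
        f.seek(rank * len(payload))
        f.write(payload)

# Create the shared file, then let 4 simulated ranks fill it in.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
nranks, blocksize = 4, 8
with open(path, "wb") as f:
    f.truncate(nranks * blocksize)   # pre-size the file

for rank in range(nranks):           # in real MPI these run concurrently
    block = bytes([rank]) * blocksize
    write_as_rank(path, rank, block)

with open(path, "rb") as f:
    data = f.read()
```

Because every rank touches only its own byte range, no coordination is needed between the writers, which is what makes the real MPI version scale.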


Objectives
• The topics to be covered are:
1. Introduction
2. Applications
3. Characteristics of Serial I/O
4. Characteristics of Parallel I/O
5. Introduction to MPI-2 Parallel I/O
6. MPI-2 File Structure
7. Initializing MPI-2 File I/O
8. Defining a View
9. Data Access - Reading Data
10. Data Access - Writing Data
11. Closing MPI-2 File I/O


Introduction


Introduction
• A traditional programming style teaches that computer programs can be broken down by function into three main sections:
1. input
2. computation
3. output
• For science applications, much of what has been learned about MPI in this course addresses the computation phase. With parallel systems allowing larger computational models, these applications often produce large amounts of output.


Introduction
• Serial I/O on a parallel machine can have large time penalties for many reasons:
– Larger datasets generated from parallel applications have a serial bottleneck if I/O is done on only one node
– Many MPP machines are built from large numbers of slower processors, which increases the time penalty as the serial I/O gets funneled through a single, slower processor
– Some parallel datasets are too large to be sent back to one node for file I/O
• Decomposing the computation phase while leaving the I/O channeled through one processor to one file can cause the time required for I/O to be of the same order as, or to exceed, the time required for parallel computation.
• There are also non-science applications in which input and output are the dominant processes, and significant performance improvement can be obtained with parallel I/O.
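The funneling penalty described above can be made concrete with a rough back-of-the-envelope model. The bandwidth and node-count numbers below are illustrative assumptions, not figures from the slides, and the model ignores real-world effects such as file-system contention and metadata overhead.

```python
def io_time_serial(total_bytes, node_bandwidth):
    """All data funneled through one node at that node's bandwidth."""
    return total_bytes / node_bandwidth

def io_time_parallel(total_bytes, node_bandwidth, nodes):
    """Ideal case: each node writes its own share concurrently."""
    return (total_bytes / nodes) / node_bandwidth

# 100 GB written by 64 nodes at 200 MB/s per node (illustrative numbers)
total = 100 * 10**9
bw = 200 * 10**6
serial = io_time_serial(total, bw)           # 500 seconds
parallel = io_time_parallel(total, bw, 64)   # 7.8125 seconds
speedup = serial / parallel                  # 64x in this ideal model
```

Even this ideal model shows why, for large datasets, serial I/O time can rival the parallel computation time it follows: the write phase gets none of the machine's parallelism.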


Applications


Applications
• The ability to parallelize I/O can offer significant performance improvements. Several applications are given here as examples.


Large Computational Grids/Meshes
• Many new applications are utilizing finer-resolution meshes and grids. Computed properties at each node and/or element often need to be written to storage for data analysis at a later time. These large computational grids/meshes:
– Increase I/O requirements because of the larger number of data points to be saved
– Increase I/O time because data is being funneled through slower commodity processors in MPP systems
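Before each process can write its portion of such a grid, it must know which contiguous block of the file is its own. The hypothetical helper below sketches the standard block-decomposition arithmetic (not code from the slides): distribute n items as evenly as possible over the ranks, giving the first `n % nranks` ranks one extra item. The resulting (offset, count) pairs are the kind of per-process displacements later used when defining an MPI-2 file view.

```python
def block_decomposition(n_items, nranks, rank):
    """Return (offset, count) for this rank's contiguous block.

    Items are distributed as evenly as possible: the first
    n_items % nranks ranks each get one extra item, and every
    rank's block starts right where the previous one ends.
    """
    base, extra = divmod(n_items, nranks)
    count = base + (1 if rank < extra else 0)
    offset = rank * base + min(rank, extra)
    return offset, count

# 10 grid points over 4 ranks -> blocks of sizes 3, 3, 2, 2
layout = [block_decomposition(10, 4, r) for r in range(4)]
```

Because the blocks tile the file exactly, every byte is written by exactly one rank and no communication is needed to agree on offsets.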
