
University of Electronic Science and Technology of China: GPU Parallel Programming course materials (lecture slides), Lecture 04 Performance Considerations

Resource type: document library; format: PDF; pages: 65; file size: 1.52 MB

LECTURE 4: PERFORMANCE CONSIDERATIONS

Warps and SIMD Hardware


– Warps and SIMD
– Performance impact of control divergence
– Parallel reduction
– Memory parallelism


Objective
– To understand how CUDA threads execute on SIMD hardware
  – Warp partitioning
  – SIMD hardware
  – Control divergence


Execution Process: Software vs. Hardware
– Thread → CUDA Core (CUDA Core = ALU = SP)
– Thread Block → Multiprocessor (SM, the "core" of the logical architecture)
– Grid → Device
– When a kernel function is called, many threads are launched and handed to the hardware for execution; while executing they occupy hardware resources.
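To make this mapping concrete, the minimal sketch below (kernel and variable names are illustrative, not from the slides) launches a grid of 256-thread blocks; the runtime schedules each block onto a multiprocessor, and each thread executes on a CUDA core while its block holds registers and shared memory on that SM.

    // Minimal illustrative kernel: each thread records its global index.
    // Threads map to CUDA cores, blocks map to SMs, the grid maps to the device.
    __global__ void identify(int *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = i;
    }

    int main() {
        const int n = 1024;
        int *d_out;
        cudaMalloc(&d_out, n * sizeof(int));

        // Launching the kernel creates n threads; the hardware scheduler
        // assigns blocks to SMs, where they occupy hardware resources.
        identify<<<(n + 255) / 256, 256>>>(d_out, n);
        cudaDeviceSynchronize();

        cudaFree(d_out);
        return 0;
    }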


Warps
– A thread block consists of 32-thread warps
– A warp is executed physically in parallel (SIMD) on a multiprocessor
[Figure: a thread block split into 32-thread warps scheduled on a multiprocessor]


Warps as Scheduling Units
– Each block is divided into 32-thread warps
  – An implementation technique, not part of the CUDA programming model
  – Warps are scheduling units in the SM
  – Threads in a warp execute in Single Instruction, Multiple Data (SIMD) manner
  – The number of threads in a warp may vary in future generations
[Figure: Block 1, Block 2, and Block 3 warps, each containing threads t0 t1 t2 … t31]
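For a one-dimensional block, the warp a thread belongs to follows directly from its thread index. The sketch below is an illustrative example (kernel and array names are hypothetical); it uses the built-in warpSize variable rather than hard-coding 32, in line with the caveat that the warp size may change.

    // For a 1D block: derive warp index and lane index from the thread index.
    __global__ void warp_info(int *warp_id, int *lane_id) {
        int tid = threadIdx.x;           // linear index within the block (1D case)
        warp_id[tid] = tid / warpSize;   // which warp this thread belongs to
        lane_id[tid] = tid % warpSize;   // position of the thread within its warp
    }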


Warps in Multi-dimensional Thread Blocks
– The thread blocks are first linearized into 1D in row-major order
  – In the x-dimension first, y-dimension next, and z-dimension last
[Figure 6.1: Placing 2D threads into linear order]
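In code, this row-major linearization corresponds to the index computation below (an illustrative helper function, not part of the CUDA API): x varies fastest, then y, then z.

    // Linear index of a thread within its (possibly 3D) block, row-major order.
    __device__ int linear_thread_index() {
        return threadIdx.z * blockDim.y * blockDim.x
             + threadIdx.y * blockDim.x
             + threadIdx.x;
    }

Warps are then formed from consecutive linear indices: threads 0–31 form warp 0, threads 32–63 form warp 1, and so on.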


Blocks are partitioned after linearization
– Linearized thread blocks are partitioned
  – Thread indices within a warp are consecutive and increasing
  – Warp 0 starts with Thread 0
– Partitioning scheme is consistent across devices
  – Thus you can use this knowledge in control flow
  – However, the exact size of warps may change from generation to generation
– DO NOT rely on any ordering within or between warps
  – If there are any dependencies between threads, you must __syncthreads() to get correct results (more later)
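As an example of such a dependency, the sketch below (kernel name and a block size of 256 are assumed for illustration) has each thread read a shared-memory element written by a thread that may sit in a different warp, so a barrier is required between the write and the read.

    // Reverse the elements owned by one block, staging them in shared memory.
    __global__ void reverse_in_block(int *data) {
        __shared__ int buf[256];          // assumes blockDim.x == 256
        int t = threadIdx.x;
        int base = blockIdx.x * blockDim.x;

        buf[t] = data[base + t];          // each thread writes one element
        __syncthreads();                  // wait for every warp in the block

        // Read an element written by a thread in (possibly) another warp;
        // without the barrier this read would race with the writes above.
        data[base + t] = buf[blockDim.x - 1 - t];
    }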


SMs are SIMD Processors
– Control unit for instruction fetch, decode, and control is shared among multiple processing units
  – Control overhead is minimized (Module 1)
[Figure: block diagram of an SM (processor) — processing units with ALUs and register file, shared memory, a control unit with PC and IR, memory, and I/O]


Control Divergence
– Control divergence occurs when threads in a warp take different control-flow paths by making different control decisions
  – Some take the then-path and others take the else-path of an if-statement
  – Some threads take a different number of loop iterations than others
– The execution of threads taking different paths is serialized in current GPUs
  – The control paths taken by the threads in a warp are traversed one at a time until there are no more
  – The number of different paths can be large when considering nested control-flow statements
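As an illustrative sketch (not taken from the slides), the first kernel below diverges because even and odd lanes of the same warp take different branches, while the second makes the branch condition uniform within each warp; both assume the array length is a multiple of the launch size.

    // Divergent: even and odd lanes of the same warp take different paths;
    // the hardware serializes the two paths, roughly halving throughput.
    __global__ void divergent(float *x) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i % 2 == 0) x[i] = 2.0f * x[i];
        else            x[i] = x[i] + 1.0f;
    }

    // Warp-uniform: the condition changes only at multiples of 32, so all
    // threads of a warp take the same path (a warp size of 32 is assumed).
    __global__ void warp_uniform(float *x) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if ((i / 32) % 2 == 0) x[i] = 2.0f * x[i];
        else                   x[i] = x[i] + 1.0f;
    }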
