University of Electronic Science and Technology of China: GPU Parallel Programming, lecture slides, Lecture 01: Introduction to CUDA C

GPU Teaching Kit: Accelerated Computing
Lecture 1: Introduction to CUDA C
CUDA C vs. Thrust vs. CUDA Libraries

Lecture outline:
▪ Introduction to Heterogeneous Parallel Computing
▪ CUDA C vs. CUDA Libs vs. OpenACC
▪ Memory Allocation and Data Movement API Functions
▪ Data Parallelism and Threads

OBJECTIVES
▪ To learn the major differences between latency devices (CPU cores) and throughput devices (GPU cores)
▪ To understand why winning applications increasingly use both types of devices

CPU AND GPU ARE DESIGNED VERY DIFFERENTLY
▪ CPU: Latency Oriented Cores
  (diagram: a chip with a few large cores; each core has a local cache, registers, a SIMD unit, and control logic)
▪ GPU: Throughput Oriented Cores
  (diagram: a chip with many small compute units; each has cache/local memory, registers, SIMD units, and threading logic)

CPUS: LATENCY ORIENTED DESIGN
▪ Powerful ALU
  – Reduced operation latency
▪ Large caches
  – Convert long latency memory accesses to short latency cache accesses
▪ Sophisticated control
  – Branch prediction for reduced branch latency
  – Data forwarding for reduced data latency
(diagram: a CPU with control logic, a few powerful ALUs, a large cache, and DRAM)

GPUS: THROUGHPUT ORIENTED DESIGN
▪ Small caches
  – To boost memory throughput
▪ Simple control
  – No branch prediction
  – No data forwarding
▪ Energy efficient ALUs
  – Many, long latency but heavily pipelined for high throughput
▪ Require massive number of threads to tolerate latencies (see the sketch below)
  – Threading logic
  – Thread state
(diagram: a GPU with many small ALUs and DRAM)
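To make the latency-tolerance point concrete, here is a minimal sketch (not from the lecture; vecAdd and all variable names are hypothetical) of the programming style this design implies: the launch creates far more threads than the GPU has ALUs, so while some groups of threads wait on long-latency DRAM accesses, the hardware scheduler runs others that are ready.

// Hypothetical element-wise vector add: one thread per output element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the final partial block
        c[i] = a[i] + b[i];
}

// For n = 1,048,576 elements the launch below creates roughly one million
// threads in 256-thread blocks; this oversubscription is what lets the
// hardware hide memory latency.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);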

WINNING APPLICATIONS USE BOTH CPU AND GPU
▪ GPUs for parallel parts where throughput wins
  – GPUs can be 10X+ faster than CPUs for parallel code
▪ CPUs for sequential parts where latency matters
  – CPUs can be 10X+ faster than GPUs for sequential code
(the sketch below illustrates this division of labor)
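As a hedged sketch of this division of labor (not the lecture's code; the function and variable names are made up for illustration), the host keeps sequential setup and control flow on the latency-oriented CPU and offloads only the data-parallel loop to the throughput-oriented GPU, using the memory allocation and data movement API functions listed in the outline:

#include <cuda_runtime.h>

// Same hypothetical kernel as in the earlier sketch.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Host-side flow: sequential parts on the CPU, parallel part on the GPU.
void heterogeneousAdd(const float *h_a, const float *h_b, float *h_c, int n) {
    size_t bytes = n * sizeof(float);
    float *d_a, *d_b, *d_c;

    // Allocate device memory and copy inputs over (the CPU orchestrates).
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // The massively parallel part runs on the GPU.
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    // Copy the result back and release device memory.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
}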

Lecture outline:
▪ Introduction to Heterogeneous Parallel Computing
▪ CUDA C vs. CUDA Libs vs. OpenACC
▪ Memory Allocation and Data Movement API Functions
▪ Data Parallelism and Threads

OBJECTIVE
▪ To learn the main venues and developer resources for GPU computing
▪ Where CUDA C fits in the big picture

3 WAYS TO ACCELERATE APPLICATIONS
Applications can be accelerated in three ways:
▪ Libraries: easy to use, most performance
▪ Compiler Directives: easy to use, portable code
▪ Programming Languages: most performance, most flexibility
(the sketch below shows the same operation expressed all three ways)
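As a hedged illustration of these trade-offs (a sketch, not the lecture's code; d_x, d_y, n, and a are assumed to be set up elsewhere), the same SAXPY operation y = a*x + y can be expressed at all three levels:

// 1. Library (cuBLAS): a drop-in call, easy to use, tuned for performance.
//      cublasHandle_t handle; cublasCreate(&handle);
//      cublasSaxpy(handle, n, &a, d_x, 1, d_y, 1);

// 2. Compiler directives (OpenACC): annotate an existing loop, portable code.
//      #pragma acc parallel loop
//      for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];

// 3. Programming language (CUDA C): most performance and flexibility,
//    at the cost of writing and launching the kernel yourself.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
// Launch: saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);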
