
PARALLELISM IN HASKELL (Kathleen Fisher)


CS242: Parallelism in Haskell
Kathleen Fisher

Reading: A Tutorial on Parallel and Concurrent Programming in Haskell (skip Section 5 on STM).
Thanks to Simon Peyton Jones, Satnam Singh, and Don Stewart for these slides.

The Grand Challenge

▪ Making effective use of multi-core hardware is the challenge for programming languages now.
▪ Hardware is getting increasingly complicated:
  - Nested memory hierarchies
  - Hybrid processors: GPU + CPU, Cell, FPGA...
  - Massive compute power sitting mostly idle.
▪ We need new programming models to program new commodity machines effectively.

Candidate Models in Haskell

▪ Explicit threads
  - Non-deterministic by design
  - Monadic: forkIO and STM

        main :: IO ()
        main = do { ch <- newChan
                  ; forkIO (ioManager ch)
                  ; forkIO (worker 1 ch)
                  ... etc ... }

▪ Semi-implicit parallelism
  - Deterministic
  - Pure: par and pseq
▪ Data parallelism
  - Deterministic
  - Pure: parallel arrays
▪ Shared memory initially; distributed memory eventually; possibly even GPUs…
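For concreteness, here is a self-contained sketch of the explicit-threads model above, assuming hypothetical ioManager and worker bodies (the slide elides them): the worker pushes messages onto the channel and the manager prints whatever arrives.

    import Control.Concurrent (Chan, forkIO, newChan, readChan, writeChan, threadDelay)

    -- Hypothetical stand-ins for the slide's elided ioManager and worker.
    ioManager :: Chan String -> IO ()
    ioManager ch = do
      msg <- readChan ch        -- block until a worker sends a message
      putStrLn msg
      ioManager ch

    worker :: Int -> Chan String -> IO ()
    worker n ch =
      mapM_ (\i -> writeChan ch ("worker " ++ show n ++ ": job " ++ show i)) [1 .. 3 :: Int]

    main :: IO ()
    main = do
      ch <- newChan
      _  <- forkIO (ioManager ch)
      _  <- forkIO (worker 1 ch)
      threadDelay 100000        -- crude: give the forked threads time to run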

Parallelism vs Concurrency

▪ A parallel program exploits real parallel computing resources to run faster while computing the same answer.
  - Expectation of genuinely simultaneous execution
  - Deterministic
▪ A concurrent program models independent agents that can communicate and synchronize.
  - Meaningful on a machine with one processor
  - Non-deterministic
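A tiny illustrative sketch of that non-determinism (the thread bodies are arbitrary): which of the A and B lines come out first is decided by the scheduler, not by the program, and can differ between runs.

    import Control.Concurrent (forkIO, threadDelay)

    main :: IO ()
    main = do
      _ <- forkIO (mapM_ putStrLn ["A1", "A2", "A3"])   -- concurrent agent
      mapM_ putStrLn ["B1", "B2", "B3"]                 -- main thread
      threadDelay 100000    -- crude: let the forked thread finish before exiting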

Haskell Execution Model

    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)

[Figure: the heap object ("thunk") for fib 10, consisting of a pointer to the implementation, a storage slot for the result, and values for its free variables; forcing it unfolds into the tree of recursive calls (fib 10, fib 9, fib 8, ...).]
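To make the thunk picture concrete, here is a small illustrative GHCi session using :sprint, which prints a binding without forcing it and shows unevaluated thunks as _:

    fib :: Integer -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)

    -- ghci> x = fib 10
    -- ghci> :sprint x     -- x = _    (still an unevaluated thunk)
    -- ghci> x
    -- 55
    -- ghci> :sprint x     -- x = 55   (the result slot now holds the value)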

Functional Programming to the Rescue?

▪ No side effects makes parallelism easy, right?
  - It is always safe to speculate on pure code.
  - Execute each sub-expression in its own thread?
▪ Alas, the 80s dream does not work.
  - Far too many parallel tasks, many of which are too small to be worth the overhead of forking them.
  - Difficult/impossible for the compiler to guess which are worth forking.
▪ Idea: give the user control over which expressions might run in parallel.

The par Combinator

    par :: a -> b -> b
    x `par` y

▪ The value (i.e., the thunk) bound to x is sparked for speculative evaluation.
▪ The runtime may instantiate a spark on a thread running in parallel with the parent thread.
▪ Operationally, x `par` y = y.
▪ Typically, x is used inside y:

    blurRows `par` (mix blurCols blurRows)

▪ All parallelism is built up from the par combinator.
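As a sketch of the usual pattern (not code from the slides), a parallel Fibonacci that sparks one operand, evaluates the other with pseq, and falls back to sequential code below an illustrative cutoff; par and pseq come from Control.Parallel in the parallel package.

    import Control.Parallel (par, pseq)

    pfib :: Int -> Integer
    pfib n
      | n < 20    = sfib n                       -- too small to be worth a spark
      | otherwise = x `par` (y `pseq` (x + y))   -- spark x, evaluate y, then combine
      where
        x = pfib (n - 1)
        y = pfib (n - 2)

    sfib :: Int -> Integer
    sfib 0 = 0
    sfib 1 = 1
    sfib n = sfib (n - 1) + sfib (n - 2)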

Concurrency Hierarchy

  Very, very light: sparks
  Light: Haskell threads
  Heavy: OS threads
  (mapped onto the available CPUs)

The Meaning of par

▪ par does not guarantee a new Haskell thread.
▪ It hints that it would be good to evaluate the first argument in parallel.
▪ The runtime decides whether to convert the spark,
  - depending on the current workload.
▪ This allows par to be very cheap.
  - Programmers can use it almost anywhere.
  - It safely over-approximates the program's parallelism.
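Because sparks are so cheap and the runtime is free to ignore them, sprinkling par is safe even when it wildly over-approximates the useful parallelism. A hypothetical parMap' that sparks every list element, as a sketch:

    import Control.Parallel (par, pseq)

    -- One spark per element; unconverted sparks simply never run in parallel.
    parMap' :: (a -> b) -> [a] -> [b]
    parMap' _ []     = []
    parMap' f (a:as) = b `par` (bs `pseq` (b : bs))
      where
        b  = f a
        bs = parMap' f as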

Example: One Processor

    x `par` (y + x)

On one processor: x is sparked, y is evaluated, then x is evaluated by the same thread, and the spark for x fizzles (it is never converted).
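A sketch one can run to observe this (the workloads are arbitrary): with a single capability there is no idle worker to convert the spark for x, so the main thread evaluates x itself and the spark fizzles.

    import Control.Parallel (par)

    main :: IO ()
    main = print (x `par` (y + x))
      where
        x = sum     [1 .. 1000000] :: Integer   -- sparked, but likely never converted on one core
        y = product [1 .. 20]      :: Integer

Compiled with ghc -O2 -threaded -rtsopts and run with +RTS -N1 -s, the SPARKS line of the runtime's -s summary typically reports this spark as fizzled; with -N2 it may instead be converted and evaluated on the second capability.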
