data. Conventions governing locks in a program are sometimes written down, but they're almost never stated precisely enough for a tool to check them.

Locks have other, more subtle problems. Locking is a global program property, which is difficult to localize to a single procedure, class, or framework. All code that accesses a piece of shared state must know and obey the locking convention, regardless of who wrote the code or where it resides.

Attempts to make synchronization a local property do not always work. Consider a popular solution such as Java's synchronized methods. Each of an object's methods can take a lock on the object, so no two threads can directly manipulate the object's state simultaneously. As long as an object's state is accessed only by its methods and programmers remember to add the synchronized declaration, this approach works.

There are at least three major problems with synchronized methods. First, they are not appropriate for types whose methods call virtual functions on other objects (e.g., Java's Vector and .NET's SyncHashTable), because calling into third-party code while holding a lock opens the possibility of deadlock. Second, synchronized methods can perform too much locking, by acquiring and releasing locks on all object instances, even those never shared across threads (typically the majority). Third, synchronized methods can also perform too little locking, by not preserving atomicity when a program calls multiple methods on an object or on different objects. As a simple example of the latter, consider a banking transfer:

account1.Credit(amount); account2.Debit(amount)

Per-object locking protects each call, but does not prevent another thread from seeing the inconsistent state of the two accounts between the calls. Operations of this type, whose atomicity does not correspond to a method call boundary, require additional, explicit synchronization.
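To make the fix concrete, here is a minimal sketch in Java of one form this additional synchronization can take. The Account class and the identity-hash lock ordering are our own illustration, not part of the article's example:

class Account {
    private long balance;
    Account(long openingBalance) { balance = openingBalance; }
    void credit(long amount) { balance += amount; }
    void debit(long amount)  { balance -= amount; }
}

class Bank {
    static void transfer(Account from, Account to, long amount) {
        // Acquire both locks in a globally consistent order so that two
        // transfers running in opposite directions cannot deadlock.
        // (identityHashCode ties are ignored here for brevity; production
        // code would need a tie-breaker.)
        Account first  = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                // With both locks held across both calls, no other thread
                // can observe the state between the credit and the debit.
                to.credit(amount);
                from.debit(amount);
            }
        }
    }
}

Note that this works only if every other piece of code that touches an account's balance follows the same convention, which is precisely the global-property problem described earlier.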
LOCK ALTERNATIVES
For completeness, we note two major alternatives to locks. The first is lock-free programming. By relying on a deep knowledge of a processor's memory model, it is possible to create data structures that can be shared without explicit locking. Lock-free programming is difficult and fragile; inventing a new lock-free data-structure implementation is still often a publishable result.
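To give a flavor of the style, here is a minimal sketch of one classic lock-free structure, a Treiber stack, written against Java's java.util.concurrent.atomic package. It is illustrative only and elides the issues that make production lock-free code so hard:

import java.util.concurrent.atomic.AtomicReference;

class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<Node<T>>();

    void push(T value) {
        Node<T> node = new Node<T>(value);
        Node<T> snapshot;
        do {
            snapshot = head.get();   // read the current top
            node.next = snapshot;    // link the new node above it
        } while (!head.compareAndSet(snapshot, node)); // retry if we lost a race
    }

    T pop() {
        Node<T> snapshot;
        do {
            snapshot = head.get();
            if (snapshot == null) {
                return null;         // empty stack
            }
        } while (!head.compareAndSet(snapshot, snapshot.next));
        return snapshot.value;
    }
}

The retry loops never block: a thread that loses a compareAndSet race simply re-reads the head and tries again. Java's garbage collector sidesteps the ABA problem here; in C or C++, this same sketch would be incorrect without additional machinery, which illustrates the fragility noted above.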
The second alternative is transactional memory, which brings the central idea of transactions from databases into programming languages. Programmers write their programs as a series of explicitly atomic blocks, which appear to execute indivisibly, so concurrently executing operations see the shared state strictly before or after an atomic action executes. Although many people view transactional memory as a promising direction, it is still a subject of active research.

WHAT WE NEED IN PROGRAMMING LANGUAGES
We need higher-level language abstractions, including evolutionary extensions to current imperative languages, so that existing applications can incrementally become concurrent. The programming model must make concurrency easy to understand and reason about, not only during initial development but also during maintenance.

EXPLICIT, IMPLICIT, AND AUTOMATIC PARALLELIZATION
Explicit programming models provide abstractions that require programmers to state exactly where concurrency can occur. The major advantage of expressing concurrency explicitly is that it allows programmers to take full advantage of their application domain knowledge and express the full potential concurrency in the application. It has drawbacks, however. It requires new higher-level programming abstractions and a higher level of programmer proficiency in the presence of shared data.
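As a small sketch of the explicit style, consider splitting a sum across two tasks with Java's java.util.concurrent (standard since Java 5). The workload is hypothetical; the point is that each unit of concurrency is stated by the programmer:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ExplicitSum {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each submit() is an explicit statement that this task may run
        // in parallel with the caller and with other tasks.
        Future<Long> low = pool.submit(new Callable<Long>() {
            public Long call() { return sum(0, 5000000); }
        });
        Future<Long> high = pool.submit(new Callable<Long>() {
            public Long call() { return sum(5000000, 10000000); }
        });
        System.out.println(low.get() + high.get()); // explicit join
        pool.shutdown();
    }

    private static long sum(long from, long to) {
        long total = 0;
        for (long i = from; i < to; i++) total += i;
        return total;
    }
}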
Implicit programming models hide concurrency inside libraries or behind APIs, so that a caller retains a sequential worldview while the library performs the work in parallel. This approach lets naïve programmers safely use concurrency. Its main drawback is that some kinds of concurrency-related performance gains can't be realized this way. Also, it is difficult to design interfaces that do not expose the concurrency in any circumstance; for example, when a program applies the operation to several instances of the same data.
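A corresponding sketch of the implicit style might look like the following. ArrayMath.sum is a hypothetical library routine; its caller never sees a thread:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ArrayMath {
    // Looks like an ordinary synchronous call; the parallel decomposition
    // is entirely an internal detail of the library.
    public static long sum(final long[] data) {
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            int chunk = (data.length + workers - 1) / workers;
            List<Future<Long>> parts = new ArrayList<Future<Long>>();
            for (int start = 0; start < data.length; start += chunk) {
                final int from = start;
                final int to = Math.min(start + chunk, data.length);
                parts.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long s = 0;
                        for (int i = from; i < to; i++) s += data[i];
                        return s;
                    }
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get();
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}

A caller simply writes: long total = ArrayMath.sum(data). Contrast this with the explicit sketch above: the decomposition logic is the same, but here the library author, not the application programmer, decides where the concurrency lives.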
Another widely studied approach is automatic parallelization, where a compiler attempts to find parallelism, typically in programs written in a conventional language such as Fortran. As appealing as it may seem, this approach has not worked well in practice. Accurate program analysis is necessary to understand a program's potential behavior. This analysis is challenging for simple languages such as Fortran, and far more difficult for languages, such as C, that manipulate pointer-based data. Moreover, sequential programs often use sequential algorithms and contain little concurrency.

IMPERATIVE AND FUNCTIONAL LANGUAGES
Popular commercial programming languages (e.g., Pascal, C, C++, Java, C#) are imperative languages in which a