are not that concerned about space issues these days. In our work with customers we have encountered concerns with run-time efficiency for the most part. Since customers drive requirements, we will adopt their focus on time efficiency. From here on, we will restrict performance to its time-efficiency interpretation. Generally we will look at space considerations only when they interfere with run-time performance, as in caching and paging.

In discussing time efficiency, we will often mention the terms "pathlength" and "instruction count" interchangeably. Both stand for the number of assembler language instructions generated by a fragment of code. In a RISC architecture, if a code fragment exhibits a reasonable "locality of reference" (i.e., cache hits), the ratio between instruction counts and clock cycles will approximate one. On CISC architectures it may average two or more, but in any event, poor instruction counts always indicate poor execution time, regardless of processor architecture. A good instruction count is necessary but not sufficient for high performance. Consequently, it is a crude performance indicator, but still useful. It will be used in conjunction with time measurements to evaluate efficiency.

Organization of This Book

We start the performance tour close to home with a real-life example. Chapter 1 is a war story of C++ code that exhibited atrocious performance, and what we did to resolve it. This example will drive home some performance lessons that might very well apply to diverse scenarios.
Object-oriented design in C++ might harbor a performance cost. This is what we pay for the power of OO support. The significance of this cost, the factors affecting it, and how and when you can get around it are discussed in Chapters 2, 3, and 4.

Chapter 5 is dedicated to temporaries. The creation of temporary objects is a C++ feature that catches new C++ programmers off guard. C programmers are not used to the C compiler generating significant overhead "under the covers." If you aim to write high-efficiency C++, it is essential that you know when temporaries are generated by the C++ compiler and how to avoid them (a brief illustrative sketch appears at the end of this overview).

Memory management is the subject of Chapters 6 and 7. Allocating and deallocating memory on the fly is expensive. Functions such as new() and delete() are designed to be flexible and general. They deal with variable-sized memory chunks in a multithreaded environment. As such, their speed is compromised. Oftentimes, you are in a position to make simplifying assumptions about your code that will significantly boost the speed of memory allocation and deallocation. These chapters will discuss several simplifying assumptions that can be made and the efficient memory managers that are designed to leverage them.

Inlining is probably the second most popular performance tip, right after passing objects by reference. It is not as simple as it sounds. The inline keyword, just like register, is just a hint that the compiler often ignores. Situations in which inline is likely to be ignored and other unexpected consequences are discussed in Chapters 8, 9, and 10.

Performance, flexibility, and reuse seldom go hand-in-hand. The Standard Template Library is an attempt to buck that trend and to combine these three into a powerful component. We will examine the performance of the STL in Chapter 11.

Reference counting is a technique often used by experienced C++ programmers. You cannot dedicate a book to C++ performance without coverage of this technique, discussed in Chapter 12.

Software performance cannot always be salvaged by a single "silver bullet" fix. Performance degradation is often a result of many small local inefficiencies, each of which is insignificant by itself. It is the combination that results in a significant degradation. Over the years, while resolving many performance bugs in various C++ products, we have come to identify certain bugs that seem to float to the surface
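To make the temporaries point concrete, here is a minimal, hedged sketch (not taken from this book) built around a hypothetical MyString class. Its implicit converting constructor lets the compiler silently create a temporary MyString whenever a char* argument meets a const MyString& parameter; an overload on const char* is one simple way to avoid that temporary.

    // Hedged sketch: MyString is a hypothetical class used only to illustrate
    // when the C++ compiler quietly manufactures temporary objects.
    #include <cstddef>
    #include <cstring>

    class MyString {
    public:
        // Converting constructor: permits an implicit char* -> MyString
        // conversion, which is exactly what allows a temporary to be created.
        MyString(const char* s) : len_(std::strlen(s)) {}
        std::size_t length() const { return len_; }
    private:
        std::size_t len_;
    };

    // Pass-by-const-reference is cheap for MyString arguments, but a call such
    // as countChars("hello") still forces a temporary MyString into existence
    // just so the reference has something to bind to.
    std::size_t countChars(const MyString& s) { return s.length(); }

    // One way to avoid that temporary: provide an overload for const char*.
    std::size_t countChars(const char* s) { return std::strlen(s); }

    int main() {
        MyString word("performance");
        std::size_t a = countChars(word);     // no temporary; binds to an existing object
        std::size_t b = countChars("hello");  // char* overload chosen; no hidden MyString
        return static_cast<int>(a + b);
    }

If the const char* overload were removed, the second call would still compile, but every such invocation would construct and destroy a MyString temporary behind the scenes, which is exactly the kind of hidden cost the temporaries discussion is concerned with.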