
Nanjing University: Object-Oriented Technology (OOT) course teaching resources (textbook electronic edition), 01 Software Quality



1  Software quality

Engineering seeks quality; software engineering is the production of quality software.

This book introduces a set of techniques which hold the potential for remarkable improvements in the quality of software products. Before studying these techniques, we must clarify their goals. Software quality is best described as a combination of several factors. This chapter analyzes some of these factors, shows where improvements are most sorely needed, and points to the directions where we shall be looking for solutions in the rest of our journey.

1.1 EXTERNAL AND INTERNAL FACTORS

We all want our software systems to be fast, reliable, easy to use, readable, modular, structured and so on. But these adjectives describe two different sorts of qualities.

On one side, we are considering such qualities as speed or ease of use, whose presence or absence in a software product may be detected by its users. These properties may be called external quality factors.

Under "users" we should include not only the people who actually interact with the final products, like an airline agent using a flight reservation system, but also those who purchase the software or contract out its development, like an airline executive in charge of acquiring or commissioning flight reservation systems. So a property such as the ease with which the software may be adapted to changes of specifications, defined later in this discussion as extendibility, falls into the category of external factors even though it may not be of immediate interest to such "end users" as the reservations agent.

Other qualities applicable to a software product, such as being modular, or readable, are internal factors, perceptible only to computer professionals who have access to the actual software text.

In the end, only external factors matter. If I use a Web browser or live near a computer-controlled nuclear plant, little do I care whether the source program is readable or modular if graphics take ages to load, or if a wrong input blows up the plant. But the key to achieving these external factors is in the internal ones: for the users to enjoy the visible qualities, the designers and implementers must have applied internal techniques that will ensure the hidden qualities.


The following chapters present a set of modern techniques for obtaining internal quality. We should not, however, lose track of the global picture; the internal techniques are not an end in themselves, but a means to reach external software qualities. So we must start by looking at external factors. The rest of this chapter examines them.

1.2 A REVIEW OF EXTERNAL FACTORS

Here are the most important external quality factors, whose pursuit is the central task of object-oriented software construction.

Correctness

Definition: correctness
Correctness is the ability of software products to perform their exact tasks, as defined by their specification.

Correctness is the prime quality. If a system does not do what it is supposed to do, everything else about it (whether it is fast, has a nice user interface and so on) matters little.

But this is easier said than done. Even the first step to correctness is already difficult: we must be able to specify the system requirements in a precise form, by itself quite a challenging task.

Methods for ensuring correctness will usually be conditional. A serious software system, even a small one by today's standards, touches on so many areas that it would be impossible to guarantee its correctness by dealing with all components and properties on a single level. Instead, a layered approach is necessary, each layer relying on lower ones:

[Figure: Layers in software development. From bottom to top: Hardware, Operating System, Compiler, Application system.]

In the conditional approach to correctness, we only worry about guaranteeing that each layer is correct on the assumption that the lower levels are correct. This is the only realistic technique, as it achieves separation of concerns and lets us concentrate at each stage on a limited set of problems. You cannot usefully check that a program in a high-level language X is correct unless you are able to assume that the compiler on hand implements X correctly. This does not necessarily mean that you trust the compiler blindly, simply that you separate the two components of the problem: compiler correctness, and correctness of your program relative to the language's semantics.

In the method described in this book, even more layers intervene: software development will rely on libraries of reusable components, which may be used in many different applications.

[Figure: Layers in a development process that includes reuse. From bottom to top: Hardware, Operating System, Compiler, Kernel library, Base library, ... More libraries ..., Application library, Application system.]

The conditional approach will also apply here: we should ensure that the libraries are correct and, separately, that the application is correct assuming the libraries are.

Many practitioners, when presented with the issue of software correctness, think about testing and debugging. We can be more ambitious: in later chapters we will explore a number of techniques, in particular typing and assertions, meant to help build software that is correct from the start, rather than debugging it into correctness. Debugging and testing remain indispensable, of course, as a means of double-checking the result.
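As a rough illustration of what "assertions" can mean in practice (a minimal sketch only, in Python rather than the notation used later in this book; the routine and its tolerance are invented for the example), a routine can state the condition it expects from its callers and the property it guarantees in return, and have both checked at run time:

    import math

    def safe_sqrt(x: float) -> float:
        # Precondition: the caller must supply a non-negative argument.
        assert x >= 0.0, "precondition violated: x must be non-negative"
        result = math.sqrt(x)
        # Postcondition: the result, squared, gives back x (within rounding error).
        assert abs(result * result - x) <= 1e-9 * max(1.0, x), "postcondition violated"
        return result

If a call such as safe_sqrt(-1.0) ever occurs, the failure is reported at the boundary where the specification was violated, rather than surfacing later as a mysteriously wrong result.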

It is possible to go further and take a completely formal approach to software construction. This book falls short of such a goal, as suggested by the somewhat timid terms "check", "guarantee" and "ensure" used above in preference to the word "prove". Yet many of the techniques described in later chapters come directly from the work on mathematical techniques for formal program specification and verification, and go a long way towards ensuring the correctness ideal.

Robustness

Definition: robustness
Robustness is the ability of software systems to react appropriately to abnormal conditions.

Robustness complements correctness. Correctness addresses the behavior of a system in cases covered by its specification; robustness characterizes what happens outside of that specification.

[Figure: Robustness versus correctness. Cases within the SPECIFICATION are the province of correctness; cases outside it are the province of robustness.]

As reflected by the wording of its definition, robustness is by nature a fuzzier notion than correctness. Since we are concerned here with cases not covered by the specification, it is not possible to say, as with correctness, that the system should "perform its tasks" in such a case; were these tasks known, the abnormal case would become part of the specification and we would be back in the province of correctness.

This definition of "abnormal case" will be useful again when we study exception handling (see chapter 12). It implies that the notions of normal and abnormal case are always relative to a certain specification; an abnormal case is simply a case that is not covered by the specification. If you widen the specification, cases that used to be abnormal become normal, even if they correspond to events such as erroneous user input that you would prefer not to happen. "Normal" in this sense does not mean "desirable", but simply "planned for in the design of the software". Although it may seem paradoxical at first that erroneous input should be called a normal case, any other approach would have to rely on subjective criteria, and so would be useless.

There will always be cases that the specification does not explicitly address. The role of the robustness requirement is to make sure that if such cases do arise, the system does not cause catastrophic events; it should produce appropriate error messages, terminate its execution cleanly, or enter a so-called "graceful degradation" mode.
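To make the distinction concrete, here is a small hedged sketch (the routine names and messages are invented, not from the book): the parsing routine's specification covers well-formed input; the surrounding code is what robustness adds, so that input outside the specification leads to a clean error message and exit rather than a crash.

    import sys

    def parse_positive_int(text: str) -> int:
        # Covered by the specification: a string of digits denoting a positive integer.
        value = int(text)
        if value <= 0:
            raise ValueError("expected a positive integer")
        return value

    def main() -> int:
        try:
            count = parse_positive_int(input("How many items? "))
        except ValueError as error:
            # Abnormal case: report it and terminate cleanly instead of crashing.
            print(f"Invalid input: {error}", file=sys.stderr)
            return 1
        print(f"Processing {count} items")
        return 0

    if __name__ == "__main__":
        sys.exit(main())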

Extendibility

Definition: extendibility
Extendibility is the ease of adapting software products to changes of specification.

Software is supposed to be soft, and indeed is in principle; nothing can be easier than to change a program if you have access to its source code. Just use your favorite text editor.

The problem of extendibility is one of scale. For small programs change is usually not a difficult issue; but as software grows bigger, it becomes harder and harder to adapt. A large software system often looks to its maintainers as a giant house of cards in which pulling out any one element might cause the whole edifice to collapse.

We need extendibility because at the basis of all software lies some human phenomenon and hence fickleness. The obvious case of business software ("Management Information Systems"), where passage of a law or a company's acquisition may suddenly invalidate the assumptions on which a system rested, is not special; even in scientific computation, where we may expect the laws of physics to stay in place from one month to the next, our way of understanding and modeling physical systems will change.

Traditional approaches to software engineering did not take enough account of change, relying instead on an ideal view of the software lifecycle where an initial analysis stage freezes the requirements, the rest of the process being devoted to designing and building a solution. This is understandable: the first task in the progress of the discipline was to develop sound techniques for stating and solving fixed problems, before we could worry about what to do if the problem changes while someone is busy solving it.

But now, with the basic software engineering techniques in place, it has become essential to recognize and address this central issue. Change is pervasive in software development: change of requirements, of our understanding of the requirements, of algorithms, of data representation, of implementation techniques. Support for change is a basic goal of object technology and a running theme through this book.

Although many of the techniques that improve extendibility may be introduced on small examples or in introductory courses, their relevance only becomes clear for larger projects. Two principles are essential for improving extendibility:

• Design simplicity: a simple architecture will always be easier to adapt to changes than a complex one.

• Decentralization: the more autonomous the modules, the higher the likelihood that a simple change will affect just one module, or a small number of modules, rather than triggering off a chain reaction of changes over the whole system (a small sketch follows below).

The object-oriented method is, before anything else, a system architecture method which helps designers produce systems whose structure remains both simple (even for large systems) and decentralized. Simplicity and decentralization will be recurring themes in the discussions leading to object-oriented principles in the following chapters.
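As a hedged illustration of the decentralization principle (an invented example, not taken from the book): if a billing module depends only on a small interface rather than on concrete tax rules, a change of tax law stays confined to one module.

    from abc import ABC, abstractmethod

    class TaxPolicy(ABC):
        """Small interface: the only thing the billing module knows about taxes."""
        @abstractmethod
        def tax_on(self, amount: float) -> float: ...

    class FlatTax(TaxPolicy):
        def __init__(self, rate: float) -> None:
            self.rate = rate
        def tax_on(self, amount: float) -> float:
            return amount * self.rate

    class Billing:
        """Autonomous module: unaffected when a new TaxPolicy implementation appears."""
        def __init__(self, policy: TaxPolicy) -> None:
            self.policy = policy
        def total(self, amount: float) -> float:
            return amount + self.policy.tax_on(amount)

    # A change of tax law means writing a new TaxPolicy class; Billing stays untouched.
    print(Billing(FlatTax(0.2)).total(100.0))   # 120.0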

Reusability

Definition: reusability
Reusability is the ability of software elements to serve for the construction of many different applications.

The need for reusability comes from the observation that software systems often follow similar patterns; it should be possible to exploit this commonality and avoid reinventing solutions to problems that have been encountered before. By capturing such a pattern, a reusable software element will be applicable to many different developments.

Reusability has an influence on all other aspects of software quality, for solving the reusability problem essentially means that less software must be written, and hence that more effort may be devoted (for the same total cost) to improving the other factors, such as correctness and robustness.

Here again is an issue that the traditional view of the software lifecycle had not properly recognized, and for the same historical reason: you must find ways to solve one problem before you worry about applying the solution to other problems. But with the growth of software and its attempts to become a true industry the need for reusability has become a pressing concern.

Reusability will play a central role in the discussions of the following chapters, one of which (chapter 4) is in fact devoted entirely to an in-depth examination of this quality factor, its concrete benefits, and the issues it raises.
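A tiny, invented illustration of what "capturing a pattern" can look like in code (a sketch, not the book's treatment of reuse): a search routine written once against a comparison order can serve text processing, inventory and scheduling applications alike.

    from typing import Sequence, TypeVar

    T = TypeVar("T")

    def index_of(sorted_items: Sequence[T], target: T) -> int:
        """Reusable element: binary search over any sorted sequence of comparable items."""
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:   # requires items to support "<"
                low = mid + 1
            else:
                high = mid - 1
        return -1

    # The same element serves very different applications:
    print(index_of([2, 3, 5, 7, 11], 7))            # 3
    print(index_of(["ant", "bee", "cat"], "bee"))   # 1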


Compatibility

Definition: compatibility
Compatibility is the ease of combining software elements with others.

Compatibility is important because we do not develop software elements in a vacuum: they need to interact with each other. But they too often have trouble interacting because they make conflicting assumptions about the rest of the world. An example is the wide variety of incompatible file formats supported by many operating systems. A program can directly use another's result as input only if the file formats are compatible.

Lack of compatibility can yield disaster. Here is an extreme case (San Jose (Calif.) Mercury News, July 20, 1992; quoted in the "comp.risks" Usenet newsgroup, 13.67, July 1992; slightly abridged):

DALLAS - Last week, AMR, the parent company of American Airlines, Inc., said it fell on its sword trying to develop a state-of-the-art, industry-wide system that could also handle car and hotel reservations.

AMR cut off development of its new Confirm reservation system only weeks after it was supposed to start taking care of transactions for partners Budget Rent-A-Car, Hilton Hotels Corp. and Marriott Corp. Suspension of the $125 million, 4-year-old project translated into a $165 million pre-tax charge against AMR's earnings and fractured the company's reputation as a pacesetter in travel technology. [...]

As far back as January, the leaders of Confirm discovered that the labors of more than 200 programmers, systems analysts and engineers had apparently been for naught. The main pieces of the massive project, requiring 47,000 pages to describe, had been developed separately, by different methods. When put together, they did not work with each other. When the developers attempted to plug the parts together, they could not. Different "modules" could not pull the information needed from the other side of the bridge.

AMR Information Services fired eight senior project members, including the team leader. [...] In late June, Budget and Hilton said they were dropping out.

The key to compatibility lies in homogeneity of design, and in agreeing on standardized conventions for inter-program communication. Approaches include:

• Standardized file formats, as in the Unix system, where every text file is simply a sequence of characters.

• Standardized data structures, as in Lisp systems, where all data, and programs as well, are represented by binary trees (called lists in Lisp).

• Standardized user interfaces, as on various versions of Windows, OS/2 and MacOS, where all tools rely on a single paradigm for communication with the user, based on standard components such as windows, icons, menus etc.

More general solutions are obtained by defining standardized access protocols to all important entities manipulated by the software. This is the idea behind abstract data types and the object-oriented approach (on abstract data types see chapter 6), as well as so-called middleware protocols such as CORBA and Microsoft's OLE-COM (ActiveX).
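Here is a hedged sketch of what a "standardized access protocol" can mean at the programming level (the interface and classes are invented for illustration, and far simpler than CORBA or COM): every component that wants to exchange reservation data agrees on one small interface, so that modules written independently can still be combined.

    from abc import ABC, abstractmethod

    class ReservationSource(ABC):
        """The agreed-upon access protocol: the only way modules exchange reservations."""
        @abstractmethod
        def reservations(self) -> list:
            """Return reservations as dictionaries with 'name' and 'date' keys."""

    class HotelSystem(ReservationSource):
        def reservations(self) -> list:
            return [{"name": "Hilton guest", "date": "1992-07-20"}]

    class CarRentalSystem(ReservationSource):
        def reservations(self) -> list:
            return [{"name": "Budget renter", "date": "1992-07-21"}]

    def combined_report(sources: list) -> None:
        # Works for any module that honors the protocol, past or future.
        for source in sources:
            for r in source.reservations():
                print(f"{r['date']}: {r['name']}")

    combined_report([HotelSystem(), CarRentalSystem()])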


Efficiency

Definition: efficiency
Efficiency is the ability of a software system to place as few demands as possible on hardware resources, such as processor time, space occupied in internal and external memories, bandwidth used in communication devices.

Almost synonymous with efficiency is the word "performance". The software community shows two typical attitudes towards efficiency:

• Some developers have an obsession with performance issues, leading them to devote a lot of effort to presumed optimizations.

• But a general tendency also exists to downplay efficiency concerns, as evidenced by such industry lore as "make it right before you make it fast" and "next year's computer model is going to be 50% faster anyway".

It is not uncommon to see the same person displaying these two attitudes at different times, as in a software case of split personality (Dr. Abstract and Mr. Microsecond).

Where is the truth? Clearly, developers have often shown an exaggerated concern for micro-optimization. As already noted, efficiency does not matter much if the software is not correct (suggesting a new dictum, "do not worry how fast it is unless it is also right", close to the previous one but not quite the same). More generally, the concern for efficiency must be balanced with other goals such as extendibility and reusability; extreme optimizations may make the software so specialized as to be unfit for change and reuse. Furthermore, the ever growing power of computer hardware does allow us to have a more relaxed attitude about gaining the last byte or microsecond.

All this, however, does not diminish the importance of efficiency. No one likes to wait for the responses of an interactive system, or to have to purchase more memory to run a program. So offhand attitudes to performance include much posturing; if the final system is so slow or bulky as to impede usage, those who used to declare that "speed is not that important" will not be the last to complain.

This issue reflects what I believe to be a major characteristic of software engineering, not likely to go away soon: software construction is difficult precisely because it requires taking into account many different requirements, some of which, such as correctness, are abstract and conceptual, whereas others, such as efficiency, are concrete and bound to the properties of computer hardware.

For some scientists, software development is a branch of mathematics; for some engineers, it is a branch of applied technology. In reality, it is both. The software developer must reconcile the abstract concepts with their concrete implementations, the mathematics of correct computation with the time and space constraints deriving from physical laws and from limitations of current hardware technology. This need to please the angels as well as the beasts may be the central challenge of software engineering.

The constant improvement in computer power, impressive as it is, is not an excuse for overlooking efficiency, for at least three reasons:

• Someone who purchases a bigger and faster computer wants to see some actual benefit from the extra power: to handle new problems, process previous problems faster, or process bigger versions of the previous problems in the same amount of time. Using the new computer to process the previous problems in the same amount of time will not do!

• One of the most visible effects of advances in computer power is actually to increase the lead of good algorithms over bad ones. Assume that a new machine is twice as fast as the previous one. Let n be the size of the problem to solve, and N the maximum n that can be handled by a certain algorithm in a given time. Then if the algorithm is in O(n), that is to say, runs in a time proportional to n, the new machine will enable you to handle problem sizes of about 2 * N for large N. For an algorithm in O(n^2) the new machine will only yield a 41% increase of N. An algorithm in O(2^n), similar to certain combinatorial, exhaustive-search algorithms, would just add one to N, not much of an improvement for your money. (The arithmetic is spelled out in the sketch after this list.)

• In some cases efficiency may affect correctness. A specification may state that the computer response to a certain event must occur no later than a specified time; for example, an in-flight computer must be prepared to detect and process a message from the throttle sensor fast enough to take corrective action. This connection between efficiency and correctness is not restricted to applications commonly thought of as "real time"; few people are interested in a weather forecasting model that takes twenty-four hours to predict the next day's weather.
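To check those figures (a small sketch, not from the book): a machine twice as fast can do twice the work in the same time budget, so the new maximum size N' satisfies work(N') = 2 * work(N). For O(n) this gives N' = 2N; for O(n^2), N'^2 = 2N^2, hence N' = sqrt(2) * N, about a 41% increase; for O(2^n), 2^N' = 2 * 2^N, hence N' = N + 1. The throwaway script below just evaluates these formulas:

    import math

    def new_max_size(N: int) -> dict:
        # A machine twice as fast can do twice the work in the same time budget.
        return {
            "O(n)":   2 * N,                      # doubled problem size
            "O(n^2)": round(math.sqrt(2) * N),    # about a 41% increase
            "O(2^n)": N + 1,                      # one extra unit of problem size
        }

    print(new_max_size(1000))   # {'O(n)': 2000, 'O(n^2)': 1414, 'O(2^n)': 1001}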

Another example, although perhaps less critical, has been of frequent annoyance to me: a window management system that I used for a while was sometimes too slow to detect that the mouse cursor had moved from one window to another, so that characters typed at the keyboard, meant for a certain window, would occasionally end up in another.

In this case a performance limitation causes a violation of the specification, that is to say of correctness, which even in seemingly innocuous everyday applications can cause nasty consequences: think of what can happen if the two windows are used to send electronic mail messages to two different correspondents. For less than this marriages have been broken, even wars started.

Because this book is focused on the concepts of object-oriented software engineering, not on implementation issues, only a few sections deal explicitly with the associated performance costs. But the concern for efficiency will be there throughout. Whenever the discussion presents an object-oriented solution to some problem, it will make sure that the solution is not just elegant but also efficient; whenever it introduces some new O-O mechanism, be it garbage collection (and other approaches to memory management for object-oriented computation), dynamic binding, genericity or repeated inheritance, it will do so based on the knowledge that the mechanism may be implemented at a reasonable cost in time and in space; and whenever appropriate it will mention the performance consequences of the techniques studied.


Efficiency is only one of the factors of quality; we should not (like some in the profession) let it rule our engineering lives. But it is a factor, and must be taken into consideration, whether in the construction of a software system or in the design of a programming language. If you dismiss performance, performance will dismiss you.

Portability

Definition: portability
Portability is the ease of transferring software products to various hardware and software environments.

Portability addresses variations not just of the physical hardware but more generally of the hardware-software machine, the one that we really program, which includes the operating system, the window system if applicable, and other fundamental tools. In the rest of this book the word "platform" will be used to denote a type of hardware-software machine; an example of platform is "Intel X86 with Windows NT" (known as "Wintel").

Many of the existing platform incompatibilities are unjustified, and to a naïve observer the only explanation sometimes seems to be a conspiracy to victimize humanity in general and programmers in particular. Whatever its causes, however, this diversity makes portability a major concern for both developers and users of software.

Ease of use

Definition: ease of use
Ease of use is the ease with which people of various backgrounds and qualifications can learn to use software products and apply them to solve problems. It also covers the ease of installation, operation and monitoring.

The definition insists on the various levels of expertise of potential users. This requirement poses one of the major challenges to software designers preoccupied with ease of use: how to provide detailed guidance and explanations to novice users, without bothering expert users who just want to get right down to business.

As with many of the other qualities discussed in this chapter, one of the keys to ease of use is structural simplicity. A well-designed system, built according to a clear, well thought-out structure, will tend to be easier to learn and use than a messy one. The condition is not sufficient, of course (what is simple and clear to the designer may be difficult and obscure to users, especially if explained in designer's rather than user's terms), but it helps considerably.

This is one of the areas where the object-oriented method is particularly productive; many O-O techniques, which appear at first to address design and implementation, also yield powerful new interface ideas that help the end users. Later chapters will introduce several examples.


Software designers preoccupied with ease of use will also be well-advised to consider with some mistrust the precept most frequently quoted in the user interface literature, from an early article by Hansen: know the user. (See Wilfred J. Hansen, "User Engineering Principles for Interactive Systems", Proceedings of FJCC 39, AFIPS Press, Montvale (NJ), 1971, pp. 523-532.) The argument is that a good designer must make an effort to understand the system's intended user community. This view ignores one of the features of successful systems: they always outgrow their initial audience. (Two old and famous examples are Fortran, conceived as a tool to solve the problem of the small community of engineers and scientists programming the IBM 704, and Unix, meant for internal use at Bell Laboratories.) A system designed for a specific group will rely on assumptions that simply do not hold for a larger audience.

Good user interface designers follow a more prudent policy. They make as limited assumptions about their users as they can. When you design an interactive system, you may expect that users are members of the human race and that they can read, move a mouse, click a button, and type (slowly); not much more. If the software addresses a specialized application area, you may perhaps assume that your users are familiar with its basic concepts. But even that is risky. To reverse-paraphrase Hansen's advice:

User Interface Design principle
Do not pretend you know the user; you don't.

Functionality

Definition: functionality
Functionality is the extent of possibilities provided by a system.

One of the most difficult problems facing a project leader is to know how much functionality is enough. The pressure for more facilities, known in industry parlance as featurism (often "creeping featurism"), is constantly there. Its consequences are bad for internal projects, where the pressure comes from users within the same company, and worse for commercial products, as the most prominent part of a journalist's comparative review is often the table listing side by side the features offered by competing products.

Featurism is actually the combination of two problems, one more difficult than the other. The easier problem is the loss of consistency that may result from the addition of new features, affecting the product's ease of use. Users are indeed known to complain that all the "bells and whistles" of a product's new version make it horrendously complex. Such comments should be taken with a grain of salt, however, since the new features do not come out of nowhere: most of the time they have been requested by users: other users. What to me looks like a superfluous trinket may be an indispensable facility to you.

The solution here is to work again and again on the consistency of the overall product, trying to make everything fit into a general mold. A good software product is based on a small number of powerful ideas; even if it has many specialized features, they should all be explainable as consequences of these basic concepts. The "grand plan" must be visible, and everything should have its place in it.
