3 Modularity

From the goals of extendibility and reusability, two of the principal quality factors introduced in chapter 1, follows the need for flexible system architectures, made of autonomous software components. This is why chapter 1 also introduced the term modularity to cover the combination of these two quality factors.

Modular programming was once taken to mean the construction of programs as assemblies of small pieces, usually subroutines. But such a technique cannot bring real extendibility and reusability benefits unless we have a better way of guaranteeing that the resulting pieces — the modules — are self-contained and organized in stable architectures. Any comprehensive definition of modularity must ensure these properties.

A software construction method is modular, then, if it helps designers produce software systems made of autonomous elements connected by a coherent, simple structure. The purpose of this chapter is to refine this informal definition by exploring what precise properties such a method must possess to deserve the “modular” label. The focus will be on design methods, but the ideas also apply to earlier stages of system construction (analysis, specification) and must of course be maintained at the implementation and maintenance stages.

As it turns out, a single definition of modularity would be insufficient; as with software quality, we must look at modularity from more than one viewpoint. This chapter introduces a set of complementary properties: five criteria, five rules and five principles of modularity which, taken collectively, cover the most important requirements on a modular design method.

For the practicing software developer, the principles and the rules are just as important as the criteria.
The difference is simply one of causality: the criteria are mutually independent — and it is indeed possible for a method to satisfy one of them while violating some of the others — whereas the rules follow from the criteria and the principles follow from the rules.

You might expect this chapter to begin with a precise description of what a module looks like. This is not the case, and for a good reason: our goal for the exploration of modularity issues, in this chapter and the next two, is precisely to analyze the properties which a satisfactory module structure must satisfy; so the form of modules will be a conclusion of the discussion, not a premise. Until we reach that conclusion the word
“module” will denote the basic unit of decomposition of our systems, whatever it actually is. If you are familiar with non-object-oriented methods you will probably think of the subroutines present in most programming and design languages, or perhaps of packages as present in Ada and (under a different name) in Modula. The discussion will lead in a later chapter to the O-O form of module — the class — which supersedes these ideas. If you have encountered classes and O-O techniques before, you should still read this chapter to understand the requirements that classes address, a prerequisite if you want to use them well.

3.1 FIVE CRITERIA

A design method worthy of being called “modular” should satisfy five fundamental requirements, explored in the next few sections:

• Decomposability.
• Composability.
• Understandability.
• Continuity.
• Protection.

Modular decomposability

A software construction method satisfies Modular Decomposability if it helps in the task of decomposing a software problem into a small number of less complex subproblems, connected by a simple structure, and independent enough to allow further work to proceed separately on each of them.

The process will often be self-repeating since each subproblem may still be complex enough to require further decomposition.
A corollary of the decomposability requirement is division of labor: once you have decomposed a system into subsystems you should be able to distribute work on these subsystems among different people or groups. This is a difficult goal since it limits the dependencies that may exist between the subsystems:

• You must keep such dependencies to the bare minimum; otherwise the development of each subsystem would be limited by the pace of the work on the other subsystems.
• The dependencies must be known: if you fail to list all the relations between subsystems, you may at the end of the project get a set of software elements that appear to work individually but cannot be put together to produce a complete system satisfying the overall requirements of the original problem.

The most obvious example of a method meant to satisfy the decomposability criterion is top-down design. This method directs designers to start with a most abstract description of the system’s function, and then to refine this view through successive steps, decomposing each subsystem at each step into a small number of simpler subsystems, until all the remaining elements are of a sufficiently low level of abstraction to allow direct implementation. The process may be modeled as a tree.

A typical counter-example is any method encouraging you to include, in each software system that you produce, a global initialization module. Many modules in a system will need some kind of initialization — actions such as the opening of certain files or the initialization of certain variables, which the module must execute before it performs its first directly useful tasks. It may seem a good idea to concentrate all such actions, for all modules of the system, in a module that initializes everything for everybody. Such a module will exhibit good “temporal cohesion” in that all its actions are executed at the same stage of the system’s execution.
But to obtain this temporal cohesion the method would endanger the autonomy of modules: you will have to grant the initialization module authorization to access many separate data structures, belonging to the various modules of the system and requiring specific initialization actions. This means that the author of the initialization module will constantly have to peek into the internal data structures of the other modules, and interact with their authors. This is incompatible with the decomposability criterion. (The term “temporal cohesion” comes from the method known as structured design; see the bibliographical notes.) In the object-oriented method, every module will be responsible for the initialization of its own data structures.

(Figure: a top-down hierarchy — the topmost functional abstraction refined into subsystems through sequence, loop and conditional constructs. As discussed below, top-down design is not as well suited to the other modularity criteria.)
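The object-oriented remedy just mentioned — every module initializes its own data structures — can be sketched in a present-day language. The following Python fragment is only an illustration with invented class names, not the book's notation: each class performs its own initialization in its constructor, so no global initialization module needs authorization to peek into anyone else's internal data structures.

```python
# Each module (here: class) initializes its own data structures,
# instead of relying on a global "initialize everything" module.

class Logger:
    def __init__(self, path="app.log"):
        # The logger sets up its own state; no outside module needs
        # to know how it is represented internally.
        self.path = path
        self.entries = []          # stands in for an open log file

    def write(self, message):
        self.entries.append(message)

class Counter:
    def __init__(self):
        self.count = 0             # initialized here, not elsewhere

    def increment(self):
        self.count += 1

# No shared initialization step is needed before use:
log = Logger()
tally = Counter()
tally.increment()
log.write("incremented")
```

Because each object is ready as soon as it is created, the dependency that a global initialization module would introduce between otherwise unrelated modules simply never arises.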
Modular composability

A method satisfies Modular Composability if it favors the production of software elements which may then be freely combined with each other to produce new systems, possibly in an environment quite different from the one in which they were initially developed.

Where decomposability was concerned with the derivation of subsystems from overall systems, composability addresses the reverse process: extracting existing software elements from the context for which they were originally designed, so as to use them again in different contexts.

A modular design method should facilitate this process by yielding software elements that will be sufficiently autonomous — sufficiently independent from the immediate goal that led to their existence — as to make the extraction possible. Composability is directly connected with the goal of reusability: the aim is to find ways to design software elements performing well-defined tasks and usable in widely different contexts. This criterion reflects an old dream: transforming the software design process into a construction box activity, so that we would build programs by combining standard prefabricated elements.

• Example 1: subprogram libraries. Subprogram libraries are designed as sets of composable elements. One of the areas where they have been successful is numerical computation, which commonly relies on carefully designed subroutine libraries to solve problems of linear algebra, finite elements, differential equations etc.

• Example 2: Unix Shell conventions. Basic Unix commands operate on an input viewed as a sequential character stream, and produce an output with the same standard structure. This makes them potentially composable through the | operator of the command language (“shell”): A | B represents a program which will take A’s input, have A process it, send the output to B as input, and have it processed by B. This systematic convention favors the composability of software tools.

• Counter-example: preprocessors. A popular way to extend the facilities of programming languages, and sometimes to correct some of their deficiencies, is to
use “preprocessors” that accept an extended syntax as input and map it into the standard form of the language. Typical preprocessors for Fortran and C support graphical primitives, extended control structures or database operations. Usually, however, such extensions are not compatible; then you cannot combine two of the preprocessors, leading to such dilemmas as whether to use graphics or databases.

Composability is independent of decomposability. In fact, these criteria are often at odds. Top-down design, for example, which we saw as a technique favoring decomposability, tends to produce modules that are not easy to combine with modules coming from other sources. (The figure illustrating top-down design was on page 41.) This is because the method suggests developing each module to fulfill a specific requirement, corresponding to a subproblem obtained at some point in the refinement process. Such modules tend to be closely linked to the immediate context that led to their development, and unfit for adaptation to other contexts. The method provides neither hints towards making modules more general than immediately required, nor any incentives to do so; it helps neither avoid nor even just detect commonalities or redundancies between modules obtained in different parts of the hierarchy.

That composability and decomposability are both part of the requirements for a modular method reflects the inevitable mix of top-down and bottom-up reasoning — a complementarity that René Descartes had already noted almost four centuries ago, as shown by the contrasting two paragraphs of the Discourse extract at the beginning of part B.

Modular understandability

A method favors Modular Understandability if it helps produce software in which a human reader can understand each module without having to know the others, or, at worst, by having to examine only a few of the others.

The importance of this criterion follows from its influence on the maintenance process (see “ABOUT SOFTWARE MAINTENANCE”, 1.3, page 17). Most maintenance activities, whether of the noble or not-so-noble category, involve having to dig into existing software elements. A method can hardly be called modular if a reader of the software is unable to understand its elements separately.
This criterion, like the others, applies to the modules of a system description at any level: analysis, design, implementation.

• Counter-example: sequential dependencies. Assume some modules have been so designed that they will only function correctly if activated in a certain prescribed order; for example, B can only work properly if you execute it after A and before C, perhaps because they are meant for use in “piped” form as in the Unix notation encountered earlier:

A | B | C

Then it is probably hard to understand B without understanding A and C too.

In later chapters, the modular understandability criterion will help us address two important questions: how to document reusable components; and how to index reusable components so that software developers can retrieve them conveniently through queries. (See also, later in this chapter, “Self-Documentation”, page 54.) The criterion suggests that information about a component, useful for documentation or for retrieval, should whenever possible appear in the text of the component itself; tools for documentation, indexing or retrieval can then process the component to extract the needed pieces of information. Having the information included in each component is preferable to storing it elsewhere, for example in a database of information about components.

Modular continuity

A method satisfies Modular Continuity if, in the software architectures that it yields, a small change in a problem specification will trigger a change of just one module, or a small number of modules.

This criterion is directly connected to the general goal of extendibility (see “Extendibility”, page 6). As emphasized in an earlier chapter, change is an integral part of the software construction process. The requirements will almost inevitably change as the project progresses. Continuity means that small changes should affect individual modules in the structure of the system, rather than the structure itself.

The term “continuity” is drawn from an analogy with the notion of a continuous function in mathematical analysis. A mathematical function is continuous if (informally) a small change in the argument will yield a proportionally small change in the result. Here the function considered is the software construction method, which you can view as a mechanism for obtaining systems from specifications:

software_construction_method: Specification → System
This mathematical term only provides an analogy, since we lack formal notions of size for software. More precisely, it would be possible to define a generally acceptable measure of what constitutes a “small” or “large” change to a program; but doing the same for the specifications is more of a challenge. If we make no pretense of full rigor, however, the concepts should be intuitively clear and correspond to an essential requirement on any modular method.

• Example 1: symbolic constants. A sound style rule bars the instructions of a program from using any numerical or textual constant directly; instead, they rely on symbolic names, and the actual values only appear in a constant definition (constant in Pascal or Ada, preprocessor macros in C, PARAMETER in Fortran 77, constant attributes in the notation of this book). If the value changes, the only thing to update is the constant definition. This small but important rule is a wise precaution for continuity since constants, in spite of their name, are remarkably prone to change. (This will be one of our principles of style: Symbolic Constant Principle, page 884.)

• Example 2: the Uniform Access principle. Another rule states that a single notation should be available to obtain the features of an object, whether they are represented as data fields or computed on demand (see “Uniform Access”, page 55). This property is sufficiently important to warrant a separate discussion later in this chapter.

• Counter-example 1: using physical representations. A method in which program designs are patterned after the physical implementation of data will yield designs that are very sensitive to slight changes in the environment.

• Counter-example 2: static arrays. Languages such as Fortran or standard Pascal, which do not allow the declaration of arrays whose bounds will only be known at run time, make program evolution much harder.

Modular protection

A method satisfies Modular Protection if it yields architectures in which the effect of an abnormal condition occurring at run time in a module will remain confined to that module, or at worst will only propagate to a few neighboring modules.
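The Uniform Access principle of Example 2 above has a direct counterpart in some modern languages. The sketch below uses Python's property mechanism — an illustration of the idea, not the book's own notation, and the account classes are invented for the example. Clients write a.balance with the same syntax whether the balance is a stored field or is recomputed on demand, so switching representations does not ripple into client code: exactly the continuity the principle protects.

```python
class AccountStored:
    """Balance kept as a plain stored field."""
    def __init__(self):
        self.balance = 0.0

class AccountComputed:
    """Balance recomputed from the list of deposits on demand."""
    def __init__(self):
        self._deposits = []

    def deposit(self, amount):
        self._deposits.append(amount)

    @property
    def balance(self):
        # Computed on demand, yet clients still write `a.balance`,
        # exactly as they would for the stored representation.
        return sum(self._deposits)

a = AccountComputed()
a.deposit(100.0)
a.deposit(25.0)
print(a.balance)   # 125.0 — same notation as AccountStored
```

A client written against one representation keeps working unchanged against the other; only the class that owns the data needs to change when the representation does.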
46 MODULARITY §3.2

The underlying issue, that of failures and errors, is central to software engineering. The errors considered here are run-time errors, resulting from hardware failures, erroneous input or exhaustion of needed resources (for example memory storage). The criterion does not address the avoidance or correction of errors, but the aspect that is directly relevant to modularity: their propagation. (The question of how to handle abnormal cases is discussed in detail in chapter 12.)

• Example: validating input at the source. A method requiring that you make every module that inputs data also responsible for checking their validity is good for modular protection.

• Counter-example: undisciplined exceptions. Languages such as PL/I, CLU, Ada, C++ and Java support the notion of exception. An exception is a special signal that may be “raised” by a certain instruction and “handled” in another, possibly remote part of the system. When the exception is raised, control is transferred to the handler. (Details of the mechanism vary between languages; Ada and CLU are more disciplined in this respect than PL/I.) Such facilities make it possible to decouple the algorithms for normal cases from the processing of erroneous cases. But they must be used carefully to avoid hindering modular protection. The chapter on exceptions will investigate how to design a disciplined exception mechanism satisfying the criterion.

3.2 FIVE RULES

From the preceding criteria, five rules follow which we must observe to ensure modularity:

• Direct Mapping.
• Few Interfaces.
• Small Interfaces (weak coupling).
• Explicit Interfaces.
• Information Hiding.

The first rule addresses the connection between a software system and the external systems to which it is connected; the next four all address a common issue: how modules will communicate. Obtaining good modular architectures requires that communication occur in a controlled and disciplined way.
[Figure: Protection violation]

(More on this topic in “Assertions are not an input checking mechanism”, page 346; on exception handling, see chapter 12.)
§3.2 FIVE RULES 47

Direct Mapping

Any software system attempts to address the needs of some problem domain. If you have a good model for describing that domain, you will find it desirable to keep a clear correspondence (mapping) between the structure of the solution, as provided by the software, and the structure of the problem, as described by the model. Hence the first rule:

The modular structure devised in the process of building a software system should remain compatible with any modular structure devised in the process of modeling the problem domain.

This advice follows in particular from two of the modularity criteria:

• Continuity: keeping a trace of the problem’s modular structure in the solution’s structure will make it easier to assess and limit the impact of changes.
• Decomposability: if some work has already been done to analyze the modular structure of the problem domain, it may provide a good starting point for the modular decomposition of the software.

Few Interfaces

The Few Interfaces rule restricts the overall number of communication channels between modules in a software architecture:

Every module should communicate with as few others as possible.

Communication may occur between modules in a variety of ways. Modules may call each other (if they are procedures), share data structures, etc. The Few Interfaces rule limits the number of such connections.

[Figure: Types of module interconnection structures — (A), (B), (C)]

More precisely, if a system is composed of n modules, then the number of intermodule connections should remain much closer to the minimum, n – 1, shown as (A) in the figure, than to the maximum, n (n – 1) / 2, shown as (B).

This rule follows in particular from the criteria of continuity and protection: if there are too many relations between modules, then the effect of a change or of an error may
48 MODULARITY §3.2

propagate to a large number of modules. It is also connected to composability (if you want a module to be usable by itself in a new environment, then it should not depend on too many others), understandability and decomposability.

Case (A) in the last figure shows a way to reach the minimum number of links, n – 1, through an extremely centralized structure: one master module; everybody else talks to it and to it only. But there are also much more “egalitarian” structures, such as (C), which has almost the same number of links. In this scheme, every module just talks to its two immediate neighbors, but there is no central authority. Such a style of design is a little surprising at first since it does not conform to the traditional model of functional, top-down design. But it can yield robust, extendible architectures; this is the kind of structure that object-oriented techniques, properly applied, will tend to yield.

Small Interfaces

The Small Interfaces or “Weak Coupling” rule relates to the size of intermodule connections rather than to their number. An electrical engineer would say that the channels of communication between modules must be of limited bandwidth.

The Small Interfaces requirement follows in particular from the criteria of continuity and protection.

An extreme counter-example is a Fortran practice which some readers will recognize: the “garbage common block”. A common block in Fortran is a directive of the form

COMMON /common_name/ variable1, … variablen

indicating that the variables listed are accessible not just to the enclosing module but also to any other module which includes a COMMON directive with the same common_name. It is not infrequent to see Fortran systems in which every module includes an identical gigantic COMMON directive, listing all significant variables and arrays so that every module may directly use every piece of data.
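To make the contrast concrete, here is a small Python sketch (data and names invented for illustration): the first version mimics the “garbage common block” by letting a routine reach into one big shared blob; the second passes exactly the two values needed, keeping the interface small.

```python
# "Garbage common block" style: one shared blob visible to every module,
# so any routine may come to depend on any piece of data.
SHARED = {"price": 9.99, "qty": 3, "user": "ada", "log_level": 2}

def total_wide() -> float:
    # Coupled to the entire blob: any change to SHARED may affect this code.
    return SHARED["price"] * SHARED["qty"]

# Small interface: the routine receives only what it needs, so its coupling
# to the rest of the system is limited to two values.
def total_narrow(price: float, qty: int) -> float:
    return price * qty
```

Both compute the same result, but only the second can be understood, tested and reused without knowledge of the rest of the system’s data.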
If two modules communicate, they should exchange as little information as possible.

[Figure: Communication bandwidth between modules x, y]
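Returning to the Few Interfaces rule, the connection counts quoted there are easy to check; this sketch (a hypothetical illustration) computes the number of links for a centralized star like (A), a fully connected graph like (B), and a chain like (C), for n modules:

```python
def star_links(n: int) -> int:
    # (A) one master module connected to each of the others: n - 1 links.
    return n - 1

def complete_links(n: int) -> int:
    # (B) every pair of modules connected: n (n - 1) / 2 links.
    return n * (n - 1) // 2

def chain_links(n: int) -> int:
    # (C) each module talks only to its immediate neighbors; an open chain
    # has n - 1 links (closing it into a ring would give n, still close to
    # the minimum).
    return n - 1
```

For n = 8, (A) and (C) both use 7 links while (B) needs 28, which is why the rule asks that real systems stay near the (A)/(C) end of the spectrum.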