
The Electrical Engineering Handbook, study materials (English edition): Chapter 87 Programming



Feldman, J.M., Czeck, E.W., Lewis, T.G., and Martin, J.J. “Programming.” The Electrical Engineering Handbook. Ed. Richard C. Dorf. Boca Raton: CRC Press LLC, 2000.

87 Programming

James M. Feldman, Northeastern University
Edward W. Czeck, Northeastern University
Ted G. Lewis, Naval Postgraduate School
Johannes J. Martin, University of New Orleans

87.1 Assembly Language
     NumberCount( ) • Comparisons Down on the Factory Floor • Compiler Optimization and Assembly Language
87.2 High-Level Languages
     What Is a HLL? • How High Is a HLL? • HLLs and Paradigms
87.3 Data Types and Data Structures
     Abstract Data Types • Fundamental Data Types • Type Constructors • Dynamic Types • More Dynamic Data Types • Object-Oriented Programming

© 2000 by CRC Press LLC

87.1 Assembly Language

James M. Feldman and Edward W. Czeck

The true language of computers is a stream of 1s and 0s—bits. Everything in the computer, be it numbers or text or program, spreadsheet or database or 3-D rendering, is nothing but an array of bits. The meaning of the bits is in the “eye of the beholder”; it is determined entirely by context. Bits are not a useful medium for human consumption. Instead, we insist that what we read be formatted spatially and presented in a modest range of visually distinguishable characters. 0 and 1 arranged in a dense, page-filling array do not fulfill these requirements in any way. The several languages that are presented in this handbook are all intended to make something readable to two quite different readers. On the one hand, they serve the human reader with his/her requirements on symbols and layout; on the other, they provide a grammatically regular language for interpretation by a compiler. A compiler, of course, is normally a program running on a computer, but human beings can and sometimes do play both sides of this game. They want to play with both the input and output. Such accessibility requires that not only the input but the output of the compilation process be comfortably readable by humans. The language of the input is called a high-level language (HLL). Examples are C, Pascal, Ada, and Modula II. They are designed to express both regularly and concisely the kinds of operations and the kinds of constructs that programmers manipulate.

The output end of the compiler generates object code—a generally unreadable, binary representation of machine language, lacking only the services of a linker to turn it into true machine language. The language that has been constructed to represent object code for human consumption is assembly language. That is the subject of this section.

Some might object to our statement of purpose for assembly language. While few will contest the concept of assembly language as the readable form of object code, some see writing assembly code as the way to “get their hands on the inner workings of the machine.” They see it as a “control” issue. Since most HLLs today give the user reasonably direct ways to access hardware, where does the “control” issue arise? What assembly proponents see as the essential reason for having an assembly language is the option to optimize the “important” sections of a program by doing a better job of machine code generation than the compiler does. This perspective was valid enough when compilers were mediocre optimizers. It was not unlike the old days when a car came with a complete set of tools because you needed them. The same thing that has happened to cars has happened to compilers. They are engineered to be “fuel efficient” and perform their assigned functions with remarkable ability. When the cars or compilers get good enough and complex enough, the tinkerer may do more harm than good. IBM’s superscalar RISC computer—the RS6000—comes with superb compilers and no assembler at all. The Pentagon took a long look at their costs of programming their immense array of computers. Contrary to popular legend, they decided to save money. The First Amendment notwithstanding, their conclusion was: “Thou shalt not assemble.”

The four principal reasons for not writing assembly language are:

• Any sizable programming job gets done at least four times faster in a HLL.
• Most modern compilers are good optimizers of code; some are superb.
• Almost all important code goes through revisions—maintenance. Reworking old assembly code is similar to breaking good encryption; it takes forever.
• Most important of all is portability. To move any program to a different computer, you must generate machine code for that new platform. With a program in a HLL, a new platform is almost free; all it requires is another pass through the compiler for the target platform. With assembly code, you are back to square one. Assembly code is unique to the platform.

Given all of that, the question naturally arises: Why have an article on assembly language? We respond with two reasons, both of which we employ in our work as teachers and programmers:

• An essential ingredient in understanding computer hardware and in designing new computer systems and compilers is a detailed appreciation of the operations of central processing units (CPUs). These are best expressed in assembly language. Our undergraduate Computer Engineering courses include a healthy dose of assembly language programming for this specific reason.
• If you are concerned about either CPU design or compiler effectiveness, you have to be able to look in great detail at the interface between them—machine language.

As we have said, the easiest way to read machine language is by translating it to assembly language. This is one way to get assembly language, not by writing in it as a source of code but by running the object code itself through a backward translator called a disassembler. While many compilers will oblige you by providing an assembly listing if asked, often that listing does not include optimizations that occur only when the several modules are linked together, providing opportunities for truly global optimization. Some compilers “help” the reader by using macros (names for predefined blocks of code) in place of the real machine instructions and register assignments. The absence of the optimizations and the inclusion of unexpected macros can make the assembly listing almost useless for obtaining insight into the program’s fine detail. The compilers that we have used on the DECstations and SPARC machines do macro inclusion. To see what is really going on in these machines, you must disassemble the machine code. That is precisely what the Think C® compiler on the Macintosh does when you ask for machine code. It disassembles what it just did in compiling and linking the whole program. What you see is what is really there. The code we present for the 68000 was obtained in that way.

These are important applications. Even if most or all other programming needs can be better met in HLLs, these applications are sufficient reason for many engineers to want to know something about assembly language. There are other applications of assembly language, but they tend to be specific to rather specialized and infrequent tasks. For example, the back end of most HLL compilers is a machine code generator. To write one of those, you certainly must know something about assembly language. On rare occasions, you may find some necessary machine-specific transaction which is not supported by the HLL of choice or which requires some special micro optimization. A “patch” of assembly code is a way to fit this inexpressible thought into the program’s vocabulary. These are rare events.

The reason why we recommend to you this section on assembly code is that it improves your understanding of HLLs and of computer architecture. We will take a single subroutine which we express in C and look at the machine code that is generated on three representative machines. The machines include two widely used complex instruction set computers (CISCs) and one reduced instruction set computer (RISC). These are the 68000®, the VAX®, and a SPARC®. We will have two objectives:

• To see how a variety of paradigms in HLLs are translated (or, in other words, to see what is really going on when you ask for a particular HLL operation)
• To compare the several architectures to see how they are the same and how they differ

The routine attempts to get a count of the number of numbers which occur in a block of text. Since we are seeking numbers and not digits, the task is more complex than you might first assume. This is why we say “attempts.” The function that we present below handles all of the normal text forms:

• Integers, such as 123 or –17
• Numbers written in a fixed-point format, such as 12.3 or 0.1738
• Numbers written in a floating-point format, such as –12.7e+19 or 6.781E2

If our program were to scan the indented block above, it would report finding six numbers. The symbols that the program recognizes as potentially part of a number include the digits 0 to 9 and the symbols ‘e’, ‘E’, ‘.’, ‘–’, and ‘+’. Now it is certainly possible to include other symbols in legitimate numbers, such as HEX numbers or the like, but this little routine will not properly deal with them. Our purpose was not to handle all comers but to provide a routine with some variety of expression and possible application. Let us begin.

NumberCount( )

We enter the program at the top with one pointer passed from the calling routine and a set of local variables comprising two integers and eight Boolean variables. Most of the Boolean variables will be used in pairs. The first element of a pair, for instance, ees of ees and latche, indicates that the current character is one of a particular class of non-numeric characters which might be found inside a number. If you consider that the number begins at the first digit, then these characters can occur legally only once within a given number. ees will be set true if the current character is the first instance of either ‘e’ or ‘E’. The paired variable, latche, is set true if there has ever been one of those characters in the current number. The other pairs are period and latchp, and sign and latchs. There is also a pair of Booleans which indicate if the current character is a digit and if the scanner is currently inside a number. Were you to limit your numbers to integers, these two are the only Booleans which would be needed. At the top of the program, all Booleans are reset (made FALSE). Then we step through the block looking for numbers. The search stops when we encounter the first null [char(0)] marking the end of the block. Try running through the routine with text containing the three forms of number. You will quickly convince yourself that the routine works with all normal numbers. If someone writes “3..14” or “3.14ee6”, the program will count 2 numbers. That is probably right in the first two cases. Who knows in the third? Let us look at this short routine in C.

    #define blk_length 20001

    int NumberCount(char block[])
    {
        int count=0, inside=0, digit;
        int ees=0, latche=0, latchp=0, period=0, latchs=0, sign=0;
        char *source;

        source = block;
        do {
            digit  = (*source >= '0') && (*source <= '9');
            period = (*source == '.') && inside && !latchp && !latche;
            latchp = (latchp || period);
            ees    = ((*source == 'E') || (*source == 'e')) && inside && !latche;
            latche = (latche || ees);
            sign   = ((*source == '+') || (*source == '-')) && inside && latche && !latchs;
            latchs = (latchs || sign);
            if (inside) {
                if (!(digit || ees || period || sign))
                    inside = latchp = latche = latchs = 0;
            }
            else if (digit) {
                count++;
                inside = 1;
            }
            source++;
        } while ((*source != '\0') && ((source - block) < blk_length + 1));
        return count;
    }

To access values within the character array, the normal C paradigm is to step a pointer along the array. source points at the current character in the array; *source is the character (“what source points at”). source is initialized at the top of the program before the loop (source = block;) and incremented (source++;) at the bottom of the loop. Note the many repetitions of *source. Each one means the same current character. If you read that expression as “the character which source is pointing to,” it looks like an invitation to fetch the same character from memory eight times. A compiler that optimizes by removing common subexpressions should eliminate all but the first such fetch. This optimization is one of the things that we want to look for.

For those less familiar with C, the meanings of the less familiar symbols are:

    ==        equal (in the logical sense)
    !         not
    !=        not equal
    &&        and
    ||        or
    count++   increment count by 1 unit (after using it)

C uses 0 as FALSE and anything else as TRUE.

Comparisons Down on the Factory Floor

Now let us see what we can learn by running this program through compilers on several quite different hosts. The items that we wish to examine include:

I. Subroutine operations comprising:
   A. Building the call block
   B. The call itself
   C. Obtaining memory space for local variables
   D. Accessing the call block
   E. Returning the function value
   F. Returning to the calling routine
II. Data operations
   A. Load and store
   B. Arithmetic
   C. Logical
   D. Text
III. Program control
   A. Looping
   B. if and the issue of multiple tests

Our objectives are to build three quite different pictures:

• An appreciation for the operations underlying the HLL statements
• An overview of the architectures of several important examples of CISC and RISC processors
• An appreciation for what a HLL optimizer should be doing for you

We will attempt to do all three all of the time.

Let us begin with the calling operations. Our first machine will be the MC68000, one of the classical and widely available CISC processors. It or one of its progeny is found in many machines and forms the heart of the Macintosh(not the Power Mac)and the early Sun workstations. Programmatically, the 68000 family shares a great deal with the very popular VAX family of processors. Both of these CISC designs derive in rather linear fashion from DEC's PDP-11 machines that were so widely used in the 1970s. Comparisons to that style of machine will be done with the SPARC, a RISC processor found in Sun, Solbourne, and other workstations Memory and Registers All computers will have data stored in memory and <H Little endian some space in the CPU for manipulating data Memory can be considered to be a long list of bytes(8-bit data Big endian-D 00 011011 blocks)with addresses(locations in the list)spanning 000xx ome large range of numbers from 0 to typically 4 billion(4 GB). The memory is constructed physically 100xx by grouping chips so that they appear to form enor wn in Fig. 87. Since each column can deliver one byte on each request, the number of adjacent columns determines the number of bytes which may be obtained from a gle request. Machines today have 1, 2, 4, or 8 sucl columns.(Some machines, the 68000 being our cur- FIGURE 87. 1 Memory arranged as 4 columns of bytes. The rent example, have only 2 columns but arrange to have binary addresses are shown in the two formats widely used the CPU ask for two successive transfers to get a total in computers. The illustration shows only 32 bytes in a 4 of 4 bytes. In general, the CPU may manipulate 38 array, but a more realistic span would be 4 3 1,000,000 gle step a datum as wide as the memory. For all of or 4 3 4,000,000(4 MB to 16 MB) the machines which we will consider that maximum datum size is 32 bits or 4 bytes. 
While convention would have us call this biggest datum a word, historical reason has led both the vAX and MC68000 to call it a longword. Then, 2 bytes is either a halfword or a word. We will use the VAX/68000 notation(longword, word, and byte) wherever possible to simplify the reading. To load data from memory, the CPU sends the address and the datum size to the memory and gets the datum as the reply. To store data, the address is sent and then the datum and datum size. Some machines require that the datum be properly aligned with the stacking order of the memory columns in Fig. 87. 1. Thus, on the SPARC, a longword must have an address ending in 00(xxx00 in Fig. 87. 1), and a word address must end in 0. The programmer who arranges to violate this rule will be greeted with an address error. Since the MC68000 has only two columns, it complains only if you ask for words or longwords with odd addresses. Successor models of that chip(68020, 30, and 40), like the VAX, accept any address and have the CPU read two longwords and do the proper repacking The instruction explicitly specifies the size and indicates how the CPu should calculate the address. An instruction to load a byte, for example, is LB, MOVE. B, or MOVB on the SPARC, MC68000, and VAX, e pectively. These are followed immediately by an expression which specifies an address. We will discuss how pecify an address later. First, we must introduce the concept of a register The space for holding data and working on it in the CPU is the register set. Registers are a very important esource. Bringing data in from memory is quite separate from any operations on that data. Data in memory must first be fetched, then acted upon. Data in registers can be acted on immediately. Thus, the availability of registers to store very active variables and intermediate results makes a processor inherently faster. In some nachines, most or all of the registers are tied to specific uses. 
The most prevalent example would be Intels 80x86 processors, which power the ubiquitous PC. Such architectures, however, are considered quite old fashioned. All of the machines that we are considering are of a type called general register machines in that they have a large group of registers which may be used for any purpose. The machines that we include have either 16or32 Table 87.1 shows the general register resources in the three machines. The SPARC is a little strange. The machine provides eight global registers and then a window blind of 128 registers which sits behind a frame e 2000 by CRC Press LLC

© 2000 by CRC Press LLC Let us begin with the calling operations. Our first machine will be the MC68000, one of the classical and widely available CISC processors. It or one of its progeny is found in many machines and forms the heart of the Macintosh (not the PowerMac) and the early Sun workstations. Programmatically, the 68000 family shares a great deal with the very popular VAX family of processors. Both of these CISC designs derive in rather linear fashion from DEC’s PDP-11 machines that were so widely used in the 1970s. Comparisons to that style of machine will be done with the SPARC, a RISC processor found in Sun, Solbourne, and other workstations. Memory and Registers All computers will have data stored in memory and some space in the CPU for manipulating data. Memory can be considered to be a long list of bytes (8-bit data blocks) with addresses (locations in the list) spanning some large range of numbers from 0 to typically 4 billion (4 GB). The memory is constructed physically by grouping chips so that they appear to form enor￾mously deep columns of bytes, as shown in Fig. 87.1. Since each column can deliver one byte on each request, the number of adjacent columns determines the number of bytes which may be obtained from a single request. Machines today have 1, 2, 4, or 8 such columns. (Some machines, the 68000 being our cur￾rent example, have only 2 columns but arrange to have the CPU ask for two successive transfers to get a total of 4 bytes.) In general, the CPU may manipulate in a single step a datum as wide as the memory. For all of the machines which we will consider, that maximum datum size is 32 bits or 4 bytes. While convention would have us call this biggest datum a word, historical reason has led both the VAX and MC68000 to call it a longword. Then, 2 bytes is either a halfword or a word. We will use the VAX/68000 notation (longword, word, and byte) wherever possible to simplify the reading. 
To load data from memory, the CPU sends the address and the datum size to the memory and gets the datum as the reply. To store data, the address is sent and then the datum and datum size. Some machines require that the datum be properly aligned with the stacking order of the memory columns in Fig. 87.1. Thus, on the SPARC, a longword must have an address ending in 00 (xxx00 in Fig. 87.1), and a word address must end in 0. The programmer who arranges to violate this rule will be greeted with an address error. Since the MC68000 has only two columns, it complains only if you ask for words or longwords with odd addresses. Successor models of that chip (68020, 30, and 40), like the VAX, accept any address and have the CPU read two longwords and do the proper repacking.

The instruction explicitly specifies the size and indicates how the CPU should calculate the address. An instruction to load a byte, for example, is LB, MOVE.B, or MOVB on the SPARC, MC68000, and VAX, respectively. These are followed immediately by an expression which specifies an address. We will discuss how to specify an address later. First, we must introduce the concept of a register.

The space for holding data and working on it in the CPU is the register set. Registers are a very important resource. Bringing data in from memory is quite separate from any operations on that data. Data in memory must first be fetched, then acted upon. Data in registers can be acted on immediately. Thus, the availability of registers to store very active variables and intermediate results makes a processor inherently faster. In some machines, most or all of the registers are tied to specific uses. The most prevalent example would be Intel's 80x86 processors, which power the ubiquitous PC. Such architectures, however, are considered quite old-fashioned. All of the machines that we are considering are of a type called general register machines in that they have a large group of registers which may be used for any purpose.
FIGURE 87.1 Memory arranged as 4 columns of bytes. The binary addresses are shown in the two formats widely used in computers. The illustration shows only 32 bytes in a 4 × 8 array, but a more realistic span would be 4 × 1,000,000 or 4 × 4,000,000 (4 MB to 16 MB).

The machines that we include have either 16 or 32 registers, with only a few tied to specific machine operations. Table 87.1 shows the general register resources in the three machines. The SPARC is a little strange. The machine provides eight global registers and then a window blind of 128 registers which sits behind a frame


which exposes 24 of the 128. A program can ask the machine to raise or lower the blind by 16 registers. That leaves an overlap of eight between successive yanks or rewinds. This arrangement is called a multiple overlapping register set (MORS). If you think of starting with register r8 at the bottom and r31 at the top, a yank of 16 on the blind will now have r49 at the top and r24 at the bottom. r24 to r31 are shared between the old set and the new. To avoid having to keep track of which registers are showing, the set of 24 are divided into what came in from the last set, those that are only local, and those that will go out to the next set. These names apply to going toward increasing numbers. In going the other direction, the ins of the current set will become the outs of the next set. Almost all other machines keep their registers screwed down to the local masonry, but you will see in a moment how useful a MORS can be. (Like other useful but expensive accessories, the debate is always on whether it is worth it [Patterson and Hennessy, 1989].)

Stack. Most subroutines define a number of local variables. NumberCount in C, for example, defines 10 local variables. While these local variables will often be created and kept in register, there is always some need for a bit of memory for each invocation of (call to) a subroutine. In the "good old days," this local storage was often tied to the block of code comprising the subroutine. However, such a fixed block means that a subroutine could never call itself or be called by something that it called. To avoid that problem (and for other purposes) a memory structure called a stack was invented which got its name because it behaved like the spring-loaded plate stack in a restaurant. Basically, it is a last-in-first-out (LIFO) structure whose top is defined by a pointer (address) which resides in a register commonly called the stack pointer or SP.

Heap.
When a subroutine needs space to store local variables, it acquires that space on the stack. When the subroutine finishes, it returns that stack space for use by other routines. Thus, local variable allocations live and die with their subroutines. It is often necessary to create a data structure which is passed to other routines whose lives are independent of the creating routine. This kind of storage must be independent of the creator. To meet this need, the heap was invented. This is an expandable storage area managed by the system. You get an allocation by asking for it [malloc(structure_size) in C]. You get back a pointer to the allocation and the routine can pass that pointer to any other routine and then go away. When it comes time to dispose of the allocation—that is, return the space for other uses—the program must do that actively by a deallocation call [free(pointer) in C]. Thus, one function can create a structure, several may use it, and another one can return the memory for other uses, all by passing the pointer to the structure from one to another.

Both heap and stack provide a mechanism to obtain large (or small) amounts of storage dynamically. Thus, large structures which are created only at run time need not have static space stored for them in programs that are stored on disk nor need they occupy great chunks of memory when the program does not need them. Dynamic allocation is very useful and all modern HLLs provide for it. Since there are two types of dynamic storage, there must be some way to lay out memory so that unpredictable needs in either stack or heap can be met at all times. The mechanism is simplicity itself. The program is stuffed into low addresses in memory along with any static storage (e.g., globals) which are declared in the program. The entire remaining space is then devoted to dynamic storage.
TABLE 87.1 General Registers in the Three Machines

MC68000: 16 registers, 1 special. Names: D0..D7, A0..A7. A(ddress) register operations are 32 bits wide. Address generation uses A registers as bases. D (data) registers allow byte, word, and longword operations. A7 is SP.

VAX: 16 registers, 4 special. Names: r0..r11, AP, FP, SP, PC. AP, FP, SP and PC hold the addresses of the argument block, the frame, the stack and the current place in the program, respectively. All data instructions can use any register.

SPARC: 32 (136) registers, 4 special. Names: zero, g1..g7, i0..i5, FP, RA, l0..l7, o0..o5, SP, o7. The 4 groups of eight registers comprise: global (g), incoming parameters (i), local (l) and outgoing parameters (o). g0 is a hardwired 0 as a data source and a wastebasket as a destination. The registers are arranged as a window blind (see text) with the g's always visible and the others moveable in multiple overlapping frames of 24.

Note: The special registers are within the set of general registers. Where a PC is not listed, it exists as a special register and can be used as an address when the program uses program-relative addressing.

The heap starts right after the program and


grows toward higher addresses; the stack goes in at the top of memory and grows down. The system is responsible to see that they never collide (a stack crash). When it all goes together, it looks like Fig. 87.2 [Aho et al., 1986].

There is one last tidbit that an assembly programmer must be aware of in looking at memory. Just as some human alphabets are written left to right and some right to left (not to mention top to bottom), computer manufacturers have chosen to disagree on how to arrange words in memory. The two schemes are called big-endian and little-endian (after which end of a number goes in the lowest-numbered byte and also after a marvelous episode in Gulliver's Travels). The easiest way to perceive how it is done in the two systems is to think of all numbers as being written in conventional order (left to right), but for big-endian you start counting on the upper left of the page and on little-endian you start counting on the upper right (see Fig. 87.1). Since each character in a text block is a number of length 1 byte, this easy description makes big-endian text read in normal order (left to right) but little-endian text reads from right to left. Figure 87.3 shows the sentence "This is a sentence." followed by the two hexadecimal (HEX) numbers 01020304 and 0A0B0C0D written to consecutive bytes in the two systems.

Why must we bring this up? Because anyone working in assembly language must know how the bytes are arranged. Furthermore, two of the systems we are considering are big-endian and one (the VAX) is little-endian. Which is the better system? Either one. It is having both of them that is a nuisance. As you look at Fig. 87.3, undoubtedly you will prefer big-endian, but that is only because it appeals to your prejudices. In truth, either works well. What is important is that you be able to direct your program to go fetch the item of choice. In both systems, you use the lowest-numbered byte to indicate the item of choice.
Thus, for the number 01020304, the address will be 13. For the big-endian system, 13 will point to the byte containing 01 and for the little-endian system, it will point at the byte containing 04.

Figure 87.3 contains a problem for some computers which we alluded to in the discussion of Fig. 87.1. We have arranged the bytes to be four in a row as in Fig. 87.1. That is the way that the memory is arranged in two of our three machines. (In the 68000, there are only two columns.) A good way to look at the fetch operation is that the memory always delivers a whole row and then the processor must acquire the parts that it wants and then properly arrange them. (This is the effect if not always the method.) Some processors—the VAX being a conspicuous example—are willing to undertake getting a longword by fetching two longwords and then piecing together the parts that it wants. Others (in our case, the 68000 and the SPARC) are not so accommodating. Those machines opt for simplicity and speed and require that the program keep its data aligned. To use one of those machines, you (or the compiler or assembler) must rearrange Fig. 87.3 by inserting a null byte at the end of the string. This modification is shown in Fig. 87.4. With this modification, all three machines could fetch the two numbers in one operation without rearrangement.

Look closely at the numbers 01020304 and 0A0B0C0D in Fig. 87.4. Notice that for both configurations, the numbers read from left to right and that (visually) they appear to be in the same place. Furthermore, as pointed out in the discussion of Fig. 87.3, the "beginning" or address of each of the numbers is identical. However, the byte that is pointed at by the address is not the same and the internal bytes do not have the same addresses. Getting big-endian and little-endian machines in a conversation is not easy. It proves to be even more muddled than these figures suggest. A delightful and cogent discussion of the whole issue is found in Cohen [1981].
FIGURE 87.2 Layout of a program, static storage, and dynamic storage in memory.

The principal objective in this whole section has been accomplished if, looking at Fig. 87.4 and given the command to load a byte from location 0000 0019, you get the number 0B in the big-endian machine and 0C in the little-endian machine.

If you are not already familiar with storing structures in memory, look at the string (sentence) and ask how those letters get in memory. To begin with, every typeable symbol and all of the unprintable actions such as tabbing and carriage returns have been assigned a numerical value from the ASCII code. Each assignment is a byte-long number. What "This" really looks like (HEX, left to right) is 54 68 69 73. The spaces are HEX 20; the period 2E. With the alignment null byte at the end, this list of characters forms a proper C string. It is a structure of 20 bytes. A structure of any number of bytes can be stored, but from the assembly point of view,


it is all just a list of bytes. You may access them two at a time, four at a time, or one at a time. Any interpretation of those bytes is entirely up to the program. Unlike the HLL which requires that you tell it what each named variable is, assembly language knows only bytes and groups of bytes. In assembly language, the "T" can be thought of as a letter or the number 54 (HEX). Your choice. Or, more importantly, your program's choice.

Addressing

Now that we have both memory and addresses, we should next consider how these processors require that programmers specify the data that is to be acted upon by the instructions. All of these machines have multiple modes of address. The VAX has the biggest vocabulary; the SPARC the smallest. Yet all can accomplish the same tasks. Four general types of address specification are quite common among assembly languages. These are shown in Table 87.2. They are spelled out in words in the table, but their usage is really developed in the examples which follow in this and the succeeding sections.

In Table 87.2, formats 1.4 and 1.5 and the entries in 4 require some expansion. The others will be clear in the examples we will present. Base-index addressing is the mechanism for dealing with subscripts. The base points at the starting point of a data structure, such as a string or a vector; the index measures the offset from the start of the structure to the element in question. For most machines, the index is simply a separate register which counts the bytes from the base to the item in question. If the items in the list are 4 bytes long, then to increment the index, you add 4. While that is not hard to remember, the VAX does its multiplication by the item length for you. Furthermore, it allows you to index any form of address that you can write. To show you what that means, consider expanding numbers stored in words into numbers stored in longwords. The extension is to preserve sign.
FIGURE 87.3 Byte numbering and number placement for big- and little-endian systems. Hexadecimal numbers are used for the memory addresses.

FIGURE 87.4 The same items as in Fig. 87.3, but with justification of the long integers to begin on a longword boundary.

The VAX provides specific instructions for conversions. If we were moving these words in one array to longwords in another array, we would write:


CVTWL (r4)[r5],(r6)[r5]  ;convert the words starting at (r4) to longwords starting at (r6)

Note that the same index, [r5], is used for both arrays. On the left, the contents of r5 are multiplied by 2 and added to r4 to get the address; on the right, the address is r5*4+r6. You would be saying: "Convert the 4th word to the 4th longword." This is undoubtedly compact and sometimes convenient. It is also unique to the VAX. For the 68000, the designers folded both base-displacement and base-index into one mode and made room for word or longword indices. It looks like:

add.l 64(A3,D2.w),D3  ;address = (A3+64) + sign-extended(D2)

The 68000 limits the displacement to a signed byte, but other than that, it is indeed a rather general indexing format. If you do not want the displacement, set it to 0. For the powerful but simple SPARC, the simple base-index form shown in 1.4 is all that you have (or need).

The double-indirect format, 1.5, is so rarely used that it has been left out of almost all designs but the VAX. What makes it occasionally useful is that subroutines get pointers to "pass by pointer" variables. Thus, if you want to get the variable, first you must load the address and then the variable. The VAX allows you to do this in one instruction. While that sounds compact, it is expensive in memory cycles. If you want to use that pointer again, it pays to have it in register.

The two items under heading 4 are strange at first. Their principal function is adding items to and removing them from a dynamic stack, or for C, to execute the operation *X++ or *(--X). The action may be viewed with the code below and the illustration of memory in Fig. 87.2:

movl r4, -(sp)  ;make room on the stack (subtract 4 from SP) and put the contents of r4 in that spot
movl (sp)+, r4  ;take a longword off the stack, shorten the stack by 4 bytes, and put the longword in r4

RISCs abhor instructions which do two unrelated things.
Instead of using a dynamic stack, they use a quasi-static stack. If a subroutine needs 12 bytes of stack space, it explicitly subtracts 12 from SP. Then it works from there with the base-displacement format (1.3) to reference any place in the block of bytes just defined. If you want to use a pointer and then increment the pointer, RISCs will do that as two independent instructions. Let us consider one short section of MC68000 code from our sample program in C to see how these modes work and to sample some of the flavor of the language:

TABLE 87.2  Addressing Modes

1. Explicit addresses           Example
   1.1. Absolute addressing     765       The actual address written into the instruction.
   1.2. Register indirect       (r3)      Meaning "the address is in register 3."
   1.3. Base-displacement       -12(r3)   Meaning "12 bytes before the address in register 3."
   1.4. Base-index              (r3,r4)   Meaning make an address by adding the contents of r3
                                          and r4. This mode has many variations which are
                                          discussed below.
   1.5. Double indirect         @5(r4)    Very uncommon! Means calculate an address as in 1.3,
                                          then fetch the longword there, and then use it as the
                                          address of what you really want.
2. Direct data specification
   2.1. Immediate/literal       #6 or 6   Meaning "use 6 as the datum." In machines which use
                                          #6, 6 without # means address 6. This is called
                                          "absolute addressing."
3. Program-relative
   3.1. Labels                  loop:     The label (typically an alphanumeric ending in a colon)
                                          is a marker in the program which the assembler and
                                          linker keep track of. The common uses are to jump to a
                                          labeled spot or to load labeled constants stored with
                                          the program.
4. Address-modifying forms (CISC only)
   4.1. Postincrement           (sp)+     Same as 1.2 except that, after the address is used, it
                                          is incremented by the size of the datum in bytes and
                                          returned to the register from which it came.
   4.2. Predecrement            -(sp)     The value in SP is decremented by the size of the
                                          datum in bytes, used as the address, and returned to
                                          the register from which it came.
