
Tranter, W.H. and Kosbar, K.L. "Computer-Aided Design and Analysis of Communication Systems." The Electrical Engineering Handbook, Ed. Richard C. Dorf. Boca Raton: CRC Press LLC, 2000.


© 2000 by CRC Press LLC

78 Computer-Aided Design and Analysis of Communication Systems

William H. Tranter, University of Missouri–Rolla
Kurt L. Kosbar, University of Missouri–Rolla

78.1 Introduction
78.2 The Role of Simulation
78.3 Motivation for the Use of Simulation
78.4 Limitations of Simulation
78.5 Simulation Structure
78.6 The Interdisciplinary Nature of Simulation
78.7 Model Design
78.8 Low-Pass Models
78.9 Pseudorandom Signal and Noise Generators
78.10 Transmitter, Channel, and Receiver Modeling
78.11 Symbol Error Rate Estimation
78.12 Validation of Simulation Results
78.13 A Simple Example Illustrating Simulation Products
78.14 Conclusions

78.1 Introduction

It should be clear from the preceding chapters that communication systems exist to perform a wide variety of tasks. The demands placed on today's communication systems necessitate higher data rates, greater flexibility, and increased reliability. Communication systems are therefore becoming increasingly complex, and the resulting systems cannot usually be analyzed using traditional (pencil and paper) analysis techniques. In addition, communication systems often operate in complicated environments that are not analytically tractable. Examples include channels that exhibit severe bandlimiting, multipath, fading, interference, non-Gaussian noise, and perhaps even burst noise. The combination of a complex system and a complex environment makes the design and analysis of these communication systems a formidable task. Some level of computer assistance must usually be invoked in both the design and analysis process. The appropriate level of computer assistance can range from simply using numerical techniques to solve a differential equation defining an element or subsystem to developing a computer simulation of the end-to-end communication system.

There is another important reason for the current popularity of computer-aided analysis and simulation techniques: it is now practical to make extensive use of these techniques. The computing power of many personal computers and workstations available today exceeds the capabilities of many large mainframe computers of only a decade ago. The low cost of these computing resources makes them widely available. As a result, significant computing resources are available to the communications engineer within the office or even the home environment.


Personal computers and workstations tend to be resources dedicated to a specific individual or project. Since the communications engineer working at his or her desk has control over the computing resource, lengthy simulations can be performed without interfering with the work of others. Over the past few years a number of software packages have been developed that allow complex communication systems to be simulated with relative ease [Shanmugan, 1988]. The best of these packages contain a wide variety of subsystem models as well as integrated graphics packages that allow waveforms, spectra, histograms, and performance characteristics to be displayed without leaving the simulation environment. For those motivated to generate their own simulation code, the widespread availability of high-quality C, Pascal, and FORTRAN compilers makes it possible for large application-specific simulation programs to be developed for personal computers and workstations. When computing tools are both available and convenient to use, they will be employed in the day-to-day efforts of system analysts and designers.

The purpose of this chapter is to provide a brief introduction to the subject of computer-aided design and analysis of communication systems. Since computer-aided design and analysis almost always involves some level of simulation, we focus our discussion on the important subject of the simulation of communication systems. Computer simulations can, of course, never replace a skilled engineer, although they can be a tremendous help in both the design and analysis process. The most powerful simulation program cannot solve all the problems that arise, and the process of making trade-off decisions will always be based on experience. In addition, evaluating and interpreting the results of a complex simulation require considerable skill and insight. While these remarks seem obvious, as computer-aided techniques become more powerful, one is tempted to replace experience and insight with computing power.

78.2 The Role of Simulation

The main purposes of simulation are to help us understand the operation of a complex communication system, to determine acceptable or optimum parameters for implementation of a system, and to determine the performance of a communication system. There are basically two types of systems in which communication engineers have interest: communication links and communication networks.

A communication link is usually a single source, a single user, and the components and channel between source and user. A typical link architecture is shown in Fig. 78.1. The important performance parameter in a digital communication link is typically the reliability of the communication link as measured by the symbol or bit error rate (BER). In an analog communication link the performance parameter of interest is typically the signal-to-noise ratio (SNR) at the receiver input or the mean-square error of the receiver output. The simulation is usually performed to determine the effect of system parameters, such as filter bandwidths or code rate, or to determine the effect of environmental parameters, such as noise levels, noise statistics, or power spectral densities.

FIGURE 78.1 Basic communication link.

A communication network is a collection of communication links with many signal sources and many users. Computer simulation programs for networks often deal with problems of routing, flow and congestion control, and the network delay. While this chapter deals with the communication link, the reader is reminded that network simulation is also an important area of study. The simulation methodologies used for communication networks are different from those used on links because, in a communication link simulation, each waveform present in the system is sampled using a constant sampling frequency. In contrast, network simulations are event-driven, with the important events being such quantities as the time of arrival of a message.
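The BER measurement described above is usually obtained by Monte Carlo counting of decision errors. The following sketch assumes a BPSK source over an AWGN channel; the function name, parameter values, and modulation choice are illustrative assumptions, not details from the handbook:

```python
import numpy as np

def estimate_ber(ebn0_db, num_bits=100_000, seed=1):
    """Monte Carlo BER estimate for BPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, num_bits)       # random source bits
    symbols = 1.0 - 2.0 * bits                # BPSK mapping: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise_std = np.sqrt(1.0 / (2.0 * ebn0))   # unit symbol energy assumed
    received = symbols + noise_std * rng.standard_normal(num_bits)
    decisions = (received < 0).astype(int)    # threshold detector
    return np.mean(decisions != bits)

# Theory gives Q(sqrt(2*Eb/N0)) for BPSK, about 2.4e-3 at Eb/N0 = 6 dB;
# the Monte Carlo estimate should fall near that value.
print(estimate_ber(6.0))
```

Note how the run length limits the measurable error rate: with 10^5 bits, a BER of 10^-5 would produce only about one error, which is the run-time limitation discussed in Section 78.4.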


Simulations can be developed to investigate either transient phenomena or steady-state properties of a system. The study of the acquisition time of a phase-lock loop receiver is an example of a transient phenomenon. Simulations that are performed to study transient behavior often focus on a single subsystem such as a receiver synchronization system. Simulations that are developed to study steady-state behavior often model the entire system. An example is a simulation to determine the BER of a system.

78.3 Motivation for the Use of Simulation

As mentioned previously, simulation is a reasonable approach to many design and analysis problems because complex problems demand that computer-based techniques be used to support traditional analytical approaches. There are many other motivations for making use of simulation.

A carefully developed simulation is much like having a breadboard implementation of the communication system available for study. Experiments can be performed using the simulation much like experiments can be performed using hardware. System parameters can be easily changed, and the impact of these changes can be evaluated. By continuing this process, parametric studies can easily be conducted and acceptable, or perhaps even optimum, parameter values can be determined. By changing parameters, or even the system topology, one can play "what if" games much more quickly and economically using a simulation than with a system realized in hardware.

It is often overlooked that simulation can be used to support analysis. Many people incorrectly view simulation as a tool to be used only when a system becomes too complex to be analyzed using traditional analysis techniques. Used properly, simulation goes hand in hand with traditional techniques in that simulation can often be used to guide analysis. A properly developed simulation provides insight into system operation. As an example, if a system has many parameters, these can be varied in a way that allows the most important parameters, in terms of system performance, to be identified. The least important parameters can then often be discarded, with the result being a simpler system that is more tractable analytically. Analysis also aids simulation. The development of an accurate and efficient simulation is often dependent upon a careful analysis of various portions of the system.

78.4 Limitations of Simulation

Simulation, useful as it is, does have limitations. It must be remembered that a system simulation is an approximation to the actual system under study. The nature of the approximations must be understood if one is to have confidence in the simulation results. The accuracy of the simulation is limited by the accuracy to which the various components and subsystems within the system are modeled. It is often necessary to collect extensive experimental data on system components to ensure that simulation models accurately reflect the behavior of the components. Even if this step is done with care, one can only trust the simulation model over the range of values consistent with the previously collected experimental data. A main source of error in a simulation arises when models are used at operating points beyond which the models are valid.

In addition to modeling difficulties, it should be realized that the digital simulation of a system can seldom be made perfectly consistent with the actual system under study. The simulation is affected by phenomena not present in the actual system. Examples are the aliasing errors resulting from the sampling operation and the finite word length (quantization) effects present in the simulation. Practical communication systems use a number of filters, and modeling the analog filters present in the actual system by the digital filters required by the simulation involves a number of approximations. The assumptions and approximations used in modeling an analog filter using impulse-invariant digital filter synthesis techniques are quite different from the assumptions and approximations used in bilinear z-transform techniques. Determining the appropriate modeling technique requires careful thought.

Another limitation of simulation lies in the excessive computer run time that is often necessary for estimating performance parameters. An example is the estimation of the system BER for systems having very low nominal bit error rates. We will expand on this topic later in this chapter.
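The contrast between the two filter-modeling techniques mentioned above can be made concrete. The sketch below, assuming SciPy is available, discretizes the same first-order analog low-pass prototype both ways (the 1 kHz cutoff and 8 kHz sampling rate are arbitrary illustrative choices, not values from the text); impulse invariance samples the analog impulse response and therefore aliases near the Nyquist frequency, while the bilinear transform warps the frequency axis but is alias-free:

```python
import numpy as np
from scipy import signal

# Analog prototype: first-order low-pass H(s) = wc / (s + wc)
wc = 2 * np.pi * 1000.0          # assumed 1 kHz cutoff (rad/s)
fs = 8000.0                      # assumed simulation sampling rate (Hz)
b_a, a_a = [wc], [1.0, wc]

# Impulse-invariant discretization (samples the analog impulse response)
num_ii, den_ii, _ = signal.cont2discrete((b_a, a_a), 1.0 / fs, method="impulse")
num_ii = np.ravel(num_ii)        # cont2discrete returns a 2-D numerator

# Bilinear z-transform discretization (no aliasing; frequency axis is warped)
num_bl, den_bl = signal.bilinear(b_a, a_a, fs)

# Compare magnitude responses from DC up to the Nyquist frequency
w, h_ii = signal.freqz(num_ii, den_ii, worN=512, fs=fs)
_, h_bl = signal.freqz(num_bl, den_bl, worN=512, fs=fs)
```

The bilinear model matches the analog DC gain exactly and rolls off to zero at Nyquist, whereas the impulse-invariant model retains a substantial aliased response there; which behavior is acceptable depends on the system being modeled, which is exactly the "careful thought" the text calls for.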


78.5 Simulation Structure

As illustrated in Fig. 78.1, a communication system is a collection of subsystems such that the overall system provides a reliable path for information flow from source to user. In a computer simulation of the system, the individual subsystems must first be accurately modeled by signal processing operations. The overall simulation program is a collection of these signal processing operations and must accurately model the overall communication system. The important subject of subsystem modeling will be treated in a following section.

The first step in the development of a simulation program is to define the topology of the system, which specifies the manner in which the individual subsystems are connected. The subsystem models must then be defined by specifying the signal processing operation to be performed by each of the various subsystems. A simulation structure may be either fixed topology or free topology. In a fixed topology simulation, the basic structure shown in Fig. 78.1 is modeled. Various subsystems can be bypassed if desired by setting switches, but the basic topology cannot be modified. In a free topology structure, subsystems can be interconnected in any way desired and new additional subsystems can be added at will.

A simulation program for a communication system is a collection of at least three operations, shown in Fig. 78.2, although in a well-integrated simulation these operations tend to merge together. The first operation, sometimes referred to as the preprocessor, defines the parameters of each subsystem and the intrinsic parameters that control the operation of the simulation. The second operation is the simulation exercisor, which is the simulation program actually executed on the computer. The third operation performed in a simulation program is that of postprocessing. This is a collection of routines that format the simulation output in a way which provides insight into system operation and allows the performance of the communication system under study to be evaluated. A postprocessor usually consists of a number of graphics-based routines, allowing the user to view waveforms and other displays generated by the simulation. The postprocessor also consists of a number of routines that allow estimation of the bit error rate, signal-to-noise ratios, histograms, and power spectral densities.

FIGURE 78.2 Typical structure of a simulation program.

When faced with the problem of developing a simulation of a communication system, the first fundamental choice is whether to develop a custom simulation using a general-purpose high-level language or to use one of the many special-purpose communication system simulation languages available. If the decision is made to develop a dedicated simulation using a general-purpose language, a number of resources are needed beyond a quality compiler and a mathematics library. Also needed are libraries of filtering routines, software models for each of the subsystems contained in the overall system, channel models, and the waveform display and data analysis routines needed for the analysis of the simulation results (postprocessing). While at least some of the required software will have to be developed at the time the simulation is being written, many of the required routines can probably be obtained from digital signal processing (DSP) programs and other available sources. As more simulation projects are completed, the database of available routines becomes larger.

The other alternative is to use a dedicated simulation language, which makes it possible for one who does not have the necessary skills to create a custom simulation using a high-level language to develop a communication system simulation. Many simulation languages are available for both personal computers and workstations [Shanmugan, 1988]. While the use of these resources can speed simulation development, the user must ensure that the assumptions used in developing the models are well understood and applicable to the problem of interest. In choosing a dedicated language from among those that are available, one should select a language that has an extensive model library, an integrated postprocessor with a wide variety of data analysis routines, on-line help and documentation capabilities, and extensive error-checking routines.
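The preprocessor/exercisor/postprocessor structure described above might be sketched as follows. This is a hypothetical skeleton, not code from the handbook: the parameter names and the simple sampled-waveform BPSK link inside the exercisor are illustrative assumptions, chosen to show how the three operations divide the work:

```python
import numpy as np

def preprocessor():
    """Define subsystem parameters and intrinsic simulation controls (assumed values)."""
    return {"num_symbols": 20_000, "samples_per_symbol": 8,
            "ebn0_db": 4.0, "seed": 7}

def exercisor(params):
    """Run the sampled-signal simulation: source -> channel -> detector."""
    rng = np.random.default_rng(params["seed"])
    sps = params["samples_per_symbol"]
    bits = rng.integers(0, 2, params["num_symbols"])
    waveform = np.repeat(1.0 - 2.0 * bits, sps)      # rectangular BPSK pulses
    ebn0 = 10.0 ** (params["ebn0_db"] / 10.0)
    noise_std = np.sqrt(sps / (2.0 * ebn0))          # per-sample noise level
    received = waveform + noise_std * rng.standard_normal(waveform.size)
    # Integrate-and-dump detector: sum the samples in each symbol, threshold at 0
    decisions = (received.reshape(-1, sps).sum(axis=1) < 0).astype(int)
    return bits, decisions

def postprocessor(bits, decisions):
    """Format results; a full postprocessor would also produce plots and spectra."""
    return np.mean(bits != decisions)

ber = postprocessor(*exercisor(preprocessor()))
```

Note that every waveform in the exercisor is represented at a constant sampling rate (here, eight samples per symbol), which is the defining property of link simulation identified in Section 78.2.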

FIGURE 78.3 Design constraints and trade-offs.

78.6 The Interdisciplinary Nature of Simulation

The subject of computer-aided design and analysis of communication systems is very much interdisciplinary in nature. The major disciplines that bear on the subject are communication theory, DSP, numerical analysis, and stochastic process theory. The roles played by these subjects are clear. The simulation user must have knowledge of communication theory if the simulation results are to be understood. The analysis techniques of communication theory allow simulation results to be verified. Since each subsystem in the overall communication system is a signal processing operation, the tools of DSP provide the algorithms to realize filters and other subsystems. Numerical analysis techniques are used extensively in the development of signal processing algorithms. Since communication systems involve random data signals, as well as noise and other disturbances, the concepts of stochastic process theory are important in developing models of these quantities and also for determining performance estimates.

78.7 Model Design

Practicing engineers frequently use models to investigate the behavior of complex systems. Traditionally, models have been physical devices or a set of mathematical expressions. The widespread use of powerful digital computers now allows one to generate computer programs that model physical systems. Although the detailed development and use of computer models differs significantly from their physical and mathematical counterparts, the computer models share many of the same design constraints and trade-offs. For any model to be useful one must guarantee that the response of the model to stimuli will closely match the response of the target system, the model must be designed and fabricated in much less time and at significantly less expense than the target system, and the model must be reasonably easy to validate and modify. In addition to these constraints, designers of computer models must assure that the amount of processor time required to execute the model is not excessive. The optimal model is the one that appropriately balances these conflicting requirements.

Figure 78.3 describes the typical design trade-off faced when developing computer models. A somewhat surprising observation is that the optimal model is often not the one that most closely approximates the target system. A highly detailed model will typically require a tremendous amount of time to develop, will be difficult to validate and modify, and may require prohibitive processor time to execute. Selecting a model that achieves a good balance between these constraints is as much an art as a science. Being aware of the trade-offs which exist, and must be addressed, is the first step toward mastering the art of modeling.

© 2000 by CRC Press LLC

78.6 The Interdisciplinary Nature of Simulation

The subject of computer-aided design and analysis of communication systems is very much interdisciplinary in nature. The major disciplines that bear on the subject are communication theory, DSP, numerical analysis, and stochastic process theory. The roles played by these subjects are clear. The simulation user must have knowledge of communication theory if the simulation results are to be understood. The analysis techniques of communication theory allow simulation results to be verified. Since each subsystem in the overall communication system is a signal processing operation, the tools of DSP provide the algorithms to realize filters and other subsystems. Numerical analysis techniques are used extensively in the development of signal processing algorithms. Since communication systems involve random data signals, as well as noise and other disturbances, the concepts of stochastic process theory are important in developing models of these quantities and also for determining performance estimates.

78.7 Model Design

Practicing engineers frequently use models to investigate the behavior of complex systems. Traditionally, models have been physical devices or a set of mathematical expressions. The widespread use of powerful digital computers now allows one to generate computer programs that model physical systems. Although the detailed development and use of computer models differs significantly from their physical and mathematical counterparts, the computer models share many of the same design constraints and trade-offs. For any model to be useful one must guarantee that the response of the model to stimuli will closely match the response of the target system, the model must be designed and fabricated in much less time and at significantly less expense than the target system, and the model must be reasonably easy to validate and modify.
In addition to these constraints, designers of computer models must assure that the amount of processor time required to execute the model is not excessive. The optimal model is the one that appropriately balances these conflicting requirements. Figure 78.3 describes the typical design trade-off faced when developing computer models. A somewhat surprising observation is that the optimal model is often not the one that most closely approximates the target system. A highly detailed model will typically require a tremendous amount of time to develop, will be difficult to validate and modify, and may require prohibitive processor time to execute. Selecting a model that achieves a good balance between these constraints is as much an art as a science. Being aware of the trade-offs which exist, and must be addressed, is the first step toward mastering the art of modeling.

FIGURE 78.3 Design constraints and trade-offs.


78.8 Low-Pass Models

In most cases of practical interest the physical layer of the communication system will use continuous time (CT) signals, while the simulation will operate in discrete time (DT). For the simulation to be useful, one must develop DT signals and systems that closely match their CT counterparts. This topic is discussed at length in introductory DSP texts. A prominent result in this field is the Nyquist sampling theorem, which states that if a CT signal has no energy above frequency fh Hz, one can create a DT signal that contains exactly the same information by sampling the CT signal at any rate in excess of 2fh samples per second. Since the execution time of the simulation is proportional to the number of samples it must process, one naturally uses the lowest sampling rate possible. While the Nyquist theorem should not be violated for arbitrary signals, when the CT signal is bandpass one can use low-pass equivalent (LPE) waveforms that contain all the information of the CT signal but can be sampled slower than 2fh.

Assume the energy in a bandpass signal is centered about a carrier frequency of fc Hz and ranges from fl to fh Hz, resulting in a bandwidth of fh – fl = W Hz, as in Fig. 78.4. It is not unusual for W to be many orders of magnitude less than fc. The bandpass waveform x(t) can be expressed as a function of two low-pass signals. Two essentially equivalent LPE expansions are known as the envelope/phase representation [Davenport and Root, 1958],

x(t) = A(t) cos[2πfc t + θ(t)]    (78.1)

and the quadrature representation,

x(t) = xc(t) cos(2πfc t) – xs(t) sin(2πfc t)    (78.2)

All four real signals A(t), θ(t), xc(t), and xs(t) are low pass and have zero energy above W/2 Hz. A computer simulation that replaces x(t) with a pair of LPE signals will require far less processor time since the LPE waveforms can be sampled at W as opposed to 2fh samples per second. It is cumbersome to work with two signals rather than one signal.
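The equivalence of Eqs. (78.1) and (78.2) is easy to check numerically. The sketch below (using NumPy; the carrier frequency, sampling rate, and low-pass waveforms are illustrative choices, not values from the text) forms xc(t) = A(t) cos θ(t) and xs(t) = A(t) sin θ(t) and confirms that the two expansions produce the same bandpass samples.

```python
import numpy as np

# Illustrative parameters (not from the text): a 100 kHz carrier with
# low-pass components confined well below W/2.
fc = 100e3                       # carrier frequency, Hz
fs = 8 * fc                      # sampling rate for the bandpass waveform
t = np.arange(0, 0.01, 1 / fs)

# Low-pass envelope A(t) and phase theta(t)
A = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * t)
theta = 0.25 * np.sin(2 * np.pi * 250 * t)

# Quadrature components implied by the envelope/phase pair
xc = A * np.cos(theta)
xs = A * np.sin(theta)

# Eq. (78.1): envelope/phase representation
x1 = A * np.cos(2 * np.pi * fc * t + theta)
# Eq. (78.2): quadrature representation
x2 = xc * np.cos(2 * np.pi * fc * t) - xs * np.sin(2 * np.pi * fc * t)

assert np.allclose(x1, x2)       # the two expansions agree sample by sample
```

In an actual simulation only the low-pass pair xc(t) and xs(t) would be retained and sampled at a rate near W, rather than the bandpass waveform at many times fc.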
FIGURE 78.4 Amplitude spectrum of a bandpass signal.

A more mathematically elegant LPE expansion is

x(t) = Re{v(t) e^(j2πfc t)}    (78.3)

where v(t) is a low-pass, complex-time domain signal that has no energy above W/2 Hz. Signal v(t) is known as the complex envelope of x(t) [Haykin, 1983]. It contains all the information of x(t) and can be sampled at W samples per second without aliasing. This notation is disturbing to engineers accustomed to viewing all time domain signals as real. However, a complete theory exists for complex time domain signals, and with surprisingly little effort one can define convolution, Fourier transforms, analog-to-digital and digital-to-analog conversions, and many other signal processing algorithms for complex signals. If fc and W are known, the LPE mapping is one-to-one so that x(t) can be completely recovered from v(t). While it is conceptually simpler to sample the CT signals at a rate in excess of 2fh and avoid the mathematical difficulties of the LPE representation, the tremendous difference between fc and W makes the LPE far more efficient for computer simulation. This type


of trade-off frequently occurs in computer simulation. A careful mathematical analysis of the modeling problem conducted before any computer code is generated can yield substantial performance improvements over a conceptually simpler, but numerically inefficient, approach.

The fundamental reason the LPE representation outlined above is popular in simulation is that one can easily generate LPE models of linear time-invariant bandpass filters. The LPE of the output of a bandpass filter is merely the convolution of the LPE of the input signal and the LPE of the impulse response of the filter. It is far more difficult to determine an LPE model for nonlinear and time-varying systems. There are numerous approaches that trade off flexibility and simplicity. If the system is nonlinear and time invariant, a Volterra series can be used. While this series will exactly represent the nonlinear device, it is often analytically intractable and numerically inefficient. For nonlinear devices with a limited amount of memory the AM/AM, AM/PM [Shimbo, 1971] LPE model is useful. This model accurately describes the response of many microwave amplifiers including traveling-wave tubes, solid-state limiting amplifiers, and, under certain conditions, devices which exhibit hysteresis. The Chebyshev transform [Blachman, 1964] is useful for memoryless nonlinearities such as hard and soft limiters. If the nonlinear device is so complex that none of the conventional LPE models can be used, one may need to convert the LPE signal back to its bandpass representation, route the bandpass signal through a model of the nonlinear device, and then reconvert the output to an LPE signal for further processing. If this must be done, one has the choice of increasing the sampling rate for the entire simulation or using different sampling rates for various sections of the simulation. The second of these approaches is known as a multirate simulation [Cochiere and Rabiner, 1983].
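The AM/AM, AM/PM model mentioned above acts directly on the complex envelope: the instantaneous input amplitude sets both the output amplitude (AM/AM) and an added phase rotation (AM/PM). The sketch below uses Saleh's well-known four-parameter fit for a traveling-wave tube amplifier; the coefficient values are Saleh's published ones, but the routine as a whole should be read as an illustrative model, not a description of any particular device.

```python
import cmath

def saleh_twta(v, alpha_a=2.1587, beta_a=1.1517, alpha_p=4.0033, beta_p=9.1040):
    # AM/AM, AM/PM nonlinearity applied to one complex-envelope sample v.
    # The input amplitude r = |v| sets the output amplitude (AM/AM) and an
    # added phase rotation (AM/PM); the input phase passes through unchanged.
    r, phi = abs(v), cmath.phase(v)
    am_am = alpha_a * r / (1.0 + beta_a * r * r)       # output amplitude
    am_pm = alpha_p * r * r / (1.0 + beta_p * r * r)   # added phase, radians
    return am_am * cmath.exp(1j * (phi + am_pm))

# Small-signal drive sees roughly linear gain (about alpha_a), while heavy
# drive is strongly compressed -- behavior typical of a TWT amplifier.
small = saleh_twta(0.01 + 0j)
large = saleh_twta(10.0 + 0j)
assert abs(small) > 0.02 and abs(large) < 1.0
```

Because the model is memoryless, it can be applied sample by sample to the LPE waveform without leaving the low sampling rate that makes LPE simulation attractive.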
The interpolation and decimation operations required to convert between sampling rates can consume significant amounts of processor time. One must carefully examine this trade-off to determine if a multirate simulation will substantially reduce the execution time over a single, high sampling rate simulation. Efficient and flexible modeling of nonlinear devices is in general a difficult task and continues to be an area of active research.

78.9 Pseudorandom Signal and Noise Generators

The preceding discussion was motivated by the desire to efficiently model filters and nonlinear amplifiers. Since these devices often consume the majority of the processor time, they are given high priority. However, there are a number of other subsystems that do not resemble filters. One example is the data source that generates the message or waveform which must be transmitted. While signal sources may be analog or digital in nature, we will focus exclusively on binary digital sources. The two basic categories of signals produced by these devices are known as deterministic and random. When performing worst-case analysis, one will typically produce known, repetitive signal patterns designed to stress a particular subsystem within the overall communication system. For example, a signal with few transitions may stress the symbol synchronization loops, while a signal with many regularly spaced transitions may generate unusually wide bandwidth signals. The generation of this type of signal is straightforward and highly application dependent. To test the nominal system performance one typically uses a random data sequence. While generation of a truly random signal is arguably impossible [Knuth, 1981], one can easily generate pseudorandom (PN) sequences. PN sequence generators have been extensively studied since they are used in Monte Carlo integration and simulation [Rubinstein, 1981] programs and in a variety of wideband and secure communication systems.
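A binary PN source of the kind just described can be sketched as a linear feedback shift register. The fragment below implements a six-stage register with feedback taken from stages 5 and 6; this particular tap choice and seed are illustrative, but the connection polynomial is primitive, so the output is a maximal-length sequence of period 2^6 – 1 = 63.

```python
def lfsr(seed, length):
    # Six-stage Fibonacci LFSR clocked `length` times. Feedback is the XOR
    # of stages 5 and 6 (primitive polynomial x^6 + x + 1), so any nonzero
    # seed yields a maximal-length sequence of period 2^6 - 1 = 63.
    state = list(seed)              # state[0] is stage 1, state[5] is stage 6
    out = []
    for _ in range(length):
        fb = state[5] ^ state[4]    # modulo-two adder on stages 6 and 5
        out.append(state[5])        # the output is taken from the last stage
        state = [fb] + state[:5]    # shift; the feedback bit enters stage 1
    return out

seq = lfsr([1, 1, 1, 1, 1, 1], 126)   # all flip-flops initialized to 1
assert seq[:63] == seq[63:126]        # the sequence repeats every 63 bits
assert sum(seq[:63]) == 32            # one period holds 2^5 ones, 31 zeros
```

The two assertions check the properties discussed in the text: the sequence is periodic, and within one period it is nearly balanced between ones and zeros, as a random sequence would be.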
The two basic structures for generating PN sequences are binary shift registers (BSRs) and linear congruential algorithms (LCAs). Digital data sources typically use BSRs, while noise generators often use LCAs. A logic diagram for a simple BSR is shown in Fig. 78.5. This BSR consists of a clock, six D-type flip-flops (F/F), and an exclusive OR gate denoted by a modulo-two adder. If all the F/F are initialized to 1, the output of the device is the waveform shown in Fig. 78.6. Notice that the waveform is periodic with period 63 = 2^6 – 1, but within one cycle the output has many of the properties of a random sequence. This demonstrates all the properties of the BSR, LCA, and more advanced PN sequence generators. All PN generators have memory and must therefore be initialized by the user before the first sample is generated. The initialization data is typically called the seed. One must choose this seed carefully to ensure the output will have the desired properties (in this example, one must avoid setting all F/F to zero). All PN sequence generators will produce periodic sequences. This may or may not be


a problem. If it is a concern, one should ensure that one period of the PN sequence generator is longer than the total execution time of the simulation. This is usually not a significant problem, since one can easily construct BSRs that have periods greater than 10^27 clock cycles. The final concern is how closely the behavior of the PN sequence generator matches a truly random sequence. Standard statistical analysis algorithms have been applied to many of these generators to validate their performance.

Many digital communication systems use m bit (M-ary) sources where m > 1. Figure 78.7 depicts a simple algorithm for generating an M-ary random sequence from a binary sequence. The clock must now cycle through m cycles for every generated symbol, and the period of the generator has been reduced by a factor of m. This may force the use of a longer-period BSR. Another common application of PN sequence generators is to produce

FIGURE 78.5 Six-stage binary shift register PN generator.
FIGURE 78.6 Output of a six-stage maximal length BSR.
FIGURE 78.7 M-ary PN sequence generator.
FIGURE 78.8 Generation of Gaussian noise.


samples of a continuous stochastic process, such as Gaussian noise. A structure for producing these samples is shown in Fig. 78.8. In this case the BSR has been replaced by an LCA [Knuth, 1981]. The LCA is very similar to the BSR in that it requires a seed value, is clocked once for each symbol generated, and will generate a periodic sequence. One can generate a white noise process with an arbitrary first-order probability density function (pdf) by passing the output of the LCA through an appropriately designed nonlinear, memoryless mapping. Simple and well-documented algorithms exist for the uniform to Gaussian mapping. If one wishes to generate a nonwhite process, the output can be passed through the appropriate filter. Generation of a wide-sense stationary Gaussian stochastic process with a specified power spectral density is a well-understood and well-documented problem. It is also straightforward to generate a white sequence with an arbitrary first-order pdf or to generate a specified power spectral density if one does not attempt to control the pdf. However, the problem of generating a noise source with an arbitrary pdf and an arbitrary power spectral density is a significant challenge [Sondhi, 1983].

78.10 Transmitter, Channel, and Receiver Modeling

Most elements of transmitters, channels, and receivers are implemented using standard DSP techniques. Effects that are difficult to characterize using mathematical analysis can often be included in the simulation with little additional effort. Common examples include gain and phase imbalance in quadrature circuits, nonlinear amplifiers, oscillator instabilities, and antenna platform motion. One can typically use LPE waveforms and devices to avoid translating the modulator output to the carrier frequency. Signal levels in physical systems often vary by many orders of magnitude, with the output of the transmitters being extremely high energy signals and the input to receivers at very low energies.
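The LCA-plus-mapping noise generator described in Section 78.9 can be sketched in a few lines. The generator below pairs a linear congruential recursion (the multiplier and increment are the widely published Numerical Recipes constants, chosen here purely for illustration) with the Box–Muller transform, one of the simple, well-documented uniform-to-Gaussian mappings the text refers to.

```python
import math

class LCG:
    # Minimal linear congruential algorithm: x[n+1] = (a*x[n] + c) mod 2^32.
    # The constants below are illustrative, not a recommendation.
    def __init__(self, seed):
        self.state = seed

    def uniform(self):
        self.state = (1664525 * self.state + 1013904223) % (1 << 32)
        return (self.state + 0.5) / (1 << 32)   # uniform on (0, 1), never 0

def gaussian_pair(rng):
    # Box-Muller transform: a memoryless mapping that takes two independent
    # uniform samples to two independent N(0, 1) samples.
    u1, u2 = rng.uniform(), rng.uniform()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

rng = LCG(seed=12345)
samples = [g for _ in range(5000) for g in gaussian_pair(rng)]
mean = sum(samples) / len(samples)
var = sum((g - mean) ** 2 for g in samples) / len(samples)
assert abs(mean) < 0.1 and abs(var - 1.0) < 0.1   # near zero mean, unit variance
```

Filtering this white sequence then shapes its power spectral density, exactly as in the colored-noise structure of Fig. 78.8.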
To reduce execution time and avoid working with extremely large and small signal level simulations, one often omits the effects of linear amplifiers and attenuators and uses normalized signals. Since the performance of most systems is a function of the signal-to-noise ratio, and not of absolute signal level, normalization will have no effect on the measured performance. One must be careful to document the normalizing constants so that the original signal levels can be reconstructed if needed. Even some rather complex functions, such as error detecting and correcting codes, can be handled in this manner. If one knows the uncoded error rate for a system, the coded error rate can often be closely approximated by applying a mathematical mapping. As will be pointed out below, the amount of processor time required to produce a meaningful error rate estimate is often inversely proportional to the error rate. While an uncoded error rate may be easy to measure, the coded error rate is usually so small that it would be impractical to execute a simulation to measure this quantity directly. The performance of a coded communication system is most often determined by first executing a simulation to establish the channel symbol error rate. An analytical mapping can then be used to determine the decoded BER from the channel symbol error rate.

Once the signal has passed through the channel, the original message is recovered by a receiver. This can typically be realized by a sequence of digital filters, feedback loops, and appropriately selected nonlinear devices. A receiver encounters a number of clearly identifiable problems that one may wish to address independently. For example, receivers must initially synchronize themselves to the incoming signal.
This may involve detecting that an input signal is present, acquiring an estimate of the carrier amplitude, frequency, phase, symbol synchronization, frame synchronization, and, in the case of spread spectrum systems, code synchronization. Once acquisition is complete, the receiver enters a steady-state mode of operation, where concerns such as symbol error rate, mean time to loss of lock, and reaction to fading and interference are of primary importance. To characterize the system, the user may wish to decouple the analysis of these parameters to investigate relationships that may exist. For example, one may run a number of acquisition scenarios and gather statistics concerning the probability of acquisition within a specified time interval or the mean time to acquisition. To isolate the problems faced in synchronization from the inherent limitation of the channel, one may wish to use perfect synchronization information to determine the minimum possible BER. Then the symbol or carrier synchronization can be held at fixed errors to determine sensitivity to these parameters and to investigate worst-case performance. Noise processes can be used to vary these parameters to investigate more typical performance. The designer may also
