Principles of Linear Systems and Signals, 2nd Edition




















Actually, there is nothing mysterious about these systems and their approximate realization through physical systems with delay. If we want to know what will happen one year from now, we have two choices: go to a prophet (an unrealizable person), who can give the answer instantly, or go to a wise man and allow him a delay of one year to give us the answer!

If the wise man is truly wise, he may even be able, by studying trends, to shrewdly guess the future very closely with a delay of less than a year. Such is the case with noncausal systems—nothing more and nothing less. Systems whose inputs and outputs are continuous-time signals are continuous-time systems. Systems whose inputs and outputs are discrete-time signals are discrete-time systems.

A digital computer is a familiar example of this type of system. In practice, discrete-time signals can arise from sampling continuous-time signals. For example, when the sampling is uniform, the discrete instants t0, t1, t2, ... are uniformly spaced.

A typical discrete-time signal is shown in Fig. A discrete-time signal may also be viewed as a sequence of numbers. Thus, a discrete-time system may be seen as processing a sequence of numbers x[n] and yielding as an output another sequence of numbers y[n]. Discrete-time signals arise naturally in situations that are inherently discrete-time, such as population studies, amortization problems, national income models, and radar tracking. In this manner, we can process a continuous-time signal with an appropriate discrete-time system such as a digital computer.
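As a concrete sketch (with illustrative values for the signal frequency and sampling interval, which are assumptions here, not the text's), uniform sampling x[n] = x(nT) turns a continuous-time signal into a sequence that a discrete-time system can process:

```python
import math

# Uniform sampling of a continuous-time signal x(t) = sin(2*pi*f*t).
# The frequency f and sampling interval T are illustrative values.
f = 5.0          # signal frequency, Hz (assumed)
T = 0.01         # sampling interval, s (assumed)

def x(t):
    return math.sin(2 * math.pi * f * t)

# Discrete-time signal: x[n] = x(nT)
xn = [x(n * T) for n in range(100)]

# A simple discrete-time system: 3-point moving average of the samples
yn = [(xn[n - 1] + xn[n] + xn[n + 1]) / 3 for n in range(1, len(xn) - 1)]
```

The sequence `yn` is what a digital computer would actually produce when processing the samples of x(t).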

A system whose input and output signals are analog is an analog system; a system whose input and output signals are digital is a digital system. A digital computer is an example of a digital (binary) system.

Observe that a digital computer is a digital as well as a discrete-time system. If we can obtain the input x(t) back from the corresponding output y(t) by some operation, the system S is said to be invertible. Therefore, for an invertible system, it is essential that every input have a unique output so that there is a one-to-one mapping between an input and the corresponding output.

The system that achieves the inverse operation [of obtaining x(t) from y(t)] is the inverse system for S. For instance, if S is an ideal integrator, then its inverse system is an ideal differentiator. Consider a system S connected in tandem with its inverse Si, as shown in Fig. The input x(t) to this tandem system results in the signal y(t) at the output of S, and the signal y(t), which now acts as an input to Si, yields back the signal x(t) at the output of Si. Thus, Si undoes the operation of S on x(t), yielding back x(t).

A system whose output is equal to the input for all possible inputs is an identity system. Cascading a system with its inverse system, as shown in Fig., results in an identity system.
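The integrator/differentiator pairing can be sketched numerically (a discrete approximation assumed here, not the text's development): cascading a running-sum integrator S with a first-difference differentiator Si recovers the input, behaving as an identity system:

```python
# Numerical sketch of a system in tandem with its inverse: a running-sum
# integrator S followed by a first-difference differentiator Si acts as
# an identity system, recovering the input samples.
dt = 0.001
x = [(k * dt) ** 2 for k in range(1000)]   # samples of x(t) = t^2

# S: running (cumulative) integral, y[n] = y[n-1] + x[n]*dt
y = []
acc = 0.0
for v in x:
    acc += v * dt
    y.append(acc)

# Si: first difference, recovers x[n] up to floating-point error
x_rec = [y[0] / dt] + [(y[n] - y[n - 1]) / dt for n in range(1, len(y))]
```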

Inverse systems are very important in signal processing. In many applications, signals are distorted during processing, and it is necessary to undo the distortion and restore the signal as closely as possible to its original shape.

Such equalization is also used in audio and photographic systems. Stability can be internal or external. If every bounded input applied at the input terminal results in a bounded output, the system is said to be externally stable. External stability can be ascertained by measurements at the external terminals (input and output) of the system. The concept of internal stability is postponed to Chapter 2 because it requires some understanding of internal system behavior, introduced in that chapter.
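A minimal numerical illustration of external (bounded-input, bounded-output) stability, using a hypothetical first-order recursive system chosen for this sketch:

```python
# For the hypothetical system y[n] = a*y[n-1] + x[n]: with |a| < 1 a
# bounded input yields a bounded output; with |a| > 1 the same bounded
# input drives the output without bound.
def respond(a, x):
    y, out = 0.0, []
    for v in x:
        y = a * y + v
        out.append(y)
    return out

x = [1.0] * 50                    # bounded input: unit-step samples
stable = respond(0.5, x)          # |a| < 1: output stays below 2
unstable = respond(1.5, x)        # |a| > 1: output grows geometrically
```

This is exactly the kind of conclusion external measurements can support: we only observed the input and output terminals.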

As mentioned earlier, systems theory encompasses a variety of systems, such as electrical, mechanical, hydraulic, acoustic, electromechanical, and chemical, as well as social, political, economic, and biological.

In this chapter we shall consider only continuous-time systems. Modeling of discrete-time systems is discussed in Chapter 3. In addition, we must determine the various constraints on voltages and currents when several electrical elements are interconnected. From all these equations, we eliminate unwanted variables to obtain the equation(s) relating the desired output variable(s) to the input(s).

The following examples demonstrate the procedure of deriving input-output relationships for some LTI electrical systems. With this notation, Eq. This procedure may yield erroneous results when the factor D occurs in the numerator as well as in the denominator.

This happens, for instance, in circuits with all-inductor loops or all-capacitor cut sets. To eliminate this problem, avoid the integral operation in system equations so that the resulting equations are differential rather than integro-differential. In electrical circuits, this can be done by using charge (instead of current) variables in loops containing capacitors and choosing current variables for loops without capacitors. As mentioned earlier, such a procedure gives erroneous results only in special systems, such as circuits with all-inductor loops or all-capacitor cut sets.

Fortunately such systems constitute a very small fraction of the systems we deal with. For further discussion of this topic and a correct method of handling problems involving integrals, see Ref.

Recall that Eq. Clearly, a polynomial in D multiplied by y(t) represents a certain differential operation on y(t). We shall restrict ourselves to motions in one dimension. The laws of the various mechanical elements are now discussed. For a mass M (Fig.), the force is M times the acceleration; for a linear dashpot (Fig.), the force is proportional to the relative velocity of its terminals. The input is the force x(t), and the output is the mass position y(t). In Fig., the displacement of the mass is denoted by y(t). The variables used to describe rotational motion are torque (in place of force), angular position (in place of linear position), angular velocity (in place of linear velocity), and angular acceleration (in place of linear acceleration).

The system elements are rotational mass or moment of inertia (in place of mass) and torsional springs and torsional dashpots (in place of linear springs and dashpots).

The terminal equations for these elements are analogous to the corresponding equations for translational elements. If J is the moment of inertia (or rotational mass) of a rotating body about a certain axis, then the external torque required for this motion is equal to J (rotational mass) times the angular acceleration.
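The translational-rotational analogies can be summarized with the standard element laws (B and K here denote the damping and stiffness coefficients):

```latex
% Translational elements             Rotational analogs
F(t) = M \frac{d^2 y}{dt^2}, \qquad  T(t) = J \frac{d^2\theta}{dt^2}  % mass / moment of inertia
F(t) = B \frac{dy}{dt},      \qquad  T(t) = B \frac{d\theta}{dt}      % dashpot / torsional dashpot
F(t) = K\,y(t),              \qquad  T(t) = K\,\theta(t)              % spring / torsional spring
```

Each rotational equation is obtained from its translational counterpart by replacing force with torque and linear position with angular position.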

Here we consider a rather simple example of an armature-controlled dc motor driven by a current source x(t), as shown in Fig. The torque T(t) generated in the motor is proportional to the armature current x(t). This torque drives a mechanical load whose free-body diagram is shown in Fig. We have found an external description (not the internal description) of systems in all the examples discussed so far. This may puzzle the reader because in each of these cases, we derived the input-output relationship by analyzing the internal structure of the system.

Why is this not an internal description? What makes a description internal? We could have obtained the input-output description by making observations at the external input and output terminals, for example, by measuring the output for certain inputs such as an impulse or a sinusoid. A description that can be obtained from measurements at the external terminals (even when the rest of the system is sealed inside an inaccessible black box) is an external description.

Clearly, the input-output description is an external description. What, then, is an internal description? An internal description is capable of providing complete information about all possible signals in the system. An external description may not give such complete information.

An external description can always be found from an internal description, but the converse is not necessarily true. We shall now give an example to clarify the distinction between an external and an internal description. Let the circuit in Fig. To determine its external description, let us apply a known voltage x(t) at the input terminals and measure the resulting output voltage y(t).

Let us also assume that there is some initial charge Q0 present on the capacitor. The output voltage will generally depend on both the input x(t) and the initial charge Q0. Clearly, the capacitor charge results in zero voltage at the output. The current i(t) is shown in Fig. Thus, the voltage across the capacitor continues to remain zero. Therefore, for the purpose of computing the current i(t), the capacitor may be removed or replaced by a short. The resulting circuit is equivalent to that shown in Fig.

Clearly, for the external description, the capacitor does not exist. No external measurement or observation can detect the presence of the capacitor.

An internal description, however, can provide every possible signal inside the system. In Example 1. For most systems, the external and internal descriptions are equivalent, but there are a few exceptions, as in the present case, where the external description gives an inadequate picture of the system. The output component due to the input x(t) (assuming zero initial capacitor charge) is the zero-state response.
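The distinction can be sketched with a toy simulation (a hypothetical system constructed for this illustration, not the circuit in the figure): a hidden internal state evolves but never reaches the output, so no external measurement detects it:

```python
# A hidden state q (playing the role of the capacitor charge) decays
# internally but never appears in the output, so two different initial
# charges produce identical external measurements.
def simulate(Q0, x_const=1.0, dt=0.01, steps=500):
    q, qs, ys = Q0, [], []
    for _ in range(steps):
        q += (-q) * dt          # internal state equation: dq/dt = -q
        qs.append(q)
        ys.append(x_const)      # output depends only on the input here
    return qs, ys

q_a, y_a = simulate(Q0=5.0)     # nonzero initial "charge"
q_b, y_b = simulate(Q0=0.0)     # zero initial "charge"
# y_a == y_b even though the internal states q_a and q_b differ
```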

Complete analysis of this problem is given later in Example 1. Such systems are undesirable in practice and should be avoided in any system design. The system in Fig. It can be represented structurally as a combination of the systems in Fig. In this approach, we identify certain key variables, called the state variables, of the system. These variables have the property that every possible signal in the system can be expressed as a linear combination of these state variables.

For example, we can show that every possible signal in a passive RLC circuit can be expressed as a linear combination of independent capacitor voltages and inductor currents, which, therefore, are state variables for the circuit. To illustrate this point, consider the network in Fig. We identify two state variables: the capacitor voltage q1 and the inductor current q2.

If the values of q1, q2, and the input x(t) are known at some instant t, we can demonstrate that every possible signal (current or voltage) in the circuit can be determined at t.

Clearly, state variables consist of the key variables in a system; a knowledge of the state variables allows one to determine every possible output of the system. Note that the state-variable description is an internal description of a system because it is capable of describing all possible signals in the system.

Consider again the network in Fig. This can be done by simple inspection of Fig. This set of equations is known as the state equations. Once these equations have been solved for q1 and q2, everything else in the circuit can be determined by using Eqs. The set of equations relating the outputs to the state variables is known as the output equations. Thus, in this approach, we have two sets of equations, the state equations and the output equations.
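The two-step procedure can be sketched numerically for a hypothetical first-order RC circuit (the element values and the Euler integration method are assumptions of this sketch):

```python
# Two-step state-space procedure: (1) solve the state equation for the
# capacitor voltage q(t); (2) obtain the output from the output equation.
R, C = 1.0, 1.0           # illustrative element values
dt, steps = 0.001, 5000   # Euler step and horizon (5 seconds)

def step_input(t):
    return 1.0            # x(t) = u(t), a unit-step input

q = 0.0                   # state: capacitor voltage, q(0) = 0
for k in range(steps):
    t = k * dt
    dq = (step_input(t) - q) / (R * C)   # state equation: dq/dt = (x - q)/RC
    q += dq * dt                         # Euler integration step
y = q                     # output equation: y(t) = q(t) in this sketch
```

After five time constants, q (and hence y) has risen to nearly the full step value, matching the analytic solution 1 - e^(-t/RC).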

Once we have solved the state equations, all possible outputs can be obtained from the output equations. In the input-output description, an Nth-order system is described by an Nth-order equation. This is the case when the system is both controllable and observable. If it is not, the input-output description equation will be of an order lower than the corresponding number of state equations. This circuit has only one capacitor and no inductors. Hence, there is only one state variable, the capacitor voltage q(t).

There are two sources in this circuit: the input x(t) and the capacitor voltage q(t). Examining the circuit in Fig. The total current in any branch is the sum of the currents in that branch in Fig. Once we have solved the state equation, everything else in the circuit can be determined. Moreover, Eq.

Thus, the system state cannot be observed from the output terminals. Hence, the system is neither controllable nor observable. Such is not the case with the other systems examined earlier. Consider, for example, the circuit in Fig. Hence, the system is controllable. Moreover, as the output Eqs. show, the states are also observable. Indeed, state equations are ideally suited for the analysis, synthesis, and optimization of MIMO systems. Compact matrix notation and the powerful techniques of linear algebra greatly facilitate complex manipulations.
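Controllability and observability can be checked directly from the matrices of a state-space model; here is a hedged sketch for a hypothetical two-state diagonal system, testing the rank of [B, AB] and [C; CA] via 2x2 determinants:

```python
# Hypothetical system dq/dt = A q + B x, y = C q: the input reaches both
# state modes (controllable), but the output sees only the first mode
# (unobservable second state).
A = [[-1.0, 0.0],
     [0.0, -2.0]]
B = [1.0, 1.0]            # input couples into both states
C = [1.0, 0.0]            # output measures only the first state

AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]      # A @ B
CA = [C[0] * A[0][0] + C[1] * A[1][0],
      C[0] * A[0][1] + C[1] * A[1][1]]      # C @ A

# A nonzero 2x2 determinant means full rank
ctrb_det = B[0] * AB[1] - B[1] * AB[0]      # det of [B, AB]
obsv_det = C[0] * CA[1] - C[1] * CA[0]      # det of [C; CA]
controllable = abs(ctrb_det) > 1e-12        # True for this system
observable = abs(obsv_det) > 1e-12          # False: second state is hidden
```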

State equations can yield a great deal of information about a system even when they are not solved explicitly. State equations lend themselves readily to digital computer simulation of complex systems of high order, with or without nonlinearities, and with multiple inputs and outputs.

Much of the book is devoted to introducing the basic concepts of linear systems analysis, which must necessarily begin with simpler systems, without using the state-space approach. Chapter 10 deals with the state-space analysis of linear, time-invariant, continuous-time and discrete-time systems. A system processes input signals to modify them or extract additional information from them to produce output signals (response).

A system may be made up of physical components (hardware realization), or it may be an algorithm that computes an output signal from an input signal (software realization). For periodic signals, the time averaging need be performed only over one period in view of the periodic repetition of the signal.

An analog signal is a signal whose amplitude can take on any value over a continuum. The terms discrete-time and continuous-time qualify the nature of a signal along the time (horizontal) axis. A periodic signal remains unchanged when shifted by an integer multiple of its period.

A periodic signal x(t) can be generated by a periodic extension of any contiguous segment of x(t) of duration T0. Hence, periodic signals are everlasting signals.
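A short sketch of periodic extension (the segment shape and period T0 are illustrative choices, not from the text):

```python
# Periodic extension: one contiguous segment of duration T0, repeated
# end to end, defines the signal for all time (past and future).
T0 = 2.0

def segment(t):
    # one period of a triangular signal on [0, T0), illustrative only
    return t if t < 1.0 else 2.0 - t

def periodic(t):
    # wrap any t (including negative t) back into [0, T0)
    return segment(t % T0)
```

Because `periodic(t)` is defined for every t, positive or negative, the extension is an everlasting signal, as the text notes.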

A signal can be either an energy signal or a power signal, but not both. However, there are signals that are neither energy nor power signals. A signal whose physical description is known completely, in a mathematical or graphical form, is a deterministic signal. A random signal is known only in terms of its probabilistic description (such as mean value or mean-square value) rather than by its mathematical or graphical form.

The unit step function u(t) is very useful in representing causal signals and signals with different mathematical descriptions over different intervals. The impulse function has a sampling (or sifting) property, which states that the area under the product of a function with a unit impulse is equal to the value of that function at the instant at which the impulse is located (assuming the function to be continuous at the impulse location).
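The sifting property can be verified numerically by approximating the unit impulse with a tall, narrow pulse (width eps, height 1/eps; the function f and the numbers here are illustrative):

```python
import math

# Sifting property: the area under f(t)*delta(t - T) equals f(T).
# Approximate delta(t - T) by a pulse of width eps and height 1/eps.
def f(t):
    return math.cos(t)

T, eps, N = 1.0, 1e-4, 1000
dt = eps / N
# midpoint-rule integral of f(t) * (1/eps) over [T - eps/2, T + eps/2]
total = sum(f(T - eps / 2 + (k + 0.5) * dt) * (1.0 / eps) * dt
            for k in range(N))
# total is very close to f(T) = cos(1)
```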

The exponential function e^st, where s is complex, encompasses a large class of signals that includes a constant, a monotonic exponential, a sinusoid, and an exponentially varying sinusoid. The product of an even function and an odd function is an odd function. However, the product of an even function and an even function, or an odd function and an odd function, is an even function. Every signal can be expressed as a sum of an even and an odd function of time. A system processes input signals to produce output signals (response).
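The even/odd decomposition mentioned above can be sketched directly (the causal signal x(t) chosen here is an illustrative example):

```python
import math

# Every signal splits as x(t) = x_e(t) + x_o(t), with
# x_e(t) = [x(t) + x(-t)]/2 even and x_o(t) = [x(t) - x(-t)]/2 odd.
def x(t):
    return math.exp(-t) if t >= 0 else 0.0    # a causal signal, for example

def x_e(t):
    return 0.5 * (x(t) + x(-t))

def x_o(t):
    return 0.5 * (x(t) - x(-t))
```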

The input is the cause, and the output is its effect. In general, the output is affected by two causes: the internal conditions of the system (such as the initial conditions) and the external input. Linear systems are characterized by the linearity property, which implies superposition; if several causes (such as various inputs and initial conditions) are acting on a linear system, the total output (response) is the sum of the responses from each cause, assuming that all the remaining causes are absent.

A system is nonlinear if superposition does not hold. In time-invariant systems, system parameters do not change with time. The parameters of time-varying systems change with time. For memoryless (or instantaneous) systems, the system response at any instant t depends only on the value of the input at t. For systems with memory (also known as dynamic systems), the system response at any instant t depends not only on the present value of the input but also on the past values of the input (values before t).

In contrast, if a system response at t also depends on the future values of the input (values of the input beyond t), the system is noncausal. In causal systems, the response does not depend on the future values of the input. Because of the dependence of the response on the future values of the input, the effect (response) of noncausal systems occurs before the cause. When the independent variable is time (temporal systems), noncausal systems are prophetic systems and, therefore, unrealizable, although close approximation is possible with some time delay in the response.
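The delayed-approximation idea can be sketched for a simple noncausal system (a centered moving average chosen for this illustration): delaying its output by one sample yields a causal, realizable system:

```python
# The noncausal centered average y[n] = (x[n-1] + x[n] + x[n+1])/3 needs
# the future sample x[n+1]. Delaying the output by one sample gives the
# causal system yd[n] = (x[n] + x[n-1] + x[n-2])/3, i.e. yd[n] = y[n-1].
def noncausal(x):
    return [(x[n - 1] + x[n] + x[n + 1]) / 3 for n in range(1, len(x) - 1)]

def causal_delayed(x):
    return [(x[n] + x[n - 1] + x[n - 2]) / 3 for n in range(2, len(x))]

x = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
# causal_delayed(x) reproduces noncausal(x), shifted one sample later
```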

Noncausal systems with independent variables other than time (e.g., space) are realizable. Systems whose inputs and outputs are continuous-time signals are continuous-time systems; systems whose inputs and outputs are discrete-time signals are discrete-time systems.

If a continuous-time signal is sampled, the resulting signal is a discrete-time signal. We can process a continuous-time signal by processing the samples of the signal with a discrete-time system. Systems whose inputs and outputs are analog signals are analog systems; those whose inputs and outputs are digital signals are digital systems. If we can obtain the input x(t) back from the output y(t) of a system S by some operation, the system S is said to be invertible.

Otherwise, the system is noninvertible. A system is stable if every bounded input produces a bounded output. Internal stability, discussed later in Chapter 2, is measured in terms of the internal behavior of the system. The system model derived from a knowledge of the internal structure of the system is its internal description. In contrast, an external description is a representation of a system as seen from its input and output terminals; it can be obtained by applying a known input and measuring the resulting output.

In the majority of practical systems, an external description of a system so obtained is equivalent to its internal description. At times, however, the external description fails to describe the system adequately. Such is the case with the so-called uncontrollable or unobservable systems. State equations of a system represent an internal description of that system.

Papoulis, A. The Fourier Integral and Its Applications. McGraw-Hill, New York.
Mason, S. Electronic Circuits, Signals, and Systems. Wiley, New York.


An extensive development of the Laplace transform is given in Ref.


