Control System Theory & Design: Notes
 


These are notes I prepared for a course on control that I taught at King Abdulaziz University.

The notes did not go through the intended revision after the end of the semester due to my schedule, so they remain a rough first draft.

They are largely (but not completely) inspired by a control course taught by Dr. Gregory Shaver at the ME dept. at Purdue.

Much of the information was gleaned through a variety of textbooks/papers/experiences, too many to mention here.

Although a reasonable attempt has been made to ensure all the facts are correct, these notes are as-is with no guarantee of accuracy. Use at your own risk.

© All Rights Reserved

Report content

Flagged as inappropriate Flag as inappropriate
Flag as inappropriate

Select your reason for flagging this presentation as inappropriate.

Cancel
  • Full Name Full Name Comment goes here.
    Are you sure you want to
    Your message goes here
    Processing…
Post Comment
Edit your comment

Document Transcript

Control System Theory & Design
Lecture Notes

Ismail Hameduddin
King Abdulaziz University
Contents

1 Introduction to feedback control ... 4
  1.1 Basic notion of feedback control ... 4
  1.2 Control architectures ... 8
2 System models and representation ... 15
  2.1 Model classification ... 15
  2.2 State-space representation ... 15
  2.3 Input/output differential equation ... 16
  2.4 Transfer functions ... 17
3 Dynamic response of systems ... 19
  3.1 First-order system response ... 19
  3.2 Second-order systems ... 19
  3.3 Design considerations ... 21
  3.4 Routh-Hurwitz stability criterion ... 21
  3.5 Pole/Zero effects ... 23
4 Frequency response tools ... 25
  4.1 Frequency response ... 25
  4.2 Bode plots ... 25
  4.3 Gain and phase margins ... 27
  4.4 Phase margin and second-order systems ... 29
  4.5 Root-locus ... 29
  4.6 Nyquist stability criterion ... 32
  4.7 Feedback with disturbances ... 37
  4.8 Trends from Bode & Poisson's integral constraints ... 39
  4.9 Upper bounds on peaks for sensitivity integrals ... 42
5 Frequency domain control design ... 43
  5.1 Direct pole-placement ... 43
  5.2 Modeling errors ... 45
  5.3 Robust stability ... 45
  5.4 Robust performance ... 47
  5.5 Internal Model Principle ... 48
6 State-space techniques ... 49
  6.1 Transfer function to state-space ... 49
  6.2 Eigenvalues & eigenvectors ... 50
  6.3 Solution and stability of dynamic systems ... 50
  6.4 State transformation ... 55
  6.5 Diagonalization ... 55
  6.6 State-space to transfer function ... 57
  6.7 Cayley-Hamilton Theorem ... 58
  6.8 Controllability & reachability ... 59
  6.9 Controllability gramian ... 64
  6.10 Output controllability ... 65
  6.11 Control canonical form and controllability ... 66
  6.12 Stabilizability ... 68
  6.13 Popov-Belevitch-Hautus test for controllability and stabilizability ... 72
  6.14 Observability ... 72
  6.15 Observability gramian ... 75
  6.16 Observable canonical form and observability ... 75
  6.17 Detectability ... 76
  6.18 PHB test for observability and detectability ... 76
  6.19 Duality between observability and controllability ... 76
7 Control design in state-space ... 77
  7.1 State feedback design/Pole-placement ... 77
  7.2 Ackermann's formula ... 79
  7.3 SISO tracking ... 81
  7.4 SISO tracking via complementary sensitivity functions ... 81
  7.5 SISO tracking via integral action ... 82
  7.6 Stabilizable systems ... 83
  7.7 State observer design/Luenberger state observer/estimator design ... 87
  7.8 Disturbance observers ... 90
  7.9 Output feedback control ... 92
  7.10 Transfer function for output feedback compensator ... 93
  7.11 SISO tracking with output feedback control ... 93
8 Basic notions of linear algebra ... 95
Chapter 1: Introduction to feedback control

We introduce feedback in terms of what will be discussed in these notes.

1.1 Basic notion of feedback control

A block diagram illustration of an ideal standard feedback control system is shown in Fig. 1.1.

Figure 1.1: Block diagram illustration of an ideal feedback control system.

The idea of feedback is straightforward: we monitor the actual output of the system, or plant; generate an error signal from the difference between the desired output and the actual output of the dynamic system; and finally use this error in a compensator, which produces a control signal that can act as an input to the actual system.

The compensator is a general term: it can be anything such as an algorithm, a function or just a simple gain, i.e., a constant value multiplying the error. The goal of control design is to determine an appropriate compensator to achieve performance objectives. The plant is the dynamic system being controlled. We model this system via mathematical tools and use the model to design the compensator. A mathematical model is always an approximation of the real system, since we cannot effectively model everything present in the real world, nor can we account for variations in parameters such as geometries, forces and operating conditions. Thus there always exists some uncertainty in the plant model, which directly affects compensator design.
Feedback goes beyond mathematical functions and abstractions; it is a fact of everyday life. An obvious example of human feedback is adjusting the water temperature in the shower. The hot-cold knob is adjusted until a comfortable position is reached. In this case, the skin monitors the output of the system, i.e., the temperature of the water. It sends this signal to the brain, which judges the difference between the desired water temperature and the actual water temperature. The brain uses this difference to generate a control signal that commands the hand to adjust the knob so as to reduce the difference. If the water is cold, you shiver; the brain determines that you need to be warmer and signals your hand to adjust the knob for warmer water until you stop shivering. Control engineering is then simply the attempt to mimic the elegant, efficient and effective control of processes apparent in nature.

We mentioned that the goal of control design is to determine an appropriate compensator to achieve performance objectives. Some common performance objectives are listed below:

  • Stability of the closed-loop system.
  • Satisfactory dynamic response, e.g.,
    – Settling time (time of response).
    – Percent overshoot.
  • Small tracking/steady-state error.
  • Remaining within hardware limits, e.g., forces and voltages.
  • Eliminating the impact of disturbances, such as wind gusts on UAVs and inclines in a car cruise control system.
  • Eliminating the impact of measurement noise.
  • Eliminating the impact of model uncertainty.

Completely satisfying the above wish list is not usually possible, due to the existence of fundamental limitations in the system; there is no such thing as a free lunch. However, via feedback control and other strategies, we can trade off these fundamental limitations against each other to achieve the best possible scenario based on the requirements.
This is the essence of control engineering.

An example of the above is when we try to have aggressive control, i.e., a small response time, and also eliminate the impact of model uncertainty. The bandwidth of the system is a measure of the aggressiveness of the control action and/or the dynamic response: a higher bandwidth implies a higher frequency up to which we have good response, and therefore more aggressive control. But at the same time, in most models, the magnitude of the uncertainty becomes large at higher frequencies. Hence we limit the bandwidth, or frequency of response, depending on the frequency at which uncertainty begins to become significant. We want a low magnitude of response at frequencies where the model uncertainty magnitude becomes high, to reduce the system's sensitivity to this uncertainty. This is a situation where, since we cannot avoid model uncertainties, we have to limit the aggressiveness of the response to avoid the uncertainty becoming significant.
Actual control systems

We often treat the control and feedback of systems as an idealized scenario to aid the design process. It is nevertheless important for the control engineer to appreciate the actual, complex nature of control systems. It is instructive to do this through a simple example.

Recently there has been growing research interest in applications of piezoelectric materials. These are very sensitive materials that produce a voltage signal when they sense a force. The converse phenomenon is also true, i.e., a force/displacement is produced in the material when a voltage is applied to it. This characteristic is considered particularly advantageous and is being tested in a wide variety of applications. Among these is their use as a variable valve actuator in fuel injector systems.

Figure 1.2: Schematic of a proposed piezoelectric variable valve injector system.

Fuel injection for an internal combustion engine should ideally be tightly controlled to produce an air-fuel mixture that reduces pollutants and increases efficiency, while at the same time mitigating undesirable phenomena such as combustion instability. This is possible through a variable valve that regulates the amount of fuel flowing through the cylinder nozzle.

One type of assembly of such a variable valve available in the literature is shown
    • 1.1. Basic notion of feedback controlin Fig. 1.21 . The piezo stack at the top consists of a “stacked” combinationof piezeoelectric materials that produces the desired displacement characteristicswhen voltage is applied. The voltage is applied by the piezo stack driver. Whenthe piezo stack expands (due to voltage via the piezo stack driver), it moves thetop link down which in turn pushes the bottom link down. This displaces fluidout of the needle upper volume into the injector body and we would expect theneedle lower volume to be reduced as well. However, assuming incompressibility,this cannot happen because the needle lower volume does not have an openinginto the injector body. Instead, to compensate for the loss of volume induced bythe bottom link moving down, the needle is pushed up by the fluid. This opens thenozzle. In a similar fashion, with appropriate voltage, the nozzle is closed as well.The piezo stack via mechanical means, controls the nozzle state and it is hencecalled a variable valve actuator (VVA). The chief benefits of using piezoelectricmaterials here is their quick response (high bandwidth) and because incorporationof a piezoelectric actuator helps eliminate many mechanical, moving parts which inturn reduces cost and increases reliability. The disadvantage is that the maximumexpansion achievable by piezoelectric materials is usually low. This is remediedby using the stair-like “ledge” shown in Fig. 1.2 which acts as a displacementmultiplier. A larger ledge causes a larger loss of needle lower volume when thebottom link moves down which makes the needle displace more to compensate forlost volume.The idealized block diagram that is used in control design is shown in Fig. 1.3. Itis supposed to be as simple as possible in order to capture only the most essentialcharacteristics of the system. 
The actual valve position and the desired valve position together generate an error signal, and the compensator is used to determine the appropriate voltage signal that will eventually reduce this error to zero. Here the VVA is actually a 5-state control-oriented model, i.e., it needs 5 variables to be fully described in the time domain. These states include the position of the actuator, the velocity of the actuator and the pressure. A full model description of the VVA would entail many more states that do not have as much effect on the system response but greatly affect the complexity of the problem. Such models are sometimes called simulation models, because we use them to simulate the response of the system rather than to determine an appropriate compensator (control design).

Figure 1.3: Block diagram of an ideal closed-loop variable valve actuator system.

[1] Chris Satkoski, Gregory M. Shaver, Ranjit More, Peter Meckl, and Douglas Memering, "Dynamic Modeling of a Piezo-electric Actuated Fuel Injector," IFAC Workshop on Engine and Powertrain Control Simulation and Modeling, Nov. 30 to Dec. 2, 2009, IFP, Rueil-Malmaison, France.

The actual, or close to actual, block diagram is shown in Fig. 1.4. Here we do not ignore any known elements of our system. The valve position is measured
by the linear variable displacement transducer (LVDT) in analog form. This is then converted to a digital signal by an analog-to-digital converter (A/D) and used to generate an error signal based on the desired valve position. The error is fed into the compensator, which outputs a digital signal that is converted to an analog voltage via a digital-to-analog converter (D/A). This signal is amplified using a pulse-width modulation (PWM) amplifier and then supplied to the VVA actuator.

Figure 1.4: Block diagram of a closed-loop variable valve actuator system.

This is obviously a far more complicated system than the one considered earlier. Every element in the system has its own dynamics, which may or may not include time delay. Furthermore, we have errors in measurement as well as digitization/quantization error. We cannot remove any element from consideration in this system without first verifying its effect on the entire system. If the dynamics of an element are fast enough to approximate as a straight line rather than a block in the diagram, then we can often ignore the element. For sensors such as the LVDT, this may be accomplished by checking manufacturer-published information such as bandwidth, damping ratio, etc. This is a fairly simple example, but more complicated systems such as flight control systems may have hundreds of these blocks. These notes only deal with the situation in Fig. 1.3.

1.2 Control architectures

We now consider several commonly used control architectures. In the following, the reference command is given by R(s), the plant by G(s), the output by Y(s), the compensator by C(s), disturbances by D(s) and noise by N(s).
Open-loop/Feedforward control

We now consider the simplest case of control, called open-loop or feedforward control. As the name suggests, there is no feedback of the output back to the compensator; everything is done without output measurements. A block diagram representing a simple feedforward scheme is shown in Fig. 1.5.

Figure 1.5: Block diagram of a feedforward variable valve actuator system.

The transfer function from the input to the output is simply

    Y/R = CG  ⇒  Y = CGR                                  (1.1)

where the arguments have been dropped for convenience. Since ideally we want Y(s) = R(s), a natural choice for the compensator would be C(s) = G⁻¹(s), which leads to

    Y = CGR = G⁻¹GR = R.                                  (1.2)

This looks like a perfect scenario, but we made some critical assumptions that make this result problematic. One assumption was that there are no disturbances anywhere in the system. This assumption is false, since we always have disturbances in the system.

Although there may be multiple disturbances in different parts of the system, for simplicity and illustrative purposes we now consider one type of disturbance: a disturbance in the control signal being fed into the plant, i.e., the control signal input to the plant is not the same as the compensator output. This situation is illustrated in Fig. 1.6.

Figure 1.6: Block diagram of a feedforward variable valve actuator system with disturbance.

For such a system we can derive the transfer function from the reference R(s) to the output Y(s) as

    Y = G(CR + D) = GCR + GD                              (1.3)

and substituting the previous compensator choice C(s) = G⁻¹(s),

    Y = GG⁻¹R + GD = R + GD.                              (1.4)
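Equations (1.2) and (1.4) can be checked numerically by evaluating the transfer functions at a point s = jω on the imaginary axis. The plant G(s) = 2/(s + 3), the frequency and the signal amplitudes below are hypothetical values chosen purely for illustration:

```python
# Numerical sketch of Eqs. (1.2) and (1.4): feedforward plant inversion
# works with a perfect model, but an input disturbance passes through as G*D.
# Hypothetical first-order plant G(s) = 2/(s + 3); ideal compensator C = G^-1.

def G(s):
    return 2.0 / (s + 3.0)

def C(s):
    return (s + 3.0) / 2.0  # exact plant inverse

s = 1j * 0.5          # evaluate on the imaginary axis at w = 0.5 rad/s
R, D = 1.0, 0.2       # reference and input-disturbance amplitudes at this frequency

# Without disturbance: Y = C*G*R = R exactly (Eq. 1.2).
Y_ideal = C(s) * G(s) * R

# With an input disturbance: Y = G*(C*R + D) = R + G*D (Eq. 1.4).
Y_dist = G(s) * (C(s) * R + D)

print(abs(Y_ideal - R))   # ~0: perfect tracking with a perfect model
print(abs(Y_dist - R))    # = |G(s)*D|: the unrejected disturbance term
```

The residual |Y − R| equals |G(s)D| exactly, which is the G(s)D(s) term discussed next.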
Therefore, even in the case of a single source of disturbance, feedforward fails to provide good reference tracking, since we now have an extra G(s)D(s) term that is unwanted and perturbs the output from the reference signal.

Another assumption is that there is no model uncertainty. This is always false, since there is always uncertainty in a mathematical model, for a variety of reasons such as dynamics left unmodeled for simplicity and physical variation between systems. Thus we do not have the actual true model but a perturbed model Ḡ(s), and therefore we can only have Ḡ⁻¹(s). Substituting this into (1.1) gives

    Y = CGR = Ḡ⁻¹GR = εR                                  (1.5)

where ε is some factor dependent on the model uncertainty. Feedforward fails when we have uncertainty in the model. A more acute problem occurs when G(s), and hence Ḡ(s), have right-half plane (RHP) zeros, i.e., roots of the numerator with positive real parts. In this situation, Ḡ⁻¹(s) will have RHP poles, i.e., roots of the denominator with positive real parts, something that characterizes unstable systems. And since Ḡ⁻¹ ≠ G⁻¹, there will never be perfect cancellation of the RHP poles and zeros. Hence the entire feedforward system (even without disturbances) will be unstable because of the existence of RHP poles.

Even though the previous discussion paints a bleak picture, feedforward control is still a useful tool as long as it is used in an intelligent manner. We sometimes combine feedback and feedforward control to achieve excellent results, as will be shown later.

Feedback/closed-loop control

We consider the previous problem of an input disturbance, but now we employ a feedback scheme to tackle the control and reference tracking problem. We also add noise to the measurement of the output, something that is expected and unavoidable. The block diagram of such a closed-loop system is shown in Fig. 1.7.

Figure 1.7: Block diagram of a feedback variable valve actuator system with disturbance and noise.
Following basic procedures, we determine the system transfer function as

    Y = GC(R − Y − N) + GD                                         (1.6)
      = [GC/(1 + GC)] R + [G/(1 + GC)] D − [GC/(1 + GC)] N         (1.7)
      = YR + YD − YN                                               (1.8)

where YR(s) is the transfer function from R(s) to Y(s), YD(s) is the transfer function from D(s) to Y(s) and YN(s) is the transfer function from N(s) to Y(s). For the output to perfectly track the reference input, we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Let the compensator be a large number, i.e., C(s) → ∞. Then

    G/(1 + GC) → 0  ⇒  YD → 0                                      (1.9)

which implies complete disturbance rejection, i.e., the feedforward problem is solved. Furthermore,

    GC/(1 + GC) → 1  ⇒  YR → 1                                     (1.10)

which is exactly what we want for perfect reference tracking. This is the motivation behind high-gain feedback. But we also have

    GC/(1 + GC) → 1  ⇒  YN → 1                                     (1.11)

which means the noise passes straight through to the output, something we certainly do not want. There is an inherent problem here because the coefficients of R(s) and N(s) are the same: we cannot have YR(s) → 1 and YN(s) → 0 simultaneously. We have a fundamental contradiction. The only way to get rid of the noise is to set C(s) = 0, but that leads to no control action.

Feedback helps us deal with disturbances and plant uncertainty. High-gain feedback looks similar to plant inversion in feedforward control, only better: it gives us a plant-inversion-like process and mitigates the effect of disturbances. The drawback, however, is that mitigating noise and perfect tracking cannot be achieved simultaneously.

In many situations the reference tracking is in a low frequency range; an example would be the reference signal for the bank angle of a commercial airliner. Also, for most sensors, noise becomes prevalent mainly in the high frequency ranges. Thus we can design C(s) such that the transfer function

    YR = YN = GC/(1 + GC)                                          (1.12)

is 1 at low frequencies (to aid reference tracking) and 0 at high frequencies (to mitigate measurement noise).
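The high-gain limits in Eqs. (1.9)-(1.11) can be illustrated numerically. The plant G(s) = 1/(s + 1) and the pure-gain compensator below are hypothetical choices, not taken from the notes:

```python
# Illustration of the high-gain feedback tradeoff in Eqs. (1.9)-(1.11),
# using a hypothetical plant G(s) = 1/(s + 1) and a very large pure gain C.

def G(s):
    return 1.0 / (s + 1.0)

def closed_loop_terms(s, Cgain):
    L = G(s) * Cgain               # loop gain GC
    YR = L / (1.0 + L)             # reference -> output
    YD = G(s) / (1.0 + L)          # disturbance -> output
    YN = L / (1.0 + L)             # noise -> output (identical to YR)
    return YR, YD, YN

s = 1j * 0.1                       # a low frequency, w = 0.1 rad/s
YR, YD, YN = closed_loop_terms(s, Cgain=1e6)

print(abs(YR))   # ~1: good tracking
print(abs(YD))   # ~0: disturbance rejected
print(abs(YN))   # ~1: noise passed straight to the output
```

The last line is the fundamental contradiction: YN is literally the same expression as YR, so no choice of gain can separate them.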
The frequency response for such a transfer function is shown in Fig. 1.8. It shows a transfer function with a bandwidth (i.e., approximate roll-off frequency) of 7 radians per second. This implies, for perfect tracking and noise mitigation, that the reference signal must be predominant at frequencies below 7 radians per second and the measurement noise predominant at frequencies above 7 radians per second. Thus, the extent to which you can be aggressive with control, i.e., have high bandwidth, depends on two factors:
Figure 1.8: Typical frequency response magnitude of YR = YN to mitigate noise and provide good tracking.

  • The frequency range in which the noise is present.
  • The frequency range of the reference signal.

This is an example of a fundamental limitation in a feedback situation: the aggressiveness of the control is limited by the measurement noise and the reference signal frequency range.

Closing the loop allows us to trade off the impact of disturbances, noise, reference tracking signals and stability, among others. It gives us the freedom to trade off quantities which we might not otherwise have been able to, such as plant uncertainty, noise mitigation and tracking accuracy. These notes deal with developing a sophisticated outlook and precise tools for making such tradeoffs.

Feedback with feedforward

Another way to tackle the above problems is to include a feedforward term in the feedback architecture, as shown in Fig. 1.9. Here Gf(s) is the feedforward transfer function. The transfer function of the system is

    Y = GC(R − Y − N) + GGf R + GD
      ⇒  Y(1 + GC) = (GC + GGf)R + GD − GCN                        (1.13)

and hence

    Y = [(GC + GGf)/(1 + GC)] R + [G/(1 + GC)] D − [GC/(1 + GC)] N (1.14)
      = YR + YD − YN.                                              (1.15)

As before, we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Notice we have an extra degree of freedom in the design of YR(s) due to the feedforward term Gf(s).
Figure 1.9: Block diagram of a two degree of freedom feedback-feedforward variable valve actuator system with disturbance and noise.

Let Gf(s) = G⁻¹(s), and thus

    (GC + GGf)/(1 + GC) = (GC + GG⁻¹)/(1 + GC) = 1                 (1.16)

which gives accurate steady-state tracking, although we still run into problems of model uncertainty as in the feedforward architecture. But most importantly, since we are not using feedforward or feedback alone, accurate steady-state tracking does not imply noise amplification. Now the challenge is the tradeoff between the noise and the disturbance. If we set C(s) → ∞,

    G/(1 + GC) → 0                                                 (1.17)

which implies YD(s) → 0. But for YN(s) → 0, we need C(s) = 0 so that

    GC/(1 + GC) → 0.                                               (1.18)

These seemingly contradictory/conflicting requirements represent a tradeoff that we cannot avoid. If our model is excellent, feedforward is beneficial in this case, but we still have tradeoffs. Thus, control cannot change the underlying problem, but it gives us tools to work with the fundamental tradeoffs to get an acceptable design.

Feedback with disturbance compensation

Assuming we know the disturbance D(s), will this buy us anything? Consider the feedback architecture shown in Fig. 1.10. Here GD(s) is an additional transfer function called the disturbance compensation transfer function.
Figure 1.10: Block diagram of a combined feedback and disturbance compensation variable valve actuator system with disturbance and noise.

From Fig. 1.10 we have

    Y = GC(R − Y − N) + GGD CD + GD
      ⇒  Y(1 + GC) = GCR + (GGD C + G)D − GCN                      (1.19)

and hence

    Y = [GC/(1 + GC)] R + [(GGD C + G)/(1 + GC)] D − [GC/(1 + GC)] N   (1.20)
      = YR + YD − YN.                                              (1.21)

Again we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Now we seem to have more flexibility, because setting GD(s) = −C⁻¹(s) gives

    GGD C + G = −GC⁻¹C + G = 0  ⇒  (GGD C + G)/(1 + GC) = 0        (1.22)

which implies that we have perfect disturbance rejection. But again, this method has its own challenges, because we need to have knowledge of the disturbance. This can come through estimation, especially if we have some idea about the nature of the disturbance; for example, the disturbance in a manufacturing process where the floor shakes in a repeating pattern.
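A minimal numeric check of Eq. (1.22): with GD = −C⁻¹, the disturbance path GGDC + G cancels exactly. The plant G(s) = 1/(s + 2) and the pure gain C = 10 below are made-up illustrative values:

```python
# Check of Eq. (1.22): with GD = -1/C the disturbance-to-output numerator
# G*GD*C + G vanishes identically. Hypothetical G(s) = 1/(s + 2), C = 10.

def G(s):
    return 1.0 / (s + 2.0)

Cgain = 10.0
GD = -1.0 / Cgain                  # disturbance compensator GD = -C^-1

s = 1j * 1.0                       # any test frequency works
numerator = G(s) * GD * Cgain + G(s)          # G*GD*C + G
YD = numerator / (1.0 + G(s) * Cgain)         # disturbance -> output

print(abs(YD))   # 0: perfect disturbance rejection, if D is known exactly
```

In practice the cancellation is only as good as the disturbance knowledge and the inverse of C, which is the caveat noted above.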
Chapter 2: System models and representation

2.1 Model classification

There are many approaches to developing system models and a similarly large number of classifications of model types. System models can be classified as white, grey or black box models. Models where the underlying physics of the system are first considered to help develop the model are known as white box models. Black box models, on the other hand, are entirely data driven: the output of a system subject to a given input is observed, and a corresponding model is formed using tools such as Fourier analysis. Grey box models combine the above two approaches, in that the model form is derived from physical principles while the model parameters are determined using experimental data.

Models can also be classified as nominal/control models or simulation/calibration models. The nominal/control model form is a simplified dynamic model where the desire is to capture the dynamic coupling between control inputs and system outputs. These models are directed towards usage for controller design, since a simplified model aids in this process. Conversely, simulation models are typically generated to capture as many aspects of the system behavior as accurately as possible. The intended uses of these types of models are system and controller validation, intuition development and assumption interrogation.

2.2 State-space representation

Consider a set of 1st-order ordinary differential equations,

    ẋ1 = f1(x1, ..., xn, u1, ..., um)                              (2.1)
    ...
    ẋn = fn(x1, ..., xn, u1, ..., um)                              (2.2)

where x1, ..., xn are called the system states and u1, ..., um are the system inputs.
Next consider a set of algebraic equations relating outputs to state variables and inputs,

    y1 = g1(x1, ..., xn, u1, ..., um)                              (2.3)
    ...
    yp = gp(x1, ..., xn, u1, ..., um).                             (2.4)

If we let x = [x1, ..., xn]ᵀ, u = [u1, ..., um]ᵀ and y = [y1, ..., yp]ᵀ, then the above relationships can be written in the compact state-space form

    ẋ = f(x, u)                                                    (2.5)
    y = g(x, u).                                                   (2.6)

If the system considered is linear, it can be written in the linear parameter varying (LPV) form

    ẋ = A(t)x + B(t)u                                              (2.7)
    y = C(t)x + D(t)u                                              (2.8)

or, if the system is linear time-invariant (LTI),

    ẋ = Ax + Bu                                                    (2.9)
    y = Cx + Du                                                    (2.10)

where A, B, C and D are the relevant matrices.

The states are the smallest set of n variables (state variables) such that knowledge of these n variables at t = t0, together with knowledge of the inputs for t ≥ t0, determines the system behaviour for t ≥ t0. The state vector is the n-th order vector with the states as components, and the state-space is the n-dimensional space with the state variables as coordinates. Correspondingly, the state trajectory is the path produced in the state-space as the state vector evolves over time.

The advantages of the state-space representation are that the dynamic model is represented in a compact form with regular notation, the internal behaviour of the system is given treatment, the model can easily incorporate complicated output functions, the definition of states helps build intuition, and MIMO systems are easily dealt with. There are many tools available for control design with this type of model form.
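The LTI form (2.9)-(2.10) also lends itself to straightforward numerical simulation. The sketch below integrates ẋ = Ax + Bu by forward Euler for a made-up mass-spring-damper example; a real design workflow would use a dedicated simulation library, and the matrices here are purely illustrative:

```python
# Minimal sketch: simulate the LTI state-space form x' = Ax + Bu, y = Cx + Du
# (Eqs. 2.9-2.10) with forward-Euler integration. Matrices are made up:
# a unit mass-spring-damper (k = 1, c = 2, m = 1) with x1 = position, x2 = velocity.

def simulate(A, B, C, D, u, x0, dt, steps):
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        # x_next = x + dt * (A x + B u), all in plain Python lists
        x = [x[i] + dt * (sum(A[i][j] * x[j] for j in range(n)) + B[i] * u)
             for i in range(n)]
    y = sum(C[j] * x[j] for j in range(n)) + D * u
    return x, y

A = [[0.0, 1.0],
     [-1.0, -2.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

x, y = simulate(A, B, C, D, u=1.0, x0=[0.0, 0.0], dt=0.001, steps=20000)
print(y)   # settles near 1, the DC gain of this system for a unit step
```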
The drawback of the state-space representation is that the particular equations themselves may not always be very intuitive.

2.3 Input/output differential equation

The input/output differential equation relates the outputs directly to the inputs. It is usually obtained by taking the Laplace transform of the state-space representation, substituting/rearranging appropriately, and finally taking the inverse
Laplace transform to obtain a high-order differential equation. A linear time-invariant input/output differential equation is given by

    y^(n) + a_{n−1} y^(n−1) + ... + a1 y' + a0 y = b_m u^(m) + b_{m−1} u^(m−1) + ... + b1 u' + b0 u.   (2.11)

The advantages of input/output differential equations are that they are conceptually simple, can easily be converted to transfer functions, and many tools are available in this context for control design. They are, however, difficult to solve directly in the time domain; one generally resorts to the Laplace transform, which is not always an easy task.

2.4 Transfer functions

Recall from the study of Laplace transforms the following important transformations:

    L[f'(t)] = sF(s) − f(0)                                        (2.12)
    L[f''(t)] = s²F(s) − sf(0) − f'(0)                             (2.13)
    ...
    L[f^(n)(t)] = sⁿF(s) − sⁿ⁻¹f(0) − sⁿ⁻²f'(0) − ... − s f^(n−2)(0) − f^(n−1)(0).   (2.14)

Now consider a generic LTI input/output differential equation given by

    y^(n) + a_{n−1} y^(n−1) + ... + a1 y' + a0 y = b_m u^(m) + b_{m−1} u^(m−1) + ... + b1 u' + b0 u.   (2.15)

Applying the Laplace transforms from above to this differential equation yields

    sⁿY(s) + a_{n−1}sⁿ⁻¹Y(s) + ... + a1 sY(s) + a0 Y(s) + f_y(s, t=0)
        = b_m sᵐU(s) + ... + b1 sU(s) + b0 U(s) + f_u(s, t=0)      (2.16)

where f_y(s, t=0) and f_u(s, t=0) are functions of the initial conditions as given by the Laplace transform. Rearranging the above gives

    Y(s) = [(b_m sᵐ + ... + b1 s + b0)/(sⁿ + a_{n−1}sⁿ⁻¹ + ... + a0)] U(s)        [box 1]
         + [(f_u(s, t=0) − f_y(s, t=0))/(sⁿ + a_{n−1}sⁿ⁻¹ + ... + a0)].           [box 2]   (2.17)

Box 1 represents the transfer function, and it describes the forced response of the system. Box 2 represents the free response of the system, depending on the initial conditions. Depending on the system (whether it has a control input or not) and the exact scenario (whether the initial conditions are zero or not), Y(s) may comprise box 1, box 2, or both. For control analysis we usually assume box 2 is zero, and thus box 1 represents the entire system response.
It will be seen later that many results which are true for this case are also true when box 1 is zero and box 2 is not.

The roots of the numerator of a transfer function are called zeros, since they send the transfer function to zero, whereas the roots of the denominator of a transfer
function are called poles, since they send the transfer function to ∞. The poles and zeros are very important in analyzing system response and for designing appropriate compensators.
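To make the pole/zero terminology concrete, the short sketch below (a hypothetical transfer function chosen purely for illustration, not one from these notes) locates the poles of G(s) = (s + 3)/(s² + 3s + 2) with the quadratic formula and evaluates G at its zero and near a pole.

```python
import cmath

# Hypothetical illustrative transfer function G(s) = (s + 3) / (s^2 + 3s + 2):
# one zero at s = -3, poles at the roots of s^2 + 3s + 2 = (s + 1)(s + 2).
def G(s):
    return (s + 3) / (s**2 + 3*s + 2)

# Poles via the quadratic formula applied to the denominator.
a, b, c = 1, 3, 2
disc = cmath.sqrt(b*b - 4*a*c)
poles = [(-b + disc) / (2*a), (-b - disc) / (2*a)]

print(poles)               # poles at -1 and -2: both in the LHP, so stable
print(abs(G(-3)))          # 0.0 -- the zero sends G to zero
print(abs(G(-1 + 1e-6)))   # huge -- near a pole, |G| tends to infinity
```

Evaluating near s = −1 makes the "sends the transfer function to ∞" statement tangible: the closer the evaluation point gets to a pole, the larger |G| grows.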
Chapter 3
Dynamic response of systems

3.1 First-order system response

A first-order system (such as a spring-damper system) takes on the general form

    τẏ + y = ku    (3.1)
    ⇒ ẏ + y/τ = (k/τ)u    (3.2)
    ⇒ ẏ + ay = bu,  where a = 1/τ, b = k/τ    (3.3)
    ⇒ τ[sY(s) − y(0)] + Y(s) = kU(s)    (3.4)
    ⇒ Y(s) = [(k/τ)/(s + 1/τ)] U(s) + [y(0)/(s + 1/τ)]    (3.5)

where τ is the time constant of the system response and k is a constant. The first term in (3.5) is box 1 (the forced response) and the second is box 2 (the free response).

The roots of the denominator of the transfer function (the characteristic equation) are called poles, and in this case the pole is −a = −1/τ. If the pole lies in the left half of the complex s-plane (LHP), it is stable (exponentially decreasing). If it is in the right half of the complex s-plane (RHP), then it is unstable (exponentially increasing). A pole on the imaginary axis (with real part equal to 0) is marginally stable.

The free response of this system (due to the initial conditions) is given by y_free(t) = y(0)e^{−t/τ} = y(0)e^{−at}. The forced response of the system to a step input is given by y_step(t) = k + [y(0) − k]e^{−t/τ}. The value of y at the time constant value τ is given by y(τ) = 0.368y(0) + 0.632k. The lower τ is, the faster the response. Thus, making |a| bigger gives a faster response.

3.2 Second-order systems

The general form of a second-order system is given by

    ÿ + 2ζω_n ẏ + ω_n² y = kω_n² u    (3.6)
where k is a constant, ω_n is the natural frequency of the system, and ζ is the damping ratio. Taking the Laplace transform as before yields the system in terms of its forced (box 1) and free (box 2) responses,

    Y(s) = [kω_n² / (s² + 2ζω_n s + ω_n²)] U(s) + [([s + 2ζω_n]y(0) + ẏ(0)) / (s² + 2ζω_n s + ω_n²)]    (3.7)

where the common denominator is the open-loop characteristic equation, whose roots give the system poles. For such a canonical form of second-order system, the poles are given by −ζω_n ± ω_n√(ζ² − 1). If 0 < ζ < 1, then we have two complex poles (since ζ² − 1 is negative), implying oscillation in the system free response. In this case the poles are given by −ζω_n ± jω_n√(1 − ζ²).

The free response of a second-order system (0 < ζ < 1) is given by

    y_free(t) = [y(0)/√(1 − ζ²)] e^{−ζω_n t} sin(ω_d t + tan⁻¹[√(1 − ζ²)/ζ])    (3.8)

It can easily be seen from the above that we need ζω_n > 0, or alternatively the poles in the LHP, for stability.

The forced response of a second-order system (0 < ζ < 1) to a step is given by

    y_step(t) = k{1 − [e^{−ζω_n t}/√(1 − ζ²)] sin(ω_d t + tan⁻¹[√(1 − ζ²)/ζ])}    (3.9)

where ω_d = ω_n√(1 − ζ²) is the damped natural frequency.

For a second-order system, the rise time t_r is given by

    t_r = (π − β)/ω_d    (3.10)

where β is the angle such that cos β = ζ. The time to maximum overshoot is given by

    t_p = π/ω_d = T_d/2    (3.11)

where T_d = 2π/ω_d is the damped period, and the maximum overshoot M_p itself is given by

    M_p = e^{−πζ/√(1 − ζ²)} ≈ e^{−πζ}    (3.12)

where the percent overshoot can be determined by multiplying the above expression by 100.

If the system is nonminimum phase (RHP zeros), then the response will have undershoot instead of overshoot. When the system has a RHP zero at s = c, a lower bound for the undershoot M_u is given by

    M_u ≥ (1 − δ)/(e^{c t_s} − 1)    (3.13)

where δ is the maximum allowable steady-state error in the response beyond the settling time t_s. Thus for a 2% settling time, δ = 0.02.
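The step-response formulas above are straightforward to evaluate numerically. A small sketch with hypothetical values ζ = 0.5 and ω_n = 2 rad/s (chosen purely for illustration):

```python
import math

# Hypothetical second-order system parameters (illustrative values only).
zeta, wn = 0.5, 2.0

wd = wn * math.sqrt(1 - zeta**2)    # damped natural frequency, rad/s
beta = math.acos(zeta)              # angle with cos(beta) = zeta
tr = (math.pi - beta) / wd          # rise time, eq. (3.10)
tp = math.pi / wd                   # time to maximum overshoot, eq. (3.11)
Mp = math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))  # overshoot, eq. (3.12)

print(round(tr, 3), round(tp, 3), round(100 * Mp, 1))
# → 1.209 1.814 16.3  (about 16% overshoot for zeta = 0.5)
```

Note how the overshoot depends on ζ alone, while the rise and peak times scale inversely with ω_d.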
3.3 Design considerations

We consider the case where we require a certain percent overshoot in our system. We know that the damping ratio ζ is a critical factor in this, but how must it be chosen such that we have a certain percent maximum overshoot? Moreover, how must we choose our poles so that this choice of ζ is satisfied?

We can use the relationship for percent overshoot given previously to determine the damping ratio that will ensure a certain maximum overshoot, ζ_%OS. As long as we choose a damping ratio greater than ζ_%OS, we will not exceed the specified maximum overshoot. In the complex s-plane, the following can easily be shown to be true:

    cos θ = ζ    (3.14)

where θ is a counterclockwise angle made with the negative real axis. Hence any poles that lie on or between the lines described by θ = ±cos⁻¹(ζ_%OS) satisfy ζ ≥ ζ_%OS.

For a LHP pole at s = −a (a > 0), it is true that τ = 1/a. Thus, a larger magnitude of a LHP pole signifies a faster response, and such poles are known as "fast poles". The 2% settling time is given by t_2% = 4τ and the 5% settling time is given by t_5% = 3τ, where τ is the time constant. Hence, achieving a settling time of t_2% or less requires placing the poles of the system to the left of s = −4/t_2%. A similar result can easily be derived for t_5%.

We can combine the above two results, for maximum overshoot and settling time, to determine precise pole locations for a desired system response.

3.4 Routh-Hurwitz stability criterion

As discussed previously, poles in the RHP indicate instability of a system. Therefore, the roots of the denominator of a transfer function hold key information regarding the stability and response characteristics of the system.

Consider the denominator of a closed-loop transfer function, also known as the closed-loop characteristic equation,

    CLCE(s) = s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n = 0    (3.15)

where a_1, ..., a_n are real constants. The system is stable if the roots of the CLCE(s) are in the LHP.
If the system is stable, then a_1, ..., a_n > 0, but a_1, ..., a_n > 0 does not necessarily imply stability. Furthermore, if any a_k ≤ 0, then the system has at least one pole with a positive or zero real part, which implies instability or marginal stability, respectively.

The Routh-Hurwitz stability criterion not only helps determine whether a system is stable, but also specifies how many unstable poles are present, without explicitly solving the characteristic equation CLCE(s). It can also be used for closed-loop systems with multiple gains.

By the Routh-Hurwitz stability criterion, if a_1, ..., a_n ≠ 0 and if a_1, ..., a_n are all positive (or equivalently all negative), then we can use the Routh array to determine stability and the number of unstable poles.
To form the Routh array, first arrange a_1, ..., a_n in two rows as follows:

    Row n:    1    a_2   a_4   ...   0
    Row n−1:  a_1  a_3   a_5   ...   0

Then we determine the rest of the elements in the array. For the third row, the coefficients are computed using the pattern below:

    b_1 = (a_1 a_2 − a_3)/a_1,   b_2 = (a_1 a_4 − a_5)/a_1,   b_3 = (a_1 a_6 − a_7)/a_1,   ...    (3.16)

This is continued until the remaining elements are all zero. Similarly, for the fourth row,

    c_1 = (a_3 b_1 − a_1 b_2)/b_1,   c_2 = (a_5 b_1 − a_1 b_3)/b_1,   c_3 = (a_7 b_1 − a_1 b_4)/b_1,   ...    (3.17)

and for the fifth row,

    d_1 = (b_2 c_1 − b_1 c_2)/c_1,   d_2 = (b_3 c_1 − b_1 c_3)/c_1,   d_3 = (b_4 c_1 − b_1 c_4)/c_1,   ...    (3.18)

where each series of elements is continued until the remaining ones are all zero. The Routh array is then given by

    Row n:    1    a_2   a_4   ...   0
    Row n−1:  a_1  a_3   a_5   ...   0
    Row n−2:  b_1  b_2   b_3   ...   0
    Row n−3:  c_1  c_2   c_3   ...   0
    Row n−4:  d_1  d_2   d_3   ...   0
    ...
    Row 2:    e_1  e_2
    Row 1:    f_1  0
    Row 0:    g_1

The number of roots of the closed-loop characteristic equation in the RHP is equal to the number of changes in sign of the coefficients of the first column of the array, [1, a_1, b_1, c_1, ...]. The system is stable if and only if a_1, ..., a_n > 0 and all the terms in [1, a_1, b_1, c_1, ...] are positive.

We have a special case when the first element in a row becomes zero, since this implies division by 0 for the succeeding rows. For this case, replace the 0 with ε (an arbitrarily small constant) and continue obtaining expressions for the elements in the succeeding rows. Finally, when all expressions are obtained, let ε → 0 and evaluate the sign of the expressions. This is all that is needed since we are only
concerned with sign changes. If we find no sign changes, then the 0 element signifies a pair of poles on the imaginary axis (no real parts), implying marginal stability.

Another special case is when all the elements in a row become zero. In this case we may still proceed in constructing the Routh array using the derivative of a polynomial defined from the previous (nonzero) row. If a row is all zeros, first form an auxiliary polynomial in s using the elements of the previous (nonzero) row. Then obtain the derivative of this polynomial with respect to s, i.e. d/ds, and replace the zero row with the coefficients of this derivative. The rest of the array is completed as usual.

3.5 Pole/Zero effects

The location of the closed-loop system poles determines the nature of the system modes, but the location of the closed-loop zeros determines the proportion in which these modes are combined. This can easily be verified via partial fraction expansions. Poles and zeros far away from the imaginary axis are known as fast poles and zeros, respectively. Conversely, poles and zeros close to the imaginary axis are known as slow poles and zeros. Fast zeros, RHP or LHP, have little to no impact on system response.

Slow zeros have a significant effect on system response. Slow LHP zeros lead to overshoot and slow RHP zeros lead to undershoot. A lower bound M_u for the maximum undershoot (due to the presence of a RHP zero at s = c) is given by

    M_u ≥ (1 − δ)/(e^{c t_s} − 1)    (3.19)

where δ = 0.02 for a 2% settling time t_s. From the denominator of the above, it is obvious that fast zeros (large c) lead to a smaller lower bound for the maximum undershoot M_u, and vice versa. Alternatively, a short settling time t_s, implying a high bandwidth, would lead to more undershoot. Hence, for a given RHP zero, we can only lower t_s so much before getting unacceptable undershoot.
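Plugging hypothetical numbers into the bound (3.19) shows how the RHP-zero location and the settling-time target trade off (all values below are illustrative only):

```python
import math

# Undershoot lower bound from eq. (3.19): Mu >= (1 - delta) / (exp(c * ts) - 1).
# Hypothetical numbers: RHP zero at s = c = 2 and a 2% settling time ts = 1 s.
c, ts, delta = 2.0, 1.0, 0.02

Mu_bound = (1 - delta) / (math.exp(c * ts) - 1)
print(round(Mu_bound, 3))   # → 0.153, i.e. at least ~15% undershoot

# Halving the settling time (a faster, higher-bandwidth target) raises the bound:
Mu_fast = (1 - delta) / (math.exp(c * 0.5) - 1)
print(round(Mu_fast, 3))    # → 0.57, i.e. at least ~57% undershoot
```

Doubling the speed of the response more than tripled the guaranteed undershoot, which is exactly the bandwidth limitation described above.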
To avoid undershoot, the closed-loop bandwidth is limited by the magnitude of a RHP zero. We determine the maximum acceptable bandwidth below.

The approximate 2% settling time is given by t_s ≈ 4τ = 4/a, where τ is the time constant and a is the magnitude of the real part of the slowest LHP closed-loop system pole. Thus M_u can then be expressed approximately as

    M_u ≳ (1 − δ)/(e^{4c/a} − 1).    (3.20)

Thus, if the magnitude of the real part of the slowest LHP closed-loop pole is smaller than the RHP zero, undershoot will remain low, since the denominator of the above will become large. The converse is true as well. Keep the real part of the slowest LHP pole smaller in magnitude than the real part of any RHP zero to avoid undershoot.

The above results hold for LHP zeros as well in an analogous manner, but for overshoot instead. Thus, in order to avoid overshoot, we must keep |a| < |c|, where a is the slowest LHP pole and c is the slowest LHP zero.
If the magnitude of the real part of the dominant closed-loop poles is less than the magnitude of the largest unstable open-loop pole, then significant overshoot will occur. Keep the real part of the slowest LHP pole greater in magnitude than the real part of any unstable open-loop (RHP) pole to avoid overshoot.
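Before moving on, the Routh-Hurwitz procedure of Section 3.4 can also be sketched computationally. The helpers below are a minimal, hypothetical implementation (the function names are my own), and they handle neither special case: they assume no zero first-column element and no all-zero row.

```python
def routh_first_column(coeffs):
    # coeffs: CLCE coefficients with the highest power of s first,
    # e.g. [1, 2, 1, 3] for s^3 + 2s^2 + s + 3.
    width = len(coeffs[0::2])
    rows = [coeffs[0::2], (coeffs[1::2] + [0] * width)[:width]]
    while len(rows) < len(coeffs):
        prev, cur = rows[-2], rows[-1]
        # Each new element follows the pattern of eqs. (3.16)-(3.18).
        new = [(cur[0] * prev[i + 1] - prev[0] * cur[i + 1]) / cur[0]
               for i in range(len(cur) - 1)] + [0]
        rows.append(new)
    return [r[0] for r in rows]

def unstable_pole_count(coeffs):
    # Number of RHP roots = number of sign changes in the first column.
    col = routh_first_column(coeffs)
    return sum(1 for x, y in zip(col, col[1:]) if x * y < 0)

print(unstable_pole_count([1, 2, 1, 1]))  # s^3 + 2s^2 + s + 1 -> 0 (stable)
print(unstable_pole_count([1, 2, 1, 3]))  # s^3 + 2s^2 + s + 3 -> 2 unstable poles
```

For the second polynomial, the first column comes out as [1, 2, −0.5, 3]: two sign changes, hence two RHP poles, without ever factoring the cubic.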
Chapter 4
Frequency response tools

4.1 Frequency response

We consider the input to any transfer function G(s) = B(s)/A(s) to be u(t) = A sin ωt, where A is a constant and ω is the input frequency. Then it can be shown that the steady-state output is given by

    y_ss(t) = |G(jω)| A sin(ωt + ∠G(jω))    (4.1)

where |G(jω)| is simply the magnitude of G(s) evaluated at jω, and the phase ∠G(jω) is given by

    ∠G(jω) = ∠B(jω) − ∠A(jω) = tan⁻¹[Im B(jω)/Re B(jω)] − tan⁻¹[Im A(jω)/Re A(jω)]    (4.2)

for all ω ∈ R⁺. Much useful and generalizable information can be gleaned from the frequency response function G(jω), or FRF.

4.2 Bode plots

The pair of plots of magnitude |G(jω)| vs. frequency ω and phase ∠G(jω) vs. frequency ω are collectively known as the Bode plots of G(s). The magnitude is usually plotted in decibels (dB) and the frequency for both plots is on a logarithmic scale. The decibel value of the FRF G(jω) is given by

    |G(jω)|_dB = 20 log₁₀ |G(jω)|    (4.3)

and the logarithmic scale value of ω is log₁₀ ω. The appropriate values for the magnitude and phase are determined as previously shown.

The benefits of using a logarithmic scale are manifold, but most importantly, multiplication on a linear scale implies addition on the logarithmic scale. Thus, since the poles/zeros of a transfer function can be expressed as an arbitrarily long multiplication of factors, on the logarithmic scale we can decompose a transfer function into a summation of elementary transfer functions based on the original poles/zeros. This helps in plotting the function and obtaining fundamental insights into the behaviour of the plant.
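A point evaluation of the FRF illustrates eqs. (4.1)-(4.3). The first-order plant below is hypothetical, chosen because its −3 dB, −45° point at ω = 1 rad/s is well known:

```python
import cmath, math

# Hypothetical first-order plant G(s) = 1 / (s + 1). Per eq. (4.1), a unit
# sinusoid at frequency w comes out scaled by |G(jw)| and shifted by the phase.
def G(s):
    return 1 / (s + 1)

w = 1.0                          # rad/s
g = G(1j * w)                    # G(jw)
mag = abs(g)                     # steady-state amplitude scaling
phase = cmath.phase(g)           # phase shift in radians
mag_db = 20 * math.log10(mag)    # decibel value, eq. (4.3)

print(round(mag, 4), round(math.degrees(phase), 1), round(mag_db, 2))
# → 0.7071 -45.0 -3.01  (the familiar corner-frequency point of a first-order lag)
```

Sweeping w over a logarithmic grid and plotting mag_db and the phase against log₁₀ ω is exactly what a Bode plot does.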
Example. Describe the frequency response of the following closed-loop transfer function using Bode plots:

    G(s) = (45s + 237) / [(s² + 3s + 1)(s + 13)].    (4.4)

The Bode plots for the transfer function can be generated using the bode command in MATLAB. These are shown in Figs. (4.1) and (4.2). Figure (4.1) shows the frequency response Bode plots on the standard logarithmic scale and Fig. (4.2) shows the frequency response Bode plots on a linear scale. The benefits of the logarithmic scale are obvious from the figures, since they show the response of the system better when compared to the linear scale plots. This is especially true at very low and very high frequencies, where the linear scale response grows asymptotically.

Figure 4.1: Logarithmic scale Bode plot of G(s). Note that the frequency is shown in Hz and not in rad/sec to make the linear scale Bode plots distinguishable.

Many systems, including control systems, have their performance specified in terms of frequency response criteria. An important one of these is the bandwidth of the system. We define it, in general terms, as the frequency range for which the magnitude of the transfer function from input to output is close to unity (in absolute terms) or zero (in dB). This implies that the bandwidth is the frequency range for which the system response is close to ideal.

The previous definition can be expressed more exactly for control systems, since they normally behave like low-pass systems. Low-pass systems are systems for which the response begins near zero decibels and then rolls off with increasing frequency.
Figure 4.2: Linear scale Bode plot of G(s).

It is usually the case that the magnitude response does not roll off immediately but rather remains level for some frequency range. For such systems, the bandwidth is the frequency at which the magnitude has rolled off by 3 dB from its low-frequency level value. Another definition, which is the one used in the rest of these notes, is that the bandwidth is the magnitude of the real part of the slowest LHP pole. The bandwidths given by the two definitions usually closely coincide.

4.3 Gain and phase margins

As has been previously discussed, a system is stable if all of its poles are in the LHP. Otherwise it is unstable. Not all stable systems are the same, and some systems are more stable than others. Thus, we are faced with the problem of quantifying the extent of stability of a system. The concepts of gain and phase margins are useful in this regard, and Bode plots in particular are helpful in determining these. It must be noted that gain and phase margins are only meaningful for stable closed-loop systems that become unstable with increasing gain.

Consider a simple stable feedback system with the plant H(s) = B(s)/A(s) and the compensator KC(s), where K is a constant gain and C(s) = P(s)/L(s). Let G(s) be the closed-loop transfer function given by

    G(s) = KH(s)C(s) / [1 + KH(s)C(s)].    (4.5)

Then the closed-loop characteristic equation CLCE, which is the denominator of the closed-loop transfer function, can easily be found to be

    CLCE(s) = 1 + KH(s)C(s)    (4.6)
where the roots of 1 + KH(s)C(s) = 0 are the closed-loop poles of the system. These are given by the solutions to

    KH(s)C(s) = −1.    (4.7)

Since angles in the complex s-plane are measured in a counterclockwise direction from the positive real axis, in polar coordinates the point −1 lies at an angle ∠(−1) = −180° with magnitude |−1| = 1. Thus, if we evaluate the loop frequency response G(jω) = KH(jω)C(jω), the system will be on the verge of instability precisely at a frequency where both ∠G(jω) = −180° and |G(jω)| = 1 hold. It can also be shown that the closed-loop system is stable if |G(jω)| < 1 where ∠G(jω) = −180°. Since we already have a stable system, there will be no point where |G(jω)| = 1 and ∠G(jω) = −180° hold simultaneously. Such a point is commonly known as a crossover frequency: the gain crossover frequency is where only |G(jω)| = 1 (i.e. |G(jω)|_dB = 0) is true, and the phase crossover frequency is where only ∠G(jω) = −180° is true.

It can be seen from the previous that, since systems with RHP poles are a priori unstable, the concepts of phase and gain margins are meaningless for them.

It follows that we can define the gain and phase margins as how much we would need to change the system (in terms of magnitude and angle) before reaching instability. Moreover, we can exploit the Bode plots to formulate succinct definitions of these. Thus, the phase margin is defined as the amount of additional phase lag needed at the gain crossover frequency to bring the system to the verge of instability (i.e. to ∠G(jω) = −180°, or equivalently ∠G(jω) = +180°). Correspondingly, the gain margin is defined as the reciprocal of the magnitude |G(jω)|, or the negative decibel magnitude −|G(jω)|_dB, at the phase crossover frequency. This gain margin determination is based on a decibel plot, since 20 log₁₀ 1 = 0.

For minimum-phase systems, the phase margin measured toward −180° must be negative for stability. Alternatively, we can use ∠G(jω) = +180° as the stability criterion, since it refers to the same angle in the complex s-plane.
In this case, for minimum-phase systems, the phase margin must be positive for stability.

The gain margin describes the maximum multiple of the gain K that can be applied in feedback before the system becomes unstable. A gain margin in decibels that is positive implies that the system is stable, and conversely, a negative decibel gain margin implies instability of the underlying system.

For a stable minimum-phase system, the gain margin indicates how much we can increase the gain before the onset of instability. Conversely, for an unstable minimum-phase system, it indicates how much we must decrease the gain to regain stability. However, since an unstable system would yield a negative decibel value, this implies 0 < K < 1. This is not a feasible control gain for most systems and therefore the gain margin is not a meaningful measure for many unstable systems.

For systems that never approach the phase crossover frequency, the gain margin is either ∞ or −∞, indicating that the system is stable for all gains or unstable for all gains, respectively. Furthermore, a system may have more than one crossover frequency, indicating that it is stable only for gain values within a certain range. For stable systems with more than one gain crossover frequency, the phase margin is measured at the highest gain crossover frequency.

For nonminimum-phase systems, or systems with undefined phase/gain margins, the best recourse is to use the Nyquist stability criterion.
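The crossover-based definitions can be sketched numerically by scanning the loop frequency response for the two crossover frequencies. Everything below is a hypothetical illustration (a toy loop L(s) = 4/(s + 1)³ and a crude grid search), not a robust margin solver:

```python
import cmath, math

# Hypothetical loop transfer function L(s) = 4 / (s + 1)^3.
def L(s):
    return 4 / (s + 1) ** 3

ws = [10 ** (k / 1000) for k in range(-3000, 3001)]   # log-spaced frequency grid

# Gain crossover: |L(jw)| = 1; phase margin = 180 deg + phase there.
wgc = min(ws, key=lambda w: abs(abs(L(1j * w)) - 1))
pm = 180 + math.degrees(cmath.phase(L(1j * wgc)))

# Phase crossover: angle(L(jw)) = -180 deg; gain margin = -|L|_dB there.
wpc = min(ws, key=lambda w: abs(math.degrees(cmath.phase(L(1j * w))) + 180))
gm_db = -20 * math.log10(abs(L(1j * wpc)))

print(round(wgc, 2), round(pm, 1), round(wpc, 2), round(gm_db, 1))
# → 1.23 27.1 1.73 6.0  (both margins positive: the loop tolerates ~6 dB more gain)
```

For this triple-pole loop the critical gain is 8, so with K = 4 the 6 dB (factor of 2) gain margin is exactly what the definition predicts.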
4.4 Phase margin and second-order systems

For a strict canonical second-order closed-loop system, the phase margin is related to the damping ratio ζ. We must be careful to make sure we have a true second-order closed-loop system before using the following results. The second-order closed-loop transfer function CLTF(s) is given by

    CLTF(s) = ω_n² / (s² + 2ζω_n s + ω_n²)    (4.8)

and the related second-order open-loop transfer function OLTF(s) is given by

    OLTF(s) = ω_n² / [s(s + 2ζω_n)].    (4.9)

The phase margin for this open-loop transfer function in closed loop with gain K can then be related to ζ as follows:

    Phase margin = tan⁻¹[ 2ζ / √(√(1 + 4ζ⁴) − 2ζ²) ]    (4.10)
                 ≈ 100ζ  for 0 < Phase margin < 70°    (4.11)

The gain crossover frequency ω_gc for such a second-order system can also be determined as

    ω_gc = ω_n √(√(1 + 4ζ⁴) − 2ζ²)    (4.12)

This allows us to determine the required gain crossover frequency for a specified percent overshoot (since the percent overshoot can be used to determine ζ).

The bandwidth ω_BW of the system is given by

    ω_BW = ω_n √(1 − 2ζ² + √(4ζ⁴ − 4ζ² + 2))    (4.13)

It can also be shown for the above system that ω_gc ≤ ω_BW ≤ 2ω_gc.

Since the time constant τ = 1/ω_BW, a higher bandwidth implies a faster response (smaller time constant). This also allows us to specify ω_BW in terms of τ and determine the corresponding range of ω_gc. We can use this and the previous result to design a system in terms of phase margins and crossover frequencies using specified design requirements.

4.5 Root-locus

The basic characteristics of the transient response of a system are closely related to the pole locations. The pole locations depend on the value of the loop gain in a simple feedback setting where the compensator is a constant gain. Hence, it becomes important to know how the closed-loop poles move in the complex s-plane as the gain is varied. Once the desired poles are determined using previously discussed techniques, the design problem then only involves determining the appropriate gain
to place the system poles at their desired locations. In many cases a simple gain will not work and we will need to add a more complex compensator.

Root-locus addresses the design problem of adjusting a single parameter (usually a gain, but it could be otherwise). It was developed by W. R. Evans and involves plotting the roots of the characteristic equation of the closed-loop system for all values (0 to ∞) of an adjustable system parameter, which is usually the loop gain. Root-locus means the locus of the roots of the characteristic equation.

Consider a simple feedback system with plant H(s) and compensator C(s). The closed-loop transfer function G(s) is then given by

    G(s) = H(s)C(s) / [1 + H(s)C(s)].    (4.14)

The characteristic equation is the denominator of the above equated to 0, and it satisfies

    H(s)C(s) = −1.    (4.15)

We can express this in polar form as

    ∠H(s)C(s) = ±180°,   |H(s)C(s)| = 1.    (4.16)

Only the values of s that satisfy the above satisfy the characteristic equation, and it follows that these are the poles of the closed-loop transfer function G(s).

Let H(s)C(s) have poles at p_1, ..., p_n and zeros at z_1, ..., z_m. Then the angle at a given s may be determined using

    ∠H(s)C(s) = ∠z_1 + ... + ∠z_m − ∠p_1 − ... − ∠p_n    (4.17)

where the angle of an arbitrary zero/pole factor x is given by ∠x = tan⁻¹(Im x / Re x). The magnitude of any complex number x is given by |x| = √[(Re x)² + (Im x)²]. This can be used to determine |H(s)C(s)|.

• Angles are measured counterclockwise from the positive real axis.
• Root-loci are symmetric about the real axis, due to the presence of complex conjugates.
• If the difference between the orders of the denominator and numerator of H(s)C(s) is greater than or equal to two, then the sum of the poles is a constant. This implies that if some poles move toward the right, others have to move toward the left.
• A slight change in the pole-zero configuration may cause significant changes in the root-locus.
• The patterns of the root-loci depend only on the relative separation of the open-loop poles and zeros.
• If a pole in the root-locus plot moves into the RHP (implying instability) only for a bounded interval of the gain, then the system is called conditionally stable. This is not desirable in a control system, since the danger always exists of becoming unstable by either increasing the gain too much or decreasing it too much.
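The gain-sweep idea can be sketched for a loop transfer function simple enough that the closed-loop poles have a closed form. The system below is hypothetical (unity feedback around H(s) = 1/(s(s + 2)) with gain K), so the CLCE is s² + 2s + K = 0:

```python
import cmath

# Closed-loop poles of s^2 + 2s + K = 0, tracked via the quadratic formula
# as the gain K is varied -- a two-branch root locus in closed form.
def closed_loop_poles(K):
    disc = cmath.sqrt(4 - 4 * K)
    return [(-2 + disc) / 2, (-2 - disc) / 2]

for K in (0.5, 1.0, 4.0):
    print(K, closed_loop_poles(K))

# K < 1: two real LHP poles; K = 1: breakaway point at s = -1;
# K > 1: a complex pair with real part fixed at -1, so this locus
# never crosses into the RHP -- the loop is stable for all K > 0.
```

Repeating the same sweep for a loop with more poles than this (e.g. the third-order example below) would show branches that do eventually cross the imaginary axis, which is exactly what rlocus visualizes.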
• If the construction of G(s) involved a cancellation of poles with zeros because of the interaction between the plant and compensator, not all the roots of the true characteristic equation will show (since we are dealing with a reduced equation). To avoid this problem, add the canceled closed-loop pole retroactively to the closed-loop poles obtained from the root-locus plot of G(s). The canceled pole is still a pole of the closed-loop system even though it does not appear in the reduced equation.

Example. For the feedback system shown in Fig. (4.3) with

    H(s) = 1 / [s(s + 1)²]    (4.18)

find the root-locus plot and the gain values that ensure the system is stable.

Figure 4.3: Simple gain feedback system.

The root-locus plot for this transfer function can be approximately drawn by hand or plotted using the rlocus command in MATLAB, as shown in Fig. (4.4). The poles and their movement with increasing gain are indicated on the plot.

Figure 4.4: Root-locus plot for H(s).

The branch of the pole at s = −1 that moves left remains stable for all values of K. However, the branches starting from the marginally stable pole at s = 0 and the other pole at s = −1 meet, break away, and cross the imaginary axis to become unstable for some value of K. This maximum value of the gain (K = 2, where the locus crosses the imaginary axis at s ≈ ±j) is shown in Fig. (4.5). Thus, for any gain value K > 2, the system becomes unstable.

Figure 4.5: Maximum stabilizing gain for the closed-loop system.

4.6 Nyquist stability criterion

This is a helpful tool that is useful for many kinds of systems, including time-delay systems, and for robust stability analysis. Given a closed-loop transfer function G(s), we want to determine closed-loop stability/instability and also the number of unstable closed-loop poles.

Nyquist plots are obtained by plotting G(jω) for all ω in the complex plane (real vs. imaginary part). The direction of the plotted line is also indicated in terms of increasing ω. This can usually be done with the aid of Bode plots, by reading off values of magnitude/phase at certain frequencies from the Bode plots and then plotting these points in the complex plane using polar coordinates (i.e. magnitude and direction/angle). Angles are measured in a counterclockwise manner from the positive real axis. The Nyquist plot is symmetric about the real axis due to the presence of complex conjugates.

The number of unstable poles N is then given by the equation

    N = N_CW + N_OL    (4.19)

where N_OL is the number of open-loop unstable poles and N_CW is the number of clockwise encirclements of the point −1/K by the Nyquist plot. The variable K is an independent gain on the plant and is usually considered 1 when the Nyquist
plot is not being used for design purposes. A counterclockwise encirclement counts as a negative clockwise encirclement.

Example. Consider the simple feedback system shown in Fig. (4.6).

Figure 4.6: Block diagram of a simple single degree of freedom feedback system.

Given

    G(s) = 3/(s − 1),   C(s) = K/(s + 3)    (4.20)

where K is a gain to be determined, find the appropriate Nyquist plot and comment on the stability of the closed-loop system.

Since Nyquist plotting requires an independent gain on the plant, we rearrange and manipulate the feedback system to obtain the loop transfer function to be plotted as

    G_CL(s) = KH(s) = K · 3 / [(s + 3)(s − 1)].    (4.21)

The corresponding block diagram is shown in Fig. (4.7).

Figure 4.7: Block diagram of the feedback system manipulated for generating the Nyquist plot.

The Bode diagram is then plotted as in Fig. (4.8), and values at relevant points are picked out in order to make the Nyquist plot. Typical points at which to read the angle/magnitude of the frequency response are where the angle is at multiples of 90°. A more complex Nyquist plot would require more points to get an accurate representation. The corresponding Nyquist plot is shown in Fig. (4.9), with the directions put in according to increasing frequency.

It is clear from the expression of H(s) that we have 1 unstable open-loop pole at s = 1, i.e. N_OL = 1. From the Nyquist plot we have 1
Figure 4.8: Bode plot of closed-loop system.

Figure 4.9: Nyquist plot of closed-loop system.

counterclockwise encirclement, or −1 clockwise encirclements, of the point −1/K for −1 < −1/K < 0, i.e. N_CW = −1. It follows that

    N_CL = N_CW + N_OL = { −1 + 1 = 0  for −1 < −1/K < 0
                           0 + 1 = 1   for −1/K < −1 }    (4.22)

and thus

    N_CL = { 0  if K > 1
             1  if K < 1. }    (4.23)
In other words, any gain greater than 1 ensures stability of the closed-loop system.

When there is a pole at the origin, or anywhere else on the imaginary axis, the Nyquist plot becomes singular there, which leads to a problem in counting the encirclements. This problem is countered by evaluating G(s) at a point which is not singular but is very close to the singular point. If the singular point is given by jω_s, then the nearby point is usually taken to be ε + jω_s, where ε is an arbitrarily small positive real number, as shown in Fig. (4.10).

Figure 4.10: s-plane showing the path taken to determine how symmetric singularities join when making Nyquist plots.

The image of this point on the plot grows arbitrarily large in magnitude (tending to ∞), which helps indicate in which direction the singular points join at infinite distance.

Example. Consider a feedback system similar to the one shown in Fig. (4.7) with loop transfer function

    G_CL(s) = KH(s) = K / [s(s + 1)²]    (4.24)

where K is a gain to be determined. Find the appropriate Nyquist plot.

The Bode diagram is plotted as in Fig. (4.11), and values at relevant points are picked out in order to make the Nyquist plot. Typical points at which to read the angle/magnitude of the frequency response are where the angle is at multiples of 90°. A more complex Nyquist plot would require more points to get an accurate representation.
Figure 4.11: Bode plot of closed-loop system.

Since this transfer function has a pole on the imaginary axis at s = 0, we do not evaluate G_CL(j0) but instead evaluate G_CL(ε + j0), where ε is an arbitrarily small number. This is in order to determine the asymptotic direction of the closed contour of the Nyquist plot. The corresponding Nyquist plot with directions is shown in Fig. (4.12).

Figure 4.12: Nyquist plot of closed-loop system.

The stability of the closed-loop system may be determined as before
using the Nyquist plot.

4.7 Feedback with disturbances

Consider a single degree of freedom feedback system (shown below) with input disturbance D_i (a disturbance between the compensator output U(s) and the input of the plant G(s)), output disturbance D_o (a disturbance in the plant output) and measurement noise/disturbance D_m. Let the system input be R(s).

Figure 4.13: Block diagram of a single degree-of-freedom feedback system with disturbances.

For such a system, we can write the following two functions:

    Y(s) = T(s)R(s) + S(s)D_o(s) + S_i(s)D_i(s) − T(s)D_m(s)    (4.25)
    U(s) = S_u(s)R(s) − S_u(s)D_o(s) − T(s)D_i(s) − S_u(s)D_m(s)    (4.26)

where S(s) is the sensitivity function, T(s) is the complementary sensitivity function, S_i(s) is the input-disturbance sensitivity function and S_u(s) is the control sensitivity function. These are given by the following relations:

    T(s) = G(s)C(s) / [1 + G(s)C(s)]    (4.27)
    S(s) = 1 / [1 + G(s)C(s)]    (4.28)
    S_i(s) = G(s) / [1 + G(s)C(s)] = G(s)S(s)    (4.29)
    S_u(s) = C(s) / [1 + G(s)C(s)] = C(s)S(s)    (4.30)

We can use the above to derive transfer functions between any input/output pair. Furthermore, we can independently assess the stability of each input/output pair by evaluating the relevant transfer function.
Example

Find the closed-loop sensitivity functions for a closed-loop system with plant transfer function

    G(s) = 3 / [(s + 4)(−s + 2)]    (4.31)

and compensator

    C(s) = (−s + 2)/s.    (4.32)

Also comment on the stability of the system.

Since G(s) = B(s)/A(s) and C(s) = P(s)/L(s),

    B(s) = 3,   A(s) = (s + 4)(−s + 2),   P(s) = −s + 2,   L(s) = s    (4.33)

which gives us

    T(s)  = GC/(1 + GC) = BP/(AL + BP) = 3/(s² + 4s + 3)    (4.34)
    S(s)  = 1/(1 + GC) = AL/(AL + BP) = s(s + 4)/(s² + 4s + 3)    (4.35)
    Su(s) = SC = (s + 4)(−s + 2)/(s² + 4s + 3)    (4.36)
    Si(s) = SG = 3s/[(s² + 4s + 3)(−s + 2)].    (4.37)

All the above closed-loop transfer functions are stable except the input-disturbance sensitivity Si(s), which has a pole at s = 2 in the RHP. Thus the input-disturbance to output path is not stable and therefore the entire system is unstable.

As can be seen, the complementary sensitivity function T(s) is the basic transfer function from the input R(s) to the output Y(s). Thus, for good tracking we need T(s) → 1. But as can be seen from the first expression for Y(s), this amplifies the measurement noise Dm. The sensitivity S(s), on the other hand, is the transfer function from the output disturbance Do(s) to the output Y(s). Moreover, S(s) is directly related to the input sensitivity Si(s) and the control sensitivity Su(s), both of which we want to keep as small as possible. Therefore, we want S(s) → 0 and T(s) → 1. These two goals are compatible, and are in fact tied together by an algebraic constraint on the system, since it can easily be shown that

    T(s) + S(s) = 1    (4.38)

But as mentioned earlier, T(s) → 1 amplifies measurement noise. If T(s) = 1 (and thus S(s) = 0) at all frequencies ω, then the system has infinite bandwidth (fast response). However, it then passes measurement noise at all frequencies, which is even more noticeable at high frequencies since measurement noise is usually significant at higher frequencies.
Therefore, having a high bandwidth is not necessarily something desirable from a control design standpoint.

    Infinite bandwidth leads to high measurement-noise amplification because T(s) = 1 ∀ ω.
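The internal-stability conclusion of the example above can be verified numerically: forming the closed-loop characteristic polynomial Acl = AL + BP before any cancellation exposes the RHP root at s = 2 that the unstable pole/zero cancellation hides from T(s) and S(s).

```python
import numpy as np

# Worked example of Section 4.7: G = 3/((s+4)(-s+2)), C = (-s+2)/s.
B = np.array([3.0])                          # plant numerator
A = np.polymul([1.0, 4.0], [-1.0, 2.0])      # plant denominator (s+4)(-s+2)
P = np.array([-1.0, 2.0])                    # compensator numerator (-s+2)
L = np.array([1.0, 0.0])                     # compensator denominator s

# Closed-loop characteristic polynomial, before any pole/zero cancellation.
Acl = np.polyadd(np.polymul(A, L), np.polymul(B, P))
roots = np.sort(np.roots(Acl).real)

# The "canceled" RHP plant pole at s = 2 survives in Acl: the
# input-disturbance path is internally unstable.
assert np.allclose(roots, [-3.0, -1.0, 2.0])
```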
In real systems, T(s) cannot be 1 for all frequencies and rolls off at some finite bandwidth frequency.

  • The bandwidth is defined as ωBW = 1/τ, where τ is the time constant of the slowest LHP pole, or, in an alternate definition, as the 3 dB point on a Bode magnitude plot.

  • When the true plant transfer function is not available, we proceed with our analysis using the nominal plant transfer function G0(s). In this case the sensitivity functions remain the same but with G0(s) instead of G(s). Thus we have T0(s), S0(s), Si0(s) and Su0(s).

  • For a plant G(s) = B(s)/A(s) and a compensator C(s) = P(s)/L(s), the closed-loop characteristic polynomial Acl is as before and is the denominator of all the sensitivity functions above

        Acl = AL + BP = 0    (4.39)

  • For stability we need all the sensitivity functions to be stable, i.e. we need all the roots of Acl to be in the LHP.

  • This is only true if there is no cancellation of unstable poles between C(s) and G(s). Canceling unstable poles is not recommended: since we will never have a perfect model, the attempted cancellation leaves a RHP zero and a RHP pole very close to each other, and this leads to much worse behavior as will be seen later.

4.8 Trends from Bode & Poisson's integral constraints

Consider the case when we have no open-loop RHP poles or zeros. Then for stability, the following is satisfied

    ∫₀^∞ ln|S(jω)| dω = 0          if τd > 0
    ∫₀^∞ ln|S(jω)| dω = −kπ/2     if τd = 0    (4.40)

where k = lim_{s→∞} sG(s)C(s) and τd is the time delay in the system. This is the first of Bode's integral constraints on sensitivity. The next equation, for the complementary sensitivity, also needs to be satisfied for stability

    ∫₀^∞ ln|T(j/ω)| d(1/ω) = πτd − π/(2kr)    (4.41)

where kr = lim_{s→0} sG(s)C(s). Since an integral is the area under a graph, the above signify how much of the graphs of ln|T(jω)| and ln|S(jω)| must lie above or below the 0 axis.
Upon inspection of the equations, it can be seen that time delay makes the integrals more positive, which means that both ln|T(jω)| and ln|S(jω)| must lie further above the 0 axis for stability (to satisfy the more positive integrals). This is undesirable, as mentioned before, and therefore time delays are undesirable.
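Bode's first integral constraint can be checked by direct quadrature. The loop below is an assumed example, G(s)C(s) = K/(s + 1), with no delay and relative degree 1, so k = lim s G(s)C(s) = K and (4.40) predicts the integral −kπ/2 (up to truncation of the frequency tail).

```python
import numpy as np

# Assumed example loop G(s)C(s) = K/(s+1): no delay, relative degree 1,
# so k = lim_{s->inf} s*G(s)*C(s) = K and (4.40) predicts -K*pi/2.
K = 4.0
w = np.linspace(0.0, 5000.0, 2_000_001)
S = (1j * w + 1) / (1j * w + 1 + K)          # S = 1/(1 + GC)
y = np.log(np.abs(S))
integral = np.sum((y[1:] + y[:-1]) / 2) * (w[1] - w[0])   # trapezoidal rule

assert abs(integral - (-K * np.pi / 2)) < 1e-2
```

The negative value is the "water bed" credit: the area of |S| below 0 dB at low frequency is exactly paid back (here, with interest −kπ/2) over the rest of the axis.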
Consider the case when we have N open-loop RHP (unstable) poles p1, ..., pN and no open-loop RHP zeros. Then for stability, the following needs to be satisfied

    ∫₀^∞ ln|S(jω)| dω = π Σᵢ₌₁ᴺ Re{pᵢ}              if nr > 1
    ∫₀^∞ ln|S(jω)| dω = −kπ/2 + π Σᵢ₌₁ᴺ Re{pᵢ}     if nr = 1    (4.42)

where nr is the relative degree of the open-loop system (the difference between the orders of the denominator and numerator of a proper transfer function). This is the second of Bode's integral constraints on sensitivity. It is obvious from the above that larger RHP poles lead to a more positive sensitivity integral, which is undesirable because it pushes ln|S(jω)| above the 0 axis, causing peaks and increased sensitivity at certain frequencies. An equation for the integral of the complementary sensitivity is not available in this case.

Consider the case when we have M open-loop RHP (nonminimum-phase) zeros c1, ..., cM and no RHP poles. For this case, there is no integral available for S(jω), but the following must be satisfied for stability

    ∫₀^∞ ln|T(j/ω)| d(1/ω) = πτd + π Σᵢ₌₁ᴹ 1/cᵢ − π/(2kr).    (4.43)

It can be seen from the above that RHP zeros close to the imaginary axis make the integral more positive (since 1/cᵢ becomes large), leading to a larger area above the 0 axis. This usually implies poor tracking and overshoot in certain frequency ranges and is undesirable. Time delay also makes the integral more positive and is undesirable.

Consider the case when we have N RHP poles p1, ..., pN and M RHP zeros c1, ..., cM. Let the k-th pole be given by pk = ψk + jδk and the k-th zero by ck = αk + jβk. Then the following must be satisfied for stability

    ∫₋∞^∞ ln|S(jω)| · αk/[αk² + (βk − ω)²] dω = −π ln ∏ᵢ₌₁ᴺ |(ck − pᵢ)/(ck + pᵢ*)|,   for k = 1, ...
, M    (4.44)

where a superscripted * signifies the complex conjugate. A similar expression can be derived for the complementary sensitivity function

    ∫₋∞^∞ ln|T(jω)| · ψk/[ψk² + (δk − ω)²] dω = −π ln ∏ᵢ₌₁ᴹ |(pk − cᵢ)/(pk + cᵢ*)| + πτd ψk,   for k = 1, ..., N    (4.45)

where the complex conjugate is indicated as before. The above two are known as Poisson's integral constraints on sensitivity and complementary sensitivity, respectively. Since ln ∏ᵢ₌₁ᴺ |(ck − pᵢ)/(ck + pᵢ*)| < 0 and ln ∏ᵢ₌₁ᴹ |(pk − cᵢ)/(pk + cᵢ*)| < 0, the integrals will in general be positive. Thus, having a RHP pole and a RHP zero close to each other makes the denominators of the right-hand sides small, leading to large positive integrals. This implies the presence of large peaks and significant areas
above the 0 axis for both sensitivity functions. This is undesirable and therefore we must avoid having a RHP pole and a RHP zero close to each other.

The Poisson integral constraints contain a weighting term that concentrates the integral in a certain frequency band. Thus the integral effects are more pronounced: the positive area must be accumulated over a more or less finite frequency band, rather than the infinite band afforded to the previous integrals.

What we want:

  • We want to avoid peaks in ln|T(jω)| above 0, since such peaks imply overshoot, poor tracking and higher measurement-noise amplification in a certain frequency range.

  • Similarly, we want to avoid peaks in ln|S(jω)| above 0, because such peaks imply increased sensitivity to disturbances in a certain frequency range.

Summary of trends:

  • We have a water-bed effect in both S(jω) and T(jω) even when we have no RHP poles or zeros, but the integral is negative, so it is easier to deal with.

      – The presence of time delays introduces (higher) peaks in both S(jω) and T(jω), since the integrals are no longer necessarily negative and become more positive. In the case of S(jω) the integral becomes zero (4.40), and in the case of T(jω) the integral can range from a small negative value to a large positive one (4.41).

  • The presence of only RHP poles makes the water-bed effect worse, since it makes the integral for the sensitivity S(jω) more positive (4.42).

      – Larger real parts of RHP poles are undesirable because they lead to a higher sensitivity peak through a more positive integral.

      – A relative degree higher than 1 is undesirable since it eliminates a helpful negative term in the integral expression.

  • The presence of only RHP zeros, in general, makes the water-bed effect worse, since the integral for T(jω) has more positive terms than negative (4.43).
      – Smaller real parts of RHP zeros are undesirable because they lead to a higher complementary sensitivity peak: the reciprocals of the zeros make the integral more positive.

      – Time delays are also undesirable since they introduce another positive term in the integral.

  • In the presence of both RHP zeros and poles, the water-bed effect is most noticeable [(4.44) and (4.45)].

      – Having a RHP pole and a RHP zero close to each other is undesirable since it greatly increases the integral value for both S(jω) and T(jω), leading to large peaks in both. This is why we avoid canceling a RHP pole.

      – The integral effect is more pronounced in this case because the weighting concentrates it in a finite band.
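The Poisson constraint can also be checked numerically. For an assumed loop L(s) = (1 − s)/(s + 2) there is one RHP zero at c = 1 and no RHP poles, so the Blaschke product on the right-hand side of (4.44) is empty and the Poisson-weighted integral of ln|S(jω)| must vanish (equivalently, S(c) = 1).

```python
import numpy as np

# Assumed loop L(s) = (1 - s)/(s + 2): one RHP zero at c = 1, no RHP poles,
# so the Poisson-weighted integral of ln|S(jw)| over the real axis is zero.
c = 1.0
w = np.linspace(-2000.0, 2000.0, 2_000_001)
S = (1j * w + 2) / 3.0                       # S = 1/(1 + L) = (s + 2)/3
W = c / (c**2 + w**2)                        # Poisson kernel of the real zero c
y = np.log(np.abs(S)) * W
integral = np.sum((y[1:] + y[:-1]) / 2) * (w[1] - w[0])   # trapezoidal rule

assert abs(integral) < 2e-2                  # zero up to tail truncation
```

Adding a RHP pole to such a loop would make the right-hand side strictly positive, forcing |S| above 0 dB somewhere inside the weighted band.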
4.9 Upper bounds on peaks for sensitivity integrals

Define the weighting function used in the Poisson integral constraint on sensitivity, for a RHP zero ck = αk + jβk,

    W(ck, ω) = αk / [αk² + (βk − ω)²]    (4.46)

Then the integral of the weighting function over a finite band is given by

    Ω(ck, ωc) ≡ ∫₋ωc^ωc W(ck, ω) dω    (4.47)
              = tan⁻¹[(ωc − βk)/αk] + tan⁻¹[(ωc + βk)/αk]    (4.48)

We can use this to split the integration limits, using the fact that Ω(ck, ∞) = π.

Say we want to limit |S(jω)| to a low value over a certain range of frequencies ω. We may want to do this because disturbances are significant in that frequency range, or because most of the response will be in that frequency range. From our previous discussion, this is not possible without a penalty, which takes the form of a peak in the sensitivity function.

Let us assume we want to limit |S(jω)| to a small number ε between the frequencies 0 and ωl. We can then use our previous results to split the integration limits of the Poisson integral constraint and obtain an expression for a lower bound on the maximum peak of |S(jω)|

    ln|S(jω)|max > [ π ln ∏ᵢ₌₁ᴺ |(ck + pᵢ*)/(ck − pᵢ)| + Ω(ck, ωl) ln(1/ε) ] / [π − Ω(ck, ωl)]    (4.49)

We can use this to determine a corresponding lower bound on the complementary sensitivity function, since T(s) + S(s) = 1.

  • The closed-loop bandwidth should not exceed the magnitude of the smallest RHP open-loop zero. The penalty for not following this guideline is a very large sensitivity peak, leading to fragile (nonrobust) loops and large undershoots/overshoots.

  • The lower bound on the complementary sensitivity peak is larger for systems with time delay, and the influence of the delay increases for fast unstable poles.

  • If the closed-loop bandwidth is much smaller than a RHP pole, we will have a large complementary sensitivity peak and thus large overshoot.
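The bound (4.49) is plain arithmetic once the pole, zero and design targets are fixed. The values below are assumed for illustration: one RHP pole p = 1, one real RHP zero c = 2 (so βk = 0 and Ω simplifies), and the goal |S(jω)| ≤ ε = 0.1 for ω ∈ [0, ωl].

```python
import numpy as np

# Assumed data: RHP pole p = 1, real RHP zero c = 2, |S| <= 0.1 on [0, 0.5].
p, c = 1.0, 2.0
eps, wl = 0.1, 0.5

Omega = 2 * np.arctan(wl / c)                # Ω(c, wl) for a real zero (β = 0)
blaschke = abs((c + p) / (c - p))            # |(c + p*)/(c - p)| for a real pole
ln_S_max = (np.pi * np.log(blaschke) + Omega * np.log(1 / eps)) / (np.pi - Omega)

# The sensitivity peak is forced above ~5.6 (about 15 dB) by the pole/zero pair.
assert np.exp(ln_S_max) > 5.0
```

Moving the zero closer to the pole (c → p) drives the Blaschke term, and hence the guaranteed peak, toward infinity; this is the quantitative version of "never cancel a RHP pole."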
Chapter 5

Frequency domain control design

5.1 Direct pole-placement

This involves placing the poles of the closed-loop characteristic equation (CLCE or Acl) at desired locations. Let the plant transfer function be proper and given by G(s) = B(s)/A(s), and the compensator transfer function by C(s) = P(s)/L(s). It can then be shown that the closed-loop characteristic equation for this one degree-of-freedom system is equivalent to the following

    Acl = A(s)L(s) + B(s)P(s)    (5.1)

where the polynomial order of A(s) (the denominator of the plant transfer function) is n.

 1. Construct a desired closed-loop characteristic equation Acl,d based on design requirements such as response time and overshoot.

    • To cancel a stable pole in the plant transfer function through the compensator, just ignore that pole in all calculations (both in the compensator as well as the plant).

 2. To obtain the compensator C(s) coefficients, find the needed orders of P(s) and L(s).

    • If the polynomial degree of Acl,d is given by nc = 2n − 1, then Acl will exist for a biproper compensator with polynomial degree np = nl = n − 1, where np is the polynomial order of P(s) and nl is the polynomial order of L(s).

    • If the polynomial order of Acl,d is given by nc = 2n − 1 + κ, where κ is a positive integer, then we can use the same order of P(s), but we will have to change the order of L(s) to nl = n − 1 + κ.

    • If nc < 2n − 1, then we have more equations than unknowns and most likely there is no solution.

 3. Form general equations for P(s) and L(s) in terms of unknown variables and analytically determine Acl from

    Acl = A(s)L(s) + B(s)P(s).    (5.2)
 4. Compare the coefficients of this polynomial to the coefficients of Acl,d to obtain the compensator coefficients.

Usually we can determine the compensator by following intuition rather than the strict rules mentioned above.

Example

Consider the system shown in Fig. (5.1) with the plant transfer function given by

[Figure 5.1: Block diagram of feedback system.]

    G(s) = B(s)/A(s) = 1/(s² + 3s + 1).    (5.3)

Find a suitable compensator C(s) = P(s)/L(s) that places the poles of the closed-loop system at −10, −3 ± 4j.

We follow the steps outlined earlier.

 1. Since the desired closed-loop poles are given, the desired closed-loop characteristic equation is given by

        Acl,d = (s + 10)(s + 3 + 4j)(s + 3 − 4j)    (5.4)
              = s³ + 16s² + 85s + 250    (5.5)

 2. The orders of P(s) and L(s) are now determined. The order of A(s) is n = 2, thus nc = 2n − 1 = 3. Since the order of Acl,d is also 3, the compensator will be biproper with polynomial degree np = n − 1 = 1. Thus, let

        C(s) = P(s)/L(s) = (b1 s + b0)/(s + a0)    (5.6)

    where a0, b0, b1 are coefficients to be determined. This is usually the most straightforward choice for a compensator. The compensator is limited to three unknowns to avoid having more unknowns than equations, since Acl,d is of order three. This may also be determined via trial-and-error.

 3. Form a general equation for Acl using C(s) as specified before

        Acl = A(s)L(s) + B(s)P(s)    (5.7)
            = (s² + 3s + 1)L(s) + (1)P(s)    (5.8)
            = s³ + (3 + a0)s² + (1 + 3a0 + b1)s + (a0 + b0).    (5.9)
 4. Comparing coefficients of Acl and Acl,d

        3 + a0 = 16    (5.10)
        1 + 3a0 + b1 = 85    (5.11)
        a0 + b0 = 250    (5.12)

    which gives

        a0 = 13    (5.13)
        b0 = 237    (5.14)
        b1 = 45.    (5.15)

Thus the compensator C(s) that places the poles of the closed-loop system at −10 and −3 ± 4j is given by

    C(s) = P(s)/L(s) = (45s + 237)/(s + 13)    (5.16)

5.2 Modeling errors

Let the nominal plant transfer function be given by G0(s) and the actual plant transfer function by G(s). How can we describe the difference between the nominal and actual plant models? There are two methods:

  • The additive modeling error (AME) Gε(s) is given in the equation

        G(s) = G0(s) + Gε(s)    (5.17)

    and is an absolute measure of uncertainty.

  • The multiplicative modeling error (MME) G∆(s) is given in the equation

        G(s) = G0(s)[1 + G∆(s)]    (5.18)

    and is a scaled (relative to the nominal transfer function) measure of uncertainty.

5.3 Robust stability

If the model used for control design is not the actual transfer function G(s) but rather a nominal model G0(s) (which is usually the case) and we know the stability properties of the nominal closed-loop system, we can derive a test for the stability properties of the true closed-loop system.

Assume that the nominal closed loop formed with C(s)G0(s) is stable, i.e. the number of unstable closed-loop poles is N = 0. Also assume that the open-loop transfer functions of both G(s) and G0(s) have the same number of unstable poles, i.e. the number of open-loop unstable poles NOL is the same for both the nominal and true plant transfer functions.
By the Nyquist stability criterion

    N = NOL + NCW    (5.19)

where, in the case of the nominal compensator-plant transfer function, N = 0 and NOL is the same for both the nominal and true systems. Thus, to ensure stability of the true system, it is sufficient to ensure that the number of clockwise encirclements of −1 made by the Nyquist plot of the true compensator-plant transfer function is the same as that made by the nominal compensator-plant transfer function. In other words, if C(s)G(s) does not make any more encirclements of −1 than C(s)G0(s), then the true closed-loop system is stable as well.

Consider the Nyquist sketch of C(jω)G0(jω). The maximum uncertainty at any point of the sketch, in terms of the multiplicative modeling error (MME) G∆(jω), is given by

    |C(jω)G0(jω) − C(jω)G(jω)| = |C(jω)[G0(jω) − G0(jω) − G0(jω)G∆(jω)]|    (5.20)
                               = |C(jω)G0(jω)G∆(jω)|.    (5.21)

This can be represented by a circle of radius |C(jω)G0(jω)G∆(jω)| around the Nyquist sketch of C(jω)G0(jω). The distance from the center of the circle (on C(jω)G0(jω)) to the −1 point is found to be

    |−1 − C(jω)G0(jω)| = |1 + C(jω)G0(jω)|    (5.22)

as illustrated in Fig. (5.2).

[Figure 5.2: Plot showing the robust stability condition. Note s = jω.]

Thus, as long as this distance is more than the radius of the uncertainty circle, the
system maintains the same number of encirclements. This is given by

    |C(jω)G0(jω)G∆(jω)| < |1 + C(jω)G0(jω)|,   ∀ω    (5.23)
    ⇒ |G∆(jω)| · |C(jω)G0(jω) / [1 + C(jω)G0(jω)]| < 1,   ∀ω    (5.24)
    ⇒ |T0(jω)G∆(jω)| < 1,   ∀ω.    (5.25)

The above constraint is a sufficient (but not necessary) condition for robust stability. An algebraic derivation is available for this as well.

5.4 Robust performance

We can show robust stability as above, but what about the performance of the nominal system compared to the true system? Can we derive a test or measure of performance robustness?

It can be easily shown that the true sensitivity functions are related to the nominal sensitivity functions by the following expressions

    T  = T0 (1 + G∆) S∆    (5.26)
    S  = S0 S∆    (5.27)
    Si = Si0 (1 + G∆) S∆    (5.28)
    Su = Su0 S∆    (5.29)

where S∆ is known as the error sensitivity function and is given by

    S∆ = 1 / (1 + T0 G∆).    (5.30)

From these expressions it can be seen that S∆ ≈ 1 implies that the nominal sensitivity functions are close to the true sensitivity functions. This in turn leads to the true performance matching the nominal performance, since all the transfer functions would be roughly equivalent.

It can be easily seen from the expression for S∆ that as T0 G∆ → 0, S∆ → 1. Thus, for robust performance we require T0 G∆ ≈ 0, or

    |T0 G∆| ≪ 1    (5.31)

Since modeling is typically better at lower frequencies, G∆ ≈ 0 usually holds at low frequencies. On the other hand, having a finite bandwidth implies that T0 ≈ 0 at high frequencies, since T0 is expected to roll off beyond the bandwidth frequency. Thus, for a properly designed control system the robust performance requirement T0 G∆ ≈ 0 is achievable even though T0 and G∆ may not both be small at the same time.

This exposes another drawback of having very aggressive control or infinite bandwidth: robust performance is not guaranteed at all frequencies.
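The sufficient condition (5.25) is easy to evaluate on a frequency grid. The nominal plant, "true" plant and compensator below are assumed illustrative values: the multiplicative error is small at all frequencies, so the condition certifies robust stability.

```python
import numpy as np

# Assumed pair: nominal G0 = 1/(s+1), "true" G = 1/(s+1.2), compensator C = 5.
w = np.logspace(-3, 3, 1000)
s = 1j * w
G0 = 1 / (s + 1)
G = 1 / (s + 1.2)
C = 5.0

G_delta = G / G0 - 1                 # multiplicative modeling error (5.18)
T0 = C * G0 / (1 + C * G0)           # nominal complementary sensitivity

# Sufficient robust stability condition (5.25): |T0 * G_delta| < 1 for all w.
assert np.max(np.abs(T0 * G_delta)) < 1.0
```

Note that the test can fail even for a stable true loop: (5.25) is sufficient, not necessary.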
    Bandwidth must be limited based on the response of G∆ to maintain robust performance: T0 G∆ ≈ 0.

It can also be seen from the above equations that robust performance is a much stricter requirement than robust stability, confirming intuitive notions.

5.5 Internal Model Principle

Let f(s, 0) represent a function of the initial conditions. Then we can have the following models of disturbances:

  • Constant/step disturbance, ḋ = 0. The transfer function of the disturbance D(s) is given by

        D(s) = f(s, 0)/s    (5.32)

  • Sinusoidal disturbance, d(t) = A sin(ωd t + φd). Then D(s) is given by

        D(s) = f(s, 0)/(s² + ωd²)    (5.33)

  • Mixed constant/step and sinusoidal disturbance, d(t) = A0 + A sin(ωd t + φd). Then D(s) is given by

        D(s) = f(s, 0)/[s(s² + ωd²)]    (5.34)

The denominator of the disturbance transfer function is called the disturbance generating polynomial. Assume the closed-loop system is stable. We would then like the effect of the disturbance on the output to go to 0 as t → ∞.

Steady-state disturbance compensation can be accomplished by including the disturbance generating polynomial in the denominator of the compensator C(s). This is known as the internal model principle. It must be noted that the roots of the disturbance generating polynomial impose the same tradeoffs on the closed-loop system as if they were poles of the open-loop plant.

We can use the same idea for robust reference tracking. If we need to track a reference, we can use the same transfer functions as for the disturbances or, if we have another reference to track, we can derive a transfer function for that reference. The denominator of this transfer function is then called the reference generating polynomial. If the closed-loop system is stable, one way to achieve robust tracking is to include the reference generating polynomial in the denominator of the compensator.
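A minimal sketch of the internal model principle, with assumed values: a step disturbance has generating polynomial s, and placing s in the compensator denominator (here a PI compensator) forces S(0) = 0, so step output disturbances are rejected in steady state.

```python
import numpy as np

# Assumed example: G = 1/(s+1), PI compensator C = (2s+1)/s. The factor s in
# the compensator denominator is the internal model of a step disturbance.
def S(s):
    G = 1 / (s + 1)
    C = (2 * s + 1) / s
    return 1 / (1 + G * C)           # sensitivity = TF from Do to Y

assert abs(S(1e-9)) < 1e-6           # S(0) = 0: step disturbances rejected
assert abs(S(1e6)) > 0.9             # but no rejection at high frequency
```

Replacing s by s² + ωd² in the compensator denominator would, by the same argument, reject a sinusoidal disturbance of frequency ωd.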
Chapter 6

State-space techniques

6.1 Transfer function to state-space

States are the smallest set of variables such that knowledge of these variables at some time t = t0, together with the input u(t) for t ≥ t0, completely and uniquely determines the system behaviour.

We can obtain an input/output differential equation as below for any transfer function

    y⁽ⁿ⁾ + a_{n−1} y⁽ⁿ⁻¹⁾ + ... + a1 ẏ + a0 y = b_m u⁽ᵐ⁾ + b_{m−1} u⁽ᵐ⁻¹⁾ + ... + b1 u̇ + b0 u    (6.1)

where the a's represent the coefficients of the denominator of the transfer function and the b's represent the coefficients of the numerator. One state-space representation of this system is ẋ = Ax + Bu, y = Cx, with

    A = [  0     1     0   ...     0
           0     0     1   ...     0
           :     :     :           :
          −a0   −a1   −a2  ...  −a_{n−1} ],    B = [0  0  ...  0  1]ᵀ    (6.2)

    C = [b0  b1  ...  b_m  0  ...  0].    (6.3)

This is known as the controllable canonical form.

Another state-space representation of the same system has

    A = [ 0   0   ...   0   −a0
          1   0   ...   0   −a1
          0   1   ...   0   −a2
          :   :         :     :
          0   0   ...   1   −a_{n−1} ],    B = [b0  b1  ...  b_m  0  ...  0]ᵀ    (6.4)
    C = [0  0  ...  0  1].    (6.5)

This is known as the observable canonical form.

6.2 Eigenvalues & eigenvectors

λ is an eigenvalue of A ∈ Rⁿˣⁿ if |λI − A| = 0. Furthermore, there exist nontrivial vectors v ∈ Cⁿ, called eigenvectors, such that (λI − A)v = 0.

Alternatively, this concept may be explained as follows. Consider a λ such that

    Av = λv = λIv    (6.6)

Then

    (A − λI)v = 0    (6.7)

Now if (A − λI) were invertible (i.e. |A − λI| ≠ 0), then v = (A − λI)⁻¹ 0 = 0 would be the only solution, and it is trivial. Therefore, for a nontrivial v to exist, we need |A − λI| = 0.

  • If v is an eigenvector of A with eigenvalue λ, then so is αv for any nonzero α ∈ C, since A(αv) = λ(αv).

  • Even if A is real, λ and v can be complex.

  • Scaling interpretation of Av = λv: consider λ to be real for now; then it is a scaling factor for the eigenvector v. Its sign determines the direction and its magnitude determines the length. In this picture, Av = λv is nothing but a vector in the phase plane.

  • For A ∈ Rⁿˣⁿ, |A − λI| = 0 results in an nth-order polynomial with n possibly repeated solutions. This polynomial is known as the characteristic equation of A.

  • If we have n distinct eigenvalues, then we have n distinct eigenvectors. However, if we have only k < n distinct eigenvalues due to repeated solutions of |A − λI| = 0, then we have l ≤ n distinct eigenvectors, and in general l ≠ k.

  • The vector space spanned by the eigenvectors of A is called the eigenspace of A. The closure property for this space may be easily confirmed.

  • Eigenvectors corresponding to different eigenvalues are linearly independent. Thus, the set of distinct eigenvectors of A forms a basis (for the eigenspace of A).

6.3 Solution and stability of dynamic systems

Consider an autonomous linear time-invariant (LTI) system in state-space form

    ẋ = Ax    (6.8)
where an eigenvalue of A is λ and the corresponding eigenvector is v. We know

    Av = λv.    (6.9)

Now assume a solution to (6.8) of the form

    x = e^{λt} v.    (6.10)

We check whether this satisfies (6.8). Taking the time derivative of the assumed solution,

    ẋ = λ e^{λt} v.    (6.11)

Now substitute (6.10) into (6.8)

    ẋ = Ax = A e^{λt} v = e^{λt} Av    (6.12)

but from (6.11)

    λ e^{λt} v = e^{λt} Av  ⇒  λv = Av    (6.13)

which we know from (6.9) is true. Thus, this is a correct assumption for a solution.

The above holds when we have a nondefective matrix, i.e. a full set of eigenvectors (no repeated eigenvectors). We almost always have more than one eigenvalue/eigenvector pair for a nondefective matrix, so how is the above solution, expressed in terms of a single eigenvalue/eigenvector pair, applicable? In this case, the solution of the entire system is given by a linear combination of the distinct solutions corresponding to the different eigenvalue/eigenvector pairs. We illustrate this with the 2-by-2 case.

Consider the system

    ẋ = [ẋ1; ẋ2] = Ax = [a11  a12; a21  a22][x1; x2].    (6.14)

Let λ1, λ2 be the eigenvalues of A and v1, v2 the corresponding eigenvectors. The complete solution of the system is then given by

    x = [x1; x2] = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2    (6.15)

where c1, c2 ∈ R are scalar constants that depend on the initial condition of the system, x(0) = x0. To find them, set t = 0 in the general solution given above and equate the resulting expression to x0.

Take a closer look at the above solution to the system

    x = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2    (6.16)
      = f(t) + g(t)    (6.17)

where f(t) = c1 e^{λ1 t} v1 and g(t) = c2 e^{λ2 t} v2 are maps R → R², i.e. the solution is the sum of two distinct exponentially time-dependent functions. Thus we can say that f(t) and g(t) are the fundamental components that make up the response of the system. They are the two modes of the system.
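The modal solution (6.15) can be checked numerically against the matrix exponential for an assumed 2-by-2 system (values chosen for illustration):

```python
import numpy as np

# Modal solution x(t) = c1*e^{l1 t} v1 + c2*e^{l2 t} v2 for an assumed system,
# compared with x(t) = e^{At} x0 computed from the power series of e^{At}.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t = 0.8

lam, V = np.linalg.eig(A)                 # eigenvalues, eigenvector columns
c = np.linalg.solve(V, x0)                # c1, c2 from the initial condition
x_modal = V @ (c * np.exp(lam * t))       # sum of the two scaled modes

expAt = np.eye(2)
term = np.eye(2)
for k in range(1, 30):                    # truncated Taylor series for e^{At}
    term = term @ (A * t) / k
    expAt = expAt + term

assert np.allclose(x_modal.real, expAt @ x0)
```

Setting `x0` to a column of `V` instead makes one of the constants `c` zero, so only the corresponding single mode appears in the response.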
Hence, the response of a linear time-invariant system is a linear combination of the responses of its individual
modes or, equivalently, the response of a linear time-invariant system is a sum of the scaled responses of its individual modes.

Modes are a very important tool in the study of the dynamical behavior of systems. For example, in an aircraft, the dynamic response is comprised of many different and varied modes. They are a key tool in characterizing the stability of an aircraft and in designing controllers. It is essential to have a fundamental and intuitive understanding of what the modes of a system are, to aid in identifying desired response characteristics and in designing efficient controllers.

In order to understand modes in a comprehensive manner, we need to be able to examine the individual response of each mode. This corresponds to a situation where either c1 = 0 or c2 = 0. It turns out that this happens when the initial condition is an eigenvector of the system. In that case, only the mode corresponding to that eigenvector makes up the response: if x0 = v1 then x(t) = c1 e^{λ1 t} v1, and similarly, if x0 = v2 then x(t) = c2 e^{λ2 t} v2.

The solution to a linear dynamical system can also be seen as a sum of eigenvectors. Notice from the solution to the dynamical system

    x = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2    (6.18)

that c1 e^{λ1 t} and c2 e^{λ2 t} are scalar numbers at any given point in time. Letting k1(t) = c1 e^{λ1 t} and k2(t) = c2 e^{λ2 t},

    x = k1 v1 + k2 v2    (6.19)

but remember that eigenvectors are not unique and any multiple of an eigenvector is also an eigenvector. Thus k1 v1 and k2 v2 are both eigenvectors, and the solution of the system is a sum of its eigenvectors scaled with time. Since eigenvectors are simply vectors in the phase plane, the solution vector is simply a sum of two scaled vectors.

Solution via the matrix exponential

When the system matrix A is defective, the previous solution technique is not valid. We now present a solution applicable to all linear time-invariant systems.
First consider the case of a scalar dynamical system given by

    ẋ = ax    (6.20)

where a ∈ R and x(0) = x0. Assume a solution of the form x(t) = e^{at}. Differentiating with respect to time,

    ẋ = a e^{at} = ax    (6.21)

which implies that the assumed solution is valid. Adjusting for the initial condition gives the solution

    x(t) = e^{at} x0.    (6.22)
But we do not have a scalar constant a; rather, we have a matrix A. For this we define the matrix exponential e^{At}

    e^{At} = I + At + A²t²/2! + A³t³/3! + ...    (6.23)

which is based on the Taylor series definition of the scalar exponential function

    e^{at} = 1 + at + (at)²/2! + (at)³/3! + ...    (6.24)

The time derivative of the matrix exponential e^{At} is given by

    d/dt e^{At} = 0 + A + 2A²t/2! + 3A³t²/3! + ...    (6.25)
                = A(I + At + A²t²/2! + ...) = A e^{At}    (6.26)

which mirrors the scalar case. It follows that the solution to ẋ = Ax in terms of the matrix exponential is given by

    x(t) = e^{At} x0.    (6.27)

This is useful in computation and numerical applications but does not provide much insight into the behavior of the system. Furthermore, we have the problem of computing an infinite series.

Another related and useful result arises when we can express the system in terms of a diagonal matrix. Using appropriate transformations, as will be shown in the next sections, we can represent the system ẋ = Ax equivalently as

    ż = Λz    (6.28)

where Λ is a diagonal matrix and z is the transformed state vector. It can be shown that for a diagonal matrix Λ, the matrix exponential takes the compact form (here for the 2-by-2 case)

    e^{Λt} = [e^{λ1 t}  0;  0  e^{λ2 t}]    (6.29)

where λ1 and λ2 are the eigenvalues of Λ. A similar result holds for higher-dimensional diagonal matrices. It follows that the solution to the transformed system ż = Λz is given by

    z(t) = e^{Λt} z0 = [e^{λ1 t}  0;  0  e^{λ2 t}] z0.    (6.30)

It must be noted that this diagonalization is not possible for defective matrices; in that case, computing the matrix exponential remains the preferred solution method.

Stability via eigenvalues and eigenvectors

Recall the solution to ẋ = Ax

    x = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2.    (6.31)
It is clear from the above that if λ1, λ2 < 0, then x(t) decays to zero. If λ1 and/or λ2 are greater than zero, the response grows exponentially and x(t) goes towards infinity. If λ1 = λ2 = 0, then there is no dynamic response and x(t) remains at its initial condition for all future time.

From these observations, we can formulate the following stability rules for autonomous systems given by ẋ = Ax:

  • ẋ = Ax is stable if both λ1, λ2 < 0.

  • ẋ = Ax is unstable if either λ1 > 0 or λ2 > 0.

  • ẋ = Ax is neutrally or marginally stable if λ1 = λ2 = 0.

Solution of a forced dynamical system

We now consider the solution to the forced dynamical system given by

    ẋ = Ax + Bu.    (6.32)

Multiply this by e^{−At}

    e^{−At} ẋ = e^{−At} Ax + e^{−At} Bu    (6.33)
    ⇒ e^{−At} ẋ − e^{−At} Ax = e^{−At} Bu    (6.34)

where e^{−At} is the matrix exponential, defined by the series

    e^{At} = I + At + A²t²/2! + A³t³/3! + ...    (6.35)

Since by the product rule of differentiation we have

    d/dt (e^{−At} x) = e^{−At} ẋ − e^{−At} Ax    (6.36)

it follows that

    d/dt (e^{−At} x) = e^{−At} Bu    (6.37)

Separating variables and integrating between the initial time t = 0 and the final time t = T gives

    ∫₀ᵀ d(e^{−At} x) = ∫₀ᵀ e^{−At} Bu dt    (6.38)
    ⇒ e^{−AT} x(T) − x(0) = ∫₀ᵀ e^{−At} Bu dt    (6.39)
    ⇒ e^{−AT} x(T) = ∫₀ᵀ e^{−At} Bu dt + x(0).    (6.40)

Multiplying through by e^{AT} (which is constant with respect to the integration variable) gives the solution

    x(T) = e^{AT} x(0) + e^{AT} ∫₀ᵀ e^{−At} Bu dt    (6.41)
    ⇒ x(T) = e^{AT} x(0) + ∫₀ᵀ e^{A(T−t)} Bu dt    (6.42)
which can alternatively be written in the more convenient form

    x(t) = e^{At} x(0) + ∫₀ᵗ e^{A(t−τ)} Bu dτ    (6.43)

The output equation is then given by

    y(t) = Cx + Du    (6.44)
         = C e^{At} x(0) + C ∫₀ᵗ e^{A(t−τ)} Bu dτ + Du    (6.45)

6.4 State transformation

As mentioned earlier, there are an infinite number of possible state-space representations for a given system. Any two of these representations are, in essence, coordinate transformations of the same underlying dynamical system. Thus they can be related to each other via a state transformation matrix T. It must be mentioned that the number of states is unique (the smallest set of variables that completely describes the system), and therefore the dimension of the state space does not change; only the states change (hence "state transformation").

Let x ∈ Rⁿ and z ∈ Rⁿ represent two state vectors for the same SISO system. These state vectors can be related to each other by an invertible T ∈ Rⁿˣⁿ as

    x = Tz  ⇔  z = T⁻¹x    (6.46)

The corresponding dynamical systems for the two states are given by

    Representation 1:   ẋ = Ax + Bu,    y = Cx + Du    (6.47)
    Representation 2:   ż = Āz + B̄u,    y = C̄z + D̄u    (6.48)

where A, Ā ∈ Rⁿˣⁿ, B, B̄ ∈ Rⁿˣ¹, C, C̄ ∈ R¹ˣⁿ and D, D̄ ∈ R.

Now we relate the two systems via the state transformation matrix T

    ẋ = Tż = Ax + Bu = ATz + Bu    (6.49)
    ⇒ ż = T⁻¹ATz + T⁻¹Bu    (6.50)

and

    y = Cx + Du = CTz + Du.    (6.51)

Therefore,

    Ā = T⁻¹AT,    B̄ = T⁻¹B,    C̄ = CT,    D̄ = D    (6.52)

6.5 Diagonalization

Consider a matrix A with eigenvalues λi (including repeated eigenvalues) and distinct eigenvectors vk. If the number of distinct eigenvectors equals the number of eigenvalues counted with repetition, then A is a nondefective matrix. The algebraic multiplicity of an eigenvalue is the number of times it is repeated, and the geometric
multiplicity of an eigenvalue is the number of linearly independent eigenvectors associated with that eigenvalue. If, for every eigenvalue, the algebraic multiplicity equals the geometric multiplicity, then A is again a nondefective matrix. A nondefective matrix has a full set of eigenvectors; it has an n-dimensional eigenspace.

For a nondefective matrix A, Avi = λi vi for all i = 1, ..., n, and thus

  A[v1 v2 ... vn] = [v1 v2 ... vn] [[λ1, ..., 0], [⋮, ⋱, ⋮], [0, ..., λn]]  (6.53)

Now let T = [v1 v2 ... vn]; then

  AT = TΛ  (6.54)

where Λ is the diagonal eigenvalue matrix

  Λ = diag(λ1, ..., λn).  (6.55)

But since A is nondefective, {v1, v2, ..., vn} are linearly independent and therefore T is invertible. Then

  Λ = T^{-1}AT  (6.56)

Thus, for a nondefective A, there exists a state transformation T that diagonalizes A. In this case, A is said to be diagonalizable.

Example. Apply a diagonalizing state transformation to the 3 tank problem described by

  ẋ = [[-1, 1, 0], [1, -2, 1], [0, 1, -1]]x + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]u  (6.57)

The eigenvectors of the system matrix are

  v1 = [1, 1, 1]ᵀ,  v2 = [1, 0, -1]ᵀ,  v3 = [1, -2, 1]ᵀ  (6.58)

and the eigenvalues are

  λ = 0, -1, -3.  (6.59)

Define the state transformation

  x = Tz  (6.60)
and let

  T = [v1 v2 v3] = [[1, 1, 1], [1, 0, -2], [1, -1, 1]]  ⇒  T^{-1} = (1/6)[[2, 2, 2], [3, 0, -3], [1, -2, 1]].  (6.61)

Now apply the state transformation to the system:

  ż = (1/6)[[2, 2, 2], [3, 0, -3], [1, -2, 1]] [[-1, 1, 0], [1, -2, 1], [0, 1, -1]] [[1, 1, 1], [1, 0, -2], [1, -1, 1]] z
      + (1/6)[[2, 2, 2], [3, 0, -3], [1, -2, 1]] [[1, 0, 0], [0, 1, 0], [0, 0, 1]] u  (6.62)
    = [[0, 0, 0], [0, -1, 0], [0, 0, -3]] z + (1/6)[[2, 2, 2], [3, 0, -3], [1, -2, 1]] u  (6.63)

which is consistent with the diagonalization requirements. Notice that the eigenvalues of the system matrix appear on the diagonal of the transformed system matrix.

6.6 State-space to transfer function

We derive an expression to convert a SISO state-space representation of a system to its corresponding transfer function. This can be used for MIMO systems as well, as long as the inputs/outputs are appropriately restricted before applying the formula.

Consider the state-space representation of a SISO system

  ẋ = Ax + Bu,  y = Cx + Du,  x(0) = 0  (6.64)

Take the Laplace transform (remember x(0) = 0) of the state equation:

  sIX(s) = AX(s) + BU(s)  (6.65)
  (sI - A)X(s) = BU(s)  (6.66)
  X(s) = (sI - A)^{-1}BU(s)  (6.67)

Now substitute the above into the Laplace transform of the output equation:

  Y(s) = CX(s) + DU(s)  (6.68)
       = C(sI - A)^{-1}BU(s) + DU(s)  (6.69)
       = [C(sI - A)^{-1}B + D]U(s)  (6.70)

Thus, the transfer function from U(s) to Y(s) is given by

  Y(s)/U(s) = C(sI - A)^{-1}B + D  (6.71)
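The conversion formula above is easy to exercise numerically. A minimal sketch in Python with NumPy/SciPy (`scipy.signal.ss2tf` implements exactly this formula), using the 3-tank system matrix with a single input and output as an illustration:

```python
import numpy as np
from scipy import signal

# 3-tank system matrix from the running example; single actuator and
# sensor on the first tank (an illustrative choice of B, C, D)
A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])

# Transfer function Y(s)/U(s) = C(sI - A)^{-1}B + D
num, den = signal.ss2tf(A, B, C, D)

# The denominator is |sI - A|, so the poles are the eigenvalues of A
poles = np.sort(np.roots(den).real)
eigs = np.sort(np.linalg.eigvals(A).real)
print(poles)  # ≈ [-3, -1, 0], matching the eigenvalues found earlier
```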
• The conversion from state-space to transfer function does not depend on the choice of states. This may be verified by carrying out the above derivation using a transformed state vector x = Tz. Consider the transformed state and output equations

  ż = T^{-1}ATz + T^{-1}Bu,  y = Cx + Du = CTz + Du.  (6.72)

Then

  Ā = T^{-1}AT,  B̄ = T^{-1}B,  C̄ = CT,  D̄ = D  (6.73)

and

  Y(s)/U(s) = C̄(sI - Ā)^{-1}B̄ + D̄  (6.74)
            = CT(sI - T^{-1}AT)^{-1}T^{-1}B + D  (6.75)

Since (c1 c2 c3)^{-1} = c3^{-1} c2^{-1} c1^{-1} for any invertible matrices c1, c2, c3, we can write the above as

  CT(sI - T^{-1}AT)^{-1}T^{-1}B + D = C[T(sI - T^{-1}AT)T^{-1}]^{-1}B + D  (6.76)
                                    = C[TsIT^{-1} - TT^{-1}ATT^{-1}]^{-1}B + D  (6.77)
                                    = C(sI - A)^{-1}B + D  (6.78)

• The direct relation between poles and eigenvalues can be seen from the formula above. Consider the matrix inversion of (sI - A):

  (sI - A)^{-1} = adj(sI - A)/|sI - A|  (6.79)

where adj denotes the adjugate (classical adjoint). For a zero direct transmission term, D = 0, the state-space to transfer function equation becomes

  Y(s)/U(s) = C(sI - A)^{-1}B + D = C adj(sI - A) B / |sI - A|  (6.80)

The roots of the denominator are clearly the poles of the transfer function, but |sI - A| = 0 is also exactly the characteristic equation whose solutions are the eigenvalues of A. Therefore, the poles and the eigenvalues of a given system are one and the same.

6.7 Cayley-Hamilton Theorem

Theorem 1. Every matrix satisfies its own characteristic equation. If

  |sI - A| = s^n + a_{n-1}s^{n-1} + ... + a_0  (6.81)

then

  A^n + a_{n-1}A^{n-1} + ... + a_0 I = 0.  (6.82)
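Before the proof, the theorem is easy to check numerically for any particular matrix. A sketch in Python with NumPy (`numpy.poly` returns the characteristic polynomial coefficients; the 3-tank matrix is used as an example):

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])
n = A.shape[0]

# Coefficients of |sI - A| = s^n + a_{n-1} s^{n-1} + ... + a_0
coeffs = np.poly(A)  # leading coefficient is 1

# Evaluate the characteristic polynomial at A itself using Horner's rule:
# P = A^n + a_{n-1} A^{n-1} + ... + a_0 I
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(n)

print(P)  # numerically the zero matrix, as the theorem asserts
```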
Proof. We know

  (sI - A)^{-1} = adj(sI - A)/|sI - A|  →  |sI - A| I = adj(sI - A)(sI - A).  (6.83)

Now let adj(sI - A) = Q_n s^n + Q_{n-1}s^{n-1} + ... + Q_0 for some constant matrices Q_i. Then

  |sI - A| I = adj(sI - A)(sI - A)
  ⇒ (s^n + a_{n-1}s^{n-1} + ... + a_0)I = (Q_n s^n + Q_{n-1}s^{n-1} + ... + Q_0)(sI - A).  (6.84)

Equate coefficients on both sides:

  Q_n = 0  (6.85)
  Q_{n-1} = I  (6.86)
  -Q_{n-1}A + Q_{n-2} = a_{n-1}I  (6.87)
  -Q_{n-2}A + Q_{n-3} = a_{n-2}I  (6.88)
  ⋮  (6.89)
  -Q_1 A + Q_0 = a_1 I  (6.90)
  -Q_0 A = a_0 I.  (6.91)

Eliminating the Q_i terms (multiply the equation whose right-hand side contains a_k I by A^k on the right, multiply Q_{n-1} = I by A^n, and sum; the Q_i terms telescope away) gives

  A^n + a_{n-1}A^{n-1} + ... + a_0 I = 0.  (6.92)

6.8 Controllability & reachability

Controllability is concerned with whether a given system can be steered to the origin x = 0 within a finite time, from a given initial condition x0, using the control authority provided. Steering to the origin is not a very restrictive concept, since a simple coordinate translation can accommodate many different types of "origins" or reference points; this leads to the related concept of reachability.

Definition 1. A state vector x0 is said to be controllable if there exists a finite interval [0, T] and an input {u(t), t ∈ [0, T]} such that x(T) = 0. If all state vectors are controllable, then the system is said to be completely state controllable. Otherwise, it is uncontrollable.

In other words, if the system can be driven from any initial condition to the origin in a finite time via a control input, then it is controllable. The definition of the closely related notion of reachability is given below.

Definition 2. A state vector x̄ is said to be reachable (from the origin) if, given x(0) = 0, there exists a finite time interval [0, T] and an input {u(t), t ∈ [0, T]} such that x(T) = x̄. If all state vectors are reachable, then the system is said to be completely reachable. Otherwise, it is unreachable.
For continuous, linear time-invariant (LTI) systems, there is no difference between complete controllability and complete reachability. However, differences between the two notions do become apparent in some other classes of systems, such as discrete-time models. Since we limit our discussion to continuous LTI systems, controllability will be used to refer to either notion (or the stronger condition). In this case, an alternate definition of controllability is convenient.

Definition 3. A system is completely state controllable if there exists an input {u(t), t ∈ [0, T]} that will transfer the state vector from any x1 = x(t0) to any x2 = x(T) in a finite time T.

Starting time at t = 0 is not restrictive, since we can always translate the time axis to achieve the same notion.

Controllability of continuous LTI systems can be verified using many different techniques. One of the most popular and convenient is the controllability matrix ωc(A, B).

Theorem 2. The n-state system ẋ = Ax + Bu is completely state controllable if the controllability matrix ωc(A, B) given by

  ωc(A, B) = [B | AB | A^2 B | ... | A^{n-1}B]  (6.93)

has rank n.

Proof. Let R_T be the set of all vectors of the form x(T) obtained by evaluating all possible inputs {u(t), t ∈ [0, T]} in the solution to ẋ = Ax + Bu. The system is completely state controllable if R_T = R^n. We show below that R_T coincides with the range space of ωc(A, B); if rank{ωc(A, B)} = n, that range space is all of R^n, hence R_T = R^n, implying complete state controllability.

Recall the general solution to ẋ = Ax + Bu:

  x(t) = e^{At}x(0) + ∫_0^t e^{A(t-τ)}Bu(τ)dτ.  (6.94)

Since the matrix exponential e^{At} is defined as

  e^{At} = I + At + A^2 t^2/2! + A^3 t^3/3! + ... = Σ_{k=0}^∞ (t^k/k!) A^k  (6.95)

we can write the general solution at time T, where x(T) ∈ R_T, as

  x(T) = e^{AT}x(0) + Σ_{k=0}^∞ A^k B ∫_0^T ((T-τ)^k / k!) u(τ)dτ  (6.96)
       = Σ_{k=0}^∞ A^k [ (T^k/k!) x(0) + B ∫_0^T ((T-τ)^k / k!) u(τ)dτ ].  (6.97)
If {a_{n-1}, ..., a_1, a_0} are the coefficients of the characteristic equation of A, then by the Cayley-Hamilton theorem

  A^n = -a_{n-1}A^{n-1} - ... - a_1 A - a_0 I,  (6.98)

which implies that every {A^k : k ≥ n} can be written as a linear combination of {A^{n-1}, ..., A, I}. It follows that all matrices {A^k B : k ≥ n} can be expressed as linear combinations of {A^{n-1}B, ..., AB, B}. Therefore x(T) is in the range space of the matrix whose columns are {A^{n-1}B, ..., AB, B}, since it is a linear combination of these. This is precisely the controllability matrix

  ωc(A, B) = [B | AB | A^2 B | ... | A^{n-1}B].  (6.99)

If rank{ωc(A, B)} = n, then the columns of ωc(A, B) span R^n and the possible values of x(T) fill the entire space R^n. That is, all state vectors are reachable and the system is completely state controllable.

Example. Consider the 3 tank system shown in Fig. (6.1) with three actuators, one in every tank.

Figure 6.1: The 3 tank 3 actuator system.

The system can be described by ẋ = Ax + Bu where

  A = [[-1, 1, 0], [1, -2, 1], [0, 1, -1]],  B = [[1, 0, 0], [0, 1, 0], [0, 0, 1]].  (6.100)

The controllability matrix is given by

  ωc(A, B) = [B | AB | A^2 B]  ⇒  rank[ωc(A, B)] = 3 = n  (6.101)

which implies complete state controllability. This is an obvious and expected result, since all three states are independently actuated. But can we still achieve complete state controllability with fewer actuators? If we can, where can we place them? Fewer actuators have potential benefits such as lower actuator cost, higher reliability and lower maintenance.
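The rank test is entirely mechanical, so the actuator-placement questions just posed can be answered with a few lines of code. A sketch in Python with NumPy (the helper name `ctrb` is ours, not from the notes); it checks complete state controllability for a single actuator placed in each tank in turn:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])

# Single actuator in each of the three tanks in turn
ranks = []
for i in range(3):
    B = np.zeros((3, 1))
    B[i, 0] = 1.0
    ranks.append(int(np.linalg.matrix_rank(ctrb(A, B))))

# rank 3 = completely state controllable; anything less is not
print(dict(zip(["tank 1", "tank 2", "tank 3"], ranks)))
```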
Figure 6.2: The 3 tank system with actuators in the first and second tanks.

Consider the system when we remove the third actuator, as in Fig. (6.2). The control matrix is then given by

  B = [[1, 0], [0, 1], [0, 0]]  (6.102)

and the controllability matrix is

  ωc(A, B) = [[1, 0, -1, 1, 2, -3], [0, 1, 1, -2, -3, 6], [0, 0, 0, 1, 1, -3]]  ⇒  rank[ωc(A, B)] = 3  (6.103)

which implies that the third actuator is not necessary: we can still have complete state controllability with only two actuators.

Now change the locations of the two actuators: move the actuator in the second tank to the third tank, as in Fig. (6.3).

Figure 6.3: The 3 tank system with actuators in the first and third tanks.

The control matrix is then given as

  B = [[1, 0], [0, 0], [0, 1]]  (6.104)
and the controllability matrix is given by

  ωc(A, B) = [[1, 0, -1, 0, 2, 1], [0, 0, 1, 1, -3, -3], [0, 1, 0, -1, 1, 2]]  ⇒  rank[ωc(A, B)] = 3.  (6.105)

Thus, the system is completely state controllable even after changing the locations of the actuators. So are two actuators even necessary? Can we still have complete state controllability with only one actuator? Consider the situation where we have only one actuator, in the first tank, as shown in Fig. (6.4).

Figure 6.4: The 3 tank system with only one actuator in the first tank.

The control matrix is then given by

  B = [1, 0, 0]ᵀ  (6.106)

and the controllability matrix is given by

  ωc(A, B) = [[1, -1, 2], [0, 1, -3], [0, 0, 1]]  ⇒  rank[ωc(A, B)] = 3.  (6.107)

Thus the system is completely state controllable even with only one actuator! But can the actuator be placed in any tank? Consider the situation where there is only one actuator, in the second tank, as shown in Fig. (6.5). The control matrix is then given by

  B = [0, 1, 0]ᵀ  (6.108)

and the controllability matrix is given by

  ωc(A, B) = [[0, 1, -3], [1, -2, 6], [0, 1, -3]]  ⇒  rank[ωc(A, B)] = 2.  (6.109)
Figure 6.5: The 3 tank system with only one actuator in the second tank.

Thus the system is not completely state controllable if we place the actuator in the second tank. This nonintuitive result illustrates why checking controllability is important.

6.9 Controllability gramian

We have proved that for a completely state controllable system there exists a control {u(t), t ∈ [0, T]} that drives the system to any final state vector x(T) in a finite time T. The question may then be asked: what is this control input, and how can it be computed? This is where the controllability gramian becomes relevant.

Definition 4. For a system ẋ = Ax + Bu with x(0) = 0, the controllability gramian is

  Xc(t) = ∫_0^t e^{Aτ}BBᵀe^{Aᵀτ} dτ.  (6.110)

Theorem 3. Consider a completely state controllable system ẋ = Ax + Bu with x(0) = 0. The control input that transfers the system from 0 to some vector x(t1) in a finite time T = t1 is given by

  u(t) = Bᵀe^{Aᵀ(t1-t)} Xc^{-1}(t1) x(t1)  (6.111)

where Xc^{-1}(t1) is the inverse of the controllability gramian evaluated at time t1.

Proof. Substituting the proposed control input u(t) into the general solution, evaluated at t = t1 with x(0) = 0, gives

  x(t1) = e^{At1}x(0) + ∫_0^{t1} e^{A(t1-τ)}Bu(τ)dτ  (6.112)
        = ∫_0^{t1} e^{A(t1-τ)}BBᵀe^{Aᵀ(t1-τ)} Xc^{-1}(t1)x(t1) dτ  (6.113)
        = [ ∫_0^{t1} e^{A(t1-τ)}BBᵀe^{Aᵀ(t1-τ)} dτ ] Xc^{-1}(t1)x(t1)  (6.114)
        = [ ∫_0^{t1} e^{Aτ}BBᵀe^{Aᵀτ} dτ ][ ∫_0^{t1} e^{Aτ}BBᵀe^{Aᵀτ} dτ ]^{-1} x(t1)  (6.115)
        = x(t1)  (6.116)

where (6.115) uses the change of variable τ → t1 - τ in the first integral.
The above requires that the controllability gramian be invertible. This is shown by contradiction. Assume Xc(t1) is singular (noninvertible); then there exists a nonzero vector v such that Xc(t1)v = 0, due to the linear dependence of the columns of Xc(t1). This implies that

  vᵀ ∫_0^{t1} e^{Aτ}BBᵀe^{Aᵀτ} dτ v = 0.  (6.117)

Since the integrand is nonnegative, it must be that

  vᵀe^{Aτ}BBᵀe^{Aᵀτ}v = 0  ∀ τ ∈ [0, t1].  (6.118)

If y(τ) = Bᵀe^{Aᵀτ}v, the above reads yᵀ(τ)y(τ) = 0, which implies y(τ) = 0 for all τ ∈ [0, t1]. Since y vanishes identically and t1 > 0, the first and all higher derivatives at τ = 0⁺ must be zero, i.e.

  (d^k yᵀ/dτ^k)(0⁺) = 0  (6.119)

but from the series expansion of the matrix exponential

  (d^k yᵀ/dτ^k)(0⁺) = vᵀA^k B  (6.120)

implying that vᵀA^k B = 0 for all k = 0, 1, ..., n, i.e. vᵀωc(A, B) = 0. Since the controllability matrix has full rank, this is only possible for v = 0. But this is a contradiction, since we assumed v ≠ 0. Thus the original assumption of a singular Xc(t1) is false, and Xc(t1) is proven invertible.

Theorem 4. If the controllability gramian of a system is positive definite for every t ≥ 0, then the system is controllable.

Proof. See (Mackenroth, 2003), pp. 104-105.

A state transformation such as x = Tz does not change the controllability characteristics of a system, since controllability is an inherent system property.

6.10 Output controllability

Definition 5. The system ẋ = Ax + Bu is output controllable if an input u(t) exists that will transfer the output from y(t0) to any y(t1) for any t1 - t0 > 0.

Theorem 5. The system ẋ = Ax + Bu, y = Cx + Du is output controllable if the matrix given by

  [CB | CAB | CA^2 B | ... | CA^{n-1}B | D]  (6.121)

has rank m, where m is the number of outputs.

Output controllability and state controllability are related, but one does not imply the other.
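Theorem 5 is, once again, a mechanical rank test. A sketch in Python with NumPy (the variable names are ours), checking the single-actuator 3-tank system with output y = x2:

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])
B = np.array([[0.0], [1.0], [0.0]])   # one actuator, in the second tank
C = np.array([[0.0, 1.0, 0.0]])       # measure the second tank height
D = np.array([[0.0]])

n, m = A.shape[0], C.shape[0]

# Output controllability matrix [CB, CAB, CA^2 B, ..., CA^{n-1}B, D]
blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(n)] + [D]
Mo = np.hstack(blocks)

# Output controllable iff rank equals the number of outputs m
rank_Mo = int(np.linalg.matrix_rank(Mo))
print(rank_Mo, m)
```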
Figure 6.6: The 3 tank system with only one actuator in the second tank.

Example. Consider the 3 tank system with one actuator shown in Fig. (6.6). The state equation of the system is

  ẋ = [[-1, 1, 0], [1, -2, 1], [0, 1, -1]]x + [0, 1, 0]ᵀ u.  (6.122)

This system was shown earlier to be not completely state controllable. For the output y = x2, is the system output controllable? For the given output, the output matrix is

  C = [0 1 0]  (6.123)

and the direct transmission matrix is D = 0 (so it contributes nothing to the rank). The output controllability matrix is then given by

  Cωc(A, B) = [0 1 0][[0, 1, -3], [1, -2, 6], [0, 1, -3]] = [1 -2 6].  (6.124)

The number of outputs is m = 1 and thus rank[Cωc(A, B)] = 1 = m, which implies that the system is output controllable. This is an example of a system that is not completely state controllable but is output controllable.

6.11 Control canonical form and controllability

Recall the control canonical form ẋc = Ac xc + Bc u,

  Ac = [[0, 1, 0, ..., 0], [0, 0, 1, ..., 0], [⋮, ⋮, ⋮, ⋱, ⋮], [-a_0, -a_1, -a_2, ..., -a_{n-1}]]  (6.125)

and

  Bc = [0, 0, ..., 0, 1]ᵀ  (6.126)
where a_0, ..., a_{n-1} are the coefficients of the characteristic equation of the system: |sI - Ac| = s^n + a_{n-1}s^{n-1} + ... + a_1 s + a_0 = 0.

The controllability matrix for a system in this form is

  ωc(Ac, Bc) = [Bc | Ac Bc | ... | Ac^{n-1}Bc]  (6.127)
             = [[0, 0, ..., 0, 1], [0, 0, ..., 1, -a_{n-1}], [⋮], [0, 1, -a_{n-1}, ..., ∗], [1, -a_{n-1}, ..., ∗, ∗]]  (6.128)

a matrix with ones on the antidiagonal and zeros above it (the ∗ entries are polynomials in the a_i). Since every antidiagonal pivot is nonzero, it is full rank: rank ωc(Ac, Bc) = n. Thus, any system that can be written in control canonical form is completely state controllable. The converse is also true: any completely state controllable SISO system can be transformed into control canonical form by an appropriate state transformation.

The state transformation that takes a completely state controllable SISO system ẋ = Ax + Bu to control canonical form ẋc = Ac xc + Bc u is given by

  Tsc = ωc(A, B) ωc(Ac, Bc)^{-1}  (6.129)

where

  ωc(Ac, Bc)^{-1} = [[a_1, a_2, ..., a_{n-1}, 1], [a_2, a_3, ..., 1, 0], [⋮], [a_{n-1}, 1, ..., 0, 0], [1, 0, ..., 0, 0]]  (6.130)

Thus the state transformation x = Tsc xc transforms any completely state controllable SISO system ẋ = Ax + Bu to control canonical form ẋc = Ac xc + Bc u.

Why does this choice of Tsc work? Consider Ac and Bc written in terms of Tsc:

  Ac = Tsc^{-1} A Tsc  (6.131)
  Bc = Tsc^{-1} B  (6.132)

and the following identity, which holds for any similarity transformation:

  Ac^2 = Tsc^{-1}ATsc Tsc^{-1}ATsc = Tsc^{-1}A^2 Tsc  ⇒  Ac^{n-1} = Tsc^{-1}A^{n-1}Tsc  (6.133)

Now by definition

  ωc(Ac, Bc) = [Bc | Ac Bc | Ac^2 Bc | ... | Ac^{n-1}Bc]  (6.134)
implying

  ωc(Ac, Bc) = [Tsc^{-1}B | Tsc^{-1}ATsc Tsc^{-1}B | Tsc^{-1}A^2 Tsc Tsc^{-1}B | ... | Tsc^{-1}A^{n-1}Tsc Tsc^{-1}B]  (6.135)
             = [Tsc^{-1}B | Tsc^{-1}AB | Tsc^{-1}A^2 B | ... | Tsc^{-1}A^{n-1}B]  (6.136)
             = Tsc^{-1}[B | AB | A^2 B | ... | A^{n-1}B] = Tsc^{-1} ωc(A, B).  (6.137)

Therefore

  ωc(Ac, Bc) = Tsc^{-1} ωc(A, B)  ⇒  Tsc = ωc(A, B) ωc(Ac, Bc)^{-1}  (6.138)

6.12 Stabilizability

This section deals with systems that are not completely state controllable. Is controllability possible for some subset of the states? And is feedback control possible for the controllable states while ignoring the uncontrollable ones? These questions are answered by first introducing the next theorem regarding the normal form of a system.

Theorem 6. Suppose ωc(A, B) for ẋ = Ax + Bu has rank r < n. Then there is a similarity transformation such that the following are true:

• The transformed pair has the form

  Ā = T^{-1}AT = [[A11, A12], [0, A22]],  B̄ = T^{-1}B = [[B1], [0]]  (6.139)

  where A11 ∈ R^{r×r} and B1 ∈ R^{r×l}. This form is called the normal form of the system.

• The system described by A11 and B1 is controllable.

(l is the number of inputs; we let l = 1 for our consideration.)

Proof. Consider a transformation T = [T1 | T2] where

  T1 = [B | AB | ... | A^{k-1}B]  (6.140)

with k = r, the rank of ωc(A, B), and T2 chosen such that T is invertible/nonsingular. Now let T^{-1} = [S1; S2]; then

  I = T^{-1}T  ⇒  [[S1 T1, S1 T2], [S2 T1, S2 T2]] = [[I, 0], [0, I]]  (6.141)
implying

  S1 T1 = I  (6.142)
  S1 T2 = 0  (6.143)
  S2 T1 = 0  (6.144)
  S2 T2 = I.  (6.145)

Now consider the transformed control matrix

  B̄ = T^{-1}B = [[S1 B], [S2 B]].  (6.146)

By inspection of T1 it is obvious that B is in the range of T1 = [B | AB | ... | A^{k-1}B], i.e., B = T1 K for some matrix K. Then it is also true that

  S2 B = (S2 T1)K  (6.147)

but S2 T1 = 0, thus S2 B = 0. Now let B1 = S1 B; it follows that

  B̄ = [[S1 B], [S2 B]] = [[B1], [0]]  (6.148)

Consider the similarity transformation T applied to the system matrix A:

  Ā = T^{-1}AT = [[S1], [S2]] A [T1 | T2] = [[S1 A T1, S1 A T2], [S2 A T1, S2 A T2]]  (6.149)

Now

  A T1 = A[B | AB | ... | A^{k-1}B] = [AB | A^2 B | ... | A^k B]  (6.150)

and since rank ωc(A, B) = k while the k columns of T1 are linearly independent, the Krylov sequence saturates: A^k B can be written as a linear combination of {B, AB, ..., A^{k-1}B} (for k = n this is just the Cayley-Hamilton theorem,

  A^n = -a_{n-1}A^{n-1} - ... - a_1 A - a_0 I).  (6.151)

This implies that A T1 is in the range of T1, i.e., A T1 = T1 L for some matrix L, and hence

  S2 A T1 = S2 T1 L = 0  (6.152)

since S2 T1 = 0. Let A11 = S1 A T1, A12 = S1 A T2, and A22 = S2 A T2. Therefore

  Ā = [[S1 A T1, S1 A T2], [S2 A T1, S2 A T2]]  (6.153)
    = [[A11, A12], [0, A22]]  (6.154)
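Theorem 6's construction can be reproduced numerically. A minimal sketch in Python with NumPy, using the 3-tank system with one actuator in the second tank (an uncontrollable configuration) and an ad-hoc choice of T2:

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])
B = np.array([[0.0], [1.0], [0.0]])   # actuator in the second tank; rank ωc = 2 < 3

# T = [T1 | T2], T1 = [B, AB], T2 chosen so that T is invertible
T1 = np.hstack([B, A @ B])
T2 = np.array([[1.0], [0.0], [0.0]])
T = np.hstack([T1, T2])

Abar = np.linalg.inv(T) @ A @ T
Bbar = np.linalg.inv(T) @ B

# Normal form: lower-left block of Abar is zero, lower block of Bbar is zero,
# and A22 (here the scalar Abar[2, 2]) governs the uncontrollable mode
print(np.round(Abar, 10))
print(np.round(Bbar, 10))
```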
The benefit of this theorem is that we can ensure a separation between states that can be affected by the control and states that cannot. Therefore, if the uncontrollable states are stable and we can stabilize the controllable states, then the entire system can be made stable. As will be shown later, any controllable set of states can be stabilized via appropriate feedback. The next definition sums up the above.

Definition 6. If the matrix A22 in the normal form of the system ẋ = Ax + Bu has all LHP eigenvalues, then the system is stabilizable. Alternatively, if the uncontrollable subspace of the system is stable, then the system is stabilizable.

The similarity transformation T that yields the normal form of a system is straightforward to construct, as was shown in the proof of Theorem (6). It is given by

  T = [T1 | T2]  (6.155)

where

  T1 = [B | AB | ... | A^{k-1}B]  (6.156)

and T2 is chosen such that T is invertible/nonsingular.

Example. Is the familiar 3 tank problem below completely state controllable? If not, is it stabilizable?

  u(t) → x1, x2, x3  (input in the second tank)

The system is described as

  ẋ = Ax + Bu  (6.157)
  y = Cx + Du  (6.158)

where

  A = [[-1, 1, 0], [1, -2, 1], [0, 1, -1]],  B = [0, 1, 0]ᵀ,  C = [[1, 0, 0], [0, 1, 0], [0, 0, 1]],  D = [0, 0, 0]ᵀ  (6.159)

The controllability matrix ωc(A, B) for the system is given by

  ωc(A, B) = [B | AB | A^2 B] = [[0, 1, -3], [1, -2, 6], [0, 1, -3]].  (6.160)
It can be easily verified that rank[ωc(A, B)] = 2 < n = 3, and therefore the system is not completely state controllable. However, we do know that two states are controllable. We determine next whether the system is stabilizable.

Construct the state transformation matrix T = [T1 | T2] where

  T1 = [B | AB] = [[0, 1], [1, -2], [0, 1]]  (6.161)

and T2 is chosen such that T is invertible. Let

  T2 = [1, 0, 0]ᵀ  (6.162)

and thus

  T = [[0, 1, 1], [1, -2, 0], [0, 1, 0]].  (6.163)

Applying this transformation as x = Tz gives

  ż = T^{-1}ATz + T^{-1}Bu  (6.164)
    = [[0, 1, 2], [0, 0, 1], [1, 0, -1]] [[-1, 1, 0], [1, -2, 1], [0, 1, -1]] [[0, 1, 1], [1, -2, 0], [0, 1, 0]] z
      + [[0, 1, 2], [0, 0, 1], [1, 0, -1]] [0, 1, 0]ᵀ u  (6.165)
    = [[0, 0, 1], [1, -3, 0], [0, 0, -1]] z + [1, 0, 0]ᵀ u  (6.166)
    = Āz + B̄u  (6.167)

and

  y = CTz + Du  (6.168)
    = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] [[0, 1, 1], [1, -2, 0], [0, 1, 0]] z  (6.169)
    = [[0, 1, 1], [1, -2, 0], [0, 1, 0]] z  (6.170)
    = C̄z.  (6.171)

It is evident from the above that

  A11 = [[0, 0], [1, -3]],  A12 = [1, 0]ᵀ,  A22 = -1,  C̄1 = [[0, 1], [1, -2], [0, 1]],  C̄2 = [1, 0, 0]ᵀ  (6.172)
and since the eigenvalue of A22 (= -1) is negative, the system is stabilizable.

6.13 Popov-Belevitch-Hautus test for controllability and stabilizability

Theorem 7.
1. The system ẋ = Ax + Bu is controllable if and only if

  rank[A - λI | B] = n  ∀λ ∈ C  (6.173)

2. The system ẋ = Ax + Bu is stabilizable if and only if

  rank[A - λI | B] = n  {∀λ ∈ C : Re λ ≥ 0}  (6.174)

Proof. See (Mackenroth, 2003), p. 107.

6.14 Observability

Observability is an important property often needed for practical control design. It deals with the problem of estimating the system states (since they are needed for feedback control) when only the outputs, the control input, and the system model are known. Estimation is used for a variety of reasons. In some cases it is used out of necessity, when system states cannot be measured directly. In situations where sensors are expensive, estimation provides a vastly more cost-effective and reliable alternative.

Definition 7. The system given by ẋ = Ax + Bu and y = Cx + Du is completely state observable if every state in x can be uniquely determined from observations of y and knowledge of the input u(t) over a finite time interval t0 ≤ t ≤ t1.

Consider the special case where observability is a trivial notion. Let C be square (i.e., number of outputs m = number of states n) and of full rank. Then all the states can be easily determined, given the input u(t):

  y = Cx + Du  →  x = C^{-1}(y - Du).  (6.175)

In all other cases, it is necessary to independently verify the observability of the system. The observability matrix ωo(A, C) is helpful in this regard.

Theorem 8. The n-state, m-output system given by ẋ = Ax + Bu and y = Cx + Du is completely state observable if the observability matrix ωo(A, C) given by

  ωo(A, C) = [C; CA; CA^2; ...; CA^{n-1}]  (6.176)

has rank n.
Proof. Consider the first n - 1 derivatives of the output y = Cx + Du:

  y = Cx + Du  (6.177)
  ẏ = Cẋ + Du̇ = C(Ax + Bu) + Du̇ = CAx + CBu + Du̇  (6.178)
  ÿ = CA(Ax + Bu) + CBu̇ + Dü = CA^2 x + CABu + CBu̇ + Dü  (6.179)
  y^(3) = CA^3 x + CA^2 Bu + CABu̇ + CBü + Du^(3)  (6.180)
  ⋮

which we can generalize as follows:

  [y; ẏ; ÿ; ...; y^(n-1)] = [C; CA; CA^2; ...; CA^{n-1}] x
      + [[D, 0, 0, ..., 0], [CB, D, 0, ..., 0], [CAB, CB, D, ..., 0], [⋮], [CA^{n-2}B, CA^{n-3}B, CA^{n-4}B, ..., D]] [u; u̇; ü; ...; u^(n-1)]  (6.181)

Let

  Y = [y; ẏ; ...; y^(n-1)],  θ = [C; CA; ...; CA^{n-1}],
  τ = [[D, 0, ..., 0], [CB, D, ..., 0], [⋮], [CA^{n-2}B, ..., D]],  U = [u; u̇; ...; u^(n-1)]  (6.182)

then

  Y = θx + τU  →  θx = Y - τU  (6.183)

where θ ∈ R^{nm×n}. All quantities in this equation are known except x. Thus, if Y - τU is in the range space of θ, then a solution exists. Furthermore, for the solution to be unique, the nullspace of θ must be trivial. This is because any solution is of the form xr + xn, where xr is a particular solution and xn is any element of the nullspace. Thus the solution is unique only if the nullspace contains only one element, 0 (the trivial solution).

In the special case m = 1 (single output), θ is square, and invertibility is a necessary and sufficient condition for observability: θ must be of full rank (rank(θ) = n).

In all other cases, invertibility is too strong a condition to ask for since θ is not square, and we instead test whether θ has a trivial nullspace. If it does, the system is observable; otherwise it is not. The nullspace is related to the
column space (range space) of θ: if the columns of θ are linearly independent, then the nullspace contains only the trivial solution and the system is observable. Since θ ∈ R^{nm×n}, the requirement is rank(θ) = n for observability. Note that θ = ωo(A, C).

We only go up to the (n - 1)th derivative of y because, from the Cayley-Hamilton theorem, A^k with k > n - 1 can always be expressed as a linear combination of I, A, ..., A^{n-1}. Thus evaluating higher-order derivatives of y does not change the rank condition on θ, and the previous condition for observability of x still holds.

Example. Consider the 3 tank example as before, with system equations

  ẋ = [[-1, 1, 0], [1, -2, 1], [0, 1, -1]]x + Bu  (6.184)
  y = Cx  (6.185)

where B can be any vector. Since observability depends on the output matrix C, we consider observability for different output choices.

If all three tank heights can be measured, the output matrix is simply the 3 × 3 identity matrix. The system is clearly completely state observable in this case because C is invertible, and it follows that rank[ωo(A, C)] = 3 = n.

If we can measure the heights in the first and last tanks only, the C matrix is given by

  C = [[1, 0, 0], [0, 0, 1]].  (6.186)

Although rank(C) = 2 < n, this by itself does not imply anything about state observability. Then

  ωo(A, C) = [[1, 0, 0], [0, 0, 1], [-1, 1, 0], [0, 1, -1], [2, -3, 1], [1, -3, 2]]  ⇒  rank[ωo(A, C)] = 3 = n  (6.187)

which means the system is still completely state observable.

If only one tank height is available, would the system still be completely state observable? When only the first tank height is available in the output, C is given as

  C = [1 0 0]  (6.188)

and thus

  ωo(A, C) = [[1, 0, 0], [-1, 1, 0], [2, -3, 1]]  ⇒  rank[ωo(A, C)] = 3 = n  (6.189)
which implies that the system is still completely observable. This is an important finding, since it means we can reconstruct all three tank heights from the measured height of a single tank. Hence we need only one sensor instead of three.

Now consider the situation where only the height of the second tank is available as a measurement. The C matrix is then given as

  C = [0 1 0]  (6.190)

and

  ωo(A, C) = [[0, 1, 0], [1, -2, 1], [-3, 6, -3]]  ⇒  rank[ωo(A, C)] = 2 < n.  (6.191)

Therefore, if the measurement is of the second tank height only, the system is not completely state observable. This nonintuitive result underscores the need for observability tests.

6.15 Observability gramian

Theorem 9. The system given by ẋ = Ax + Bu and y = Cx + Du is completely state observable if the observability gramian Xo(t) given by

  Xo(t) = ∫_0^t e^{Aᵀτ}CᵀCe^{Aτ} dτ  (6.192)

is positive definite for each t ≥ 0.

6.16 Observable canonical form and observability

Any completely state observable system given by ẋ = Ax + Bu and y = Cx + Du can be written in observable canonical form: ż = Ao z + Bo u and y = Co z + Du. This is done using the transformation Tso given by

  Tso^{-1} = ωo(Ao, Co)^{-1} ωo(A, C)  (6.193)

where ωo(Ao, Co)^{-1} is given by

  ωo(Ao, Co)^{-1} = [[a_1, a_2, ..., a_{n-1}, 1], [a_2, a_3, ..., 1, 0], [⋮], [1, 0, ..., 0, 0]]  (6.194)

The constants a_1, ..., a_{n-1} are the coefficients of the characteristic equation of A.
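The observability checks in the 3-tank example of §6.14 are equally mechanical. A sketch in Python with NumPy (the helper name `obsv` is ours), checking a single level sensor placed on each tank in turn:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])

# Measure each single tank height in turn
ranks = []
for i in range(3):
    C = np.zeros((1, 3))
    C[0, i] = 1.0
    ranks.append(int(np.linalg.matrix_rank(obsv(A, C))))

# rank 3 = completely state observable; anything less is not
print(dict(zip(["tank 1", "tank 2", "tank 3"], ranks)))
```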
6.17 Detectability

What if a system is not completely state observable? Can we still observe some of the states successfully? The results are similar to stabilizability for systems that are not completely controllable.

Theorem 10. Suppose ωo(A, C) for a system given by ẋ = Ax + Bu and y = Cx + Du has rank r < n. Then there is a similarity transformation such that the following are true:

• The transformed pair has the form

  Ā = T^{-1}AT = [[A11, 0], [A21, A22]],  C̄ = CT = [C1 0]  (6.195)

  where A11 ∈ R^{r×r} and C1 ∈ R^{m×r}. This is another normal form of the system.

• The system described by A11 and C1 is observable.

Definition 8. The system given by ẋ = Ax + Bu and y = Cx + Du is detectable if the matrix A22 is stable.

6.18 PBH test for observability and detectability

Theorem 11.
1. The system given by ẋ = Ax + Bu and y = Cx + Du is observable if and only if

  rank[A - λI; C] = n  ∀λ ∈ C  (6.196)

2. The system given by ẋ = Ax + Bu and y = Cx + Du is detectable if and only if

  rank[A - λI; C] = n  {∀λ ∈ C : Re λ ≥ 0}  (6.197)

6.19 Duality between observability and controllability

Theorem 12. Consider a system described by ẋ = Ax + Bu and y = Cx + Du. Then the system is completely state controllable if and only if the dual system given by ẋ = Aᵀx + Cᵀu and y = Bᵀx + Dᵀu is completely observable.

Proof. This can be deduced directly from the observability and controllability matrices, since ωc(A, B)ᵀ = ωo(Aᵀ, Bᵀ).
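The duality of Theorem 12 can be verified numerically: the observability matrix of (A, C) has the same rank as the controllability matrix of the dual pair (Aᵀ, Cᵀ). A sketch in Python with NumPy (helper names are ours):

```python
import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[-1.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, -1.0]])
C = np.array([[0.0, 1.0, 0.0]])   # sensor on the second tank

r_obs = int(np.linalg.matrix_rank(obsv(A, C)))
r_dual = int(np.linalg.matrix_rank(ctrb(A.T, C.T)))
print(r_obs, r_dual)  # equal, by duality
```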
Chapter 7

Control design in state-space

7.1 State feedback design/Pole-placement

We will derive a feedback scheme for the system ẋ = Ax + Bu. We assume complete state controllability of the system.

The feedback problem can be classified as either state feedback or output feedback. As the names suggest, output feedback uses the output error signal y - yr (yr is the reference output) and state feedback uses the entire state error signal x - xr (xr is the reference state vector).

When the reference values xr or yr are zero and all the states are available in feedback, the control problem is called output/state feedback regulation. A regulation problem is not as restrictive as it seems, because we can always translate the state-space system to any constant (i.e., not time-varying) nonzero set-point. When xr or yr are nonzero functions (time-varying in general), the problem is referred to as output/state feedback reference tracking.

The most basic type of state-space control problem is the state feedback regulator. The remaining types of control problems may be dealt with in the same framework with some extensions. Thus, we first consider a basic state feedback design technique known as pole-placement (introduced earlier in terms of transfer functions and SISO systems).

Let the control input u be given by

  u = K(xr - x) = -Kx  (7.1)

where K = [k1 k2 ... kn] is a gain vector to be determined. The closed-loop system is then given by

  ẋ = Ax - BKx = (A - BK)x = A_CL x  (7.2)

where A_CL = A - BK is the closed-loop system matrix. The control problem is then reduced to choosing K such that the poles of the closed-loop system, i.e., the eigenvalues of A_CL, are at their desired values. The desired values of the poles are determined as discussed in previous chapters.
    • 7.1. State feedback design/Pole-placementConvert the closed-loop system to controllable canonical form using the standardstate transformation x = Tsc xc . x = (A − BK)x ˙ (7.3)⇒ xc = (Ac − Bc Kc )x ˙ (7.4)      0 1 0 ... 0 0  0 0 1 ... 0  0        . . . .. .  − . k =  . . . .   .  1,c k2,c ... kn,c  xc   . . . . .  .   0 0 0 ... 1  0  −a0 −a1 −a2 ... −an−1 1 (7.5)   0 1 0 ... 0   0 0 1 ... 0   = . . . . . . .. . .  xc .    . . . . .   0 0 0 ... 1  −(a0 + k1,c ) −(a1 + k2,c ) −(a2 + k3,c ) ... −(an−1 + kn,c ) (7.6)The previous expression allows us to determine the closed-loop poles in terms ofthe individual gains ki . Thus, Closed-loop poles = eigenvalues of (Ac − Bc Kc ) (7.7) = roots[|sI − (Ac − Bc Kc )|] (7.8) n n−1 = roots[s + (an−1 + kn,c )s + . . . + (a1 + k2,c )s (7.9) + (a0 + k1,c )] (7.10) n n−1 = roots[s + αn−1 s + . . . + α1 s + α0 (a0 + k1,c )] (7.11)where α0 , . . . , αn−1 are the coefficients of the desired closed-loop characteristicequation based on the desired closed-loop poles. Equating (7.10), (7.11) andrearranging yields an expression for Kc Kc = k1,c k2,c k3,c ... kn,c (7.12) = (α0 − a0 ) (α1 − a1 ) (α2 − a2 ) . . . (αn−1 − an−1 ) . (7.13) −1Now apply the inverse state transformation xc = Tsc x to the gain matrix Kc toobtain K −1 K = Kc Tsc (7.14) −1 = (α0 − a0 ) (α1 − a1 ) (α2 − a2 ) . . . (αn−1 − an−1 ) Tsc (7.15)which we can use in the original system as a feedback gain vector.The state transformation matrix Tsc is found as before: −1 Tsc = ωc (A, B)ωc (Ac , Bc ) (7.16)where   a1 a2 ... an−1 1   a2 a3 ... 1 0 −1 . . . . . ωc (Ac , Bc ) = .. 0 . (7.17)    . . 0  an−1 1 ... 0 0 1 0 ... 0 0 78
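The procedure above can be sketched numerically. The sketch below (a rough illustration assuming numpy is available; `place_by_canonical_form` and `ctrb` are hypothetical helper names, not a standard API) builds the canonical-form gain of (7.13) and maps it back through $T_{sc}$ as in (7.14)-(7.16):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def place_by_canonical_form(A, B, desired_poles):
    """Single-input pole placement via the controllable canonical form."""
    n = A.shape[0]
    a = np.poly(np.linalg.eigvals(A))[::-1]   # open-loop coeffs a_0, ..., a_{n-1}, 1
    alpha = np.poly(desired_poles)[::-1]      # desired coeffs alpha_0, ..., alpha_{n-1}, 1
    Kc = (alpha - a)[:n].reshape(1, n)        # canonical-form gain, eq. (7.13)
    # Controllable canonical form pair (A_c, B_c)
    Ac = np.zeros((n, n))
    Ac[:-1, 1:] = np.eye(n - 1)
    Ac[-1, :] = -a[:n]
    Bc = np.zeros((n, 1))
    Bc[-1, 0] = 1.0
    Tsc = ctrb(A, B) @ np.linalg.inv(ctrb(Ac, Bc))   # eq. (7.16)
    return Kc @ np.linalg.inv(Tsc)                   # eq. (7.14)

A = np.array([[-2.0, 1.0, 0.0], [0.0, -2.0, 0.0], [0.0, 0.0, -1.0]])
B = np.array([[0.0], [1.0], [1.0]])
K = place_by_canonical_form(A, B, [-4.0, -4.0, -4.0])

# closed-loop characteristic polynomial should be (s+4)^3 = s^3 + 12s^2 + 48s + 64
assert np.allclose(np.poly(A - B @ K), [1.0, 12.0, 48.0, 64.0], atol=1e-2)
```

For the system used in the worked example that follows, this returns $K \approx \begin{bmatrix} -8 & -20 & 27 \end{bmatrix}$ (the loose tolerance on the characteristic polynomial accounts for the ill-conditioning of repeated eigenvalues).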
Example

Consider the system
$$\dot{x} = \begin{bmatrix} -2 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} u. \tag{7.18}$$
Find a state feedback controller $u = -Kx$ that yields all closed-loop poles at $-4$.

First check for controllability:
$$\omega_c(A, B) = \begin{bmatrix} B & AB & A^2B \end{bmatrix} = \begin{bmatrix} 0 & 1 & -4 \\ 1 & -2 & 4 \\ 1 & -1 & 1 \end{bmatrix} \tag{7.19}$$
$$\Rightarrow \operatorname{rank}[\omega_c(A, B)] = 3 \tag{7.20}$$
which implies complete state controllability. The open-loop characteristic equation of the system is
$$|sI - A| = (s+2)^2(s+1) = s^3 + a_2 s^2 + a_1 s + a_0 \tag{7.21}$$
$$= s^3 + 5s^2 + 8s + 4 \tag{7.22}$$
and the desired closed-loop characteristic equation is
$$(s+4)^3 = s^3 + \alpha_2 s^2 + \alpha_1 s + \alpha_0 = s^3 + 12s^2 + 48s + 64. \tag{7.23}$$
Then
$$K = \begin{bmatrix} (\alpha_0 - a_0) & (\alpha_1 - a_1) & (\alpha_2 - a_2) \end{bmatrix} T_{sc}^{-1} \tag{7.24}$$
$$= \begin{bmatrix} 60 & 40 & 7 \end{bmatrix} T_{sc}^{-1} \tag{7.25}$$
and
$$T_{sc} = \omega_c(A, B)\,\omega_c^{-1}(A_c, B_c) \tag{7.26}$$
$$= \omega_c(A, B) \begin{bmatrix} a_1 & a_2 & 1 \\ a_2 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 2 & 3 & 1 \\ 4 & 4 & 1 \end{bmatrix} \quad \Rightarrow \quad T_{sc}^{-1} = \begin{bmatrix} -1 & -1 & 1 \\ 2 & 1 & -1 \\ -4 & 0 & 1 \end{bmatrix}. \tag{7.27}$$
Thus, the gain $K$ is given by
$$K = \begin{bmatrix} 60 & 40 & 7 \end{bmatrix} \begin{bmatrix} -1 & -1 & 1 \\ 2 & 1 & -1 \\ -4 & 0 & 1 \end{bmatrix} = \begin{bmatrix} -8 & -20 & 27 \end{bmatrix}. \tag{7.28}$$

7.2 Ackermann's formula

Ackermann's formula is a convenient method to obtain the gain $K$ for pole placement that has the benefit of avoiding a state transformation, but it still requires a matrix inversion and possibly tedious matrix algebra.
    • 7.2. Ackermann’s formula ˙Theorem 13 (Ackermann’s formula). Consider the open-loop system x = Ax+Buwith characteristic equation sn + an−1 sn−1 + . . . + a1 s + a0 . (7.29)Then for the closed-loop system given by x = (A − BK)x, the gain K is ˙ −1 K= 0 ... 0 1 ωc (A, B)φ(A) (7.30)where ωc (A, B) is the controllability matrix of the system and φ is the desiredcharacteristic equation given by φ(s) = sn + αn−1 sn−1 + . . . + α1 + α0 . (7.31)Proof. By the Cayley-Hamilton theorem, the closed-loop system matrix ACL =A − BK must satisfy its own characteristic equation φ(s) φ(ACL ) = 0 (7.32)but this is not true for A, i.e., φ(A) = 0.We will demonstrate the validity of the Ackermann’s formula for the case whenn = 3 for clarity but as will be obvious, the same technique can be used to extendthe result to any n. For n = 3 φ(ACL ) = A3 + α2 A2 + α1 ACL + α0 I = 0 CL CL (7.33)but since ACL = A − BK A2 = (A − BK)(A − BK) CL (7.34) 2 2 = A − ABK − BKA − B K (7.35) 2 = A − ABK − BK(A − BK) (7.36) 2 = A − ABK − BKACL (7.37)likewise A3 = (A − BK)(A − BK) CL (7.38) = (A2 − ABK − BKACL )(A − BK) (7.39) 3 2 = A − A BK − AN KACL − BKA2 . CL (7.40)Substituting the above into φ(ACL ) = 0 gives A3 − A2 BK − AN KACL − BKA2 + α2 (A2 − ABK − BKACL ) CL + α1 (A − BK) + α0 I = 0 ⇒ A + α2 A + α1 A + α0 I = B(KA2 + α2 KACL + α1 K) 3 2 CL + AB(KACL + α2 K) + (A2 B)K ⇒ φ(A) = B(KA2 + α2 KACL + α1 K) + AB(KACL + α2 K) + (A2 B)K CL KA2 + α2 KACL + α1 K   CL ⇒ φ(A) = ωc (A, B)  KACL + α2 K  K −1 ⇒K= 0 0 1 ωc (A, B)φ(A). (7.41)The expression given in (7.41) is precisely the Ackermann’s formula for a systemwith n = 3. With some tedious algebra, this result can be generalized for all n. 80
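The formula translates almost line for line into code. A minimal sketch assuming numpy is available (`acker` is a hypothetical helper name); $\phi(A)$ is evaluated by Horner's method:

```python
import numpy as np

def acker(A, B, desired_poles):
    """Ackermann's formula (7.30): K = [0 ... 0 1] wc(A,B)^{-1} phi(A)."""
    n = A.shape[0]
    Wc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    coeffs = np.poly(desired_poles)        # [1, alpha_{n-1}, ..., alpha_1, alpha_0]
    phiA = np.zeros((n, n))
    for c in coeffs:                       # Horner: phi(A) = A^n + alpha_{n-1} A^{n-1} + ... + alpha_0 I
        phiA = phiA @ A + c * np.eye(n)
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0                       # the row vector [0 ... 0 1]
    return e_n @ np.linalg.inv(Wc) @ phiA

A = np.array([[-2.0, 1.0, 0.0], [0.0, -2.0, 0.0], [0.0, 0.0, -1.0]])
B = np.array([[0.0], [1.0], [1.0]])
K = acker(A, B, [-4.0, -4.0, -4.0])
assert np.allclose(K, [[-8.0, -20.0, 27.0]])
```

The gain agrees with the worked example that follows; note that no state transformation or open-loop characteristic polynomial is needed, only the controllability matrix and $\phi(A)$.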
    • 7.3. SISO tracking Example Consider the system     −2 1 0 0 x =  0 −2 ˙ 0  x + 1 u. (7.42) 0 0 −1 1 Find a state feedback controller u = −Kx using Ackermann’s formula that yields all closed-loop poles at -4. The controllability matrix as before is   0 1 −4 ωc (A, B) = [B AB A2 B] = 1 −2 4 (7.43) 1 −1 1 and its inverse is   −2 −3 4 −1 ωc (A, B) = −3 −4 4 . (7.44) −1 −1 1 Then   8 12 0 3 2 3 0 2 φ(A) = A +α2 A +α1 A+α0 = A +12A +48A+64I = 8 0 0 0 27 (7.45) and the gain K is given by −1 K = [0 0 1]ωc (A, B)φ(A) (7.46)    −2 −3 4 8 12 0 = [0 0 1] −3 −4 4 0 8 0 (7.47) −1 −1 1 0 0 27 = [−8 − 20 27] (7.48)7.3 SISO trackingWe will now consider the case when we want the output y to track a given nonzerotime-varying reference signal yr = r. In particular, we will discuss to methods toachieve accurate steady-state tracking: an approach utilizing frequency domainideas and an approach using integral control.7.4 SISO tracking via complementary sensitivity functionsFor this, let the control u be given as u = −Kx + Kr r (7.49) 81
    • 7.5. SISO tracking via integral actionwhere K is a gain matrix and Kr is a scalar gain (assuming single output), bothof which are yet to be determined. The closed-loop system is then given by ˙ x = Ax + B(−Kx + Kr r) (7.50) = (A − BK)x + BKr r (7.51) = ACL x + BCL r (7.52)where ACL = A − BK is the closed-loop system matrix and BCL = BKr is theclosed-loop control matrix. As earlier, the output equation is given by y = Cx. (7.53)The gain K in ACL is chosen, as before (via the pole-placement method or Ack-ermann’s formula), such that the closed-loop system matrix eigenvalues are attheir desired locations. Next the reference signal gain Kr is chosen such thatthe transfer function T (s) = Y (s)/R(S) from the reference signal to the out-put is 1 at steady-state, i.e., limω→0 T (jω) = 1. The complementary sensitivityfunction T (s) can be obtain via previously introduced techniques for converting astate-space representation to a transfer function Y (s) T (s) = = C(sI − ACL )−1 BCL = C(sI − ACL )−1 BKr (7.54) R(s)In this type of controller, we have two degrees of freedom and proper choice of thegains leads to accurate steady-state tracking and the desired transient response.7.5 SISO tracking via integral actionAn alternative way to incorporate reference tracking is to borrow the idea of integralaction from classical PID control. This can be accomplished in state-space formusing the following steps 1. Augment the state vector with a running integration of the difference/error between the reference signal r and the output y   x ˜ x =  ...  (7.55) xn+1 where xn+1 = y − r ⇒ xn+1 = ˙ (y − r).dt = (Cx − r).dt (7.56) since y = Cx. 2. Use a full-state compensator that incorporates xn+1 along with the other ˜ states x, i.e., feedback using x. Hence, let u = −K x = −Kx − kn+1 xn+1 ˜ (7.57) 82
    • 7.6. Stabilizable systems . where K = [K . kn+1 ]. Substituting this into the state equations gives . ˙ x A 0 x BK Bkn+1 x 0 = − + r (7.58) xn+1 ˙ C 0 xn+1 0 0 xn+1 −1 A 0 B x 0 = − K kn+1 + r (7.59) C 0 0 xn+1 −1 and x y= C 0 (7.60) xn+1 Now let A 0 A= (7.61) C 0 B B= (7.62) 0 C= C 0 . (7.63) Substituting the previous in the extended state equations gives ˙ 0 x = (A − B K)˜ + ˜ x ˜ r = ACL x + BCL r (7.64) −1 ˜ y = Cx (7.65) where ACL is the extended state closed-loop system matrix and BCL is the closed-loop control matrix. 3. Design K such that roots(|sI − ACL |) = roots(|sI − A + B K|) = desired closed loop poles. (7.66) The gain on the integral term kn+1 is the same as the integral gain in a tradition PI controller.7.6 Stabilizable systemsThe previous results for controller design were all based on assumptions on completestate controllability, a condition that may not always be satisfied. We now considerthe case where only k < n states are controllable. Is it possible to design acontroller for such a system? If yes, how can we?In general, stable controllers can be designed for uncontrollable systems as longthe uncontrollable subspace of the system is stable, i.e. the system is stabilizable(as defined earlier). ˙Recall the normal form of x = Ax + Bu obtainable via an appropriate statetransformation ˙ zc A11 A12 ˙ zc B1 ˙ zc ˙ z= = + u, y=C + Du (7.67) ˙ znc 0 A22 ˙ znc 0 ˙ znc 83
    • 7.6. Stabilizable systemsthen the controllable subspace is given by the states zc and the correspondingsystem matrix is A11 . The matrix A12 corresponds to the uncontrollable sub-space that affects the controllable subspace. Since the uncontrollable subspace isassumed stable (i.e., A22 has only negative eigenvalues), it follows from superpo-sition that we can independently design a controller for the controllable subspaceand apply it on to the system without affecting it’s stability properties. ˙The following are steps to design a controller for a sytem x = Ax + Bu that hask < n controllable states: 1. Define a state transformation T such that T = [T1 T2 ] (7.68) where . . . T1 = [B . AB . . . . . Ak−1 B] and T2 : rank(T2 ) = n. . . . (7.69) 2. Apply the state transformation x = T z to get the state equations ˙ z = Az + Bu y = Cz + Du (7.70) where ˙ zc A11 A12 B1 ˙ z= , A= , B= , C = CT (7.71) ˙ znc 0 A22 0 3. Let . u = −Kz, K = [K1 . K2 ] . (7.72) which results in the closed-loop system z = (A − B K)z = ACL z ˙ (7.73) A11 A12 B1 . = − [K1 . K2 ] z . (7.74) 0 A22 0 where our closed-loop system matrix is ACL . 4. Now choose K such that the closed-loop are at their desired locations. Recall we can only control k states, implying that we only select k poles. Thus n − k poles of the desired characteristic equation φ(s) must correspond to the eigenvalues of A22 (the eigenvalues of the uncontrollable subspace). The closed-loop characteristic equation may be determined as follows CL char. eqn. = |sI − ACL | (7.75) = |sI − A + B K)z| (7.76) A11 A12 B1 . = det sI − + [K1 . K2 ] . (7.77) 0 A22 0 sI − A11 + B1 K1 −A12 + B1 K2 = det (7.78) 0 sI − A22 = |sI − A11 + B1 K1 | |sI − A22 |. (7.79) 84
    • 7.6. Stabilizable systemsIt can be easily seen from (7.79) that the control gain K does not affectA22 . Moreover K2 has no effect on the closed-loop system. It can thereforetake any value we often let K2 = 0 for convenience.The closed-loop poles are then the combination of the roots of |sI − A11 +B1 K1 | and the roots of |sI − A22 |, the latter of which we cannot changewith control action (i.e., the uncontrollable subspace). We then use thepole-placement method or Ackermann’s formula to determine a gain thatplaces the k poles of a system whose characteristic equation is given by|sI − A11 + B1 K1 | at their k desired locations.Example Consider the system     −2 1 0 0 x =  0 −2 ˙ 0  x + 1 u. (7.80) 0 0 −2 1Find a state feedback controller that yields all controllable closed-looppoles at -4.First find the rank of the controllability matrix   0 1 −4 2ωc (A, B) = [B AB A B] = 1 −2 4  ⇒ rank[ωc (A, B)] = 2. 1 −2 4 (7.81)Since the rank of the controllability matrix k = 2 is less than n = 3,the system is not completely state controllable. But we do have twocontrollable states. 1. Construct the state transformation T = [T1 T2 ] with   0 1 T1 = [B AB] = 1 −2 (7.82) 1 −2 and let   1 T2 = 0 (7.83) 1 to ensure invertibility of T . The inverse of T is given by   2 3 −2 T −1 = 1 1 −1 (7.84) 0 −1 1 85
    • 7.6. Stabilizable systems2. Apply the state transformation z = T −1 AT z + T −1 Bu ˙ (7.85)     2 3 −2 −2 1 0 0 1 1 = 1 1 −1  0 −2 0  1 −2 0 z (7.86) 0 −1 1 0 0 −2 1 −2 1    2 3 −2 0 + 1 1 −1 1 u (7.87) 0 −1 1 1     0 −4 0 1 = 1 −4 0  z + 0 u (7.88) 0 0 −2 0 = Az + Bu. (7.89) Since A22 = −2 has negative eigenvalues, the system is stabiliz- able and we can proceed in designing a stable controller.3. Let . u = −Kz = −[K1 . K2 ]z . (7.90) = −[K11 K12 0]z (7.91) and since x = T z K = KT −1 = −[K11 K12 0]T −1 . (7.92)4. Focus on the controllable subsystem described by: ˙ zc = Ac zc + Bc u (7.93) 0 −4 1 = z + u (7.94) 1 −4 c 0 a) The open-loop characteristic equation is given by s2 + a1 s + a0 = |sI − Ac | = s2 + 4s + 4. (7.95) b) Determine the desired closed-loop characteristic equation as s2 + α1 s + α0 = (s + 4)2 = s2 + 8s + 16. (7.96) c) Determine K = [K11 K12 0]: [K11 K12 ] = [(α0 − a0 ) (α1 − a1 )]T −1 (7.97) where a1 1 4 1 T = ωc (Ac , Bc ) = 1 0 1 0 0 1 ⇒ T −1 = . (7.98) 1 −4 86
    • 7.7. State observer design/Luenberger state observer/estimator design Note that ωc (Ac , Bc ) in this case refers to the controllability matrix of the controllable subspace of the system and not to the controllability matrix of the controllable canonical form of the system. Then [K11 K12 ] = [(α0 − a0 ) (α1 − a1 )]T −1 (7.99) 0 1 = [12 4] = [4 − 4] (7.100) 1 −4 and K = [4 −4 0] (7.101) d) Transform K to obtain K: K = KT −1 (7.102)   2 3 −2 = [4 −4 0] 1 1 −1 = [4 8 − 4] (7.103) 0 −1 1 which gives the closed-loop controller u = −Kx (7.104) = −[4 8 − 4]x. (7.105) This controller places two closed-loop poles at -4 and the third closed-loop pole remains at -2 due to uncontrollability.7.7 State observer design/Luenberger state observer/estimator design ˙Consider the system given by x = Ax + Bu. State observers are used to esti-mate/reconstruct the states given knowledge of the system matrices A, B, C, theinput u and the output y. State observers and state estimators are used inter-changeably since they both refer to the same concept.Let the equation for the state observer be given by ˙ x = Ax + Bu + J(y − Cx) (7.106) = Ax + Bu + J(Cx − Cx) (7.107)where y is the measured/actual output and Cx = y is the output estimate basedon the state estimates. Since the estimation method is based on a dynamic model,there is obviously a need for an initial condition. Numerical integration on thedifferential equations then yields the online estimates.The dynamic model is similar to the system’s dynamic model except that it incor-porates an error term times a gain J, i.e., J(y − Cx), which goes to zero when themeasured output is exactly the same as the estimated output. In this situation,we get a dynamic estimation model that is exactly the same as the system model. 87
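A minimal simulation sketch of this idea, assuming numpy is available (the plant, the gain $J$, and the initial conditions are illustrative values; $J$ is chosen so that both eigenvalues of $A - JC$ sit at $-18$):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
J = np.array([[35.0], [288.0]])   # illustrative gain: eigenvalues of A - JC are -18, -18

dt, steps = 1e-3, 1000
x = np.array([[1.0], [0.0]])      # true state (unknown to the observer)
xhat = np.array([[0.0], [0.0]])   # observer state, deliberately wrong initially
u = np.array([[0.0]])             # zero input for simplicity

for _ in range(steps):
    y = C @ x                                                    # measured output
    x = x + dt * (A @ x + B @ u)                                 # plant, Euler step
    xhat = xhat + dt * (A @ xhat + B @ u + J @ (y - C @ xhat))   # Luenberger observer

err = np.linalg.norm(x - xhat)
assert err < 1e-3   # estimation error has essentially vanished after 1 s
```

Because the error dynamics are governed by $A - JC$, the estimate converges regardless of the (wrong) initial guess; only the output $y$, not the full state, is fed to the observer.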
    • 7.7. State observer design/Luenberger state observer/estimator designThus the output error term acts like an estimate correction term in the estimationmodel (for a proper choice of J). It can be easily seen that it is desirable that thisterm in the estimation model be consistently decreasing and eventually going tozero, implying asymptotic convergence of the estimates to their true values. Thistype of observer/estimator is called a Luenberger observer, named after David G.Luenberger who in 1963 initiated the theory of observers for the reconstruction ofstates of a linear dynamical system.We will derive results that give better insight into the Luenberger observer andalso help in determining the appropriate value of the gain J. Define the state error˜x as x = x − x. ˜ (7.108)Differentiating with respect to time and substituting the Luenberger observer equa-tions gives ˙ ˜ ˙ ˙ x=x−x (7.109) = [Ax + Bu] − [Ax + Bu + J(Cx − Cx)] (7.110) = (A − JC)(x − x) (7.111) = (A − JC)˜ . x (7.112)The estimation error dynamics, which we desire to be stable and have a quicktransient response (i.e., fast convergence), are given in (7.112). Thus, the gain Jis chosen such that the eigenvalues of A − JC are at their desired values. Thisis similar to pole placement. The observer eigenvalues/poles must all be negativeto ensure stable error dynamics and are usually chosen to be 5 to 10 times fasterthan the closed-loop system poles so that estimation lag does not affect systemresponse significantly.To find J, first apply a state transformation x = Tso xo to the system to change ˙it to observable canonical form given by xo = Axo + Bo u and yo = Co xo . Thenthe estimator model is given by ˙ xo = Ao x + Bo u + Jo (Co xo − Co xo ) (7.113)and the estimation error dynamics are given by ˙ xo = (Ao − Jo Co )˜ o and xo = xo − xo ˜ x ˜ (7.114)      0 ... 0 −a0 Jo,0 1 . . . 0 −a1   Jo,1   =  . . . − .  0 ... 0 ˜ 1  xo (7.115)      .  .. .. . . .   .  . .  0 . . . 
1 −an−1 Jo,n−1   0 ... 0 −(a0 + Jo,0 ) 1 . . . 0 −(a1 + Jo,1 )  = . . ˜  xo . (7.116)   . . .. .. . . . .  0 . . . 1 −(an−1 + Jo,n−1 )The characteristic equation for this observable canonical form system is |sI − Ao + Jo Co | = sn + (an−1 + Jo,n−1 )sn−1 + . . . + (a1 + Jo,1 )s + (a0 + Jo,0 ) (7.117) 88
    • 7.7. State observer design/Luenberger state observer/estimator designand let the observer desired characteristic equation be given by φ(s) = sn + αn−1 sn−1 + . . . + α1 s + α0 . (7.118)Equating both expressions yields an expression for the observable canonical formobserver gain Jo   α0 − a0  α1 − a1  Jo =  . (7.119)   . .  .  αn−1 − an−1Now what remains is to transform the gain back to the original state-space formusing Tso , i.e.,   α0 − a0  α1 − a1  J = Tso Jo = Tso  . (7.120)   . .  .  αn−1 − an−1The method to find J is very similar to controller design using pole placementbut this is not surprising since, from duality, we already know that the concepts ofcontrollability and observability are closely related. Example Consider the system 0 1 0 ˙ x= x+ u, y = [1 0]x. (7.121) −1 −1 1 Find a state observer for this system with two poles at -18. 1. Check for observability: C 1 0 ωo (A, C) = = CA 0 1 ⇒ rank[ωo (A, C)] = 2 = n (7.122) which implies complete observability. 2. The open-loop characteristic equation is given by |sI − A| = s2 + a1 s + a0 = s2 + s + 1. (7.123) 3. The desired observability characteristic equation is given by φ(s) = s2 + α1 s + α0 = (s + 18)2 = s2 + 36s + 324. (7.124) 4. The transformation matrix is −1 Tso = ωo (A, C)ωo (Ao , Co ) (7.125) 1 0 a1 1 0 1 = = . (7.126) 0 1 1 0 1 −1 89
    • 7.8. Disturbance observers 5. The gain is then given by α0 − a0 0 1 324 − 1 35 J = Tso = = . (7.127) α1 − a1 1 −1 36 − 1 288 The observer model is as follows ˙ 35 x = Ax + Bu + (y − Cx) (7.128) 2887.8 Disturbance observersIn almost all cases, disturbances are present in the dynamic system and in measure-ment. In some of these cases, we know the general nature of the disturbance andcan use this knowledge to design an observer robust to these disturbances. This isdone via estimating the disturbance and minimizing its effect. The key to design-ing an appropriate disturbance observer is to extend the state-space structure withadditional states that describe the disturbance and then designing an observer forthis extended state as before.When the disturbance d is a constant or approximately constant, we can assume ˙d = 0 and let the extended state xn+1 = d which implies xn+1 = 0. Thus the ˙state-space form contains an additional line consisting only of zeros.When the disturbance is a time-varying nonlinear function, the derivation of theextended states and their rates is more involved. Consider the case when d(t) = A sin (ωt + φ). (7.129)Differentiating with respect to time twice gives ˙ d(t) = Aω cos (ωt + φ) (7.130) ¨ d(t) = −Aω 2 sin (ωt + φ) (7.131) 2 = −ω d (7.132)which we can use to define the extended states and their rates in linear form touse in the state-space representation xn+1 = d(t) ⇒ ˙ xn+1 = d(t) = xn+2 ˙ (7.133) ˙ xn+2 = d(t) ⇒ ˙ ¨ xn+2 = d(t) = −ω 2 d = −ω 2 xn+1 . (7.134)Then disturbance effects can be captured by extending the state-space representa-tion of the system dynamics by the following disturbance subsystem xn+1 ˙ 0 1 xn+1 = . (7.135) xn+2 ˙ −ω 2 0 xn+2After extending the state-space representation, the robust observer is designedusing the extended state formulation and the usual observer design techniques. 90
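The augmentation can be sanity-checked numerically: the extended pair must be completely observable for an observer gain to exist. A sketch assuming numpy is available ($\omega = 2$ is an illustrative value; the plant is the mass-spring-damper $\ddot{x} + \dot{x} + x = F - d(t)$ with both $x$ and $\dot{x}$ measured):

```python
import numpy as np

w = 2.0   # assumed disturbance frequency (illustrative)

# States x1 = x, x2 = x', augmented with x3 = d and x4 = d' for
# d(t) = A sin(w t + phi), following (7.133)-(7.135).
A_ext = np.array([
    [ 0.0,  1.0,   0.0, 0.0],
    [-1.0, -1.0,  -1.0, 0.0],
    [ 0.0,  0.0,   0.0, 1.0],
    [ 0.0,  0.0, -w**2, 0.0],
])
C_ext = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])   # x1 and x2 are measured

# Observability matrix of the extended pair
Wo = np.vstack([C_ext @ np.linalg.matrix_power(A_ext, k) for k in range(4)])
assert np.linalg.matrix_rank(Wo) == 4   # extended system is completely observable
```

Since the extended pair is observable, an observer gain $J$ can be designed for it in the usual way, and the resulting observer reconstructs the disturbance states $x_3$ and $x_4$ along with the physical states.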
    • 7.8. Disturbance observers x1 d(t) m F Figure 7.1: A mass-spring-damper system with a disturbance.Example A mass-spring-damper is shown below. The dynamicmodel is given by x + x + x = F − d(t) ¨ ˙ (7.136)Let d(t) = A sin (ωt + φ) and find a disturbance robust observer forthis system that estimates x and x. ˙First reduce the left-hand side of the system equation in state-spaceform by defining variables: x1 = x and x2 = x ˙ (7.137)which gives x1 ˙ 0 1 0 0 ˙ x= = + F+ d(t) (7.138) x2 ˙ −1 −1 1 −1Differentiate the disturbance observer twice with respect to time d(t) = A sin (ωt + φ) (7.139) ˙ d(t) = Aω cos (ωt + φ) (7.140) ¨ d(t) = −Aω 2 sin (ωt + φ) = −ω 2 d (7.141) ˙and let x3 = d(t), x4 = d(t) to give ˙ ˙ x3 = d(t) = x4 (7.142) ˙ ¨ x4 = d(t) = −ω 2 d = −ω 2 x3 . (7.143)We can use this to get the subsystem x3 ˙ 0 1 x3 = (7.144) x4 ˙ −ω 2 0 x4which we can use along with (7.138) to obtain an extended state-spacemodel        x1 ˙ 0 1 0 0 x1 0 x2  −1 −1 −1 0 x2  1 ˙   ˙  x= =   +  F (7.145) x3 ˙ 0 0 0 1 x3  0 2 x4 ˙ 0 0 −ω 0 x4 0 91
    • 7.9. Output feedback control and since x = x1 and x = x2 are the desired outputs to be observed ˙   x1 x1 1 0 0 0 x2  y= =  . (7.146) x2 0 1 0 0 x3  x4 This model is then used in the usual manner to design an observer, i.e., observability is first checked, a desired characteristic equation is specified, the state transformation Tso is computed and finally an ap- propriate gain J is determined. The resulting observer robustly esti- mates not only the two states but also the disturbance x3 = d(t) and ˙ the disturbance rate of change x4 = d(t).7.9 Output feedback controlAs mentioned earlier, output feedback refers to the situation where only the out-puts y are available for feedback and not the entire state vector x. Since all thecontrol design techniques introduced so far depend on the availability of all thestates for feedback, we cannot use them directly in designing a controller for asystem that has only outputs available in feedback. But as seen in the design ofobservers, if the system is completely state observable, we can estimate all thestates using knowledge of the output. That leads us to the question: Is it feasi-ble to design an observer to estimate the states and then use these states in anindependently designed state feedback controller to create, in essence, an outputfeedback controller?The answer to the previous question is yes due to what is known as the separationprinciple. It deals with an output feedback controller that consists of a combinationof an observer and a state feedback controller. The closed-loop poles of a systemthat utilizes such a controller are a combination of the poles of the observer andthe poles specified via the state feedback controller. This implies that we canindependently design the observer and the state feedback controller and if bothare separately stable, then they are also stable in combination. 
The separationprinciple greatly simplifies the process of designing an output feedback controllerand allows the design of intuitive controller-observer combinations. It is also onlyvalid for linear systems but not necessarily for nonlinear systems, which leads toefforts to design ”nonlinear observers” that have separation principle-like properties.The separation principle can be easily derived. The observer error is given by x=x−x ˜ (7.147)where x are the estimates of the actual states x. Then, as before, the observererror dynamics are ˜˙ x = (A − JC)˜ x (7.148)Consider the state feedback control based on the state estimates u = −Kx. (7.149) 92
    • 7.10. Transfer function for output feedback compensatorSubstituting this control input into the standard state equations and manipulatinggives x = Ax + Bu = Ax − BKx ˙ (7.150) = Ax − BKx + BK(x − x) (7.151) = (A − BK)x + BK x. ˜ (7.152)Now extend the state-space form to include the estimated states ˙ x A − BK BK x x ˙ = ˜ x 0 A − JC ˜ x = Atotal ˜ x . (7.153)Then for the entire system: poles of closed-loop system = roots(|sI − Atotal |) (7.154) = roots(|sI − A + BK||sI − A + JC|) (7.155) = roots(|sI − A + BK|), roots(|sI − A + JC|).The expression in (7.155) is precisely a mathematical statement of the separationprinciple. Furthermore, we can use the complete closed-loop observer-controllersystem equation (7.153) for output feedback design.7.10 Transfer function for output feedback compensatorWe can derive a transfer function represent the output feedback situation. Consider ˙ x = Ax + Bu + J(y − Cx and u = −Kx. (7.156)Taking the Laplace transform and simplifying sX(s) = AX(s) + BU (s) + JY (s) + JC X(s) (7.157) = (A − JC − BK)X(s) + JY (s) (7.158) ⇒ (sI − A + JC + BK)X(s) = JY (s) (7.159) −1 ⇒ X(s) = (sI − A + JC + BK) JY (s) (7.160)but since u = −Kx U (s) = −K X(s) = −K(sI − A + JC + BK)−1 JY (s) U (s) ⇒ C(s) = = −K(sI − A + JC + BK)−1 J. (7.161) Y (s)Note that the transfer has a negative sign to indicate negative feedback. If thenegative feedback was already built into the block diagram addition block, thecompensator equation does not need to have a negative sign.7.11 SISO tracking with output feedback controlAs a final case, consider the situation where we only have the outputs y availablefor feedback and we also want to track a time-varying reference signal. The output 93
    • 7.11. SISO tracking with output feedback controlfeedback system as before is given by ˙ x = Ax + Bu + J(y − Cx) (7.162) = (A − JC)x + Bu + Jy (7.163)and let the input u be given by u = −Kx + kr r (7.164)where, as before, r is the reference signal and the reference gain kr is yet tobe determined. The system gain matrix K is determined as outlined earlier.The reference gain kr is chosen such that there is no steady-state error, i.e.,limω→0 T (jω) = 1.Let us now derive a transfer function for the closed-loop system and find T (s).Taking the Laplace transform of the system sX(s) = AX(s) + Bu + J[Y (s) − C X(s)] (7.165) = AX(s) − B[K X(s) − kr R(s)] + J[Y (s) − C X(s)] (7.166) = (A − JC − BK)X(s) + Bkr R(s) + JY (s) (7.167) −1 ⇒ X(s) = (sI − A + JC + BK) [Bkr R(s) + JY (s)]. (7.168)Substituting (7.168) in the control input equation U (s) = −K X(s) + kr R(s) (7.169) −1 = −K(sI − A + JC + BK) [Bkr R(s) + JY (s)] + kr R(s) (7.170) −1 = kr R(s) − K(sI − A + JC + BK) Bkr R(s) − K(sI − A + JC + BK)−1 JY (s) (7.171) = kr [1 − K(sI − A + JC + BK)−1 Bkr ]R(s) − K(sI − A + JC + BK)−1 JY (s) (7.172) = H(s)R(s) − C(s)Y (s) (7.173)where H(s) and C(s) are compensators in a two degrees of freedom system. Then Y (s) H(s)G(s) T (s) = = (7.174) R(s) 1 + C(s)G(s)and we want lims→0 T (s) = 1 for accurate steady-state tracking. 94
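The separation principle (7.155) can be checked numerically: the spectrum of the combined observer-controller matrix is the union of the controller and observer spectra. A sketch assuming numpy is available (the plant and the gains are illustrative values):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[5.0, 4.0]])        # illustrative: places controller poles at -2, -3
J = np.array([[35.0], [288.0]])   # illustrative: places observer poles at -18, -18

# Combined observer-controller system matrix, eq. (7.153)
top = np.hstack([A - B @ K, B @ K])
bot = np.hstack([np.zeros((2, 2)), A - J @ C])
A_total = np.vstack([top, bot])

# Eigenvalues of the block-triangular A_total = controller poles + observer poles
poles = np.sort(np.linalg.eigvals(A_total).real)
assert np.allclose(poles, [-18.0, -18.0, -3.0, -2.0], atol=1e-3)
```

The block-triangular structure of $A_{\text{total}}$ is what makes the two designs decouple; swapping in any other stabilizing $K$ or $J$ changes only its own half of the spectrum.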
Chapter 8

Basic notions of linear algebra

Prerequisites: Matrix addition/subtraction, matrix multiplication, Gaussian elimination.

A system of linear equations may be represented by a matrix equation.

Example

The system of equations
$$x_1 + 2x_2 = 5 \tag{8.1}$$
$$3x_1 + 4x_2 = 12 \tag{8.2}$$
can be represented as
$$\begin{bmatrix} 1 \\ 3 \end{bmatrix} x_1 + \begin{bmatrix} 2 \\ 4 \end{bmatrix} x_2 = \begin{bmatrix} 5 \\ 12 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 5 \\ 12 \end{bmatrix} \;\Rightarrow\; Ax = b. \tag{8.3}$$

Example

Similarly
$$2x_1 + 5x_2 + 7x_3 = 13 \tag{8.4}$$
$$3x_1 + x_2 + 5x_3 = 7 \tag{8.5}$$
can be represented as
$$\begin{bmatrix} 2 \\ 3 \end{bmatrix} x_1 + \begin{bmatrix} 5 \\ 1 \end{bmatrix} x_2 + \begin{bmatrix} 7 \\ 5 \end{bmatrix} x_3 = \begin{bmatrix} 13 \\ 7 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 2 & 5 & 7 \\ 3 & 1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 13 \\ 7 \end{bmatrix} \;\Rightarrow\; Ax = b. \tag{8.6}$$

One of the central questions of linear algebra is the existence and uniqueness of solutions to systems of equations similar to those shown above. The solution set, i.e., the set of vectors $x$ that satisfy a matrix equation $Ax = b$, can always be described by one of the following:

1. The empty set, i.e., there are no vectors $x$ that satisfy the matrix equation $Ax = b$ and therefore there are no solutions to the matrix equation. The matrix equation is then also known as being "inconsistent" or "overdetermined".
2. A set of infinitely many elements, i.e., there are an infinite number of vectors $x$ that satisfy the matrix equation $Ax = b$. This does not imply that the solution vectors are arbitrary elements of $\mathbb{R}^n$; rather, they are specific and structured, but there are an infinite number of them. The matrix equation is then known as being "underdetermined".

3. A set consisting of one unique element, i.e., there is only one vector $x$ that satisfies the matrix equation $Ax = b$. In this case the matrix equation is said to be "uniquely determined".

The type of solution set and its description is closely interconnected with concepts such as linear independence, invertibility and row/column space. This module will briefly explore some of these concepts at a later stage.

An easy way to check for the type of solution set is to apply Gaussian elimination to the matrix equation to yield the row echelon form or the reduced row echelon form. The number of pivots is critical in this situation and the following definition becomes relevant.

Definition 9 (Rank). The number of pivots $k$ in the row echelon form (or the reduced row echelon form) of a matrix $A \in \mathbb{R}^{n \times m}$ is called the rank of $A$. This is written as $\operatorname{rank}(A) = k$.

Consider the case when $A \in \mathbb{R}^{n \times m}$, $b \in \mathbb{R}^n$ and $\operatorname{rank}(A) = k$:

1. If $n < m$, i.e., we have a "fat" system matrix $A$, then whenever a solution exists there are infinitely many solution vectors $x$, since there will always be at least one free variable (more unknowns than equations).

2. If $n \geq m$, i.e., we have a "tall" ($n > m$) or a square ($n = m$) system matrix:

   • If $k = m$, a solution may exist and if a solution does exist then it is unique. The existence of a solution depends on the column vector $b$: it must be consistent with the system matrix $A$, i.e., every equation must remain valid/consistent when the system is in reduced row echelon form. If it is, then $b$ is said to be in the column space (also called the range space or image) of $A$.
   • If $k < m$, a solution may exist and if a solution does exist then there will be an infinite number of them. The existence of a solution again depends on the column vector $b$.

   • $k \leq m$ always.

Definition 10 (Linear combination). A vector $q \in \mathbb{R}^n$ is said to be a linear combination of vectors $v_1, v_2, \dots, v_m \in \mathbb{R}^n$ if there exist some constants $c_1, c_2, \dots, c_m \in \mathbb{R}$ such that
$$c_1 v_1 + c_2 v_2 + \dots + c_m v_m = q. \tag{8.7}$$

A matrix equation $Ax = b$ is an example of a linear combination; the vector $b$ is a linear combination of the columns of $A$.
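The first example of this chapter can be checked numerically; a quick sketch assuming numpy is available:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 12.0])

# rank(A) = 2 = number of unknowns, so Ax = b is uniquely determined
assert np.linalg.matrix_rank(A) == 2

x = np.linalg.solve(A, b)
assert np.allclose(x, [2.0, 1.5])

# b is the linear combination x1 * (column 1) + x2 * (column 2) of A's columns
assert np.allclose(A[:, 0] * x[0] + A[:, 1] * x[1], b)
```

The last assertion restates the point of Definition 10: solving $Ax = b$ is precisely finding the coefficients that express $b$ as a linear combination of the columns of $A$.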
Example

Consider
$$Ax = b \;\Rightarrow\; \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 5 \\ -2 \\ 9 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} 2 \\ 4 \\ -2 \end{bmatrix} x_1 + \begin{bmatrix} 1 \\ -6 \\ 7 \end{bmatrix} x_2 + \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} x_3 = \begin{bmatrix} 5 \\ -2 \\ 9 \end{bmatrix} \tag{8.8}$$
and let
$$v_1 = \begin{bmatrix} 2 \\ 4 \\ -2 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ -6 \\ 7 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix}. \tag{8.9}$$
Therefore
$$b = v_1 x_1 + v_2 x_2 + v_3 x_3 \tag{8.10}$$
which, since $x_1, x_2, x_3$ are constants, implies that $b$ is a linear combination of $v_1$, $v_2$, and $v_3$: the columns of $A$. It can be easily verified that $x_1 = 1$, $x_2 = 1$ and $x_3 = 2$ are the correct constants that give $b$.

Definition 11 (Linear independence). A set of vectors $v_1, v_2, \dots, v_m \in \mathbb{R}^n$ is linearly independent if there exists no nontrivial linear combination of $v_1, v_2, \dots, v_m$ that gives the zero vector, i.e., if the linear combination
$$c_1 v_1 + c_2 v_2 + \dots + c_m v_m = 0 \tag{8.11}$$
is only true for $c_1 = c_2 = \dots = c_m = 0$, then $v_1, v_2, \dots, v_m$ are linearly independent. The set of vectors $v_1, v_2, \dots, v_m$ is linearly dependent if it is not linearly independent.

For example, if any one $c_i$ is nonzero in a linear combination that gives the zero vector, the vectors $v_1, v_2, \dots, v_m$ are linearly dependent, i.e., one vector can be expressed as a combination of the others. This gives us another intuitive description of linear independence: if any $v_i$ can be expressed as a combination of the other vectors, then it is a linearly dependent vector. A set of vectors that includes a linearly dependent vector is a linearly dependent set of vectors. If that linearly dependent vector is removed and the remaining vectors are linearly independent, then the remaining set is linearly independent.

Linear independence can also be examined in terms of the columns/rows of matrices. The columns of the matrix
$$\begin{bmatrix} 1 & 5 & 8 \\ 2 & 10 & 17 \\ 4 & 20 & 2 \end{bmatrix} \tag{8.12}$$
are linearly dependent since the second column is five times the first column. Gaussian elimination involves eliminating rows of a matrix that can be expressed as a combination of other rows.
It follows that the nonzero rows (the rows with pivots) in the row echelon form of a matrix cannot be expressed as combinations of each other. Therefore, the nonzero rows of a matrix in row echelon form represent a linearly independent set of vectors. Since the number of rows with pivots in the row echelon form of a matrix is called the rank, it follows that the rank of a matrix equals the number of linearly independent rows in that matrix.
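The link between rank and independent rows/columns can be checked numerically. A minimal sketch, assuming NumPy is available, using the matrix from Eq. (8.12):

```python
import numpy as np

# Matrix from Eq. (8.12): the second column is 5x the first,
# so the columns (and hence the rows) cannot all be independent.
A = np.array([[1, 5, 8],
              [2, 10, 17],
              [4, 20, 2]])

# The rank equals the number of linearly independent rows/columns.
rank = np.linalg.matrix_rank(A)
print(rank)  # -> 2: only two independent columns remain
```

This matches the hand computation: removing the redundant second column leaves two independent vectors.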
Theorem 14. The number of linearly independent rows of a matrix is equal to the number of linearly independent columns.

Proof. We know that the number of linearly independent rows of a matrix is the rank of the matrix, and the rank is the number of rows with nonzero pivots in the row echelon form. Furthermore, the columns of A are the rows of A^T. If the rank of A is k and we transpose the row echelon form of the matrix, it is easily seen that the transpose also has k rows with nonzero pivots. Thus A^T has rank k, i.e., the number of linearly independent rows of A^T, which is the number of linearly independent columns of A, is k.

Theorem 15. A set of n vectors in R^m is linearly dependent if m < n.

Proof. Put the set of n vectors as the rows of a matrix A ∈ R^(n×m). The vectors are independent only if rank(A) = n, which requires a nonzero pivot in every row of A. This is not possible since m < n means there are not enough columns to place a pivot in every row. Specifically, the last n − m rows of the row echelon form cannot contain pivots.

Example. The columns of the following matrix cannot be independent

    [ 1   5  33 ]
    [ 7  12   3 ]                                               (8.13)

since these are three vectors in R^2.

We next begin the fundamental study of vector spaces. First, a generalized definition of vector addition and scalar multiplication is given.

Definition 12 (Vector addition and scalar multiplication). The operations of vector addition (+) and scalar multiplication (·) are defined on a set if the following eight rules are satisfied for vectors x, y, z and constants c, c1, c2:

1. x + y = y + x.
2. x + (y + z) = (x + y) + z.
3. There is a unique "zero vector" 0 such that x + 0 = x for all x.
4. For each x there is a unique vector −x such that x + (−x) = 0.
5. 1 · x = x.
6. (c1 c2) · x = c1 · (c2 · x).
7. c · (x + y) = c · x + c · y.
8. (c1 + c2) · x = c1 · x + c2 · x.

The generalized definition of vector addition and scalar multiplication helps in defining and working with more abstract algebraic constructs. We now use this definition to describe an important concept called "closure".
Definition 13 (Closure). Consider a set of vectors Ω on which vector addition and scalar multiplication are defined. If the following hold:

1. x + y ∈ Ω for all x, y ∈ Ω;
2. c x ∈ Ω for all x ∈ Ω, c ∈ R;

then closure is satisfied on Ω.

In other words, if any linear combination of vectors in the set produces a vector that is also in the set, the set satisfies closure. We can now define vector spaces and, correspondingly, subspaces.

Definition 14 (Vector space). A set of vectors that satisfies closure is a vector space.

Definition 15 (Subspace). A subset of a vector space that itself satisfies closure is a subspace of that vector space.

A vector space may be further generalized if we let vectors refer to any type of element, not necessarily those in R^n. This leads to the idea of function spaces, where the "vectors" are actually functions. These generalized vector spaces must satisfy the same conditions as real vector spaces, i.e., vector addition and scalar multiplication must be defined such that the conditions in Definition 12 are satisfied, and linear combinations of vectors in the space must remain in the space. There are many examples of vector spaces, and any set of vectors may be checked for closure in order to determine whether it is a vector space. The reader is referred to the many textbooks available in the reference list for more information.

We now introduce the four fundamental subspaces of linear algebra: the column space (or the range space, or the image for function spaces), the row space (or the coimage for function spaces), the nullspace (or kernel for function spaces), and the left nullspace (or cokernel for function spaces). Consider a system matrix A ∈ R^(n×m) with rank(A) = k.

Definition 16 (Fundamental subspace 1: column space). The column space of A, denoted C(A), is the space of all linear combinations of the columns of A. It is a subspace of R^n.
Example. For the matrix

    A = [ 1  4  7 ]
        [ 2  4  3 ]                                             (8.14)
        [ 1  7  9 ]

the column space C(A) is the space of all linear combinations of the vectors

    v1 = [1 2 1]^T,  v2 = [4 4 7]^T,  v3 = [7 3 9]^T.           (8.15)
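Membership in the column space can be tested numerically: b ∈ C(A) exactly when appending b as an extra column does not increase the rank. A minimal sketch, assuming NumPy; the right-hand side b below is an arbitrary illustrative choice, not from the text:

```python
import numpy as np

# Matrix from Eq. (8.14); its columns v1, v2, v3 span C(A).
A = np.array([[1, 4, 7],
              [2, 4, 3],
              [1, 7, 9]])

b = np.array([1, 0, 0])  # illustrative right-hand side

# b is in C(A) iff appending b as a column does not raise the rank.
in_col_space = (np.linalg.matrix_rank(np.column_stack([A, b]))
                == np.linalg.matrix_rank(A))
print(in_col_space)  # -> True: this A has rank 3, so C(A) is all of R^3

# Coefficients x of the linear combination b = x1*v1 + x2*v2 + x3*v3.
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```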
It can be easily verified that the closure property is satisfied. Another way to appreciate the column space is to see it as the product of a matrix and a vector,

    Ax = b.                                                     (8.16)

Here, b ∈ C(A) for any given x. In fact, the entire column space is generated by letting x range over all of R^m.

We can generalize the column space notion to non-matrix "vectors" (e.g., in function spaces), in which case it is referred to as the range space R(A) or the image Im(A). Consider a function

    f(x) = y.                                                   (8.17)

Then the range space, or image, of f is the set of all values y = f(x), written R(f) or Im(f).

We now define a closely related subspace.

Definition 17 (Fundamental subspace 2: row space). The row space of A, denoted C(A^T), is the space of all linear combinations of the rows of A, or equivalently of the columns of A^T. It is a subspace of R^m.

This space is very similar in concept to the column space, since we are simply referring to the column space of a transposed matrix.

Example. Continuing the previous example, for the matrix

    A = [ 1  4  7 ]
        [ 2  4  3 ]                                             (8.18)
        [ 1  7  9 ]

the row space C(A^T) is the space of all linear combinations of the vectors

    v1 = [1 4 7]^T,  v2 = [2 4 3]^T,  v3 = [1 7 9]^T.           (8.19)

As before, it can be easily verified that the closure property is satisfied. For function spaces, the row space is called the coimage.

Definition 18 (Fundamental subspace 3: nullspace). The nullspace of A, denoted N(A), is the set of all vectors x such that Ax = 0. It is a subspace of R^m.

It can be easily verified that closure is satisfied on such a set of vectors. The nullspace can be thought of as the set of all vectors that the matrix maps to the zero vector. The nullspace is often nontrivial and can be described as the linear combinations of some set of vectors, but when rank(A) = n = m, i.e., when A is square and of full rank, the nullspace is trivial: the only vector it contains is the zero vector. Whenever the rank of A is less than the number of columns of A, the nullspace is nontrivial.
This is because when the rank of A is less than the number of columns of A, the system always has a free variable. That free variable can be assigned arbitrarily in Ax = 0 and still yield a solution, implying that the nullspace contains nonzero vectors.
Example. Consider the matrix

    A = [ 1  0  1 ]
        [ 5  4  9 ]                                             (8.20)
        [ 2  4  6 ]

It can be easily verified via Gaussian elimination that A is rank deficient, i.e., rank(A) = 2 < n = 3. Thus, although A is square, it still has a nontrivial nullspace. Solving Ax = 0, the nullspace is given by "linear combinations" of the vector

    v1 = [1 1 -1]^T.                                            (8.21)

Since we only have one vector, "linear combinations" simply means multiples of v1, i.e., N(A) = c v1 where c ∈ R.

When we have non-matrix "vectors", the nullspace of A is referred to as the kernel of A, ker(A). For a function f, ker(f) is the set of all x such that

    f(x) = 0.                                                   (8.22)

Definition 19 (Fundamental subspace 4: left nullspace). The left nullspace of A is the set of all vectors x such that A^T x = 0. It is a subspace of R^n.

The left nullspace is closely related to the nullspace since it is also a nullspace, but of the matrix A^T. Closure follows immediately. For non-matrix "vector" spaces, the left nullspace is referred to as the cokernel of A, coker(A).

In the preceding introduction to vector spaces, a recurring theme was that the different vector spaces were formed as linear combinations of a set of vectors v1, v2, ..., vm. This motivates the next definition.

Definition 20 (Spanning set). If a vector space V consists of all linear combinations of a set of vectors Ω = {v1, v2, ..., vm}, then Ω is said to be a spanning set for V, or the vectors in Ω span V.

It is clear that every vector in V is some combination of the vectors in Ω. Using this terminology, we can redefine the column space of A as the space spanned by the columns of A. Similarly, the row space of A is the space spanned by the rows of A, or the columns of A^T.

Example. The column space of the matrix

    A = [ 1  2   1 ]
        [ 2  4   7 ]                                            (8.23)
        [ 3  6  11 ]

is spanned by the vectors

    v1 = [1 2 3]^T,  v2 = [2 4 6]^T,  v3 = [1 7 11]^T.          (8.24)
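The nullspace computed in the example of Eqs. (8.20)-(8.21) can be recovered numerically from the singular value decomposition: the right-singular vectors associated with (numerically) zero singular values span N(A). A sketch, assuming NumPy:

```python
import numpy as np

# Rank-deficient matrix from Eq. (8.20).
A = np.array([[1, 0, 1],
              [5, 4, 9],
              [2, 4, 6]], dtype=float)

# Rows of Vh whose singular values are (numerically) zero span N(A).
U, s, Vh = np.linalg.svd(A)
null_basis = Vh[s < 1e-10]

print(null_basis.shape)  # one basis vector: rank(A) = 2, so dim N(A) = 1
v = null_basis[0]
assert np.allclose(A @ v, 0)  # v is indeed in the nullspace
print(v / v[0])  # -> approximately [1, 1, -1], matching Eq. (8.21)
```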
But notice that for the spanning-set example above (Eq. (8.23)), rank(A) = 2 < n = 3, which implies that the columns are linearly dependent. It is clear that v1 and v2 are linearly dependent since they are simply multiples of one another, v2 = 2 v1. Therefore we have a redundant vector in the spanning set that is not needed to describe the vector space. The next definition is relevant in this regard.

Definition 21 (Basis). A linearly independent set of vectors that spans a vector space V is called a basis for V.

Example. From the previous example, a basis for the column space of A is given by

    v2 = [2 4 6]^T,  v3 = [1 7 11]^T.                           (8.25)

We obtained this basis by simply eliminating the linearly dependent vector v1.

From the definition, the two requirements for a set of vectors to be a basis are:

1. The vectors in a basis must be linearly independent.
2. The vectors must span the vector space V.

Note that a basis for a vector space is not unique. For example, any set of n independent vectors in R^n spans R^n and thus also forms a basis for R^n.

The number of vectors in a basis for a given space is an important (and invariant) property.

Definition 22 (Dimension). The number of vectors in a basis for a vector space V is called the dimension of V, denoted dim(V).

Example. From the previous example, where a basis for V was given by

    v2 = [2 4 6]^T,  v3 = [1 7 11]^T,                           (8.26)

the dimension of V is 2, i.e., dim(V) = 2.

The dimension of a space is an invariant property: it is the same regardless of the basis chosen. If

    v = {v1, v2, ..., vn}                                       (8.27)

and

    w = {w1, w2, ..., wm}                                       (8.28)

are two distinct bases for a vector space V, then m = n.

The above discussion and examples lead to some intuitive conclusions regarding bases and spanning sets. Any spanning set for a vector space V can be reduced to a basis by removing linearly dependent vectors from the set. Conversely, any linearly independent set of vectors in V can be made into a basis by adding a sufficient number of linearly independent vectors to the set. In this context, a basis is both a maximal independent set, since it cannot be made larger without losing linear independence, and a minimal spanning set, since it cannot be made smaller and still span V.

Problems

1. Find the rank of the matrix

       A = [ 1  3   7 ]
           [ 2  4   8 ]                                         (8.29)
           [ 9  6  11 ]

2. Judge the existence and uniqueness of solutions of the system Ax = b, where

   a)
       A = [ 1  2  3 ]
           [ 4  5  6 ]                                          (8.30)
           [ 9  8  7 ]

   b)
       A = [ 1   2  3 ]
           [ 4   5  6 ]                                         (8.31)
           [ 9  17  7 ]

   c)
       A = [ 15   7  12 ]
           [  6  11  23 ]                                       (8.32)

   d)
       A = [  1   2  3 ]
           [  4   5  6 ]
           [  9  17  7 ]                                        (8.33)
           [ 14   3  5 ]

   e)
       A = [  1   2   3 ]
           [  2   4   6 ]
           [ 10  20  30 ]                                       (8.34)
           [ 14   3   5 ]

3. Express q = [5 7]^T as a linear combination of the vectors v1 = [2 3]^T, v2 = [1 -1]^T, v3 = [0 2]^T.

4. Are the vectors v1 = [2 3]^T, v2 = [1 -1]^T, v3 = [0 2]^T independent? If not, choose a linearly independent set of vectors from these vectors and express q = [5 7]^T as a linear combination of this set.

5. Are the columns of the following matrices linearly independent? If not, choose a linearly independent set of vectors from the columns.
   a)
       A = [ 1  2  3 ]
           [ 4  5  6 ]                                          (8.35)
           [ 9  8  7 ]

   b)
       A = [ 1   2  3 ]
           [ 4   5  6 ]                                         (8.36)
           [ 9  17  7 ]

   c)
       A = [ 15   7  12 ]
           [  6  11  23 ]                                       (8.37)

   d)
       A = [  1   2  3 ]
           [  4   5  6 ]
           [  9  17  7 ]                                        (8.38)
           [ 14   3  5 ]

   e)
       A = [  1   2   3 ]
           [  2   4   6 ]
           [ 10  20  30 ]                                       (8.39)
           [ 14   3   5 ]

6. Which of the following sets of vectors is a subspace? Check for closure and justify your answer.

   a) The set of all vectors with real elements.
   b) The set of all vectors of the form v = [k1 k2 0]^T with k1, k2 ∈ R.
   c) The set of all vectors of the form v = [k1 5 0]^T with k1 ∈ R.
   d) The set of all vectors with odd elements.
   e) The set of all polynomials of degree 3 only.
   f) The set of all polynomials of degree 3 or less.
   g) The set of all functions that are differentiable once or more.
   h) The set of all second-order polynomials with real roots.

7. Determine a set of vectors that spans the column space and a set of vectors that spans the row space of the following matrices. Furthermore, find a basis for both spaces from their spanning sets. What are the dimensions of the spaces?

   a)
       A = [ 1  2  3 ]
           [ 4  5  6 ]                                          (8.40)
           [ 9  8  7 ]

   b)
       A = [ 1   2  3 ]
           [ 4   5  6 ]                                         (8.41)
           [ 9  17  7 ]
   c)
       A = [ 15   7  12 ]
           [  6  11  23 ]                                       (8.42)

   d)
       A = [  1   2  3 ]
           [  4   5  6 ]
           [  9  17  7 ]                                        (8.43)
           [ 14   3  5 ]

   e)
       A = [  1   2   3 ]
           [  2   4   6 ]
           [ 10  20  30 ]                                       (8.44)
           [ 14   3   5 ]

8. Decide whether the nullspace of each of the following matrices is trivial or not by attempting to find a nontrivial vector in the space. If the nullspace is nontrivial, what is its dimension? Also, determine a basis by finding a sufficient number of additional linearly independent vectors in the space.

   a)
       A = [ 1  2  3 ]
           [ 4  5  6 ]                                          (8.45)
           [ 9  8  7 ]

   b)
       A = [ 1   2  3 ]
           [ 4   5  6 ]                                         (8.46)
           [ 9  17  7 ]

   c)
       A = [ 15   7  12 ]
           [  6  11  23 ]                                       (8.47)

   d)
       A = [  1   2  3 ]
           [  4   5  6 ]
           [  9  17  7 ]                                        (8.48)
           [ 14   3  5 ]

   e)
       A = [  1   2   3 ]
           [  2   4   6 ]
           [ 10  20  30 ]                                       (8.49)
           [ 14   3   5 ]
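Hand solutions to problems of this kind can be cross-checked symbolically. A sketch, assuming SymPy is available; the matrix below is illustrative and is not taken from the problem set:

```python
from sympy import Matrix

# Illustrative rank-deficient matrix (row 2 = 2 * row 1).
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# Rank and the pivot columns of the reduced row echelon form.
rref_form, pivots = A.rref()
print(A.rank(), pivots)  # -> 2 (0, 1)

# The pivot columns of A form a basis for the column space C(A).
col_basis = [A.col(j) for j in pivots]

# A basis for the nullspace N(A); here dim N(A) = 3 - rank(A) = 1.
null_basis = A.nullspace()
assert all(A * v == Matrix.zeros(3, 1) for v in null_basis)
```

The same pattern (rank, rref pivots, nullspace) covers Problems 1, 5, 7, and 8 above.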