The notes did not go through the intended revision after the end of the semester due to my schedule so they remain a rough first draft.

They are largely (but not completely) inspired by a control course taught by Dr. Gregory Shaver at the ME dept. at Purdue.

Much of the information was gleaned through a variety of textbooks/papers/experiences, too many to mention here.

Although a reasonable attempt has been made to ensure all the facts are correct, these notes are as-is with no guarantee of accuracy. Use at your own risk.


Control System Theory & Design
Lecture Notes
Ismail Hameduddin
King Abdulaziz University
Contents

1 Introduction to feedback control
  1.1 Basic notion of feedback control
  1.2 Control architectures
2 System models and representation
  2.1 Model classification
  2.2 State-space representation
  2.3 Input/output differential equation
  2.4 Transfer functions
3 Dynamic response of systems
  3.1 First-order system response
  3.2 Second-order systems
  3.3 Design considerations
  3.4 Routh-Hurwitz stability criterion
  3.5 Pole/Zero effects
4 Frequency response tools
  4.1 Frequency response
  4.2 Bode plots
  4.3 Gain and phase margins
  4.4 Phase margin and second-order systems
  4.5 Root-locus
  4.6 Nyquist stability criterion
  4.7 Feedback with disturbances
  4.8 Trends from Bode & Poisson's integral constraints
  4.9 Upper bounds on peaks for sensitivity integrals
5 Frequency domain control design
  5.1 Direct pole-placement
  5.2 Modeling errors
  5.3 Robust stability
  5.4 Robust performance
  5.5 Internal Model Principle
6 State-space techniques
  6.1 Transfer function to state-space
  6.2 Eigenvalues & eigenvectors
  6.3 Solution and stability of dynamic systems
  6.4 State transformation
  6.5 Diagonalization
  6.6 State-space to transfer function
  6.7 Cayley-Hamilton Theorem
  6.8 Controllability & reachability
  6.9 Controllability gramian
  6.10 Output controllability
  6.11 Control canonical form and controllability
  6.12 Stabilizability
  6.13 Popov-Belevitch-Hautus test for controllability and stabilizability
  6.14 Observability
  6.15 Observability gramian
  6.16 Observable canonical form and observability
  6.17 Detectability
  6.18 PHB test for observability and detectability
  6.19 Duality between observability and controllability
7 Control design in state-space
  7.1 State feedback design/Pole-placement
  7.2 Ackermann's formula
  7.3 SISO tracking
  7.4 SISO tracking via complementary sensitivity functions
  7.5 SISO tracking via integral action
  7.6 Stabilizable systems
  7.7 State observer design/Luenberger state observer/estimator design
  7.8 Disturbance observers
  7.9 Output feedback control
  7.10 Transfer function for output feedback compensator
  7.11 SISO tracking with output feedback control
8 Basic notions of linear algebra
Chapter 1
Introduction to feedback control

We introduce feedback in terms of what will be discussed in these notes.

1.1 Basic notion of feedback control

A block diagram illustration of an ideal standard feedback control system is shown in Fig. 1.1.

Figure 1.1: Block diagram illustration of ideal feedback control system.

The idea of feedback is straightforward: we monitor the actual output of the system or plant, generate an error signal from the difference between the desired output and the actual output of the dynamic system, and finally use this error term in a compensator which produces a control signal that acts as an input to the actual system.

The compensator is a general term: it can be an algorithm, a function, or just a simple gain, i.e., a constant value multiplying the error. The goal of control design is to determine an appropriate compensator to achieve performance objectives. The plant in the system is the dynamic system being controlled. We model this system via mathematical tools and use the model to design the compensator. A mathematical model is always an approximation of the real system, since we cannot effectively model everything present in the real world, nor can we account for variations in parameters such as geometries, forces and operating conditions. Thus there always exists some uncertainty in the plant model, which directly affects compensator design.
Feedback goes beyond mathematical functions and abstractions; it is a fact of everyday life. An obvious example of human feedback is adjusting the water temperature in the shower. The hot-cold knob is adjusted until a comfortable position is reached. In this case, the skin monitors the output of the system, i.e., the temperature of the water. It sends this signal to the brain, which judges the difference between the desired water temperature and the actual water temperature. This difference is then used by the brain to generate a control signal that commands the hand to adjust the knob to reduce the difference. If the water is cold, you shiver; the brain determines that you need to be warmer and sends a signal to your hand to adjust the knob for warmer water until you stop shivering. Control engineering is then simply the attempt to mimic the elegant, efficient and effective control of processes apparent in nature.

We mentioned that the goal of control design is to determine an appropriate compensator to achieve performance objectives. Some common performance objectives are listed below:

- Stability of the closed-loop system.
- Satisfactory dynamic response, e.g.,
  - Settling time (time of response).
  - Percent overshoot.
- Small tracking/steady-state error.
- Remaining within hardware limits, e.g., forces and voltages.
- Eliminating the impact of disturbances, such as wind gusts on UAVs and inclines in a car cruise control system.
- Eliminating the impact of measurement noise.
- Eliminating the impact of model uncertainty.

The above wish list is not usually completely achievable due to the existence of fundamental limitations in the system; there is no such thing as a free lunch. However, via feedback control and other strategies, we can trade off these fundamental limitations against each other to achieve the best possible scenario based on requirements. This is the essence of control engineering.

An example of the previous point is when we try to have aggressive control, i.e., small response time, and also eliminate the impact of model uncertainty. The bandwidth of the system is a measure of the aggressiveness of the control action and/or the dynamic response. A higher bandwidth implies a higher frequency up to which we have good response and therefore more aggressive control. But at the same time, in most models, the magnitude of the uncertainty becomes large at higher frequencies. Hence we limit the bandwidth, or frequency of response, depending on the frequency at which uncertainty begins to become significant. We want a low magnitude of response at frequencies where the model uncertainty magnitude becomes high, to reduce the system's sensitivity to this uncertainty. Since we cannot avoid model uncertainties, we have to limit the aggressiveness of the response to keep the uncertainty from becoming significant.
Actual control systems

We often treat the control and feedback of systems as an idealized scenario to aid in the design process. It is important for the control engineer to appreciate the actual complex nature of control systems. It is instructive to do this through a simple example.

Recently, there has been growing research interest in applications of piezoelectric materials. These are very sensitive materials that produce a voltage signal when they sense a force. The converse phenomenon is also true, i.e., a force/displacement is produced in the material when a voltage is applied to it. This characteristic is considered particularly advantageous and is being tested in a wide variety of applications. Among these is their use as a variable valve actuator in fuel injector systems.

Figure 1.2: Schematic of a proposed piezoelectric variable valve injector system.

Fuel injection for an internal combustion engine should ideally be tightly controlled to produce an air-fuel mixture that reduces pollutants and increases efficiency while at the same time mitigating undesirable phenomena such as combustion instability. This is possible through a variable valve that regulates the amount of fuel flowing through the cylinder nozzle.

One type of assembly of such a variable valve available in the literature is shown in Fig. 1.2¹. The piezo stack at the top consists of a "stacked" combination of piezoelectric materials that produces the desired displacement characteristics when voltage is applied. The voltage is applied by the piezo stack driver. When the piezo stack expands (due to voltage via the piezo stack driver), it moves the top link down, which in turn pushes the bottom link down. This displaces fluid out of the needle upper volume into the injector body, and we would expect the needle lower volume to be reduced as well. However, assuming incompressibility, this cannot happen because the needle lower volume does not have an opening into the injector body. Instead, to compensate for the loss of volume induced by the bottom link moving down, the needle is pushed up by the fluid. This opens the nozzle. In a similar fashion, with appropriate voltage, the nozzle is closed as well. The piezo stack thus controls the nozzle state via mechanical means, and it is hence called a variable valve actuator (VVA). The chief benefits of using piezoelectric materials here are their quick response (high bandwidth) and the fact that incorporating a piezoelectric actuator helps eliminate many mechanical moving parts, which in turn reduces cost and increases reliability. The disadvantage is that the maximum expansion achievable by piezoelectric materials is usually low. This is remedied by the stair-like "ledge" shown in Fig. 1.2, which acts as a displacement multiplier. A larger ledge causes a larger loss of needle lower volume when the bottom link moves down, which makes the needle displace more to compensate for the lost volume.

The idealized block diagram used in control design is shown in Fig. 1.3. It is meant to be as simple as possible, capturing only the most essential characteristics of the system. The actual valve position and the desired valve position together generate an error signal, and the compensator is used to determine the appropriate voltage signal that will eventually reduce this error to zero. Here the VVA is actually a 5-state control-oriented model, i.e., it needs 5 variables to be fully described in the time domain. These states include the position of the actuator, the velocity of the actuator and the pressure. A full model description of the VVA would entail many more states that do not have as much effect on the system response but greatly affect the complexity of the problem. Such models are sometimes called simulation models because we use them to simulate the response of the system rather than to determine an appropriate compensator (control design).

Figure 1.3: Block diagram of an ideal closed-loop variable valve actuator system.

The actual, or close to actual, block diagram is shown in Fig. 1.4. Here we do not ignore any known elements of the system. The valve position is measured by the linear variable displacement transducer (LVDT) in analog form. This is then converted to a digital signal by an analog-to-digital converter (A/D). It is then used to generate an error signal based on the desired valve position. The error is fed into the compensator, which outputs a digital signal that is converted to an analog voltage via a digital-to-analog converter (D/A). This signal is amplified using a pulse-width modulation (PWM) amplifier and then supplied to the VVA actuator.

Figure 1.4: Block diagram of a closed-loop variable valve actuator system.

It is obvious that this is a far more complicated system than the one considered earlier. Every element in the system has its own dynamics, which may or may not include time delay. Furthermore, we have errors in measurement and digitization/quantization error. We cannot remove any element from consideration in this system without first verifying its effect on the entire system. If the dynamics of an element are fast enough to approximate as a straight line rather than a block in the diagram, then we can often ignore that element. For sensors such as the LVDT, this may be verified by checking manufacturer-published information such as bandwidth, damping ratio, etc. This is a fairly simple example, but more complicated systems such as flight control systems may have hundreds of these blocks. These notes only deal with the situation in Fig. 1.3.

¹ Chris Satkoski, Gregory M. Shaver, Ranjit More, Peter Meckl, and Douglas Memering, Dynamic Modeling of a Piezo-electric Actuated Fuel Injector, IFAC Workshop on Engine and Powertrain Control Simulation and Modeling, 11/30-12/2/2009, IFP, Rueil-Malmaison, France.

1.2 Control architectures

We now consider several commonly used control architectures. In the following, the reference command is given by the transfer function R(s), the plant by G(s), the output by Y(s), the compensator by C(s), disturbances by D(s) and noise by N(s).
Open-loop/Feedforward control

We now consider the simplest case of control, called open-loop or feedforward control. As the name suggests, there is no feedback of the output back to the compensator; everything is done without output measurements. A block diagram representing a simple feedforward scheme is shown in Fig. 1.5.

Figure 1.5: Block diagram of a feedforward variable valve actuator system.

The transfer function from the input to the output is simply

    Y/R = CG  ⇒  Y = CGR    (1.1)

where the arguments have been dropped for convenience. Since ideally we want Y(s) = R(s), a natural choice for the compensator would be C(s) = G⁻¹(s), which leads to

    Y = CGR = G⁻¹GR = R.    (1.2)

This looks like a perfect scenario, but we made some critical assumptions which make the previous result problematic. One assumption was that there are no disturbances anywhere in the system. This assumption is false, since we always have disturbances in the system.

Although there may be multiple disturbances in different parts of the system, we now consider one type of disturbance for simplicity and for illustrative purposes. Let there be a disturbance in the control signal being fed into the plant, i.e., the control signal input into the plant is not the same as the compensator output. This situation is illustrated in Fig. 1.6.

Figure 1.6: Block diagram of a feedforward variable valve actuator system with disturbance.

For such a system we can derive the transfer function from the reference R(s) to the output Y(s) as

    Y = G(CR + D) = GCR + GD    (1.3)

and substituting the previous compensator choice C(s) = G⁻¹(s),

    Y = GG⁻¹R + GD = R + GD.    (1.4)
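The algebra in (1.1)-(1.4) can be checked numerically at a single frequency. This is a minimal sketch, not part of the original notes: the first-order plant G(s) = 1/(s + 1), the test frequency and the signal amplitudes are all hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical plant G(s) = 1/(s + 1) and its exact inverse C(s) = s + 1,
# evaluated on the imaginary axis at s = jw.
def G(s):
    return 1.0 / (s + 1.0)

def C(s):
    return s + 1.0

w = 0.5          # arbitrary test frequency (rad/s)
s = 1j * w
R, D = 1.0, 0.2  # reference and input-disturbance amplitudes at this frequency

# Without disturbance, eq. (1.2): Y = C*G*R = R (perfect tracking).
Y_clean = C(s) * G(s) * R

# With an input disturbance, eq. (1.4): Y = R + G*D, so the tracking
# error G*D is passed straight to the output and never shrinks.
Y_dist = C(s) * G(s) * R + G(s) * D

print(abs(Y_clean - R))  # ~0: feedforward tracks perfectly
print(abs(Y_dist - R))   # = |G(jw)|*D: the disturbance goes through untouched
```

The same evaluation at any other frequency gives the same conclusion, since C(s)G(s) = 1 identically for this (exactly known) plant.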
Therefore, even in the case of a single source of disturbance, feedforward fails to provide good reference tracking, since we now have an extra G(s)D(s) term that is unwanted and perturbs the output from the reference signal.

Another assumption is that there is no model uncertainty. This is always false, since there is always uncertainty in a mathematical model for a variety of reasons, such as dynamics left unmodeled for simplicity and physical variation between systems. Thus we do not have the actual true model but only a perturbed model Ĝ(s), and therefore we can only implement C(s) = Ĝ⁻¹(s). Substituting this into (1.1) gives

    Y = CGR = Ĝ⁻¹GR = εR    (1.5)

where ε is some factor dependent on the model uncertainty. Feedforward fails when we have uncertainty in the model. A more acute problem occurs when G(s), and hence Ĝ(s), have right-half plane (RHP) zeros, i.e., roots of the numerator with positive real parts. In this situation, Ĝ⁻¹(s) will have RHP poles, i.e., roots of the denominator with positive real parts; something that characterizes unstable systems. And since Ĝ⁻¹ ≠ G⁻¹, there will never be perfect cancellation of the RHP poles and zeros. Hence the entire feedforward system (even without disturbances) will be unstable because of the existence of RHP poles.

Even though the previous discussion has painted a bleak picture, feedforward control is still a useful tool as long as it is used in an intelligent manner. We sometimes combine feedback and feedforward control to achieve excellent results, as will be shown later.

Feedback/closed-loop control

We consider the previous problem of an input disturbance, but now we employ a feedback scheme to tackle the control and reference tracking problem. We also add noise to the measurement of the output; something that is expected and unavoidable. The block diagram of such a closed-loop system is shown in Fig. 1.7.

Figure 1.7: Block diagram of a feedback variable valve actuator system with disturbance and noise.
Following basic procedures, we determine the system transfer function as

    Y = GC(R − Y − N) + GD    (1.6)
      = [GC/(1 + GC)]R + [G/(1 + GC)]D − [GC/(1 + GC)]N    (1.7)
      = YR + YD − YN    (1.8)

where YR(s) is the transfer function from R(s) to Y(s), YD(s) is the transfer function from D(s) to Y(s) and YN(s) is the transfer function from N(s) to Y(s). For the output to perfectly track the reference input, we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Let the compensator be a large number, i.e., C(s) → ∞; then

    G/(1 + GC) → 0  ⇒  YD → 0    (1.9)

which implies complete disturbance rejection, i.e., the feedforward problem is solved. Furthermore,

    GC/(1 + GC) → 1  ⇒  YR → 1    (1.10)

which is exactly what we want for perfect reference tracking. This is the motivation behind high-gain feedback. But we also have

    GC/(1 + GC) → 1  ⇒  YN → 1    (1.11)

which means the noise is passed directly to the output; something we certainly do not want. There is an inherent problem here because the coefficients of R(s) and N(s) are the same. Thus we cannot have YR(s) → 1 and YN(s) → 0 simultaneously; a fundamental contradiction. The only way to get rid of the noise is to set C(s) = 0, but then there is no control action at all.

Feedback helps us deal with disturbances and plant uncertainty. High-gain feedback looks similar to plant inversion in feedforward control, only better: it gives us a plant-inversion-like process and mitigates the effect of disturbances. The drawback, however, is that mitigating noise and perfect tracking cannot be achieved simultaneously.

In many situations, the reference tracking is in a low frequency range. An example of this would be the reference signal for the bank angle of a commercial airliner. Also, for most sensors, noise becomes prevalent mainly in the high frequency ranges. Thus we can design C(s) such that the transfer function

    YR = YN = GC/(1 + GC)    (1.12)

is 1 at low frequencies (to aid reference tracking) and 0 at high frequencies (to mitigate measurement noise).
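The high-gain behaviour of (1.9)-(1.11) can be verified numerically. This sketch uses a hypothetical first-order plant G(s) = 1/(s + 1) with a pure-gain compensator C(s) = K; none of these values come from the notes.

```python
import numpy as np

# Hypothetical plant G(s) = 1/(s + 1).
def G(s):
    return 1.0 / (s + 1.0)

def closed_loop(K, w):
    """Return (|YR|, |YD|) at frequency w for gain compensator C(s) = K."""
    s = 1j * w
    L = G(s) * K               # loop gain GC
    YR = L / (1 + L)           # R -> Y (same magnitude as the N -> Y path)
    YD = G(s) / (1 + L)        # D -> Y
    return abs(YR), abs(YD)

w = 0.1  # a low test frequency (rad/s)
for K in (1.0, 10.0, 1000.0):
    print(K, closed_loop(K, w))  # as K grows, |YR| -> 1 and |YD| -> 0

# The N -> Y magnitude equals |YR|, so driving |YR| -> 1 necessarily
# passes the measurement noise to the output as well: the contradiction
# between (1.10) and (1.11) in numbers.
```

Raising K pushes |YR| from about 0.5 towards 1 while |YD| collapses towards 0, exactly the high-gain motivation, and the noise path follows |YR| upward.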
The frequency response for such a transfer function is shown in Fig. 1.8.

Figure 1.8 shows a transfer function with a bandwidth (i.e., approximate roll-off frequency) of 7 radians per second. It implies, for good tracking and noise mitigation, that the reference signal must be predominant at frequencies below 7 radians per second and the measurement noise must be predominant at frequencies above 7 radians per second. Thus, the extent to which you can be aggressive with control, i.e., have high bandwidth, depends on two factors:
- The frequency range at which the noise is present.
- The reference signal frequency range.

Figure 1.8: Typical frequency response magnitude of YR = YN to mitigate noise and provide good tracking (magnitude in dB versus frequency in rad/sec).

This is an example of a fundamental limitation in a feedback situation. The aggressiveness of the control is limited by the measurement noise and the reference signal frequency range.

Closing the loop allows us to trade off the impact of disturbances, noise, reference tracking and stability, among others. It gives us the freedom to trade off quantities which we might not otherwise have been able to, such as plant uncertainty, noise mitigation and tracking accuracy. These notes deal with developing a sophisticated outlook and precise tools for making these tradeoffs.

Feedback with feedforward

Another way to tackle the above problems is to include a feedforward term in the feedback architecture, as shown in Fig. 1.9. Here Gf(s) is the feedforward transfer function. The transfer function of the system is

    Y = GC(R − Y − N) + GGf R + GD
    ⇒ Y(1 + GC) = (GC + GGf)R + GD − GCN    (1.13)

and hence

    Y = [(GC + GGf)/(1 + GC)]R + [G/(1 + GC)]D − [GC/(1 + GC)]N    (1.14)
      = YR + YD − YN.    (1.15)

As before, we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Notice we have an extra degree of freedom in the design of YR(s) due to the feedforward term Gf(s).
Figure 1.9: Block diagram of a two degree of freedom feedback-feedforward variable valve actuator system with disturbance and noise.

Let Gf(s) = G⁻¹(s); thus

    (GC + GGf)/(1 + GC) = (GC + GG⁻¹)/(1 + GC) = (GC + 1)/(1 + GC) = 1    (1.16)

which gives accurate steady-state tracking, although we still run into problems of model uncertainty, as in the feedforward architecture. But most importantly, since we are not using feedforward or feedback alone, accurate steady-state tracking does not imply noise amplification. Now the challenge is the tradeoff between noise and disturbance. If we set C(s) → ∞,

    G/(1 + GC) → 0    (1.17)

which implies YD(s) → 0. But for YN(s) → 0, we need C(s) = 0 so that

    GC/(1 + GC) → 0.    (1.18)

These seemingly contradictory/conflicting requirements represent a tradeoff that we cannot avoid. If our model is excellent, feedforward is beneficial in this case, but we still have tradeoffs. Thus, control cannot change the underlying problem, but it gives us tools to work with the fundamental tradeoffs to reach an acceptable design.

Feedback with disturbance compensation

Assuming we know the disturbance D(s), will this buy us anything? Consider the feedback architecture shown in Fig. 1.10. Here GD(s) is an additional transfer function called the disturbance compensation transfer function.
Figure 1.10: Block diagram of a combined feedback and disturbance feedback variable valve actuator system with disturbance and noise.

From Fig. 1.10 we have

    Y = GC(R − Y − N) + GCGD D + GD
    ⇒ Y(1 + GC) = GCR + (GCGD + G)D − GCN    (1.19)

and hence

    Y = [GC/(1 + GC)]R + [(GGD C + G)/(1 + GC)]D − [GC/(1 + GC)]N    (1.20)
      = YR + YD − YN.    (1.21)

Again we want YR(s) → 1, YD(s) → 0 and YN(s) → 0. Now we seem to have more flexibility, because setting GD(s) = −C⁻¹(s) gives

    GGD C + G = −GC⁻¹C + G = −G + G = 0  ⇒  (GGD C + G)/(1 + GC) = 0    (1.22)

which implies that we have perfect disturbance rejection. But again this method has its own challenges, because we need to have knowledge of the disturbance. This can come through estimation, especially if we have some idea about the nature of the disturbance; for example, a disturbance in a manufacturing process where the floor shakes in a repeating pattern.
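The cancellation in (1.22) is easy to verify numerically. In this sketch the plant, the constant compensator gain and the test frequency are hypothetical; only the choice GD = −C⁻¹ comes from the derivation above.

```python
import numpy as np

# Hypothetical plant and a constant compensator C(s) = 5.
def G(s):
    return 1.0 / (s + 1.0)

C_gain = 5.0
GD = -1.0 / C_gain  # disturbance compensator GD(s) = -C^-1(s), per eq. (1.22)

s = 1j * 0.7  # arbitrary test frequency
# Disturbance path of eq. (1.20): YD = (G*GD*C + G)/(1 + G*C)
YD = (G(s) * GD * C_gain + G(s)) / (1 + G(s) * C_gain)
print(abs(YD))  # ~0: the disturbance terms cancel exactly
```

The numerator is −G + G = 0 identically, so the result holds at every frequency, provided C(s) is exactly invertible and the disturbance is exactly known; both are strong assumptions in practice.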
Chapter 2
System models and representation

2.1 Model classification

There are many approaches to developing system models and a similarly large number of classifications of model types. System models can be classified as white, grey or black box models. Models where the underlying physics of the system are first considered to help develop the model are known as white box models. Black box models, on the other hand, are entirely data driven: the output of the system subject to a given input is observed, and a corresponding model is formed using tools such as Fourier analysis. Grey box models combine the above two approaches in that the model form is derived from physical principles while the model parameters are determined using experimental data.

Models can also be classified as nominal/control models or simulation/calibration models. The nominal/control model is a simplified dynamic model intended to capture the dynamic coupling between control inputs and system outputs. These models are directed towards controller design, since a simplified model aids in this process. Conversely, simulation models are typically generated to capture as many aspects of the system behavior as accurately as possible. The intended use of these types of models is for system and controller validation, intuition development and assumption interrogation.

2.2 State-space representation

Consider a set of 1st-order ordinary differential equations,

    ẋ1 = f1(x1, ..., xn, u1, ..., um)    (2.1)
     ⋮
    ẋn = fn(x1, ..., xn, u1, ..., um)    (2.2)

where x1, ..., xn are called the system states and u1, ..., um are the system inputs.
Next consider a set of algebraic equations relating outputs to state variables and inputs,

    y1 = g1(x1, ..., xn, u1, ..., um)    (2.3)
     ⋮
    yp = gp(x1, ..., xn, u1, ..., um).    (2.4)

If we let x = [x1, ..., xn]ᵀ, u = [u1, ..., um]ᵀ and y = [y1, ..., yp]ᵀ, then the above relationships can be written in the compact state-space form

    ẋ = f(x, u)    (2.5)
    y = g(x, u).    (2.6)

If the system considered is linear, then it can be written in the linear parameter-varying (LPV) form

    ẋ = A(t)x + B(t)u    (2.7)
    y = C(t)x + D(t)u    (2.8)

or, if the system is linear time-invariant (LTI),

    ẋ = Ax + Bu    (2.9)
    y = Cx + Du    (2.10)

where A, B, C and D are the relevant matrices.

The states are the smallest set of n variables (state variables) such that knowledge of these n variables at t = t0, together with knowledge of the inputs for t ≥ t0, determines the system behaviour for t ≥ t0. The state vector is the n-th order vector with the states as components, and the state-space is the n-dimensional space with the state variables as coordinates. Correspondingly, the state trajectory is the path produced in the state-space as the state vector evolves over time.

The advantages of the state-space representation are that the dynamic model is represented in a compact form with regular notation, the internal behaviour of the system is given treatment, the model can easily incorporate complicated output functions, the definition of states helps build intuition and MIMO systems are easily dealt with. There are many tools available for control design with this type of model form. The drawback of the state-space representation is that the particular equations themselves may not always be very intuitive.

2.3 Input/output differential equation

The input/output differential equation relates the outputs directly to the inputs. It is usually obtained by taking the Laplace transform of the state-space representation, substituting and rearranging appropriately, and finally taking the inverse
Laplace transform to obtain a high-order differential equation. A linear time-invariant input/output differential equation is given by

    y⁽ⁿ⁾ + a_{n−1} y⁽ⁿ⁻¹⁾ + ... + a1 ẏ + a0 y = b_m u⁽ᵐ⁾ + b_{m−1} u⁽ᵐ⁻¹⁾ + ... + b1 u̇ + b0 u.    (2.11)

The advantages of input/output differential equations are that they are conceptually simple, can easily be converted to transfer functions, and many tools are available in this context for control design. They are, however, difficult to solve directly in the time domain, since we would need to evaluate the inverse Laplace transform, which is generally not an easy task.

2.4 Transfer functions

Recall from the study of Laplace transforms the following important transformations:

    L[ḟ(t)] = sF(s) − f(0)    (2.12)
    L[f̈(t)] = s²F(s) − sf(0) − ḟ(0)    (2.13)
     ⋮
    L[f⁽ⁿ⁾(t)] = sⁿF(s) − sⁿ⁻¹f(0) − sⁿ⁻²ḟ(0) − ... − sf⁽ⁿ⁻²⁾(0) − f⁽ⁿ⁻¹⁾(0).    (2.14)

Now consider a generic LTI input/output differential equation given by

    y⁽ⁿ⁾ + a_{n−1} y⁽ⁿ⁻¹⁾ + ... + a1 ẏ + a0 y = b_m u⁽ᵐ⁾ + b_{m−1} u⁽ᵐ⁻¹⁾ + ... + b1 u̇ + b0 u.    (2.15)

Applying the Laplace transforms from above to this differential equation yields

    sⁿY(s) + a_{n−1}sⁿ⁻¹Y(s) + ... + a1 sY(s) + a0 Y(s) + f_y(s, t=0)
      = b_m sᵐU(s) + ... + b1 sU(s) + b0 U(s) + f_u(s, t=0)    (2.16)

where f_y(s, t=0) and f_u(s, t=0) are functions of the initial conditions as given by the Laplace transform. Rearranging the above gives

    Y(s) = [(b_m sᵐ + ... + b1 s + b0)/(sⁿ + a_{n−1}sⁿ⁻¹ + ... + a0)] U(s)   [box 1]
         + [(f_u(s, t=0) − f_y(s, t=0))/(sⁿ + a_{n−1}sⁿ⁻¹ + ... + a0)].     [box 2]    (2.17)

Box 1 represents the transfer function and it describes the forced response of the system. Box 2 represents the free response of the system, depending on the initial conditions. Depending on the system (whether it has a control input or not) and the exact scenario (whether the initial conditions are zero or not), Y(s) may comprise box 1, box 2 or both. For control analysis, we usually assume box 2 is zero and thus box 1 represents the entire system response.
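As a concrete instance of box 1 of (2.17): for the hypothetical ODE y″ + 3y′ + 2y = u′ + 5u (not an example from the notes), the transfer function with zero initial conditions is (s + 5)/(s² + 3s + 2), and its forced step response can be computed directly.

```python
from scipy import signal

# Box 1 of (2.17) for the hypothetical ODE y'' + 3y' + 2y = u' + 5u:
#   Y(s)/U(s) = (s + 5)/(s^2 + 3s + 2), taking box 2 (initial conditions) as zero.
num = [1.0, 5.0]
den = [1.0, 3.0, 2.0]
sys = signal.TransferFunction(num, den)

t, y = signal.step(sys)       # forced response to a unit step, zero ICs
dc_gain = num[-1] / den[-1]   # final value b0/a0 by the final value theorem
print(y[-1], dc_gain)         # the response settles near b0/a0 = 2.5
```

The final value y(∞) = b0/a0 follows from evaluating the transfer function at s = 0, which is consistent with the step response settling at 2.5 here.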
It will be seen later that many results which hold for this case also hold when box 1 is zero and box 2 is not.

The roots of the numerator of a transfer function are called zeros, since they send the transfer function to zero, whereas the roots of the denominator of a transfer function are called poles, since they send the transfer function to ∞. The poles and zeros are very important in analyzing the system response and for designing appropriate compensators.
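As a quick numerical illustration of poles and zeros, a transfer function can be evaluated directly from its numerator and denominator coefficients. The following sketch uses a hypothetical G(s) = (s + 2)/((s + 1)(s + 3)), not one from the notes:

```python
def polyval(coeffs, s):
    # Horner evaluation of a polynomial; coefficients are highest power first.
    acc = 0j
    for c in coeffs:
        acc = acc * s + c
    return acc

def tf_eval(num, den, s):
    # G(s) = num(s)/den(s)
    return polyval(num, s) / polyval(den, s)

# Hypothetical example: G(s) = (s + 2)/((s + 1)(s + 3)) = (s + 2)/(s^2 + 4s + 3)
num = [1.0, 2.0]
den = [1.0, 4.0, 3.0]

# Near the zero s = -2 the magnitude collapses toward 0; near the pole s = -1
# the magnitude blows up toward infinity.
print(abs(tf_eval(num, den, -2 + 1e-9j)))
print(abs(tf_eval(num, den, -1 + 1e-9j)))
```

Evaluating just off the zero and the pole (the tiny imaginary offset avoids an exact division by zero) makes the "sends to zero" / "sends to ∞" behaviour concrete.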
Chapter 3
Dynamic response of systems

3.1 First-order system response
A first-order system (such as a mass-damper system or an RC circuit) takes on the general form

τẏ + y = ku (3.1)
⇒ ẏ + y/τ = (k/τ)u (3.2)
⇒ ẏ + ay = bu,   a = 1/τ, b = k/τ (3.3)

where τ is the time constant of the system response and k is a constant gain. Taking the Laplace transform,

τ[sY(s) − y(0)] + Y(s) = kU(s) (3.4)
⇒ Y(s) = [(k/τ)/(s + 1/τ)] U(s)  {box 1}  +  y(0)/(s + 1/τ)  {box 2}. (3.5)

The roots of the denominator of the transfer function (the characteristic equation) are called poles, and in this case the pole is −a = −1/τ. If the pole lies in the left half of the complex s-plane (LHP), it is stable (its mode decays exponentially). If it lies in the right half of the complex s-plane (RHP), it is unstable (exponentially increasing). A pole on the imaginary axis (with real part equal to 0) is marginally stable.

The free response of this system (due to the initial conditions) is given by y_free(t) = y(0)e^(−t/τ) = y(0)e^(−at). The forced response of the system to a unit step input is given by y_step(t) = k + [y(0) − k]e^(−t/τ). The value of y at one time constant is y(τ) = 0.368y(0) + 0.632k. The lower τ is, the faster the response; thus, making |a| bigger gives a faster response.

3.2 Second-order systems
The general form of a second-order system (such as a spring-mass-damper system) is given by

ÿ + 2ζωn ẏ + ωn² y = kωn² u (3.6)

where k is a constant, ωn is the natural frequency of the system, and ζ is the damping ratio. Taking the Laplace transform as before yields the system in terms of its forced and free responses,

Y(s) = [kωn²/(s² + 2ζωn s + ωn²)] U(s)  {box 1}  +  [(s + 2ζωn)y(0) + ẏ(0)]/(s² + 2ζωn s + ωn²)  {box 2} (3.7)

where the common denominator is the open-loop characteristic equation, whose roots give the system poles. For this canonical form of second-order system, the poles are given by −ζωn ± ωn√(ζ² − 1). If 0 < ζ < 1, then we have two complex poles (since ζ² − 1 is negative), implying oscillation in the system free response. In this case the poles are given by −ζωn ± jωn√(1 − ζ²).

The free response of a second-order system (0 < ζ < 1), for an initial displacement y(0) with ẏ(0) = 0, is given by

y_free(t) = [y(0)/√(1 − ζ²)] e^(−ζωn t) sin(ωd t + tan⁻¹(√(1 − ζ²)/ζ)). (3.8)

It can easily be seen from the above that we need ζωn > 0, or alternatively the poles in the LHP, for stability.

The forced response of a second-order system (0 < ζ < 1) to a unit step is given by

y_step(t) = k[1 − (e^(−ζωn t)/√(1 − ζ²)) sin(ωd t + tan⁻¹(√(1 − ζ²)/ζ))] (3.9)

where ωd = ωn√(1 − ζ²) is the damped natural frequency.

For a second-order system, the rise time tr is given by

tr = (π − β)/ωd (3.10)

where β is the angle such that cos β = ζ. The time to maximum overshoot is given by

tp = π/ωd = Td/2 (3.11)

where Td is the period of the damped oscillation, and the maximum overshoot Mp itself is given by

Mp = e^(−πζ/√(1−ζ²)) ≈ e^(−πζ)  (for small ζ) (3.12)

where the percent overshoot can be determined by multiplying the above expression by 100.

If the system is nonminimum phase (RHP zeros), then the response will have undershoot instead of overshoot. When the system has a RHP zero at s = c, a lower bound for the undershoot Mu is given by

Mu ≥ (1 − δ)/(e^(c·ts) − 1) (3.13)

where δ is the maximum allowable steady-state error in the response beyond the settling time ts. Thus for a 2% settling time, δ = 0.02.
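The second-order formulas above are easy to check numerically. A small sketch follows; the damping ratio and natural frequency used are illustrative values, not taken from the notes:

```python
import math

def step_metrics(zeta, wn):
    """Rise time, peak time and maximum overshoot of a canonical underdamped
    second-order system (0 < zeta < 1), per Eqs. (3.10)-(3.12)."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)     # damped natural frequency
    beta = math.acos(zeta)                   # angle with cos(beta) = zeta
    tr = (math.pi - beta) / wd               # rise time, Eq. (3.10)
    tp = math.pi / wd                        # time to peak, Eq. (3.11)
    Mp = math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))  # Eq. (3.12)
    return tr, tp, Mp

def step_response(t, zeta, wn, k=1.0):
    """Closed-form unit-step response of the canonical system, Eq. (3.9)."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    phi = math.atan2(math.sqrt(1.0 - zeta ** 2), zeta)
    return k * (1.0 - math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta ** 2)
                * math.sin(wd * t + phi))

tr, tp, Mp = step_metrics(0.5, 2.0)
# The response evaluated at tp overshoots the final value by exactly Mp.
overshoot = step_response(tp, 0.5, 2.0) - 1.0
print(round(Mp, 4), round(overshoot, 4))
```

For ζ = 0.5 the overshoot evaluates to about 16.3%, and the peak of the closed-form step response lands exactly on the Mp formula, which is a useful sanity check when deriving specifications.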
3.3 Design considerations
We consider the case where we require a certain maximum percent overshoot in our system. We know that the damping ratio ζ is a critical factor in this, but how must it be chosen so that we achieve the specified maximum percent overshoot? Moreover, how must we choose our poles so that this choice of ζ is satisfied?

We can use the relationship for percent overshoot given previously to determine the damping ratio that ensures a certain maximum overshoot, ζ%OS. As long as we choose a damping ratio greater than ζ%OS, we will not exceed the specified maximum overshoot. In the complex s-plane, the following can easily be shown to be true:

cos θ = ζ (3.14)

where θ is a counterclockwise angle made with the negative real axis. Hence any poles that lie on or between the lines described by θ = ±cos⁻¹(ζ%OS) satisfy ζ ≥ ζ%OS.

For a LHP pole at s = −a (a > 0), the time constant is τ = 1/a. Thus a larger pole magnitude signifies a faster response, and such poles are known as "fast poles". The 2% settling time is given by t2% = 4τ and the 5% settling time by t5% = 3τ, where τ is the time constant. Hence, achieving a settling time of t2% or less requires placing the poles of the system to the left of s = −4/t2%. A similar result is easily derived for t5%.

We can combine the above two results, for maximum overshoot and for settling time, to determine a precise pole region for a desired system response.

3.4 Routh-Hurwitz stability criterion
As discussed previously, poles in the RHP indicate instability of a system. Therefore, the roots of the denominator of a transfer function hold key information regarding the stability and response characteristics of the system.

Consider the denominator of a closed-loop transfer function, also known as the closed-loop characteristic equation,

CLCE(s) = s^n + a1 s^{n−1} + ... + a_{n−1}s + a_n = 0 (3.15)

where a1, ..., an are real constants. The system is stable if the roots of CLCE(s) are in the LHP. If the system is stable, then a1, ..., an > 0, but a1, ..., an > 0 does not by itself imply stability. Furthermore, if any ak ≤ 0, then the system has at least one pole with a positive or zero real part, which implies instability or marginal stability, respectively.

The Routh-Hurwitz stability criterion not only determines whether a system is stable but also specifies how many unstable poles are present, without explicitly solving the characteristic equation CLCE(s). It can also be used for closed-loop systems in which the gain appears as a variable parameter.

By the Routh-Hurwitz stability criterion, if all ak ≠ 0 and a1, ..., an are all positive (or, equivalently, all negative), then we can use the Routh array to determine stability and the number of unstable poles.
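The criterion is easy to mechanize. The sketch below builds the Routh array by the standard construction and counts first-column sign changes; the example polynomials are hypothetical, and the special cases treated later (a zero first element, or an all-zero row) are not handled:

```python
def routh_first_column(coeffs):
    # coeffs: characteristic polynomial [1, a1, ..., an], highest power first.
    # Returns the first column of the Routh array. Special cases (a zero
    # first element or an all-zero row) are NOT handled in this sketch.
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    if len(rows[1]) < len(rows[0]):
        rows[1].append(0.0)
    while len(rows) < len(coeffs):          # a degree-n polynomial has n+1 rows
        p2, p1 = rows[-2], rows[-1]
        # 2x2 determinant pattern of Eq. (3.16), one entry per column
        new = [(p1[0] * p2[i + 1] - p2[0] * p1[i + 1]) / p1[0]
               for i in range(len(p1) - 1)]
        new.append(0.0)
        rows.append(new)
    return [r[0] for r in rows]

def rhp_roots(coeffs):
    # Number of RHP roots = number of sign changes in the first column.
    col = routh_first_column(coeffs)
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

# (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6: stable, no sign changes.
print(rhp_roots([1.0, 6.0, 11.0, 6.0]))   # 0
# (s-1)(s+2)(s+3) = s^3 + 4s^2 + s - 6: one RHP root (at s = 1).
print(rhp_roots([1.0, 4.0, 1.0, -6.0]))   # 1
```

For the stable cubic the first column works out to [1, 6, 10, 6] with no sign changes; introducing the RHP root produces exactly one sign change, matching the criterion.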
To form the Routh array, first arrange the coefficients in two rows as follows:

Row n:    1    a2    a4    ...    0
Row n−1:  a1   a3    a5    ...    0

Then we determine the rest of the elements in the array. For the third row, the coefficients are computed using the pattern

b1 = −(1/a1) det[1 a2; a1 a3],   b2 = −(1/a1) det[1 a4; a1 a5],   b3 = −(1/a1) det[1 a6; a1 a7],   ... (3.16)

This is continued until the remaining elements are all zero. Similarly, for the fourth row,

c1 = −(1/b1) det[a1 a3; b1 b2],   c2 = −(1/b1) det[a1 a5; b1 b3],   c3 = −(1/b1) det[a1 a7; b1 b4],   ... (3.17)

and for the fifth row,

d1 = −(1/c1) det[b1 b2; c1 c2],   d2 = −(1/c1) det[b1 b3; c1 c3],   d3 = −(1/c1) det[b1 b4; c1 c4],   ... (3.18)

where each series of elements is continued until the remainder are all zero. Here det[p q; r s] = ps − qr denotes a 2×2 determinant. The Routh array is then given by

Row n:    1    a2    a4    ...    0
Row n−1:  a1   a3    a5    ...    0
Row n−2:  b1   b2    b3    ...    0
Row n−3:  c1   c2    c3    ...    0
Row n−4:  d1   d2    d3    ...    0
...
Row 2:    e1   e2
Row 1:    f1   0
Row 0:    g1

The number of roots of the closed-loop characteristic equation in the RHP is equal to the number of changes in sign of the coefficients of the first column of the array, [1, a1, b1, c1, ...]. The system is stable if and only if a1, ..., an > 0 and all the terms in [1, a1, b1, c1, ...] are positive.

We have a special case when the first element in a row becomes zero, since this implies division by 0 for the succeeding rows. In this case, replace the 0 with ε (an arbitrarily small constant) and continue obtaining expressions for the elements of the succeeding rows. Finally, when all expressions are obtained, let ε → 0 and evaluate the sign of each expression. This is all that is needed since we are only
concerned with sign changes. If we find no sign changes, then the 0 element signifies a pair of poles on the imaginary axis (no real parts), implying marginal stability.

Another special case is when all the elements in a row become zero. In this case we may still proceed in constructing the Routh array by using the derivative of a polynomial defined from the previous (nonzero) row. If the n-th row is all zero, first form a polynomial of order n − 1 in s using the elements in row n − 1. Then obtain the derivative of this polynomial with respect to s, i.e. apply d/ds, and replace the zero row (the n-th row) with the coefficients of this derivative. The rest of the array is completed as usual.

3.5 Pole/Zero effects
The location of the closed-loop system poles determines the nature of the system modes, but the location of the closed-loop zeros determines the proportion in which these modes are combined. This can easily be verified via partial fraction expansions. Poles and zeros far from the imaginary axis are known as fast poles and zeros, respectively; conversely, poles and zeros close to the imaginary axis are known as slow poles and zeros. Fast zeros, RHP or LHP, have little to no impact on the system response.

Slow zeros have a significant effect on the system response. Slow LHP zeros lead to overshoot, and slow RHP zeros lead to undershoot. A lower bound Mu for the maximum undershoot (due to the presence of a RHP zero at s = c) is given by

Mu ≥ (1 − δ)/(e^(c·ts) − 1) (3.19)

where δ = 0.02 for a 2% settling time ts. From the denominator of the above, it is obvious that fast zeros (large c) lead to a smaller lower bound for the maximum undershoot Mu, and vice versa. Alternatively, a short settling time ts, implying a high bandwidth, leads to more undershoot. Hence, for a given RHP zero, we can only lower ts so much before getting unacceptable undershoot.
To avoid undershoot, the closed-loop bandwidth is limited by the magnitude of the RHP zero. We determine the maximum acceptable bandwidth below.

The approximate 2% settling time is given by ts ≈ 4τ = 4/a, where τ is the time constant and a is the magnitude of the real part of the slowest LHP closed-loop system pole. Mu can then be expressed as

Mu ≳ (1 − δ)/(e^(4c/a) − 1). (3.20)

Thus, if the magnitude of the real part of the slowest LHP closed-loop pole is smaller than the RHP zero, the undershoot will remain low, since the denominator of the above becomes large; the converse is true as well. Keep the real part of the slowest LHP pole smaller in magnitude than the real part of any RHP zero to avoid undershoot.

The above results hold for LHP zeros as well, in an analogous manner, but for overshoot instead. Thus, in order to avoid overshoot, we must keep |a| < |c|, where a is the slowest LHP pole and c is the slowest LHP zero.
If the magnitude of the real part of the dominant closed-loop poles is less than the magnitude of the largest unstable open-loop pole, then significant overshoot will occur. Keep the real part of the slowest LHP pole greater in magnitude than the real part of any unstable open-loop (RHP) pole to avoid this overshoot.
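The undershoot bound of Eq. (3.19) can be tabulated directly. A small sketch follows; the zero locations and settling-time target are illustrative values:

```python
import math

def undershoot_lower_bound(c, ts, delta=0.02):
    """Lower bound on step-response undershoot for a RHP zero at s = c and
    settling time ts, per Eq. (3.19): Mu >= (1 - delta)/(exp(c*ts) - 1)."""
    return (1.0 - delta) / (math.exp(c * ts) - 1.0)

# A slow RHP zero (small c) or an aggressive settling-time target forces
# large unavoidable undershoot; a fast RHP zero relaxes the bound.
ts = 2.0
for c in (0.5, 2.0, 10.0):
    print(c, round(undershoot_lower_bound(c, ts), 6))
```

With ts = 2 s, a zero at c = 0.5 forces at least about 57% undershoot, while a zero at c = 10 makes the bound negligible, which is the "fast zeros have little impact" trend stated above.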
Chapter 4
Frequency response tools

4.1 Frequency response
We consider the input to a transfer function G(s) = B(s)/A(s) to be u(t) = A sin ωt, where A is a constant amplitude and ω is the input frequency. Then it can be shown that the steady-state output is given by

y_ss(t) = |G(jω)| A sin(ωt + ∠G(jω)) (4.1)

where |G(jω)| is simply the magnitude of G(s) evaluated at s = jω, and the phase ∠G(jω) is given by

∠G(jω) = ∠B(jω) − ∠A(jω) = tan⁻¹(Im B(jω)/Re B(jω)) − tan⁻¹(Im A(jω)/Re A(jω)) (4.2)

for all ω ∈ R⁺. Much useful and generalizable information can be gleaned from the frequency response function (FRF) G(jω).

4.2 Bode plots
The pair of plots of magnitude |G(jω)| vs. frequency ω and phase ∠G(jω) vs. frequency ω are collectively known as the Bode plots of G(s). The magnitude is usually plotted in decibels (dB) and the frequency for both plots is on a logarithmic scale. The decibel value of the FRF G(jω) is given by

|G(jω)|_dB = 20 log10 |G(jω)| (4.3)

and the logarithmic scale value of ω is log10 ω. The appropriate values for the magnitude and phase are determined as previously shown.

The benefits of using a logarithmic scale are manifold, but most importantly, multiplication on a linear scale becomes addition on the logarithmic scale. Thus, since a transfer function can be expressed as an arbitrarily long product of factors corresponding to its poles and zeros, on the logarithmic scale we can decompose it into a sum of elementary transfer functions. This helps in plotting the function and in obtaining fundamental insights into the behaviour of the plant.
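The magnitude and phase values that make up a Bode plot can be computed directly from the FRF. A minimal sketch follows; the first-order G(s) = 1/(s + 1) used here is illustrative, not one of the examples in the notes:

```python
import cmath
import math

def polyval(coeffs, s):
    # Horner evaluation; coefficients are highest power first.
    acc = 0j
    for c in coeffs:
        acc = acc * s + c
    return acc

def bode_point(num, den, w):
    """Return (|G(jw)| in dB, phase of G(jw) in degrees)."""
    g = polyval(num, 1j * w) / polyval(den, 1j * w)
    return 20.0 * math.log10(abs(g)), math.degrees(cmath.phase(g))

# G(s) = 1/(s + 1): a first-order low-pass with corner frequency 1 rad/s.
num, den = [1.0], [1.0, 1.0]
for w in (0.01, 1.0, 100.0):
    mag_db, ph = bode_point(num, den, w)
    print(w, round(mag_db, 2), round(ph, 1))
```

At the corner frequency the magnitude is down by about 3 dB with a phase of −45°, and well above the corner the magnitude falls at −20 dB per decade, which is the roll-off behaviour described above.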
Example. Describe the frequency response of the following closed-loop transfer function using Bode plots:

G(s) = (45s + 237)/((s² + 3s + 1)(s + 13)). (4.4)

The Bode plots for this transfer function can be generated using the bode command in MATLAB; they are shown in Figs. (4.1) and (4.2). Figure (4.1) shows the frequency response on the standard logarithmic scale and Fig. (4.2) shows it on a linear scale. The benefits of the logarithmic scale are obvious from the figures, since it displays the response of the system better than the linear scale. This is especially true at very low and very high frequencies, where the linear-scale response grows asymptotically.

Figure 4.1: Logarithmic scale Bode plot of G(s). Note that the frequency is shown in Hz and not in rad/sec to make the linear scale Bode plots distinguishable.

Many systems, including control systems, have their performance specified in terms of frequency response criteria. An important one of these is the bandwidth of the system. We define it, in general terms, as the frequency range over which the magnitude of the transfer function from input to output is close to unity (in absolute terms) or zero (in dB). This implies that the bandwidth is the frequency range over which the system response is close to ideal.

The previous definition can be stated more exactly for control systems, since they normally behave like low-pass systems: systems for which the response begins near zero decibels and then rolls off with increasing frequency.
Figure 4.2: Linear scale Bode plot of G(s).

It is usually the case that the magnitude response does not roll off immediately, but rather remains level over some frequency range. For such systems, the bandwidth is the frequency at which the magnitude has rolled off by 3 dB from its low-frequency level. Another definition, which is the one used in the rest of these notes, is that the bandwidth is the magnitude of the real part of the slowest LHP pole. The bandwidths given by the two definitions usually closely coincide.

4.3 Gain and phase margins
As has been previously discussed, a system is stable if all of its poles are in the LHP; otherwise it is unstable. Not all stable systems are alike, however: some systems are more stable than others, and we are thus faced with the problem of quantifying the extent of stability of a system. The concepts of gain and phase margins are useful in this regard, and Bode plots in particular are helpful in determining them. It must be noted that gain and phase margins are only meaningful for stable closed-loop systems that become unstable with increasing gain.

Consider a simple stable feedback system with plant H(s) = B(s)/A(s), compensator C(s) = P(s)/L(s), and a constant loop gain K. Let G(s) be the closed-loop transfer function, given by

G(s) = KH(s)C(s)/(1 + KH(s)C(s)). (4.5)

Then the closed-loop characteristic equation CLCE, which is the denominator of the closed-loop transfer function, can easily be found to be

CLCE(s) = 1 + KH(s)C(s) (4.6)
where the roots of 1 + KH(s)C(s) = 0 are the closed-loop poles of the system. These are given by the solutions of

KH(s)C(s) = −1. (4.7)

Since angles in the complex s-plane are measured in a counterclockwise direction from the positive real axis, in polar coordinates the point −1 lies at an angle ∠(−1) = −180° and magnitude |−1| = 1. Thus, if we evaluate the frequency response of the loop transfer function KH(jω)C(jω), the closed-loop system is on the verge of instability precisely at a frequency where we have both ∠KH(jω)C(jω) = −180° and |KH(jω)C(jω)| = 1. It can also be shown that the closed-loop system is stable when |KH(jω)C(jω)| < 1 at the frequency where the phase reaches −180°; for a system that is already stable, there is no frequency at which |KH(jω)C(jω)| ≥ 1 and ∠KH(jω)C(jω) = −180° hold simultaneously. Such frequencies are commonly known as crossover frequencies: the gain crossover frequency is where |KH(jω)C(jω)| = 1 (i.e. 0 dB), and the phase crossover frequency is where ∠KH(jω)C(jω) = −180°. Since systems with RHP open-loop poles are a priori unstable in open loop, the simple notions of phase and gain margin developed here do not apply to them.

It follows that we can define the gain and phase margins as how much we need to change the system (in terms of magnitude and angle, respectively) before we reach instability, and we can exploit the Bode plots to formulate succinct definitions. Thus, the phase margin is defined as the amount of additional phase lag needed at the gain crossover frequency to bring the system to the verge of instability, i.e. to ∠KH(jω)C(jω) = −180°. Correspondingly, the gain margin is defined as the reciprocal of the magnitude |KH(jω)C(jω)|, or the negative of its decibel value, at the phase crossover frequency; this gain margin determination is based on a decibel plot, since 20 log10 1 = 0.

For minimum-phase systems, the phase margin so defined must be positive for stability. Equivalently, the phase may be measured relative to +180°, since it refers to the same angle in the complex s-plane.
In either convention, for minimum-phase systems, the phase margin must be positive for stability.

The gain margin describes the maximum multiple of the gain K that can be applied in feedback before the system becomes unstable. A gain margin that is positive in decibels implies that the system is stable; conversely, a negative decibel gain margin implies instability of the underlying system.

For a stable minimum-phase system, the gain margin indicates how much we can increase the gain before the onset of instability. Conversely, for an unstable minimum-phase system, it indicates how much we must decrease the gain to regain stability; since such a system yields a negative decibel gain margin, this corresponds to a gain multiplier 0 < K < 1. This is not a feasible control gain for most systems, and the gain margin is therefore not a meaningful measure for many unstable systems.

For systems that never approach the phase crossover frequency, the gain margin is either ∞ or −∞, indicating that the system is stable for all gains or unstable for all gains, respectively. Furthermore, a system may have more than one crossover frequency, indicating that it is stable only for gain values within a certain range. For stable systems with more than one gain crossover frequency, the phase margin is measured at the highest gain crossover frequency.

For nonminimum-phase systems, or systems with undefined phase/gain margins, the best recourse is to use the Nyquist stability criterion.
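Both margins can be found numerically by scanning the loop frequency response for its crossover points. The sketch below is illustrative: the third-order loop L(s) = K/(s + 1)³ and the gain K = 4 are hypothetical choices, not examples from the notes:

```python
import cmath
import math

def loop_tf(s, K):
    # Hypothetical loop transfer function L(s) = K/(s + 1)^3.
    return K / (s + 1.0) ** 3

def margins(K, w_lo=1e-3, w_hi=1e3, n=100000):
    """Scan log-spaced frequencies for the crossover points. Returns the gain
    margin (as a plain multiplier) and the phase margin (in degrees)."""
    gm = pm = None
    prev_mag = prev_phase = prev_raw = None
    offset = 0.0
    for i in range(n + 1):
        w = w_lo * (w_hi / w_lo) ** (i / n)
        g = loop_tf(1j * w, K)
        mag = abs(g)
        raw = math.degrees(cmath.phase(g))
        if prev_raw is not None:            # unwrap the +-180 degree jumps
            if raw - prev_raw > 180.0:
                offset -= 360.0
            elif raw - prev_raw < -180.0:
                offset += 360.0
        phase = raw + offset
        if prev_mag is not None:
            if gm is None and prev_phase > -180.0 >= phase:
                gm = 1.0 / mag              # gain margin at the phase crossover
            if pm is None and prev_mag >= 1.0 > mag:
                pm = 180.0 + phase          # phase margin at the gain crossover
        prev_mag, prev_phase, prev_raw = mag, phase, raw
    return gm, pm

gm, pm = margins(K=4.0)
print(round(gm, 3), round(pm, 1))
```

For this loop, the phase crossover lies at ω = √3 rad/s, where |L| = 0.5, so the gain margin is 2 (about 6 dB); the phase margin works out to roughly 27°. Both agree with a hand calculation from the −3·tan⁻¹(ω) phase of the triple pole.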
4.4 Phase margin and second-order systems
For a strictly canonical second-order closed-loop system, the phase margin is related to the damping ratio ζ. We must be careful to make sure we have a true second-order closed-loop system before using the following results. The second-order closed-loop transfer function CLTF(s) is given by

CLTF(s) = ωn²/(s² + 2ζωn s + ωn²) (4.8)

and the related second-order open-loop transfer function OLTF(s) is given by

OLTF(s) = ωn²/(s(s + 2ζωn)). (4.9)

The phase margin of this open-loop transfer function in unity-feedback closed loop can then be related to ζ as follows:

Phase margin = tan⁻¹( 2ζ / √(√(1 + 4ζ⁴) − 2ζ²) ) (4.10)
≈ 100ζ   for 0 < Phase margin < 70°. (4.11)

The gain crossover frequency ωgc for such a second-order system can also be determined as

ωgc = ωn √(√(1 + 4ζ⁴) − 2ζ²). (4.12)

This allows us to determine the required gain crossover frequency for a specified percent overshoot (since the percent overshoot determines ζ).

The bandwidth ωBW of the system is given by

ωBW = ωn √(1 − 2ζ² + √(4ζ⁴ − 4ζ² + 2)). (4.13)

It can also be shown for the above system that ωgc ≤ ωBW ≤ 2ωgc.

Since the time constant is approximately τ ≈ 1/ωBW, a higher bandwidth implies a faster response (smaller time constant). This also allows us to specify ωBW in terms of τ and determine the corresponding range of ωgc. We can use this and the previous result to design a system in terms of phase margins and crossover frequencies using specified design requirements.

4.5 Root-locus
The basic characteristics of the transient response of a system are closely related to the pole locations, which in turn depend on the value of the loop gain in a simple feedback setting where the compensator is a constant gain. Hence, it becomes important to know how the closed-loop poles move in the complex s-plane as the gain is varied. Once the desired poles are determined using previously discussed techniques, the design problem then only involves determining the appropriate gain
to place the system poles at their desired locations. In many cases a simple gain will not suffice, and we need to add a more complex compensator.

Root-locus addresses the design problem of adjusting a single parameter (usually, but not necessarily, a gain). It was developed by W. R. Evans and involves plotting the roots of the characteristic equation of the closed-loop system for all values (0 to ∞) of an adjustable system parameter, usually the loop gain. "Root-locus" means the locus of the roots of the characteristic equation.

Consider a simple feedback system with plant H(s) and compensator C(s). The closed-loop transfer function G(s) is then given by

G(s) = H(s)C(s)/(1 + H(s)C(s)). (4.14)

The characteristic equation is the denominator of the above equated to 0, and it satisfies

H(s)C(s) = −1. (4.15)

We can express this in polar form as

∠H(s)C(s) = ±180°,   |H(s)C(s)| = 1. (4.16)

Only the values of s that satisfy the above satisfy the characteristic equation, and it follows that these are the poles of the closed-loop transfer function G(s).

Let H(s)C(s) have poles at p1, ..., pn and zeros at z1, ..., zm. Then the angle at a given s may be determined using

∠H(s)C(s) = ∠z1 + ... + ∠zm − ∠p1 − ... − ∠pn (4.17)

where the angle contribution of an arbitrary zero/pole factor x is ∠x = tan⁻¹(Im x/Re x). The magnitude of any complex number x is given by |x| = √((Re x)² + (Im x)²); this can be used to determine |H(s)C(s)|.

• Angles are measured counterclockwise from the positive real axis.
• Root-loci are symmetric about the real axis, due to the presence of complex conjugates.
• If the difference between the orders of the denominator and numerator of H(s)C(s) is greater than or equal to two, then the sum of the poles is a constant. This implies that if some poles move toward the right, others have to move toward the left.
• A slight change in the pole-zero configuration may cause significant changes in the root-locus.
• The patterns of the root-loci depend only on the relative separation of the open-loop poles and zeros.
• If a pole in the root-locus plot lies in the RHP (implying instability), or in the LHP, only for a bounded interval of the gain, then the system is called conditionally stable. Conditional stability is not desirable in a control system, since the danger always exists of becoming unstable by either increasing or decreasing the gain too much.
• If the construction of G(s) involved a cancellation of poles with zeros because of the interaction between the plant and compensator, not all the roots of the true characteristic equation will appear (since we are dealing with a reduced equation). To avoid this problem, add the canceled closed-loop pole retroactively to the closed-loop poles obtained from the root-locus plot of G(s). The canceled pole is still a pole of the system; it is only canceled in the feedback path.

Example. For the feedback system shown in Fig. (4.3) with

H(s) = 1/(s(s + 1)²) (4.18)

find the root-locus plot and the gain values that ensure the system is stable.

Figure 4.3: Simple gain feedback system.

The root-locus plot for this transfer function can be approximately drawn by hand, or plotted using the rlocus command in MATLAB as shown in Fig. (4.4). The open-loop poles (one at s = 0 and a double pole at s = −1) and their movement with increasing gain are indicated on the plot.

Figure 4.4: Root-locus plot for H(s).

Figure 4.5: Maximum stabilizing gain for closed-loop system.

The branch leaving s = −1 along the negative real axis never becomes unstable for any value of K. However, the branches leaving s = 0 and s = −1 meet, break away from the real axis, and cross the imaginary axis to become unstable for some value of K. This maximum value of the gain, K = 2, is shown in Fig. (4.5); thus, for any gain value K > 2, the system becomes unstable.

4.6 Nyquist stability criterion
This is a helpful tool for many kinds of systems, including time-delay systems, and for robust stability analysis. Given a loop transfer function G(s), we want to determine closed-loop stability/instability and also the number of unstable closed-loop poles.

Nyquist plots are obtained by plotting G(jω) for all ω in the complex plane (real vs. imaginary part), with the direction of the plotted curve indicated in terms of increasing ω. This can usually be done with the aid of Bode plots, by reading off magnitude/phase values at chosen frequencies and plotting these points in the complex plane in polar coordinates (i.e. magnitude and direction/angle). Angles are measured counterclockwise from the positive real axis. The Nyquist plot is symmetric about the real axis, due to the presence of complex conjugates.

The number of unstable closed-loop poles N is then given by the equation

N = NCW + NOL (4.19)

where NOL is the number of unstable open-loop poles and NCW is the number of clockwise encirclements of the point −1/K by the Nyquist plot. The variable K is an independent gain on the plant and is usually considered 1 when the Nyquist
plot is not being used for design purposes. A counterclockwise encirclement counts as a negative clockwise encirclement.

Example. Consider the simple feedback system shown in Fig. (4.6). Given

G(s) = 3/(s − 1),   C(s) = K/(s + 3) (4.20)

where K is a gain to be determined, find the appropriate Nyquist plot and comment on the stability of the closed-loop system.

Figure 4.6: Block diagram of simple single degree of freedom feedback system.

Since Nyquist plotting requires an independent gain on the plant, we rearrange the feedback system so that the loop transfer function takes the form of a gain K acting on

GCL(s) = KH(s) = K · 3/((s + 3)(s − 1)). (4.21)

The corresponding block diagram is shown in Fig. (4.7).

Figure 4.7: Block diagram of feedback system manipulated for generating the Nyquist plot.

The Bode diagram is then plotted as in Fig. (4.8), and values at relevant points are picked out in order to make the Nyquist plot. Typical points at which to read off the angle/magnitude of the frequency response are where the angle is a multiple of 90°; a more complex Nyquist plot would require more points to get an accurate representation. The corresponding Nyquist plot is shown in Fig. (4.9), with the directions put in according to increasing frequency.

It is clear from the expression of H(s) that we have 1 unstable open-loop pole at s = 1, i.e. NOL = 1. From the Nyquist plot we have 1
counterclockwise encirclement, or −1 clockwise encirclements, when −1 < −1/K < 0, i.e. NCW = −1. It follows that

NCL = NCW + NOL = { −1 + 1 = 0   for −1 < −1/K < 0;   0 + 1 = 1   for −1/K < −1 } (4.22)

and thus

NCL = { 0   if K > 1;   1   if 0 < K < 1 }. (4.23)

Figure 4.8: Bode plot of closed-loop system.

Figure 4.9: Nyquist plot of closed-loop system.
In other words, any gain greater than 1 ensures stability of the closed-loop system.

When there is a pole at the origin or anywhere else on the imaginary axis, the Nyquist plot becomes singular, which leads to a problem in counting the encirclements. This problem is countered by evaluating G(s) at a point which is not singular but is very close to the singular point. If the singular point is given by jωs, then the nearby point is usually taken to be ε + jωs, where ε is an arbitrarily small positive real number, as shown in Fig. (4.10).

Figure 4.10: s-plane showing path taken to determine how symmetric singularities join when making Nyquist plots.

This point is then located on the plot, and its magnitude is made arbitrarily large (tending to ∞), which helps indicate in which direction the singular points join at infinite distance.

Example. Consider a feedback system similar to the one shown in Fig. (4.7), with loop transfer function

GCL(s) = KH(s) = K · 1/(s(s + 1)²) (4.24)

where K is a gain to be determined. Find the appropriate Nyquist plot.

The Bode diagram is plotted as in Fig. (4.11), and values at relevant points are picked out in order to make the Nyquist plot. Typical points at which to read off the angle/magnitude of the frequency response are where the angle is a multiple of 90°; a more complex Nyquist plot would require more points to get an accurate representation.
Figure 4.11: Bode plot of closed-loop system.

Since this transfer function has a pole on the imaginary axis at s = 0, we do not evaluate GCL(j0) but instead evaluate GCL(ε + j0), where ε is an arbitrarily small number. This is in order to determine the asymptotic direction of the closed contour of the Nyquist plot. The corresponding Nyquist plot, with directions, is shown in Fig. (4.12).

Figure 4.12: Nyquist plot of closed-loop system.

The stability of the closed-loop system may be determined as before
using the Nyquist plot.

4.7 Feedback with disturbances
Consider a single degree-of-freedom feedback system (Fig. 4.13) with input disturbance Di (a disturbance between the compensator output U(s) and the input of the plant G(s)), output disturbance Do (a disturbance on the plant output), and measurement noise/disturbance Dm. Let the system input be R(s).

Figure 4.13: Block diagram of a single degree-of-freedom feedback system with disturbances.

For such a system, we can write the following two relations:

Y(s) = T(s)R(s) + S(s)Do(s) + Si(s)Di(s) − T(s)Dm(s) (4.25)
U(s) = Su(s)R(s) − Su(s)Do(s) − T(s)Di(s) − Su(s)Dm(s) (4.26)

where S(s) is the sensitivity function, T(s) is the complementary sensitivity function, Si(s) is the input-disturbance sensitivity function and Su(s) is the control sensitivity function. These are given by the following relations:

T(s) = G(s)C(s)/(1 + G(s)C(s)) (4.27)
S(s) = 1/(1 + G(s)C(s)) (4.28)
Si(s) = G(s)/(1 + G(s)C(s)) = G(s)S(s) (4.29)
Su(s) = C(s)/(1 + G(s)C(s)) = C(s)S(s). (4.30)

We can use the above to derive transfer functions between any input/output pair. Furthermore, we can independently assess the stability of each input/output pair by evaluating the relevant transfer function.
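Equations (4.27)-(4.30) are straightforward to evaluate numerically along s = jω. The sketch below uses a hypothetical plant and PI compensator, chosen purely for illustration:

```python
# Hypothetical loop: plant G(s) = 1/(s(s + 2)) with PI compensator C(s) = 4 + 3/s.
G = lambda s: 1.0 / (s * (s + 2.0))
C = lambda s: 4.0 + 3.0 / s

# The four closed-loop sensitivity functions of Eqs. (4.27)-(4.30).
T  = lambda s: G(s) * C(s) / (1.0 + G(s) * C(s))
S  = lambda s: 1.0 / (1.0 + G(s) * C(s))
Si = lambda s: G(s) / (1.0 + G(s) * C(s))
Su = lambda s: C(s) / (1.0 + G(s) * C(s))

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    # The algebraic constraint T + S = 1 holds at every frequency, as do the
    # factorizations Si = G*S and Su = C*S.
    print(w, abs(T(s) + S(s) - 1.0), abs(Si(s) - G(s) * S(s)))
```

The printed residuals are at floating-point noise level, confirming the identities that the design discussion below relies on.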
Example. Find the closed-loop sensitivity functions for a closed-loop system with plant transfer function

G(s) = 3/((s + 4)(−s + 2))    (4.31)

and compensator

C(s) = (−s + 2)/s.    (4.32)

Also comment on the stability of the system.

Since G(s) = B(s)/A(s) and C(s) = P(s)/L(s),

B(s) = 3,  A(s) = (s + 4)(−s + 2),  P(s) = −s + 2,  L(s) = s    (4.33)

which gives us

T(s) = GC/(1 + GC) = BP/(AL + BP) = 3/(s² + 4s + 3)    (4.34)
S(s) = AL/(AL + BP) = s(s + 4)/(s² + 4s + 3)    (4.35)
Su(s) = SC = (s + 4)(−s + 2)/(s² + 4s + 3)    (4.36)
Si(s) = SG = 3s/((s² + 4s + 3)(−s + 2)).    (4.37)

All the above closed-loop transfer functions are stable except the input-disturbance sensitivity Si(s), which has a RHP pole at s = 2. Thus the input-disturbance-to-output path is unstable, and therefore the entire system is unstable.

As can be seen, the complementary sensitivity function T(s) is the basic transfer function from the input R(s) to the output Y(s). Thus, for good tracking we need T(s) → 1. But as can be seen from the first expression for Y(s), this amplifies the measurement noise Dm. The sensitivity S(s), on the other hand, is the transfer function from the output disturbance Do(s) to the output Y(s). Moreover, the sensitivity function S(s) is directly related to the input sensitivity Si(s) and the control sensitivity Su(s), both of which we want to make as small as possible. Therefore, we want S(s) → 0 and T(s) → 1. This is both possible and is actually a constraint on the system, since it is easily shown that

T(s) + S(s) = 1.    (4.38)

But as mentioned earlier, T(s) → 1 amplifies measurement noise. If T(s) = 1 (and thus S(s) = 0) for all frequencies ω, then the system has infinite bandwidth (fast response). However, it then amplifies the measurement noise at all frequencies, which is especially noticeable at high frequencies, since measurement noise is usually significant there. Therefore, a high bandwidth is not necessarily desirable from a control design standpoint.
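A quick numerical check of the example above, using numpy polynomial arithmetic (coefficient arrays, highest power first; the variable names are mine):

```python
import numpy as np

# Polynomials from eq. (4.33), as coefficient arrays (highest power first).
B = np.array([3.0])                      # B(s) = 3
A = np.polymul([1.0, 4.0], [-1.0, 2.0])  # A(s) = (s + 4)(-s + 2)
P = np.array([-1.0, 2.0])                # P(s) = -s + 2
Lc = np.array([1.0, 0.0])                # L(s) = s

# Closed-loop characteristic polynomial: Acl = AL + BP.
Acl = np.polyadd(np.polymul(A, Lc), np.polymul(B, P))

# Numerators of the sensitivity functions (common denominator Acl):
T_num = np.polymul(B, P)    # T  = BP/Acl
S_num = np.polymul(A, Lc)   # S  = AL/Acl
Si_num = np.polymul(B, Lc)  # Si = G*S = BL/Acl
Su_num = np.polymul(A, P)   # Su = C*S = AP/Acl

print(np.roots(Acl))  # roots: {2, -1, -3} -- one RHP root
```

The RHP root of Acl at s = 2 cancels against the common factor (−s + 2) in the numerators of T, S and Su, but not in Si's numerator 3s, which is why only the input-disturbance path is unstable.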
Infinite bandwidth leads to high measurement noise because T(s) = 1 ∀ ω.
In real systems, T(s) cannot be 1 for all frequencies and rolls off at some finite bandwidth frequency.

• The bandwidth is defined as ωBW = 1/τ, where τ is the time constant of the slowest LHP pole; an alternate definition is the 3 dB point on a Bode magnitude plot.
• When the true plant transfer function is not available, we proceed with our analysis using the nominal plant transfer function G0(s). In this case the sensitivity functions remain the same but with G0(s) in place of G(s); we denote them T0(s), S0(s), Si0(s) and Su0(s).
• For a plant G(s) = B(s)/A(s) and a compensator C(s) = P(s)/L(s), the closed-loop characteristic polynomial Acl is, as before, the denominator of all the sensitivity functions above:

Acl = AL + BP = 0    (4.39)

• For stability we need all the sensitivity functions to be stable, i.e. all the roots of Acl must lie in the LHP.
• This holds only if there is no cancellation of unstable poles between C(s) and G(s). Canceling unstable poles is not recommended: since we never have a perfect model, the cancellation is inexact and we end up with a nonminimum-phase (RHP) zero alongside the unstable pole. A RHP pole and zero very close to each other lead to much worse problems, as will be seen later.

4.8 Trends from Bode & Poisson's integral constraints

Consider the case when we have no open-loop RHP poles or zeros. Then for stability, the following is satisfied:

∫₀^∞ ln|S(jω)| dω = { 0 for τd > 0;  −kπ/2 for τd = 0 }    (4.40)

where k = lim_{s→∞} sG(s)C(s) and τd is the time delay in the system. This is the first of Bode's integral constraints on sensitivity. The following constraint on the complementary sensitivity must also be satisfied for stability:

∫₀^∞ ln|T(j/ω)| d(1/ω) = πτd − π/(2kr)    (4.41)

where kr⁻¹ = lim_{s→0} 1/(sG(s)C(s)). Since an integral is the area under a graph, these constraints dictate how much of the graphs of ln|T(jω)| and ln|S(jω)| must lie above or below the 0 axis.
Upon inspection of the equations, it can be seen that time delay makes the integrals more positive, which means that with a time delay both ln|T(jω)| and ln|S(jω)| must spend more area above the 0 axis to satisfy the constraints. As mentioned before, this is undesirable; therefore, time delays are undesirable.
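Equation (4.40) can be checked numerically for a simple delay-free loop. The example below assumes G(s)C(s) = 1/(s + 1) (my choice, not from the notes), for which S(s) = (s + 1)/(s + 2) and k = lim_{s→∞} sG(s)C(s) = 1, so the sensitivity integral should equal −kπ/2 = −π/2:

```python
import numpy as np
from scipy.integrate import quad

# Assumed delay-free loop: G(s)C(s) = 1/(s + 1),
# so S(s) = 1/(1 + GC) = (s + 1)/(s + 2) and k = lim s*GC = 1.
def ln_abs_S(w):
    return np.log(abs((1j * w + 1) / (1j * w + 2)))

# Bode's sensitivity integral, eq. (4.40) with tau_d = 0.
val, err = quad(ln_abs_S, 0, np.inf)
print(val)  # approximately -pi/2
```

The negative area under ln|S(jω)| here is exactly the "waterbed" budget the text describes: pushing |S| below 0 dB in one band forces it above 0 dB elsewhere.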
