1 Basics of PID Control

1.1 Introduction

A Proportional–Integral–Derivative (PID) controller is a three-term controller with a long history in the automatic control field, dating from the beginning of the last century (Bennett, 2000). Owing to its intuitiveness and its relative simplicity, in addition to the satisfactory performance it is able to provide with a wide range of processes, it has become in practice the standard controller in industrial settings. It has evolved along with the progress of technology, and nowadays it is very often implemented in digital form rather than with pneumatic or electrical components. It can be found in virtually all kinds of control equipment, either as a stand-alone (single-station) controller or as a functional block in Programmable Logic Controllers (PLCs) and Distributed Control Systems (DCSs). Indeed, the new potentialities offered by the development of digital technology and of software packages have led to a significant growth of research in the PID control field: new effective tools have been devised for the improvement of the analysis and design methods of the basic algorithm, as well as of the additional functionalities that are implemented with the basic algorithm in order to increase its performance and its ease of use.
The success of PID controllers is also enhanced by the fact that they often represent the fundamental component of more sophisticated control schemes that can be implemented when the basic control law is not sufficient to obtain the required performance or a more complicated control task is of concern.
In this chapter, the fundamental concepts of PID control are introduced with the aim of presenting the rationale of the control law and of describing the framework of the methodologies presented in the subsequent chapters. In particular, the meaning of the three actions is explained and the tuning issue is briefly discussed. The different forms for the implementation of a PID control law are also addressed.
1.2 Feedback Control

The aim of a control system is to obtain a desired response for a given system. This can be done with an open-loop control system, where the controller determines the input signal to the process on the basis of the reference signal only, or with a closed-loop control system, where the controller determines the input signal to the process by also using the measurement of the output (i.e., the feedback signal).
Feedback control is actually essential to keep the process variable close to the desired value in spite of disturbances and variations of the process dynamics, and the development of feedback control methodologies has had a tremendous impact in many different fields of engineering. Besides, the availability of control system components at lower cost has nowadays favoured the spread of applications of the feedback principle (for example, in consumer electronics products).
The typical feedback control system is represented in Figure 1.1. Obviously, the overall control system performance depends on the proper choice of each component. For the purposes of controller design, the actuator and sensor dynamics are often neglected (although the saturation limits of the actuator have to be taken into account) and the block diagram of Figure 1.2 is considered, where P is the process, C is the controller, F is a feedforward filter, r is the reference signal, e = r − y is the control error, u is the manipulated (control) variable, y is the process (controlled) variable, d is a load disturbance signal and n is a measurement noise signal.

Fig. 1.1. Typical components of a feedback control loop (controller, actuator, process, sensor).
Fig. 1.2. Schematic block diagram of a feedback control loop.
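As a minimal illustration of the loop of Figure 1.2, the sketch below simulates the signals r, e, u, y, d and n with a first-order process and a purely proportional controller. All numerical values are assumptions chosen for illustration, not taken from the text.

```python
# Minimal sketch of the closed loop of Figure 1.2 (illustrative values):
# first-order process P with gain K and time constant T, a purely
# proportional controller C(s) = Kp, unit feedforward filter F, and
# signals r, e, u, y, d, n named as in the diagram.
K, T = 1.0, 1.0        # process gain and time constant (assumed)
Kp = 4.0               # proportional gain (assumed)
dt, t_end = 0.01, 20.0

y = 0.0
for _ in range(int(t_end / dt)):
    r = 1.0            # reference (unit step)
    d = 0.0            # load disturbance (none here)
    n = 0.0            # measurement noise (none here)
    e = r - (y + n)    # control error uses the measured output
    u = Kp * e         # manipulated variable
    y += dt * (-y + K * (u + d)) / T   # Euler step of the process

# A pure gain leaves a steady-state offset: y -> K*Kp/(1 + K*Kp) = 0.8
print(round(y, 3))
```

Note how the pure gain leaves a nonzero steady-state error, a point taken up again in Section 1.4.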
1.3 On–Off Control

One of the most widely adopted (and one of the simplest) controllers is undoubtedly the On–Off controller, where the control variable can assume just two values, umax and umin, depending on the sign of the control error. Formally, the control law is defined as follows:

$$u = \begin{cases} u_{\max} & \text{if } e > 0 \\ u_{\min} & \text{if } e < 0 \end{cases} \qquad (1.1)$$

i.e., the control variable is set to its maximum value when the control error is positive and to its minimum value when the control error is negative. Generally, umin = 0 (Off) is selected and the controller is usually implemented by means of a relay.
The main disadvantage of the On–Off controller is that a persistent oscillation of the process variable (around the set-point value) occurs. Consider for example the process described by the first-order-plus-dead-time (FOPDT) transfer function

$$P(s) = \frac{1}{10s + 1}\,e^{-2s}$$

controlled by an On–Off controller with umax = 2 and umin = 0. The result of applying a unit step to the set-point signal is shown in Figure 1.3, where both the process variable and the control variable have been plotted.
Actually, in practical cases, the On–Off controller characteristic is modified by inserting a dead zone (this results in a three-state controller) or hysteresis, in order to cope with measurement noise and to limit the wear and tear of the actuating device. The typical controller functions are shown in Figure 1.4.
Because of its remarkable simplicity (there are no parameters to adjust), the On–Off controller is indeed suitable when no tight performance is required, since it is very cost-effective in these cases. For this reason it is generally available in commercial industrial controllers.

1.4 The Three Actions of PID Control

Applying a PID control law consists of properly applying the sum of three types of control actions: a proportional action, an integral action and a derivative one.
These actions are described individually hereafter.

Fig. 1.3. Example of an On–Off control application. Solid line: process variable; dashed line: control variable.
Fig. 1.4. Typical On–Off controller characteristics: a) ideal; b) modified with a dead zone; c) modified with hysteresis.

1.4.1 Proportional Action

The proportional control action is proportional to the current control error, according to the expression

$$u(t) = K_p e(t) = K_p\,(r(t) - y(t)), \qquad (1.2)$$

where Kp is the proportional gain. Its meaning is straightforward, since it implements the typical operation of increasing the control variable when the control error is large (with appropriate sign). The transfer function of a proportional controller can be derived trivially as

$$C(s) = K_p. \qquad (1.3)$$

With respect to the On–Off controller, a proportional controller has the advantage of providing a small control variable when the control error is small, and therefore of avoiding excessive control effort. The main drawback of using a pure proportional controller is that it produces a steady-state error. It is worth noting that this occurs even if the process presents an integrating dynamics (i.e., its transfer function has a pole at the origin of the complex plane), in case a constant load disturbance occurs. This motivates the addition of a bias
(or reset) term ub, namely,

$$u(t) = K_p e(t) + u_b. \qquad (1.4)$$

The value of ub can be fixed at a constant level (usually at (umax + umin)/2) or can be adjusted manually until the steady-state error is reduced to zero.
It is worth noting that in commercial products the proportional gain is often replaced by the proportional band PB, that is, the range of error that causes a full-range change of the control variable, i.e.,

$$PB = \frac{100}{K_p}. \qquad (1.5)$$

1.4.2 Integral Action

The integral action is proportional to the integral of the control error, i.e.,

$$u(t) = K_i \int_0^t e(\tau)\,d\tau, \qquad (1.6)$$

where Ki is the integral gain. It appears that the integral action is related to the past values of the control error. The corresponding transfer function is

$$C(s) = \frac{K_i}{s}. \qquad (1.7)$$

The presence of a pole at the origin of the complex plane allows the steady-state error to be reduced to zero when a step reference signal is applied or a step load disturbance occurs. In other words, the integral action is able to set automatically the correct value of ub in (1.4) so that the steady-state error is zero. This fact is better explained by Figure 1.5, where the resulting transfer function is

$$C(s) = K_p\left(1 + \frac{1}{T_i s}\right), \qquad (1.8)$$

i.e., a PI controller results. For this reason the integral action is also often called automatic reset.
Thus, the use of a proportional action in conjunction with an integral action, i.e., of a PI controller, solves the main problems of the oscillatory response associated with an On–Off controller and of the steady-state error associated with a pure proportional controller.
It has to be stressed that when integral action is present, the so-called integrator windup phenomenon might occur in the presence of saturation of the control variable. This aspect will be thoroughly analysed in Chapter 3.
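The effect of the integral action can be sketched numerically. In the following (with an assumed first-order process and illustrative gains, not taken from the text), a pure proportional controller settles with an offset, while a PI controller drives the error to zero.

```python
# Illustrative comparison: a P controller leaves a steady-state error
# on an assumed first-order process y' = -y + u, while a PI controller
# removes it. All numerical values are assumptions.
def simulate(Kp, Ki, t_end=40.0, dt=0.01):
    """Closed-loop Euler simulation with u = Kp*e + Ki*integral(e)."""
    y, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                # unit step reference
        integral += e * dt
        u = Kp * e + Ki * integral
        y += dt * (-y + u)         # process: y' = -y + u
    return y

y_p  = simulate(Kp=4.0, Ki=0.0)    # P only: settles at Kp/(1+Kp) = 0.8
y_pi = simulate(Kp=4.0, Ki=2.0)    # PI: settles at the set-point
print(round(y_p, 3), round(y_pi, 3))
```

The integral term accumulates exactly the bias ub of (1.4) that the operator would otherwise have to set by hand.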
Fig. 1.5. PI controller in automatic reset configuration.

1.4.3 Derivative Action

While the proportional action is based on the current value of the control error and the integral action is based on its past values, the derivative action is based on the predicted future values of the control error. An ideal derivative control law can be expressed as

$$u(t) = K_d\,\frac{de(t)}{dt}, \qquad (1.9)$$

where Kd is the derivative gain. The corresponding controller transfer function is

$$C(s) = K_d s. \qquad (1.10)$$

In order to better understand the meaning of the derivative action, it is worth considering the first two terms of the Taylor series expansion of the control error at time Td ahead:

$$e(t + T_d) \simeq e(t) + T_d\,\frac{de(t)}{dt}. \qquad (1.11)$$

If a control law proportional to this expression is considered, i.e.,

$$u(t) = K_p\left(e(t) + T_d\,\frac{de(t)}{dt}\right), \qquad (1.12)$$

a PD controller naturally results. The control variable at time t is therefore based on the predicted value of the control error at time t + Td. For this reason the derivative action is also called anticipatory control, or rate action, or pre-act.
It appears that the derivative action has great potential for improving the control performance, as it can anticipate an incorrect trend of the control error and counteract it. However, it also has some critical issues that make it not very frequently adopted in practical cases. These will be discussed in the following sections.
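The prediction (1.11) can be checked numerically on an assumed error signal; here e(t) = sin(t) and the look-ahead Td are illustrative choices, not from the text.

```python
import math

# Numerical check of the one-step prediction (1.11) on an assumed
# error signal e(t) = sin(t): the PD-style term e(t) + Td*de/dt
# approximates the future value e(t + Td).
Td = 0.2
t = 1.0
e_now = math.sin(t)
de_dt = math.cos(t)                    # exact derivative of sin(t)
predicted = e_now + Td * de_dt         # first-order Taylor prediction
actual = math.sin(t + Td)
print(round(predicted, 4), round(actual, 4))
```

The residual is of order Td²/2, which is why the anticipation works well only for small look-ahead times.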
1.5 Structures of PID Controllers

The combination of the proportional, integral, and derivative actions can be done in different ways. In the so-called ideal or non-interacting form, the PID controller is described by the following transfer function:

$$C_i(s) = K_p\left(1 + \frac{1}{T_i s} + T_d s\right), \qquad (1.13)$$

where Kp is the proportional gain, Ti is the integral time constant, and Td is the derivative time constant. An alternative representation is the series or interacting form (primed parameters denote series-form values):

$$C_s(s) = K_p'\left(1 + \frac{1}{T_i' s}\right)(T_d' s + 1) = K_p'\,\frac{T_i' s + 1}{T_i' s}\,(T_d' s + 1), \qquad (1.14)$$

where the fact that a modification of the value of the derivative time constant Td' also affects the integral action justifies the nomenclature adopted.
It has to be noted that a PID controller in series form can always be represented in ideal form by applying the following formulae:

$$K_p = K_p'\,\frac{T_i' + T_d'}{T_i'}, \qquad T_i = T_i' + T_d', \qquad T_d = \frac{T_i' T_d'}{T_i' + T_d'}. \qquad (1.15)$$

Conversely, it is not always possible to convert a PID controller in ideal form into a PID controller in series form. This can be done only if

$$T_i \ge 4 T_d \qquad (1.16)$$

through the following formulae:

$$K_p' = \frac{K_p}{2}\left(1 + \sqrt{1 - 4\frac{T_d}{T_i}}\right), \qquad T_i' = \frac{T_i}{2}\left(1 + \sqrt{1 - 4\frac{T_d}{T_i}}\right), \qquad T_d' = \frac{T_i}{2}\left(1 - \sqrt{1 - 4\frac{T_d}{T_i}}\right). \qquad (1.17)$$
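The two conversions can be sketched directly from (1.15) and (1.16)–(1.17); the function names below are illustrative.

```python
import math

# Sketch of the series <-> ideal conversions of Eqs. (1.15) and
# (1.16)-(1.17). Suffix _s denotes series-form (primed) parameters.
def series_to_ideal(kp_s, ti_s, td_s):
    """Always possible, Eq. (1.15)."""
    kp = kp_s * (ti_s + td_s) / ti_s
    ti = ti_s + td_s
    td = ti_s * td_s / (ti_s + td_s)
    return kp, ti, td

def ideal_to_series(kp, ti, td):
    """Possible only if Ti >= 4*Td, Eqs. (1.16)-(1.17)."""
    if ti < 4 * td:
        raise ValueError("ideal form has complex conjugate zeros: Ti < 4*Td")
    root = math.sqrt(1 - 4 * td / ti)
    kp_s = kp * (1 + root) / 2
    ti_s = ti * (1 + root) / 2
    td_s = ti * (1 - root) / 2
    return kp_s, ti_s, td_s

# Round trip: series -> ideal -> series recovers the original values.
kp, ti, td = series_to_ideal(1.0, 1.0, 0.1)
print([round(v, 4) for v in ideal_to_series(kp, ti, td)])  # → [1.0, 1.0, 0.1]
```

The guard clause reflects the discussion that follows: an ideal controller with complex conjugate zeros has no series equivalent.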
It is worth noting that a PID controller has two zeros, a pole at the origin and a gain (the fact that the transfer function is not proper will be discussed in Section 1.6). When Ti = 4Td the resulting zeros of Ci(s) are coincident, while when Ti < 4Td they are complex conjugate. Thus, the ideal form is more general than the series form, since it allows the implementation of complex conjugate zeros.
The reason for preferring the series form to the ideal form is that the series form was the first to be implemented in the last century with pneumatic technology. Many manufacturers then chose to retain the know-how and to avoid changing the form of the PID controller. Further, it is sometimes claimed that a PID controller in series form is easier to tune.
Another way to implement a PID controller is the parallel form¹, i.e.,

$$C_p(s) = K_p + \frac{K_i}{s} + K_d s. \qquad (1.18)$$

In this case the three actions are completely separated. Actually, the parallel form is the most general of the three, as it allows the integral action to be switched off exactly by setting Ki = 0 (in the other cases the value of the integral time constant would have to tend to infinity). The conversion between the parameters of the parallel PID controller and those of the ideal one can be done trivially by means of the following formulae:

$$K_i = \frac{K_p}{T_i}, \qquad K_d = K_p T_d. \qquad (1.19)$$

1.6 Modifications of the Basic PID Control Law

The expressions (1.13), (1.14) and (1.18) of a PID controller given in the previous section are actually not adopted in practical cases because of a few problems, which can be solved with suitable modifications of the basic control law. These are analysed in this section.

1.6.1 Problems with the Derivative Action

From expressions (1.13), (1.14) and (1.18) it appears that the controller transfer function is not proper and therefore it cannot be implemented in practice.

¹ Actually, the term parallel PID controller is often adopted also for expression (1.13) (see for example (Tan et al., 1999; Seborg et al., 2004)). However, here the nomenclature of (Åström and Hägglund, 1995; Ang et al., 2005) is preferred for the sake of clarity and in order to better distinguish the three considered forms.
This problem is evidently caused by the derivative action. Indeed, the high-frequency gain of the pure derivative action is responsible for the amplification of the measurement noise in the manipulated variable. Consider for example a sinusoidal signal

$$n(t) = A \sin(\omega t)$$

which represents measurement noise in the control scheme of Figure 1.2. If the derivative action alone is considered, the control variable term due to this measurement noise is

$$u(t) = A K_d\,\omega \cos(\omega t).$$

It can easily be seen that the amplification effect is more evident when the frequency of the noise is high. In practical cases, a (very) noisy control variable signal might damage the actuator. The problems outlined above can be solved by filtering the derivative action with (at least) a first-order low-pass filter. The filter time constant should be selected so as to filter the noise suitably without significantly influencing the dominant dynamics of the PID controller.
In this context, the PID control laws (1.13), (1.14) and (1.18) are usually modified as follows. The ideal form becomes

$$C_{i1a}(s) = K_p\left(1 + \frac{1}{T_i s} + \frac{T_d s}{\frac{T_d}{N}s + 1}\right), \qquad (1.20)$$

or, alternatively (Gerry and Shinskey, 2005),

$$C_{i1b}(s) = K_p\left(1 + \frac{1}{T_i s} + \frac{T_d s}{1 + \frac{T_d}{N}s + 0.5\left(\frac{T_d}{N}s\right)^2}\right). \qquad (1.21)$$

The series form becomes

$$C_s(s) = K_p'\left(1 + \frac{1}{T_i' s}\right)\frac{T_d' s + 1}{\frac{T_d'}{N}s + 1} = K_p'\,\frac{T_i' s + 1}{T_i' s}\cdot\frac{T_d' s + 1}{\frac{T_d'}{N}s + 1}, \qquad (1.22)$$

where N generally assumes a value between 1 and 33, although in the majority of practical cases its setting falls between 8 and 16 (Ang et al., 2005). The expression for the parallel form can be derived straightforwardly as well. It is worth noting that an alternative for the ideal form is to filter the overall control variable, i.e., to use the following controller:

$$C_{i2a}(s) = K_p\left(1 + \frac{1}{T_i s} + T_d s\right)\frac{1}{T_f s + 1}, \qquad (1.23)$$

or, alternatively (Åström and Hägglund, 2004),

$$C_{i2b}(s) = K_p\left(1 + \frac{1}{T_i s} + T_d s\right)\frac{1}{(T_f s + 1)^2}. \qquad (1.24)$$

The block diagrams of the most frequently adopted controllers are shown in Figures 1.6–1.8. Note that if the PI part of a series controller is in the automatic reset configuration, the corresponding series PID controller is that reported in Figure 1.9.

Fig. 1.6. Block diagram of a PID controller in ideal form.

While these modifications are those usually found in the literature (see for example (Luyben, 2001a)), it has to be stressed that the filter to be adopted is a critical design issue, and this aspect will therefore be thoroughly analysed in Chapter 2.
Another issue related to the derivative action that has to be considered is the so-called derivative kick. In fact, when an abrupt (stepwise) change of the set-point signal occurs, the derivative action is very large and this results in a spike in the control variable signal, which is undesirable. A simple solution to avoid this problem is to apply the derivative term to the process output only, instead of to the control error. In this case the ideal (unfiltered) derivative action becomes

$$u(t) = -K_d\,\frac{dy(t)}{dt}. \qquad (1.25)$$

It is worth noting that when the set-point signal is constant, applying the derivative term to the control error or to the process variable is equivalent. Thus, the load disturbance rejection performance is the same in the two cases.
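The noise amplification argument above can be verified numerically: the gain of the pure derivative term grows without bound with the noise frequency, while the filtered term of (1.20) levels off. The parameter values below are assumptions.

```python
import numpy as np

# Frequency-response check of the derivative filtering discussed above:
# the pure term Kd*s grows linearly with the noise frequency, while the
# filtered term Kd*s/((Td/N)s + 1) of (1.20) saturates at Kd*N/Td.
# All parameter values are assumed.
Kp, Td, N = 1.0, 1.0, 10.0
Kd = Kp * Td
w = np.array([1.0, 10.0, 100.0, 1000.0])     # noise frequencies (rad/s)

pure = np.abs(Kd * 1j * w)                            # |Kd*jw|
filt = np.abs(Kd * 1j * w / ((Td / N) * 1j * w + 1))  # filtered term
print(pure)   # grows linearly with the frequency
print(filt)   # saturates near Kd*N/Td = 10
```

This is precisely why N bounds the high-frequency gain of the derivative channel.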
Fig. 1.7. Alternative block diagram of a PID controller in ideal form.
Fig. 1.8. Block diagram of a PID controller in series form.
Fig. 1.9. Block diagram of a PID controller in series form with the PI part in automatic reset configuration.

1.6.2 Set-point Weighting

A typical problem in the design of a feedback controller is to achieve, at the same time, a high performance both in the set-point following and in the load disturbance rejection task. Roughly speaking, a fast load disturbance rejection is achieved with a high-gain controller, which, on the other hand, gives an oscillatory set-point step response. This problem can be approached by designing a two-degree-of-freedom control architecture, namely, a combined feedforward/feedback control law.
In the context of PID control this can be achieved by weighting the set-point signal for the proportional action, that is, by defining the proportional action as follows:

$$u(t) = K_p\,(\beta r(t) - y(t)), \qquad (1.26)$$

where the value of β is between 0 and 1.
In this way, the control scheme represented in Figure 1.10 is actually implemented, where

$$C(s) = K_p\left(1 + \frac{1}{T_i s} + T_d s\right) \qquad (1.27)$$

and

$$C_{sp}(s) = K_p\left(\beta + \frac{1}{T_i s} + T_d s\right) \qquad (1.28)$$

(the filter of the derivative action has not been considered for the sake of simplicity). It appears that the load disturbance rejection task is decoupled from the set-point following one and obviously does not depend on the weight β. Thus, the PID parameters can be selected to achieve a high load disturbance rejection performance, and the set-point following performance can then be recovered by suitably selecting the value of the parameter β. An equivalent control scheme is shown in Figure 1.11, where

$$F(s) = \frac{1 + \beta T_i s + T_i T_d s^2}{1 + T_i s + T_i T_d s^2}. \qquad (1.29)$$

Here it is more apparent that the function of the set-point weight is to smooth the (step) set-point signal in order to damp the response to a set-point change. Note also that if β = 0 the proportional kick is avoided. Indeed, many industrial controllers implement this solution (Åström and Hägglund, 1995, page 110).

Fig. 1.10. Two-degree-of-freedom PID control scheme.
Fig. 1.11. Equivalent two-degree-of-freedom PID control scheme.

The use of set-point weighting and of other feedforward control strategies for the improvement of performance will be analysed thoroughly in Chapters 4 and 5.
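The effect of β can be sketched with a short simulation; the second-order process and the PI tuning below are assumptions chosen so that the unweighted loop overshoots visibly.

```python
# Sketch of set-point weighting (1.26) on an assumed process
# P(s) = 1/(s+1)^2 under PI control (Kp = 4, Ti = 1): reducing beta
# softens the proportional kick and the overshoot of the set-point
# step response, while the feedback path (load disturbance rejection)
# is unchanged. All values are illustrative.
def step_peak(beta, Kp=4.0, Ti=1.0, dt=0.001, t_end=20.0):
    x1 = x2 = integral = 0.0     # x1: intermediate state, x2 = y
    peak = 0.0
    for _ in range(int(t_end / dt)):
        r = 1.0
        integral += (r - x2) * dt
        u = Kp * (beta * r - x2) + (Kp / Ti) * integral  # weighted P + I
        x1 += dt * (-x1 + u)     # first lag
        x2 += dt * (-x2 + x1)    # second lag: process output y
        peak = max(peak, x2)
    return peak

print(round(step_peak(beta=1.0), 3), round(step_peak(beta=0.0), 3))
```

With β = 1 the response peaks well above the set-point; with β = 0 the peak is much smaller, at the cost of a slower rise.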
1.6.3 General ISA–PID Control Law

If all the modifications of the basic control law previously addressed are considered, the following general PID control law can be derived:

$$u(t) = K_p\left(\beta r(t) - y(t) + \frac{1}{T_i}\int_0^t e(\tau)\,d\tau + T_d\,\frac{d(\gamma r(t) - y_f(t))}{dt}\right), \qquad \frac{T_d}{N}\,\frac{dy_f(t)}{dt} = y(t) - y_f(t), \qquad (1.30)$$

where, in general, 0 ≤ β ≤ 1 and 0 ≤ γ ≤ 1, although the value of γ is usually either 0 (the derivative action is entirely applied to the process output) or 1 (the derivative action is entirely applied to the control error), as explained in Section 1.6.1.
This is usually called a PID controller in ISA form or, alternatively, a beta–gamma controller. Often, if β = 1 and γ = 0 the controller is indicated as PI–D, while if β = 0 and γ = 0 it is indicated as I–PD. The block diagram corresponding to an ISA–PID controller is the same as in Figure 1.11, where in this case

$$C(s) = C_{i1a}(s) = K_p\left(1 + \frac{1}{T_i s} + \frac{T_d s}{\frac{T_d}{N}s + 1}\right) \qquad (1.31)$$

and

$$F(s) = \frac{1 + \left(\beta T_i + \frac{T_d}{N}\right)s + T_i T_d\left(\gamma + \frac{\beta}{N}\right)s^2}{1 + \left(T_i + \frac{T_d}{N}\right)s + T_i T_d\left(1 + \frac{1}{N}\right)s^2}. \qquad (1.32)$$

1.7 Digital Implementation

If a digital implementation of the PID controller is adopted, the previously considered control laws have to be discretised. This can be done with any of the available discretisation methods (Åström and Wittenmark, 1997). For the sake of clarity and for future reference (see Chapter 8), an example is shown hereafter. Consider the continuous-time expression of a PID controller in ideal form:

$$u(t) = K_p\left(e(t) + \frac{1}{T_i}\int_0^t e(\tau)\,d\tau + T_d\,\frac{de(t)}{dt}\right), \qquad (1.33)$$

and define a sampling time ∆t. The integral term in (1.33) can be approximated by using backward finite differences as
$$\int_0^{t_k} e(\tau)\,d\tau = \sum_{i=1}^{k} e(t_i)\,\Delta t, \qquad (1.34)$$

where e(t_i) is the error of the continuous-time system at the i-th sampling instant. By applying the backward finite differences also to the derivative term, it results that

$$\frac{de(t_k)}{dt} = \frac{e(t_k) - e(t_{k-1})}{\Delta t}. \qquad (1.35)$$

Then, the discrete-time control law becomes

$$u(t_k) = K_p\left(e(t_k) + \frac{\Delta t}{T_i}\sum_{i=1}^{k} e(t_i) + \frac{T_d}{\Delta t}\,\big(e(t_k) - e(t_{k-1})\big)\right). \qquad (1.36)$$

In this way, the value of the control variable is determined directly. Alternatively, the control variable at time instant t_k can be calculated on the basis of its value at the previous time instant u(t_{k−1}). By subtracting the expression of u(t_{k−1}) from that of u(t_k), we obtain

$$u(t_k) = u(t_{k-1}) + K_p\left(1 + \frac{\Delta t}{T_i} + \frac{T_d}{\Delta t}\right)e(t_k) + K_p\left(-1 - \frac{2T_d}{\Delta t}\right)e(t_{k-1}) + K_p\,\frac{T_d}{\Delta t}\,e(t_{k-2}). \qquad (1.37)$$

For an obvious reason, the control algorithm (1.37) is called the incremental algorithm or velocity algorithm, while that expressed by (1.36) is called the positional algorithm.
Expression (1.37) can be rewritten more compactly as

$$u(t_k) - u(t_{k-1}) = K_1 e(t_k) + K_2 e(t_{k-1}) + K_3 e(t_{k-2}), \qquad (1.38)$$

where

$$K_1 = K_p\left(1 + \frac{\Delta t}{T_i} + \frac{T_d}{\Delta t}\right), \qquad K_2 = -K_p\left(1 + \frac{2T_d}{\Delta t}\right), \qquad K_3 = K_p\,\frac{T_d}{\Delta t}. \qquad (1.39)$$

By defining q^{-1} as the backward shift operator, i.e.,

$$q^{-1} u(t_k) = u(t_{k-1}), \qquad (1.40)$$

the discretised PID controller in velocity form can be expressed as
$$C(q^{-1}) = \frac{K_1 + K_2 q^{-1} + K_3 q^{-2}}{1 - q^{-1}}, \qquad (1.41)$$

where K1, K2 and K3 can be viewed as the tuning parameters.

1.8 Choice of the Controller Type

For a given control task it is obviously not necessary to adopt all three actions. Thus, the choice of the controller type is an integral part of the overall controller design, taking into account that the final aim is to obtain the best cost/benefit ratio, and therefore the simplest controller capable of providing satisfactory performance should be preferred.
In this context it is worth briefly analysing some guidelines on how the controller type (P, PI, PD, PID) should be selected. As already mentioned, a P controller has the disadvantage, in general, of giving a nonzero steady-state error. However, in control tasks where this is not of concern, such as for example in surge tank level control or in the inner (secondary) loops of cascade control architectures, where the zero steady-state error is ensured by the integral action adopted in the outer (primary) controller (see Chapter 9), a P controller can be the best choice, as it is simple to design (indeed, if the process has low-order dynamics, the proportional gain can be set to a high value in order to provide a fast response and a low steady-state error). Further, if an integral component is present in the system to be controlled (such as in mechanical servosystems, or in surge vessels where the manipulated variable is the difference between inflow and outflow) and no load disturbances are likely to occur, then there is no need for an integral action in the controller to provide a zero steady-state control error. In this case the control performance can usually be improved by adding a derivative action, i.e., by adopting a PD controller.
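Returning briefly to the digital implementation of Section 1.7, the equivalence of the positional algorithm (1.36) and the velocity algorithm (1.38)–(1.39) can be verified in a short sketch; the parameter values and the error sequence are assumptions.

```python
# Sketch comparing the positional algorithm (1.36) with the velocity
# algorithm (1.38)-(1.39) of Section 1.7 on an arbitrary error
# sequence: both must produce the same control variable. All values
# are assumed; the two leading zeros stand for zero initial error.
Kp, Ti, Td, dt = 2.0, 5.0, 0.5, 0.1

K1 = Kp * (1 + dt / Ti + Td / dt)     # Eq. (1.39)
K2 = -Kp * (1 + 2 * Td / dt)
K3 = Kp * Td / dt

errors = [0.0, 0.0, 1.0, 0.8, 0.5, 0.3, 0.1]   # e(t_k), arbitrary

# Positional algorithm (1.36): recompute the full sum every step.
u_pos = []
for k in range(2, len(errors)):
    total = sum(errors[1:k + 1])              # sum_{i=1}^{k} e(t_i)
    u = Kp * (errors[k] + (dt / Ti) * total
              + (Td / dt) * (errors[k] - errors[k - 1]))
    u_pos.append(u)

# Velocity algorithm (1.38): only the increment is computed.
u_vel, u = [], 0.0
for k in range(2, len(errors)):
    u += K1 * errors[k] + K2 * errors[k - 1] + K3 * errors[k - 2]
    u_vel.append(u)

print(all(abs(a - b) < 1e-9 for a, b in zip(u_pos, u_vel)))  # → True
```

The velocity form needs no running sum, which is one reason it is popular in embedded implementations.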
In fact, the derivative action provides a phase lead that allows the bandwidth of the system to be increased and therefore the response to a set-point change to be sped up.
If zero steady-state error is an essential control requirement, then the simplest choice is to use a PI controller. Actually, a PI controller is capable of providing an acceptable performance for the vast majority of process control tasks (especially if the dominant process dynamics is of first order) and it is indeed the most widely adopted controller in the industrial context. This is also due to the problems associated with the derivative action, namely the need to properly filter the measurement noise and the difficulty in selecting an appropriate value of the derivative time constant.
In any case, the use of the derivative action, that is, of a PID controller, very often provides the potential for significantly improving the performance. For example, if the process has second-order dominant dynamics, the zero introduced in the controller by the derivative action can be adopted to cancel the fastest pole of the process transfer function (see, for example, (Skogestad, 2003)). However, it is also often claimed that if the process has a significant (apparent) dead time, then the derivative action should be switched off. Actually, the usefulness of the derivative action has been the subject of some investigation (Åström and Hägglund, 2000b). Recent contributions to the literature have shown that the performance improvement given by the use of the derivative action decreases as the ratio between the apparent dead time and the effective time constant increases, but it can be very beneficial if this ratio is not too high (about two) (Åström and Hägglund, 2004; Kristiansson and Lennartson, 2006).
Finally, it is worth noting that for processes affected by a large dead time (with respect to the dominant time constant), the use of a dead-time compensator controller, such as a Smith predictor based scheme (Palmor, 1996) or the so-called PID-deadtime controller (where the time-delay compensation is added to the integral feedback loop of the PID controller in automatic reset configuration) (Shinskey, 1994), can be essential in obtaining a satisfactory control performance (Ingimundarson and Hägglund, 2002).

1.9 The Tuning Issue

The selection of the PID parameters, i.e., the tuning of the PID controllers, is obviously the crucial issue in the overall controller design. This operation should be performed in accordance with the control specifications. Usually, as already mentioned, these are related either to the set-point following or to the load disturbance rejection task, but in some cases both of them are of primary importance. The control effort is also generally of main concern, as it is related to the final cost of the product and to the wear and life-span of the actuator. It should therefore be kept at a minimum level. Further, the robustness issue has to be taken into account.
A major advantage of the PID controller is that its parameters have a clear physical meaning.
Indeed, increasing the proportional gain leads to an increase of the bandwidth of the system, and therefore a faster but more oscillatory response should be expected. Conversely, increasing the integral time constant (i.e., decreasing the effect of the integral action) leads to a slower response but to a more stable system. Finally, increasing the derivative time constant gives a damping effect, although much care should be taken to avoid increasing it too much, as the opposite effect occurs in that case and an unstable system could eventually result.
The problem associated with tuning the derivative action can be better understood with the following analysis (Ang et al., 2005). Suppose that the process to be controlled is described by a general FOPDT transfer function

$$P(s) = \frac{K}{Ts + 1}\,e^{-Ls}. \qquad (1.42)$$

Suppose also that an ideal PD controller is adopted, i.e.,

$$C(s) = K_p\,(1 + T_d s). \qquad (1.43)$$

The gain of the open-loop transfer function is determined as

$$|C(j\omega)P(j\omega)| = K K_p \sqrt{\frac{1 + T_d^2\omega^2}{1 + T^2\omega^2}} \ge K K_p \min\left(1, \frac{T_d}{T}\right), \qquad (1.44)$$

where the inequality is justified by the fact that (1 + Td²ω²)/(1 + T²ω²) is monotonic in ω. It can easily be determined that if Td ≤ T and Td ≥ T/(KKp), or if Td ≥ T and KKp ≥ 1, then the crossover frequency ωc is at infinity, i.e., the magnitude of the open-loop transfer function is never less than 0 dB. As a consequence, since the phase decreases without bound as the frequency increases because of the time delay, the closed-loop system will be unstable.
To illustrate this fact, consider an example where the process (1.42) with K = 2, T = 1 and L = 0.2 is controlled by a PID controller in series form (1.14) with Kp' = 1 and Ti' = 1. Then, if Td' = 0.01 is selected, the gain margin turns out to be 12.3 dB and the phase margin 68.2 deg. Increasing the derivative time constant to Td' = 0.05 increases the gain margin and the phase margin to 13.2 dB and 72.7 deg, respectively. Thus, in this case, increasing the derivative action yields a more sluggish response and a more robust system. However, if the derivative time constant is raised to 0.5, stability is lost.
The aforementioned concepts allow the operator to tune the controller manually in a relatively easy way, although the trial-and-error procedure can be very time consuming, and the final result may be far from the optimum and depends heavily on the operator's skill.
In order to help the operator tune the controller correctly and with small effort, starting with the well-known Ziegler–Nichols formulae (Ziegler and Nichols, 1942), a large number of tuning rules have been devised over the last sixty years (Åström and Hägglund, 1995; O'Dwyer, 2006). They try to address the possible different control requirements and they are generally based on a simple model of the plant.
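The margins quoted in the series-form PD example above can be reproduced with a simple frequency sweep; the grid bounds and resolution below are assumptions.

```python
import numpy as np

# Numerical check of the stability margins quoted in the example
# above: P(s) = 2*e^{-0.2s}/(s+1) with the series controller
# C(s) = (1 + 1/s)(Td*s + 1), i.e. Kp' = 1, Ti' = 1. The frequency
# grid is an assumption.
def margins(Td):
    w = np.logspace(-2, 3, 500_000)
    s = 1j * w
    L = (1 + 1 / s) * (Td * s + 1) * 2 * np.exp(-0.2 * s) / (s + 1)
    mag = np.abs(L)
    ph = np.unwrap(np.angle(L))                 # continuous phase (rad)
    # Phase margin: 180 deg plus the phase at the gain crossover |L| = 1
    i = np.argmin(np.abs(mag - 1.0))
    pm = 180.0 + np.degrees(ph[i])
    # Gain margin: -|L| in dB at the phase crossover (-180 deg)
    j = np.argmin(np.abs(ph + np.pi))
    gm = -20 * np.log10(mag[j])
    return gm, pm

for Td in (0.01, 0.05):
    gm, pm = margins(Td)
    print(Td, round(gm, 1), round(pm, 1))
```

For Td = 0.5 the sweep finds no gain crossover at all, since |L| never drops below 1, which is the instability mechanism predicted by (1.44).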
These rules have been derived empirically or analytically. The operator therefore has to obtain a suitable model of the plant and to select the most convenient tuning rule with respect to the given control requirements. It has to be noted that the obtained PID parameters (that is, the selected tuning rule) have to be appropriate for the adopted controller structure (ideal, series, etc.); otherwise they have to be converted (see expressions (1.15), (1.17) and (1.19)).
Finally, it is worth highlighting that many software packages have been developed, and are available on the market, which assist practitioners in designing the overall controller, namely, in identifying an accurate process model based on available data, tuning the controller according to the given requirements, performing a what-if analysis, and so on. A review of them can be found in (Ang et al., 2005).
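As a flavour of such rules, the classical Ziegler–Nichols closed-loop (ultimate-cycle) formulae map the ultimate gain Ku and ultimate period Tu of the plant to the parameters of an ideal controller; the entries below are the standard published table values, while the function name and example data are illustrative.

```python
# Sketch of the classical Ziegler-Nichols closed-loop tuning rules,
# mapping the ultimate gain Ku and ultimate period Tu to the
# parameters (Kp, Ti, Td) of an ideal controller (standard table
# values; Ti = inf means no integral action).
def zn_ultimate(Ku, Tu, kind="PID"):
    rules = {
        "P":   (0.5 * Ku,  float("inf"), 0.0),
        "PI":  (0.45 * Ku, Tu / 1.2,     0.0),
        "PID": (0.6 * Ku,  0.5 * Tu,     0.125 * Tu),
    }
    return rules[kind]

# Example with assumed ultimate data Ku = 4, Tu = 2:
print(zn_ultimate(4.0, 2.0))   # → (2.4, 1.0, 0.25)
```

Note that these values are for the ideal form (1.13); for a series or parallel implementation they would first have to be converted with (1.17) or (1.19).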
1.10 Automatic Tuning

The functionality of automatically identifying the process model and tuning the controller on the basis of that model is called automatic tuning (or, simply, auto-tuning). In particular, an identification experiment is performed after an explicit request of the operator, and the values of the PID parameters are updated at the end of it (for this reason the overall procedure is also called one-shot automatic tuning or tuning-on-demand). The design of an automatic tuning procedure involves many critical issues, such as the choice of the identification procedure (usually based on an open-loop step response or on a relay feedback experiment (Yu, 1999)), of the a priori selected (parametric or nonparametric) process model, and of the tuning rule. An excellent presentation of this topic can be found in (Leva et al., 2001).
The one-shot automatic tuning functionality is available in practically all the single-station controllers on the market. Advanced (more expensive) control units might provide a self-tuning functionality, where the identification procedure is performed continuously during routine process operation in order to track possible changes of the system dynamics, and the PID parameter values are modified adaptively. In this case all the issues related to adaptive control have to be taken into account (Åström and Wittenmark, 1995).

1.11 Conclusions and References

In this chapter the fundamental concepts of PID controllers have been introduced. The main practical problems connected with their use have been outlined and the most frequently adopted controller structures have been presented.
In the following chapters, the different aspects that have been considered will be developed further.
Basic concepts of PID controllers can be found in almost every book on process control (see for example (Shinskey, 1994; Ogunnaike and Ray, 1994; Luyben and Luyben, 1997; Marlin, 2000; Corripio, 2001; Bequette, 2003; Seborg et al., 2004; Corriou, 2004; Ellis, 2004; Altmann, 2005)). For a detailed treatment, see (Åström and Hägglund, 1995) and (Åström and Hägglund, 2006), where all the methodological as well as technological aspects are covered. An excellent collection of tuning rules can be found in (O'Dwyer, 2006). Recent advances are presented in (Tan et al., 1999).