Analysis of intelligent system design by neuro adaptive control




International Journal of Advanced Research in Engineering and Technology (IJARET), ISSN 0976 – 6480 (Print), ISSN 0976 – 6499 (Online), Volume 2, Number 1, Jan - Feb (2011), pp. 01-11, © IAEME

ANALYSIS OF INTELLIGENT SYSTEM DESIGN BY NEURO ADAPTIVE CONTROL

Dr. Manish Doshi
Hemchandracharya North Gujarat University, Patan
E-Mail: manishdos@gmail.com

ABSTRACT

The design of intelligent systems is crucial, and simple mathematical modeling is not adequate for their analysis. This paper therefore presents an analysis of intelligent system design by neuro-adaptive control. Several methods are considered, namely neural networks for identification, the series-parallel model, supervised control, and inverse control, and finally neuro-fuzzy adaptive control is applied to the design of intelligent systems.

Keywords: adaptive control, series-parallel model, inverse control

I. INTRODUCTION

The adaptive control techniques described above assume the availability of an explicit model of the system dynamics (as in the gain scheduling technique), or at least a dynamic structure based on a linear experimental model determined through identification (as in STR and MRAC). This may not be the case for a large class of complex nonlinear systems characterized by poorly known dynamics and time-varying parameters that may operate in ill-defined environments. Moreover, conventional adaptive control techniques lack the important feature of learning: an adaptive control scheme cannot use the knowledge it has acquired in the past to tackle similar situations in the present or in the future.
In other words, while adaptive control techniques have been used effectively for controlling a large class of systems with predefined structure and slowly time-varying parameters, they nevertheless lack the ability to learn and the ability to tackle the global control issues of nonlinear systems. Assuming a linear process structure is not always possible, and designers have to deal with the inherent nonlinear aspects of the system dynamics [1].
As a potential remedy to some of these issues, designers have devised new control approaches that avoid the explicit mathematical modeling of dynamic processes; some researchers have termed these intelligent control approaches. Some of them, such as those based on fuzzy logic theory, permit an explicit modeling of the system. Other approaches use well-defined learning algorithms through which the system is implicitly modeled and autonomously adapts its controller parameters so as to accommodate unpredictable changes in the system's dynamics. Here the dynamical structure is not constrained to be linear, as is the case for most conventional adaptive control techniques. The study of these approaches has constituted a resurgent area of research over the last several years, and several important results have been obtained [2].

The family of controllers using fuzzy logic theory is designed on the premise of taking full advantage of the (linguistic) knowledge gained by designers from past experience. We tackle here another class of intelligent controllers, based on neural modeling and learning. They are built using algorithms that allow the system to learn for itself from a set of collected training patterns, and they have the distinctive feature of learning and adjusting their parameters in response to unpredictable changes in the dynamics or operating environment of the system.
Their capability for dealing with nonlinearities, for executing parallel computing tasks, and for tolerating a relatively large class of noises makes them powerful tools for tackling the identification and control of systems characterized by highly nonlinear behavior, time-varying parameters, and possible operation within an unpredictable environment [3].

Given that we deal here with dynamical models involving the states of the model at different time steps, it is only natural to design a specialized structure of neural networks with the capability of "memorizing" earlier states of the system and of accommodating feedback signals. This is in contrast with the conventional neural networks (based on BPL) used mostly for classification or function approximation. In fact, despite their proven capabilities as universal approximators, a limitation of standard feedforward neural networks using backpropagation as the learning mechanism is that they exclusively learn static input–output mappings. While adept at generalizing in pattern-matching problems in which the time dimension is not significant, they cannot strictly model systems with non-stationary dynamics. This is the case for dynamic systems whose representation is made through time-dependent states [4].

One way of addressing this problem is to make the network dynamic, that is, to provide it with memory and feedback. A particular class of recurrent structure is the so-called recurrent time-delay neural network. One way of accomplishing this is to incorporate feedback connections from the output of the network to its input layer and to include time delays in the NN structure through its connections. As there are propagation delays in natural neurobiological systems, the addition of these time delays stems from theoretical as well as practical motivations. Time-Delay Neural Networks (TDNNs) accomplish this by replicating the network across time. One can envision this structure as a series of copies of the same network, each temporally displaced from the previous one by one discrete time unit, similar to the BPTT described earlier. The resultant structures can be quite large, incorporating many connections, and novel learning algorithms are employed to ensure rapid training of the network [5], [6].

II. PROBLEM FORMULATION

To illustrate the idea, let us presume that the input–output behavior of a nonlinear dynamic system is represented by the following equation:

y(k + 1) = f[y(k), . . . , y(k − n); u(k), . . . , u(k − n)]

where y(k + 1) represents the output of the network at time (k + 1), and y(k), y(k − 1), . . . , y(k − n) are the delayed output states serving as part of the network input, along with the input signal u(k) and its delayed components u(k − 1), . . . , u(k − n).
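To make this representation concrete, the difference equation above can be simulated to generate the training patterns a neural identifier would learn from. The specific function f and all of its coefficients below are invented for illustration; they are not taken from the paper:

```python
import numpy as np

# Illustrative nonlinear plant with n = 2, standing in for the general form
# y(k+1) = f[y(k), ..., y(k-n); u(k), ..., u(k-n)].
def f(y_hist, u_hist):
    # y_hist = [y(k), y(k-1), y(k-2)], u_hist = [u(k), u(k-1), u(k-2)]
    return (0.3 * y_hist[0] + 0.2 * np.tanh(y_hist[1]) - 0.1 * y_hist[2]
            + 0.5 * u_hist[0] + 0.1 * u_hist[1] * u_hist[2])

n, N = 2, 200
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, N)          # excitation signal
y = np.zeros(N + 1)
for k in range(n, N):
    y[k + 1] = f(y[k - np.arange(n + 1)], u[k - np.arange(n + 1)])

# Training patterns for a neural identifier: delayed regressor -> next output
X = np.array([np.concatenate([y[k - np.arange(n + 1)], u[k - np.arange(n + 1)]])
              for k in range(n, N)])
T = y[n + 1:N + 1]
print(X.shape, T.shape)   # (198, 6) (198,)
```

Each row of X is exactly the bracketed argument list of the equation above; a network trained on (X, T) pairs approximates f.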
The network representation of such systems is depicted in Figure 1. Notice that this network is recurrent (it carries feedback signals) and has time-delayed inputs. While several algorithms have been proposed in the literature for training time-delayed recurrent networks, the dynamic backpropagation algorithm has been among the standard techniques used for this purpose.
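A minimal sketch of such a recurrent time-delay network is given below: tapped delay lines hold past inputs together with the network's own fed-back outputs. The weights are random and untrained, and all layer sizes are illustrative assumptions; the point is only the signal flow:

```python
import numpy as np

rng = np.random.default_rng(1)
n_delays = 2
in_dim = (n_delays + 1) * 2           # delayed u's plus delayed yhat's
W1 = rng.normal(0, 0.5, (8, in_dim))  # hidden-layer weights (hypothetical size)
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8)            # output-layer weights

def step(u_taps, y_taps):
    z = np.concatenate([u_taps, y_taps])
    h = np.tanh(W1 @ z + b1)
    return W2 @ h                     # scalar network output yhat

u = rng.uniform(-1, 1, 50)
u_taps = np.zeros(n_delays + 1)       # [u(k), u(k-1), u(k-2)]
y_taps = np.zeros(n_delays + 1)       # fed-back outputs [yhat(k), ...]
out = []
for k in range(50):
    u_taps = np.concatenate([[u[k]], u_taps[:-1]])   # shift input delay line
    yhat = step(u_taps, y_taps)
    y_taps = np.concatenate([[yhat], y_taps[:-1]])   # feedback with delay
    out.append(yhat)
print(len(out))   # 50
```

Training such a structure is what dynamic backpropagation addresses, since each output depends on earlier outputs through the feedback taps.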
Figure 1 Time-delayed recurrent neural network

2.1 Neural Networks for Identification

As universal approximators, neural networks have been used in recent years as identifiers for a relatively wide range of complex dynamic systems (linear and nonlinear). Their capability for processing tasks in parallel and for tolerating noisy versions of the input signals makes them excellent candidates for system identification. The process of system identification, as mentioned previously, aims to find a dynamical model that approximates the actual plant dynamics within a predefined degree of accuracy when both systems are excited with the same signals. This means that we require a minimization of the error e(k + 1) between the predicted output ŷp(k + 1) and the actual output yp(k + 1) of the system:

e(k + 1) = yp(k + 1) − ŷp(k + 1)

Since this error depends on the parameters of the network, the solution should provide the set of weights that minimizes it. Four major classes of models encompass a wide range of nonlinear input–output model representations:

Model 1: yp(k + 1) = Σ(i=0 to n−1) ai yp(k − i) + g[u(k), . . . , u(k − m + 1)]

Model 2: yp(k + 1) = f[yp(k), . . . , yp(k − n + 1)] + Σ(i=0 to m−1) bi u(k − i)

Model 3: yp(k + 1) = f[yp(k), . . . , yp(k − n + 1)] + g[u(k), . . . , u(k − m + 1)]

Model 4: yp(k + 1) = f[yp(k), . . . , yp(k − n + 1); u(k), . . . , u(k − m + 1)]

In each of the models, the pair (u(k), yp(k)) represents the input–output pair of the identified plant at sample k, with m ≤ n, and f and g are two smooth functions of their arguments. Among the four models, the fourth has been used most often, given its relevance to a wide majority of nonlinear dynamic systems. The choice of the appropriate learning algorithm for the neural identifier (dynamic or static) depends mostly on whether the network takes as its inputs a delayed version of its own output, or directly uses sampled outputs from the plant itself. The two schemes that have been used most frequently are the series-parallel scheme and the parallel scheme. To illustrate the ideas, if model 4 is taken as the system model of choice, the two corresponding schemes are given as follows.

2.2 Series-Parallel Model

The estimated future value of the output ŷp(k + 1) in this model is expressed as:

ŷp(k + 1) = NNI[yp(k), . . . , yp(k − n + 1); u(k), . . . , u(k − m + 1)]

where NNI stands for the mapping provided by the neural network identifier. This model is represented in Figure 2. A careful inspection of its structure shows that while it still requires a set of delayed signals for its inputs, it does not involve feedback from the network output, which is the case for the second model, known as the parallel model.

2.3 Parallel Model

The estimated future value of the output ŷp(k + 1) in the parallel model is expressed as:

ŷp(k + 1) = NNI[ŷp(k), . . . , ŷp(k − n + 1); u(k), . . . , u(k − m + 1)]

This model is illustrated in Figure 3 and uses the delayed recursions of the estimated output as some of its inputs. Notice that, given the particular structure of the series-parallel model, which does not involve recurrent states of the network output, the standard BPL can be used for extracting the parameters of the network. This is, however, not the case for the parallel structure, given that the model includes a feedback loop containing nonlinear elements.
As such, BPL cannot be applied in this case, and dynamic backpropagation would be the learning algorithm of choice.

A number of control schemes involving neural networks for identification and control have been proposed in recent years. The following categories are outlined next: supervised ("teacher" or model-based) control, inverse model-based control, and neuro-adaptive control.

Figure 2 Series-parallel scheme for neural identification

Figure 3 Parallel scheme for neural identification
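The behavioral difference between the schemes of Figures 2 and 3 can be seen in a small simulation. Below, a hypothetical first-order plant is paired with a deliberately imperfect closed-form stand-in NNI for a trained neural identifier; all coefficients are invented for illustration:

```python
import numpy as np

def plant(y, u):
    return 0.8 * y + 0.4 * np.tanh(u)

def NNI(y_in, u):                      # slightly mismatched identified model
    return 0.78 * y_in + 0.42 * np.tanh(u)

N = 100
rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, N)
yp = np.zeros(N + 1)
for k in range(N):
    yp[k + 1] = plant(yp[k], u[k])

y_sp = np.zeros(N + 1)    # series-parallel: driven by measured plant outputs
y_par = np.zeros(N + 1)   # parallel: driven by its own delayed estimates
for k in range(N):
    y_sp[k + 1] = NNI(yp[k], u[k])      # yhat(k+1) = NNI[yp(k); u(k)]
    y_par[k + 1] = NNI(y_par[k], u[k])  # yhat(k+1) = NNI[yhat(k); u(k)]

# series-parallel error stays at the one-step level;
# parallel error recirculates through the feedback loop
print(np.abs(yp - y_sp).max(), np.abs(yp - y_par).max())
```

Because the series-parallel scheme is re-anchored at every step by measured plant outputs, its error remains a one-step prediction error, whereas the parallel scheme lets its own prediction errors feed back into later predictions, which is why its training calls for dynamic backpropagation.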
III. NEURAL NETWORKS FOR CONTROL

3.1 Supervised control

The first category assumes that a controller is synthesized on the basis of the knowledge acquired from the operation of the plant by a skilled operator (as shown in Figure 4) or through well-tuned PID controllers. During the nominal operation of the plant (at a particular operating condition), experimental data are collected from sensor devices and later used as a training set for a neural network. Once the training is carried out adequately, the process operator becomes an upper-level supervisor without needing to be part of the control loop of the process. This is particularly useful in hazardous environments or for improving the automation level of the plant. The neural network could also play the role of a gain interpolator for a set of PID controllers placed within the control loop of the plant. Once trained, the neural network acts as a gain feeder for the PID controller, providing it with appropriate gains every time the operating conditions of the system change. Interpolating between the gains in this way is a good alternative to delivering them discretely, as in conventional gain scheduling. One of the main issues, however, is that the collected training data are usually corrupted with a large amount of noise and must be filtered before becoming useful as valid training data. This can make training the neural network an expensive and possibly time-consuming process. Another issue pertains to the difficulty of extracting the knowledge acquired by experts into a set of patterns that can be used for the network's training.

Figure 4 Neural network acting as a supervisory controller
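The gain-interpolation role can be sketched as follows. The operating points and "hand-tuned" PID gains below are invented, and an ordinary least-squares polynomial fit stands in for the trained neural network; the point is the smooth delivery of gains rather than discrete switching:

```python
import numpy as np

# Hypothetical scheduling data: PID gains tuned at a few operating points.
ops = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # operating condition
Kp  = np.array([2.0, 1.8, 1.5, 1.3, 1.2])     # tuned proportional gains
Ki  = np.array([0.50, 0.45, 0.40, 0.38, 0.35])

Phi = np.vander(ops, 3)                        # quadratic feature matrix
wp, *_ = np.linalg.lstsq(Phi, Kp, rcond=None)  # fit stands in for the network
wi, *_ = np.linalg.lstsq(Phi, Ki, rcond=None)

def gains(op):
    """Feed smoothly interpolated gains to the PID loop."""
    phi = np.vander(np.atleast_1d(op), 3)
    return (phi @ wp).item(), (phi @ wi).item()

kp, ki = gains(0.6)       # gains between the tuned operating points
print(round(kp, 3), round(ki, 3))
```

A trained network would play the same role for higher-dimensional operating conditions, where a simple polynomial fit no longer suffices.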
3.2 Inverse control

The second category in which neural networks can be implemented within the control loop is known as inverse control or inverse plant modeling. Here the designer seeks to build a neural network structure capable of mapping the input–output behavior of an inverse controller. The inverse controller is, by definition, a controller which, when applied to the plant, ideally leads to an overall transfer function of unity. A schematic representation of this control scheme is shown in Figure 5. Designing an inverse neural controller should always be done under the assumption that the process is minimum phase and causal. The main advantage of implementing an inverse controller in a neural structure is that it allows for faster execution and tolerance to a range of noises. The implementation of a neural network as an inverse controller has, however, been hindered by several difficulties, including the induction of unwanted time delays leading to possible discretization of processes that are originally continuous; this is mostly due to the effect of noncausality of the inverse controller. Moreover, the inverse controller cannot realistically match the real plant exactly, a fact that leads to unpredictable errors in the controller design.

Figure 5 Neural network as inverse model-based controller

IV. NEURO-ADAPTIVE CONTROL

The third category of neural controllers pertains to implementing two dynamical neural networks within an adaptive control architecture similar to the MRAC one. One of the networks serves as an identifier, while the second serves as a controller. A large amount of research work has been dedicated to controllers belonging to this category. Given the dynamic nature of the system being identified and controlled, recurrent time-delayed neural networks have been the tools of choice in this case. The main advantage of these neuro-adaptive structures pertains to their capability of effectively tackling the nonlinear behavior of systems without compromising on their representation through linear approximations (such as the ARMA, auto-regressive moving average, model). This has not always been possible with the conventional adaptive control schemes described in earlier sections. As in the identification section, the controller here is handled using another time-delay neural network structure whose output serves as input to the plant.
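A minimal numerical sketch of such an identifier/controller pair is given below. Linear-in-parameter models stand in for the two time-delay networks, the plant and reference model are invented first-order examples, and the learning rates are illustrative assumptions. The controller is updated from the error between the identified model's prediction and the reference model output, in the spirit of the scheme described here:

```python
import numpy as np

a, b = 0.7, 0.5                  # "unknown" plant: y(k+1) = a*y(k) + b*u(k)
am, bm = 0.3, 0.7                # reference model: ym(k+1) = am*ym(k) + bm*r(k)

ah, bh = 0.0, 0.1                # identifier parameters (estimates of a, b)
th = np.zeros(2)                 # controller parameters: u = th[0]*r + th[1]*y
eta_i, eta_c = 0.1, 0.1          # learning rates (illustrative)

rng = np.random.default_rng(3)
y = ym = 0.0
for k in range(4000):
    r = rng.uniform(-1.0, 1.0)   # reference signal
    u = th[0] * r + th[1] * y    # controller network output, fed to the plant
    y_next = a * y + b * u       # true plant response
    yhat = ah * y + bh * u       # identified-model prediction
    ei = yhat - y_next           # identification error
    ah -= eta_i * ei * y         # LMS update of the identifier
    bh -= eta_i * ei * u
    ym_next = am * ym + bm * r   # reference model output
    ec = yhat - ym_next          # identified model vs. reference model
    th -= eta_c * ec * bh * np.array([r, y])  # gradient via identified model
    y, ym = y_next, ym_next

print(round(th[0], 2), round(th[1], 2))  # should approach 1.4 and -0.8
```

For this plant the ideal controller gains are th[0] = bm/b = 1.4 and th[1] = (am - a)/b = -0.8, so the printout gives a quick check that the adaptive loop converges; note that the controller error is driven by the identified model, not by the plant output directly.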
The same output is delayed in time and fed back to the network to serve as part of its input signals. Combining the two aspects (identification and control) within a well-defined adaptive structure such as the one described in a previous section leads to the representation of Figure 6. In this structure, which is very similar to the MRAC, identification is carried out first, and the identified model is then compared with the output of a reference model. The recorded error is then used to update the control law through modifications of the neural controller weights. Although the structure is similar to the MRAC, the scheme developed here is known as an inverse neuro-adaptive control scheme. This is mainly because the error provided to the neural controller is not computed directly as the difference between the plant output and the reference, but rather between the identified model and the reference.

Figure 6 Neural network as neuro-adaptive controller

V. CONCLUSION

The motivation for the early development of neural networks stemmed from the desire to mimic the functionality of the human brain. A neural network is an intelligent data-driven modeling tool able to capture and represent complex and nonlinear input/output relationships. Neural networks are used in many important applications, such as function approximation, pattern recognition and classification, memory recall, prediction, optimization, and noise filtering. They appear in many commercial products such as modems, image-processing and recognition systems, speech recognition software, data mining, knowledge acquisition systems, and medical instrumentation.

VI. REFERENCES

1. Fausett, L., Fundamentals of Neural Networks, Prentice-Hall, Englewood Cliffs, NJ, 1994.
2. Ham, F. and Kostanic, I., Principles of Neurocomputing for Science and Engineering, McGraw-Hill, New York, NY, 2001.
3. Haykin, S., Neural Networks: A Comprehensive Foundation, Macmillan College Publishing Company, Englewood Cliffs, NJ, 1994.
4. Hopfield, J. J. and Tank, D. W., Computing with Neural Circuits: A Model, Science, Vol. 233, pp. 625-633, 1986.
5. Hopgood, A., Knowledge-based Systems for Engineers and Scientists, CRC Press, Boca Raton, FL, 1993, pp. 159-185.
6. Jang, J. S., Sun, C. T., and Mizutani, E., Neuro-Fuzzy and Soft Computing, Prentice Hall, Englewood Cliffs, NJ, 1997.