Project Report in Embedded Control Systems
LegoWay
Uppsala University, 2009-09-16

Abstract: The object of this project was to build a two-wheeled self-balancing robot using the Lego Mindstorms NXT set and light sensors for detecting the tilt of the robot. The balancing was achieved with an LQG controller with a steady-state Kalman filter, and the implementation was done in RobotC code. Two light sensors were used. As the standard sensors included with the set did not give high enough resolution, these were substituted by more advanced light sensors. The robot is sensitive to ambient lighting and surface conditions. For robots supposed to handle different locations and light settings, the use of light sensors is therefore not recommended.

Francesco Cervigni, Jonas Henriksson, Tomas Måhlberg, Mikael Thuné, Åsa Tysk
The star of this project, Iron Buck
Table of Contents

1. Introduction
   1.1 Problem formulation
   1.2 Group members
2. State of the art analysis
3. Problem analysis
   3.1 Project goals
   3.2 Subtasks
      3.2.1 The model
      3.2.2 The sensor
      3.2.3 The controller
      3.2.4 Robot design
4. Our solutions
   4.1 Building the robot
   4.2 Modelling
   4.3 Sensors
      4.3.1 Converting raw sensor data into an angle with one sensor
      4.3.2 Converting raw sensor data into an angle with two sensors
   4.4 Controllers
      4.4.1 LQG controller
   4.5 Simulations
      4.5.1 Modelling in Simulink
      4.5.2 Simulation results
   4.6 System verification
   4.7 Line tracking
      4.7.1 Line tracking algorithm
      4.7.2 The search algorithm
      4.7.3 Line tracking not achieved
   4.8 Programming and RobotC implementation
      4.8.1 Concurrent processes
      4.8.2 Sampling time
5. Practical issues
   5.1 Synchronization and varying power of the motors
   5.2 Motor malfunctions
   5.3 Sensor sensitive to background lighting
   5.4 Logging data
   5.5 Joystick driver interferes with the integer conversion
6. Discussion
   6.1 Goal Assessment
   6.2 Experiences with the RobotC environment
      6.2.1 The Environment
      6.2.2 Debugging
      6.2.3 OS and Tasks
   6.3 Limitations due to lack of time
      6.3.1 Kalman filter estimations not good enough
      6.3.2 Not able to perform the line tracking
      6.3.3 Better evaluation of sensor readings
   6.4 Future work
Acknowledgements
References
Appendix A – C code
Appendix B – Matlab code
Appendix C – Model blueprints
Appendix D – Poetry
1. Introduction

The Lego Mindstorms NXT set is a big box full of opportunities; the parts can be assembled into a wide range of shapes, which with the aid of the control brick can be put to use in an endless variety of ways. It is relatively cheap, and the standard set can be extended with additional sensors and building parts. This makes it a candidate for laboratory work where the existing equipment might be expensive or for one use only; the Lego parts can be reassembled for course work with other purposes as well. They are also durable, making them ideal not only for children but also for harsh testing by engineering students.

The IT department is looking at possibilities of using the NXT in control theory education, where it might be able to illustrate the use of controllers. One laboratory assignment in a basic course in control theory is to keep an inverted pendulum balancing. The NXT parts can be assembled into a Segway imitation where the control brick is supported by two wheels. This is an unstable system, which is very hard to control with just simple control theory or, even worse, by hand.

1.1 Problem formulation

The assignment is to build a two-wheeled self-balancing robot out of a Lego Mindstorms NXT. The robot should be able to follow some predefined path, preferably represented by a dark-coloured line on a white surface, while keeping upright. The idea is that this robot should be used in a control theory course, as a demonstration of what can be achieved with control theory. Since these kinds of robots have already been built quite a few times, at different universities or even as hobby projects around the world, the robot should also present improvements in some way: cheaper, faster or more robust.

There are some guidelines as to how this is to be solved:

• The robot must be built using only major mechanical and electronic equipment from the Lego Mindstorms NXT, with the exception of some 3rd party Lego-compatible sensors.
• The robot should be user friendly, meaning it should contain as few parts as possible and have a robust construction design.
• The control system of the robot should be model-based.

1.2 Group members

Mikael Thuné - Group leader. Head of organization and planning. Worked with modelling and system identification.
Jonas Henriksson - Worked with programming and sensor issues. Also responsible for group communication and public relations.
Åsa Tysk - Worked with simulation and controller design. Responsible for proofreading the final report and obtaining coffee.
Tomas Måhlberg - Worked with simulation, the controller and the robot design.
Francesco Cervigni - Responsible for the line tracking and the programming structure.

2. State of the art analysis

Browsing the Internet reveals that building self-balancing robots is nothing new under the sun. Everyone, from amateur hobby programmers to advanced control theory researchers, seems fascinated with this simple yet elegant idea.

Material found can be broadly divided into two categories: simple PID-based light sensor robots able to balance, and more advanced model-based gyro sensor robots able to balance and move.
While the actual results of the former are in some cases quite good, the PID parameters seem to be found by trial and error and not by any kind of mathematical derivation.

More interesting are the model-based approaches. By deriving a model of the robot it is possible to either use a model-based controller or use the model to design a hopefully better non-model-based controller. The most common way is to first derive a model of the system and, in case the system is not linear, linearize it around some working point. It might also be possible to directly control the nonlinear model without linearization, for example by using partial feedback linearization. None of us in the group had any previous experience with nonlinear models and we therefore chose to concentrate on the linear models.

The most famous of the self-balancing robots is probably the Segway PT (where PT stands for personal transporter), invented by Dean Kamen and now produced by Segway Inc [1]. Being used in traffic by fragile human beings means the PT needs to be very safe and robust. For example, a failure of the electronics should not allow the machine to go out of control, but slow it down to a safe stop. To achieve this it incorporates two accelerometers, five gyroscopic sensors and two controller circuit boards. The large number of sensors also adds redundant input which can be used to obtain more accurate readings.

Trevor Blackwell built his own self-balancing scooter using a wheelchair motor, RC car batteries and an 8-bit microcontroller [2]. The controller used is a PD written in C. Blackwell compares the Segway with his own creation and calls it "Rolls Royce vs Model T". While it seems to ride just fine, it totally lacks safety features and makes "startlingly loud squelching sounds on tile floors".

Another two-wheeled robot is the neat looking nBot by David P. Anderson [3]. Much of its hardware is homebuilt, which is quite unusual in this kind of project. The nBot is controlled with a simple LQ controller and seems to be very robust to disturbances.

With the introduction of Lego Mindstorms, the number of self-balancing robots around the world has increased rapidly. Possibly the first self-balancing Lego robot is the LegWay by Steve Hassenplug, built using the original Lego Mindstorms RCX kit [4]. The LegWay uses two advanced light sensors called EOPD (Electro Optical Proximity Detector) to estimate its tilt angle and is controlled with a simple PID controller. The LegWay is quite stable on flat surfaces but does not handle disturbances very well.

Another interesting robot is the NXTway-G by Ryo Watanabe. It uses a linearized model and state feedback control. Unlike the LegWay, and many similar light sensor based designs, it uses the engines' rotary encoders to keep track of the robot's position, which is necessary for internal stability [5].

One of the best models available on the Internet is the NXTway-GS by Yorihisa Yamamoto [6]. The results of this prototype are quite similar to those of the NXTway-G. Both designs use an LQ controller for balancing. The NXTway-GS is also well documented and uses known system tools. However, it does not take advantage of a Kalman filter, which might possibly improve the results further.

3. Problem analysis

In this section we define the goals of our project more in depth and try to divide the full problem into smaller parts.
3.1 Project goals

After the state of the art analysis, some project meetings and a few cups of coffee, we set up the following goals for our project:

To try some model-based controller together with the EOPD - We have only found EOPD robots balancing with PID controllers and are curious to see if a state space controller would fare any better.

To build an EOPD-based robot able to move and turn - The EOPD robots we have seen have only been able to stand still and balance, or move in a very crude manner.

To build simple and modular code - There exist a number of code generating tools for the Lego NXT, but we wanted to program everything from scratch in RobotC. We also wanted to make the code modular, to make it easy to switch between and compare different controllers.

3.2 Subtasks

The problem of balancing the two-wheeled robot can be divided into a set of subtasks, where the main one is to keep the robot upright and moving with some kind of controller. This in turn needs a sensor for estimating the body tilt angle, so that the robot is aware of when it is balancing or falling. Since it is the motors that balance the robot, they are required to be synchronized, and it is also important that they work regardless of how well the battery is charged. To balance the robot and have it follow a path at the same time, the controlling program has to be able to do simultaneous tasking.

3.2.1 The model

The robot will be modelled as a rectangular block on wheels. The model only needs to describe the behaviour of the robot under all the conditions it will "normally" operate under: moving, turning and falling. If the robot ends up in a state not described by our model (e.g. when disturbed so that the body tilt angle becomes too large, or struck by lightning) we do not expect it to continue to balance.

3.2.2 The sensor

The sensor readings will be nonlinear with respect to the tilt angle and need to be converted into angles if they are to be of any use in the controller. The angles need to be as accurate and precise as possible, but should be very lightweight to calculate.

3.2.3 The controller

The controller is possibly the most important part of the robot. Foremost, it needs to be fast enough to balance the robot before it falls. If the reference signal is the body angle of the robot, then the stationary error needs to be suppressed or the robot will move in one direction. If the oscillation of the controller is too big it will be susceptible to disturbances and will probably end up falling. All of these cases need to be taken into account by the controller.
3.2.4 Robot design

The robot needs to be designed in a very stable way to make sure it does not break. The design should also be quite simple, so that the behaviour of the robot is easier to model. The centre of gravity should be placed as low as possible to make it easier to balance, and we should use as few parts as possible to make the robot easy to build and maintain.

4. Our solutions

4.1 Building the robot

The robot is built strictly with parts from the original Lego Mindstorms NXT set, apart from the sensors, which were produced by a third party company. The main part is the actual computer of the NXT robot, constructed like a big Lego brick. It will from here on be referred to as just the brick. When designing our robot prototype we focused on making it easy to build and to replicate. Building a robust robot is somewhat in conflict with building a simple one, as it takes lots of parts to make it really robust. One advantage of using only a few parts is that Lego parts are very prone to disappear; it is therefore not wise to have a model dependent on parts that are not easily available or replaceable. The full blueprints of our robot are available in Appendix C.

The main parts of the BUCK model are:

• The NXT brick, with a 32-bit ARM7 microcontroller with 256 Kbytes FLASH and 64 Kbytes RAM
• One original touch sensor
• Two original light sensors (used only in the defunct line following version)
• Two EOPD (Electro Optical Proximity Detector) sensors, an enhanced light sensor produced by HiTechnic Products

4.2 Modelling

After having spent some time trying to model the system of the robot ourselves, we decided to instead proceed by basing our model on the NXTway-GS by Yorihisa Yamamoto. The robot is modelled as a rectangular block on two wheels. The full NXTway-GS model is three-dimensional; ours, however, is reduced to two dimensions. From the beginning we did this in order to simplify the model so we could get it working with minimal effort. In the end we were forced to keep this model due to lack of time to fully complete the project. The simplified physical model is illustrated in Figure 1 below.
Figure 1: A simplified schematic drawing of the robot in two dimensions.

Ψ depicts the body yaw angle and Ѳ is the angle of the wheels relative to their starting position. The linearized motion of the system is described by equation 1.

equation 1

where

m = wheel weight
M = body weight
R = wheel radius
L = distance to centre of mass from wheel axis
J_w = wheel inertia moment
J_m = DC motor inertia moment
J_ψ = body pitch inertia moment
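In outline, and assuming the same state choice as the NXTway-GS model that ours is based on, equation 1 defines a linear state-space system in these parameters:

    \dot{x} = A x + B u,
    \qquad
    x = \begin{bmatrix} \theta & \psi & \dot{\theta} & \dot{\psi} \end{bmatrix}^{T},
    \qquad
    y = \begin{bmatrix} \theta & \psi \end{bmatrix}^{T} = C x

where u is the motor signal; the numerical entries of A and B follow from the parameters listed above and are computed by the m-file in Appendix B.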
Changes had to be made to the original model in order to take into account the differences between our robot and the NXTway-GS. These changes include modifications prompted by the use of smaller wheels, a different placement of the centre of mass and anything else that was different on our robot. To determine what adjustments would have to be made, the radius of the wheels was measured and the parts weighed. The centre of mass was estimated by balancing the robot on a thin stick and measuring the position of this point. All physical properties were written into a MATLAB m-file, which can be found in Appendix B, where the state space matrices were calculated.

4.3 Sensors

The sensor used for this project was the EOPD (Electro Optical Proximity Detector), a light sensor with higher resolution than the one included in the NXT kit. The tilt of the robot can be estimated by measuring the intensity of the reflected light from the sensor. In the beginning we only used one sensor, which was positioned at the front of the robot. When the robot tilts forward more light is reflected, and when falling backwards the sensor reads a lower intensity value. This makes it possible to establish a relationship between the light intensity and the robot tilt angle.

The EOPD is less sensitive to background lighting than the standard sensor, which is a good quality since the lighting conditions then do not impact the measurements. However, the EOPD is sensitive to colours - a light surface is a better reflector than a dark one. This means that for every new surface the robot should handle, the intensity-angle relation has to be re-evaluated. Another drawback of using the EOPD, or any light sensor, is that the resolution of the measurements will not be equal for all angles. The mapping between intensity and angle is nonlinear and the intensity readings are integer valued, resulting in non-uniform resolution. Therefore the detection of forward tilting is more accurate than the detection of backward tilting. This was however solved later, when we received a second EOPD which we could mount at the back of the robot, thus making it equally accurate in forward and backward tilting.

Another problem arises when the sensor is placed too close to the underlying surface. When the robot tilts, the sensor also tilts. If the robot falls forward there comes a point where the intensity measurements start to decline, instead of increasing as expected, because the sensor is angled so that it does not catch enough of the reflected light. Another limitation of using the light sensor is the fact that the surface needs to be of a uniform colour.

4.3.1 Converting raw sensor data into an angle with one sensor

To form a function for converting sensor data into an angle, direct measurements of the angle and the light value were made. This was done by holding the robot at different angles, measured with a large protractor, while reading the EOPD value from the RobotC debugger. These data were then plotted in MATLAB and fitted to a polynomial. To reduce the measuring errors, the mean of several readings was used. We tried both the 2nd and 3rd order polynomials shown in Figure 2 and Figure 3, of which the 3rd order works best. The polynomial coefficients were specified in our RobotC code and used to convert the sensor readings into actual angles.

Figure 2: 2nd degree polynomial approximation of data.

Figure 3: 3rd degree polynomial approximation of data.
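A sketch of this conversion step on the brick; the coefficients below are placeholders standing in for the values produced by the MATLAB fit:

    // Convert a raw EOPD reading into a tilt angle with the fitted
    // 3rd order polynomial, evaluated in Horner form to stay lightweight.
    float raw_to_angle(float raw)
    {
        const float c3 = -1.0e-6;  // placeholder coefficients; the real
        const float c2 =  3.0e-4;  // values come from the MATLAB
        const float c1 = -0.5;     // polynomial fit described above
        const float c0 = 40.0;
        return ((c3 * raw + c2) * raw + c1) * raw + c0;
    }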
This offline method gives very accurate results. However, a drawback is that the polynomial fitting can only be done offline. If we take into account that the EOPD is sensitive both to the background lighting and to the surface material, this drawback actually turns into a major problem, since it is necessary to recalibrate the function quite often.

4.3.2 Converting raw sensor data into an angle with two sensors

In order to make it easier to move the robot from one surface to another, we devised a calibration scheme that is carried out every time the program starts.
Taking the cubic root of the measurements from a light sensor yields a relationship between the sensor values and the angle that is linear. This made it possible to easily evaluate this relationship in the robot program at startup. First the sensor values at two known angles are measured, then a system of two equations is solved to determine two parameters, a and b. The angle is then given by the parameters multiplied with the cubic roots of the sensor readings.

A stand was mounted on the robot, allowing it to rest at two known angles. These angles were large enough for the stand not to interfere with the self-balancing, i.e. by hitting the ground and helping the robot to balance, so the stand could be kept mounted on the robot at all times.

At startup, the light intensities from the front sensor, Ifront, and the back sensor, Iback, are measured at two known angles ψ1 and ψ2. We want to determine the parameters a and b, where

equation 2

which is done by solving the system of equations for a and b:

equation 3

Solving this system is reduced to finding the inverse of a 2x2 matrix, a lightweight operation even on the limited CPU of the NXT brick.
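A sketch of this startup calibration, assuming (as we read equations 2 and 3) the angle model ψ = a·∛(Ifront) + b·∛(Iback); the cube root helper and the function name are illustrative rather than taken from our source, and sens_front/sens_back are the Appendix A variables that appear to hold these constants:

    // Cube root for the (positive) intensity readings, via exp/log
    // since a dedicated cbrt function may not be available.
    float cbrt_pos(float v)
    {
        return exp(log(v) / 3.0);
    }

    // Solve the 2x2 system of equation 3 for a and b, given readings taken
    // while the robot rests on its stand at the known angles psi_1 and psi_2.
    void calibrate(float If1, float Ib1, float If2, float Ib2)
    {
        float m11 = cbrt_pos(If1), m12 = cbrt_pos(Ib1);  // row for psi_1
        float m21 = cbrt_pos(If2), m22 = cbrt_pos(Ib2);  // row for psi_2
        float det = m11 * m22 - m12 * m21;               // 2x2 determinant
        sens_front = (m22 * psi_1 - m12 * psi_2) / det;  // a
        sens_back  = (m11 * psi_2 - m21 * psi_1) / det;  // b
    }

    // Thereafter, each cycle:
    //   psi = sens_front * cbrt_pos(u_front) + sens_back * cbrt_pos(u_back);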
4.4 Controllers

The model describes an unstable system. To stabilize it and keep the robot from falling over, some kind of feedback controller needs to be applied. There are many possible controller types, more or less suitable for solving our problem. Because of the restrictions inherent to our robot and the choice of platform, RobotC, the selection narrowed a bit. The execution of RobotC code in the processor is not that fast, which left all computation-heavy controllers out of our reach. In the end we decided to try a non-model-based PID controller and one that uses knowledge of the model - the LQG controller.

4.4.1 LQG controller

The LQG controller consists of two building blocks: an observer, a Kalman filter, for estimation of the unknown states and noise reduction in the measured states, and an optimal LQ gain for feedback control from the states estimated by the Kalman filter.

4.4.1.1 The Kalman filter

For our Kalman filter we chose to use the steady-state Kalman gain, for a cheap implementation with fewer computations. This meant we could obtain the Kalman filter as a discrete linear time-invariant system, i.e. on state-space form, by using functions in MATLAB's Control System Toolbox. The only measurable states in our system are the angles ψ and Ѳ. Also the motor signal, u, from the previous time step is known. Hence, these signals were chosen as the input signals to our Kalman filter. The equations that need to be evaluated for each cycle can be seen in equation 4:

equation 4

Finding the right covariance matrices for the Kalman filter proved to be rather tedious work. Using what could be considered reasonable noise variances, assuming the model was correct, resulted in a filter with such bad performance that the output was more or less completely useless. The solution was to instead add a lot of process noise and at the same time reduce the measurement noise. As can be seen in Figures 4 and 5, when using a small process noise variance the robot would still balance but oscillate quite a bit, because of the inaccuracy of the state estimates. When increasing the process noise variance the estimated states would follow the measured states more closely, but the derivatives would become very noisy. This behaviour can be seen in Figures 6 and 7. The final version was decided by logging data on the robot and evaluating the performance of the filter. For our final Kalman filter we used the covariance matrices found in equation 5:

equation 5
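Equation 4 amounts to one LTI state update and one output computation per sample. A sketch of that per-cycle evaluation, written as plain C loops for clarity, using the global names from Appendix A; how the two outputs map onto those variables is our assumption:

    // One cycle of the steady-state Kalman filter, realized as the
    // discrete LTI system of equation 4:
    //   x_new = A_kalman * x_prev + B_kalman * u_kalman
    //   x_est = C_kalman * x_prev + D_kalman * u_kalman
    // u_kalman holds the measured angles and the previous motor signal.
    void kalman_step()
    {
        for (int i = 0; i < 4; i++) {
            x_new[i] = 0;
            x_est[i] = 0;
            for (int j = 0; j < 4; j++) {
                x_new[i] += A_kalman[i][j] * x_prev[j] + B_kalman[i][j] * u_kalman[j];
                x_est[i] += C_kalman[i][j] * x_prev[j] + D_kalman[i][j] * u_kalman[j];
            }
        }
        for (int i = 0; i < 4; i++) x_prev[i] = x_new[i];  // shift the time step
    }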
Figure 4: Measured ψ and output from Kalman filter. Blue is the measured value and red the Kalman estimate. Q = [50 0; 0 50], R = [0.01 0; 0 0.001].

Figure 5: Measured Ѳ and output from Kalman filter. Blue is the measured value and red the Kalman estimate. Q = [50 0; 0 50], R = [0.01 0; 0 0.001].
Figure 6: Measured ψ and output from Kalman filter. Blue is the measured value and red the Kalman estimate. Q = [5 0; 0 1], R = [0.01 0; 0 0.001].

Figure 7: Measured Ѳ and output from Kalman filter. Blue is the measured value and red the Kalman estimate. Q = [5 0; 0 1], R = [0.01 0; 0 0.001].
4.4.1.2 The LQ gain

For the inverted pendulum problem it is crucial that the body tilt angle is kept small, to keep the system from becoming unstable. Hence, when choosing the design parameters for the LQ controller, our choice was to put a heavy penalty on the ψ angle. Additionally, we have a limitation on the amount of power that can be sent to the motors: the signal can only be in the interval [-100, 100] pwm. To avoid clipping of the input signal we therefore also chose to put a heavy penalty on the input signal of the system, i.e. the motor signal. Based on these premises, a first LQ controller was derived by using functions from MATLAB's Control System Toolbox. The penalty matrices were then fine-tuned by analysing a series of logged data and evaluating the behaviour of the robot for a large number of choices. The penalty matrices used for the final version of our controller can be seen in equation 6. This led to the LQ gain in equation 7.

Since the performance of our Kalman filter is not perfect, especially for the derivatives, which fluctuate quite a bit, the feedback gain for the Ѳ derivative was manually adjusted to a smaller value, 30% of the calculated one. The final LQ gain vector can be seen in equation 8.

equation 6

equation 7

equation 8

The performance of this LQG controller can be seen in Figure 8.

Figure 8: Plots of Kalman estimates of ψ and Ѳ, as well as the motor signal. The robot starts from a leaning position (approximately 13 degrees) and automatically stands up and starts balancing.
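A sketch of the corresponding feedback step, using the gain names from Appendix A (where L_theta_dot already carries the 30% adjustment); the overall sign convention is illustrative, while the clipping mirrors the [-100, 100] pwm limit:

    // State feedback from the Kalman estimates,
    // x_est = [theta, psi, theta_dot, psi_dot].
    float controller()
    {
        float u = -(L_theta     * x_est[0]
                  + L_psi       * x_est[1]
                  + L_theta_dot * x_est[2]   // scaled to 30% of the LQ value
                  + L_psi_dot   * x_est[3]);

        // Saturate to the admissible motor signal range
        if (u >  100.0) u =  100.0;
        if (u < -100.0) u = -100.0;
        return u;
    }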
4.5 Simulations

After obtaining a state-space model of our robot, we wanted to evaluate the performance of different Kalman filters and LQ gains. For this purpose we built a model of the system in Simulink to simulate the behaviour of the robot.

4.5.1 Modelling in Simulink

The Simulink model of our system, with the physical model of our robot and the LQG controller, can be seen in Figure 9.
Figure 9: Simulink model of the system with an LQG controller.

An important limitation of our robot, which is not present in our state-space model of the physical system, is the fact that the signal sent to the motors cannot exceed 100 pwm or be any lower than -100 pwm. This is represented by a saturation put on the input signal to the model in our simulations.

To further approximate the real environment, noise was added to the input signal and to the measurements. The noise on the input signal, the process noise, was chosen to have signal power 1. The measurements of ψ were deemed by us to be rather accurate and the power of the noise was estimated to be in the order of 0.001. The angle Ѳ is determined by a built-in function in RobotC, which outputs an integer value. This led us to choose a larger noise power, 0.1, for the Ѳ measurement.

We had great difficulty reconciling the performance of the Kalman filter in simulations with the actual behaviour on the robot. It would pass the simulations admirably, but when the filter was implemented on the robot it failed the testing; the Kalman estimates of the angles would be wrong from the start and continually worsen, so that the error of the estimates would finally (after a few seconds) be so large that the robot would fall over. With this discrepancy between simulations and reality, we began to believe that our model was faulty. Since the simulations are entirely dependent upon an accurate model, we decided not to be overly trusting of the results. What we mostly used Simulink for was to examine whether the controller would be able to stabilize the system, not how well, and to check that the motor signal would stay within the allowed -100 pwm and 100 pwm values, so that the saturation property of the motor signal would not give rise to too many problems.

4.5.2 Simulation results

In this section some of the results of the simulations for the LQG controller that performed best on the actual robot are presented. Many choices of penalty matrices for the Kalman filter and LQ gain were tested, and the final one, managing to balance the robot well, was designed with the parameters in equations 9 and 10:
equation 9

equation 10

The results from a 5 second simulation with the LQG controller based on the penalty choices described above are shown in Figures 10-12.

Figure 10: The true Ѳ value and the Kalman estimate plotted for a 5 second simulation.
Figure 11: The true ψ value and the Kalman estimate plotted for a 5 second simulation.

Figure 12: The control signal sent to the motors plotted for a 5 second simulation.
4.6 System verification

Since the PID parameters obtained from simulations in Simulink did not stabilize the robot, an error in the theoretical model was suspected to be the cause. The model structure had itself been proven to work in earlier Lego robots [6], so the problem was probably in the actual parameters of the model. Some parameters which were considered hardware constants were taken from previous experiments on the Lego NXT [6], and the rest were measured or approximated.

The parameters measured by us were the masses of the chassis and the wheels, the wheel radius and the distance to the centre of mass. These were all measured using a letter scale and a standard ruler and were therefore considered to be reliable enough. Most of the parameters concerning the NXT motors were taken from an experiment made by Ryo Watanabe [5] and one was taken from the report of the NXTway-GS [6].

Even though the NXT motors are mass produced, there is the possibility that these parameters are different on our robot. However, the parameters most likely to be incorrect are the ones estimated by approximation: the different moments of inertia. Since both the shape and the mass distribution of the robot are very complex, an accurate calculation of the moments of inertia would be very hard and not worth the effort. Instead the robot was approximated as a rectangular block with uniform mass distribution and the calculations were based on that.

So, in order to investigate whether the theoretical model was inaccurate or not, a grey box modelling approach was chosen. To accomplish this, MATLAB's System Identification Toolbox was used. To use this toolbox, a state space representation of the theoretical model is needed and also a set of gathered data. This data is the input and output signals from the actual robot during a test run. This might seem very straightforward. The problem is that the theoretical model needs to be stable in order for the toolbox to work, but the open loop system modelled is naturally unstable. The solution is to use the stabilized closed loop system instead, which brings us to the next problem: the closed loop system needs to be represented in state space form.

To get the state space representation of the system with a PI controller, some derivations had to be made. A helpful webpage was used as a reference [7], and the resulting system can be seen in Figure 13.

Figure 13: State space representation of the closed loop system with a PI controller.

A0, B0 and C0 are the state space matrices for the open loop system. The measured signal y consists of the control error and the integral of the control error.
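Assuming the PI law u = Kp e + Ki z, with error e = r - C0 x and integrator state ż = e, one consistent way to write the closed loop shown in Figure 13 is:

    \begin{aligned}
    \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix} &=
    \begin{bmatrix} A_0 - B_0 K_p C_0 & B_0 K_i \\ -C_0 & 0 \end{bmatrix}
    \begin{bmatrix} x \\ z \end{bmatrix}
    +
    \begin{bmatrix} B_0 K_p \\ I \end{bmatrix} r, \\
    y &= \begin{bmatrix} e \\ z \end{bmatrix}
      = \begin{bmatrix} -C_0 & 0 \\ 0 & I \end{bmatrix}
        \begin{bmatrix} x \\ z \end{bmatrix}
      + \begin{bmatrix} I \\ 0 \end{bmatrix} r.
    \end{aligned}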
Now almost all the preparations needed for the system identification were done. What was left was logged data from a test run. In order to make the data as informative as possible, the system was excited with a reference signal consisting of three summed sinusoids. The frequencies of the sinusoids were chosen with respect to the frequency gain of the system, see Figure 14, i.e. they were placed where the system had a significant gain. The actual logging of the data ran into some problems, since the PI parameters that stabilized the robot did not stabilize the theoretical model and vice versa. A compromise was therefore chosen: a set of parameters that barely stabilized both systems was found, and data from test runs could now be logged.

Figure 14: Bode diagram for the closed loop system with Kp = 25 and Ki = 130.

With the gathered data and the state space representation, the toolbox could now be run. The function used is called pem() and uses an iterative prediction-error minimization method to find the values of the specified parameters that best fit the data. After a few runs with different parameters specified for identification, some rather mixed results were collected. The moment of inertia for the wheels was identified as ten times larger than our approximated value, and this was proven to be correct, since we had made some miscalculations. As for the rest of the parameters, no helpful results were gained. Some were chosen up to 10^4 times larger and some were given a different sign. This seemed like too big a difference, and when the logged outputs were compared with the outputs from the validated model, given the same reference signal and starting values, they did not match.

Given that we had little time left on the project, we decided to try using system identification on the LQG controller instead. We thought the very last days would be better spent on our latest controller, which finally was working properly. As before, a state space representation of the closed loop system was needed to perform the identification. First we divided the system into three parts and modelled them separately: the robot, the Kalman filter and the feedback gain. We then used MATLAB's built-in functions series() and feedback() to connect these systems in a closed loop and got a state space representation. We then used the same procedure as with the PID controller to perform the system identification.
Unfortunately, we did not get any satisfying results at all. The parameters were way off and the loss function was huge. This pointed to some error in the model structure given to the toolbox, and we think the problem was that we might have forgotten to update the Kalman filter when the model parameters changed. However, these attempts were performed on the last day and we did not have time to find and correct the problems.

4.7 Line tracking

One of the goals of the project was to make the robot autonomously follow a line. The line was printed in a dark colour on a white paper and formed to make an irregular track.

4.7.1 Line tracking algorithm

In order to perform the line detection we used two light sensors of the original type that came with the NXT. The two EOPD light sensors we used for approximating the tilt angle are very sensitive to changes in the light values. This means they need to be on a surface of uniform colour. If one of them should be placed over the track, less light would be reflected and this would be registered as a big change in the body tilt angle. This is a major limitation of the light sensor implementation. Our thought on how to solve this was that the line detection sensors should be mounted far from the balancing sensors, on the side of the robot. In this way our robot would move along the line and not over it, while still being able to follow it. It is however still likely that the sensors at some point would be positioned over the track, either when the robot is executing the search algorithm or if the track at some point intersects with itself.

The reference value BLACK represents the value coming from the sensor when it is entirely placed over the line. Using this value as a threshold, it is possible to detect from the incoming sensor readings whether they are over the line or not.

The robot's state with respect to the line can be (see Figure 15):

• OUT: None of the sensors are over the line.
• IN: Both sensors are positioned over the line.
• P_IN: Just one sensor is over the line. Having this state is the main advantage of using 2 sensors.

Figure 15: Possible states of the line tracking.
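A sketch of the state classification this describes; the constant names besides BLACK, and the comparison direction (a reading at or below the BLACK threshold is taken to mean "over the line"), are illustrative:

    // Line-tracking states as described above.
    #define LINE_OUT   0  // no sensor over the line
    #define LINE_IN    1  // both sensors over the line
    #define LINE_P_IN  2  // exactly one sensor over the line

    // BLACK: reference reading taken with a sensor entirely over the line.
    int classify(int left_reading, int right_reading, int BLACK)
    {
        int left_on  = (left_reading  <= BLACK);  // dark line reflects less light
        int right_on = (right_reading <= BLACK);
        if (left_on && right_on)   return LINE_IN;
        if (!left_on && !right_on) return LINE_OUT;
        return LINE_P_IN;  // which side is on the line tells us where to steer
    }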
From the third state it is possible to know exactly what movement has to be performed in order to achieve a better position, since it can be determined on which side of the line the robot is.

The line tracking is performed as a continuous polling of the light sensors, returning the current state [IN, P_IN, OUT]. The action performed depends on the current state:

• OUT: The search algorithm is performed.
• IN: The robot moves slowly forward, and the search algorithm variables are reset.
• P_IN: A movement is performed to the left or right, depending on which sensor is over the line.

4.7.2 The search algorithm

The search algorithm requires an undefined number of steps before finding the line, so it is not performed atomically but as a sequence of single separate movements. This is because we have to continuously run the balance function. A set of global variables represents the search state. Every time the tracking function is called, if the robot is in the state OUT, the next movement according to the search state is performed.

For each attempt to find the line, the robot searches within a given range, ATTEMPT * TURNING_STEP. The variable ATTEMPT is first set to 1 and is increased for each failed attempt to find the line, making the robot search in a larger area. Since it is also known which sensor left the line first, the robot knows in which direction to look first. A sketch of one such search step is given below.

Figure 16: Search ranges for different attempts.
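The sketch keeps the report's ATTEMPT and TURNING_STEP names, while the helper turn_step() and the exact sweep bookkeeping are hypothetical:

    // Globals describing the search state, reset whenever the line is found.
    int ATTEMPT = 1;             // grows after each failed sweep
    int search_dir = 1;          // +1 or -1; set from which sensor left the line
    int steps_taken = 0;         // progress within the current sweep
    const int TURNING_STEP = 5;  // hypothetical size of one movement

    // Called once per control cycle while the state is OUT, so that the
    // balance function keeps running between movements.
    void search_step()
    {
        if (steps_taken < ATTEMPT * TURNING_STEP) {
            turn_step(search_dir);     // hypothetical: one small turn movement
            steps_taken++;
        } else {
            search_dir = -search_dir;  // sweep the other way, over a wider range
            ATTEMPT++;
            steps_taken = 0;
        }
    }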
4.7.3 Line tracking not achieved

Trouble with getting the robot to balance caused work on the line tracking algorithm to be set aside in favour of controller testing. In the end this meant that the line tracking was never actually implemented on the robot.

4.8 Programming and RobotC implementation

To get a working, balancing robot we had to implement all our theories and solutions in RobotC code. As always when programming, the solutions have to be adapted somewhat to the language used.

4.8.1 Concurrent processes

Our implementation does not make use of multiple concurrent processes. Effective parallelization is impossible because the brick is a single-processor system, though the programming environment supports multitasking. During the course of the project we tried to implement concurrent solutions anyway, earning us some experience in that field, but when we realised that it would not help us we reverted to the use of a single process for all the tasks.

We made one attempt to implement concurrency in our robot, consisting of five tasks in addition to the main task. Two control tasks:

- Line_check: polling data from the light sensors used for the line tracking.
- Balance_check: polling data from the light sensors used for balancing.

These would work as switches, activating and deactivating three operative tasks giving the actual directions to the motors:

- Run_forward, which would make the robot move straight forward.
- Find_line, which would move the robot in search of the line.
- Balance, which would hold the robot upright.

We abandoned this implementation, using single tasking instead, for the following reasons:

- The mathematical model required a constant time step. With concurrent tasks we would not be entirely certain of exactly when the actions would have been performed, especially because of the lack of documentation available about the scheduler (a priority-based Round Robin) and the context switch.
- It would be complicated: to achieve the right scheduling we would have to manually change the tasks' priorities.
- It would be inefficient, since the continual context switches between tasks would steal time that would be much needed for computations in the balance task.

4.8.2 Sampling time

The robot balances in continuous time, whereas the controller works in discrete time. Therefore the system needs to be sampled. For good performance the sampling frequency of the system should be high. This is limited by the fact that both the sensors and the motors need some time between value readings and output to the wheels. Also, the computations in RobotC require some processor time every cycle of the program, in between readings. Since the RobotC system functions only return time stamps in multiples of milliseconds, 1 ms is the smallest unit by which time is measured on the robot. The EOPD sensor readings are only updated every 3 ms and the motor output every other millisecond. This puts a lower limit of 3 ms on the sampling period. In order to allow for the time it takes to calculate the Kalman estimates and the conversion of the angle, currently at 4 ms, we decided to use a time slot of 10 ms for the program and controller.
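A sketch of the resulting main loop timing, assuming RobotC's millisecond timer nPgmTime and wait1Msec(); the step contents are stand-ins for our actual routines:

    const int SAMPLE_MS = 10;  // chosen sampling period

    task main()
    {
        long t0;
        while (true)
        {
            t0 = nPgmTime;   // millisecond time stamp at the start of the cycle

            get_angle();     // convert EOPD readings to psi (see Appendix A)
            // ... Kalman update and controller() call, about 4 ms of CPU ...
            // ... send the saturated signal to the motors ...

            // Idle away the rest of the 10 ms slot so the discrete-time
            // controller sees a constant sampling period.
            while (nPgmTime - t0 < SAMPLE_MS)
                wait1Msec(1);
        }
    }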
5. Practical issues

During the course of the project we encountered some problems related to the Lego Mindstorms and sensor hardware, and also to the RobotC programming environment. The motor outputs from the Lego brick's output ports were not synchronized and sometimes worked in an unpredictable way. The fact that the power of the motors varied with the battery level had to be addressed, and the sensor values were dependent on the lighting conditions, which meant some form of calibration needed to be performed.

5.1 Synchronization and varying power of the motors

The robot is reliant upon the wheels for balancing and moving. Unfortunately, the motors rotating the wheels were not synchronized properly; one motor was rotating at a higher pace than the other. This resulted in the robot turning slightly when it should have moved straight ahead. The RobotC language includes a function that is supposed to counteract this problem; however, it did not work in a satisfactory manner. In fact, there was no apparent difference between having it turned on or off. The problem was likely a slight mechanical variation between the motors. We also noticed that the speed difference changed with varying reference speed, so we solved the problem with a P controller where the faster motor was assigned to follow the slower motor, effectively synchronizing them.

When the power level in the battery of the NXT brick was reduced, the output power to the motors also decreased. This meant that the same control signal did not give the same response from the motors when the battery was well charged as when it had lost some of its capacity, making the balancing very dependent on the battery level. To remedy the problem we scaled the motor signals with a constant equal to the current battery voltage divided by the maximum battery voltage.
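A sketch of both remedies. nMotorEncoder and nAvgBatteryLevel are RobotC intrinsics; Kp_sync is a hypothetical tuning gain, V_max_full is the constant from Appendix A, and the direction of the battery compensation (dividing the signal by the current/max ratio) is our reading of the description above:

    float Kp_sync = 0.5;  // hypothetical gain for the follower P controller

    // Let motor B follow motor A: penalize any difference in the
    // accumulated encoder counts of the two motors.
    float sync_correction()
    {
        return Kp_sync * (nMotorEncoder[motorA] - nMotorEncoder[motorB]);
    }

    // Compensate for the battery level so that the same control signal
    // produces roughly the same motor response regardless of charge.
    float battery_compensate(float signal)
    {
        float battery_scale = nAvgBatteryLevel / V_max_full;  // current/max
        return signal / battery_scale;
    }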
5.2 Motor malfunctions

Much time was spent on problems caused by the motors not functioning as they should. The outputs from the motors are dependent on how well the batteries are charged. Then there is the previously discussed matter of synchronizing the two motors. Also, when one of the motors was connected to port C, both motors would stop dead for several samples, regain their proper function, only to stop dead again after another short period of time. We have found no explanation for this behaviour, but it did not occur with any of the other motor ports on the NXT. This was deduced to be a bug in the firmware of the brick.

5.3 Sensor sensitive to background lighting

After many measurements and adjustments we have concluded that the EOPD sensor is not as insensitive to ambient light as the supplier describes. Our measurements gave different results, which could only be attributed to lighting conditions, since this was the only factor that changed between the measurements. We noticed a small improvement when we changed the underlying surface to a paper of a duller colour. This less reflective material seemed to be more consistent for different background lighting, though not as much as we would have wanted.

5.4 Logging data

There are some limitations to what data it is possible to log with the NXT brick. First of all, it is only possible to log positive integers. To be able to log negative numbers we added a large number to make all negative numbers positive, subtracting it again in MATLAB to convert back to the original values. This is not a very pretty solution unless you know what values to expect, but in the end it was good enough for us. To log floating-point numbers it is possible to multiply the value by a factor of 10 before type converting it into an integer, dividing it by the same number later to convert it back.

Another nuisance of using the data log on the brick is that it has to be restarted between each run in order for the right data to be logged. The logging is done by writing variables to a log file on the brick. There is a RobotC function that saves the log file as a new file on the brick, which can then be uploaded to the work station for analysis. The log file is not flushed by this command, however, and logging new values means adding them to the end of that file. In order to properly remove all earlier logged variables from the log file, the brick has to be restarted.
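A sketch of the encode/decode trick described in this section; the offset and scale constants are hypothetical examples, not the values we used:

    const float LOG_SCALE  = 10.0;    // hypothetical: keeps one decimal place
    const long  LOG_OFFSET = 100000;  // hypothetical: larger than any |value| * 10

    // Pack a signed float into the positive integer the NXT log accepts.
    long encode_for_log(float value)
    {
        return (long)(value * LOG_SCALE) + LOG_OFFSET;
    }

    // After uploading the log, MATLAB recovers the original values as:
    //   value = (logged - 100000) / 10;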
We tried to bypass the inefficient RobotC way of logging variables and made an attempt at live logging the data straight to MATLAB via Bluetooth, using the RWTH Mindstorms NXT Toolbox. This was unsuccessful because of the limitations of the USB dongle we used (Abe UB22S [8]). It turned out that it is either not bi-directional, or we could not get it to work in bi-directional mode. This was a necessary condition for using the aforementioned toolbox. Since we did a lot of analysing of data, having live logging working would have sped up the logging process considerably and saved us a lot of time.

5.5 Joystick driver interferes with the integer conversion

When using the standard RobotC driver for a joystick, which is convenient when you want to try the performance of a servo controller for the robot, the integer conversion in RobotC is somehow changed. This means you cannot typecast float variables to the integer type. Many functions in RobotC require values to be integers, e.g. the logging function (see the previous section) and the function for sending directions to the motors. This peculiarity of the driver took some time for us to figure out. The motor synchronisation solution we had devised, where one of the motors follows the other, used an integer typecast, which caused the command for that motor to fail. Meanwhile, what we observed was motor port A ceasing to function. We wrongly assumed this to be a problem with the port, not the program, since we had already had some experience with port malfunctions, see section 5.2.

6. Discussion

We had much fun working on this project and learned a great deal. In this section we would like to present some final thoughts on what we achieved compared to what we initially stated as our objectives. We also present a brief evaluation of the programming environment and an account of what we did not have time to do, things that could be improved and some suggestions for future work.

6.1 Goal Assessment

In section 3.1 we defined some goals for the project: to use EOPD sensors and a model-based controller, to have the robot be able to move and turn, and to make all the code for this assignment simple and modular.

The controller that we designed for our robot was indeed model-based: an LQG controller with a steady-state Kalman filter. The second goal of having the robot move and turn was not reached, however, since the controller did not work in servo mode and the robot was only able to stand and balance. As for the code, it is simple and easily modified. Due to its rather compact form we decided that splitting it up for the sake of modularity would not enhance the readability or ease of use.

6.2 Experiences with the RobotC environment

All the programming was done in RobotC. We also had the option of using a graphical programming environment, the Embedded Coder Robot NXT, but we chose to do the implementation in RobotC instead.
6.2.1 The Environment

Programming of the brick was done using the RobotC programming environment by the Carnegie Mellon Robotics Academy. Our summarised experience with this IDE:

RobotC pros:
• An easy language, C-like and without the need to manage pointers.
• There are many system calls available for getting information from and managing the brick. These are easy to call, since their specifications and signatures are mostly available in a side frame of the environment.
• The programs are compiled and automatically downloaded to the brick via USB or Bluetooth.
• A very accurate debugger.
• Good documentation.

RobotC cons:
• Frequently crashes on the computer, especially when Bluetooth communication with the brick fails.
• Some drivers (joystick drivers in particular) can cause errors with the typecasting, affecting the motors and the logging behaviour.
• Some function specifications are inaccurate, or sometimes even missing.

6.2.2 Debugging

Many control windows are offered during this phase. Real-time variables can be observed during a step-by-step execution or live during runtime. The second option was very useful for observing the behaviour of the robot during the balancing without having to log data from the run. The combined use of off-line analysis of logged data and on-line variable watching in the debugger windows provides a good view of the robot behaviour and sensor values, though not as convenient as live logging certain variables in MATLAB would have been, which we tried but were unable to implement, see section 5.4.

6.2.3 OS and Tasks

In the RobotC specification of the brick's firmware there is some documentation of how multithreading in the OS works. The scheduler uses Round Robin with assigned priorities, where the threads are commonly called tasks.

The task control functions are very important; a task can:
• start or stop another task.
• give up its time slice.
• dynamically set its own priority.

6.3 Limitations due to lack of time

When making our initial plans for how the project assignment should be solved, we were very optimistic in our estimation of how much time it would take to derive a model of the system ourselves. Half of the work force was employed for several weeks on developing a model, which was ultimately scrapped in favour of the NXTway-GS model. In retrospect it would have been wiser to skip the modelling altogether and use an already available model. This would have saved us some time, which could have been put to good use elsewhere.
6.3.1 Kalman filter estimations not good enough

We made rapid progress during the last few weeks of the project, but in the end we really ran out of time and were unable to implement all that we wished. The main thing left to complete is the Kalman filter, which at the moment is not good enough for implementation with a servo controller. This is partly connected with the issues of the system identification - with a better model we think we would be able to build a better filter. The next step after this would be to implement the full three-dimensional model in order to improve the turning capabilities of our robot.

6.3.2 Not able to perform the line tracking

We also started developing the line tracking algorithm very early. Our project group consisted of four people who had studied some control theory and one person with a background in computer programming. Since the RobotC program we designed had a fairly simple structure and no advanced algorithms were needed, our programmer soon ran out of things to do. So he was put to the task of making the robot follow the line. In the end we were not able to implement the line following, because the servo controller of our robot does not work and it cannot balance while moving.

6.3.3 Better evaluation of sensor readings

With more time available we would like to find some better way of setting the initial values of the EOPD sensor. It should be possible to develop a good method for finding all needed settings online, without the need to manually tune the parameters. We tried some solutions but none of them worked in a satisfactory way.

6.4 Future work

For future projects we do not recommend the EOPD sensors for balancing robots. While we do think it is possible to build a very stable robot, far better than ours, there are quite a few unavoidable problems which make them less suitable than the gyro sensor. Using light sensors requires the surface to be of uniform colour, and the robot will have difficulty balancing when the surface is not perfectly flat. These problems do not arise when working with a gyro sensor.

At one point we had an idea of using the standard NXT light sensors with a Kalman filter and state feedback for balancing. These sensors do have a lower resolution compared to the EOPD sensors, but they are much less costly as they come included with the standard box. If this solution worked it would make a cheap setup for lab experiments. However, since the Kalman filter performed rather poorly, we never attempted an implementation with the standard light sensors.
Acknowledgements

We would like to thank the people working with the course for providing great help, extensive availability, equipment procurement and superb feedback on the project: Alexander Medvedev, Romeo Buenaflor, Karl Marklund and Egi Hidayat.
References

[1]: Segway.com <http://segway.com/>
[2]: Trevor Blackwell <http://www.tlb.org/>
[3]: nBot Balancing Robot <http://geology.heroy.smu.edu/~dpa-www/robo/nbot/>
[4]: Steve's LegWay <http://www.teamhassenplug.org/robots/legway/>
[5]: Ryo's Holiday <http://web.mac.com/ryo_watanabe>
[6]: NXTway-GS <http://www.mathworks.com/matlabcentral/fileexchange/19147>
[7]: PID vs. Linear Control <http://home.earthlink.net/~ltrammell/tech/pidvslin.htm>
[8]: Bluetooth adapter <http://www.mindstorms.rwth-aachen.de/trac/wiki/BluetoothAdapter>
Appendix A – C code

#pragma config(Sensor, S2, EOPDSensor_front, sensorHighSpeed)
#pragma config(Sensor, S3, EOPDSensor_back,  sensorHighSpeed)
#pragma config(Sensor, S4, touchSensor1,     sensorTouch)
#pragma config(Motor,  motorA, mot_A, tmotorNormal, openLoop, )
#pragma config(Motor,  motorB, mot_B, tmotorNormal, openLoop, )
//*!!Code automatically generated by 'ROBOTC' configuration wizard!!*//

void initiate();
void balance();
float controller();
float get_angle();

// Kalman filter and LQG controller parameters, generated by MATLAB
const float A_kalman[4][4] = { 0.62306648,  0.85921357,  0.00441865,  0.00553713,
                               0.08679139,  0.55889584,  0.00194059,  0.00808854,
                              -0.56002930, -0.45173857,  0.27036566,  0.71947306,
                               0.05872239, -0.57039578,  0.25410188,  0.75356152};
const float B_kalman[4][4] = { 0.00542480,  0.00542480,  0.37693352, -0.86937486,
                              -0.00188616, -0.00188616, -0.08679139,  0.44876757,
                               0.70916911,  0.70916911,  0.56002930, -0.66609073,
                              -0.24697468, -0.24697468, -0.05872239,  1.78653831};
const float C_kalman[4][4] = { 0.66520288,  0.73866618,  0.00000000,  0.00000000,
                               0.07386662,  0.61037187,  0.00000000,  0.00000000,
                             -16.13045327, 53.45637964,  1.00000000,  0.00000000,
                               5.39792503,-19.76754552,  0.00000000,  1.00000000};
const float D_kalman[4][4] = { 0.00000000,  0.00000000,  0.33479712, -0.73866618,
                               0.00000000,  0.00000000, -0.07386662,  0.38962813,
                               0.00000000,  0.00000000, 16.13045327,-53.45637964,
                               0.00000000,  0.00000000, -5.39792503, 19.76754552};

const float L_theta     = -0.00000056;
const float L_psi       = -32.23874313;
const float L_theta_dot = -1.00138809*0.3;
const float L_psi_dot   = -2.96306334;

const float dt = 0.010;        // Sampling time of system (10 ms)

float equi = 1.4;              // Equilibrium angle of the robot, needed to adjust the psi value;
                               // psi = 0 when the robot is perpendicular to the surface
float u_front = 0;             // Sensor value read from the EOPD sensor in front
float u_back = 0;              // Sensor value read from the EOPD sensor in back
float motor_signal = 0;        // Signal power to be sent to motors
float psi = 0;                 // Body pitch (tilt) angle
float theta = 0;               // Wheel angle
float psi_1 = 13;              // Body pitch angle for front measurements during initiate
float psi_2 = -19;             // Body pitch angle for back measurements during initiate
float sens_front = 0;          // Constants used for mapping sensor readings to the psi angle
float sens_back = 0;
float x_est[4]    = {0,0,0,0}; // Kalman filter state estimates
float x_prev[4]   = {0,0,0,0}; // Kalman filter state estimates from previous time step
float u_kalman[4] = {0,0,0,0}; // Input signal to Kalman filter
float x_new[4]    = {0,0,0,0}; // Updated states
float V_max_full = 7740;       // Maximum voltage when battery is fully charged (in mV)
float V_max_current = 0;       // Current maximum voltage
float battery_scale = 1;       // Scale the motor signals with respect to battery level
//MAIN
//---------------------------------------------
task main() {
/* main() executes the initiate function and then enters the main loop. */

  // Calculate battery scaling from current maximum voltage
  V_max_current = nAvgBatteryLevel;
  battery_scale = V_max_full / V_max_current;

  // Execute initiate to calibrate the sensors->angle mapping
  initiate();

  // Wait for button to be pressed before starting main loop
  eraseDisplay();
  nxtDisplayTextLine(1, "Waiting");
  while(SensorValue(touchSensor1) == 0);
  wait1Msec(1000);
  eraseDisplay();
  nxtDisplayTextLine(1, "Running");

  // Read body pitch angle to give the Kalman filter a good start guess
  // if the robot is not held upright at start
  x_prev[1] = get_angle();

  // Run main loop until touch sensor pressed
  while(SensorValue(touchSensor1) == 0) {
    ClearTimer(T1);
    balance();                      // Balance the robot
    while(time1[T1] < (dt*1000));   // Stay idle until next time slice
  }

  // Save data log
  SaveNxtDatalog();
}
//---------------------------------------------

//INITIATE
//---------------------------------------------
void initiate() {
/* initiate() takes two readings at known psi angles from the two EOPD
   sensors, and from these values evaluates two constants used for the
   sensors->psi angle mapping. The first position is when the robot is
   tilted forward psi_1 degrees and for the second it is tilted backwards
   psi_2 degrees. */

  nxtDisplayCenteredTextLine(1, "When ready,");
  nxtDisplayCenteredTextLine(3, "press button for");
  nxtDisplayCenteredTextLine(5, "front measurement");

  // Wait for button to be pressed before taking first measurement
  while(SensorValue(touchSensor1) == 0);
  wait1Msec(1000);
  float u_front_1 = exp(0.33333*log(1023-SensorValue(EOPDSensor_front)));
  float u_back_1  = exp(0.33333*log(1023-SensorValue(EOPDSensor_back)));

  nxtDisplayCenteredTextLine(1, "When ready,");
  nxtDisplayCenteredTextLine(3, "press button for");
  nxtDisplayCenteredTextLine(5, "back measurement");

  // Wait for button to be pressed before taking second measurement
  while(SensorValue(touchSensor1) == 0);
  wait1Msec(1000);
  float u_front_2 = exp(0.33333*log(1023-SensorValue(EOPDSensor_front)));
  float u_back_2  = exp(0.33333*log(1023-SensorValue(EOPDSensor_back)));

  float det = 1/(u_front_1*u_back_2 - u_front_2*u_back_1);
  // Evaluate the two constants used for the sensors->psi mapping
  sens_front = det*(u_back_2*psi_1 - u_back_1*psi_2);
  sens_back  = det*(u_front_1*psi_2 - u_front_2*psi_1);
}
//---------------------------------------------

//BALANCE
//---------------------------------------------
void balance() {
/* balance() essentially reads the control signal and sends power to the
   motors accordingly. */

  // Get new motor signal from controller
  motor_signal = controller();

  // Clip signal if it exceeds max or min values
  if (motor_signal < -100)
    motor_signal = -100;
  else if (motor_signal > 100)
    motor_signal = 100;

  // Send motor signals to motors; the signal sent to motor A is corrected
  // by a simple P controller to make the robot move straight
  motor[mot_A] = battery_scale*(motor_signal + (1.2*(nMotorEncoder[mot_B]-nMotorEncoder[mot_A])));
  motor[mot_B] = battery_scale*motor_signal;
}
//---------------------------------------------

//GET_ANGLE
//---------------------------------------------
float get_angle() {
/* get_angle() reads the current sensor values from the two EOPD sensors
   and maps these to a psi angle. */

  u_front = 1023 - SensorValue(EOPDSensor_front);
  u_back  = 1023 - SensorValue(EOPDSensor_back);
  psi = sens_front*exp(0.33333*log(u_front)) + sens_back*exp(0.33333*log(u_back));

  // Correct psi for robot equilibrium angle
  return psi - equi;
}
//---------------------------------------------

//CONTROLLER
//---------------------------------------------
float controller() {
/* controller() calculates the Kalman filter state estimates and returns
   the LQ control signal to the caller. */

  // Get psi (body pitch angle)
  psi = get_angle();

  // Get theta (wheel angle)
  theta = 0.5*(nMotorEncoder[motorA] + nMotorEncoder[motorB]) + psi;

  // Set input signals to Kalman filter
  u_kalman[0] = motor_signal;
  u_kalman[1] = motor_signal;
  u_kalman[2] = theta;
  u_kalman[3] = psi;

  // Reset old variables
  for (int i=0;i<4;i++) {
    x_new[i] = 0;
    x_est[i] = 0;
  }
  // x_est = C*x_prev + D*u
  for (int r=0;r<4;r++)
    for (int k=0;k<2;k++)
      x_est[r] += x_prev[k]*C_kalman[r][k];
  x_est[2] += x_prev[2];
  x_est[3] += x_prev[3];
  for (int r=0;r<4;r++)
    for (int k=2;k<4;k++)
      x_est[r] += u_kalman[k]*D_kalman[r][k];

  // x_prev = A*x_prev + B*u
  for (int r=0;r<4;r++)
    for (int k=0;k<4;k++)
      x_new[r] += x_prev[k]*A_kalman[r][k] + u_kalman[k]*B_kalman[r][k];
  for (int i=0;i<4;i++)
    x_prev[i] = x_new[i];

  // Uncomment these lines if a log is needed. The data need to be corrected
  // accordingly when read in MATLAB.
  /*AddToDatalog(0,(int)(theta)+10000);
  AddToDatalog(0,(int)(x_est[0])+10000);
  AddToDatalog(0,(int)(psi*100)+10000);
  AddToDatalog(0,(int)(x_est[1]*100)+10000);
  AddToDatalog(0,(int)(x_est[2]*10)+10000);
  AddToDatalog(0,(int)(x_est[3]*10)+10000);*/

  // Return LQ control signal
  return -(L_theta*x_est[0] + L_psi*x_est[1] + L_theta_dot*x_est[2] + L_psi_dot*x_est[3]);
}
//---------------------------------------------
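As a reading aid for controller() above (our own summary in matrix notation, not part of the original code): the two loop blocks implement the steady-state Kalman filter generated in Appendix B together with the LQ feedback, using MATLAB's kalman() convention where the filter input stacks the control signals and the measurements,

\[
\hat{x}_{k|k} = C\,x_{k|k-1} + D\,u_k, \qquad
x_{k+1|k} = A\,x_{k|k-1} + B\,u_k,
\]
\[
u_k = \begin{pmatrix} u^{\mathrm{motor}}_{k-1} & u^{\mathrm{motor}}_{k-1} & \theta_k & \psi_k \end{pmatrix}^{T}, \qquad
u^{\mathrm{motor}}_k = -\bigl(L_{\theta}\hat{\theta} + L_{\psi}\hat{\psi} + L_{\dot{\theta}}\hat{\dot{\theta}} + L_{\dot{\psi}}\hat{\dot{\psi}}\bigr).
\]

Similarly, initiate() fits the sensor-to-angle map \(\psi = s_f\,(1023 - r_f)^{1/3} + s_b\,(1023 - r_b)^{1/3}\): the two poses at the known angles \(\psi_1 = 13\) and \(\psi_2 = -19\) degrees give two linear equations in \((s_f, s_b)\), which the code solves by Cramer's rule (the det factor).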
Appendix B – Matlab code

clc
clear all

%% NXTway-GS Parameters and State-Space Matrix Calculation %%

% Physical Constant
g = 9.82;        % gravity acceleration [m/sec^2]

% NXTway-GS Parameters
m = 0.017;       % wheel weight [kg]
R = 0.028;       % wheel radius [m]
Jw = 6.7e-5;     % wheel inertia moment [kgm^2]
M = 0.57;        % body weight [kg]
W = 0.14;        % body width [m]
D = 0.06;        % body depth [m]
H = 0.144;       % body height [m]
L = 0.08;        % distance of the center of mass from the wheel axle [m]
Jpsi = 0.5e-3;   %M*L^2/3 %3e-3;  % body pitch inertia moment [kgm^2]
Jphi = 9.67e-4;  % body yaw inertia moment [kgm^2]
fm = 0.0022;     % friction coefficient between body & DC motor
fw = 0;          % friction coefficient between wheel & floor

% DC Motor Parameters
Jm = 1e-6;       % DC motor inertia moment [kgm^2]
Rm = 6.69;       % DC motor resistance [Ohm]
Kb = 0.468;      % DC motor back EMF constant [Vsec/rad]
Kt = 0.317;      % DC motor torque constant [Nm/A]
n = 1;           % gear ratio

% NXTway-GS State-Space Matrix Calculation
alpha = n * Kt / Rm;
beta = n * Kt * Kb / Rm + fm;
tmp = beta + fw;

E_11 = (2 * m + M) * R^2 + 2 * Jw + 2 * n^2 * Jm;
E_12 = M * L * R - 2 * n^2 * Jm;
E_22 = M * L^2 + Jpsi + 2 * n^2 * Jm;
detE = E_11 * E_22 - E_12^2;

A1_32 = -g * M * L * E_12 / detE;
A1_42 = g * M * L * E_11 / detE;
A1_33 = -2 * (tmp * E_22 + beta * E_12) / detE;
A1_43 = 2 * (tmp * E_12 + beta * E_11) / detE;
A1_34 = 2 * beta * (E_22 + E_12) / detE;
A1_44 = -2 * beta * (E_11 + E_12) / detE;
B1_3 = alpha * (E_22 + E_12) / detE;
B1_4 = -alpha * (E_11 + E_12) / detE;

A1 = [ 0 0     1     0
       0 0     0     1
       0 A1_32 A1_33 A1_34
       0 A1_42 A1_43 A1_44 ];
B1 = [ 0    0
       0    0
       B1_3 B1_3
       B1_4 B1_4 ];
C1 = eye(4);
D1 = zeros(4, 2);

I = m * W^2 / 2 + Jphi + (Jw + n^2 * Jm) * W^2 / (2 * R^2);
J = tmp * W^2 / (2 * R^2);
K = alpha * W / (2 * R);
A2 = [ 0 1
       0 -J / I ];
B2 = [ 0      0
       -K / I K / I ];
C2 = eye(2);
D2 = zeros(2);

A = [[A1 zeros(4,2)]; [zeros(2,4) A2]];
B = [B1; B2];
C = [[C1 zeros(4,2)]; [zeros(2,4) C2]];
D = [D1; D2];

Ts = 0.010;               % sampling time
sys_c = ss(A1,B1,C1,D1);  % continuous system
sys_d = c2d(sys_c,Ts);    % time-discrete system

%% Kalman filter calculations %%
Ck = C1(1:2,:);           % only two first states (psi and theta) measurable
Dk = zeros(2,4);
plant = ss(A1,[B1 B1],Ck,Dk,'InputName',{'u1','u2','w1','w2'},...
           'OutputName',{'theta','psi'});
plant_d = c2d(plant,Ts);  % discrete state-space model of plant

Q = [50 0;                % process noise
     0 50];
R = [0.01 0;              % measurement noise
     0 0.001];

[kalmf,L,P,M,Z] = kalman(plant_d,Q,R);  % create steady-state Kalman filter
kalmf = kalmf(3:end,:)    % only interested in outputting plant states
                          % (last four states in Kalman filter)

%% LQG regulator computation %%
Ql = [1 0 0 0; 0 1e6 0 0; 0 0 1 0; 0 0 0 1];
Rl = [1e12 0; 0 1e12];
[X_,L_,LQG] = dare(sys_d.a,sys_d.b,Ql,Rl);  % compute LQG parameters

% %% LQG with servo
% Ck = C1(1:2,:);         % only two first states (psi and theta) measurable
% Dk = zeros(2,2);
% plant = ss(A1,B1,Ck,Dk,'InputName',{'u1','u2'},'OutputName',{'theta','psi'});
% plant_d = c2d(plant,Ts);  % discrete state-space model of plant
%
% nx = 4;                 % Number of states
% ny = 2;                 % Number of outputs
% Q = 0.5*eye(nx);
%
% Ql = [1 0 0 0; 0 1e9 0 0; 0 0 1 0; 0 0 0 1];
% Rl = [1e12 0; 0 1e12];
%
% R = [1 0;               % measurement noise
%      0 0.1];
% QXU = blkdiag(Ql,Rl);
% QWV = blkdiag(Q,R);
% QI = [1 1e2; 1e2 1e6];
%
% KLQG = lqg(plant_d,QXU,QWV,QI)

%% Generate C code for Kalman & LQG parameters
fprintf(1,'const float A_kalman[%d][%d] = {',[size(kalmf.a,1),size(kalmf.a,2)]);
for i=1:size(kalmf.a,1)
    for j=1:size(kalmf.a,2)
        fprintf(1,'%0.8f, ',kalmf.a(i,j));
    end
    if i~=size(kalmf.a,1)
        fprintf(1,'\n ');
    else
        fprintf(1,'\b\b};\n\n');
    end
end

fprintf(1,'const float B_kalman[%d][%d] = {',[size(kalmf.b,1),size(kalmf.b,2)]);
for i=1:size(kalmf.b,1)
    for j=1:size(kalmf.b,2)
        fprintf(1,'%0.8f, ',kalmf.b(i,j));
    end
    if i~=size(kalmf.b,1)
        fprintf(1,'\n ');
    else
        fprintf(1,'\b\b};\n\n');
    end
end

fprintf(1,'const float C_kalman[%d][%d] = {',[size(kalmf.c,1),size(kalmf.c,2)]);
for i=1:size(kalmf.c,1)
    for j=1:size(kalmf.c,2)
        fprintf(1,'%0.8f, ',kalmf.c(i,j));
    end
    if i~=size(kalmf.c,1)
        fprintf(1,'\n ');
    else
        fprintf(1,'\b\b};\n\n');
    end
end

fprintf(1,'const float D_kalman[%d][%d] = {',[size(kalmf.d,1),size(kalmf.d,2)]);
for i=1:size(kalmf.d,1)
    for j=1:size(kalmf.d,2)
        fprintf(1,'%0.8f, ',kalmf.d(i,j));
    end
    if i~=size(kalmf.d,1)
        fprintf(1,'\n ');
    else
        fprintf(1,'\b\b};\n\n');
    end
end

fprintf(1,'const float L_theta = %0.8f;\n',LQG(1,1));
fprintf(1,'const float L_psi = %0.8f;\n',LQG(1,2));
fprintf(1,'const float L_theta_dot = %0.8f;\n',LQG(1,3));
fprintf(1,'const float L_psi_dot = %0.8f;\n',LQG(1,4));
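For reference, the gain returned as the third output of dare above is the standard steady-state LQ feedback (documented MATLAB behaviour, not something specific to our script):

\[
G = (B^{T}XB + R_l)^{-1}B^{T}XA,
\]

where X is the stabilising solution of the discrete-time algebraic Riccati equation

\[
X = A^{T}XA - A^{T}XB\,(B^{T}XB + R_l)^{-1}B^{T}XA + Q_l.
\]

The control law is u(k) = -G x̂(k); the four entries of G are the constants exported to Appendix A as L_theta, L_psi, L_theta_dot and L_psi_dot.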
Appendix C – Model blueprints

Building the BUCK model

Figure 1: Step 1

Step 1: Attach one angular beam 3x7 to each side of the NXT brick. The beams are needed to fit the motors to the brick, but they can easily be replaced by beams of just about any shape, as long as the three bottom pegs stay in place.
Figure 2: Step 2

Step 2: Attach the wheels to the motors and the motors to the brick. Also place a touch sensor on one of the angular beams (or another suitable place).
Figure 3: Step 3

Figure 4: Step 4

Steps 3 & 4: This construction will hold the front light sensor in place. The purpose of the 90-degree angular beams is to allow the robot to stand at an angle of about 15 degrees in order to sample the light intensity at this angle. The 45-degree angular beams will hold a light cover to reduce the amount of exterior light affecting the sensor.
Figure 5: Step 5

Step 5: Place the light sensor at the front. You might need to detach the motors to perform this step.
Figure 6: Step 6

Figure 7: Step 7
Figure 8: Step 8

Steps 6, 7 & 8: This construction will hold the back light sensor in place. The extended 90-degree and the 45-degree angular beams serve the same purposes as their corresponding parts at the front.
Figure 9: Step 9
Figure 10: Step 10

Steps 9 & 10: Attach the back light sensor to the NXT brick. All that's left now is to attach the sensor and motor cables.
Appendix D – Poetry

The page of mysteries

A robot on wheels
Balances as it pleases
Understands nothing
LQ is tricky
Kalman is of course always wrong
Buck, he is too fat

(The Italian version:)

A robot on wheels
steady, yet grasping nothing
LQ is complicated
Kalman is of course always wrong
Buck, he is thick

By: Bruno K. Öijer

He stood up at last
Though rather ugly
Should probably have said toot
The Lego head is yellow...
...(with the exception of the basketball Lego and the natives...)

By: Legodonatorn

A robot on wheels, by name, was so glad
It wrote a report on itself every day
gliding around on wheels
is insanely fun
a poetry appendix I want to have!

By: Limerickmannen

Let's take a ride
on Iron Buck's shoulders
he has more gold
than anyone in this world

By: Lilliput
