Final ppt


  1. NON-LINEARITY COMPENSATION OF STRAIN GAUGE BY USING SOFT COMPUTING TECHNIQUES
  2. OBJECTIVE:
     • To study the nonlinearity problem of the strain gauge and suggest novel methods of circumventing the effect of this problem.
     • Nonlinearity compensation of the strain gauge using different ANN techniques.
  3. INTRODUCTION
     SENSORS: Sensors are devices which, for the purpose of measurement, turn physical input quantities into electrical output signals, their output-input and output-time relationships being predictable to a known degree of accuracy at specified environmental conditions. A sensor can also be defined as "a device which provides a usable output in response to a specified measurand", or as an element that senses a variation in input energy to produce a variation in another (or the same) form of energy. A transducer, in contrast, involves a transduction principle which converts a specified measurand into a usable output.
     Sensor characteristics fall into two general classes:
     • STATIC: accuracy, precision (repeatability), resolution, minimum detectable signal, nonlinearity, hysteresis, threshold.
     • DYNAMIC: fidelity, speed of response (involves determination of the transfer function and frequency response).
  4. The sensor consists of several elements or blocks: a sensing element, a signal conditioning element, a signal processing element and a data presentation element.
     [Figure: general structure of a sensor]
     • Accuracy: degree of closeness of the value measured by the instrument to the exact value of the quantity.
     • Precision: degree of reproducibility of measurements.
     • Resolution: minimum change in input value that can be detected by the instrument.
     • Threshold: the smallest input change that produces a detectable output when the measurand is at zero.
     • Sensitivity: the ratio of the incremental output (Δy) to the incremental input (Δx), i.e. S = Δy/Δx.
  5. Nonlinearity: the deviation from linearity, which itself is defined in terms of the superposition principle, expressed as a percentage of the full-scale output at a given value of the input. Nonlinearity is commonly specified as:
     • the deviation from a straight line joining the end points of the scale.
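As a worked example of the endpoint-line definition above, the sketch below computes nonlinearity as a percentage of full-scale output. The sensor curve here is a hypothetical mildly quadratic response, not measured data:

```python
# Sketch: endpoint-line nonlinearity as % of full-scale output.
# The sensor curve below is an illustrative assumption.

def endpoint_nonlinearity(inputs, outputs):
    """Max deviation from the straight line joining the end points,
    expressed as a percentage of full-scale output."""
    x0, x1 = inputs[0], inputs[-1]
    y0, y1 = outputs[0], outputs[-1]
    slope = (y1 - y0) / (x1 - x0)          # slope of the endpoint line
    line = [y0 + slope * (x - x0) for x in inputs]
    full_scale = y1 - y0
    worst = max(abs(y - l) for y, l in zip(outputs, line))
    return 100.0 * worst / full_scale

xs = [i / 10 for i in range(11)]           # input, 0..1
ys = [x + 0.05 * x * (1 - x) for x in xs]  # mildly nonlinear response
print(endpoint_nonlinearity(xs, ys))       # -> 1.25 (% of full scale)
```

The worst deviation occurs mid-scale, which is typical when the response bows away from the endpoint line.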
  6. The nonlinearity problem gives rise to the following difficulties:
     (i) Inaccuracy in measurement
     (ii) Limitation of the dynamic range (linear region)
     (iii) The full potential of the sensor cannot be utilized.
     The nonlinearity problem arises due to:
     (i) Environmental changes such as changes in temperature, humidity and atmospheric pressure
     (ii) Aging
     (iii) Constructional limitations.
     FIDELITY: the degree to which the instrument indicates the change in the measured value without dynamic error.
     SPEED OF RESPONSE: the rapidity with which an instrument responds to a change in the measured quantity.
  7. Adaptive and intelligent methods for compensation of nonlinearities have been proposed. These methods are based on the following structures:
     (i) ADALIN: Adaptive Linear Neuron (single-neuron network)
     (ii) MLP: Multi-Layer Perceptron
     (iii) RBFNN: Radial Basis Function based Neural Network
     (iv) ANFIS: Adaptive Neuro-Fuzzy Inference System
     The learning algorithms employed in the thesis are:
     (i) LMS algorithm in ADALIN and ANFIS
     (ii) BP algorithm in MLP
     (iii) RBF learning algorithm
  8. Strain gauge
     The strain gauge is a passive electrical transducer. It gives a variation in electrical resistance between its two terminals as the effect of strain on the sensor (gage) when an external force is applied. If a metal conductor is stretched or compressed, a change in its resistance occurs due to the change in its diameter and length. A change in its resistivity can also be observed when it is subjected to strain; this property is called the piezoresistive effect. Thus the resistive strain gage is also known as a piezoresistive gage. Since the resistance of a conductor is directly proportional to its length, the resistance of the gage increases with positive strain. Usually, the wire gage consists of a resistance made of very fine wire securely bonded to the member to be strained, in our case the balance. When a force is applied to the balance, the wire is put either in tension or in compression. The resistance change of a wire when a force is applied is due in part to a change in length and cross-section, and in part to an actual change in specific resistance.
  9. Principle of Strain Measurement
     The strain-initiated resistance change is extremely small. Thus, for strain measurement, a Wheatstone bridge is formed to convert the resistance change into a voltage change.
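A minimal sketch of this resistance-to-voltage conversion, assuming a quarter-bridge configuration (one active gage, three fixed resistors) with illustrative values R = 350 Ω and a 5 V supply:

```python
# Sketch: quarter Wheatstone bridge (one active gage, three fixed arms R).
# R = 350 ohm and Vs = 5 V are illustrative assumptions.

def bridge_output(dR, R=350.0, Vs=5.0):
    """Exact differential output of a quarter bridge for a gage change dR."""
    return Vs * ((R + dR) / (2 * R + dR) - 0.5)

def bridge_linear(dR, R=350.0, Vs=5.0):
    """Common small-signal approximation Vo ~= Vs * dR / (4R)."""
    return Vs * dR / (4 * R)

dR = 0.7  # ohms, i.e. a ~0.2% resistance change
print(bridge_output(dR), bridge_linear(dR))
```

Note that the exact output falls slightly below the linear approximation; the bridge itself contributes a small nonlinearity for larger dR, which is one motivation for the compensation schemes discussed later.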
  10. The gage factor, K, of a strain gage relates the change in resistance (ΔR) to the change in length (ΔL). The gage factor is constant for a given strain gage, and R is the undeformed resistance of the strain gage, so
     K = (ΔR/R) / (ΔL/L)
     The gage factor is used to measure the sensitivity of a material to strain.
     Causes of nonlinearities in a strain gage:
     • The major source of error comes from the fact that the resistance of most wires changes with temperature.
     • Error in designing the resistance measurement circuit.
     A Wheatstone bridge circuit can provide a solution to both effects. Wheatstone bridges are commonly used with strain gage setups to measure small changes in resistance. These bridges are inherently insensitive to supply voltage fluctuations.
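The gage-factor relation can be rearranged to predict the resistance change for a given strain, ΔR = K · (ΔL/L) · R. The sketch below assumes K = 2.0 (a typical value for metallic gages) and an illustrative 350 Ω gage:

```python
# Sketch of the gage-factor relation K = (dR/R) / (dL/L).
# K = 2.0 and R = 350 ohm are illustrative assumptions.

def delta_R(strain, K=2.0, R=350.0):
    """Resistance change produced by a given strain (dL/L)."""
    return K * strain * R

strain = 500e-6           # 500 microstrain
print(delta_R(strain))    # -> 0.35 (ohms)
```

A change of only 0.35 Ω on a 350 Ω gage illustrates why the bridge conversion on the previous slide is needed.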
  11. Nonlinearity compensation of the strain gage: In the nonlinearity compensation scheme used in our project, we apply force (strain) to a strip (whose strain is to be measured) manually, using a micrometer. This force produces elongation on one side and compression on the other side. The differential voltage of the strain gage, after being demodulated, does not keep a linear relationship with the displacement. The nonlinearity compensator can be developed by using different ANN techniques such as ADALIN, MLP, RBF-NN and ANFIS. The output of the ANN based nonlinearity compensator is compared with the desired signal (the displacement signal given by the micrometer) to produce an error signal. With this error signal, the weight vectors of the ANN model are updated. This process is repeated until the mean square error (MSE) is minimized. Once the training is complete, the strain gage together with the ANN model acts like a linear strain gage with enhanced dynamic range. In our project we have used an RBFNN to compensate the nonlinearity of the strain gage.
  12. The ANN based adaptive nonlinear compensator can be developed by:
     • MLP
     • FLANN
     • Cascaded FLANN (CFLANN)
     ADALIN Based Nonlinearity Compensation
     The differential or demodulated voltage e at the output of the LVDT is normalized by dividing each value by the maximum value. The normalized voltage output e is subjected to functional expansion and then input to the single-neuron perceptron based nonlinearity compensator. The output of the ADALIN based nonlinearity compensator is compared with the normalized input displacement of the LVDT. The Widrow-Hoff algorithm, in which both learning rates are chosen as 0.07, is used to adapt the weights of the neuron. Applying various input patterns, the ANN weights are updated using the Widrow-Hoff algorithm. To enable complete learning, 1000 iterations are made. Then the weights of the neurons are frozen and stored in memory. During the testing phase, the frozen weights are used in the ADALIN model.
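The scheme above can be sketched as follows. The learning rate 0.07 and the 1000 training iterations come from the slide; the cubic functional expansion and the synthetic sensor curve are illustrative assumptions:

```python
# Sketch: single-neuron ADALIN compensator trained with the Widrow-Hoff
# (LMS) rule on synthetic data. mu = 0.07 and 1000 epochs follow the
# slide; the cubic expansion and sensor curve are assumptions.

def expand(e):
    return [1.0, e, e ** 2, e ** 3]         # functional expansion of e

def neuron(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(samples, mu=0.07, epochs=1000):
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        for e, d in samples:
            x = expand(e)
            err = d - neuron(w, x)          # error vs desired displacement
            w = [wi + mu * err * xi for wi, xi in zip(w, x)]  # LMS update
    return w

# Synthetic nonlinear sensor: displacement d -> normalized voltage e.
ds = [i / 20 for i in range(21)]
es = [(d + 0.3 * d ** 3) / 1.3 for d in ds]
w = train(list(zip(es, ds)))
mse = sum((d - neuron(w, expand(e))) ** 2 for e, d in zip(es, ds)) / len(ds)
print("trained MSE:", mse)
```

After training, feeding the sensor voltage through the frozen weights yields an approximately linear voltage-to-displacement mapping.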
  13. The operation in a neuron involves the computation of the weighted sum of the inputs and a threshold. The resultant signal is then passed through a nonlinear activation function. This is also called a perceptron, which is built around a nonlinear neuron, whereas the LMS algorithm described in the preceding sections is built around a linear neuron. The output of the neuron may be represented as
     y(n) = φ( Σ_{j=1..N} w_j(n) x_j(n) + α )
     where α is the threshold to the neurons at the first layer, w_j(n) is the weight associated with the jth input, N is the number of inputs to the neuron and φ(.) is the nonlinear activation function. Different types of nonlinear activation function are shown in the figure: (a) signum function or hard limiter, (b) threshold function, (c) sigmoid function, (d) piecewise-linear function.
  14. Signum function: φ(v) = +1 if v ≥ 0, −1 if v < 0.
     Threshold function: φ(v) = 1 if v ≥ 0, 0 if v < 0.
     Sigmoid function: φ(v) = 1 / (1 + e^(−a·v)), where v is the input to the sigmoid function and a is the slope of the sigmoid function. For steady convergence a proper choice of a is required.
     Piecewise-linear function: φ(v) = 1 if v ≥ 1/2, v + 1/2 if −1/2 < v < 1/2, 0 if v ≤ −1/2.
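The four activation functions above can be written directly in code (a = 1 is chosen for the sigmoid slope here, purely for illustration):

```python
import math

# Sketch of the four activation functions listed above.

def signum(v):
    return 1.0 if v >= 0 else -1.0

def threshold(v):
    return 1.0 if v >= 0 else 0.0

def sigmoid(v, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * v))

def piecewise_linear(v):
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v + 0.5

print(signum(-2), threshold(-2), sigmoid(0), piecewise_linear(0))
# -> -1.0 0.0 0.5 0.5
```

Note that the sigmoid is the only smooth (differentiable) function of the four, which is why it and tanh are used in the BP-trained MLP later on.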
  15. MLP Based Nonlinearity Compensation
     The differential or demodulated voltage e at the output of the LVDT is normalized by dividing each value by the maximum value. The normalized voltage output e is then input to the MLP based nonlinearity compensator. In the case of the MLP, we tried different numbers of neurons and layers; the 1-30-50-1 network is observed to perform better, hence it is chosen for simulation. Each hidden layer and the output layer contain tanh(.) type activation functions. The output of the MLP based nonlinearity compensator is compared with the normalized input displacement of the LVDT. The BP algorithm, in which the learning rate and the momentum rate are chosen as 0.01 and 1 respectively, is used to adapt the weights of the MLP. Applying various input patterns, the ANN weights are updated using the BP algorithm. To enable complete learning, 400 iterations are made. Then the weights of the various layers of the MLP are frozen and stored in memory. During the testing phase, the frozen weights are used in the MLP model.
  16. If P1 is the number of neurons in the first hidden layer, each element of the output vector of the first hidden layer may be calculated as
     f_j = φ( Σ_{i=1..N} w_ji x_i + α_j ),  j = 1, 2, …, P1
     where α_j is the threshold to the neurons of the first hidden layer, N is the number of inputs and φ(.) is the nonlinear activation function in the first hidden layer. The time index n has been dropped to make the equations simpler. Let P2 be the number of neurons in the second hidden layer. The output of this layer, f_k, may be written as
     f_k = φ( Σ_{j=1..P1} w_kj f_j + α_k ),  k = 1, 2, …, P2
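The layer equations above translate directly into a forward pass. The sketch below builds the 1-30-50-1 network mentioned earlier with tanh activations; the weights are random (untrained), so it demonstrates only the forward computation, not BP training:

```python
import math
import random

# Sketch: forward pass of the 1-30-50-1 MLP with tanh activations.
# Weights are random placeholders, not trained values.

random.seed(0)

def layer(inputs, weights, thresholds):
    # f_j = tanh( sum_i w_ji * x_i + alpha_j )
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + a)
            for row, a in zip(weights, thresholds)]

def make_layer(n_in, n_out, scale=0.5):
    w = [[random.uniform(-scale, scale) for _ in range(n_in)]
         for _ in range(n_out)]
    a = [random.uniform(-scale, scale) for _ in range(n_out)]
    return w, a

sizes = [1, 30, 50, 1]
params = [make_layer(sizes[i], sizes[i + 1]) for i in range(3)]

def mlp(x):
    out = [x]
    for w, a in params:
        out = layer(out, w, a)
    return out[0]

print(-1.0 < mlp(0.3) < 1.0)   # tanh output layer keeps y in (-1, 1)
```

Because the output layer also uses tanh, the network output is confined to (−1, 1), which matches the normalized displacement range used in training.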
  17. where α_k is the threshold to the neurons of the second hidden layer. The output of the final output layer can be calculated as
     y_l = φ( Σ_{k=1..P2} w_lk f_k + α_l ),  l = 1, 2, …, P3
     where α_l is the threshold to the neurons of the final layer and P3 is the number of neurons in the output layer. The output of the MLP may then be expressed as the vector y = [y_1, y_2, …, y_P3].
     2.2.3 Radial Basis Function Neural Network (RBF-NN)
     The Radial Basis Function based neural network (RBFNN) consists of an input layer made up of source nodes and a hidden layer of large dimension. The number of input and output nodes is maintained the same, and while training, the same pattern is simultaneously applied at the input and the output. The nodes within each layer are fully connected to the previous layer. The input variables are each assigned to a node in the input layer and pass directly to the hidden layer without weights. The hidden nodes contain the radial basis functions (RBFs), which are Gaussian in characteristics. Each hidden unit in the network has two parameters, a center (μ) and a width (σ), associated with it. The Gaussian function of the hidden units is radially symmetric in the input space, and the output of each hidden unit depends only on the radial distance between the input vector x and the center parameter μ for the hidden unit.
  18. Radial Basis Function neural network structure
     The Gaussian function gives the highest output when the incoming variables are closest to the center position and decreases monotonically as the distance from the center increases. Each node of the hidden layer contains the radial basis function:
     φ_k(x) = exp( −‖x − μ_k‖² / (2σ_k²) )
  19. where μ_k is the center vector for the kth hidden unit, σ_k is the width of the Gaussian function and ‖·‖ denotes the Euclidean norm.
     The parameters of the RBFNN are updated using the RBF algorithm. The RBF algorithm is an exact analytical procedure for evaluation of the first derivative of the output error with respect to the network parameters.
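The Gaussian hidden unit and the weighted-sum output can be sketched as below. The centers, widths and output weights are illustrative placeholders, not trained values:

```python
import math

# Sketch of the Gaussian hidden unit defined above; parameter values
# are illustrative assumptions, not trained ones.

def rbf(x, mu, sigma):
    """Gaussian radial basis: exp(-||x - mu||^2 / (2 sigma^2))."""
    dist2 = sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
    return math.exp(-dist2 / (2.0 * sigma ** 2))

def rbfnn(x, centers, widths, weights):
    """Network output = weighted sum of the hidden-unit responses."""
    return sum(w * rbf(x, mu, s)
               for w, mu, s in zip(weights, centers, widths))

centers = [[0.0], [0.5], [1.0]]
widths = [0.3, 0.3, 0.3]
weights = [0.2, 0.9, 0.4]

print(rbf([0.5], [0.5], 0.3))  # -> 1.0 (input exactly at the center)
```

As the slide states, the response peaks at 1 when the input coincides with the center and falls off monotonically with radial distance.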
  20. Simulation Studies
  21. Mean Square Error (MSE) Plot of RBFNN for Strain Gage Nonlinearity Compensation
     The RBFNN based nonlinearity compensator improves the linearity quite appreciably: the MSE converges to about 0.0018.
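The overall training loop behind such an MSE plot can be sketched end to end. For simplicity this sketch adapts only the output weights of a small RBFNN on a synthetic sensor curve (the slide's full RBF algorithm also adapts centers and widths, and its 0.0018 figure refers to the real measurement setup, not this toy data):

```python
import math

# Sketch: gradient (LMS-style) training of RBFNN output weights on
# synthetic sensor data. Centers/widths are fixed assumptions here.

def rbf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

centers = [0.0, 0.25, 0.5, 0.75, 1.0]
sigma = 0.25
w = [0.0] * len(centers)

# Synthetic nonlinear sensor: displacement d -> normalized voltage e.
ds = [i / 20 for i in range(21)]
es = [(d + 0.3 * d ** 3) / 1.3 for d in ds]

def out(e):
    return sum(wi * rbf(e, mu, sigma) for wi, mu in zip(w, centers))

def mse():
    return sum((d - out(e)) ** 2 for e, d in zip(es, ds)) / len(ds)

before = mse()
for _ in range(2000):
    for e, d in zip(es, ds):
        err = d - out(e)
        phi = [rbf(e, mu, sigma) for mu in centers]
        w = [wi + 0.05 * err * p for wi, p in zip(w, phi)]  # weight update
after = mse()
print(after < before)
```

Plotting `mse()` against the epoch index reproduces the shape of the convergence curve shown on the slide: a steep initial drop followed by a flat floor.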
