Neural units with higher-order synaptic operations for robotic image processing applications

K. Ravindra Reddy, M.Tech; K. Siva Chandra, M.Tech; G. Anusha, (M.Tech)
Lecturer, JNTUACEP, Pulivendula, AP, India

International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 1, January-February 2013, pp. 738-742

Abstract
Neural units with higher-order synaptic operations have good computational properties in information processing and control applications. This paper presents neural units with higher-order synaptic operations for visual image processing applications. We use the neural units with higher-order synaptic operations for edge detection and employ the Hough transform to process the edge detection results. The edge detection method based on the neural unit with higher-order synaptic operations has been applied to solve routing problems of mobile robots. Simulation results show that the proposed neural units with higher-order synaptic operations are efficient for image processing and routing applications of mobile robots.

I. Introduction
A mobile robot requires multiple sensors, such as a CCD camera and ultrasonic and infrared sensors, to locate its position, to avoid obstacles, and to find an optimal path leading to the destination. The robot fuses all the information acquired from these sensors and makes decisions to control its direction and speed. Among these sensors, the CCD camera serves as a visual sensor for the decision, especially the wide-range decision, of mobile robot routing operations. The most important task for the CCD camera and the processor is to capture and process images, and to discriminate the obstacles in the paths. Edge detection is a simple yet efficient method for mobile robot routing problems. An edge is where the vertical and the horizontal surfaces of objects meet. For a mobile robot, an edge may mean where the path ends and the wall or obstacle starts, or where the paths and obstacles intersect. Traditionally, edges have been defined loosely as pixel intensity discontinuities within an image. It is usually easy to detect edges with high contrast, or those with a high S/N ratio, but in changing light or dim circumstances edge detection may not be so easy. In the meantime, the on-board CPU should recognize the feasible paths and path junctions or crossings using the edge features.

In this paper, we address edge detection problems of vision-based routing for mobile robots using higher-order neural units, i.e., neural units with higher-order synaptic operations, as an image processing tool. Simulation results show that these proposed neural units with higher-order synaptic operations are effective and efficient for image processing and mobile robot routing operations.

II. Neural units with higher-order synaptic operation
Most neural networks employ linear neurons or neural units, which have a linear correlation between the input vector and the synaptic weight vector. In recent years, neural units with higher-order synaptic operations, and the neural networks composed of them, in which higher-order nonlinear correlation manipulations are used, have been shown to have good computational, storage, pattern recognition, and learning properties, and can be implemented in hardware. These networks satisfy the Stone-Weierstrass theorem and are hence considered to improve the approximation and generalization capabilities of the network. In the literature of adaptive neural control, neural networks are primarily used as on-line approximators for unknown nonlinearities due to their inherent approximation capabilities. However, the conventional models of neurons cannot deal with discontinuities in the input training data. In an effort to overcome such limitations of conventional neural units with linear synaptic operations, some researchers have turned their attention to neural units with higher-order synaptic operations. It is to be noted that neural units with higher-order synaptic operations contain all the linear and nonlinear correlation terms of the input components. A generalized structure of neural units with higher-order synaptic operations is a polynomial network that includes the weighted sums of products of selected input components raised to an appropriate power. This type of network is called a sigma-pi network. The synaptic operation of the sigma-pi network forms the product of the selected input components computed with the power operation, whereas conventional neural units compute the synaptic operation as a weighted sum of all the neural inputs. Since sigma-pi networks result in an exponential increase in the number of parameters, some modified forms of sigma-pi networks have been proposed that involve a smaller number of weights than the neural units with higher-order synaptic operations. One of them is the pi-sigma network, in which the synaptic operation is the product of the weighted sums of all nonlinear correlation terms of the input components. Another type of higher-order neural network is the ridge polynomial neural network. However, the problem encountered with these neural networks is the combinatorial increase of the number of weights; that is, as the input size increases, the number of weights in higher-order neural networks increases exponentially. Some papers proposed higher-order neural networks based on Hopfield-type or recurrent neural networks, which have been used for solving various optimization problems and thus are not discussed in this paper.
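To make the contrast between a linear synaptic operation and a sigma-pi style higher-order synaptic operation concrete, the following Python sketch is offered. It is not code from the paper; the function names, the random weights, and the choice of second-order index combinations are illustrative assumptions only.

```python
import numpy as np
from itertools import combinations_with_replacement

def linear_synapse(x, w):
    """Conventional linear synaptic operation: a weighted sum of the inputs."""
    return np.dot(w, x)

def sigma_pi_synapse(x, weights, index_sets):
    """Sigma-pi style higher-order synaptic operation: a weighted sum of
    products of selected input components (index_sets lists which components
    are multiplied together for each weight)."""
    z = 0.0
    for w, idx in zip(weights, index_sets):
        z += w * np.prod(x[list(idx)])
    return z

# Inputs augmented with the constant bias x0 = 1
x = np.array([1.0, 0.5, -0.2])                       # [x0, x1, x2]
# All first- and second-order combinations (illustrative choice of terms)
index_sets = list(combinations_with_replacement(range(len(x)), 2))
rng = np.random.default_rng(0)
weights = rng.normal(size=len(index_sets))

print("linear  :", linear_synapse(x, weights[:len(x)]))
print("sigma-pi:", sigma_pi_synapse(x, weights, index_sets))
```

The point of the sketch is only that the higher-order unit accumulates products of input components, so cross-correlation terms such as x1*x2 enter the synaptic sum directly rather than being left to later layers.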
2.1 Neural units with quadratic synaptic operation
The synaptic operation of the neural unit with quadratic synaptic operation embraces both the first- and second-order combinations of the neural inputs with the synaptic weights. A neural unit with quadratic synaptic operation and with n scalar inputs is defined as follows:

z = sum_{i=0..n} sum_{j=i..n} w_ij x_i x_j = w00 x0 x0 + w01 x0 x1 + ... + w_n(n-1) x_n x_(n-1) + w_nn x_n x_n,  (1)

where w00 is the threshold (bias) weight and x0 = 1 is the constant bias. If we consider the n scalar neural inputs and the constant bias input x0 = 1 as an augmented (n + 1)-dimensional input vector defined by x_a = [1, x1, ..., xn]^T in R^(n+1), Eq. (1) can be expressed as z = x_a^T W x_a, where W is the augmented synaptic weight matrix of the neural unit with quadratic synaptic operation. The weights w_ij and w_ji, i, j in {0, 1, 2, ..., n}, in the augmented matrix W yield the same quadratic term x_i x_j or x_j x_i. Therefore, an upper triangle or a lower triangle of the augmented weight matrix W is sufficient to describe the discriminant function that assigns a measurement value to every point in the input space; the upper triangle matrix keeps the weights w_ij with j >= i and sets the remaining entries to zero. The conventional neural units with linear synaptic operation are a subset of the neural unit with quadratic synaptic operation. For example, the first row of the weight matrix, W = [w00 w01 w02 ... w0n] in R^(n+1), produces the weighted linear combination of the neural inputs. The output of the neural unit with quadratic synaptic operation is given by y = phi(z) in R, where phi(z) is the activation function of the neural unit, which defines the somatic operation.

Fig. 1 Neural unit with higher-order synaptic operation
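A minimal numerical sketch of the quadratic synaptic and somatic operations described above is given below. It is not code from the paper; the logistic sigmoid activation and the random upper-triangular weights are assumptions chosen only to make the computation runnable.

```python
import numpy as np

def quadratic_synaptic_op(x, W):
    """Quadratic synaptic operation z = xa^T W xa, where xa is the input
    augmented with the constant bias x0 = 1 and W is upper triangular."""
    xa = np.concatenate(([1.0], x))          # augmented input [1, x1, ..., xn]
    return xa @ W @ xa

def somatic_op(z):
    """Somatic operation y = phi(z); a logistic sigmoid is assumed here."""
    return 1.0 / (1.0 + np.exp(-z))

n = 3
rng = np.random.default_rng(1)
# Upper-triangular augmented weight matrix of size (n+1) x (n+1);
# w00 plays the role of the threshold (bias) weight.
W = np.triu(rng.normal(size=(n + 1, n + 1)))

x = rng.normal(size=n)                        # n scalar neural inputs
y = somatic_op(quadratic_synaptic_op(x, W))
print(y)
```

Setting all entries of W to zero except the first row reduces the unit to the conventional linear synaptic operation, which is the subset relation noted in the text.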
III. Edge detection using neural units with higher-order synaptic operation and applications to mobile robot routing problems
Traditional edge detection and enhancement techniques include the Sobel, Laplace and Gabor operators, among others. More recently, a few tentative studies have been published that use neural networks for efficient processing of digital images. Most of them are based on multi-layer feed-forward network architectures, which usually have longer learning times. In our present research, we developed an approach based on the neural unit with higher-order synaptic operations, by which a single unit is capable of dealing with complicated problems. The Gaussian function is applied as the discriminant function because it performs the mean of the time-instance values during a period with respect to the activated points; moreover, the Gaussian function is continuous and differentiable, so it is well suited as a discriminant function.

Neural procedure for edge detection:
• To take an image, which performs as the eyes
• To apply the second-order differentiation of the Gaussian function, called the Laplacian of the Gaussian (LG), to process the captured image
• To teach the neural units with color spatial functions
• To detect the edges of the objects in the image

Fig. 2 Neural procedure of edge detection (block diagram: original image, convolution with the LG, neural inputs, spatial function, neuron training, neural processor, edge-detected image)

The Gaussian function is defined as Ga(x) = e^(-alpha x^2), (11) where alpha is the slope rate of the Gaussian function. The first-order differentiation of the Gaussian function is DG(x) = -2 alpha x e^(-alpha x^2) = -2 alpha x Ga(x). (12) Thus, the LG is derived as
LG(x) = d/dx {-2 alpha x Ga(x)} = (-2 alpha x)' Ga(x) + (-2 alpha x) Ga'(x) = (-2 alpha) Ga(x) + (-2 alpha x){-2 alpha x Ga(x)} = (-2 alpha + 4 alpha^2 x^2) Ga(x) = 2 alpha (2 alpha x^2 - 1) Ga(x). (13)
The three-dimensional LG is applied as an optimal aggregation function. The captured image is convoluted with the LG, whose size for the experiments reported here is 5x5. The convoluted image data are fed to the neural inputs. The neural processor is constructed by training static neurons with the color spatial function. The spatial function verifies the contrast or brightness of the color in the image; its range is from 0 to 255, which signifies the gray-level color.
The neural processor generates a one-dimensional neural output from a 3x3 neural input matrix taken from the convoluted image. Each neural output affects the neighboring segments, charging or discharging the illumination, and this effect produces the clear edge-detected image. Figure 3 shows the transformation of the image serving as inputs to the neural processor. The partitioned matrix is rearranged as the neural inputs in the order of the elements in rows and columns. The selected 3x3 matrix S1 becomes the 9x1 neural input xC1. Each element of xC1 is processed by the neural processor. After the synaptic and somatic operations, the neural processor generates the neural output as
yN1(x) = phi[va1(x)], (14)
where va1(x) denotes the higher-order synaptic operation.

Fig. 3 Neural input matrix from the convoluted image
Fig. 4 The neural processor and conversion of the image matrix. (a) Neural processor; (b) conversion of the selected image matrix; (c) conversion of the entire image matrix
Fig. 5 The mobile robot
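As a hedged illustration of the convolution step, the sketch below builds a 5x5 kernel from a radial 2D reading of the LG in Eq. (13) and convolves it with a grayscale image. It is not the authors' code; the value of the slope rate alpha, the radial extension of the 1D formula to 2D, the zero-mean normalization, and the use of SciPy's convolve2d are assumptions made for the example.

```python
import numpy as np
from scipy.signal import convolve2d

def lg_kernel(size=5, alpha=0.5):
    """5x5 Laplacian-of-Gaussian style kernel based on Eq. (13),
    LG(r) = 2*alpha*(2*alpha*r^2 - 1)*exp(-alpha*r^2), applied radially
    (a common 2D extension; the paper's exact kernel values are not given)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    k = 2 * alpha * (2 * alpha * r2 - 1) * np.exp(-alpha * r2)
    return k - k.mean()              # zero-mean so flat regions give no response

# Convolve a synthetic grayscale image (values 0-255) with the LG kernel
image = np.zeros((64, 64))
image[:, 32:] = 255.0                # a vertical step edge
convoluted = convolve2d(image, lg_kernel(), mode='same', boundary='symm')
print(convoluted.shape, convoluted[32, 30:35])
```

The strong positive and negative responses straddling column 32 mark the step edge, which is the property the neural processor subsequently exploits.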
Since the newly converted matrix replaces the first partitioned matrix, the next partitioned matrix contains the new computed elements from the previous neural processing results, and so forth. After the entire neural procedure is completed, the convoluted image matrix is obtained. The transformed convoluted image after the neural processor enhances the edges of the objects in the given image through the higher-order computation. In order to obtain the optimal edge-detected image, the slope rate of the Gaussian function and the size of the sectioned matrix should be adjustable.
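A rough sketch of this sequential 3x3 windowing and write-back procedure follows. It is not the paper's implementation; the unit stride, the sigmoid activation, the illustrative weights, and the choice to write the neural output back to the centre pixel before the next block is taken are assumptions used only to make the loop concrete.

```python
import numpy as np

def phi(z):
    """Assumed somatic activation (logistic sigmoid), scaled back to 0-255."""
    return 255.0 / (1.0 + np.exp(-z))

def process_convoluted_image(conv_img, W):
    """Slide a 3x3 window over the convoluted image, rearrange each block
    row by row into a 9x1 neural input, apply the quadratic synaptic and
    somatic operations, and write the neural output back into the image
    before the next block is processed (sequential replacement, as described
    in the text)."""
    out = conv_img.astype(float).copy()
    rows, cols = out.shape
    for i in range(rows - 2):
        for j in range(cols - 2):
            x = out[i:i + 3, j:j + 3].reshape(9)     # 9x1 neural input
            xa = np.concatenate(([1.0], x))          # augmented with the bias
            z = xa @ W @ xa                          # higher-order synaptic op
            out[i + 1, j + 1] = phi(z)               # write back the centre pixel (assumed)
    return out

rng = np.random.default_rng(2)
W = np.triu(rng.normal(scale=0.01, size=(10, 10)))   # stand-in for trained weights
convoluted = rng.normal(size=(16, 16))               # stand-in for the LG-convoluted image
edges = process_convoluted_image(convoluted, W)
print(edges.shape)
```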
In the field of computer vision and image processing, it is imperative to detect basic shapes such as lines and circles. A higher-order neural unit based edge detection method was developed in this paper to find the shapes in the images. However, the neural edge detection alone cannot enable the machine to recognize the objects. In order to employ the edge detection results, the Hough transform is required to compute and transfer the image data to the device. The Hough transform is a powerful method that detects the basic shapes from the landmarks even when some landmarks are missing or covered up. The original form of the Hough transform parameterizes lines by the slope-intercept equation
y = mx + c. (15)
Every point on a straight edge of the edge-detected image is plotted as a line in (m, c) space corresponding to all the (m, c) values consistent with its coordinates, and lines are detected in this space. The drawback of the original Hough transform is that the value of m or c becomes infinite if the line is parallel to the x-axis or y-axis. The modified Hough transform removes this disadvantage by substituting the normal (theta, rho) form for the slope-intercept form of the straight line:
rho = x cos(theta) + y sin(theta). (16)
The set of lines passing through a point is represented as a set of sine curves in (theta, rho) space, so multiple hits in (theta, rho) space signify the presence of lines in the original image. Consider that some lines are to be detected from the landmarks. The parameter space, i.e., the (theta, rho) space, is divided into many small subspaces, and the edge points vote for the corresponding subspaces. The central point values of the subspaces that receive many votes are selected as the parameters of the lines to be detected; the lines are detected according to the threshold.
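The (theta, rho) voting scheme just described can be sketched as follows. This is not the authors' code; the accumulator resolution, the binning of rho, the relative vote threshold, and the synthetic test image are assumptions made for illustration.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=200, threshold=0.7):
    """Vote in (theta, rho) space for every edge pixel using
    rho = x*cos(theta) + y*sin(theta), Eq. (16), and return the
    (theta, rho) cells whose votes exceed threshold * max_votes."""
    ys, xs = np.nonzero(edge_img)                      # edge pixel coordinates
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(*edge_img.shape)
    rhos = np.linspace(-rho_max, rho_max, n_rho)

    accumulator = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in zip(xs, ys):
        rho_vals = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.searchsorted(rhos, rho_vals)      # nearest rho bin (approximate)
        accumulator[np.arange(n_theta), np.clip(rho_idx, 0, n_rho - 1)] += 1

    peaks = np.argwhere(accumulator >= threshold * accumulator.max())
    return [(thetas[t], rhos[r]) for t, r in peaks]

# Example: a diagonal line of edge pixels in a small binary image
edge = np.eye(64, dtype=int)
print(hough_lines(edge, threshold=0.7)[:3])
```

Raising or lowering the threshold changes how many (theta, rho) cells are reported as lines, which is the effect illustrated by the two thresholds (0.7 and 0.8) in the results below.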
The image captured with the CCD camera is processed by the neural edge detector and the Hough transform. Using the computed results, the mobile robot, which is shown in Fig. 5, can obtain the data of the edge lines of the corridor, travel along the corridor, and avoid colliding with obstacles. The detected edges of corridors can serve the mobile robot for real-time obstacle avoidance, while the visual system of the robot serves its long-term routing applications. An original image captured with the CCD camera of the robot is shown in Fig. 6. The neuro-vision system detects the edges of the image and identifies the lines of the corridors and the junctions or crossings. Figure 7 shows the edge detection result of the corridor image, and Figs. 8 and 9 illustrate the detected straight lines with different thresholds, along which the mobile robot should move. The primary simulation results showed that this method has great potential for practical use and is worth further study. We have developed a vision-based decision-making unit, which is shown in Fig. 10, for our mobile robot using the proposed neural units with higher-order synaptic operations. We are also developing dynamic neural units for wider applications in the field of information processing and control for robotics.

Fig. 6 Original image of a corridor
Fig. 7 Edge detection using higher-order neural units
Fig. 8 Result of the Hough transform with the threshold being 0.7
Fig. 9 Result of the Hough transform with the threshold being 0.8
Fig. 10 Block diagram of information processing for the robot system (blocks: CCD camera image acquisition, image processing and edge detection; ultrasonic, infrared, odometer and electronic compass; localization, obstacle avoidance and path routing; higher-order neural units for data fusion and image processing; feature extraction and matching; database updating; direction and speed control)

IV. Conclusions
Using the neural units with higher-order synaptic operation, we present a scheme for visual image processing for mobile robots. We first employ the neural units with higher-order synaptic operation to obtain the edge features of an image, and then use the Hough transform to process the edge detection result. Simulation studies showed that the proposed method is efficient for mobile robot vision systems and routing applications.
