Sudeshna Roy / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 3, May-Jun 2013, pp. 307-314

Design of Rectangular Dielectric Resonator Antenna for Artificial Neural Network

Sudeshna Roy
Department of Electronics and Communication Engineering, Heritage Institute of Technology, Kolkata-700107, India

ABSTRACT

This paper presents a simulation study of a Dielectric Resonator Antenna (DRA) of rectangular shape for use with an Artificial Neural Network (ANN). The DRA is intended to be used as the radiating element of a transmitting array of active integrated antennas; its input impedance must exhibit a proper resistive load at the fundamental resonance frequency, as well as a dominant reactive behavior, either inductive or capacitive, at higher harmonics. The design procedure is performed by exploiting Artificial Neural Networks (ANNs), to find the resonator geometry starting from the desired resonance frequency, and a finite-element-based numerical tool for the electromagnetic characterization of the antenna. The ANN is applied to an RDRA designed over a short frequency range of 5.68-5.9 GHz. After training on the simulated data, the network can relate a large set of input and output parameters of the DRA to physically realizable values of the corresponding dimensions.

Keywords: Artificial Neural Network, Dielectric Resonator Antenna (DRA), Far Field, Gain, Bandwidth.

I. INTRODUCTION

The field of wireless communications has been undergoing revolutionary growth in the last decade. This is attributed to the invention of portable mobile phones some 15 years ago. The success of second-generation (2G) cellular communication services has motivated the development of wideband third-generation (3G) cellular phones and other wireless products and services, including wireless local area networks (WLAN), HomeRF, Bluetooth, wireless local loops (WLL) and local multipoint distribution systems (LMDS), to name a few. The crucial component of a wireless network or device is the antenna.

In this era of wireless communications, low-profile and small-size antennas are highly preferable for mobile devices such as cellular phones, notebook computers and personal digital assistants (PDAs). Very soon, our cities will be flooded with antennas of different kinds and shapes. On the other hand, for safety and portability reasons, low-power, multi-functional and multi-band wireless devices are highly preferable. All these stringent requirements demand the development of highly efficient, low-profile and small-size antennas that can be embedded into wireless products.

In the last two decades, two classes of novel antennas have been investigated and extensively reported on: the microstrip patch antenna and the dielectric resonator antenna (DRA). Both are highly suitable for the development of modern wireless communications.

The use of a dielectric resonator as a resonant antenna was proposed by Professor S. A. Long in the early 1980s. Since the Dielectric Resonator Antenna (DRA) has negligible metallic loss, it is highly efficient when operated at millimeter-wave frequencies. Conversely, a high-permittivity or partially metallized dielectric resonator can be used as a small and low-profile antenna at lower microwave frequencies. Low-loss dielectric materials are now easily available commercially at very low cost.
This would attract more system engineers to choose dielectric resonator antennas when designing their wireless products.

In the last decade, considerable attention has been focused on dielectric resonator antennas (DRAs) as an alternative to microstrip antennas [1-6]. DRAs represent a relatively novel application of dielectric resonators (DRs). These resonators are unshielded and rely, for field confinement within their boundaries, on a very large difference between their own permittivity and the permittivity of the outer medium. The low-loss, high-permittivity dielectrics used for DRs (10 ≤ εr ≤ 100) ensure that most of the field remains within the resonator, leading to a high quality factor Q. If the permittivity is not too high and the excited mode presents strong fields at the resonator boundaries, the Q drops significantly, inasmuch as part of the stored energy is radiated into the environment. Since the dielectric losses remain low and the size of the DR is small with respect to the free-space wavelength, these radiators provide small and highly efficient antennas.
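The quality factor mentioned above directly limits the achievable impedance bandwidth. A commonly used single-resonance estimate (not stated explicitly in this paper, so it should be read as an assumption) relates the fractional bandwidth at a maximum acceptable VSWR s to the unloaded Q as BW = (s - 1)/(Q*sqrt(s)). A minimal Python sketch:

```python
import math

def fractional_bandwidth(q_factor: float, max_vswr: float = 2.0) -> float:
    """Fractional impedance bandwidth of a single resonance with quality factor Q.

    Uses the common estimate BW = (s - 1) / (Q * sqrt(s)), where s is the
    maximum acceptable VSWR (2.0 is a typical choice).
    """
    return (max_vswr - 1.0) / (q_factor * math.sqrt(max_vswr))

# Illustrative numbers only: a mode with Q ~ 10 near 5.8 GHz
q, f0_ghz = 10.0, 5.8
bw = fractional_bandwidth(q)
print(f"fractional BW ~ {bw:.3f}, i.e. ~{bw * f0_ghz * 1e3:.0f} MHz at {f0_ghz} GHz")
```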
DRAs are used in different applications, as they exhibit several attractive features such as small size, high radiation efficiency, light weight, simple structure, and better performance when used in phased array configurations (since they can be packed more tightly) [7-10]. Furthermore, being three-dimensional structures, DRAs offer more degrees of freedom in their geometrical definition and shape [11].

The open literature presents various shapes for DRAs, such as cylindrical (CDRA), rectangular (RDRA), hemispherical (HDRA) and ring or annular (ADRA). Many studies on DRAs have been devoted to radiation efficiency, radiated field characteristics and to enlarging the impedance bandwidth, but less attention has been given to their harmonic tuning and hence to their use in active integrated antennas (AIAs). Recently, several contributions have been presented in which harmonic tuning techniques are proposed for Ku-band rectangular and cylindrical dielectric resonator antennas [12-14].

The present study has been carried out by exploiting artificial neural networks (ANNs), for the inversion of the formula relating the fundamental resonance frequency to the geometrical parameters of the resonator, and a finite-element-based numerical tool (the commercial software HFSS) for the electromagnetic characterization of the antenna.

II. DIELECTRIC RESONATOR ANTENNA (DRA)

A Dielectric Resonator Antenna (DRA) is a radio antenna, mostly used at microwave frequencies and higher, that consists of a block of ceramic material of various shapes, the dielectric resonator, mounted on a metal surface, the ground plane. Radio waves are introduced into the resonator material from the transmitter circuit and bounce back and forth between the resonator walls, forming standing waves. The walls of the resonator are partially transparent to radio waves, so radio power is radiated into space [15]. An advantage of dielectric resonator antennas is that they lack metal parts, which become lossy at high frequencies and dissipate energy. These antennas can therefore have lower losses and be more efficient than metal antennas at high microwave and millimeter-wave frequencies. They are used in some compact portable wireless devices and in military millimeter-wave radar equipment. The antenna was first proposed by Long et al. in 1983.

Dielectric resonator antennas (DRAs) offer the following attractive features:

• Generally the dimension of a DRA is of the order of λ0/√εr, where λ0 is the free-space wavelength and εr is the dielectric constant of the resonator material. Thus, by choosing a high value of εr, the DRA's size can be significantly reduced.

• No inherent conductor loss is present in the dielectric resonator, which leads to high radiation efficiency of the antenna. This feature is especially attractive for millimeter (mm)-wave antennas, where the loss in metal-fabricated antennas can be high.

• DRAs offer simple coupling schemes to nearly all kinds of transmission lines used at microwave and mm-wave frequencies. This makes them suitable for integration into different planar technologies. The coupling between a planar transmission line and a DRA can be easily controlled by varying the position of the DRA with respect to that line, so the DRA's performance can be easily optimized experimentally.
• A DRA's operating bandwidth can be varied over a wide range by suitably choosing the resonator parameters.

• Each mode of a DRA has a unique internal and associated external field distribution. Therefore, different radiation characteristics can be obtained by exciting different modes of a DRA.

The use of a dielectric resonator as a resonant antenna was first proposed in 1983. Due to the absence of metallic loss, the dielectric resonator antenna (DRA) is highly efficient when operated at millimeter (mm)-wave frequencies. With the use of a high-dielectric-constant material, the DRA can also serve as a small and low-profile antenna at low microwave frequencies. Low-cost dielectric materials are now easily available commercially, encouraging more antenna engineers to design communication systems with DRAs. Some reasons for using the DRA are its inherent merit of having no metallic loss, its small physical size, its easy tunability, and the large degree of design freedom in choosing its parameters.
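How the geometrical parameters and the permittivity set the fundamental resonance frequency can be estimated with the dielectric waveguide model for the TE_δ11 mode of a rectangular DRA on a large ground plane (see Mongia and Ittipiboon [10]): with k_y = π/w, k_z = π/(2h) and k_x² + k_y² + k_z² = εr·k0², the wavenumber k_x satisfies k_x·tan(k_x·d/2) = sqrt((εr - 1)·k0² - k_x²), where k0 = 2πf/c. The paper does not reproduce its design formula, so the following Python sketch is an illustrative assumption rather than the author's exact equation:

```python
import math

C0 = 299_792_458.0  # free-space speed of light (m/s)

def rdra_te111_resonance(w: float, d: float, h: float, eps_r: float) -> float:
    """Dielectric-waveguide-model estimate of the TE_d11 resonance frequency (Hz).

    Rectangular resonator of width w, depth d and height h on a large ground
    plane (image theory doubles the height). All dimensions in metres. This is
    only an approximation; HFSS, as used in the paper, captures effects that
    this simple model ignores.
    """
    ky = math.pi / w          # half-wave variation across the width
    kz = math.pi / (2.0 * h)  # half-wave variation across the imaged height

    def residual(f: float) -> float:
        k0 = 2.0 * math.pi * f / C0
        kx2 = eps_r * k0 ** 2 - ky ** 2 - kz ** 2      # wavenumber along the depth
        if kx2 <= 0.0:
            return -1.0                                # below the valid band
        kx = math.sqrt(kx2)
        decay2 = ky ** 2 + kz ** 2 - k0 ** 2           # evanescent decay outside
        if decay2 <= 0.0 or kx * d / 2.0 >= math.pi / 2.0:
            return 1.0                                 # above the valid band
        return kx * math.tan(kx * d / 2.0) - math.sqrt(decay2)

    # Coarse scan for a sign change, then bisection on the bracketed root.
    f_lo, f_hi, steps = 1e9, 20e9, 2000
    prev_f, prev_r = f_lo, residual(f_lo)
    for i in range(1, steps + 1):
        f = f_lo + (f_hi - f_lo) * i / steps
        r = residual(f)
        if prev_r < 0.0 < r:
            a, b = prev_f, f
            for _ in range(60):
                m = 0.5 * (a + b)
                if residual(m) < 0.0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        prev_f, prev_r = f, r
    raise ValueError("no resonance found in the scanned band")

# Example with assumed dimensions and permittivity, for illustration only
# (roughly 5.7 GHz with these values, i.e. near the band targeted in the paper)
print(rdra_te111_resonance(w=13e-3, d=13e-3, h=6.5e-3, eps_r=10.0) / 1e9, "GHz")
```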
The characteristics of the major low-order modes can be adequately approximated by keeping the low-order terms of the multipole expansion for the field distribution. The radiation pattern of the DRA ultimately depends on the field distribution inside the DRA.

Fig. 1. Radiation pattern of the DRA above a finite-size ground plane (ref. IEEE).

III. ARTIFICIAL NEURAL NETWORK (ANN)

Artificial Neural Networks (ANNs) are non-linear mapping structures based on the function of the human brain. They are powerful tools for modeling, especially when the underlying data relationship is unknown. ANNs can identify and learn correlated patterns between input data sets and corresponding target values. After training, ANNs can be used to predict the outcome of new independent input data. ANNs imitate the learning process of the human brain and can process problems involving non-linear and complex data even if the data are imprecise and noisy. They are thus ideally suited for modeling data that are known to be complex and often non-linear. ANNs also have great capacity for predictive modeling, i.e., all the characteristics describing an unknown situation can be presented to the trained ANN.
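As a concrete illustration of this non-linear mapping (not code from the paper; the layer sizes, activation and random weights are arbitrary choices), a two-layer feed-forward network simply composes weighted sums of its inputs with a non-linear activation:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation, linear output layer."""
    hidden = np.tanh(x @ w1 + b1)   # non-linear mapping of the inputs
    return hidden @ w2 + b2         # weighted combination of hidden neurons

# 5 inputs (e.g. DRA geometry parameters), 8 hidden neurons, 3 outputs
# (e.g. frequency, bandwidth, gain); the sizes are purely illustrative.
w1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 5))         # one (normalised) input pattern
print(forward(x, w1, b1, w2, b2))   # untrained output; training adjusts w and b
```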
An ANN is a computational structure inspired by processes observed in natural networks of biological neurons in the brain. It consists of simple computational units called neurons, which are highly interconnected. ANNs have become the focus of much attention, largely because of their wide range of applicability and the ease with which they can treat complicated problems. ANNs are parallel computational models comprised of densely interconnected adaptive processing units. These networks are fine-grained parallel implementations of non-linear static or dynamic systems. A very important feature of these networks is their adaptive nature, where "learning by example" replaces "programming" in solving problems. This feature makes such computational models very appealing in application domains where one has little or incomplete understanding of the problem to be solved but where training data are readily available.

Work on artificial neural networks has been motivated, right from its inception, by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, nonlinear and parallel computer (information-processing system). It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations (e.g., pattern recognition, perception, and motor control) many times faster than the fastest digital computer in existence today. Consider, for example, human vision, which is an information-processing task [16-19]. It is the function of the visual system to provide a representation of the environment around us and, more importantly, to supply the information we need to interact with the environment. To be specific, the brain routinely accomplishes perceptual recognition tasks (e.g., recognizing a familiar face embedded in an unfamiliar scene) in approximately 100-200 ms, whereas tasks of much lesser complexity may take days on a conventional computer. The following definition of a neural network can be offered for an adaptive machine:

A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:

• Knowledge is acquired by the network from its environment through a learning process.

• Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

The procedure used to perform the learning process is called a learning algorithm, the function of which is to modify the synaptic weights of the network in an orderly fashion to attain a desired design objective. The modification of synaptic weights provides the traditional method for the design of neural networks. Such an approach is the closest to linear adaptive filter theory, which is already well established and successfully applied in many diverse fields [20-22].

IV. BENEFITS OF NEURAL NETWORKS

It is apparent that a neural network derives its computing power through, first, its massively parallel distributed structure and, second, its ability to learn and therefore generalize. The use of neural networks offers the following useful properties and capabilities:

1. Nonlinearity. An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear. Moreover, the nonlinearity is of a special kind, in the sense that it is distributed throughout the network. Nonlinearity is a highly important property, particularly if the underlying physical mechanism responsible for generation of the input signal (e.g., a speech signal) is inherently nonlinear.

2. Input-Output Mapping. A popular paradigm of learning called learning with a teacher, or supervised learning, involves modification of the synaptic weights of a neural network by applying a set of labelled training samples or task examples.
Each example consists of a unique input signal and a corresponding desired response. The network is presented with an example picked at random from the set, and the synaptic weights (free parameters) of the network are modified to minimize the difference between the desired response and the actual response of the network produced by the input signal, in accordance with an appropriate statistical criterion. In a nonparametric approach to this problem, the requirement is to "estimate" arbitrary decision boundaries in the input signal space for the pattern-classification task using a set of examples, and to do so without invoking a probabilistic distribution model. A similar point of view is implicit in the supervised learning paradigm, which suggests a close analogy between the input-output mapping performed by a neural network and nonparametric statistical inference [23].

3. Evidential Response. In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.

4. Contextual Information. Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons in the network. Consequently, contextual information is dealt with naturally by a neural network.

5. Fault Tolerance. A neural network, implemented in hardware form, has the potential to be inherently fault tolerant, or capable of robust computation, in the sense that its performance degrades gracefully under adverse operating conditions. To be assured that the neural network is in fact fault tolerant, it may be necessary to take corrective measures in designing the algorithm used to train the network [25].

6. VLSI Implementability. The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using Very-Large-Scale Integration (VLSI) technology. One particularly beneficial virtue of VLSI is that it provides a means of capturing truly complex behavior in a highly hierarchical fashion [26].

7. Uniformity of Analysis and Design. Basically, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks. This feature manifests itself in different ways:

• Neurons, in one form or another, represent an ingredient common to all neural networks.

• This commonality makes it possible to share theories and learning algorithms across different applications of neural networks.

• Modular networks can be built through a seamless integration of modules.

8. Neurobiological Analogy. The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast and powerful. Neurobiologists look to (artificial) neural networks as a research tool for the interpretation of neurobiological phenomena.
On the other hand, engineers look to neurobiology for new ideas to solve problems more complex than those based on conventional hard-wired design techniques [27-31].

V. LITERATURE REVIEW

A first wave of interest in neural networks (also known as "connectionist models" or "parallel distributed processing") emerged after the introduction of simplified neurons by McCulloch and Pitts in 1943 (McCulloch & Pitts, 1943). They proposed a model of "computing elements", called McCulloch-Pitts neurons, which performs a weighted sum of its inputs followed by a threshold logic operation. Combinations of these computing elements were used to realize several logical computations. The main drawback of this model of computation is that the weights are fixed, so the model could not learn from examples, which is the main characteristic of the ANN technique that later evolved.

Hebb (1949) proposed a learning scheme for adjusting a connection weight based on pre- and post-synaptic values of the variables. Hebb's law became a fundamental learning rule in the neural-network literature. Rosenblatt (1958) proposed the perceptron model, whose weights are adjustable by the perceptron learning law. Widrow and Hoff (1960) and their group proposed the ADALINE (Adaptive Linear Element) model for computing elements and the LMS (Least Mean Square) learning algorithm to adjust its weights.

When Minsky and Papert published their book Perceptrons in 1969 (Minsky & Papert, 1969), in which they showed the deficiencies of perceptron models, most neural network funding was redirected and researchers left the field.
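The perceptron and LMS rules mentioned above reduce to the same simple weight update, differing only in whether the thresholded or the raw output enters the error term. A minimal illustrative sketch (not taken from the paper):

```python
import numpy as np

def train_perceptron(x, targets, lr=0.1, epochs=50):
    """Perceptron learning law: w <- w + lr * (t - y) * x, with thresholded y.

    Using the raw (un-thresholded) output in the error term instead gives the
    Widrow-Hoff LMS rule used to train the ADALINE.
    """
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        for xi, t in zip(x, targets):
            y = 1.0 if xi @ w + b > 0 else 0.0   # threshold logic unit
            w += lr * (t - y) * xi               # adjust the synaptic weights
            b += lr * (t - y)
    return w, b

# Toy example: learn the logical AND function
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(x, t)
print([1.0 if xi @ w + b > 0 else 0.0 for xi in x])  # -> [0.0, 0.0, 0.0, 1.0]
```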
Only a few researchers continued their efforts, most notably Teuvo Kohonen, Stephen Grossberg, James Anderson, and Kunihiko Fukushima.

Interest in neural networks re-emerged only after some important theoretical results were attained in the early eighties (most notably the discovery of error back-propagation) and new hardware developments increased processing capacity. This renewed interest is reflected in the number of scientists, the amounts of funding, the number of large conferences, and the number of journals associated with neural networks. Nowadays most universities have a neural networks group within their psychology, physics, computer science, or biology departments.

Hopfield (1982) gave an energy analysis of feedback neural networks. The analysis showed the existence of stable equilibrium states in a feedback network, provided the network has symmetrical weights. Rumelhart et al. (1986) showed that it is possible to adjust the weights of a multilayer feed-forward neural network in a systematic way to learn the implicit mapping in a set of input-output pattern pairs. The learning law is called the generalized delta rule, or error back-propagation.

Artificial neural networks can be most adequately characterized as "computational models" with particular properties, such as the ability to adapt or learn, to generalize, or to cluster or organize data, and whose operation is based on parallel processing. However, many of the above-mentioned properties can be attributed to existing (non-neural) models. Parallels with biological systems are often described; however, so little is still known (even at the lowest cell level) about biological systems that the models we are using for our artificial neural systems seem to be an oversimplification of the "biological" models.

VI. DRA MODEL

The model of the rectangular dielectric resonator antenna is designed using the software Ansoft HFSS. The tools required to construct the appropriate dielectric resonator antenna with a probe feed are available in this software. After the required geometry is constructed, the proper excitation has to be applied according to the requirement; after simulation it is then straightforward to obtain the radiation pattern from the far-field report and the frequency-vs-dB rectangular plot from the terminal solution data report. The following figures show the DRA constructed in the software and the corresponding simulated plot and pattern for the particular dimensions of the configured DRA, which is rectangular in shape.

Fig. 2. Constructed rectangular dielectric resonator antenna using Ansoft HFSS.

Fig. 3. Simulated output: frequency vs. dB graph from the rectangular plot.

Fig. 4. Simulated output: radiation pattern from the far-field report.
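The Discussions section below collects many such simulations into a datasheet. The bookkeeping for that kind of parameter sweep is easy to script; the sketch below is illustrative only, with a generic solver callback standing in for the electromagnetic analysis (in the paper's workflow this role is played by HFSS, which has its own scripting interface that is not shown here). The dielectric-waveguide-model estimate from Section II could be passed in as a crude stand-in.

```python
import csv
import itertools

def sweep(solver, h, ratios_d, ratios_w, out_path="dra_sweep.csv"):
    """Evaluate solver(w, d, h) for every (d/h, w/h) combination and log the results.

    `solver` is any callable returning an estimated resonance frequency in Hz;
    it stands in for the full-wave simulation used in the paper.
    """
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["d/h", "w/h", "f_GHz"])
        for d_over_h, w_over_h in itertools.product(ratios_d, ratios_w):
            d, w = d_over_h * h, w_over_h * h
            writer.writerow([d_over_h, w_over_h, round(solver(w=w, d=d, h=h) / 1e9, 3)])

# Example usage (values assumed), with the Section II estimate as the solver:
# sweep(lambda w, d, h: rdra_te111_resonance(w, d, h, eps_r=10.0),
#       h=6.5e-3, ratios_d=[1.41, 1.57, 1.82], ratios_w=[0.92, 1.02, 1.08])
```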
VII. DISCUSSIONS

After running many simulations of the designed DRA structure with the Ansoft HFSS software, it becomes clear that, once the input parameters (the length, breadth and height of the rectangular dielectric resonator antenna, the probe height and the probe offset position) are properly dimensioned, the desired gain of the radiation pattern, the bandwidth and a frequency in the desired range between 5.68 GHz and 5.9 GHz are readily obtained. For each set of input values of the DRA and probe parameters, the corresponding output values of frequency, gain and bandwidth are recorded. After a large amount of output data has been gathered for the various input combinations, the next task is to compare this datasheet with the physically realizable dimensions of the rectangular dielectric resonator antenna. The data are collected in a table of the following format:

Table 1. Small datasheet collected with the Ansoft HFSS (High Frequency Structure Simulator) software.

d/h    w/h    ∆x    P     f_simulated (GHz)   BW      Gain
1.82   1.08   0     2.9   5.798               0.129   4.8646
1.92   1.12   0     3.2   5.827               0.174   4.74
1.89   1.12   0.1   4.1   5.839               0.088   4.5676
1.82   1.08   0.1   4.7   5.738               0.188   5.5954
1.89   1.12   0.2   4.9   5.823               0.128   4.3842
1.41   0.92   0.3   5     5.744               0.125   4.776
1.57   0.99   0.4   6     5.688               0.181   4.3342
1.57   1.02   0.5   5.2   5.871               0.157   4.2301
1.48   0.94   0.6   6     5.721               0.109   4.0688

Here,
d = length of the DRA
w = breadth of the DRA
h = height of the DRA
∆x = probe offset position
P = probe height
f_simulated = simulated frequency
BW = bandwidth
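As an illustration of how such a datasheet can feed the ANN model proposed in the conclusion, the following sketch fits a small multilayer perceptron to rows of the form shown in Table 1 and then predicts frequency, bandwidth and gain for a new geometry. It is a minimal example rather than the author's actual model: the network size, the scaling and the scikit-learn toolchain are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: d/h, w/h, probe offset, probe height -> f_simulated (GHz), BW, gain
# (values copied from Table 1)
data = np.array([
    [1.82, 1.08, 0.0, 2.9, 5.798, 0.129, 4.8646],
    [1.92, 1.12, 0.0, 3.2, 5.827, 0.174, 4.7400],
    [1.89, 1.12, 0.1, 4.1, 5.839, 0.088, 4.5676],
    [1.82, 1.08, 0.1, 4.7, 5.738, 0.188, 5.5954],
    [1.89, 1.12, 0.2, 4.9, 5.823, 0.128, 4.3842],
    [1.41, 0.92, 0.3, 5.0, 5.744, 0.125, 4.7760],
    [1.57, 0.99, 0.4, 6.0, 5.688, 0.181, 4.3342],
    [1.57, 1.02, 0.5, 5.2, 5.871, 0.157, 4.2301],
    [1.48, 0.94, 0.6, 6.0, 5.721, 0.109, 4.0688],
])
X, y = data[:, :4], data[:, 4:]

# A small feed-forward network; in practice far more HFSS rows would be needed.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(X, y)

new_geometry = np.array([[1.80, 1.05, 0.1, 4.5]])  # hypothetical query point
print(model.predict(new_geometry))                 # [f_GHz, BW, gain] estimate
```

The same kind of network, trained with the roles of the inputs and outputs exchanged, gives the inverse mapping from a desired resonance frequency back to candidate resonator dimensions, which is the use of the ANN described in the abstract.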
VIII. CONCLUSION

From the above discussion it can be concluded that a large data set can be collected using the Ansoft HFSS software, with the help of which an Artificial Neural Network model can be designed. With this ANN model we will be able to establish the proper relationship between the input parameters and the output parameters of the antenna. Through the simulation process we can correctly predict the output parameters of the antenna, namely its frequency, bandwidth and gain. This artificial neural network model will help us to design an antenna with the desired parameters.

ACKNOWLEDGEMENT

I would like to express my sincere gratitude to my guide, Asst. Prof. Sriparna Bhattacharya of the Heritage Institute of Technology, for her continuous guidance, support, encouragement and affection, and for providing the necessary infrastructure and resources, without which my continuing research in this field and the completion of this paper using the advanced HFSS software would have been impossible.

REFERENCES

[1] S. H. Zainud-Deen, H. A. El-Azem Malhat, and K. H. Awadalla, "A single-feed cylindrical superquadric dielectric resonator antenna for circular polarization," Progress In Electromagnetics Research, Vol. 85, 409-424, 2008.
[2] L.-M. Si and X. Lv, "CPW-fed multi-band omni-directional planar microstrip antenna using composite metamaterial resonators for wireless communications," Progress In Electromagnetics Research, Vol. 83, 133-146, 2008.
[3] A. S. Al-Zoubi, A. A. Kishk, and A. W. Glisson, "Analysis and design of a rectangular dielectric resonator antenna fed by dielectric image line through narrow slots," Progress In Electromagnetics Research, Vol. 77, 379-390, 2007.
[4] P. Rezaei, M. Hakkak, and K. Forooraghi, "Design of wide-band dielectric resonator antenna with a two-segment structure," Progress In Electromagnetics Research, Vol. 66, 111-124, 2006.
[5] A. Tadjalli, A. R. Sebak, and T. A. Denidni, "Resonance frequencies and far field patterns of elliptical dielectric resonator antenna: analytical approach," Progress In Electromagnetics Research, Vol. 64, 81-98, 2006.
[6] M. A. Saed and R. Yadla, "Microstrip-fed low profile and compact dielectric resonator antennas," Progress In Electromagnetics Research, Vol. 56, 151-162, 2006.
[7] J. Shin, A. A. Kishk, and A. W. Glisson, "Analysis of rectangular dielectric resonator antennas excited through a slot over a finite ground plane," IEEE Antennas Propag. Society Int. Symp., Vol. 4, Jul. 2000.
[8] A. A. Kishk, "Dielectric resonator antenna elements for array applications," Proc. IEEE Int. Symp. Phased-Array Syst. Technol., 300-305, Oct. 2003.
[9] A. A. Kishk, "Experimental study of the broadband embedded dielectric resonator antennas excited by a narrow slot," IEEE Antennas Wireless Propag. Lett., Vol. 4, 79-81, 2005.
[10] R. K. Mongia and A. Ittipiboon, "Theoretical and experimental investigations on rectangular dielectric resonator antennas," IEEE Trans. Antennas Propag., Vol. 45, No. 9, 1997.
[11] A. Petosa, A. Ittipiboon, Y. M. Antar, D. Roscoe, and M. Cuhaci, "Recent advances in dielectric-resonator antenna technology," IEEE Antennas Propag. Mag., Vol. 40, No. 3, 35-48, 1998.
[12] A. Guraliuc, G. Manara, P. Nepa, G. Pelosi, and S. Selleri, "Harmonic tuning for Ku-band dielectric resonator antennas," IEEE Antennas Wireless Propag. Lett., Vol. 6, 568-571, 2007.
[13] L. Lucci, G. Manara, P. Nepa, G. Pelosi, L. Rossi, and S. Selleri, "Harmonic control in cylindrical DRA for active antennas," IEEE Antennas and Propagation International Symposium, Conf. Proc., 3372-3375, 2007.
[14] L. Lucci, G. Manara, P. Nepa, G. Pelosi, and S. Selleri, "Cylindrical dielectric resonator antennas with harmonic control as an active antenna radiator," International Journal of Antennas and Propagation, Vol. 2009, 7 pages, Hindawi Publishing Corporation, 2009.
[15] K.-M. Luk and K.-W. Leung, Dielectric Resonator Antennas.
[16] D. Marr, Vision, W. H. Freeman and Company, New York, 1982.
[17] P. S. Churchland and T. J. Sejnowski, The Computational Brain, Cambridge, MA: MIT Press, 1992.
[18] N. Suga, "Cortical computational maps for auditory imaging," Neural Networks, Vol. 3, 3-21, 1990.
[19] N. Suga, "Computations of velocity and range in the bat auditory system for echo location," in Computational Neuroscience, E. L. Schwartz, ed., 213-231, Cambridge, MA: MIT Press, 1990.
[20] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1985.
[21] S. Haykin, Adaptive Filter Theory, 3rd edition, Englewood Cliffs, NJ: Prentice-Hall, 1996.
[22] S. Haykin, "Neural networks expand SP's horizons," IEEE Signal Processing Magazine, Vol. 13, No. 2, 24-29, 1996.
[23] A. Brodal, Neurological Anatomy in Relation to Clinical Medicine, 3rd edition, New York: Oxford University Press, 1981.
[24] S. Geman, E. Bienenstock, and R. Doursat, "Neural networks and the bias/variance dilemma," Neural Computation, Vol. 4, 1-58, 1992.
[25] S. Grossberg, Neural Networks and Natural Intelligence, Cambridge, MA: MIT Press, 1988.
[26] P. Kerlirzin and F. Vallet, "Robustness in multilayer perceptrons," Neural Computation, Vol. 5, 473-482, 1993.
[27] M. A. Mahowald and C. Mead, "Silicon retina," in Analog VLSI and Neural Systems (C. Mead), Chapter 15, Reading, MA: Addison-Wesley, 1989.
[28] T. J. Anastasio, "Modeling vestibulo-ocular reflex dynamics: from classical analysis to neural networks," in F. Eeckman, ed., Neural Systems: Analysis and Modeling, 407-430, Norwell, MA: Kluwer, 1993.
[29] P. Sterling, "Retina," in The Synaptic Organization of the Brain, G. M. Shepherd, ed., 3rd edition, 170-213, New York: Oxford University Press, 1990.
[30] M. A. Mahowald and C. Mead, "Silicon retina," in Analog VLSI and Neural Systems (C. Mead), Chapter 15, Reading, MA: Addison-Wesley, 1989.
[31] K. A. Boahen and A. G. Andreou, "A contrast sensitive silicon retina with reciprocal synapses," Advances in Neural Information Processing Systems, Vol. 4, 764-772, San Mateo, CA: Morgan Kaufmann, 1992.
[32] K. A. Boahen, "A retinomorphic vision system," IEEE Micro, Vol. 16, No. 5, 30-39, 1996.
[33] P. S. Churchland and T. J. Sejnowski, The Computational Brain, Cambridge, MA: MIT Press, 1992.
[34] D. Gabor, W. P. L. Wilby, and R. Woodcock, "A universal non-linear filter, predictor, and simulator," 1960.
