EGR 183 Final Presentation

A short (expert-audience) PowerPoint presentation summarizing the research paper file 'Paper EGR 183 Modeling Neural Networks in silico'.

Published in: Health & Medicine, Technology
    1. EGR 183: Modeling Neural Networks in silico. Dr. Needham, Fall 2007. Daniel Calrin, B.S.E.; Daniel Cook; Joshua Mendoza-Elias
    2. Background: Neurons
    3. Background: LTP and LTD
       • The mechanisms of long-term potentiation
    4. Background continued
       • The mechanisms of long-term depression
    5. Short-term and Long-term Effects
    6. The Basis of Hebbian Learning
    7. Foundation for our Computer Model
    8. Types of Neurons
       [Diagram: four-neuron network; key: inhibitory synapse, excitatory synapse, neurons 1–4, α = −1/4]
       • Input neuron: voltage-clamped, presynaptic to all neurons in the model
       • Inhibitory neurons: depress post-synaptic neurons
       • Excitatory neurons: average firing rates solved at each time step
       • Learning rule determines the change in synaptic strength
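The slide-8 setup can be sketched as a small firing-rate loop. This is only an illustration under stated assumptions: the weight values, the clipping of rates to [0, 1], the choice of neuron 2 as the inhibitory unit, and the step count are all mine, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the four-neuron network on slide 8 (values are assumptions).
# Neuron 0 is the voltage-clamped input, presynaptic to every other neuron;
# neuron 2 is inhibitory, so its output is summed negatively downstream;
# average firing rates are re-solved at each time step.
rng = np.random.default_rng(0)
n = 4
W = rng.uniform(0.2, 0.8, size=(n, n))  # synaptic strengths w_ij
np.fill_diagonal(W, 0.0)                # no self-synapses
sign = np.array([+1, +1, -1, +1])       # +1 excitatory, -1 inhibitory
v = np.zeros(n)
v[0] = 1.0                              # clamped input neuron

for _ in range(10):
    v = np.clip((sign * v) @ W, 0.0, 1.0)  # rate update at each time step
    v[0] = 1.0                             # re-apply the voltage clamp

print(v)
```

The signed multiplication makes the inhibitory neuron's contribution subtract from each post-synaptic sum, matching the slide's distinction between excitatory and inhibitory synapses.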
    9. Synaptic Strengths
       • In phase: neurons v_i and v_j both fire at full rate (1.0, 1.0) from t = t_0 to t = t_0+1; their synaptic strength increases by a factor of alpha (+α)
       • Out of phase: v_i fires but v_j does not (1.0, 0.0); their synaptic strength decreases by a factor of beta (−β)
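The slide-9 learning rule can be written as a small function. The values of alpha and beta and the 0.5 firing threshold are assumptions for illustration; the slide specifies only the direction of each change.

```python
def hebbian_update(w, v_i, v_j, alpha=0.25, beta=0.25):
    """Sketch of slide 9's rule (alpha, beta, and threshold are assumptions):
    both neurons firing at full rate -> strengthen by alpha;
    exactly one firing -> weaken by beta; otherwise unchanged."""
    firing_i, firing_j = v_i > 0.5, v_j > 0.5
    if firing_i and firing_j:    # in phase: both at full speed
        return w + alpha
    if firing_i != firing_j:     # out of phase: only one active
        return w - beta
    return w

print(hebbian_update(1.0, 1.0, 1.0))  # in phase -> 1.25
print(hebbian_update(1.0, 1.0, 0.0))  # out of phase -> 0.75
```

This is the Hebbian intuition of slide 6 made concrete: coincident firing potentiates the synapse, uncorrelated firing depresses it.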
    10. Inhibitory Neurons
       • Excitatory: a firing excitatory neuron v_i potentiates the post-synaptic neuron; the weighted v_i is summed positively into v_j: v_j = ... + w_i,j v_i + ...
       • Inhibitory: a firing inhibitory neuron v_i depresses the post-synaptic neuron; the weighted v_i is summed negatively into v_j: v_j = ... − w_i,j v_i + ...
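Slide 10's signed summation for a single post-synaptic neuron can be sketched directly. The rate, weight, and sign values below are assumptions for illustration, as is the clipping of the result to [0, 1].

```python
import numpy as np

# Sketch of slide 10's summation (numbers are assumptions): each excitatory
# presynaptic neuron contributes +w_ij * v_i to the post-synaptic rate v_j,
# while each inhibitory one contributes -w_ij * v_i.
rates = np.array([1.0, 0.5, 0.8])    # presynaptic firing rates v_i
weights = np.array([0.4, 0.3, 0.6])  # synaptic strengths w_ij (non-negative)
signs = np.array([+1, +1, -1])       # +1 excitatory, -1 inhibitory

v_j = np.clip(np.sum(signs * weights * rates), 0.0, 1.0)
print(v_j)  # 0.4 + 0.15 - 0.48 = 0.07 (up to float rounding)
```

Keeping the strengths non-negative and carrying the sign separately mirrors the slide's convention that the same weighted term enters positively for excitatory synapses and negatively for inhibitory ones.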
    11. Results
       • The model learned in-phase and out-of-phase components, but we could not teach it complete phasic inversion
       • Doing so needs further development: one-way connections (i.e. some strengths are 0)
    12. Phase Components
       • Learning rule: α = |v_1 − v_N|
       [Plot: firing rate vs. time for the input and output neurons, showing convergence and the in-phase and out-of-phase components]
    13. Output Neuron Maxima and Minima
       [Plot: firing rate vs. time for the input and output neurons; the output has a local max near the input's minimum, a local min near its maximum, and its maximum at the input's maximum]
    14. Further Developments
       • Short-term:
         • Sparse synapse matrix (i.e. some synapses have strength 0)
         • Asynchronous firing
         • Multi-dimensional training (e.g. for character recognition, sound recognition, etc.)
       • Long-term:
         • Ca2+ modeling
         • Gene expression profile (DNA microarray data to reflect changes in synaptic efficacy)
    15. Biological Parallel with In Silico
       • Starovoytov et al. 2005. Light-directed stimulation of neurons on silicon wafers. J Neurophysiol 93: 1090–1098.
    16. LDS in Concert with Computer Simulation
       • MEAs vs. LDS:
         • More real-time data, collected more quickly
         • Scans work on variably connected neural networks
