
PyCon Korea 2020: A Neuron-Based Artificial Brain Simulator Implemented in Python

* Presentation slides from PyCon Korea 2020.

Brain science is where modern artificial neural networks took root!

This talk covers a brain-science perspective on artificial neural networks
and shares a case study of a Python-based neuromorphic neural network model that mimics the firing of brain cells.

A neuromorphic neural network is not just conventional deep learning with a different cell structure.
Brain simulation makes it possible to work around biological constraints that make real experiments difficult,
and it can play an important role in uncovering the brain's information-processing mechanisms and in identifying targets for treating a variety of brain disorders.

I hope this talk inspires new ideas for the many researchers working on machine learning.



  1. 1. A Neuron-Based Artificial Brain Simulator Implemented in Python (speaker: 김성현)
  2. 2. I dream of developing AGI (Artificial General Intelligence) by combining the principles of human learning with the latest deep learning technology. ✉ bananaband657@gmail.com 🏠 https://banana-media-lab.tistory.com https://github.com/MrBananaHuman
  3. 3. 01 Introduction - Introduction to neuroscience
      02 Spiking Neural Network (SNN) - SNN as a neuromorphic neural network model
      03 Modeling of SNN - Python Nengo library for SNN modeling
      04 Applications of SNN - Deep SNN models
      05 Future of SNN - Neuromorphic chip
  4. 4. Introduction Introduction to neuroscience 01
  5. 5. Introduction The brain is the most complex 1.5 kg organ that controls all functions of the body, interprets information from the outside world, and embodies the essence of the mind and soul: thoughts, perceptions, language, sensations, memories, actions, emotions, and learning.
  7. 7. History of Neuroscience - Neuron Neuroscience is the study of how the nervous system develops, its structure, and what it does. The first drawing of a neuron as a nerve cell (1865) [1]; the first illustration of a synapse (1893, 1897) [2-3]. [1] Otto Friedrich Karl Deiters, 1865 [2] Sherrington CS, 1897, A textbook of physiology, London: Macmillan, p.1024-70 [3] Cajal R, 1893, Arch Anat Physiol Anat Abth., V & VI:310-428
  8. 8. History of Neuroscience - Neuron A typical neuron consists of a cell body (soma), dendrites, and a single axon. [1] https://ib.bioninja.com.au/standard-level/topic-6-human-physiology/65-neurons-and-synapses/neurons.html Synapse Dendrite Nucleus Soma (Cell body) Axon terminal Myelin sheath Axon Synapse [1]
  9. 9. History of Neuroscience – Action Potential An action potential is a rapid rise and subsequent fall in voltage or membrane potential across a cellular membrane with a characteristic pattern. [3] [1] [2] [1] How big is the GIANT Squid Giant Axon?, @TheCellularScale [2] Hodgkin AL & Huxley AF, 1945, J Physiol [3] https://www.moleculardevices.com/applications/patch-clamp-electrophysiology/what-action-potential#gref
  10. 10. History of Neuroscience - Synapse Synapses are biological junctions through which neurons' signals can be sent to each other. [2] [1] https://synapseweb.clm.utexas.edu/type-1-synapse [2] Besson, P., 2017, Doctoral dissertation Presynaptic neuron Postsynaptic neuron Synapse [1] Excitatory postsynaptic potential (EPSP) Inhibitory postsynaptic potential (IPSP)
  11. 11. History of Neuroscience - Synaptic Plasticity Synaptic plasticity refers to the phenomenon whereby the strength of synaptic connections between neurons changes over time. [1] M G Larrabee, D W Bronk, 1947, J Neurophysiol. Presynaptic neuron Postsynaptic neuron Before stimulating After stimulating Action potentials recorded from the postganglionic nerve (1947) [1]
  12. 12. History of Neuroscience - The Brain
      Neuron (dendrite, nucleus, soma (cell body), axon, myelin sheath, axon terminal): 86 billion, 10-25 μm, > 1,000 types
      Synapse: 7,000 synapses/neuron, 100-500 trillion in total
      Plasticity: potentiation, depression
      [1] https://ib.bioninja.com.au/standard-level/topic-6-human-physiology/65-neurons-and-synapses/neurons.html [2] https://commons.wikimedia.org/w/index.php?curid=41349083 [3] https://sites.google.com/site/mcauliffeneur493/home/synaptic-plasticity
  13. 13. Artificial Neural Network (ANN) Revolution [1] An ANN is an abstract model that mimics the complex structure and functioning of the brain, and it has been developing explosively in recent years. [1] A brief history of neural nets and deep learning by A. Kurenkov
  14. 14. Limitations of ANN Despite the success of ANN algorithms, they have clear computational limitations: • Lack of local error representation → vanishing gradient [1] • Symmetry of forward and backward weights → weight transport problem [2] • Feedback in brains alters neural activity [3] • Unrealistic models of neurons → large computational cost [1] • Error signals are signed and potentially extreme-valued → overfitting [3] [1] Whittington and Bogacz, 2019, Trends in Cognitive Sciences [2] Grossberg, 1987, Cognitive Science [3] Lillicrap et al., 2020, Nature Reviews Neuroscience
  15. 15. How Does The Brain Learn? [1] [1] Brainbow Hippocampus, Greg Dunn and Brian Edwards, 2014 [2] https://blogs.cardiff.ac.uk/acerringtonlab/ca1-pyramidal-neuron-red-hot/ [2]
  16. 16. Spiking Neural Network (SNN) SNN as a neuromorphic neural network model 02
  17. 17. Overview SNNs operate using spikes, which are discrete events that take place at points in time, rather than continuous values. [1] Anwani and Rajendran, 2015, IJCNN [1] Components • Spiking neuron model • Synapse • Synaptic plasticity
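      As a quick illustration of that difference (my own sketch, not from the slides), a continuous firing rate can be turned into a train of discrete spike events:

          import numpy as np

          # Sketch: convert a firing rate (Hz) into a discrete spike train by drawing
          # one Bernoulli event per time step. All values here are illustrative.
          dt = 0.001        # 1 ms time step
          rate = 40.0       # target firing rate in Hz
          duration = 1.0    # seconds

          rng = np.random.default_rng(0)
          steps = int(duration / dt)
          spike_train = rng.random(steps) < rate * dt   # True where a spike occurs
          spike_times = np.nonzero(spike_train)[0] * dt

          print(f"{spike_train.sum()} spikes in {duration} s (expected ~{rate * duration:.0f})")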
  18. 18. Spiking Neuron Model - Leaky Integrate-and-Fire (LIF) Model A spiking neuron model is a mathematical description of the properties of certain cells in the nervous system that generate sharp electrical potentials across their cell membrane, roughly one millisecond in duration. Characteristics: subthreshold leaky-integrator dynamics, a firing threshold, and a reset mechanism. Resistor-Capacitor (RC) circuit [1] [1] Teka, W. et al., 2014, PLoS Comput Biol. [Appendix 1] https://www.youtube.com/watch?v=2_MIjvwWsrg [Appendix 2] https://www.youtube.com/watch?v=KXnHxZdn8NU
  19. 19. Spiking Neuron Model - Leaky Integrate-and-Fire (LIF) Model A spiking neuron model is a mathematical description of the properties of certain cells in the nervous system that generate sharp electrical potentials across their cell membrane, roughly one millisecond in duration. Characteristics: subthreshold leaky-integrator dynamics, a firing threshold, and a reset mechanism. Leaky Integrate-and-Fire model [1] [1] Louis Lapicque, 1907, Journal de Physiologie et de Pathologie Générale. [Appendix 1] https://www.youtube.com/watch?v=2_MIjvwWsrg [Appendix 2] https://www.youtube.com/watch?v=KXnHxZdn8NU
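      The three characteristics above can be written out in a few lines. Below is a minimal LIF sketch in plain Python/NumPy; the time constants, threshold, and input current are illustrative assumptions, not parameters from the talk:

          import numpy as np

          dt = 1e-4             # simulation time step (s)
          tau_m = 0.02          # membrane time constant (s)
          v_rest = 0.0          # resting potential
          v_threshold = 1.0     # firing threshold
          v_reset = 0.0         # reset potential after a spike
          t_refractory = 0.002  # refractory period (s)

          def simulate_lif(input_current, dt=dt):
              """Integrate an input current; return the voltage trace and spike times."""
              v = v_rest
              refractory_until = 0.0
              voltages, spikes = [], []
              for step, i_in in enumerate(input_current):
                  t = step * dt
                  if t < refractory_until:
                      v = v_reset                      # hold at reset during the refractory period
                  else:
                      v += dt / tau_m * (i_in - v)     # leaky integration toward the input
                      if v >= v_threshold:             # threshold crossing emits a spike
                          spikes.append(t)
                          v = v_reset
                          refractory_until = t + t_refractory
                  voltages.append(v)
              return np.array(voltages), spikes

          # A constant suprathreshold current produces regular spiking.
          voltage_trace, spike_times = simulate_lif(np.full(10000, 1.5))
          print(len(spike_times), "spikes in 1 s")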
  20. 20. Synapse Model The synapse model acts as the input current stimulation to the spiking neuron model. [1] Dutta, S. et al., 2017, Scientific reports [1]
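      A common way to model this input current is an exponentially decaying synaptic current that is kicked up by each presynaptic spike. A minimal sketch, with illustrative values for the weight and time constant (my assumptions, not the talk's):

          import numpy as np

          dt = 1e-4        # time step (s)
          tau_syn = 0.005  # synaptic time constant (s)
          w = 0.8          # synaptic weight

          def synaptic_current(pre_spike_train):
              """Turn a binary presynaptic spike train into a postsynaptic input current."""
              i_syn = 0.0
              current = []
              for spiked in pre_spike_train:
                  i_syn *= np.exp(-dt / tau_syn)   # exponential decay between spikes
                  if spiked:
                      i_syn += w                   # each spike adds a weighted kick
                  current.append(i_syn)
              return np.array(current)

          # Example: a sparse random spike train filtered into a smooth current.
          rng = np.random.default_rng(0)
          spike_train = rng.random(2000) < 0.02    # ~2% chance of a spike per step
          i_post = synaptic_current(spike_train)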
  21. 21. Synaptic Plasticity - Learning in the Brain
      To reduce punishment / to improve knowledge (reward(?))
      ANN: output, target, error function / loss function, learning rate, error signal [1]
      SNN: • Unsupervised learning (fire together, wire together): STDP learning, BCM learning • Supervised learning (local error propagation): TP learning, PES learning
      [1] Timothy P. Lillicrap et al., 2020, Nat Rev Neurosci.
  22. 22. Unsupervised Learning - Spike Timing Dependent Plasticity (STDP) The Spike Timing Dependent Plasticity (STDP) algorithm, which has been observed in the mammalian brain, modulates the weight of a synapse based on the relative timing of presynaptic and postsynaptic spikes. [1-3] [Diagram: a pre/post spike pair separated by Δt (ms) and the resulting STDP weight-change curve] [1] Wang, R. et al., 2016, ISCAS [2] Gerstner et al., 1996, Nature [3] Bi and Poo, 1998, Journal of Neuroscience
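      A hedged sketch of the pair-based STDP update: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. The amplitudes and time constants are illustrative assumptions:

          import numpy as np

          A_plus = 0.01      # maximum potentiation
          A_minus = 0.012    # maximum depression
          tau_plus = 0.020   # potentiation time constant (s)
          tau_minus = 0.020  # depression time constant (s)

          def stdp_delta_w(t_pre, t_post):
              """Weight change for one pre/post spike pair (times in seconds)."""
              dt = t_post - t_pre
              if dt >= 0:
                  # pre fires before post: potentiation (LTP)
                  return A_plus * np.exp(-dt / tau_plus)
              # post fires before pre: depression (LTD)
              return -A_minus * np.exp(dt / tau_minus)

          print(stdp_delta_w(0.010, 0.015))   # pre leads post by 5 ms -> positive change
          print(stdp_delta_w(0.015, 0.010))   # post leads pre by 5 ms -> negative change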
  23. 23. Unsupervised Learning - Bienenstock, Cooper & Munro (BCM) The BCM model proposes a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, and states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity. [1] Bienenstock, Cooper & Munro 1982 J Neurosci Bienenstock, Cooper & Munro (BCM) learning [1] Learning in visual cortex BCM model
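      For comparison, a toy sketch of the BCM idea with a single linear neuron: the weight change is x·y·(y − θ), and the threshold θ slides with the time-averaged squared postsynaptic activity. All constants here are illustrative assumptions:

          import numpy as np

          eta = 1e-3        # learning rate
          tau_theta = 50.0  # averaging window (steps) for the sliding threshold

          w, theta = 0.5, 0.0
          rng = np.random.default_rng(0)
          for step in range(1000):
              x = rng.random()                         # presynaptic rate
              y = w * x                                # postsynaptic rate (linear neuron)
              theta += (y ** 2 - theta) / tau_theta    # sliding threshold tracks <y^2>
              w += eta * x * y * (y - theta)           # LTP if y > theta, LTD if y < theta
          print(w, theta)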
  24. 24. Supervised Learning - Target Propagation (TP) output target Local layer-wise errors Hypothesis • The essential idea behind using a stack of auto-encoders for deep learning • This backward-propagated target induces hidden-activity targets that should have been realized by the network • Learning proceeds by updating the forward weights to minimize these local layer-wise activity differences Target propagation (TP) learning [1] [1] Timothy P. Lillicrap et al., 2020, Nat Rev Neurosci.
  25. 25. Supervised Learning - Prescribed Error Sensitivity (PES) A connection from x to y learns to output y* by minimizing |y* − y|. Prescribed Error Sensitivity (PES) learning [2] [1] Timothy P. Lillicrap et al., 2020, Nat Rev Neurosci. [2] Voelker, A. R., 2015, Centre for Theoretical Neuroscience
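      The PES update itself is simple: each neuron's decoder is nudged against the error in proportion to that neuron's activity. A minimal NumPy sketch (shapes and the learning rate are my assumptions; Nengo applies this rule for you, as shown later):

          import numpy as np

          kappa = 1e-4                       # learning rate
          n_neurons, dims = 50, 1

          rng = np.random.default_rng(0)
          decoders = rng.normal(scale=1e-3, size=(dims, n_neurons))

          def pes_update(decoders, activities, error):
              """One PES step: decoders += -kappa * error (outer) activities / n."""
              return decoders - kappa * np.outer(error, activities) / n_neurons

          # One step: given neuron activities and error = y - y*, nudge the decoders.
          activities = rng.random(n_neurons) * 100   # firing rates (Hz)
          error = np.array([0.3])                    # decoded output minus target
          decoders = pes_update(decoders, activities, error)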
  26. 26. Modeling of Spiking Neural Network (SNN) Python Nengo library for SNN modeling 03
  27. 27. Nengo Library Nengo ("Brain Maker") is a Python package for building, testing, and deploying neural networks, based on the Neural Engineering Framework (NEF). [1] https://www.nengo.ai/ [1]
  29. 29. Nengo Tutorial
      Installation:
          !pip install nengo
      Usage - build a network:
          import nengo
          import numpy as np

          net = nengo.Network()
          with net:
              sin_input = nengo.Node(output=np.sin)
              input_neuron = nengo.Ensemble(n_neurons=4, dimensions=1)
              nengo.Connection(sin_input, input_neuron)
      [Diagram: sine node connected to the input ensemble]
  30. 30. Spiking Neuron Model Characteristics
      import matplotlib.pyplot as plt
      %matplotlib inline
      from nengo.dists import Choice
      from nengo.utils.ensemble import tuning_curves
      from nengo.utils.matplotlib import rasterplot

      with nengo.Simulator(net) as sim:
          plt.figure()
          plt.plot(*tuning_curves(input_neuron, sim))  # ensemble built on the previous slide
          plt.xlabel("input value")
          plt.ylabel("firing rate")
          plt.xlim(-1, 1)
          plt.title(str(nengo.LIF()))
          sim.run(5.0)
  31. 31. Neural Dynamics Characteristics
      input_neuron = nengo.Ensemble(intercepts=[-.5])
      [Plot: tuning curves (firing rate vs. input value) for intercepts = -0.5, 0, and 0.5]
  32. 32. Neural Dynamics Characteristics
      input_neuron = nengo.Ensemble(intercepts=[0], encoders=[[-1]])
      [Plot: tuning curves for encoders=[[-1]] with intercepts = -0.5, 0, and 0.5]
  33. 33. Neural Dynamics Characteristics
      input_neuron = nengo.Ensemble(intercepts=[0], encoders=[[-1]], max_rates=[100])
      [Plot: tuning curves for max_rates = 10, 100, and 200]
  34. 34. Neural Dynamics Characteristics
      input_neuron = nengo.Ensemble(intercepts=[0], encoders=[[-1]], max_rates=[100], radius=1)
      [Plot: tuning curves for radius = 1, 2, and 10]
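      The slides above show only the parameter being changed. A runnable sketch that sweeps all four parameters and overlays the resulting tuning curves (the single-neuron ensembles and specific values are my own illustrative choices) might look like:

          import matplotlib.pyplot as plt
          import nengo
          from nengo.utils.ensemble import tuning_curves

          settings = [
              dict(intercepts=[-0.5]),
              dict(intercepts=[0], encoders=[[-1]]),
              dict(intercepts=[0], encoders=[[-1]], max_rates=[100]),
              dict(intercepts=[0], encoders=[[-1]], max_rates=[100], radius=2),
          ]

          net = nengo.Network()
          with net:
              ensembles = [
                  nengo.Ensemble(n_neurons=1, dimensions=1, **kwargs) for kwargs in settings
              ]

          with nengo.Simulator(net) as sim:
              plt.figure()
              for ens, kwargs in zip(ensembles, settings):
                  x, activities = tuning_curves(ens, sim)
                  plt.plot(x, activities, label=str(kwargs))
              plt.xlabel("input value")
              plt.ylabel("firing rate (Hz)")
              plt.legend(fontsize="small")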
  35. 35. Neural Decoding Characteristics
      with net:
          sin_input = nengo.Node(np.sin)
          input_layer = nengo.Ensemble(n_neurons=2, dimensions=1,
                                       intercepts=[-.5, -.5],
                                       encoders=[[1], [-1]],
                                       max_rates=[100, 100])
          nengo.Connection(sin_input, input_layer)
      [Plot: tuning curves (firing rate vs. input value) of the on/off neuron pair]
  36. 36. Neural Decoding Prober
      with net:
          sin_probe = nengo.Probe(sin_input)
          spikes = nengo.Probe(input_layer.neurons)
          filtered = nengo.Probe(input_layer, synapse=0.01)

      with nengo.Simulator(net) as sim:  # rebuild and run so the new probes collect data
          sim.run(10.0)

      t = sim.trange()
      # Plot the spiking output of the ensemble
      plt.figure(figsize=(10, 8))
      plt.subplot(2, 2, 1)
      rasterplot(t, sim.data[spikes], colors=[(1, 0, 0), (0, 0, 0)])
      plt.yticks((1, 2), ("On neuron", "Off neuron"))
      plt.ylim(2.5, 0.5)

      # Plot the decoded output of the ensemble
      plt.figure()
      plt.plot(t, sim.data[filtered])
      plt.plot(t, sim.data[sin_probe])
      plt.xlim(0, 10)
  38. 38. Neural Decoding Characteristics
      net = nengo.Network()
      with net:
          sin_input = nengo.Node(np.sin)
          input_layer = nengo.Ensemble(n_neurons=100, dimensions=1)
          nengo.Connection(sin_input, input_layer)
      [Plots: tuning curves (firing rate vs. input value) of the 100 neurons, and the decoded sine signal (input value vs. time)]
  39. 39. Image Processing - custom input function
      from urllib.request import urlretrieve  # imports added so the snippet runs as-is
      import gzip
      import pickle

      urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz")
      with gzip.open("mnist.pkl.gz") as f:
          train_data, _, test_data = pickle.load(f, encoding="latin1")
      train_data = list(train_data)

      def image_input(t):
          # MNIST image data to the model: present one flattened image per second
          img = train_data[0][int(t)]
          return img

      net = nengo.Network()
      neuron_number = 28 * 28
      with net:
          input_node = nengo.Node(image_input)
          pre_neuron = nengo.Ensemble(neuron_number, dimensions=neuron_number,
                                      max_rates=[100] * neuron_number,
                                      intercepts=[0] * neuron_number)
          nengo.Connection(input_node, pre_neuron)
  40. 40. Image Processing [Raster plot: neuron number (28×28) vs. time (s) for the MNIST-driven ensemble]
  41. 41. Voice Processing - custom input function
      def voice_input(t):
          ms = int(t * 1000)
          frame_num = int(ms / frame_size)       # frame_size and transposed_norm_S come from
          voice = transposed_norm_S[frame_num]   # spectrogram preprocessing (not shown on the slide)
          return voice

      with nengo.Network() as net:
          voice_node = nengo.Node(output=voice_input)  # renamed so the node does not shadow the function
          input_neuron = nengo.Ensemble(n_neurons=80, dimensions=1,
                                        max_rates=[100] * 80)
          nengo.Connection(voice_node, input_neuron, synapse=0.01)
          spike_probe = nengo.Probe(input_neuron)
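      The talk does not show how transposed_norm_S and frame_size are produced. One plausible preprocessing step, assuming librosa, a 16 kHz recording, and an 80-band mel spectrogram (all of these are my assumptions, chosen to match the 80 input neurons above), could be:

          import librosa
          import numpy as np

          # Hypothetical preprocessing for the voice demo: load audio, compute an
          # 80-band mel spectrogram, normalize to [0, 1], transpose to (frames, bands).
          y, sr = librosa.load("speech.wav", sr=16000)   # file name is a placeholder
          hop_length = 160                               # 10 ms hops at 16 kHz
          S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80, hop_length=hop_length)
          S_db = librosa.power_to_db(S, ref=np.max)
          norm_S = (S_db - S_db.min()) / (S_db.max() - S_db.min())
          transposed_norm_S = norm_S.T                   # shape: (n_frames, 80)
          frame_size = 1000 * hop_length / sr            # frame duration in milliseconds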
  43. 43. Unsupervised Learning BCM learning
      net = nengo.Network()
      with net:
          sin = nengo.Node(lambda t: np.sin(t * 4))
          pre = nengo.Ensemble(100, dimensions=1)
          post = nengo.Ensemble(100, dimensions=1)
          nengo.Connection(sin, pre)
          # BCM adapts the full neuron-to-neuron weight matrix, so solve for full weights
          conn = nengo.Connection(pre, post,
                                  solver=nengo.solvers.LstsqL2(weights=True))
          conn.learning_rule_type = nengo.BCM(learning_rate=5e-10)
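      To actually watch the rule act, one can add probes and run the simulator. A short usage sketch (the probe names and run time are my own choices):

          with net:
              pre_probe = nengo.Probe(pre, synapse=0.01)     # decoded pre activity
              post_probe = nengo.Probe(post, synapse=0.01)   # decoded post activity
              weight_probe = nengo.Probe(conn, "weights", synapse=0.01, sample_every=0.01)

          with nengo.Simulator(net) as sim:
              sim.run(20.0)

          # sim.data[weight_probe] holds snapshots of the evolving weight matrix;
          # sim.data[post_probe] shows how the decoded output drifts as BCM reshapes the weights.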
  44. 44. Supervised Learning Without PES learning
      from nengo.processes import WhiteSignal  # import added so the snippet runs as-is

      with net:
          noise_input = nengo.Node(WhiteSignal(60, high=5), size_out=1)
          input_layer = nengo.Ensemble(60, dimensions=1)
          output_layer = nengo.Ensemble(60, dimensions=1)
          nengo.Connection(noise_input, input_layer)
          conn = nengo.Connection(input_layer, output_layer)

      with nengo.Simulator(net) as sim:  # was Simulator(model); the network here is named net
          sim.run(10.0)
      [Diagram: Noise node → Input ensemble → Output ensemble]
  45. 45. Supervised Learning With PES learning
      with net:
          noise_input = nengo.Node(WhiteSignal(60, high=5), size_out=1)
          input_layer = nengo.Ensemble(60, dimensions=1)
          output_layer = nengo.Ensemble(60, dimensions=1)
          nengo.Connection(noise_input, input_layer)
          conn = nengo.Connection(input_layer, output_layer)

          error_neuron = nengo.Ensemble(60, dimensions=1)
          nengo.Connection(output_layer, error_neuron)            # error = output - input
          nengo.Connection(input_layer, error_neuron, transform=-1)

          conn.learning_rule_type = nengo.PES()
          nengo.Connection(error_neuron, conn.learning_rule)      # error signal drives the PES rule

      with nengo.Simulator(net) as sim:  # was Simulator(model)
          sim.run(10.0)
      [Diagram: Noise node → Input → Output; the Error ensemble receives Output and -1 × Input and drives the learning rule]
  48. 48. Keras Model Converting [1] [1] https://towardsdatascience.com/mnist-handwritten-digits-classification-using-a-convolutional-neural-network-cnn-af5fafbc35e9
  49. 49. Keras Model Converting MNIST model converting
      converter = nengo_dl.Converter(
          model, swap_activations={tf.nn.relu: nengo.RectifiedLinear()}
      )  # closing brace was missing; epochs is passed to fit() below, not to the Converter
      epochs = 2

      with nengo_dl.Simulator(converter.net, seed=0, minibatch_size=200) as sim:
          sim.compile(
              optimizer=tf.optimizers.RMSprop(0.001),
              loss={
                  converter.outputs[dense1]: tf.losses.SparseCategoricalCrossentropy(
                      from_logits=True
                  )
              },
              metrics={converter.outputs[dense1]: tf.metrics.sparse_categorical_accuracy},
          )
          # inp and dense1 are the input layer and final Dense layer of the Keras model above
          sim.fit(
              {converter.inputs[inp]: train_images},
              {converter.outputs[dense1]: train_labels},
              epochs=epochs,
          )
          sim.save_params("./mnist_model")
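      After training, the same Keras model can be converted again with a spiking activation for inference. A hedged follow-up sketch (SpikingRectifiedLinear, scale_firing_rates, synapse, n_steps, and test_images flattened to shape (n, 784) are my assumptions based on typical NengoDL usage, not the talk's code):

          import numpy as np

          n_steps = 30  # present each image for 30 timesteps so spikes can accumulate
          tiled_test_images = np.tile(test_images[:, None, :], (1, n_steps, 1))

          spiking_converter = nengo_dl.Converter(
              model,
              swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
              scale_firing_rates=100,  # higher rates trade energy for accuracy
              synapse=0.01,            # smooth the spike trains between layers
          )

          with nengo_dl.Simulator(spiking_converter.net, minibatch_size=200) as sim:
              sim.load_params("./mnist_model")  # parameters saved after training above
              data = sim.predict({spiking_converter.inputs[inp]: tiled_test_images})
              # classify from the last timestep, once the spiking estimate has settled
              predictions = data[spiking_converter.outputs[dense1]][:, -1].argmax(axis=-1)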
  50. 50. Keras Model Converting
  51. 51. Applications of SNN Deep SNN models 04
  52. 52. Solving XOR Problem It is known that the XOR problem cannot be solved with a single-layer perceptron, but a Nengo-based SNN can solve it with only a single layer (a minimal sketch follows below). [1] [2] [1] Gidon et al., 2020, Science [2] https://github.com/sunggukcha/xor [3] https://www.nengo.ai/examples/ [3]
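      One way to see the point in Nengo (my own construction, not the code behind refs [1-3]): encode the two XOR inputs as ±1, represent the pair in a single 2-D ensemble, and decode the product, which for ±1 inputs is exactly negated XOR. Neuron counts, radius, and timing are illustrative assumptions:

          import nengo

          inputs = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # (False, False), (False, True), ...

          net = nengo.Network(seed=0)
          with net:
              stim = nengo.Node(lambda t: inputs[int(t) % 4])   # hold each input pair for 1 s
              hidden = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.5)
              out = nengo.Ensemble(n_neurons=50, dimensions=1)
              nengo.Connection(stim, hidden)
              # Decoders of a single ensemble can approximate the nonlinear product.
              nengo.Connection(hidden, out, function=lambda x: -x[0] * x[1])
              out_probe = nengo.Probe(out, synapse=0.05)

          with nengo.Simulator(net) as sim:
              sim.run(4.0)

          # Sample the decoded output near the end of each 1 s window.
          for i, pair in enumerate(inputs):
              t_idx = int((i + 0.9) / sim.dt)
              print(pair, "->", sim.data[out_probe][t_idx, 0])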
  53. 53. Permuted Sequential MNIST On the permuted sequential MNIST task, where the pixels of each digit are fed to the network one at a time in a fixed shuffled order, the Nengo SNN-based LMU (Legendre Memory Units) showed SOTA performance. [2] [1] https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data/wiki/MNIST-digits-stroke-sequence-data [2] Voelker, A. et al., 2019, NeurIPS [3] https://www.nengo.ai/examples/
  54. 54. Large Scale Virtual Brain Simulation Methods • Semantic Pointer Architecture Unified Network (SPAUN) • Using Nengo • 2.5 million LIF neurons • Success on 8 diverse tasks • Copy drawing style • Image recognition • Reinforcement learning • Serial working memory • Counting • Question Answering • Rapid variable creation • Fluid reasoning [1] Eliasmith et al., 2012, Science [1]
  56. 56. Future of SNN Neuromorphic chip 05
  57. 57. Neuromorphic Advantages Advantages • Sparsification over time → Less communication • Less communication → Fewer memory lookups • Cheaper computation → Sum instead of multiply [1] Jeehyun Kwak and Hyun Jae Jang, Neural Computation Lab (NCL), Korea Univ. [1]
  58. 58. Neuromorphic Advantages Neuromorphic Processing Unit [1] Eliasmith and Suma, The Neuromorphic Advantage, Applied Brain Research (ABR) [1] Intel Loihi Chip
  59. 59. Neuromorphic Advantages [1] Eliasmith and Suma, The Neuromorphic Advantage, Applied Brain Research (ABR) [1]
  60. 60. Neuromorphic Advantages [1] Eliasmith et al., 2016, arXiv [2] Jang, H. J. et al., 2020, Science Advances [1] [2] 3D neuron model
  61. 61. Computational Neuroscience [1] Trappenberg, T. P., 2009, Fundamentals of computational neuroscience, OUP Oxford [1]
  62. 62. Thank you :-) Conversations about SNNs are always welcome. bananaband657@gmail.com
