This document describes constructing a Monte Carlo model of a multi-population neural network for comparison with mean field and population density methods. It covers modeling neural activity across populations with different physiological characteristics. Simulation results show the Monte Carlo method can accurately model population interactions and parameter variations, making it suitable for testing population density methods. The document concludes that additional physiological variables should be included in future simulations.
Artificial neural networks are computational models inspired by the human brain. They are composed of interconnected nodes that process information using a technique called machine learning. This report discusses the basic components of neural networks including neurons, layers, and training methods. It also provides examples of using neural networks to learn and implement simple logic functions like AND, OR, NAND, and NOR gates. The code shows how neural networks can be built and trained in MATLAB to recognize patterns in input data and produce the correct output.
A STDP RULE THAT FAVOURS CHAOTIC SPIKING OVER REGULAR SPIKING OF NEURONS (ijaia)
We compare the number of states of a Spiking Neural Network (SNN) composed of chaotic spiking neurons with the number of states of an SNN composed of regular spiking neurons, with both SNNs implementing a Spike Timing Dependent Plasticity (STDP) rule that we created. We find that this STDP rule favors chaotic spiking, since the number of states is larger in the chaotic SNN than in the regular SNN. This chaotic favorability is not general; it is exclusive to this STDP rule only. This research falls under our long-term investigation of STDP and chaos theory.
Quantum brain: a recurrent quantum neural network model to describe eye trac... (Elsa von Licy)
This document proposes a theoretical quantum brain model called a Recurrent Quantum Neural Network (RQNN) to describe eye movements when tracking moving targets. The model suggests that a quantum process in the brain mediates the collective response of neurons. When simulating the model, two phenomena are observed: 1) as eye sensor data is processed, a wave packet is triggered in the quantum brain that moves like a particle, and 2) when tracking a fixed target, this wave packet moves discretely rather than continuously, resembling saccadic eye movements. The model precisely predicts eye movements, performing better than classical models like the Kalman filter.
Spiking Neural Networks As Continuous-Time Dynamical Systems: Fundamentals, E... (IDES Editor)
This article presents a very simple and effective analog spiking neural network simulator, realized with an event-driven method and taking into account a basic biological neuron parameter: spike latency. Other fundamental biological parameters are also considered, such as subthreshold decay and the refractory period. This model makes it possible to synthesize neural groups able to carry out some substantial functions. The proposed simulator is applied to elementary structures, in which some properties and interesting applications are discussed, such as the realization of a Spiking Neural Network Classifier.
Modeling Stochasticity and Gap Junction Dynamics: Integrate and Fire Model (dharmakarma)
In this presentation, we describe a mathematical model of the stochasticity of firing neurons, based on a modified integrate-and-fire model that incorporates gap junction potential.
An artificial neural network (ANN) is a piece of a computing system designed to simulate the way the human brain analyzes and processes information. It is the foundation of artificial intelligence (AI) and solves problems that would prove impossible or difficult by human or statistical standards. ANNs have self-learning capabilities that enable them to produce better results as more data becomes available.
Boundness of a neural network weights using the notion of a limit of a sequence (IJDKP)
A feed forward neural network with backpropagation learning algorithm is considered a black box learning classifier, since there is no certain interpretation or anticipation of the behavior of the neural network weights. The weights of a neural network are considered the learning tool of the classifier, and the learning task is performed by the repeated modification of those weights. This modification is performed using the delta rule, which is mainly used in the gradient descent technique. In this article a proof is provided that helps to understand and explain the behavior of the weights in a feed forward neural network with backpropagation learning algorithm. It also illustrates why a feed forward neural network is not always guaranteed to converge to a global minimum. Moreover, the proof shows that the weights in the neural network are upper bounded (i.e. they do not approach infinity). Keywords: Data Mining, Delta
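As a minimal sketch of the delta rule the abstract refers to (our own illustration, not code from the cited article; the toy target function and learning rate are made up), gradient descent nudges each weight in proportion to the prediction error:

```python
def delta_rule_epoch(w, samples, lr=0.05):
    """One gradient-descent epoch using the delta rule: w_i += lr * (target - y) * x_i."""
    for x, target in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))          # linear unit output
        err = target - y                                  # prediction error
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # delta-rule update
    return w

# Toy example: learn y = 2*x1 - x2 from three samples.
samples = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
w = [0.0, 0.0]
for _ in range(200):
    w = delta_rule_epoch(w, samples)
print(w)  # approaches [2.0, -1.0]; the weights stay bounded rather than diverging
```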
This document provides an overview of neural networks. It discusses that neural networks are composed of interconnected processing units similar to neurons in the brain. Neural networks can learn patterns from examples through training and are well-suited for problems that are difficult to solve with traditional algorithms. The document outlines common neural network architectures like feedforward and feedback networks. It also discusses neural network learning methods and applications.
A Threshold Logic Unit (TLU) is a mathematical function conceived as a crude model, or abstraction of biological neurons. Threshold logic units are the constitutive units in an artificial neural network. In this paper a positive clock-edge triggered T flip-flop is designed using Perceptron Learning Algorithm, which is a basic design algorithm of threshold logic units. Then this T flip-flop is used to design a two-bit up-counter that goes through the states 0, 1, 2, 3, 0, 1… Ultimately, the goal is to show how to design simple logic units based on threshold logic based perceptron concepts.
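As a minimal sketch of the Perceptron Learning Algorithm the summary names (our own illustration, not the paper's flip-flop design; the AND-gate task and learning rate are made up), a single threshold logic unit can be trained as follows:

```python
def train_tlu(samples, epochs=20, lr=0.1):
    """Perceptron learning rule for a threshold logic unit: w += lr * (target - output) * x."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0  # hard threshold at 0
            err = target - output
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

# Learn the AND gate as a single threshold logic unit.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_tlu(and_samples)
print([1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0 for x, _ in and_samples])  # [0, 0, 0, 1]
```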
This document describes a backpropagation algorithm for training second-order feedforward neural networks. It defines the architecture of these networks, which include first and second-order connections between units. The backpropagation algorithm is extended from traditional first-order networks to compute gradients and update both first and second-order weights during training. These networks are theoretically capable of universal function approximation like first-order networks. The document outlines the real and complex versions of the backpropagation algorithm for training these second-order neural networks.
This study analyzed spike train data recorded from neurons in the dorsolateral prefrontal cortex (DLPFC) of a monkey performing a working memory task. Spike train distance metrics were applied to quantify how information about the task was encoded temporally. Optimal parameters were identified for single-unit and multi-unit analyses. Information encoding was found to vary across time intervals of the task, with some neuron pairs showing higher information at different times. Visualizations using t-SNE helped demonstrate that target location could be decoded from spike train distances. The study helps quantify temporal encoding in the DLPFC during working memory tasks.
Artificial neural networks (ANNs) are modeled after the human brain and are useful for problems involving vision, speech recognition, and other tasks brains are good at. They consist of interconnected nodes that receive and process input signals to produce an output. While ANNs have been studied since the 1940s, the development of the backpropagation algorithm in 1986 allowed networks with many layers, or "deep" networks, to be trained effectively, leading to recent advances in deep learning.
This document provides an overview of deep learning and some key concepts in neural networks. It discusses how neural networks work by taking inputs, multiplying them by weights, applying an activation function, and using backpropagation to update the weights. It describes common activation functions like sigmoid and different types of neural networks like CNNs and RNNs. For CNNs specifically, it explains concepts like convolution using filters, padding input images to prevent information loss, and max pooling layers to make predictions invariant to position or scale.
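As a minimal sketch of the convolution and max-pooling operations described above (our own illustration; the tiny image and edge filter are made up):

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution of a filter over an image (cross-correlation, as in most CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keeps the strongest response in each window."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[0, 0, 1, 1]] * 4          # 4x4 image with a dark-to-bright vertical edge
fmap = conv2d(image, [[-1, 1]])     # filter responds where brightness increases rightward
print(max_pool(fmap))               # pooled responses are position-tolerant: [[1], [1]]
```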
Deep neural networks & computational graphs (Revanth Kumar)
This document summarizes a presentation on deep neural networks and computational graphs. It discusses how neural networks work using an example of a network with inputs, hidden layers, and an output. It also explains key concepts like activation functions, backpropagation for updating weights, and how the chain rule is applied in backpropagation. Computational graphs are introduced as a way to represent mathematical expressions and evaluate gradients to train neural networks.
This document outlines a study on the entanglement dynamics of a two-qubit system in a spin star configuration. The introduction discusses how quantum phenomena become important at small scales relevant to nanotechnology. It notes that the merging of classical information theory and quantum mechanics created the field of quantum information theory. The study will examine the exact reduced dynamics of the two-qubit system as it interacts with an environment and how entanglement evolves over time.
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR (ijsptm)
We present a detailed and in-depth analysis of Elementary Cellular Automata (ECA) with periodic cylindrical configuration. The focus is to determine whether Cellular Automata (CA) are suitable for the generation of pseudo random number sequences (PRNs) of cryptographic strength. Additionally, we identify the rules that are most suitable for such applications. It is found that only two sub-clusters of the chaotic rule space are actually capable of producing viable PRNs. Furthermore, these two sub-clusters consist of two majorly non-linear rules. Each sub-cluster of rules is derived from a cluster leader rule by reflection or negation or the two transformations combined. It is shown that the members of each sub-cluster share the same dynamical behavior. Results of testing the ECA running under these rules for a comprehensively large number of lattice lengths using the Diehard Test suite have shown that, apart from some anomaly, the whole output sequence can potentially be utilized for cryptographic-strength pseudo random sequence generation with sufficiently large p-value pass rates.
This document provides an overview of self-organizing maps (SOM) as an unsupervised learning technique. It discusses the principles of self-organization including self-amplification, competition, and cooperation. The Willshaw-von der Malsburg model and Kohonen feature maps are presented as two approaches to building topographic maps through self-organization. The Kohonen SOM learning algorithm is described as involving competition between neurons to determine a winning neuron, cooperation between neighboring neurons, and adaptive changes to synaptic weights based on Hebbian learning principles.
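As a minimal sketch of the competition, cooperation, and adaptation cycle described above (our own illustration; the 1-D lattice, learning rate, and neighborhood width are arbitrary choices):

```python
import math
import random

def som_step(weights, x, lr, sigma):
    """One Kohonen SOM update: competition, cooperation, then synaptic adaptation."""
    # Competition: the winning neuron is the one whose weight vector is closest to x.
    winner = min(range(len(weights)),
                 key=lambda j: sum((wk - xk) ** 2 for wk, xk in zip(weights[j], x)))
    for j in range(len(weights)):
        # Cooperation: a Gaussian neighborhood on a 1-D lattice scales each neuron's update.
        h = math.exp(-((j - winner) ** 2) / (2.0 * sigma ** 2))
        # Adaptation: Hebbian-style move of the weight vector toward the input.
        weights[j] = [wk + lr * h * (xk - wk) for wk, xk in zip(weights[j], x)]

neurons = [[random.random(), random.random()] for _ in range(10)]  # 10 neurons, 2-D inputs
for _ in range(1000):
    som_step(neurons, [random.random(), random.random()], lr=0.1, sigma=2.0)
```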
This document describes research applying artificial neural networks to magnetotelluric data to determine subsurface layer structures. Key points:
- Researchers developed a three-layer neural network model trained with backpropagation to locate subsurface layers from magnetotelluric data. Resilient propagation training was found to be most effective.
- The network was trained on synthetic 1D magnetotelluric data for different layer resistivities and thicknesses, and tested on synthetic and real field data.
- Results showed the neural network approach produced fast, accurate, and objective estimates of subsurface resistivity and depth that correlated well with conventional serial algorithms. This validated neural networks as a useful tool for magnetotelluric inversion and
Part 1 of the Deep Learning Fundamentals Series, this session discusses the use cases and scenarios surrounding Deep Learning and AI; reviews the fundamentals of artificial neural networks (ANNs) and perceptrons; discusses the basics of optimization, beginning with the cost function, gradient descent, and backpropagation; and covers activation functions (including Sigmoid, TanH, and ReLU). The demos included in these slides run on Keras with a TensorFlow backend on Databricks.
Fractals in Small-World Networks With Time Delay (Xin-She Yang)
This document analyzes the effect of time delay on the fractal dimension of small-world networks. It develops an analytical approach and uses numerical simulations. The key findings are:
1) A delay differential equation is derived to model how influence spreads over the network with time delay.
2) Analytical expressions show that time delay reduces the fractal dimension and makes the network behave more like a large world.
3) Numerical simulations match the analytical results and show time delay significantly lowers the fractal dimension, especially for larger delays.
The document provides an introduction to artificial neural networks (ANNs). It discusses that ANNs are inspired by biological neural systems and composed of interconnected computing units called neurons that can learn from examples like the human brain. There are two main reasons for building ANNs: to solve problems requiring parallel processing like character recognition, and to better understand natural information processing by simulating brain functions. ANNs can be used to model how biological systems like the human brain work in various cognitive tasks and sensory processes.
- Artificial neural networks are inspired by biological neural networks and try to mimic their learning mechanisms by modifying synaptic strengths through an optimization process.
- Learning in neural networks can be formulated as a function approximation task where the network learns to approximate a function by minimizing an error measure through optimization of synaptic weights.
- A single hidden layer neural network is capable of learning nonlinear function approximations if general optimization methods are applied to update the synaptic weights.
This document provides an overview of neural networks. It discusses how neural networks were inspired by biological neural systems and attempt to model their massive parallelism and distributed representations. It covers the perceptron algorithm for learning basic neural networks and the development of backpropagation for learning in multi-layer networks. The document discusses concepts like hidden units, representational power of neural networks, and successful applications of neural networks.
Samrat Sengupta has over 15 years of experience in FMCG sales, marketing, and retail across companies like USL-Diageo, Pepsico, ITC, Nestle, Future Group, and Metro Cash & Carry. He is currently the Regional Trade Marketing Manager for USL-Diageo in Eastern India, responsible for over Rs. 800 crores in annual sales. Sengupta has expertise in strategic planning, sales, retail operations, customer marketing, market execution, account management, and people management. He holds an MBA from the Indian School of Business Management and Administration and professional certifications in digital marketing and networking.
This document summarizes a study that aimed to purify the calcium-binding protein regucalcin (RGN) from seminal vesicular fluid using an immunoaffinity methodology. The authors first provide background on seminal fluid composition and functions of RGN. They then describe using techniques like PCR, Western blot, SDS-PAGE and liquid chromatography-mass spectrometry to demonstrate that RGN is present in seminal vesicular fluid and to purify and characterize the protein. The authors conclude that RGN plays an important role in calcium homeostasis and cell proliferation/apoptosis, suggesting it could be a target for prostate cancer treatment.
Brian Perry, an associate director and veteran of the New Jersey Army National Guard, ran the 2015 Boston Marathon as his first marathon ever, qualifying for and completing the race. He ran in support of Team RWB, a non-profit benefitting veterans. Brian began running as a way to continue his military training and schedule, and qualified for Boston at his sister's encouragement despite his inexperience. He appreciated the crowds and atmosphere of running the iconic marathon on Patriots' Day in Massachusetts. Brian has since trained for a Half Ironman and respects the mental and physical benefits of his fitness regimen, which aligns with the mission of his charity Team RWB to connect veterans to their community through physical activity.
This document presents the life plan of Emily Belén Mejía Carabajo, a student at the Universidad de las Fuerzas Armadas ESPE. It includes sections on self-knowledge, personal mission, strategy formulation, legacy, and conclusions. The document details Emily's interests, values, and goals, as well as events that have influenced her development. Her mission is to be a trained biotechnology professional who contributes her knowledge to society.
The narrator took an overnight bus from Bogota to Zipaquira that was very cold. They fell asleep during the trip and upon arrival had broth for breakfast before collecting their bags and taking another bus further to Zipaquira, arriving around 1pm. Though tired, they went for a walk that night with their mom to explore the city. The next day they visited the salt mine, which was awesome.
This document deals with education in technology. It defines key concepts such as technology, artifact, system, innovation, and invention. It explains the relationships between technology and science, design, computing, ethics, and technological literacy. It also describes the components of education in technology, such as competencies and performance, and how they facilitate students' technological learning.
This document provides an overview of interfacing a solver with HyperStudy. It discusses HyperStudy's fundamental algorithm and the study setup process, which includes 5 phases: 1) Create Studies, 2) Create Models, 3) Create Design Variables, 4) Do Nominal Run, and 5) Create Responses. The Do Nominal Run phase sets up communication between HyperStudy and the solver by generating an input file using nominal values and checking that the solver outputs a response file.
This document is a curriculum vitae for John Richard Self that outlines his qualifications, skills, and work experience in the electrical field. He has over 15 years of experience in both commercial and domestic electrical work, including installations, maintenance, inspections, and more. He is fully qualified with several certifications and has worked on projects across various sectors for multiple companies. Currently he works permanent contracts with Barlows (UK) Ltd and has ongoing work with Links HVAC, Links Projects, and DAC Plumbing and Heating on both commercial and domestic installations.
This document discusses the purification of regucalcin (RGN), a calcium-binding protein, from seminal vesicular fluid. It introduces RGN and its functions in maintaining calcium homeostasis and roles in apoptosis and antioxidant activity. The authors describe using PCR, Western blot, and SDS-PAGE techniques to demonstrate that RGN is present in seminal fluid and confirm its identity using mass spectrometry. Their results show RGN can be purified from seminal fluid using an immunoaffinity method and identify its molecular weight. The conclusion states RGN's important roles and potential applications for developing cancer treatments.
This document discusses evaluating the effectiveness of design of experiments (DOE) and optimization techniques for a highly nonlinear structural engineering problem - retrofitting a masonry wall with steel plates and stiffeners to maximize energy absorption under blast loading. The authors discretize the design space based on available component sizes and conduct finite element simulations to obtain a complete nonlinear response surface of absorbed energy. They then apply standard DOE and optimization methods to assess how well the results match those from the full response surface. The test problem involves time-consuming simulations, nonlinear behavior, and design feasibility constraints, to gauge how tools perform on realistic engineering challenges with such complications.
The document presents information on the central nervous system as part of a human physiology class taught by Dr. Gustavo Moreno at the Universidad Técnica de Ambato to the student Selene Peñaloza in 2016.
The document discusses analyzing information flow in a computer-simulated cortical neural network model and why doing so is important. It describes how transfer entropy was used to quantify information flow within and between neurons in the model. The presence of information flow helped validate the neural network reconstruction by suggesting that information is not randomly created and destroyed at each node.
PR12-225 Discovering Physical Concepts With Neural Networks (Kyunghoon Jung)
1) The document describes a neural network called SciNet that is designed to learn physical concepts and representations from experimental observations.
2) SciNet compresses observations into a latent representation, then uses the representation to make predictions about future observations when given a question.
3) The experiments show that SciNet is able to learn fundamental physical concepts like energy, momentum, angular momentum from different example systems and use these concepts to accurately predict future system behavior.
This document describes a study using artificial neural networks (ANNs) to model complex nonlinear systems. Specifically, it discusses:
1) Using an ANN to predict pressure distributions on a rotor wing during ramping motion, with results showing accurate prediction of spatial and temporal evolution.
2) Applying the same ANN model to predict performance of a bank stock based on trends in the stock and stock market index.
3) Proposing a framework combining ANNs with mathematical models to obtain better predictions and representations of financial data trends.
Many oscillatory systems of great interest, such as networks of fireflies, neurons, and relaxation oscillators, exhibit pulsing behavior. The analysis of such oscillators has historically utilized a linear-phase model such as the Kuramoto equation to describe their dynamics. These models accurately describe the behavior of pulsing oscillators on larger timescales, but do not explicitly capture the pulsing nature of the system being analyzed. Indeed, the Kuramoto model and its derivatives abstract the pulsing dynamics and instead use a constantly advancing phase, thereby blurring the specific dynamics in order to fit an analytically tractable framework. In this thesis, a modification is presented by introducing a phase-dependence to the frequency of such oscillators. Consequently, this modification induces clear pulsing behavior, and thus introduces new dynamics such as nonlinear phase progressions that more accurately reflect the nature of systems such as neurons, relaxation oscillators, and fireflies. The analysis of this system of equations is presented, and the discovery of a heretofore unknown phenomenon termed periodic stability is described, in which the phase-locked state of the system oscillates between stability and instability at a frequency determined by the mean phase. The implications of this periodic stability on the system, such as oscillations in the coherence, or total degree of synchronization, of the oscillator's trajectories, are discussed. The theoretical predictions made by this novel analysis are simulated numerically, and extended to real experimental systems such as electrical Wien-Bridge oscillators and neurons, systems previously described using the abstract Kuramoto model. Lattices constructed using this novel model yield predictions widely observed in real biological and chemical systems, such as spiral waves. As a result, this model provides a fresh paradigm for exploring systems of coupled oscillators. The results of this work thus have clear implications for all real systems described presently by the Kuramoto model.
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec... (CSCJournals)
The document presents a study on word recognition in continuous speech using different variants of self-organizing maps (SOMs). It proposes three variants: leaky integrators neurons (LIN), spiking SOM (SSOM), and recurrent spiking SOM (RSSOM). The variants modify the learning function and best matching unit selection compared to basic SOM. An experiment applies the variants to recognize words from sentences in the TIMIT speech corpus, showing good robustness and high recognition rates.
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON (ijdpsjournal)
A critical issue in biological neural network modelling is the parameter tuning of a model by means of numerical simulations to map a real scenario. This approach requires a huge amount of computational resources to assess the impact of every model value that, generally, changes the network response. In this paper we analyse the performance of a CA1 neural network microcircuit model for pattern recognition. Moreover, we investigate its scalability and benefits on multi-core and on parallel and distributed architectures.
This document discusses a study on the electromagnetic activity produced by oscillations of microtubules in cells. Microtubules are composed of electrically polar subunits that could generate electric fields when they vibrate mechanically. The study derives the electromagnetic field produced by an oscillating electric dipole to model the microtubule subunits. It then models microtubule networks in dividing and non-dividing cells and finds that the asymmetric network in a dividing cell produces a field that decays more slowly with distance. However, the calculated field intensities are very low and difficult to detect without sophisticated methods.
Hardware Implementation of Spiking Neural Network (SNN) (supratikmondal6)
This project work was carried out under the supervision of Dr. Gaurav Trivedi (IIT Guwahati, Electrical Engineering) and under the mentorship of Mr. Ashvinikumar Pruthviraj Dongre (IIT Guwahati, PhD Scholar). In this project we have tried to implement an SNN for image classification on an FPGA by developing an efficient and realistic architecture, and by incorporating a technique of weight change according to a step-wise STDP learning curve.
Wavelet-based EEG processing for computer-aided seizure detection and epileps... (IJERA Editor)
Many neurological disorders are very difficult to detect. One such neurological disorder, which we discuss in this paper, is epilepsy. Epilepsy means a sudden change in the behavior of a human being for a short period of time, caused by seizures in the brain. Much research is under way on detecting epilepsy by analyzing EEG. One such method of epilepsy detection is proposed in this paper. This technique employs the Discrete Wavelet Transform (DWT) for pre-processing, Approximate Entropy (ApEn) to extract features, and an Artificial Neural Network (ANN) for classification. This paper presents a detailed survey of various methods that are being used for epilepsy detection and also proposes a wavelet-based epilepsy detection method.
This document provides an overview of neural networks. It discusses how the human brain works and how artificial neural networks are modeled after the human brain. The key components of a neural network are neurons which are connected and can be trained. Neural networks can perform tasks like pattern recognition through a learning process that adjusts the connections between neurons. The document outlines different types of neural network architectures and training methods, such as backpropagation, to configure neural networks for specific applications.
The document presents a method for classifying ECG signals using continuous wavelet transform (CWT) and deep neural networks. CWT is used to decompose ECG signals into different time-frequency components, which are then used to generate a scalogram image. A convolutional neural network is used to extract features from the scalogram images and classify the ECG signals into types including ARR, CHF, and NSR. The method achieves classification accuracy of over 98% on a public ECG dataset, outperforming other methods. The simple and accurate approach has potential for use as a clinical diagnostic tool.
This document summarizes a study that aimed to replicate the results of a previous paper on selectively stimulating neuronal fibers or cell bodies using different asymmetric biphasic current waveforms. The study developed a multi-compartment Hodgkin-Huxley neuronal model in MATLAB and simulated the response of populations of neurons to different stimulus waveforms. The results showed that an anodic-leading asymmetric biphasic waveform selectively activated fibers, while a cathodic-leading waveform preferentially activated cell bodies, consistent with the previous study.
Restricted Boltzmann Machine (RBM) presentation of fundamental theory (Seongwon Hwang)
The document discusses restricted Boltzmann machines (RBMs), a type of neural network that can learn probability distributions over its input data. It explains that RBMs define an energy function over hidden and visible units, with no connections between units within the same group. This conditional independence allows efficient computation of conditional probabilities. RBMs are trained using maximum likelihood, minimizing the negative log-likelihood of the training data by gradient descent.
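As a minimal sketch of the conditional independence property described above (our own illustration; the tiny layer sizes and random weights are made up), each hidden unit's conditional probability given the visible layer reduces to a sigmoid:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_hidden_given_visible(W, b_hidden, v):
    """Conditional probabilities p(h_j = 1 | v) for an RBM.

    With no hidden-hidden connections, the hidden units are conditionally
    independent given v, so the joint conditional factorizes into sigmoids.
    """
    return [sigmoid(b_hidden[j] + sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(len(b_hidden))]

random.seed(0)
W = [[random.gauss(0, 0.1) for _ in range(2)] for _ in range(3)]  # 3 visible x 2 hidden weights
print(p_hidden_given_visible(W, b_hidden=[0.0, 0.0], v=[1, 0, 1]))
```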
Complexity and Quantum Information Science (Melanie Swan)
This document discusses using quantum information science and quantum computing to model complex systems like the human brain. It proposes the "AdS/Brain Theory of Neural Signaling" which uses wavefunctions, tensor networks, and neural field theories at different scales from brain networks to molecules. Quantum computing could provide a new platform to model the brain across its nine orders of magnitude of complexity and help complete the human connectome by handling the large data and processing requirements. The AdS/Brain theory represents the first application of the AdS/CFT correspondence across multiple scales of the brain.
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin... (HayleyBoyd5)
The Neurochip-3 is a device capable of recording and stimulating brain activity in non-human primates. However, large artifacts are seen in the recorded signals following stimulation, impeding experiments focused on low-frequency neural activity. The goal is to model and eliminate this artifact problem without compromising safety, usefulness, or portability of the device. Possible solutions involve circuit modifications to prevent stimulation artifacts from entering the amplifier or altering pre-existing circuitry to improve offset recovery time.
PREDICTION FOR SHORT-TERM TRAFFIC FLOW BASED ON OPTIMIZED W... (ijcsit)
Short-term traffic forecasting has been a very important consideration in many areas of transportation research for more than 3 decades. Short-term traffic forecasting based on data-driven methods is one of the most dynamic and developing research arenas, with enormous published literature. In order to improve the forecasting accuracy of the wavelet neural network, an adaptive particle swarm optimization algorithm based on cloud theory was proposed, not only to help improve search performance, but also to speed up individual optimizing ability. The inertia weight adaptively changes depending on an X-conditional cloud generator, which has the stable tendency and randomness properties. The adaptive particle swarm optimization algorithm based on cloud theory was then used to optimize the weights and thresholds of the wavelet BP neural network, instead of the traditional gradient descent method. At last, the wavelet BP neural network was trained to search for the optimal solution. Based on the above theory, an improved wavelet neural network model based on a modified particle swarm optimization algorithm was proposed, and the availability of the modified prediction method was proved by predicting a time series of real traffic flow. Computer simulations have shown that the nonlinear fitting and accuracy of the modified prediction method are better than those of other prediction methods.
This document introduces the concept of random processes and provides examples to illustrate them. It defines a random process as a probability system composed of a sample space, an ensemble of time functions, and a probability measure. Random processes extend the concept of a random variable to incorporate the time parameter. Examples given include coin tossing, throwing a die, and thermal noise voltages across resistors. A random process is said to be stationary if its joint probability distribution is invariant to time shifts. Stationary processes have the property that the probability of waveforms passing through time-shifted windows remains the same. An example of a non-stationary process is also provided.
Towards Modeling Neural Networks with Physiologically Different Populations: Constructing a Monte-Carlo Model

By Adam Cone
VIGRE Research Project, Summer 2003
Advisor: Prof. Daniel Tranchina
Table of Contents

Abstract
Introduction
Biological Background
Mathematical Background
Monte-Carlo Network Construction
Network Construction and Representation
Translating Neural Activity into Network Stimulus
Variable Physiological Characteristics
Multi-Population Simulation Results
Mean Field and Monte-Carlo Comparisons
Physiological Parameter Variation: Testing
Conclusion
Literature Review
Appendix A
Appendix B
Abstract
Computing power is a fundamental limitation in mathematically modeling multi-population neural networks in the visual cortex, and innovative modeling techniques are continually sought to facilitate more efficient, complete simulation. Recent population-density methods have shown promise in comparisons with contemporary Monte-Carlo and mean field methods in single-population regimes [1], suggesting their potential usefulness in network modeling. To carry out comparisons in physiologically accurate network regimes, all three models must be modified and expanded to account not only for multiple external inputs, but also for network interactions and different population response parameters.

This paper details our construction of multi-population network Monte-Carlo and mean field models and critically analyses simulation results to verify their accuracy. We conclude that the Monte-Carlo method is suitable for use in future population-density method testing. Finally, we propose additional variables that Monte-Carlo and mean-field models should account for in future simulations.

Related Fields: computational neural science, numerical computing, computational biology, applied mathematics, mathematical modeling, neural networks, biomathematics
Introduction
“[A Monte Carlo method] solves a problem by generating suitable random numbers and observing that fraction of the numbers obeying some [property set]. The method is useful for obtaining numerical solutions to problems which are too complicated to solve analytically.” [2] “In the Population-Density approach, integrate-and-fire neurons are grouped into large populations of similar neurons. For each population, we form a probability density that represents the distribution of neurons over all possible [voltage] states.” [3][4] A practical difference between the two methods is that Monte Carlo simulations are applicable to all kinds of problems, whereas Population-Density simulations were developed specifically for modeling large populations of similar neurons with specific properties.

The comparison is important because identifying and using the most computationally efficient method will enable us to account for more variables, simulate more neurons for longer durations and, in short, make our simulations more “life-like”. These expanded models can improve our understanding of neural networks.
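To make the quoted definition concrete, the sketch below (our own illustration, not taken from the paper) generates random points and observes the fraction obeying a property set, here x² + y² ≤ 1, to estimate π. The Monte-Carlo network model applies the same generate-and-observe pattern to random synaptic events rather than points in a square.

```python
import random

def monte_carlo_pi(n_samples: int = 1_000_000) -> float:
    """Estimate pi from the fraction of random points that land in the unit quarter-circle."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()  # a "suitable random number" pair in the unit square
        if x * x + y * y <= 1.0:                 # the property set
            inside += 1
    return 4.0 * inside / n_samples              # fraction times area ratio recovers pi

print(monte_carlo_pi())  # approaches 3.14159... as n_samples grows
```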
[1] Cone
[2] Weisstein
[3] Nykamp and Tranchina
[4] For a thorough derivation of the Population-Density method, see ibid.
Biological Background
From a biological perspective, neurons (Figure 1)[5] are the brain’s fundamental units. A neuron’s essential function is to integrate input from various sources and either fire or not fire an action potential, the medium of inter-neuron communication, which, in turn, stimulates other neurons. Neurons interface with one another at synapses (Figure 2),[6] where the pre-synaptic neuron releases molecules called neurotransmitters that open ion channels on the post-synaptic neuron. The open ion channels allow ions from the surrounding fluid to enter the neuron, or vice versa. Since the ions carry charge, their movement is essentially an electrical current, and its effect on the neuron’s membrane voltage can be either excitatory (increasing voltage and action potential probability) or inhibitory (decreasing voltage and action potential probability). Whether a neuron ultimately fires an action potential is a function of its voltage at the axon hillock, the action potential initiation area. Each neuron has a threshold voltage and, when this threshold voltage is met or exceeded at the axon hillock, the neuron fires an action potential. Once it has fired, the neuron enters a refractory period; it cannot fire during this period while its ion concentrations are being reestablished. A neuron’s, or group of neurons’, activity is defined as the action potential firing rate. Determining neural activity for various groups of neurons is crucial in human visual cortex analysis because it indicates what processes are taking place at various locations.
Figure 1: Neuron Diagram
During its passage through the central
visual system, information from the retina,
encoded as action potentials from retinal
neurons, undergoes several relays and
transformations. Each of these relays and
transformations involves either a redirection
or re-organization, respectively, of the action-
potential-encoded information by neuron
populations, large neuron groups that are
similar in their biophysical properties and synaptic connections. These populations interact in
neural networks to perform the various operations on visual information (e.g. mapping,
rerouting, organizing, filtering, integrating, etc.) that enable us to interpret visual stimuli. Neural
networks exhibit complex behavior, and facilitate high-level processes, such as orientation
tuning. Understanding neural network behavior is one of the central projects in understanding
the visual cortex.
Figure 2: Synapse Diagram
[5] Maizels
[6] Maizels
Mathematical Background
The following integrate-and-fire neuron modeling equations are used directly to update the mean field method; they form the computational foundation for our simulations. We use adapted forms of the equations to update the Monte-Carlo simulation. Let $g_e(t)$ and $g_i(t)$ be the excitatory and inhibitory membrane conductances (nS) as functions of time, and let $\tau_e$ and $\tau_i$ be the excitatory and inhibitory decay constants (ms). In the absence of synaptic events, the excitatory and inhibitory membrane conductances decay according to the first-order differential equations

$$\frac{dg_e(t)}{dt} = -\frac{g_e(t)}{\tau_e}, \quad \tau_e \neq 0$$

$$\frac{dg_i(t)}{dt} = -\frac{g_i(t)}{\tau_i}, \quad \tau_i \neq 0.$$
If $T_k$ (ms) is the $k$th synaptic event time, $f(T_k^+)$ and $f(T_k^-)$ are the right- and left-hand limits of a function $f$ at $T_k$, and $\Gamma_e^k/\tau_e$ and $\Gamma_i^k/\tau_i$ are the random excitatory and inhibitory conductance boosts (nS) at $T_k$, then

$$g_e(T_k^+) = g_e(T_k^-) + \frac{\Gamma_e^k}{\tau_e}, \quad \tau_e \neq 0$$

$$g_i(T_k^+) = g_i(T_k^-) + \frac{\Gamma_i^k}{\tau_i}, \quad \tau_i \neq 0.$$
Combining these equations, we obtain the general equations for membrane conductance:

$$\frac{dg_e(t)}{dt} = -\frac{g_e(t)}{\tau_e} + \sum_k \frac{\Gamma_e^k}{\tau_e}\,\delta(t - T_k), \quad \tau_e \neq 0$$

$$\frac{dg_i(t)}{dt} = -\frac{g_i(t)}{\tau_i} + \sum_k \frac{\Gamma_i^k}{\tau_i}\,\delta(t - T_k), \quad \tau_i \neq 0.$$

Now, let $E_r$ (mV) be the resting membrane voltage, $V(t)$ (mV) be the membrane voltage at time $t$, let $E_e$ and $E_i$ denote the equilibrium excitatory and inhibitory voltages (mV), and let $C$ be the membrane capacitance. We model membrane voltage with the following differential equation:

$$\frac{dV(t)}{dt} = \frac{g_r[E_r - V(t)] + g_e[E_e - V(t)] + g_i[E_i - V(t)]}{C}, \quad C \neq 0.$$
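To make the update concrete, here is a minimal MATLAB sketch of advancing a single neuron through one time-step with these equations. The parameter values are illustrative, the forward-Euler voltage step is for exposition only (the code in Appendix B uses a second-order trapezoidal scheme), and the capacitance is folded into the membrane time constant, as in our simulation code:

% Sketch: advance one neuron's conductances and voltage by one step dt (ms),
% assuming no synaptic event occurs inside the step.
tau_e = 2; tau_i = 6; tau_m = 20;         % decay and membrane time constants (ms)
E_r = -70; E_e = 0; E_i = -80;            % resting and equilibrium voltages (mV)
g_e = 0.5; g_i = 0.2; V = -65; dt = 0.1;  % current state and time step
g_e = g_e*exp(-dt/tau_e);                 % conductances decay exponentially
g_i = g_i*exp(-dt/tau_i);
% voltage relaxes toward the conductance-weighted equilibrium
V = V + (dt/tau_m)*( (E_r - V) + g_e*(E_e - V) + g_i*(E_i - V) );
% at a synaptic event time T_k, the conductances jump by Gamma/tau:
Gamma_e = 0.05;                           % random boost area (illustrative)
g_e = g_e + Gamma_e/tau_e;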
Suppose that excitatory and inhibitory synaptic events are occurring with frequencies $\nu_e(t)$ (Hz) and $\nu_i(t)$ (Hz), respectively, for a population $P$, and that the events are Poisson distributed for each neuron. Then the average excitatory and inhibitory conductances $\langle g_e(t)\rangle$ and $\langle g_i(t)\rangle$ for the neurons in the population are given by

$$\frac{d\langle g_e(t)\rangle}{dt} = \frac{\nu_e(t)\langle\Gamma_e\rangle - \langle g_e(t)\rangle}{\tau_e}, \quad \tau_e \neq 0$$

$$\frac{d\langle g_i(t)\rangle}{dt} = \frac{\nu_i(t)\langle\Gamma_i\rangle - \langle g_i(t)\rangle}{\tau_i}, \quad \tau_i \neq 0.$$
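Discretizing this mean-conductance equation with the trapezoidal rule over one time-step $dt$, with input rates $\nu_1$ and $\nu_2$ at the step's two endpoints, yields the implicit update

$$\langle g\rangle_{k+1} = \frac{\langle g\rangle_k + \frac{dt}{2\tau}\left[\langle\Gamma\rangle(\nu_1+\nu_2) - \langle g\rangle_k\right]}{1 + \frac{dt}{2\tau}},$$

which is the form implemented by the mean field synaptic conductance updater in Appendix B.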
Now, when we model the evolving mean-field conductance for each population in the
network, we must account not only for synaptic input from external stimuli, but also for synaptic
events generated by network neurons. Physiologically, when a pre-synaptic neuron crosses
voltage threshold and fires an action potential, there is a random delay between the firing time
and the time at which any post-synaptic neuron experiences a resultant synaptic event.
Suppose pre-synaptic neuron A synapses to post-synaptic neuron B and that neuron A fires an action potential at a time $T_{ap}$ (ms). We want to know the time $T_{se}$ (ms) at which neuron B experiences the resultant synaptic event as a function of $T_{ap}$. Computationally, we model this delay by defining two time quantities: the minimum possible delay between action potential firing and synaptic event occurrence, $T_{min}$ (ms); and the maximum possible additional delay time, $T_{randmax}$ (ms). We compute $T_{se}$ as follows:[7]

$$T_{se} = T_{ap} + T_{min} + \mathrm{rand}\cdot T_{randmax},$$

where rand (unitless) is the MATLAB function that outputs a uniformly distributed random number between 0 and 1.
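A minimal MATLAB sketch of this delay computation (the numeric values are ours for illustration; in the Appendix B code, $T_{min}$ and $T_{randmax}$ appear as axon_delay_constant and axon_delay_rand_range):

T_ap = 12.3;      % firing time of the pre-synaptic neuron (ms)
T_min = 0.5;      % minimum possible delay (ms)
T_randmax = 1.0;  % maximum additional random delay (ms)
T_se = T_ap + T_min + rand*T_randmax; % synaptic event time at the post-synaptic neuron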
In the mean field population model, we compute the rate of synaptic input to population $\beta$ from population $\gamma$, $\nu_{\beta\gamma}(t)$ (Hz), as a function of the activity of population $\gamma$, $A_\gamma(t)$ (Hz), where $t \in \mathbb{R}$,[8] and the distribution function $\alpha$ (unitless), which inputs a synaptic delay and outputs the probability that it will occur for a given action potential. In our case, because we select the delay from a uniform distribution, $\alpha(t)$ is a piecewise constant function, namely

$$\alpha(t) = \begin{cases} 0, & t \in (-\infty,\ T_{min}) \\ \frac{1}{T_{randmax}}, & t \in [T_{min},\ T_{min}+T_{randmax}],\quad T_{randmax} \neq 0 \\ 0, & t \in (T_{min}+T_{randmax},\ \infty). \end{cases}$$

[7] A potential problem arises if our simulation time-step $dt > T_{min}$. To avoid this, we can, for example, set $T_{min} = T_{randmax} = dt$, which is physiologically accurate, so that $T_{se}$ becomes $T_{se} = T_{ap} + (1+\mathrm{rand})\,dt$.

[8] We do not further restrict $t$ because, due to the random part of the synaptic delay, action potentials from population $\gamma$ that originated at different times could affect $\nu_{\beta\gamma}$ at any given time $t$.
Finally, let $N_{\beta\gamma}$ (unitless) denote the number of synapses from $\gamma$ to $\beta$. Now we use a convolution integral as follows:

$$\nu_{\beta\gamma}(t) = N_{\beta\gamma}\int_{-\infty}^{\infty} A_\gamma(t-t')\,\alpha(t')\,dt'$$

$$= N_{\beta\gamma}\left[\int_{-\infty}^{T_{min}} A_\gamma(t-t')\,\alpha(t')\,dt' + \int_{T_{min}}^{T_{min}+T_{randmax}} A_\gamma(t-t')\,\alpha(t')\,dt' + \int_{T_{min}+T_{randmax}}^{\infty} A_\gamma(t-t')\,\alpha(t')\,dt'\right],$$

where $t'$ (ms) is the 'time ago'. Using the fact that $\alpha(t') = 0$ when $t' \notin [T_{min},\ (T_{min}+T_{randmax})]$, we obtain

$$\nu_{\beta\gamma}(t) = N_{\beta\gamma}\left[0 + \int_{T_{min}}^{T_{min}+T_{randmax}} A_\gamma(t-t')\,\frac{1}{T_{randmax}}\,dt' + 0\right] = \frac{N_{\beta\gamma}}{T_{randmax}}\int_{T_{min}}^{T_{min}+T_{randmax}} A_\gamma(t-t')\,dt'.$$

We now make the substitution

$$t^* = t - t' \ \Rightarrow\ t' = t - t^*,$$
$$t' = T_{min} + T_{randmax} \ \Rightarrow\ t^* = t - (T_{min} + T_{randmax}),$$
$$t' = T_{min} \ \Rightarrow\ t^* = t - T_{min},$$
$$\frac{dt^*}{dt'} = -1 \ \Rightarrow\ dt^* = -dt',$$

so that our new integral is

$$\nu_{\beta\gamma}(t) = -\frac{N_{\beta\gamma}}{T_{randmax}}\int_{t-T_{min}}^{t-(T_{min}+T_{randmax})} A_\gamma(t^*)\,dt^* = \frac{N_{\beta\gamma}}{T_{randmax}}\int_{t-(T_{min}+T_{randmax})}^{t-T_{min}} A_\gamma(t^*)\,dt^*.$$
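In discrete time, this final integral becomes a weighted sum over the recent activity history of population $\gamma$. A minimal MATLAB sketch under a simple rectangle rule (the variable names and values are ours for illustration; the main program in Appendix B implements the same idea with precomputed trapezoidal weights):

% Discretized synaptic input rate from population gamma to population beta.
N_bg = 500;                             % synapses from gamma to beta (illustrative)
T_min = 0.5; T_randmax = 1.0; dt = 0.1; % delay window and time step (ms)
A_hist = 20 + 5*rand(1,200);            % stand-in history: A_hist(k) = A_gamma(t - k*dt)
kmin = ceil(T_min/dt); kmax = ceil((T_min + T_randmax)/dt);
% rectangle-rule approximation of (N_bg/T_randmax)*integral of A over the delay window
nu_bg = (N_bg/T_randmax)*sum(A_hist(kmin:kmax))*dt;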
Monte-Carlo Model Construction
Our goal was to adapt the Monte-Carlo population model to simulate multi-population neural
network activity. For our purposes, a multi-population network is a group of neuron populations,
in which any neuron can, a priori, synapse to any other neuron in the network, including itself.
There were three main challenges in achieving this goal:
1) network construction and representation
2) translating neural activity to network stimulus
3) endowing different populations with different physiological characteristics
Network Construction and Representation
The first problem was to computationally define the following network features:
1) network size
2) number of populations
3) relative population sizes
4) population types (excitatory or inhibitory)
5) network connectivity
The first four quantities are relatively straightforward to implement, but the network connectivity is not. One approach is to input the data for each synapse that exists between neurons. So, for example, if we want neuron 3 to synapse to neuron 12, we could manually input those values to the computer. However, since neural networks can contain more than 100,000 neurons, and the number of synapses is far higher, this solution is impractical. Furthermore, we are not concerned with whether a synapse exists between any two specific neurons (the holistic statistical properties of network connectivity are our primary concern), so inputting values this way would be tedious. We need a more efficient method of defining the connectivity.
If one considers the neuron populations as the fundamental functional units of the network, then one can think of how the neuron populations are connected to one another without concentrating on the individual neurons. Suppose there is a network with two populations $A$ and $B$, with $a$ and $b$ neurons, respectively.[9] From a population perspective, we completely define the network by declaring the number of synapses from $A$ to $B$, $S_{AB}$; from $A$ to $A$, $S_{AA}$; from $B$ to $A$, $S_{BA}$; and from $B$ to $B$, $S_{BB}$. Equivalently, we could declare the probability that a given neuron in population $A$ synapses to a given neuron in population $B$, given by $S_{AB}/ab$, etc. Since we have defined the connectivity to our satisfaction, we want to randomly generate a network with neuron-level connectivity that satisfies our specified population-level connectivity.[10] We achieve this by generating an $N \times N$ matrix, where $N$ (unitless) is the number of neurons in the network. In this matrix,

$$x_{ij} = \begin{cases} -1, & \text{neuron } i \text{ has an inhibitory synapse to neuron } j \\ 0, & \text{neuron } i \text{ doesn't synapse to neuron } j \\ 1, & \text{neuron } i \text{ has an excitatory synapse to neuron } j. \end{cases}$$
For example, suppose we want a neural network with $N$ neurons and three populations, $A$, $B$, and $C$. We want $A$ and $B$ to be excitatory, $C$ inhibitory, and their sizes to be $a$, $b$, and $c$ (unitless), respectively. How do we define population-level connectivity? We construct a $3 \times 3$ matrix $M$ with

$$m_{ij} = P(\text{any given neuron in population } i \text{ synapses to any given neuron in population } j).$$

[9] Note that, because we assume that each neuron can synapse at most once to any other neuron, $ab$ is the total number of possible synapses from $A$ to $B$.
Now, we generate an $a \times a$ random matrix (i.e. a matrix in which each entry is a uniformly distributed real number $r \in [0,1]$). We ask the computer to perform a logical operation $L$ on each entry $r$ such that

$$L(r) = \begin{cases} 1, & r \in \left[0,\ \frac{S_{AA}}{aa}\right) \\ 0, & r \in \left[\frac{S_{AA}}{aa},\ 1\right], \end{cases} \qquad aa \neq 0,$$

so that each candidate synapse within population $A$ exists with probability $m_{AA} = S_{AA}/aa$. We now perform similar operations for each 2-population combination (i.e. $A$:$B$, $A$:$C$, $B$:$A$, $B$:$B$, etc.), and concatenate the resulting matrices. Now we have our connectivity matrix, our representation of the neural network.
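A minimal MATLAB sketch of this construction for the two-population case, with illustrative sizes and synapse counts (the connectivity matrix generator in Appendix B implements the general, multi-population version):

% Build a neuron-level connectivity matrix from population-level synapse counts.
a = 80; b = 20;          % population sizes: A (excitatory), B (inhibitory)
S_AA = 640; S_AB = 320;  % desired synapse counts (illustrative)
S_BA = 160; S_BB = 40;
blockAA =  (rand(a,a) < S_AA/(a*a)); % each entry is 1 with probability S_AA/aa
blockAB =  (rand(a,b) < S_AB/(a*b));
blockBA = -(rand(b,a) < S_BA/(b*a)); % B is inhibitory, so its rows carry -1
blockBB = -(rand(b,b) < S_BB/(b*b));
con_mat = [blockAA blockAB; blockBA blockBB]; % (a+b) x (a+b) connectivity matrix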
Translating Neural Activity into Network Stimulus
We are concerned primarily with network activity, specifically the rate at which each population in the network, as well as the network itself, is firing action potentials at any given time. To find this, we need to record how many neurons fired in each discretized time step of length $dt$, which necessitates updating the conductances and voltage of each neuron in each time step. In a network model, we need to perform all the operations of the population model, in addition to accounting for inter-neuron interaction. The following is a checklist of the basic steps:
1) classify synaptic input rate to each neuron from external source
2) translate input rate into a Poisson-distributed sequence of synaptic events
3) classify action potential-generated synaptic events from network neurons
4) sort the synaptic events from network and external stimulus into new sequence
5) integrate over time to update the neuron conductances
6) integrate over time to find the neuron voltages
7) decide whether each neuron has fired an action potential
8) record which neurons will experience network-generated synaptic events in next dt
9) store the three-dimensional state-space coordinates.
When the simulation is running, every time a neuron fires, we must
1) reference the connectivity matrix
2) determine which neurons experience synaptic events
3) determine the synaptic event times
4) determine whether the pre-synaptic neuron is excitatory or inhibitory
5) generate the random strength of the synaptic events
6) store the post-synaptic neuron/time/strength data for future reference.[11]
In the Monte-Carlo regime, accounting for inter-network communication is essentially a
bookkeeping problem. In computer science terminology, we need a data object in which we can
easily store and modify data about inter-neuron communication. We first construct this object,
then explain how it is used in the program.
The synaptic input events for each neuron must be sorted by time of occurrence,
because our second-order accurate integration scheme requires integrating voltages and
conductances between these synaptic events. The essential problem is one of data storage and
access, but because we ultimately want synaptic events lined up for use in future time-steps, we
call our data object the queue matrix. The queue matrix dynamically stores future synaptic input
data for each post-neuron. That is, for each neuron, the queue matrix stores the times and types
of future synaptic events.
Now, because the synaptic delay $T_{se} - T_{ap} \in [T_{min},\ (T_{min}+T_{randmax})]$, the queue matrix must have a capacity of at least $\mathrm{ceil}\left(\frac{T_{min}+T_{randmax}}{dt}\right) + 1$ time-steps. Since no resultant synaptic event can occur more than $T_{min} + T_{randmax}$ after the end of the current time-step, further storage is extraneous,[12] so we have that one of the matrix dimensions is $\mathrm{ceil}\left(\frac{T_{min}+T_{randmax}}{dt}\right) + 1$, where ceil is the MATLAB notation for the smallest integer function.

Furthermore, since it is possible that a neuron experiences multiple synaptic events from other network neurons in one time-step, we need to know how much space to allocate for synaptic events. Let max_in (unitless) denote the maximum number of incoming synapses held by any neuron in the network and $\tau_{ref}$ (ms) denote the refractory period. Then, in general, the requisite number of events the queue matrix must store, and, therefore, one of the queue matrix's dimensions, is given by $\mathrm{max\_in}\cdot\mathrm{ceil}\left(\frac{T_{randmax}+dt}{\tau_{ref}}\right)$.
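In MATLAB, the allocation might look as follows (a sketch with illustrative values; the queue matrix generator in Appendix B does essentially this):

% Allocate the queue matrix from the delay parameters.
dt = 0.1; T_min = 0.1; T_randmax = 0.2; tau_ref = 2; % illustrative values (ms)
max_in = 120;   % maximum number of incoming synapses of any neuron
N = 100;        % number of neurons
n_slots  = ceil((T_min + T_randmax)/dt) + 1;      % time-step dimension
n_events = max_in*ceil((T_randmax + dt)/tau_ref); % events-per-slot dimension
q_mat = zeros(n_events, N, n_slots);              % events x neurons x time-steps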
Finally, since the queue matrix must store synaptic event data for each neuron, one of the matrix's dimensions must be $N$. By convention, our queue matrix is (events) × (neurons) × (time-steps). In other words, in $q_{ijk}$, we find the data about the time and type of the $i$th synaptic event occurring at neuron $j$ during the $k$th time-step. This leaves us with the problem of storing two pieces of information, time and type, in a single matrix cell. For computational reasons, we chose to represent the data as a complex number $a + bi$. Let $T_{se}$ (ms) be the elapsed time between the beginning of the time-step and the synaptic event. Then the real part, $a$, is given by

$$a = \begin{cases} 0, & \text{if no event occurred} \\ T_{se}, & T_{se} > 0. \end{cases}$$

The imaginary part is given by

$$b = \begin{cases} 1, & \text{an excitatory synaptic event} \\ -1, & \text{an inhibitory synaptic event} \\ 0, & \text{no synaptic event.} \end{cases}$$

[11] We assume that each pre-synaptic neuron is either excitatory or inhibitory, not both.

[12] That the total simulation length is more than $\mathrm{ceil}\left(\frac{T_{min}+T_{randmax}}{dt}\right) + 1$ time-steps is irrelevant; we simply erase all events after the current time-step.
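A minimal sketch of this encoding and its decoding (our variable names):

% Encode: an event 0.07 ms into the time-step, inhibitory type.
T_se = 0.07; event_type = -1;          % type: +1 excitatory, -1 inhibitory
cell_value = T_se + 1i*event_type;     % one complex number stores both pieces
% Decode when the event is consumed:
event_time = real(cell_value);         % elapsed time within the time-step (ms)
is_excitatory = imag(cell_value) == 1; % true for excitatory events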
The queue matrix is updated at the end of each time-step, a process which requires defining two additional objects. Let $T_{ap,\omega}$ (ms) denote the time at which a neuron $\omega$ fired, and let $T_0$ (ms) denote the beginning of the time-step. After we finish the voltage integration and determine if/when each neuron has fired, we create a vector, of length $N$, of neuron firing times in which $x_i$, the time the $i$th neuron fired, is given by $\frac{T_{ap,i}-T_0}{dt}$, $dt \neq 0$. Furthermore, we sum down the first dimension of the queue matrix to obtain a (time-steps) × (neurons) event-counter matrix, in which $x_{ij}$ is the number of synaptic events the $j$th neuron is already determined to experience in the $i$th time-step. So, if further pre-synaptic neuron firing leads to additional synaptic events, we will reference the event-counter matrix to decide at which positions in the queue matrix columns we should put them.

In each $dt$, after we have defined the neuron firing vector and we know which neurons fired, we want to 'queue up' the resultant synaptic events at all the neurons post-synaptic to firing neurons. To do this, we use the neuron firing vector to find the appropriate rows of the neuron connectivity matrix and, since the connectivity matrix rows not corresponding to firing neurons are irrelevant, we condense the relevant rows into a new $N_{fire} \times N$ firing connectivity matrix, where $N_{fire}$ denotes the number of neurons that fired in the current $dt$. Now, we generate an $N_{fire} \times N$ delay times matrix, with uniformly distributed random values $R_{ij} \in [T_{min},\ T_{min}+T_{randmax}]$. We create another $N_{fire} \times N$ matrix, a times matrix, by multiplying the transpose of the firing times row vector by a $1 \times N$ vector of ones, so that $x_{ij}$ has the value of the $i$th element in the neuron firing times vector. Adding these two matrices, we obtain random, uniformly distributed synaptic event reception times that account for both the different firing times of the pre-synaptic neurons and the multiple post-synaptic targets. To rid ourselves of the unwanted data, we element-multiply the result by the absolute value of the firing connectivity matrix. We call the result the firing times matrix.
We need to associate with each future synaptic event generated in the current $dt$ a type, excitatory or inhibitory, based on the designation of the pre-synaptic neuron that fired. We now construct a firing information matrix by adding the firing times matrix to the firing connectivity matrix multiplied by $i$, the imaginary unit. So, in each firing information matrix cell, we have a complex number $\lambda$, with $\mathrm{real}(\lambda)$ the synaptic event reception time, and $\mathrm{sign}(\mathrm{imag}(\lambda))$ the synaptic event type: excitatory or inhibitory. Sorting the columns by absolute value, we arrange the events chronologically from first (top) to last (bottom).
Although we have organized our action potential firing data for the current time-step, it remains to enter this data into the queue matrix, where it can be efficiently accessed in future time-steps. We do this by looping over the maximum number of synaptic events any post-synaptic neuron will experience as a result of action potentials that occurred in the current time-step, a quantity obtained by summing down the absolute value of the firing connectivity matrix. In the loop, we increment the relevant cells of the event counter matrix, and define row, column, and depth indices for the queue matrix.
Variable Physiological Characteristics
Our goal is to model multiple-population network activity in the visual cortex. Optimally, we want our simulation to handle a user-specified number of populations, each with different qualities (i.e. excitatory vs. inhibitory, different refractory periods, voltage thresholds, etc.), without having to write individual programs for each case. We construct a variable population network by the algorithm outlined in Network Construction and Representation, but how do we efficiently assign physiological constants to the different populations? Briefly, the user constructs the following column vectors, each of length $P$ (unitless), where $P$ is the number of populations in the network:
excitatory equilibrium voltage
inhibitory equilibrium voltage
resting voltage
excitatory conductance decay constant
inhibitory conductance decay constant
membrane time constant
refractory period
reset voltage
threshold voltage
For example, the $i$th element of the reset voltage vector contains the reset voltage constant for the neurons in population $i$.
Although we now have sufficient data to simulate interacting populations with different physiological parameters, the data is not in a convenient form for calculations. Since many of our Monte-Carlo computations are based on individual neurons, we would like, for each characteristic, to have a vector of dimension $N$ whose $i$th element is the characteristic value of the $i$th neuron in the network. Let $P_k$ be the number of neurons in the $k$th population. Then we can construct a matrix of dimension $N \times P$, where

$$x_{ij} = \begin{cases} 1, & i \in \left(\sum_{n=1}^{j-1} P_n,\ \sum_{n=1}^{j} P_n\right] \\ 0, & i \notin \left(\sum_{n=1}^{j-1} P_n,\ \sum_{n=1}^{j} P_n\right]. \end{cases}$$

Now, multiplication by, for instance, the refractory time column vector yields a column vector of length $N$, where the $i$th element is the refractory time value of the $i$th network neuron.
For example, suppose a given neural network has seven neurons and four distinct populations, $a$, $b$, $c$, and $d$, with sizes two, one, three, and one, respectively. Further, suppose that the population refractory time values are 3 µs, 2 µs, 6 µs, and 4 µs, respectively. Then the computationally convenient vector of neuron refractory time values is given by:

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 3\,\mu s \\ 2\,\mu s \\ 6\,\mu s \\ 4\,\mu s \end{bmatrix} = \begin{bmatrix} 3\,\mu s \\ 3\,\mu s \\ 2\,\mu s \\ 6\,\mu s \\ 6\,\mu s \\ 6\,\mu s \\ 4\,\mu s \end{bmatrix}.$$
Performing this operation for each population-variable characteristic, we obtain
convenient vectors for each characteristic. The data is now in a computationally convenient
form.
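A minimal MATLAB sketch of this expansion, reproducing the seven-neuron example above (the main program in Appendix B builds the same indicator matrix, there called Kpop):

% Expand per-population constants to per-neuron vectors.
pop_sizes = [2 1 3 1];            % neurons per population
tau_ref_pop = [3; 2; 6; 4];       % per-population refractory times (microseconds)
N = sum(pop_sizes); P = numel(pop_sizes);
pop_id = repelem(1:P, pop_sizes); % population label of each neuron
Kpop = zeros(N,P);                % N x P indicator matrix
Kpop(sub2ind([N P], 1:N, pop_id)) = 1; % x_ij = 1 iff neuron i is in population j
tau_ref_neuron = Kpop*tau_ref_pop;     % per-neuron values: [3 3 2 6 6 6 4]'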
Multi-Population Simulation Results
To demonstrate the necessity (over the mean-field method) and versatility of our Monte-Carlo multi-population method, we present activity and conductance comparisons between the mean-field and Monte-Carlo simulation data (Figures 1-4), and the results of four simulations, each run with different physiological parameters, each with 1000 neurons and two interacting populations: one excitatory (population 1) and one inhibitory (population 2) (Figures 5-8). The specific physiological parameters of each simulation are found in Appendix A, and the simulation code is found in Appendix B.
Mean-Field and Monte-Carlo Comparisons
While the Monte-Carlo method is the standard for accuracy in network modeling, it must ponderously track each individual neuron, a computationally costly process. The mean field method, by comparison, involves only numerically solving ordinary differential equations, and is far faster; ideally, we would use it exclusively. However, the mean field method models neuron interaction poorly, as demonstrated in Figures 1-4, hence the need for a Monte-Carlo model. Figures 1-4 each show relative agreement between the mean field and Monte-Carlo methods, but we see, in each figure, the mean field deviating from the Monte-Carlo results. This motivates our need for a Monte-Carlo method.
Physiological Parameter Variation: Testing
One of our primary goals was to account for interactions between physiologically different neuron populations. After programming the simulations, we varied physiological parameters (e.g. threshold voltage, refractory period, etc.) and critically examined the results. We expect, for instance, that if an isolated neuron population X has a higher threshold voltage than another, otherwise identical, isolated neuron population Y, then, given similar synaptic input, Y will have a higher activity than X. If a simulation did not show this, it is unlikely that the simulation accurately modeled reality. The following are similar functionality tests, which compare population activity.
We ran simulations with four different parameter sets, three of them varying by one or two physiological variables from the control set, parameter set 1, and compared the results. Briefly, parameter set 2 differs from parameter set 1 in its threshold voltage values: higher for the excitatory population and lower for the inhibitory population. In parameter set 3, we made the following two changes: we modified the connectivity matrix so that there is substantial population self-synapsing, and we changed the refractory periods for both populations (halved for population 1, doubled for population 2). Finally, parameter set 4 has a higher resting voltage for population 1 and a lower resting voltage for population 2.
Parameter Set 2: Threshold Voltage
In comparison with Figure 5, Figure 6 has two salient features (increased population 2 activity; decreased population 1 activity), both of which are consistent with the physiological differences: a population’s activity is proportional to the ease with which its neurons can reach threshold voltage. By making the threshold harder to meet for population 1, and easier to meet for population 2, we decreased and increased their respective activities. This effect is enhanced by the populations’ connectivity. Since population 2 is inhibitory and synapses to many neurons in population 1, population 2’s increased activity means increased inhibition for population 1. So, while population 2’s increased activity results exclusively from the threshold voltage change, population 1’s decreased activity is the result of both the threshold voltage changes and the indirect effect of the threshold voltage changes through the network architecture.
Parameter Set 3: Network Connectivity and Refractory Period
Relative to the simulation activity data from parameter set 1, parameter set 3 data (Figure 7) exhibits an increased positive disparity between the activities of population 1 and population 2.

The result of altering the refractory periods is that the population 1 neurons have less forced inactivity, whereas the population 2 neurons have more. When a neuron is in a refractory period its voltage cannot evolve, and, in some sense, excitatory synaptic events are 'wasted' in that they cannot make the neuron more likely to fire. In parameter set 3, the refractory period changes minimize this 'waste' for population 1 and maximize it for population 2; hence the greater disparity.
The effect of allowing population self-synapsing is subtler, but important. Population 1 is excitatory, which means that its action potentials precipitate excitatory synaptic events at post-synaptic neurons, including those in inhibitory population 2. In the parameter set 1 network, these excitatory events make the inhibitory population more likely to fire, which, in turn, decreases the activity of population 1. For most external synaptic stimulus time-courses, this effect is minor, but present; the network connectivity means that population 1's activity is, through population 2, somewhat self-limiting. However, in the parameter set 3 network, the set of neurons post-synaptic to population 1 includes neurons in population 1, which means that population 1's activity has a strong self-actuating effect. Furthermore, population 2's self-synapsing has precisely the opposite effect: its activity is now strongly self-limiting, and, therefore, so is its tempering effect on population 1. The net result is that population 1's activity increases and population 2's activity decreases. The combination of these two effects explains the large differences between Figures 5 and 7.
Parameter Set 4: Neuron Resting Voltage
Figure 8's relationship to Figure 5 is similar to Figure 7's, but less extreme: population 1's activity is slightly higher, and population 2's activity is slightly lower.

When a neuron experiences no synaptic events, its voltage will asymptotically approach the resting voltage. The higher the resting voltage, the less excitatory synaptic stimulus a neuron will need to reach threshold voltage, and vice versa. Hence, when we increase the resting voltage of population 1 and decrease that of population 2, we see the corresponding differences. The net effect is weaker than that of the changes in parameter set 3 for two reasons. First, there is only one altered physiological variable. Second, the resting voltage is less influential when a neuron experiences a high frequency of synaptic events, and, in all our simulations, the rate of synaptic events is relatively high. However, the effect is still, unsurprisingly, significant.
Conclusion
Although our multi-population simulations generated coherent results, there are several possible improvements that we plan to implement in future Monte-Carlo simulations. For example, synaptic depression becomes increasingly important in network-level, as opposed to population-level, analysis. The voltage impact of a synaptic event, although partially random, is a function of the amount and type of neurotransmitter released by the pre-synaptic neuron into the synaptic cleft. When a synaptic event occurs, the pre-synaptic neuron's immediately-available neurotransmitter is depleted by some number of neurotransmitter molecules $\chi$. Given sufficient time, the pre-synaptic neuron exponentially restores the amount of immediately-available neurotransmitter to its original value. However, if the pre-synaptic neuron fires action potentials above some critical frequency, the amount of immediately-available neurotransmitter can fall below $\chi$, and the synaptic event's efficacy is reduced or 'depressed'. In population simulations this phenomenon is of little importance, since we are concerned only with the activity of the modeled population. However, in multi-population networks, the generated action potentials have consequences, and synaptic depression must be accounted for.
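As a sketch of how such a mechanism might be added to a future simulation (this is our illustration, not part of the Appendix B code; the recovery time constant and depletion fraction are assumed values):

% Hypothetical synaptic depression bookkeeping for one pre-synaptic neuron.
tau_D = 300;  % neurotransmitter recovery time constant (ms) -- assumed
R = 1.0;      % fraction of immediately-available neurotransmitter (1 = full store)
chi = 0.2;    % fraction depleted per synaptic event -- assumed
dt = 0.1;     % simulation time step (ms)
R = 1 - (1 - R)*exp(-dt/tau_D); % each step: exponential recovery toward full store
Gamma_eff = R*0.05;             % on a spike: scale the nominal boost (0.05) by R
R = max(R - chi, 0);            % then deplete the available store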
Nevertheless, the two-population simulation results are consistent with theoretical
predictions, which suggests that the multi-population Monte-Carlo network model is an
appropriate foil for testing new population density methods.
Literature Review
Cone, Adam Richard. New York University, Courant Institute of Mathematical Sciences. Accessed 26 Sept. 2003. <http://www.cims.nyu.edu/vigrenew/ug_research/adam_cone.pdf>

Maizels, Deborah Jane. Zoobotanica. Accessed 18 Oct. 2002. <http://www.zoobotanica.plus.com/portfolio%20medicine%20pages/synapse.htm>

Nykamp, Duane and Daniel Tranchina. “A Population Density Approach That Facilitates Large-Scale Modeling of Neural Networks: Analysis and an Application to Orientation Tuning.” Journal of Computational Neuroscience 8 (2000): 19-50.

Weisstein, Eric. Eric Weisstein's World of Mathematics (MathWorld). Wolfram Research. Accessed 18 Oct. 2002. <http://mathworld.wolfram.com/MonteCarloMethod.html>
Appendix B: Simulation Code
Main Program
function multi_pop4a(tau_E,tau_I,tau_M,tau_R,E_E,E_I,E_R,v_T,v_R,nue_0,nui_0,c_e,c_i,N,Tsim, ...
    randstate,pop_con_mat,pop_type_vec,pop_frac_vec)
%second-order accurate version in which events are generated in groups with ID and sorted
%tau_E - Excitatory synaptic conductance time constants for each population (column vector n_pops long)
%tau_I - Inhibitory synaptic time constants for each population
%tau_M - Membrane time constant
%tau_R - Refractory time for each population
%nue_0 - time-average excitatory synaptic input rate
%nui_0 - time-average inhibitory synaptic input rate
%N - number of neurons in the simulation
%c_e - maximum contrast for random excitatory conductance for each population
%c_i - maximum contrast for random inhibitory conductance for each population
% synaptic input rates
%tau_ref for each neuron
%dispersion of uniform latencies handled correctly
%calls new qcount: qcount_11
rand('state',randstate)
dt=min(tau_E)/10;
nt=ceil(Tsim/dt); %number of time points
t=(0:(nt))*dt; %the time points
nt=nt+1;
t_max=max(t);
%Generate connectivity matrix
[con_mat, pop_id_vec]= conmat_5(N,pop_con_mat,pop_type_vec,pop_frac_vec);
n_pops=size(pop_con_mat,1);
Kpop=false(N,n_pops); %matrix where each column entry=1 for neurons in population with that column number
for jp=1:n_pops
Kpop(:,jp)=(pop_id_vec'==jp);
end
E_e=(Kpop*E_E)';%row vectors of length N
v_th=(Kpop*v_T)';
E_i=(Kpop*E_I)';
E_r=(Kpop*E_R)';
v_reset=(Kpop*v_R)';
tau_m=(Kpop*tau_M)';
tau_e=(Kpop*tau_E)';
tau_i=(Kpop*tau_I)';
tau_ref=(Kpop*tau_R)';
n_ref=round(tau_ref/dt); %number of time bins in refractory state
rate_mf=zeros(n_pops,nt); % mean-field rate
nu_e=zeros(n_pops,nt); %external excitatory input rate for each pop
nu_i=nu_e; % ditto for inhibitory
%%%UPDATE FOR NPOPS (NOT JUST 2 POPS)
for j=1:n_pops
[nu_e(j,:) nu_i(j,:)]=ex_in_synaptic_rates(nue_0(j),nui_0(j),c_e(j),c_i(j),t,randstate);
end
%check to see whether these have to be defined here
nue_vec=zeros(N,1);
nui_vec=nue_vec;
axon_delay_constant=dt;
axon_delay_rand_range=2*dt;
tmin=axon_delay_constant;
tmax=tmin+axon_delay_rand_range;
Td=tmax-tmin;
kmax=ceil(tmax/dt);
kmin=ceil(tmin/dt);
npoints=kmax-kmin+2;
e1=dt*kmax-tmax;
z=min(dt,kmax*dt-tmin);
weights=zeros(npoints,1);
weights(1)=z-e1 - (1/2)*(z^2/dt -e1^2/dt);
weights(2)=(1/2)*(z^2/dt -e1^2/dt);
kw=2;
while kw < npoints
z=min(dt,(kmax-kw+1)*dt-tmin);
weights(kw)=weights(kw)+z-(1/2)*z^2/dt;
weights(kw+1)=weights(kw+1)+(1/2)*z^2/dt;
kw=kw+1;
end
weights=weights/Td;
q_mat = qmat_6(con_mat,axon_delay_constant,axon_delay_rand_range,dt,max(tau_ref));
[max_event_num,num_neurons,num_dt_slots]=size(q_mat);
event_counter_matrix=zeros(num_dt_slots,num_neurons);
mu_Gamma_E=tau_M./(E_E-E_R); % Expected area under unitary synaptic event for each pop. Gives average EPSP of ~0.5 mV
mu_GE=nue_0.*mu_Gamma_E; %Average of G_e for steady synaptic input rate at nu_0
sigma_sq_Gamma_E=mu_Gamma_E.^2/5; %variance of Gamma_e for parabolic distribution
mu_Gamma_E_sq=sigma_sq_Gamma_E+mu_Gamma_E.^2; %expected square of Gamma_e
sigma_GE=sqrt(nue_0.*mu_Gamma_E_sq./(2*tau_E)); %standard deviation of G_e for steady synaptic input at rate nu_0
mu_Gamma_I=10*mu_Gamma_E;%mean area under inhibitory conductance for each pop
mu_GI=nui_0.*mu_Gamma_I;%mean inhibitory conductance
sigma_sq_Gamma_I=mu_Gamma_I.^2/5;
mu_Gamma_I_sq=sigma_sq_Gamma_I+mu_Gamma_I.^2;%expected square of Gamma_I
sigma_GI=sqrt(nui_0.*mu_Gamma_I_sq./(2*tau_I)); %standard deviation of G_i for steady synaptic input at rate nu_0
mu_Gamma_e=(Kpop*mu_Gamma_E)';
mu_Gamma_i=(Kpop*mu_Gamma_I)';
g_e=rate_mf; %mean_field E conductance for each pop at each time
g_i=g_e; %ditto for I
g_i_mc=g_i;
g_e_mc=g_i;
g_e(:,1)=nue_0.*mu_Gamma_E; %populations-by-time steps
g_i(:,1)=nui_0.*mu_Gamma_I;
aptimes=zeros(n_pops,ceil(Tsim*200*N)); %action potential times matrix
Jap=zeros(n_pops,1); %index for keeping track of location of ap's in the matrix above
count_down=-ones(1,N); %counts number of time steps until exiting refractory period
delt_fire=zeros(1,N); %time of action potential measured from beginning of time step
G_ep1=zeros(1,N); %conductance values to be used in integration
G_ep2=G_ep1;
G_ip1=G_ep1;
G_ip2=G_ip1;
Dt=zeros(1,N); %time step, either dt, or less for neurons emerging from
%refractory period
t_remain=dt*ones(1,N);
t_elapse=zeros(1,N);
Tzero=zeros(1,N);
ID_vec=Tzero;
dt_vec=dt*ones(1,N);
nET=20;
ETzero=zeros(nET,N);
count_zero=zeros(1,N);
G_e=zeros(nt,N); %Matrix of Ge values (nt+1) time points by N neurons
G_i=zeros(nt,N); %Matrix of Gi values (nt+1) time points by N neurons
V=G_e; %Corresponding membrane voltages
G_e(1,:)=( Kpop*mu_GE+(Kpop*sigma_GE).*randn(N,1) )'; %Initialize G_e at t=0 by choosing values from a gaussian distribution
G_e(1,:)=max(G_e(1,:),zeros(size(G_e(1,:)))); %If negative, set to zero
G_i(1,:)=( Kpop*mu_GI+(Kpop*sigma_GI).*randn(N,1) )'; %Initialize G_i at t=0 by choosing values from a gaussian distribution
G_i(1,:)=max(G_i(1,:),zeros(size(G_i(1,:))));
g_i_mc(1)=mean(G_i(1,:));
g_e_mc(1)=mean(G_e(1,:));
V(1,:)=E_i + (v_th-E_i).*rand(1,N); % Initial random V from uniform distribution
V1 =E_i + (v_th-E_i).*rand(1,N);
%
%Mean-field computation
rate_mf(:,1)=mean_field_rate_vectorized(E_R,E_E,E_I,v_T,v_R,tau_M,tau_R,g_e(:,1),g_i(:,1));
rand('state',randstate+1)
cpt=cputime; %for measuring cpu time for time loop
counter=0;
for k=1:(nt-1) %Step through time
firing_times_vector=Tzero;
klower1=max(1,k-kmax);
kupper1=max(1,k-kmin+1);
nw1=kupper1-klower1+1;
JR1=klower1:kupper1;
JW1=(npoints-nw1+1):npoints;
klower2=max(1,k+1-kmax);
kupper2=max(1,k+1-kmin+1);
nw2=kupper2-klower2+1;
JR2=klower2:kupper2;
JW2=(npoints-nw2+1):npoints;
%take out loop independent stuff, as Adam did
nu_e1=nu_e(:,k)+ N*(pop_con_mat')*( (rate_mf(:,JR1)*weights(JW1)).*(pop_type_vec'==1).*(pop_frac_vec') );
nu_e2=nu_e(:,k+1)+N*(pop_con_mat')*( (rate_mf(:,JR2)*weights(JW2)).*(pop_type_vec'==1).*(pop_frac_vec') );
nu_i1=nu_i(:,k)+ N*(pop_con_mat')*( (rate_mf(:,JR1)*weights(JW1)).*(pop_type_vec'==-1).*(pop_frac_vec') );
nu_i2=nu_i(:,k+1)+N*(pop_con_mat')*( (rate_mf(:,JR2)*weights(JW2)).*(pop_type_vec'==-1).*(pop_frac_vec') );
g_e(:,k+1)=g_syn_updatea(g_e(:,k),nu_e1,nu_e2,tau_E,dt,mu_Gamma_E);
g_i(:,k+1)=g_syn_updatea(g_i(:,k),nu_i1,nu_i2,tau_I,dt,mu_Gamma_I);
%Mean-field computation
rate_mf(:,k+1)=mean_field_rate_vectorized(E_R,E_E,E_I,v_T,v_R,tau_M, ...
tau_R,g_e(:,k+1),g_i(:,k+1));
%TAKE AVERAGE FOR TWO ADJACENT GRID POINTS INSTEAD?
nue_vec=Kpop*nu_e(:,k);
nui_vec=Kpop*nu_i(:,k);
rate_tot_vec=nue_vec+nui_vec;
Ns=poissrnd(dt*rate_tot_vec); %number of events for each of N neurons
p_e_vec=Kpop*( nu_e(:,k)./(nu_e(:,k)+nu_i(:,k)) );%probability that an event is of type E for each neuron
G_ep1=G_e(k,:); %auxiliary conductance vector, initialized to that at beginning of time step
G_ip1=G_i(k,:); %ditto
G_ep2=G_ep1; %ditto
G_ip2=G_ip1; %ditto
V_ep1=V(k,:);%auxiliary voltage vector
V_ep2=V_ep1; %ditto
n_max=max(Ns);%initialization of maximum over neurons of number of remaining events to be integrated
if n_max>nET
n_max
warning('Increase first dimension, nET, of ET matrix')
end
counter=counter+1;%pointer to present dt slot in queue matrix (q_mat)
mod_counter = mod(counter,num_dt_slots) + num_dt_slots*(mod(counter,num_dt_slots)==0);
%Generate all times and IDs for events coming from external input
%do this by grouping neurons according to common number of events.
%Start with all neurons that have maximum number of events.
%Generate all event times and IDs in vectorized manner for these.
%Next, do the same for all neurons with one fewer number of events.
%Repeat until encountering neurons with no events
ETIDtemp=ETzero;
for K=n_max:-1:1%while K > 0
JK=(Ns==K);
ns=sum(JK);
if ns>0
p_e_mat=ones(K,ns)*diag(p_e_vec(JK));
ETIDtemp(1:K,JK)=dt*rand(K,ns)+i*(-1+2*(rand(K,ns)<=p_e_mat)); %generate times and IDs
%(E=1 or I=-1) for all neurons with this number of synaptic events
end
end
max_internal_events=max(event_counter_matrix(mod_counter,:));
ETIDtot=sort([ETIDtemp(1:max(1,n_max),:);q_mat(1:max(1,max_internal_events),:,mod_counter)]);
no_events=sum(n_max+max_internal_events);
[nnn mmm]=size(ETIDtot);
min_n_zeros=min(sum(ETIDtot==0,1));%minimum number of leading zeros in columns of ETIDtot
if min_n_zeros~=(nnn)
row_start=min_n_zeros+1;
else
row_start=nnn;
end
ETIDtot=ETIDtot(row_start:nnn,:);
t_elapse=Tzero; %Initialize elapsed time at beginning of time step
t_remain=dt_vec; %Initialize remaining time at beginning of time step
T=Tzero; %initialize time-since-last-event vector
Jint=false(1,N); %initialize vector that points to neurons with events to be integrated
count=count_zero;
%%%%%%%%%%%%%%%%%%%%%%
num_rows=nnn-row_start+1;
num_steps=num_rows;
if no_events>0
num_steps=num_rows+1;
end
%%%%%%%%%%%%%%%%%%%%%% %
for nd=1:num_steps
if nd <=num_rows
Jint=ETIDtot(nd,:)~=0;
n_int=sum(Jint);
if n_int~=0
T(Jint)=real(ETIDtot(nd,Jint))-t_elapse(Jint);
else
Jint=true(1,N);
T=t_remain;
end
else
Jint=true(1,N);
T=t_remain;
end
G_ep2(Jint)=G_ep1(Jint).*exp(-T(Jint)./tau_e(Jint));
G_ip2(Jint)=G_ip1(Jint).*exp(-T(Jint)./tau_i(Jint));
%integrate only the subset of the neurons that are
%out or coming out of refractory period
J_out= Jint & (count_down<0); %index of neurons to be integrated that were nonrefractory at beginning of time step
J_coming=Jint & count_down==0 & delt_fire>t_elapse & delt_fire<= ...
(t_elapse+T); %index of neurons to be integrated that are coming out during current time step
%count_down(J_coming)=count_down(J_coming)-1;
Dt=T; %auxiliary time step, initialized to full time to next event
if sum(J_coming)>0 %if any neurons coming out
G_ep1(J_coming)=G_ep1(J_coming).*exp(-(delt_fire(J_coming)-t_elapse(J_coming))./tau_e(J_coming));%conductance upon emerging
G_ip1(J_coming)=G_ip1(J_coming).*exp(-(delt_fire(J_coming)-t_elapse(J_coming))./tau_i(J_coming));%ditto
Dt(J_coming)=t_elapse(J_coming)+T(J_coming)-delt_fire(J_coming);%times between emerging and end of time step
end
J=J_out | J_coming;
%integrate all non_refractory neurons
V_ep2(J) = ( V_ep1(J)-(Dt(J)./(2*tau_m(J))).*( V_ep1(J) - 2*E_r(J) - G_ep2(J).*E_e(J) - G_ip2(J).*E_i(J) + ...
G_ep1(J).*(V_ep1(J)-E_e(J)) + G_ip1(J).*(V_ep1(J)-E_i(J)) ) )./( 1 + (Dt(J)./(2*tau_m(J))).*(1+G_ep2(J)+G_ip2(J)) );
%Find out who has crossed threshold; find times, and reset
%(put into refractory pool)
J=V_ep2>=v_th;
nf=sum(J);
if nf>0
V_ep2(J)=v_reset(J);
A=( (G_ep2(J)-G_ep1(J)).*(v_th(J)-E_e(J)) + (G_ip2(J)-G_ip1(J)).*(v_th(J)-E_i(J)) )./(2*tau_m(J).*Dt(J));
%solution from trapezoidal rule and linear interp of G_e and G_i
B=( V_ep1(J)+ v_th(J)- 2*E_r(J) + G_ep1(J).*(V_ep1(J)+v_th(J)-2*E_e(J)) + G_ip1(J).*(V_ep1(J)+v_th(J)-2*E_i(J)) )./(2*tau_m(J));
C=v_th(J)-V_ep1(J);
rp=(-B+sqrt(B.^2-4*A.*C))./(2*A); %possible firing times are roots of a quadratic
rm=(-B-sqrt(B.^2-4*A.*C))./(2*A);
r=[rp; rm];
dtp=[Dt(J);Dt(J)];
tm=sum(r.*((r>0)&(r<dtp))); %take the only sensible root
delt_fire(J)=t_elapse(J)+tm; %record the time of firing within the interval
KF=( (ones(n_pops,1)*J) & Kpop' );
Nap=sum(KF,2);
firing_times_vector(J)=delt_fire(J)/dt;
for kap=1:n_pops
Ip=Jap(kap)+(1:Nap(kap));
aptimes(kap,Ip)=t(k)+delt_fire(KF(kap,:));
Jap(kap)=Jap(kap)+Nap(kap);
end
count_down(J)=n_ref(J); %reset the count-down vector
end
%update conductances of all neurons that had an event (those with n_max>0)
if no_events>0 & (nd<=num_rows)
ID_vec=imag(ETIDtot(nd,:));
Je=Jint & (ID_vec==1); %picks out subscripts of neurons that have excitatory events
Ji=Jint & (ID_vec==-1); %picks out subscripts of neurons that have inhibitory events
N_e=sum(Je);
N_i=sum(Ji);
z=rand(1,n_int); %First, generate uniformly distributed random number for each event
theta=(atan2(2*sqrt(z-z.^2),(1-2*z))-2*pi)/3; %Convert this into a parabolically distributed number (with the following two lines)
x=2*cos(theta)+1;
if N_e>0
G_ep2(Je)=G_ep2(Je)+mu_Gamma_e(Je).*x(1:N_e)./tau_e(Je);
end
if N_i>0
G_ip2(Ji)=G_ip2(Ji)+mu_Gamma_i(Ji).*x(N_e+1:n_int)./tau_i(Ji);
end
end
V_ep1=V_ep2;
G_ep1=G_ep2;
G_ip1=G_ip2;
t_remain(Jint)=t_remain(Jint)-T(Jint);
t_elapse(Jint)=t_elapse(Jint)+T(Jint);
end
V(k+1,:)=V_ep2;
G_e(k+1,:)=G_ep2;
G_i(k+1,:)=G_ip2;
count_down=count_down-1; %decrement the count down vector by 1
Jhist=count_down<0;
for kmc=1:n_pops
g_e_mc(kmc,k+1)=mean(G_ep2(Kpop(:,kmc)'));
g_i_mc(kmc,k+1)=mean(G_ip2(Kpop(:,kmc)'));
end
event_counter_matrix(mod_counter,:)=0;%initialize counter to zero in current slot that has just been used
q_mat(:,:,mod_counter)=zeros(size(q_mat(:,:,mod_counter)));%zero the dt slot of q_mat just used
[q_mat event_counter_matrix] = qcount_11(con_mat,q_mat,mod_counter,num_dt_slots,...
firing_times_vector,axon_delay_constant,axon_delay_rand_range,dt,N, ...
event_counter_matrix);
end
cpt=cputime-cpt
%---------------PLOTTING STUFF---------------
figure(1)
plot(t,[nu_e;nu_i])
xlabel('Time (s)'); ylabel('Synaptic Input Rate (Hz)');
set(gca,'XLim',[0 Tsim])
legend('E_1','E_2','I_1','I_2')
title('Excitatory and Inhibitory Synaptic Input Rates vs. Time')
dth=0.002;
th=0:dth:Tsim;
t_rate=(dth/2):dth:(t_max-dth/2);
aptimes1=aptimes(1,aptimes(1,:)~=0);
N1=N*pop_frac_vec(1);
figure(2)
rate_monte1=hist(aptimes1,t_rate)/(N1*dth);
plot(t,rate_mf(1,:),'r-')
hold on
bar(t_rate,rate_monte1)
xlabel('Time (s)'); ylabel('Population 1 Firing Rate (Hz)')
hold off
set(gca,'XLim',[0 Tsim])
title('Mean Field and Monte-Carlo Population 1 Firing Rate vs. Time')
legend('Mean Field','Monte Carlo')
aptimes2=aptimes(2,aptimes(2,:)~=0);
N2=N*pop_frac_vec(2);
figure(3)
plot(t,rate_mf(2,:),'r-')
hold on
rate_monte2=hist(aptimes2,t_rate)/(N2*dth);
bar(t_rate,rate_monte2)
xlabel('Time (s)'); ylabel('Population 2 Firing Rate (Hz)')
hold off
legend('Mean Field','Monte Carlo')
set(gca,'XLim',[0 Tsim])
title('Mean Field and Monte-Carlo Population 2 Firing Rate vs. Time')
figure(4)
plot(t_rate,rate_monte1,'c',t_rate,rate_monte2,'k')
xlabel('Time (s)'); ylabel('Firing Rate (Hz)');
legend('Population 1', 'Population 2')
title('Populations 1 and 2 Firing Rates vs. Time')
figure(5)
plot(t,[g_e;g_e_mc])
xlabel('Time (s)'); ylabel('Conductance (nS)');
legend('Mean Field 1','Mean Field 2','Monte-Carlo 1','Monte-Carlo 2')
title('Mean Field and Monte Carlo Excitatory Network Conductances vs. Time')
figure(6)
plot(t,[g_i;g_i_mc])
xlabel('Time (s)'); ylabel('Conductance (nS)')
legend('Mean Field 1','Mean Field 2','Monte-Carlo 1','Monte-Carlo 2')
title('Mean Field and Monte Carlo Inhibitory Network Conductances vs. Time')
save monte_carlo_results_multi_pops t_rate rate_monte1 rate_monte2 t nu_e nu_i ...
g_e g_i g_e_mc g_i_mc t rate_mf randstate
Connectivity Matrix Generator
% This m-file constructs a connectivity matrix from data about the number
% and type of populations and the user-specified output-connectivity of each
% population: 1) N = number of neurons
% 2) pop_connectivity_matrix = if the number of distinct
%    populations is pop_number, then pop_connectivity_matrix is
%    a pop_number*pop_number matrix, in which entry (i,j) is the
%    probability that a neuron in population i synapses to some
%    neuron in population j.
% 3) pop_type_vector = 1*pop_number matrix of type of each population
%    (1 for excitatory, -1 for inhibitory).
% 4) pop_fraction_vector = 1*pop_number matrix of the proportion
%    of the total number of neurons in each population.
function [connectivity_matrix, pop_id_vector] = conmat_5(N,pop_connectivity_matrix,pop_type_vector,pop_fraction_vector)
cpt = cputime;
% multi_pop_con_mat_generator is short for multiple-population-connectivity-matrix-generator
%Neurons are assigned populations based on their labels. For example,
%if population 1 comprises 20% of the network size, then the first 20%
%of the neurons, counting from 1, will be in population 1. For this reason,
%we construct the following vector which will give us something like "population boundaries"
n_pops=length(pop_fraction_vector);
pop_count_vector=round(N*pop_fraction_vector);%the jth entry in pop_count_vector is how many neurons are in population j.
pop_count_vector(n_pops)=N-sum(pop_count_vector(1:(n_pops-1)));
if sum(pop_count_vector==0)~=0
error('ZERO NEURONS IN AT LEAST ONE POPULATION')
end
%now, declare that connectivity_matrix, the eventual output of this function m-file, is an N*N matrix,
%which we will fill with synapse type/presence values (i.e. 1 at (i,j): excitatory synapse from neuron i
%to neuron j, -1 at (i,j): inhibitory synapse from neuron i to neuron j, 0 at (i,j): no synapse from
%neuron i to neuron j)
connectivity_matrix = sparse(N,N);
%Suppose we want to know whether neuron A will synapse to neuron B. Although each synapse presence
%value is decided randomly, the weighting is given by the connectivity of
%the population containing A, a, to the population containing B, b. This value is
%found in the user-specified pop_connectivity_matrix, namely, at (a,b).
%This value uniquely determines the probability that neuron A synapses to
%neuron B.
pop_partition_vector =[0,cumsum(pop_count_vector)];
%sub_ab_connectivity_matrix is the "sub-connectivity matrix" between
%pre-synaptic population a and post-synaptic population b. It will
%be assimilated at each step by connectivity_matrix.
pop_id_vector=sparse(1,N);
for a = 1:n_pops %step through pre-synaptic populations
pop_id_vector((pop_partition_vector(a)+1):pop_partition_vector(a+1))=a;
for b = 1:n_pops %step through post_synaptic populations
sub_ab_connectivity_matrix = rand(pop_count_vector(a),pop_count_vector(b)) < pop_connectivity_matrix(a,b);
connectivity_matrix((pop_partition_vector(a)+1):pop_partition_vector(a+1), ...
(pop_partition_vector(b)+1):pop_partition_vector(b+1)) = sub_ab_connectivity_matrix*pop_type_vector(a);
end
end
cpt = cputime-cpt;
Queue Matrix Updating
function [queue, event_counter_matrix] = qcount_11(connectivity_matrix,queue_matrix,mod_counter,n_dt_slots,...
firing_times_vector,axon_delay_constant,axon_delay_rand_range,dt,N,event_counter_matrix)
Jfire=(firing_times_vector~=0);
firing_neuron_number=sum(Jfire);
firing_connectivity_matrix = connectivity_matrix(Jfire,:);
random_part = axon_delay_rand_range/dt*rand(size(firing_connectivity_matrix));
constant_part = axon_delay_constant/dt*ones(size(firing_connectivity_matrix));
times_part = diag(firing_times_vector(find(firing_times_vector)))*ones(size(firing_connectivity_matrix));
firing_times_matrix = abs(firing_connectivity_matrix).*(times_part+constant_part+random_part);
firing_info_matrix = firing_times_matrix+i*firing_connectivity_matrix;
firing_total_vector = sum(abs(firing_connectivity_matrix),1);
firing_info_matrix = sort(firing_info_matrix,1);
lower_bound = firing_neuron_number-max(firing_total_vector)+1;
firing_info_matrix = firing_info_matrix(lower_bound:firing_neuron_number,:);
for b = 1:max(firing_total_vector)
max_neurons_vector = (firing_total_vector >= (max(firing_total_vector)+1-b));
I = find(max_neurons_vector);
K = mod(mod_counter+floor(real(firing_info_matrix(b,I))),n_dt_slots);
K=K+n_dt_slots*(K==0);
counter_index = sub2ind(size(event_counter_matrix),K,I);
event_counter_matrix(counter_index) = event_counter_matrix(counter_index) + 1;
queue_index = sub2ind(size(queue_matrix),event_counter_matrix(counter_index),I,K);
queue_matrix(queue_index) = i*imag(firing_info_matrix(b,max_neurons_vector))+...
dt*( real(firing_info_matrix(b,max_neurons_vector))-...
floor(real(firing_info_matrix(b,max_neurons_vector))) );
end
queue = queue_matrix;
Queue Matrix Generator
function queue_matrix = qmat_6(connectivity_matrix,axon_delay_constant,axon_delay_rand_range,dt,tau_ref)
%axon_delay_rand_range = length of random part of delay interval following
%minimum delay, axon_delay_constant
C = ceil((axon_delay_constant+axon_delay_rand_range)/dt)+1;%number of dt slots needed
max_in = max(sum(abs(connectivity_matrix)));
%A =max_in*ceil((C*dt-axon_delay_constant)/tau_ref); %estimate of maximum number of events to be stored
A =max_in*ceil((axon_delay_rand_range+dt)/tau_ref); %estimate of maximum number of events to be stored
%find the number of neurons in the network
B = length(connectivity_matrix);
queue_matrix = zeros(A,B,C);
Mean-Field Firing Rate Computation
function rate_mf=mean_field_rate_vectorized(E_r,E_e,E_i,v_th,v_reset,tau_m,tau_ref,g_e,g_i)
Eg=(E_r+g_e.*E_e+g_i.*E_i)./(1+g_e+g_i);
tau_1=tau_m./(1+g_e+g_i);
ts=tau_1.*log( (Eg-v_reset)./(Eg-v_th) );
rate_mf=(Eg>v_th)./(ts+tau_ref);
External Synaptic Input Generator
function [nu_e, nu_i]=ex_in_synaptic_rates(nue_0,nui_0,c_e,c_i,tg,randstate)
rand('state',randstate)
f=[1 3 7 15 31 63];
nf=length(f);
ce=rand(1,nf);
ci=rand(1,nf);
thetae=2*pi*rand(1,nf);
thetai=2*pi*rand(1,nf);
de=zeros(size(tg));
di=zeros(size(tg));
for j=1:nf
de=de+ce(j)*sin(2*pi*f(j)*tg + thetae(j));
di=di+ci(j)*sin(2*pi*f(j)*tg + thetai(j));
end
se=max(abs(de));
si=max(abs(di));
nu_e=nue_0*(1+c_e*(de/se));
nu_i=nui_0*(1+c_i*(di/si));
Mean Field Synaptic Conductance Updating
function g_s=g_syn_updatea(g_s,nu_s_1,nu_s_2,tau_s,dt,mu_Gamma_s)
g_s=( g_s + (dt./(2*tau_s)).*(mu_Gamma_s.*(nu_s_2 + nu_s_1) - g_s) )./...
(1 + dt./(2*tau_s));
Conductance Computation Program
function [g, dg, sigma_G, mu_G, mu_Gamma_e]=get_gbins(ngbins,tau_m,tau_e,nu_0,nu_e,E_e,E_r)
mu_Gamma_e=tau_m/(E_e-E_r); % Expected area under unitary synaptic event. Gives average EPSP of ~0.5 mV
%mu_Gamma_e=mu_Gamma_e/2;
%
%Assume steady input until time zero, when sinusoidal modulation begins
mu_G=nu_0*mu_Gamma_e; %Average of G_e for steady synaptic input rate at nu_0
sigma_sq_Gamma_e=mu_Gamma_e^2/5; %variance of Gamma_e for parabolic distribution
mu_Gamma_e_sq=sigma_sq_Gamma_e+mu_Gamma_e^2; %expected square of Gamma_e
sigma_G=sqrt(nu_0*mu_Gamma_e_sq/(2*tau_e)); %standard deviation of G_e for steady synaptic input at rate nu_0
%Set up bins for g and also for histogram
mu_G_max=max(nu_e)*mu_Gamma_e;
sigma_G_max=sqrt(max(nu_e)*mu_Gamma_e_sq/(2*tau_e));
gmax=mu_G_max+3*sigma_G_max;
%
dg=gmax/ngbins;
g=(0:(ngbins-1))*dg; % a row vector
%
Voltage Computation Program
function [v, dv, v_reset, E_r]=get_vbins(E_i,E_r,v_th,nvbins)
%dv=(v_th-E_r)/(nvbins-0.5); %E_r is a grid point
dv=(v_th-E_i)/nvbins;
v=E_i + ((1:nvbins)' - 0.5)*dv; %A column vector of voltages. E_i and v_reset are half-grid points.
E_r=v( floor((E_r-E_i)/dv) + 1 ); %E_r is chosen to be a grid point
v_reset=E_r-dv/2; %v_reset is a half grid point