Numenta VP of Research Subutai Ahmad delivered this presentation at the Centre for Theoretical Neuroscience, University of Waterloo on October 2, 2018.
Jeff Hawkins Human Brain Project Summit Keynote: "Location, Location, Locatio...Numenta
Jeff Hawkins delivered this keynote presentation at the 2018 Human Brain Project Summit Open Day in Maastricht, the Netherlands on October 15, 2018. A screencast recording of the slides is also available at: https://numenta.com/resources/videos/jeff-hawkins-human-brain-project-screencast/
Location, Location, Location - A Framework for Intelligence and Cortical Computation (Numenta)
Jeff Hawkins gave this presentation as part of the Johns Hopkins APL Colloquium Series on September 21, 2018.
View the video of the talk here: https://numenta.com/resources/videos/jeff-hawkins-johns-hopkins-apl-talk/
Could A Model Of Predictive Voting Explain Many Long-Range Connections? by Subutai Ahmad (Numenta)
These are slides from a workshop Subutai Ahmad hosted on March 5, 2018 at the Computational and Systems Neuroscience Meeting (Cosyne) 2018.
About:
This workshop on long-range cortical circuits is focused on our peer-reviewed paper, “A Theory of How Columns in the Neocortex Enable Learning the Structure of the World.” Subutai discussed the inference mechanism introduced in the paper, our theory of location information, and how long-range connections allow columns to integrate inputs over space to perform object recognition.
CVPR 2020 Workshop: Sparsity in the neocortex, and its implications for continual learning (Christy Maver)
Numenta VP of Research Subutai Ahmad presents a talk on "Sparsity in the Neocortex and its Implications for Continual Learning" at the virtual CVPR 2020 workshop. In this talk, he discusses how continual learning systems can benefit from sparsity, active dendrites and other neocortical mechanisms.
Have We Missed Half of What the Neocortex Does? by Jeff Hawkins (12/15/2017) (Numenta)
This was a presentation given on December 15, 2017 at the MIT Center for Brains, Minds + Machines as part of their Brains, Minds and Machines Seminar Series.
You can watch the recording of the presentation after Slide 1.
In this talk, Jeff describes a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose the second input is a representation of allocentric location. The allocentric location represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus of what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.
Jeff discusses material from these two papers. Others can be found at https://numenta.com/papers
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
URL: https://doi.org/10.3389/fncir.2017.00081
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in the Neocortex
URL: https://doi.org/10.3389/fncir.2016.00023
Jeff Hawkins NAISys 2020: How the Brain Uses Reference Frames, Why AI Needs to do the Same (Numenta)
Jeff Hawkins presents a talk on "How the Brain Uses Reference Frames to Model the World, Why AI Needs to do the Same." In this talk, he gives an overview of The Thousand Brains Theory and discusses how machine intelligence can benefit from working on the same principles as the neocortex.
This talk was first presented at the NAISys conference on November 10, 2020. You can find a re-recording of the talk here: https://youtu.be/mGSG7I9VKDU
Does the neocortex use grid cell-like mechanisms to learn the structure of objects? (Numenta)
These are Jeff Hawkins' slides from the Computational Theories of the Brain Workshop held at the Simons Institute at UC Berkeley on April 17, 2018.
Abstract:
In this talk, I propose that the neocortex learns models of objects using the same methods that the entorhinal cortex uses to map environments. I propose that each cortical column contains cells that are equivalent to grid cells. These cells represent the location of sensor patches relative to objects in the world. As we move our sensors, the location of the sensor is paired with sensory input to learn the structure of objects. I explore the evidence for this hypothesis, propose specific cellular mechanisms that the hypothesis requires, and suggest how the hypothesis could be tested.
References:
“A Theory of How Columns in the Neocortex Enable Learning the Structure of the World” by Jeff Hawkins, Subutai Ahmad, YuWei Cui (2017)
“Place Cells, Grid Cells, and the Brain’s Spatial Representation System” by Edvard Moser, Emilio Kropff, May-Britt Moser (2008)
“Evidence for grid cells in a human memory network” by Christian Doeller, Caswell Barry, Neil Burgess (2010)
Recognizing Locations on Objects by Marcus Lewis (Numenta)
Marcus gave a talk called "Recognizing Locations on Objects" during the HTM Meetup on 11/03/2017.
The brain learns and recognizes objects with independent moving sensors. It’s not obvious how a network of neurons would do this. Numenta has suggested that the brain solves this by computing each sensor’s location relative to the object, and learning the object as a set of features-at-locations. Marcus showed how the brain might determine this “location relative to the object.” He extended the model from Numenta’s recent paper, "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World," so that it computes this location. This extended model takes two inputs: each sensor’s input, and each sensor’s “location relative to the body.” The model connects the columns in such a way that a column can compute its “location relative to the object” from another column’s “location relative to object.” When a column senses a feature, it recalls a union of all locations where it has sensed this feature, then the columns work together to narrow their unions. This extended model essentially takes its sensory input and asks, “Do I know any objects that contain this spatial arrangement of features?”
Numenta Brain Theory Discoveries of 2016/2017 by Jeff Hawkins (Numenta)
Jeff Hawkins discussed recent advances in cortical theory made by Numenta during the HTM Meetup on 11/03/2017. These discoveries are described in the recently published peer-reviewed paper, “A Theory of How Columns in the Neocortex Enable Learning the Structure of the World.” Jeff walked through the text and figures in the paper, as well as discussed the significance of these advances and the importance they play in AI and cortical theory.
The recording of the HTM Meetup is available at https://www.youtube.com/watch?v=c6U4yBfELpU&t=
BAAI Conference 2021: The Thousand Brains Theory - A Roadmap for Creating Machine Intelligence (Numenta)
Jeff Hawkins presented a talk on "The Thousand Brains Theory: A Roadmap to Machine Intelligence" at the Beijing Academy of Artificial Intelligence Conference on 1st June 2021. In this talk, he discussed the key components of The Thousand Brains Theory and Numenta's recent work.
The Predictive Neuron: How Active Dendrites Enable Spatiotemporal Computation... (Numenta)
This was a presentation given on February 8, 2018 at the European Institute for Theoretical Neuroscience (EITN)'s Dendritic Integration and Computation with Active Dendrites Workshop.
The workshop brought together experiments, models, and recent neuromorphic systems aimed at understanding the computational properties conferred by dendrites in neural systems. It focused particularly on the excitable properties of dendrites and the types of computation they can implement.
This presentation guides you through neural networks: the use of neural networks, neural networks vs. conventional computers, inspiration from neurobiology, types of neural networks, the learning process, hetero-association recall mechanisms, and key features. For more topics, stay tuned with Learnbay.
It features pointers for single-layer and multi-layer neural networks, gradient descent, and backpropagation. These slides are an introduction; for a deeper explanation of deep learning, please consult other slides.
Design of Cortical Neuron Circuits With VLSI Design Approach (ijsc)
A simple CMOS circuit using a small number of MOSFETs reproduces most electrophysiological cortical neuron types and can produce a variety of behaviors with diversity similar to that of a real biological neuron. The firing patterns of basic cell classes such as regular spiking (RS), chattering (CH), intrinsic bursting (IB), and fast spiking (FS) are obtained by adjusting only one biasing voltage, making the circuit suitable for applications in reconfigurable neuromorphic devices that implement biologically realistic cortical circuits. The paper discusses SPICE simulations of the various spiking patterns and firing frequencies of a given cell type. Circuit operation is verified for both constant and pulsating inputs.
3. What is an ANN? Describe various types of ANN. Which ANN do you p....pdf (ivylinvaydak64229)
3. What is an ANN? Describe various types of ANN. Which ANN do you prefer amid the variety of ANNs? Justify the reason behind this.
Solution
What is an ANN?
An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes, or learns, in a sense, based on that input and output.
ANNs have three interconnected layers. The first layer consists of input neurons. Those neurons send data on to the second layer, which in turn sends data on to the output neurons in the third layer.
Types of artificial neural networks:
There are two artificial neural network topologies: FeedForward and Feedback.
FeedForward ANN: The information flow is unidirectional. A unit sends information to another unit from which it does not receive any information. There are no feedback loops. They are used in pattern generation/recognition/classification. They have fixed inputs and outputs.
Feedback ANN: Here, feedback loops are allowed. They are used in content-addressable memories.
Radial Basis Function (RBF) Neural Network: Radial basis functions are powerful techniques for interpolation in multidimensional space. An RBF is a function which has a built-in distance criterion with respect to a center. RBF neural networks have the advantage of not suffering from local minima in the same way as multi-layer perceptrons. RBF neural networks have the disadvantage of requiring good coverage of the input space by radial basis functions.
Kohonen Self-Organizing Neural Network: The self-organizing map (SOM) performs a form of unsupervised learning. A set of artificial neurons learns to map points in an input space to coordinates in an output space. The input space can have different dimensions and topology from the output space, and the SOM will attempt to preserve these.
Recurrent Neural Networks: Recurrent neural networks (RNNs) are models with bi-directional data flow. Recurrent neural networks can be used as general sequence processors. Various types of recurrent neural networks are the fully recurrent network (Hopfield network and Boltzmann machine), simple recurrent networks, the echo state network, the long short-term memory network, bi-directional RNNs, hierarchical RNNs, and stochastic neural networks.
Modular Neural Network: Biological studies have shown that the human brain functions not as a single massive network, but as a collection of small networks. This realization gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems.
Physical Neural Network: A physical neural network includes electrically adjustable resistance material to simulate artificial synapses.
FeedForward is the most widely used ANN topology due to its different applications:
1) Physiological feed-forward system: In physiology, feed-forward control is exemplified by the normal anticipatory regu.
Neural networks Self Organizing Map by Engr. Edgar Carrillo II (Edgar Carrillo)
This presentation talks about neural networks and self-organizing maps. In this presentation, Engr. Edgar Caburatan Carrillo II also discusses their applications.
As Wireless Sensor Networks penetrate the industrial domain, many research opportunities are emerging. One such essential and challenging application is node localization. A feed-forward neural network based methodology is adopted in this paper. The Received Signal Strength Indicator (RSSI) values of the anchor node beacons are used. The number of anchor nodes and their configurations have an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the training algorithm that gives the best result. The multi-layer perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the model obtained was then implemented on the Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.
Wireless Sensor Network using Particle Swarm Optimization (idescitation)
Wireless sensor networks (WSNs) are becoming a progressively important and challenging research area. A wireless sensor network consists of spatially distributed autonomous sensors that monitor physical and environmental conditions and co-operatively pass their data through the network to a main location. A WSN is built from small, low-cost sensor nodes with limited transmission range, processing and storage capabilities, and energy resources. The main task of such a network is to gather information from a node and transmit it to a base station for further processing. WSNs present various issues such as optimal sensor deployment, node localization, base station placement, location of target nodes, energy-aware clustering, and data aggregation. Recently, researchers around the world have been applying a bio-inspired optimization algorithm known as particle swarm optimization (PSO) to increase efficiency on these WSN issues. This paper describes the use of the PSO algorithm for optimal sensor deployment in WSNs.
Similar to Have We Missed Half of What the Neocortex Does? A New Predictive Framework Based on Cortical Grid Cells (20)
Brains@Bay Meetup: A Primer on Neuromodulatory Systems - Srikanth Ramaswamy (Numenta)
Meetup page: https://www.meetup.com/Brains-Bay/events/284481247/
Neuromodulators are signalling chemicals in the brain, which control the emergence of adaptive learning and behaviour. Neuromodulators including dopamine, acetylcholine, serotonin and noradrenaline operate on a spectrum of spatio-temporal scales in tandem and opposition to reconfigure functions of biological neural networks and to regulate global cognition and state transition. Although neuromodulators are important in shaping cognition, their phenomenology is yet to be fully realized in deep neural networks (DNNs). In this talk, we will give an overview of the biological organizing principles of neuromodulators in adaptive cognition and highlight the competition and cooperation across neuromodulators.
Brains@Bay Meetup: How to Evolve Your Own Lab Rat - Thomas Miconi (Numenta)
Meetup page: https://www.meetup.com/Brains-Bay/events/284481247/
A hallmark of intelligence is the ability to learn new flexible, cognitive behaviors - that is, behaviors that require discovering, storing and exploiting novel information for each new instance of the task. In meta-learning, agents are trained with external algorithms to learn one specific cognitive task. However, animals are able to pick up such cognitive tasks automatically, as a result of their evolved neural architecture and synaptic plasticity mechanisms, including neuromodulation. Here we evolve neural networks, endowed with plastic connections and reward-based neuromodulation, over a sizable set of simple meta-learning tasks based on a framework from computational neuroscience. The resulting evolved networks can automatically acquire a novel simple cognitive task, never seen during evolution, through the spontaneous operation of their evolved neural organization and plasticity system. We suggest that attending to the multiplicity of loops involved in natural learning may provide useful insight into the emergence of intelligent behavior.
Brains@Bay Meetup: The Increasing Role of Sensorimotor Experience in Artificial Intelligence (Numenta)
We receive information about the world through our sensors and influence the world through our effectors. Such low-level data has gradually come to play a greater role in AI during its 70-year history. I see this as occurring in four steps, two of which are mostly past and two of which are in progress or yet to come. The first step was to view AI as the design of agents which interact with the world and thereby have sensorimotor experience; this viewpoint became prominent in the 1980s and 1990s. The second step was to view the goal of intelligence in terms of experience, as in the reward signal of optimal control and reinforcement learning. The reward formulation of goals is now widely used but rarely loved. Many would prefer to express goals in non-experiential terms, such as reaching a destination or benefiting humanity, but settle for reward because, as an experiential signal, reward is directly available to the agent without human assistance or interpretation. This is the pattern that we see in all four steps. Initially a non-experiential approach seems more intuitive, is preferred and tried, but ultimately proves a limitation on scaling; the experiential approach is more suited to learning and scaling with computational resources. The third step in the increasing role of experience in AI concerns the agent’s representation of the world’s state. Classically, the state of the world is represented in objective terms external to the agent, such as “the grass is wet” and “the car is ten meters in front of me”, or with probability distributions over world states such as in POMDPs and other Bayesian approaches. Alternatively, the state of the world can be represented experientially in terms of summaries of past experience (e.g., the last four Atari video frames input to DQN) or predictions of future experience (e.g., successor representations). The fourth step is potentially the biggest: world knowledge. Classically, world knowledge has always been expressed in terms far from experience, and this has limited its ability to be learned and maintained. Today we are seeing more calls for knowledge to be predictive and grounded in experience. After reviewing the history and prospects of the four steps, I propose a minimal architecture for an intelligent agent that is entirely grounded in experience.
Brains@Bay Meetup: Open-ended Skill Acquisition in Humans and Machines: An Ev... (Numenta)
In this talk, I will propose a conceptual framework sketching a path toward open-ended skill acquisition through the coupling of environmental, morphological, sensorimotor, cognitive, developmental, social, cultural and evolutionary mechanisms. I will illustrate parts of this framework through computational experiments highlighting the key role of intrinsically motivated exploration in the generation of behavioral regularity and diversity. Firstly, I will show how some forms of language can self-organize out of generic exploration mechanisms without any functional pressure to communicate. Secondly, we will see how language — once invented — can be recruited as a cognitive tool that enables compositional imagination and bootstraps open-ended cultural innovation.
For more:
Brains@Bay Meetup: The Effect of Sensorimotor Learning on the Learned Represe... (Numenta)
Most current deep neural networks learn from a static data set without active interaction with the world. We take a look at how learning through a closed loop between action and perception affects the representations learned in a DNN. We demonstrate how these representations are significantly different from DNNs that learn supervised or unsupervised from a static dataset without interaction. These representations are much sparser and encode meaningful content in an efficient way. Even an agent who learned without any external supervision, purely through curious interaction with the world, acquires encodings of the high dimensional visual input that enable the agent to recognize objects using only a handful of labeled examples. Our results highlight the capabilities that emerge from letting DNNs learn more similar to biological brains, though sensorimotor interaction with the world.
For more:
SBMT 2021: Can Neuroscience Insights Transform AI? - Lawrence Spracklen (Numenta)
Numenta's Director of ML Architecture Lawrence Spracklen presented a talk at the SBMT Annual Congress on July 10th, 2021. He talked about how neuroscience principles can inspire better machine learning algorithms.
FPGA Conference 2021: Breaking the TOPS ceiling with sparse neural networks -... (Numenta)
Nick Ni (Xilinx) and Lawrence Spracklen (Numenta) presented a talk at the FPGA Conference Europe on July 8th, 2021. In this talk, they presented a neuroscience approach to optimizing state-of-the-art deep learning networks into sparse topologies, and showed how it can unlock significant performance gains on FPGAs without major loss of accuracy. They then walked through the FPGA implementation, where they exploited the advantage of sparse networks with a unique Domain Specific Architecture (DSA).
OpenAI’s GPT 3 Language Model - guest Steve Omohundro (Numenta)
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.
Link to GPT-3 paper: https://arxiv.org/abs/2005.14165
Link to YouTube recording of Steve's talk: https://youtu.be/0ZVOmBp29E0
The Thousand Brains Theory: A Framework for Understanding the Neocortex and Building Intelligent Machines (Numenta)
Recent advances in reverse engineering the neocortex reveal that it is a highly-distributed sensory-motor modeling system. Each cortical column learns complete models of observed objects through movement and sensation. The columns use long-range connections to vote on what objects are currently being observed. In this talk, we introduce the key elements of this theory and describe how these elements can be introduced into current machine learning techniques to improve their capabilities, robustness, and power requirements.
The Biological Path Toward Strong AI by Matt Taylor (05/17/18) (Numenta)
These are Matt Taylor's slides from the AI Singapore Meetup on May 17, 2018.
Abstract:
Today’s wave of AI technology is still being driven by the ANN neuron pioneered decades ago. Hierarchical Temporal Memory (HTM) is a realistic biologically-constrained model of the pyramidal neuron reflecting today’s most recent neocortical research. This talk will describe and visualize core HTM concepts like sparse distributed representations, spatial pooling and temporal memory. Strong AI is a common goal of many computer scientists. So far, machine learning techniques have created amazing results in narrow fields, but haven’t produced something we could all call “intelligent”. Given recent advances in neuroscience research, we know a lot more about how neurons work together now than we did when ANNs were created. We believe systems with a more realistic neuronal model will be more likely to produce Strong AI. Hierarchical Temporal Memory is a theory of intelligence based upon neuroscience research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout. This means a common algorithm is processing all your sensory input, no matter which sense. We believe we have discovered some of the foundational algorithms of the neocortex, and we’ve implemented them in software. I’ll show you how they work with detailed dynamic visualizations of Sparse Distributed Representations, Spatial Pooling, and Temporal Memory.
The Biological Path Towards Strong AI, Strange Loop 2017, St. Louis (Numenta)
Copy and paste this URL to your browser to watch the live presentation: https://www.youtube.com/watch?v=-h-cz7yY-G8
Abstract:
Today’s wave of AI technology is still being driven by the ANN neuron pioneered decades ago. Hierarchical Temporal Memory (HTM) is a realistic biologically-constrained model of the pyramidal neuron reflecting today’s most recent neocortical research. This talk will describe and visualize core HTM concepts like sparse distributed representations, spatial pooling and temporal memory. Strong AI is a common goal of many computer scientists. So far, machine learning techniques have created amazing results in narrow fields, but haven’t produced something we could all call “intelligent”. Given recent advances in neuroscience research, we know a lot more about how neurons work together now than we did when ANNs were created. We believe systems with a more realistic neuronal model will be more likely to produce Strong AI. Hierarchical Temporal Memory is a theory of intelligence based upon neuroscience research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout. This means a common algorithm is processing all your sensory input, no matter which sense. We believe we have discovered some of the foundational algorithms of the neocortex, and we’ve implemented them in software. I’ll show you how they work with detailed dynamic visualizations of Sparse Distributed Representations, Spatial Pooling, and Temporal Memory.
Numenta engineer Yuwei Cui walks through how the HTM Spatial Pooler works, explaining why desired properties exist and how they work. Includes lots of graphs of SP online learning performance, discussion of topology and boosting.
Matt Taylor, Numenta's Open Source Community Manager, delivered this presentation at AI With the Best on April 29, 2017.
Abstract: Strong AI is a common goal of many computer scientists. So far, machine learning techniques have created amazing results in narrow fields, but haven’t produced something we could all call “intelligent”.
Given recent advances in neuroscience research, we know a lot more about how neurons work together now than we did when ANNs were created. We believe systems with a more realistic neuronal model will be more likely to produce Strong AI.
Hierarchical Temporal Memory is a theory of intelligence based upon neuroscience research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout. This means a common algorithm is processing all your sensory input, no matter which sense.
We believe we have discovered some of the foundational algorithms of the neocortex, and we’ve implemented them in software. I’ll show you how they work with detailed dynamic visualizations of Sparse Distributed Representations, Spatial Pooling, and Temporal Memory.
Have We Missed Half of What the Neocortex Does? A New Predictive Framework Based on Cortical Grid Cells
1. Have We Missed Half of What the Neocortex Does? A New Predictive Framework Based on Cortical Grid Cells
University of Waterloo, October 2, 2018
Subutai Ahmad
sahmad@numenta.com
@SubutaiAhmad
Collaborators: Jeff Hawkins, Marcus Lewis, Scott Purdy, Mirko Klukas
2. Standard Model of the Neocortex
[Diagram: a feedforward hierarchy. A sensory array feeds Region 1, which extracts simple features; Region 2 extracts complex features; Region 3 represents objects.]
3. Vernon Mountcastle’s Big Idea
1) All areas of the neocortex look the same because they perform the same basic function.
2) What makes one region a visual region and another an auditory region is what it is connected to.
3) A small area of cortex, a 1 mm² "cortical column", is the unit of replication and implements the common cortical algorithm.
Mountcastle, 1978
4. Anatomy of a Cortical Column
1) Cortical columns are really complex! The function of a cortical column must also be complex.
2) Local circuitry is remarkably consistent everywhere. The function of a cortical column must be generic.
[Diagrams: a "simple" column sketch (input arriving at L4, layers L2/3, L5, L6) beside a "realistic" one with many distinct cell populations (L2, L3a, L3b, L4, L5 tt, L5 cc, L5 cc-ns, L6a, L6b, L6 ip, L6 mp, L6 bp), long-range lateral connections, and output both direct and via the thalamus, with 50% and 10% labels on the cortex-thalamus pathways.]
(L5: Callaway et al., 2015; L6: Zhang and Deschenes, 1997; Binzegger et al., 2004; Nelson et al., 2013; L5 CTC: Guillery, 1995; Constantinople and Bruno, 2013)
5. What Does a Cortical Column Do?
The neocortex learns a model of the world:
- Thousands of objects, how they appear on the sensors
- Relative location of features, invariant to sensor position
- Learned via movement of sensors
- Makes detailed predictions
- Models physical and abstract objects
Mountcastle corollary: If the neocortex learns models of objects, then each column learns models of objects.
6. Question: How can networks of neurons learn rich predictive models of objects through movement?
Proposal: The neocortex and cortical columns use "cortical grid cells" and path integration to model an object's structure.
Talk Outline:
1) Properties of grid cells
2) Network model of cortical columns
3) Implications
11. Multiple Modules Can Represent Locations Uniquely
(Fiete et al, 2008; Sreenivasan and Fiete, 2011)
12. Path Integration
(Hafting et al, 2005; McNaughton et al., 2006; Ocko et al., 2018)
- As the animal moves, grid cells update their location
- This can happen in the dark, using an efference copy of motion signals
- Path integration: regardless of path trajectory, the same location in the environment will activate a consistent grid code
- Imprecise, so sensory cues are used to "anchor" grid cells
13. Unique Location Spaces For Each Environment
- Each module is randomly initialized in a new room, so codes for each room will be unique (e.g. 20 modules, 100 cells each = 100^20 possible codes)
- Initial point implicitly defines a location space for each environment
- Modules update independently, so path integration still works
- Sensory cues can be used to anchor or re-activate the location space
(Rowland & Moser, 2014)
14. Summary: Grid Cells Represent the Structure of Environments
Entorhinal Cortex (body in environments)
Location:
- Encoded by grid cells
- Unique to location in room AND room
- Location is updated by movement
A Room is:
- A set of locations that are connected by movement (via path integration).
- Some locations have associated features.
15. Proposal: Cortical Grid Cells Represent the Structure of Objects
Entorhinal Cortex (body in environments)
Location:
- Encoded by grid cells
- Unique to location in room AND room
- Location is updated by movement
A Room is:
- A set of locations that are connected by movement (via path integration).
- Some locations have associated features.
Cortical Column (sensor patch relative to objects)
Location:
- Encoded by grid-like cells
- Unique to location on object AND object
- Location is updated by movement
An Object is:
- A set of locations that are connected by movement (via path integration).
- Some locations have associated features.
16. Network Model (Single Cortical Column)
- Sequence of discrete time steps
- Each time step consists of 4 stages representing a movement followed by a sensation
[Diagram: grid cell modules use movement to update their location; the location layer makes sensory predictions (forward model); predictions + sensation => sensory representation; the new sensation updates the location]
(Lewis et al, submitted)
17. Updating Grid Cell Module Based On Motion
- Cells in each module are represented by a 2D phase $\Phi$
- Activity at time $t$ is a bump centered around a phase $\Phi^i_t$
- Each module has a different scale and orientation, represented by a transform matrix:
  $\mathbf{M}_i = \begin{pmatrix} s_i \cos\theta_i & -s_i \sin\theta_i \\ s_i \sin\theta_i & s_i \cos\theta_i \end{pmatrix}^{-1}$
- Given a motion $\mathbf{d}_t$, the phase is shifted according to:
  $\Phi^i_{t,\mathrm{move}} = \{\, (\varphi + \mathbf{M}_i \mathbf{d}_t) \bmod 1.0 \;:\; \varphi \in \Phi^i_{t-1} \,\}$
- Threshold activity to get a binary location representation
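A minimal NumPy sketch of this update rule, assuming a single bump per module; the class and parameter names are illustrative, not the model's actual code:

    import numpy as np

    class GridCellModule:
        """One 2D grid cell module; active phases live on the unit torus [0,1)^2."""

        def __init__(self, scale, orientation, rng=None):
            rng = rng or np.random.default_rng()
            c, s = np.cos(orientation), np.sin(orientation)
            # M_i: inverse of the module's scale-and-rotation matrix.
            self.M = np.linalg.inv(scale * np.array([[c, -s], [s, c]]))
            self.phases = rng.random((1, 2))  # random anchor => new location space

        def move(self, d):
            # Phi_{t,move}^i = (phi + M_i d_t) mod 1.0, for each active phase phi.
            self.phases = (self.phases + self.M @ d) % 1.0

    # Modules with different scales and orientations integrate the same motion;
    # the combined phases give a consistent code for a location however reached.
    modules = [GridCellModule(0.5, 0.3), GridCellModule(1.2, 1.1)]
    for m in modules:
        m.move(np.array([0.10, -0.05]))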
18. Location Layer Forms Predictions in Sensory Layer
[Diagram: the location layer projects to the sensory layer, providing sensory predictions (a forward model)]
19. Active Dendrites in Pyramidal Neurons
5K to 30K excitatory synapses
- 10% proximal
- 90% distal
Distal dendrites are pattern detectors
- 8-15 co-active, co-located synapses generate dendritic NMDA spikes
- sustained depolarization of the soma, but does not typically generate an AP
(Mel, 1992; Branco & Häusser, 2011; Schiller et al, 2000; Losonczy, 2006; Antic et al, 2010; Major et al, 2013; Spruston, 2008; Milojkovic et al, 2005, etc.)
20. HTM Neuron Model: Active Dendrites Predict a Neuron’s Inputs
Proximal synapses: Cause somatic spikes
Define the classic receptive field of the neuron
Distal synapses: Cause dendritic spikes
Put the cell into a depolarized, or “predictive” state
Depolarized neurons fire sooner, inhibiting nearby neurons.
A neuron can predict its activity in hundreds of unique contexts.
(Poirazi et al., 2003; Hawkins & Ahmad, 2016)
21. Sensory Layer is a Sequence Memory Layer
- Neurons in a mini-column learn the same FF receptive field.
- Distal dendritic segments form connections to cells in the location layer.
- Active segments act as predictions and bias cells.
- With sensory input these cells fire first, and inhibit other cells within the mini-column (see the sketch below).
(Hawkins & Ahmad, 2016; Hawkins et al, 2017)
[Diagram: sensory input arriving at t=0 and t=1 under two conditions.
Predicted input: predicted cells inhibit their neighbors, yielding a very specific sparse representation that encodes the current sensory input at the current location.
No prediction: the whole mini-column fires, yielding a dense representation that activates the codes for this sensory input at any location.]
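A hypothetical sketch of that activation rule in Python; the layout and names are ours, not Numenta's implementation:

    import numpy as np

    def activate(active_minicolumns, predicted):
        """Cells are laid out as (n_minicolumns, cells_per_minicolumn);
        `predicted` marks cells depolarized by distal input from the
        location layer. Returns the boolean array of active cells."""
        active = np.zeros_like(predicted, dtype=bool)
        for col in active_minicolumns:
            if predicted[col].any():
                # Predicted cells fire first, inhibiting their neighbors.
                active[col] = predicted[col]
            else:
                # No prediction: the whole mini-column bursts (dense code).
                active[col] = True
        return active

    # 4 mini-columns x 3 cells; sensory input drives columns 0 and 2,
    # and one cell in column 0 was predicted by the location layer.
    predicted = np.zeros((4, 3), dtype=bool)
    predicted[0, 1] = True
    print(activate([0, 2], predicted))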
22. Multiple Simultaneous Locations Represents Uncertainty
- Sensory representation activates grid cell locations
- If this sensory representation is not unique, we activate
a union of grid cells in each module
- With sufficiently large modules, the union can represent
several locations without confusion
- Movement shifts all active grid cells (see the sketch below)
(Ahmad & Hawkins, 2016)
(Lewis et al, submitted)
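An illustrative toy of this narrowing process; the feature-to-location table is invented for the example:

    # Candidate locations consistent with the evidence so far, kept as a
    # union; movement shifts the whole union, sensation intersects it.
    feature_locations = {
        "edge":   {(0, 0), (2, 1), (3, 3)},   # "edge" occurs at 3 places
        "corner": {(2, 2)},                   # "corner" occurs at 1 place
    }

    def move(candidates, d):
        return {(x + d[0], y + d[1]) for (x, y) in candidates}

    candidates = feature_locations["edge"]     # ambiguous: 3 candidates
    candidates = move(candidates, (0, 1))      # movement shifts all of them
    candidates &= feature_locations["corner"]  # new sensation filters the union
    print(candidates)                          # {(2, 2)}: uncertainty resolved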
23. Learning
- For a new object, we first activate a random cell in each module and move to the first feature
- Select a random subset of active sensory cells for this sensory input. Store the location representation on an independent dendritic segment in the sensory cells.
- Store this sensory representation on an independent dendritic segment in each active location cell
- Move to the next feature and repeat
- This process invokes a new unique location space, and sequentially stores specific location representations with sensory cues, and specific sensory codes with locations (see the sketch below)
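A toy sketch of this learning loop, with plain Python pairs standing in for dendritic segments; every structure and parameter here is illustrative, not the model's implementation:

    import random

    def learn_object(feature_sequence, n_modules=10, cells_per_module=100,
                     n_sensory=2048, code_size=20):
        """feature_sequence: list of (movement, feature) pairs."""
        # New object: one random active cell per module, implicitly
        # invoking a new, unique location space.
        location = tuple(random.randrange(cells_per_module)
                         for _ in range(n_modules))
        sensory_segments = []    # on sensory cells: location -> sensory code
        location_segments = []   # on location cells: sensory code -> location
        for movement, feature in feature_sequence:
            # Toy path integration: shift each module's active cell.
            location = tuple((c + movement) % cells_per_module for c in location)
            # Random sparse subset of sensory cells for this input.
            code = frozenset(random.sample(range(n_sensory), code_size))
            sensory_segments.append((location, (feature, code)))
            location_segments.append(((feature, code), location))
        return sensory_segments, location_segments

    segments = learn_object([(3, "edge"), (7, "corner"), (2, "edge")])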
32. Single Cortical Column Model Of Sensorimotor Inference
- Objects are defined by the relative
locations of sensory features
- Objects are recognized through a
sequence of movements and sensations
- Grid cell code enables a powerful
predictive sensorimotor network
- Simulation results demonstrate
convergence and capacity
33. Sensorimotor Inference With Multiple Columns And Long-Range Lateral Connections
- Each column has partial knowledge of the object.
- Columns vote through long-range lateral connections (see the sketch below).
(Hawkins et al, 2017)
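A minimal illustration of voting as set intersection; the object names are invented for the example:

    # Each column keeps the set of objects consistent with its own partial
    # evidence; long-range lateral connections let columns intersect them.
    column_1 = {"mug", "bowl", "can"}      # felt a curved surface
    column_2 = {"mug", "stapler"}          # felt a handle
    column_3 = {"mug", "bowl"}             # felt a rim
    print(column_1 & column_2 & column_3)  # {'mug'}: consensus in one step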
35. Where Are Cortical Grid Cells?
- Long-range lateral connections
- Large pathway between L6 and L4
- Strong motor projections into L6
[Diagram: cortical column with layers L2/3, L4, L6]
L6 to L4: Ahmed et al., 1994; Binzegger et al., 2004; Harris & Shepherd, 2015
L6 motor: Harris & Shepherd, 2015; Nelson et al., 2013; Leinweber et al., 2017
36. Where Are Cortical Grid Cells?
[Diagram: the network model mapped onto the column. Object layer = L2/3; sensory input layer = L4; location layer (grid cell modules) = L6]
L6 to L4: Ahmed et al., 1994; Binzegger et al., 2004; Harris & Shepherd, 2015
L6 motor: Harris & Shepherd, 2015; Nelson et al., 2013; Leinweber et al., 2017
37. Biological Evidence
1) Border ownership cells:
Cells fire only if a feature is present at an object-centric location on the object.
Detected even in primary sensory areas (V1 and V2).
(Zhou et al., 2000; Williford & von der Heydt, 2015)
2) Grid cell signatures in cortex:
Cortical areas in humans show grid cell like signatures (fMRI and single cell recordings).
Seen while subjects navigate conceptual object spaces and virtual environments.
(Doeller et al., 2010; Jacobs et al., 2013; Constantinescu et al., 2016)
3) Sensorimotor prediction in sensory regions:
Cells predict their activity before a saccade.
Predictions during saccades are important for invariant object recognition.
(Duhamel et al., 1992; Nakamura and Colby, 2002; Li and DiCarlo, 2008)
4) Hippocampal functionality may have been conserved in neocortex:
(Jarvis et al., 2005; Luzzati, 2015)
39. Rethinking Hierarchy: Thousand Brains Theory of Intelligence
[Diagram: instead of a single hierarchy from sense array to objects, many parallel columns each model objects from their own sense array]
Every column learns models of objects.
Each model is different depending on its inputs.
Cortical columns quickly resolve uncertainty through voting.
The neocortex contains thousands of massively parallel and distributed independent modeling systems.
40. Experimentally Testable Predictions
1) Object coding:
Every sensory region will contain layers that are stable while sensing a familiar object.
The set of cells will be sparse but specific to object identity.
Ambiguous information will lead to denser activity in upper layers.
Each region will contain cells tuned to locations of features in the object’s reference frame.
(Zhou et al., 2000; Zheng & Kwon, 2018)
2) Cortical columns:
Cortical columns can learn complete object models.
Complexity of objects is tied to the span of long-range lateral connections.
Activity within stable layers will converge more slowly with long-range connections disabled.
Subgranular layers of primary sensory regions (Layer 6) will be driven by motor signals.
Grid-like cells in Layer 6a.
(Nelson et al., 2013; Suter & Shepherd, 2015; Lee et al, 2008; Leinweber et al, 2017)
41. Summary
[Diagram: cortical column with layers L2/3, L4, L6]
1. Cortical columns are much more
powerful than typically assumed.
2. Cortical columns model the
structure of objects using grid
cell mechanisms.
3. Multiple columns resolve
uncertainty through voting.
4. Thousands of cortical columns
vote through long-range
connections across regions and
sensory modalities.
42. Numenta Team
Jeff Hawkins, Marcus Lewis, Scott Purdy, Mirko Klukas, Luiz Scheinkman
Contact: sahmad@numenta.com
@SubutaiAhmad