Presentation given by Yuwei Cui, Numenta Research Engineer, at Beijing Normal University, December 2015.
Collaborators: Jeff Hawkins, Subutai Ahmad, Chetan Surpur
Why Do Neurons Have Thousands of Synapses? A Model of Sequence Memory in the Brain
1. Why Do Neurons Have Thousands of Synapses? A Model of Sequence Memory in the Brain
Yuwei Cui (ycui@numenta.com)
Beijing Normal University, December 2015
Collaborators: Jeff Hawkins (PI), Subutai Ahmad, Chetan Surpur
2. History
2005-2009: HTM theory; first-generation algorithms; hierarchy and vision problems; Vision Toolkit
2009-2012: Cortical Learning Algorithms; SDRs, sequence memory, continuous learning; applications exploration
2013-2015: Continued HTM development; NuPIC open source project; Grok for anomaly detection
2014-??: Sensorimotor inference; goal-directed behavior; sequence classification
3. Numenta's Approach (*HTM = Hierarchical Temporal Memory)
Neuroscience: experimental research
Research: HTM theory, HTM algorithms
NuPIC: open source community
Technology validation and development: streaming analytics, natural language, sensorimotor inference
4. Numenta's Goals
Mission: Be the leader in the coming era of machine intelligence
1) Reverse engineer the neocortex
- information and biological theory
- making good progress
2) Create technology for machine intelligence based on neocortical principles
- not whole-brain simulation, not human-like
- new senses, new embodiments, faster, larger
5. What Does the Neocortex Do?
Sensory arrays (retina, cochlea, somatic) feed a sensory stream into the neocortex; a motor stream flows out.
The neocortex learns a model of the world, primarily through behavior. The model is time-based and predictive.
Top three neocortical principles:
1) Memory-prediction
2) Continuous learning
3) Sensory-motor integration
7. The Neuron
ANN neuron: few synapses; sums input × weights (linear); learns by modifying the weights of synapses.
Biological neuron: thousands of synapses; active dendrites, so the cell recognizes 100's of unique patterns; learns by growing new synapses; integrates feedforward, feedback, and local input; generates spikes (non-linear); 8-20 coactive synapses lead to dendritic NMDA spikes, which weakly depolarize the soma.
HTM neuron: thousands of synapses; active dendrites, so the cell recognizes 100's of unique patterns; learns by modeling the growth of new synapses.
Hawkins & Ahmad, arXiv 2015
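To make the comparison concrete, here is a minimal sketch of the HTM-neuron idea from this slide, in plain Python. It is an illustration only, not NuPIC's implementation: the class and parameter names (HTMNeuronSketch, NMDA_THRESHOLD) are invented for this example, and the 15-synapse threshold is simply one value inside the 8-20 range quoted above.

```python
# Minimal sketch (illustrative, not NuPIC's API) of an HTM neuron:
# each distal dendritic segment is an independent coincidence detector,
# and a segment crossing the NMDA threshold puts the cell into a
# depolarized "predictive" state rather than making it fire.

NMDA_THRESHOLD = 15  # coactive synapses needed for a dendritic spike (8-20)

class HTMNeuronSketch:
    def __init__(self, proximal_synapses, distal_segments):
        # proximal_synapses: input bits that drive feedforward activation
        # distal_segments: one set of presynaptic cell ids per segment;
        # a cell with hundreds of segments can recognize hundreds of
        # unique patterns, one stored pattern per segment.
        self.proximal = set(proximal_synapses)
        self.segments = [set(s) for s in distal_segments]

    def predictive(self, active_cells):
        """Depolarized if ANY distal segment crosses the NMDA threshold."""
        return any(len(seg & active_cells) >= NMDA_THRESHOLD
                   for seg in self.segments)

    def learn_pattern(self, active_cells, sample_size=20):
        """Learn by 'growing' synapses to a sample of active cells on a
        new segment: HTM models synapse growth, not weight adjustment."""
        self.segments.append(set(sorted(active_cells)[:sample_size]))

# A segment with enough coactive synapses depolarizes the cell.
cell = HTMNeuronSketch(proximal_synapses=range(10),
                       distal_segments=[range(100, 130)])
print(cell.predictive(set(range(100, 116))))  # True: 16 >= threshold
print(cell.predictive(set(range(100, 110))))  # False: only 10 coactive
```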
8. High-Order Sequences
Two sequences: A-B-C-D and X-B-C-Y (Hawkins & Ahmad, arXiv 2015).
[Figure: cell activity over time, before vs. after learning. Before learning, every cell in an active column fires for each input. After learning, the same columns are active, but only one cell per column fires, so B and C are represented as B'-C' in the A context and as B''-C'' in the X context, leading to D' or Y''. Legend: active cells; depolarized (predictive) cells; inactive cells.]
9. B input C input D’ AND Y” predicted
Multiple simultaneous predictions
C’ AND C” predicted
C’ predicted
Prediction of next input
A input B’ predicted B input
Sequence Prediction
Two sequences: A-B-C-D
X-B-C-Y
Hawkins & Ahmad, arXiv 2015
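The behavior described on slides 8 and 9 can be imitated with a toy. Below is a minimal sketch, not Numenta's HTM algorithm (which uses shared minicolumns, sparse distributed activity, and synapse growth on dendritic segments): each learned sequence element gets a dedicated cell within its symbol's column, an unanticipated input "bursts" every cell in its column, and the prediction is the union of everything the currently active cells have learned to follow them. All names here are invented for this example.

```python
# Toy high-order sequence memory: the same column represents an input,
# but WHICH cell is active in the column depends on context (B' vs B'').

class ToySequenceMemory:
    def __init__(self, cells_per_column=4):
        self.cells_per_column = cells_per_column
        self.next_cell = {}  # column symbol -> number of cells allocated
        self.links = {}      # cell -> set of cells it predicts

    def _new_cell(self, symbol):
        i = self.next_cell.get(symbol, 0)
        assert i < self.cells_per_column, "toy column is full"
        self.next_cell[symbol] = i + 1
        return (symbol, i)   # a cell is (column, index); ('B', 1) ~ B''

    def learn(self, sequence):
        # Dedicate one cell per element of this sequence and chain them,
        # so the same symbol gets different cells in different contexts.
        prev = None
        for symbol in sequence:
            cell = self._new_cell(symbol)
            if prev is not None:
                self.links.setdefault(prev, set()).add(cell)
            prev = cell

    def predict(self, symbols):
        """Feed a sequence of inputs; return the symbols predicted next."""
        burst = lambda s: {(s, i) for i in range(self.next_cell.get(s, 0))}
        active = burst(symbols[0])  # first input is unanticipated: burst
        for symbol in symbols[1:]:
            predicted = {c for a in active for c in self.links.get(a, ())}
            matching = {c for c in predicted if c[0] == symbol}
            active = matching or burst(symbol)  # no match: burst the column
        return {c[0] for a in active for c in self.links.get(a, ())}

tm = ToySequenceMemory()
tm.learn("ABCD")
tm.learn("XBCY")
print(tm.predict("ABC"))  # {'D'}        context disambiguates
print(tm.predict("XBC"))  # {'Y'}
print(tm.predict("BC"))   # {'D', 'Y'}   ambiguous start: both predicted
```

Because the ambiguous start "BC" bursts both B cells, the toy carries both contexts forward and predicts D and Y simultaneously, matching property 3 on the next slide.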
10. HTM Sequence Memory: Computational Properties
1) On-line learning
2) High-order representations (for example: sequences "ABCD" vs. "XBCY")
3) Multiple simultaneous predictions (for example: "BC" predicts both "D" and "Y")
4) Fully local and unsupervised learning rules
5) Extremely robust: tolerant to >40% noise and faults
6) High capacity
Extensively tested, deployed in commercial applications.
Full source code and documentation available: numenta.org & github.com/numenta
Paper in progress, arXiv version available (Hawkins & Ahmad, 2015; Cui et al., 2015)
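Property 5 follows from the arithmetic of sparse distributed representations, and a quick back-of-envelope check makes it plausible. The sizes below (2048 bits, 40 active) are illustrative values at the scale HTM systems typically use, not numbers taken from this slide:

```python
# Overlap-based SDR matching under 40% noise vs. a random pattern.
import random

n, w = 2048, 40  # SDR width and number of active bits
random.seed(0)
stored = set(random.sample(range(n), w))

# Corrupt 40% of the stored pattern's active bits.
kept = set(random.sample(sorted(stored), int(0.6 * w)))
noise = random.sample(sorted(set(range(n)) - stored), w - len(kept))
noisy = kept | set(noise)

unrelated = set(random.sample(range(n), w))
print(len(stored & noisy))      # 24 of 40 bits still overlap
print(len(stored & unrelated))  # typically 0-3 bits overlap by chance
```

A match threshold set anywhere between those two overlap levels tolerates the noise while keeping chance false positives rare.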
17. Summary
- Experimental findings from neuroscience can lead to improved learning algorithms
- Used properties of active dendrites, Hebbian-style plasticity, and minicolumns
- Creating biologically inspired algorithms that really work leads to deeper understanding of cortical principles and numerous testable predictions
Research Roadmap
- Understand functional properties of the laminar microcircuit and thalamocortical inputs
- Model multiple regions and hierarchy
- More biophysically accurate neuron models (e.g. spiking models)
20. Numenta Research Partnerships
IBM Research
Creating complete technology stack for HTM systems
Lead: Dr. Winfried Wilcke
DARPA
HTM-based “Cortical Processor”
Lead: Dr. Dan Hammerstrom
University of Heidelberg
Ported HTM sequence memory to HICANN neuromorphic chip
Lead: Dr. Karlheinz Meier
University of Berlin
Testing biological predictions of HTM theory
Lead: Dr. Matthew Larkum
21. Testable Predictions
1) Sparser activations during a predictable sensory stream.
2) Unanticipated inputs lead to a burst of activity correlated vertically within mini-columns.
3) Neighboring mini-columns will not be correlated.
4) Predicted cells need fast inhibition to inhibit nearby cells within the mini-column.
5) For predictable stimuli, dendritic NMDA spikes will be much more frequent than somatic action potentials.
6) Localized synaptic plasticity for dendritic segments that have spiked, followed a short time later by a back-propagating action potential (bAP).
7) The existence of sub-threshold LTP (in the absence of NMDA spikes) in dendritic segments if a cluster of synapses becomes active followed by a bAP.
8) The existence of localized weak LTD when an NMDA spike is not followed by an action potential.
Supporting evidence: Vinje & Gallant, 2002; Ecker et al., 2010; Smith & Häusser, 2010; Smith et al., 2013; Losonczy et al., 2008.
27. NMDA Dendritic Spike
Branco, T., & Häusser, M. (2011). Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron, 69(5), 885-92.
29. Active Dendrites - Highlights
Experimental data:
- Synapses on distal segments have a non-linear effect.
- 8 to 20 coactive synapses on a distal dendrite branch will cause an NMDA dendritic spike. (This is a small fraction of the spines on the branch.)
- Synapse activity must be spatially and temporally localized.
- An NMDA spike will depolarize the soma but not cause an action potential.
- 85% of excitatory synapses are on distal dendrites.
(Branco & Häusser, 2011; Schiller et al., 2000; Losonczy, 2006; Antic et al., 2010; Major et al., 2013; Spruston, 2008; Milojkovic et al., 2005, etc.)
Editor's Notes
I don't know how many of you have heard about Numenta. Founded by Jeff Hawkins in 2005, we are an unusual research focused organization - we focus on understanding the computational principles of the neocortex. My background is in computer science and machine learning.
We study experimental research in neuroscience, and we use these findings to improve our theory and learning algorithms. Why bother? Why not stick with the existing ML paradigm? Well, if you look at the history of ML, insights from neuroscience have led to numerous fundamental advances in machine learning (including, by the way, the very first learning algorithm). But lately the field has ignored neuroscience. At Numenta we think that's a big mistake.
We validate that our algorithms actually work in real-world applications. We also release everything we do as open source and have cultivated a very fast growing open source community. NuPIC is one of the top machine learning projects on github today. Two points here: 1) we think this approach will lead to qualitative leaps in learning algorithms. 2) <animate back arrow> I am hopeful that our theories will help inform experimental work as well. There is a large set of detailed testable predictions that come out of our theory.
There are a few more things I don’t show, but this is the architecture we want to understand and the one we need to replicate for MI.
What we can show is that a population of such neurons arranged in minicolumns leads to an extremely powerful sequence memory algorithm.