This document presents Jeff Hawkins' slides on machine intelligence and the cortical theory of intelligence. It begins by comparing the birth of programmable computing in the 1940s-1950s with the birth of machine intelligence in the 2010s-2020s, noting that in the earlier era many approaches competed before one dominant paradigm emerged because it was the most flexible and scalable, and predicting the same will happen for machine intelligence. It then outlines Numenta's cortical theory, including hierarchical temporal memory (HTM), and how HTM models the neocortex. The document details Numenta's research applying HTM to areas like anomaly detection, language processing, and vision. It argues that HTM may become the dominant machine intelligence paradigm because the neocortex is a proven general learning system and HTM models the common algorithm the neocortex applies across all modalities.
What the Brain says about Machine Intelligence
1. November 21, 2014
Jeff Hawkins
jhawkins@Numenta.com
What the Brain Says About Machine Intelligence
2. The Birth of Programmable Computing
1940's-1950's: many approaches
- Dedicated vs. universal
- Analog vs. digital
- Decimal vs. binary
- Wired vs. memory-based programming
- Serial vs. random access memory
One dominant paradigm emerged:
- Universal
- Digital
- Binary
- Memory-based programming
- Two-tier memory
Why Did One Paradigm Win?
- Network effects
Why Did This Paradigm Win?
- Most flexible
- Most scalable
3. The Birth of Machine Intelligence
2010's-2020's: many approaches
- Specific vs. universal algorithms
- Mathematical vs. memory-based
- Batch vs. on-line learning
- Labeled vs. behavior-based learning
One dominant paradigm will emerge:
- Universal algorithms
- Memory-based
- On-line learning
- Behavior-based learning
Why Will One Paradigm Win?
- Network effects
Why Will This Paradigm Win?
- Most flexible
- Most scalable
How Do We Know This Is Going to Happen?
- The brain is the proof case
- We have made great progress
4. Numenta's Mission
1) Discover the operating principles of the neocortex.
2) Create machine intelligence technology based on neocortical principles.
Talk Topics
- Cortical facts
- Cortical theory
- Research roadmap
- Applications
- Thoughts on machine intelligence
5. What the Cortex Does
The neocortex learns a sensory-motor model of the world.
It learns a model of the world from changing sensory data.
The model generates:
- predictions
- anomalies
- actions
Most sensory changes are due to your own movement.
[Diagram: light, sound, and touch reach the retina, cochlea, and somatic sensors, which deliver changing patterns to the cortex]
7. Cortical Theory
Cortical facts:
- Hierarchy
- Cellular layers
- Mini-columns
- Neurons: 3-10K synapses (10% proximal, 90% distal)
- Active dendrites
- Learning = new synapses
- Remarkably uniform, anatomically and functionally
- A sheet of cells
HTM (Hierarchical Temporal Memory):
1) Hierarchy of identical regions
2) Each region learns sequences
3) Stability increases going up the hierarchy if input is predictable
4) Sequences unfold going down
Questions:
- What does a region do?
- What do the cellular layers do?
- How do neurons implement this?
- How does this work in a hierarchy?
[Diagram: cortical layers 2/3, 4, 5, 6]
8. Cellular Layers
Each layer is a variation of a common sequence memory algorithm:
- Layer 2/3: sequence memory - inference (high-order)
- Layer 4: sequence memory - inference (sensory-motor)
- Layer 5: sequence memory - motor
- Layer 6: sequence memory - attention
These are universal functions. They apply to:
- all cortical regions
- all sensory-motor modalities
[Diagram: feedforward sensor data and a copy of motor commands arrive from the lower region and sub-cortical motor centers; feedback descends from the higher region]
10. HTM Temporal Memory
Learns sequences
Recognizes and recalls sequences
Predicts next inputs
- High capacity
- Distributed
- Local learning rules
- Fault tolerant
- No sensitive parameters
- Generalizes
11. HTM Temporal Memory: Not Just Another ANN
1) Cortical anatomy
- Mini-columns
- Inhibitory cells
- Cell connectivity patterns
2) Sparse distributed representations
3) Realistic neurons
- Active dendrites
- Thousands of synapses
- Learning via synapse formation
numenta.com/learn/
12. Research Roadmap
- Layer 2/3 (high-order inference): theory 98%, extensively tested, commercial
- Layer 4 (sensory-motor inference): theory 80%, in development
- Layer 5 (motor sequences): theory 50%
- Layer 6 (attention/feedback): theory 30%
Streaming Data
- Capabilities: prediction, anomaly detection, classification
- Applications: predictive maintenance, security, natural language processing
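To make the anomaly-detection capability concrete, here is a minimal sketch (my own illustration, not code from the deck) of the raw anomaly score an HTM-style system can derive from its sequence predictions: the fraction of currently active columns that the previous step failed to predict. To my understanding this mirrors how NuPIC's raw anomaly score is defined.

```python
# A minimal sketch of an HTM-style raw anomaly score: the fraction of
# currently active columns that were NOT predicted at the previous step.

def anomaly_score(active_columns, predicted_columns):
    """Return 0.0 when every active column was predicted, 1.0 when none were."""
    active = set(active_columns)
    if not active:
        return 0.0
    unpredicted = active - set(predicted_columns)
    return len(unpredicted) / len(active)

# Example: 40 active columns, 30 of them predicted -> score 0.25
print(anomaly_score(range(40), range(30)))  # 0.25
```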
22. Natural Language
A document corpus (e.g. Wikipedia) is encoded into ~100K "Word SDRs", each 128 x 128 bits.
Word-SDR arithmetic example:
Apple - Fruit = Computer, Macintosh, Microsoft, Mac, Linux, Operating system, ….
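The "Apple - Fruit" example can be illustrated with toy SDRs. The sketch below uses tiny, hypothetical bit assignments (real word fingerprints are 128 x 128 bits learned from a corpus); it only shows the mechanism: subtracting one SDR's bits from another leaves the residual semantics, which can be matched against other words by overlap.

```python
# Toy illustration of word-SDR arithmetic (hypothetical miniature SDRs).
# Representing each word as a set of active bit indices, "Apple - Fruit"
# keeps only the bits of "apple" that are not fruit-like, which then
# overlap most with computer-related words.

apple = {1, 2, 3, 10, 11, 12}        # fruit-sense bits 1-3, computer-sense bits 10-12
fruit = {1, 2, 3, 4, 5}
computer = {10, 11, 12, 13}
banana = {1, 2, 4, 5, 6}

residue = apple - fruit              # set difference = SDR subtraction
print(residue)                       # {10, 11, 12}: the computer-sense bits

# Rank candidate words by overlap with the residue.
for word, sdr in [("computer", computer), ("banana", banana)]:
    print(word, len(residue & sdr))  # computer 3, banana 0
```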
23. Sequences of Word SDRs
Training set (each line is a sequence of three Word SDRs - Word 1, Word 2, Word 3 - fed to an HTM):
frog eats flies
cow eats grain
elephant eats leaves
goat eats grass
wolf eats rabbit
cat likes ball
elephant likes water
sheep eats grass
cat eats salmon
wolf eats mice
lion eats cow
dog likes sleep
elephant likes water
cat likes ball
coyote eats rodent
coyote eats rabbit
wolf eats squirrel
dog likes sleep
cat likes ball
24. Sequences of Word SDRs
Test: present the novel pair "fox eats" - what does the HTM predict next?
("fox" never appeared in the training set, which is the same as on slide 23.)
25. Sequences of Word SDRs
"fox eats" → the HTM predicts: rodent
- Learning is unsupervised
- Semantic generalization
- Works across languages
- Many applications: intelligent search, sentiment analysis, semantic filtering
(Training set as on slide 23.)
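A toy sketch of why the HTM can sensibly complete "fox eats …" despite never seeing "fox": predictions are learned on the active bits of word SDRs, not on word identities, so a novel word whose (hypothetical) SDR overlaps "wolf" and "coyote" inherits their learned transitions. Everything below, including the miniature SDRs, is my own illustration of that idea, not the deck's implementation.

```python
# Semantic generalization with word SDRs: transitions are learned per
# active bit, so a never-seen word whose SDR overlaps trained words
# inherits their learned predictions by simple voting.

from collections import Counter

sdr = {
    "wolf":   {1, 2, 3},
    "coyote": {2, 3, 4},
    "cow":    {8, 9},
    "fox":    {1, 3, 4},   # unseen in training, overlaps wolf/coyote
}

training = [("wolf", "rabbit"), ("coyote", "rodent"),
            ("wolf", "squirrel"), ("cow", "grain")]

# Learning: each active bit of the subject votes for the object it preceded.
bit_to_objects = {}
for subject, obj in training:
    for bit in sdr[subject]:
        bit_to_objects.setdefault(bit, Counter())[obj] += 1

# Inference for the novel word: "fox eats ...?"
votes = Counter()
for bit in sdr["fox"]:
    votes.update(bit_to_objects.get(bit, Counter()))
print(votes.most_common())  # prey words (rabbit, squirrel, rodent) win; "grain" gets no votes
```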
26. Server metrics, human metrics, natural language, GPS data, EEG data, financial data:
all these applications run on the exact same HTM code.
27. Research Roadmap (continued)
(Layer roadmap as on slide 12.)
Streaming Data
- Capabilities: prediction, anomaly detection, classification
- Applications: IT, security, natural language processing
Static Data (via active learning)
- Capabilities: classification, prediction
- Applications: vision image classification, network classification, classification of connected graphs
28. Research Roadmap (continued)
(Layer roadmap, Streaming Data, and Static Data sections as on slide 27.)
Static and/or Streaming Data
- Capabilities: goal-oriented behavior
- Applications: robotics, smart bots, proactive defense
29. Research Roadmap (continued)
(Sections as on slide 28.)
Attention/feedback enables:
- Multi-sensory modalities
- Multi-behavioral modalities
30. Research Transparency
- Algorithms are documented
- Multiple independent implementations
- Numenta's software is open source (GPLv3)
- Numenta's daily research code is online
- Active discussion groups for theory and implementation
- Collaborative: IBM Almaden Research (San Jose, CA), DARPA (Washington, D.C.), Cortical.IO (Austria)
NuPIC: www.numenta.org
45. Learning Transitions
- This is a first-order sequence memory.
- It cannot learn A-B-C-D vs. X-B-C-Y.
- Mini-columns turn this into a high-order sequence memory.
Multiple predictions can occur at once: A-B, A-C, A-D.
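The first-order limitation is easy to demonstrate. The sketch below (my own, not from the deck) builds a transition table keyed only on the current input; after seeing both A-B-C-D and X-B-C-Y, the input "C" predicts both D and Y, which is exactly the ambiguity mini-columns resolve.

```python
# Why a first-order sequence memory cannot disambiguate A-B-C-D vs X-B-C-Y:
# a memory keyed only on the current input merges the two sequences.

from collections import defaultdict

transitions = defaultdict(set)
for seq in ["ABCD", "XBCY"]:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur].add(nxt)

print(transitions["C"])   # {'D', 'Y'} -- ambiguous without context
# HTM resolves this with mini-columns: the same input "C" is represented
# by different cells depending on whether it was reached via A-B or X-B.
```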
46. Forming High-Order Representations
Feedforward input causes sparse activation of columns.
- Unpredicted input: a burst of activity (every cell in each active column fires).
- Predicted input: a highly sparse, unique pattern (only the predicted cells fire).
47. Representing High-Order Sequences
Before training: the sequences A-B-C-D and X-B-C-Y activate the same columns for the shared inputs B and C.
After training: the sequences become A-B'-C'-D' and X-B''-C''-Y'' - the same columns, but only one cell active per column, so B' and B'' are distinct representations of the same input in different contexts.
IF 40 active columns, 10 cells per column
THEN 10^40 ways to represent the same input in different contexts
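A sketch of the counting argument (toy code, my own illustration): the same input always activates the same 40 columns, but context selects one of 10 cells in each column, giving 10^40 distinct context-dependent representations; an unpredicted input instead bursts every cell in the active columns.

```python
# Context encoding with mini-columns: 40 active columns x 10 cells per
# column. The same input "B" activates the same columns, but a different
# single cell per column depending on the preceding context.

import random

CELLS_PER_COLUMN = 10
active_columns = list(range(40))          # columns driven by input "B"

def contextual_representation(seed):
    """Pick one cell per active column; the choice encodes the context."""
    rng = random.Random(seed)
    return {(col, rng.randrange(CELLS_PER_COLUMN)) for col in active_columns}

b_after_a = contextual_representation("A-B")
b_after_x = contextual_representation("X-B")
print(len(b_after_a & b_after_x))               # few shared cells: same columns, different context
print(CELLS_PER_COLUMN ** len(active_columns))  # 10**40 possible contexts

# An unpredicted input "bursts": every cell in each active column fires.
burst = {(col, cell) for col in active_columns for cell in range(CELLS_PER_COLUMN)}
print(len(burst))                               # 400 cells vs. 40 for a predicted input
```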
48. SDR Properties
1) Similarity: shared bits = semantic similarity.
2) Store and compare: store the indices of the active bits (e.g. all 40 indices); subsampling is OK (e.g. store only 10 of the 40 indices).
3) Union membership: OR together a set of SDRs (e.g. 10 patterns at 2% sparsity each gives a union with ~20% of bits active), then ask "is this SDR a member?" by checking whether its active bits are contained in the union.
49. What Can Be Done With Software
- 1 layer
- 30 msec per learning-inference-prediction step
- 10^-6 of human cortex
- 2048 columns, 65,000 neurons, 300M synapses
50. Challenges and Opportunities for Neuromorphic HW
Challenges:
- Dendritic regions
- Active dendrites
- 1,000s of synapses
- 10,000s of potential synapses
- Continuous learning
Opportunities:
- Low-precision memory (synapses)
- Fault tolerant: memory, connectivity, neurons, natural recovery
- Simple activation states (no spikes)
- Connectivity: very sparse, topological
51. Cellular Layers
Each layer implements a variation of a common sequence memory algorithm:
- Layer 2/3: sequence memory - inference
- Layer 4: sequence memory - inference
- Layer 5: sequence memory - motor
- Layer 6: sequence memory - attention
[Diagram: feedforward from sensor/lower cortex, feedback from higher cortex; outputs to lower cortex and motor centers]
52. Why Will Machine Intelligence Be Based on Cortical Principles?
1) The cortex uses a common learning algorithm (vision, hearing, touch, behavior).
2) The cortical algorithm is incredibly adaptable (languages, engineering, science, arts, …).
3) Network effects: hardware and software efforts will focus on the most universal solution.
53. Cellular Layers
Each layer is a variation of a common sequence memory algorithm; inputs and outputs define the role of each layer:
- Layer 2/3: sequence memory - inference
- Layer 4: sequence memory - inference
- Layer 5: sequence memory - motor
- Layer 6: sequence memory - attention
[Diagram: feedforward from sensor/lower cortex, feedback from higher cortex; outputs to lower cortex and the sub-cortical motor center]
56. Sparse Distributed Representations (SDRs)
SDRs are used everywhere in the cortex:
- Sensory perception
- Planning
- Motor control
- Prediction
- Attention
57. Sparse Distributed Representations
What they are:
• Many bits (thousands)
• Few 1's, mostly 0's
• Example: 2,000 bits, 2% active
• Each bit has semantic meaning
• No bit is essential
01000000000000000001000000000000000000000000000000000010000…………01000
Desirable attributes:
• High capacity
• Robust to noise and deletion
• Efficient and fast
• Enable new operations
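A minimal sketch of an SDR with the parameters on this slide, 2,000 bits with 2% (40 bits) active, stored as the set of active indices rather than a dense bit array (my own illustration):

```python
# An SDR with this slide's parameters: 2,000 bits, 2% (40) active,
# represented sparsely as the set of active indices.

import random

N_BITS = 2000
SPARSITY = 0.02

def random_sdr(rng):
    return frozenset(rng.sample(range(N_BITS), int(N_BITS * SPARSITY)))

rng = random.Random(42)
x = random_sdr(rng)
print(len(x), min(x), max(x))   # 40 active bits scattered across 2,000 positions
```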
58. SDR Operations
1) Similarity: shared bits = semantic similarity.
2) Store and compare: store the indices of the active bits (e.g. all 40 indices); subsampling is OK (e.g. store only 10 of the 40 indices).
3) Union membership: OR together a set of SDRs (e.g. 10 patterns at 2% sparsity each gives a union with ~20% of bits active), then ask "is this SDR a member?" by checking whether its active bits are contained in the union.
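The three operations above map directly onto set operations when an SDR is stored as its active indices. The sketch below (my own illustration, using the 2,000-bit / 40-active parameters from the earlier slides) demonstrates similarity, subsampled comparison, and union membership:

```python
# The three SDR operations as set operations on index-set SDRs.

import random

N_BITS, N_ACTIVE = 2000, 40
rng = random.Random(0)
new_sdr = lambda: frozenset(rng.sample(range(N_BITS), N_ACTIVE))

x, y = new_sdr(), new_sdr()

# 1) Similarity: shared bits measure semantic overlap.
print(len(x & y))                     # near 0 for unrelated random SDRs

# 2) Store and compare with subsampling: keep only 10 of x's 40 indices;
#    matching those 10 is still a near-unique signature of x.
subsample = frozenset(list(x)[:10])
print(subsample <= x, subsample <= y) # True, (almost certainly) False

# 3) Union membership: OR 10 SDRs together (~20% of bits set) and ask
#    whether a candidate's bits are all contained in the union.
union = frozenset().union(*[new_sdr() for _ in range(10)])
print(len(union) / N_BITS)            # roughly 0.18-0.20 of bits active
print(x <= union)                     # False unless x was one of the stored SDRs
```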
67. SDR Basics
x = 0100000000000000000100000000000110000000
• Large number of neurons
• Few active at once
• Every cell represents something
• Information is distributed
• SDRs are binary
Attributes:
• Extremely high capacity
• Robust to noise and deletions
• Have many desirable properties
• Solve the semantic representation problem
10 to 15 synapses are sufficient to recognize patterns in thousands of cells.
A single dendrite can recognize multiple unique patterns without confusion.
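The subsampling claim can be demonstrated with a toy dendritic segment (my own sketch): synapses onto just 12 cells of a 40-cell pattern, with a match threshold of 8, recognize the full pattern, usually tolerate substantial noise, and almost never fire on an unrelated pattern.

```python
# A dendritic segment that synapses onto just 12 of the 40 cells in a
# pattern can still recognize that pattern reliably via a match threshold.

import random

rng = random.Random(1)
N_CELLS, N_ACTIVE = 2048, 40

pattern = set(rng.sample(range(N_CELLS), N_ACTIVE))
segment = set(rng.sample(sorted(pattern), 12))   # 12 synapses onto the pattern
THRESHOLD = 8                                    # segment fires on >= 8 matches

def segment_fires(active_cells):
    return len(segment & active_cells) >= THRESHOLD

noisy = set(rng.sample(sorted(pattern), 30)) | set(rng.sample(range(N_CELLS), 10))
other = set(rng.sample(range(N_CELLS), N_ACTIVE))

print(segment_fires(pattern))   # True: the full pattern matches all 12 synapses
print(segment_fires(noisy))     # very likely True despite 25% of the pattern missing
print(segment_fires(other))     # almost certainly False for a random pattern
```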
68. Example: SDR Classification Capacity in the Presence of Noise
• n = number of bits in the SDR
• w = number of 1 bits
• W_x(n, w, b) = number of vectors that overlap vector x by b bits:
W_x(n, w, b) = \binom{w_x}{b} \binom{n - w_x}{w - b}
• Probability of a false positive for one stored pattern, with match threshold \theta:
fp_w^n(\theta) = \frac{\sum_{b=\theta}^{w} W_x(n, w, b)}{\binom{n}{w}}
• Probability of a false positive for M stored patterns:
fp_X(\theta) \le \sum_{i=0}^{M-1} fp_{w_{x_i}}^n(\theta)
n = 2048, w = 40: with 50% noise, you can classify 10^15 patterns with an error < 10^-11.
n = 64, w = 12: with 33% noise, you can classify only 10 patterns with an error of 0.04%.
Link.to.whitepaper.com
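As a sanity check of the formulas, the short computation below (my own, using exact integer arithmetic) reproduces both numbers quoted on the slide, reading "50% noise" on w = 40 as a match threshold of theta = 20, and "33% noise" on w = 12 as theta = 8.

```python
# Numeric check of the false-positive formula, exactly, via math.comb.

from math import comb
from fractions import Fraction

def false_positive(n, w, theta):
    """P(a random w-of-n SDR overlaps a stored one by >= theta bits)."""
    overlap = sum(comb(w, b) * comb(n - w, w - b) for b in range(theta, w + 1))
    return Fraction(overlap, comb(n, w))

# n = 2048, w = 40, 50% noise (theta = 20):
fp = false_positive(2048, 40, 20)
print(float(fp))             # ~1e-26 per stored pattern
print(float(fp) * 10**15)    # union bound for 10^15 patterns: below 1e-11

# n = 64, w = 12, 33% noise (theta = 8):
print(float(false_positive(64, 12, 8)) * 10)  # ~4e-4, i.e. ~0.04% for 10 patterns
```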