Overview of artificial intelligence, its definition and classification, its history and historical development, as well as several theories and concepts.
Radar and Wireless for Automotive: Market and Technology Trends 2019 report b... – Yole Developpement
The radar and 5G/V2X markets will both grow – one through market pull, the other through prospective enablement.
More information on https://www.i-micronews.com/products/radar-and-v2x-for-automotive-technologies-and-market-trends-2019/
Generative AI models, such as ChatGPT and Stable Diffusion, can create new and original content like text, images, video, audio, or other data from simple prompts, as well as handle complex dialogs and reason about problems with or without images. These models are disrupting traditional technologies, from search and content creation to automation and problem solving, and are fundamentally shaping the future user interface to computing devices. Generative AI can apply broadly across industries, providing significant enhancements for utility, productivity, and entertainment. As generative AI adoption grows at record-setting speeds and computing demands increase, on-device and hybrid processing are more important than ever. Just like traditional computing evolved from mainframes to today’s mix of cloud and edge devices, AI processing will be distributed between them for AI to scale and reach its full potential.
In this presentation you’ll learn about:
- Why on-device AI is key
- Full-stack AI optimizations to make on-device AI possible and efficient
- Advanced techniques like quantization, distillation, and speculative decoding
- How generative AI models can be run on device and examples of some running now
- Qualcomm Technologies’ role in scaling on-device generative AI
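One of the techniques listed above, quantization, can be made concrete with a minimal sketch. This is my own illustration of post-training symmetric int8 quantization, not Qualcomm Technologies' implementation: weights are mapped onto 8-bit integers with a single scale factor, shrinking memory and bandwidth at a small accuracy cost.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8."""
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.03, 1.27], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# rounding error is bounded by half a quantization step
assert np.max(np.abs(w - w_hat)) <= s
```

The int8 tensor is 4x smaller than float32; on-device inference stacks typically combine this with per-channel scales and quantization-aware fine-tuning to limit accuracy loss.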
Executing the UAE AI Strategy: Opportunities and Challenges – Saeed Al Dhaheri
This presentation was delivered at the American University of the Emirates. It gives an overview of AI and presents some use cases, outlines the UAE AI strategy and what the government has done to activate and execute it, and highlights examples of how UAE public sector organizations are transforming their services using AI.
AI for security or security for AI – Sergey Gordeychik
Machine learning technologies are turning from rocket science into daily engineering practice. You no longer have to know the difference between Faster R-CNN and an HMM to build a machine vision system, and even OpenCV has JavaScript bindings that let you solve quite serious tasks while staying in the front end. On the other hand, the massive deployment of AI across various areas brings problems, and security is one of the greatest concerns. In the broader context, security is really all about trust.
Do we trust AI? I don’t, personally.
What is “state of the art” in AI security? Yesterday it was “a PoC, not a product”; today it is becoming “we will fix it later”; tomorrow it will be “if it works, don’t touch it”. And tomorrow is too late.
But what can we do for Trustworthy AI? There are no simple answers.
You can’t install antivirus software or calculate hashes to control the integrity of an annotated dataset. Traditional firewalls and IDS are almost useless inside an ML cloud’s internal SDN/InfiniBand network. Even C-level compliance regimes such as PCI DSS and GDPR don’t work for massive country-level AI deployments. And what about vulnerability management for a TensorFlow model? How would it impact ROC and AUC?
To make it better, we should rethink cyber resilience for AI processes, systems, and applications to make sure they continuously deliver the intended outcome despite adverse cyber events, and make sure that security is genuinely integrated into the innovation AI brings into our lives. To trust AI and earn its trust, perhaps?
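As a minimal illustration of the dataset-integrity point in the abstract above (my own sketch, not from the talk): while plain file hashes are indeed a poor fit for evolving annotated datasets, a content fingerprint computed over (sample, annotation) pairs can at least detect label tampering, independent of record order.

```python
import hashlib
import json

def dataset_fingerprint(samples):
    """Hash each (sample, label) pair, then hash the sorted digests.
    Order-independent: shuffling the dataset does not change the
    fingerprint, but any edit to a sample or its annotation does."""
    digests = sorted(
        hashlib.sha256(json.dumps(pair, sort_keys=True).encode()).hexdigest()
        for pair in samples
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

clean = [("img_001", "cat"), ("img_002", "dog")]
tampered = [("img_001", "cat"), ("img_002", "cat")]  # flipped label

# reordering is harmless, tampering is not
assert dataset_fingerprint(clean) == dataset_fingerprint(list(reversed(clean)))
assert dataset_fingerprint(clean) != dataset_fingerprint(tampered)
```

This only covers integrity at rest; it says nothing about poisoned-but-consistent data, which is part of why the abstract argues traditional controls are insufficient.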
Augmented and Virtual Reality applied in Industry 4.0 – IGS
This is our Augmented and Virtual Reality presentation, showcasing our own AR/VR/MR and wearables deployments, as well as the worldwide state of the industry regarding similar technology deployments.
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
This talk gives a background of Data Science and Artificial Intelligence to better understand the current state of the art (SOTA) for Large Language Models (LLMs) and Generative AI, then starts a discussion on where things are headed in the future.
3D Imaging & Sensing 2018 Reports – Yole Developpement
The iPhone X initiated a trend. What happens next?
More information here: https://www.i-micronews.com/category-listing/product/p3d-imaging-sensing-2018.html
Leading IT research firm Enterprise Management Associates (EMA) conducted in-depth research with a global panel of IT leaders to explore how the interaction of automation, AI, and the demands of digitally transforming enterprises combine to pave the way for ServiceOps – a technology-enabled approach to high-efficiency collaboration between IT service and ITOps.
Check out these slides to get results from this research.
Uber - Building Intelligent Applications, Experimental ML with Uber’s Data Sc... – Karthik Murugesan
In this talk, we will explore how Uber enables rapid experimentation with machine learning models and optimization algorithms through Uber’s Data Science Workbench (DSW). DSW covers a series of stages in the data scientist’s workflow, including data exploration, feature engineering, machine learning model training, testing, and production deployment. DSW provides interactive notebooks for multiple languages with on-demand resource allocation and lets data scientists share their work through community features.
It also supports notebooks and intelligent applications backed by Spark job servers. Deep learning applications based on TensorFlow and Torch can be brought into DSW smoothly, with resource management handled by the system. The DSW environment is customizable: users can bring their own libraries and frameworks. Moreover, DSW supports Shiny and Python dashboards as well as many other in-house visualization and mapping tools.
In the second part of this talk, we will explore use cases where custom machine learning models developed in DSW are productionized within the platform. Uber applies machine learning extensively to solve some hard problems. Use cases include calculating the right prices for rides in over 600 cities and applying NLP technologies to customer feedback to offer safe rides and reduce support costs. We will look at the various options evaluated for productionizing custom models (server-based and serverless). We will also look at how DSW integrates into Uber’s larger ML ecosystem, e.g. model/feature stores and other ML tools, to realize the vision of a complete ML platform for Uber.
Artificial intelligence (AI) and machine learning (ML) are undergoing revolutionary changes that will affect wide swaths of our society, and the applications of this technology are increasingly diverse. Join us as we narrow in on how researchers in AI and ML are using AWS to identify and prevent financial market manipulation in a high-volume, high-velocity stock market. We also explore how to use natural language processing to aid emergency response organizations in real time during deadly disasters, such as hurricanes and catastrophic wildfires.
GPT-4 can pass the American state bar exam, but before you go expecting robot lawyers to take over the courtroom, hold your horses, cowboys – we're not quite there yet. That being said, AI is becoming increasingly human-like, and as a VC we need to start thinking about how this new wave of technology is going to affect the way we build and run businesses. What do we need to do differently? How can we make sure that our investment strategies reflect these changes? It's a brave new world out there, and we’ve got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
The Future of AI is Generative, not Discriminative (5/26/2021) – Steve Omohundro
The deep learning AI revolution has been sweeping the world for a decade now. Deep neural nets are routinely used for tasks like translation, fraud detection, and image classification. PwC estimates that they will create $15.7 trillion/year of value by 2030. But most current networks are "discriminative" in that they directly map inputs to predictions. This type of model requires lots of training examples, doesn't generalize well outside of its training set, creates inscrutable representations, is subject to adversarial examples, and makes knowledge transfer difficult. People, in contrast, can learn from just a few examples, generalize far beyond their experience, and can easily transfer and reuse knowledge. In recent years, new kinds of "generative" AI models have begun to exhibit these desirable human characteristics. They represent the causal generative processes by which the data is created and can be compositional, compact, and directly interpretable. Generative AI systems that assist people can model their needs and desires and interact with empathy. Their adaptability to changing circumstances will likely be required by rapidly changing AI-driven business and social systems. Generative AI will be the engine of future AI innovation.
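The generative/discriminative contrast in the abstract can be made concrete with a toy sketch (my own illustration, not from the talk): a class-conditional Gaussian model both classifies via Bayes' rule and, unlike a pure decision boundary, can generate new samples from the process it has learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data from two classes with a known generative structure
x0 = rng.normal(-2.0, 1.0, 500)   # class 0
x1 = rng.normal(+2.0, 1.0, 500)   # class 1

# A generative model fits p(x | y) per class...
mu = {0: x0.mean(), 1: x1.mean()}
sd = {0: x0.std(), 1: x1.std()}

def log_lik(x, y):
    """Log-likelihood of x under the fitted Gaussian for class y."""
    return -0.5 * ((x - mu[y]) / sd[y]) ** 2 - np.log(sd[y])

# ...which yields a classifier via Bayes' rule (equal priors assumed)...
def predict(x):
    return int(log_lik(x, 1) > log_lik(x, 0))

# ...and, unlike a purely discriminative model, can also generate new data
def sample(y, n):
    return rng.normal(mu[y], sd[y], n)

assert predict(-1.5) == 0 and predict(1.5) == 1
```

A logistic regression trained on the same data would give the same boundary but has no notion of `sample`; that asymmetry is the talk's central point.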
"Discovery and Delivery through Product IntelliGenAI framework" by Ramkumar A... – ISPMAIndia
The adoption of Generative AI, a revolutionary technology, is now widespread, and it has started reshaping industries and functions by automating and augmenting human effort to a large extent. Product management has traditionally leveraged a PM's competencies and skill sets, including (but not limited to) product thinking, product strategy, discovery, cross-functional collaboration, analytical skills, writing PRDs and PR-FAQs, delivery, and product GTM, and has never been about leveraging AI tools extensively. But the time has come now.
Key Learnings:
The audience will be able to appreciate and understand how GenAI can augment their product management career.
1) Leverage ChatGPT and Google Bard to learn product discovery techniques.
2) Use GenAI tools to automate tasks like creating roadmaps and generating wireframes from context and text inputs.
3) Use a GenAI framework to create vision documents and product requirement documents from product strategy and consumer insights.
4) Use prompt engineering to create PRDs with user stories, taking PR-FAQs, strategy, and design mocks as inputs.
5) Create digital designs with GenAI tools (DALL-E) and convert them to code templates for engineering teams to consume.
6) Gain efficiency and reduce cycle time in product GTM through quick insights from GenAI tools.
This session was presented at the AWS Community Day in Munich (September 2023). It's for builders who have heard the buzz about Generative AI but can’t quite grok it yet. It is useful if you are eager to connect the dots on Generative AI terminology and get a fast start to explore further and navigate the space. The session is largely product-agnostic and meant to give you the fundamentals to get started.
This talk introduces healthcare use cases around the themes of the AI ladder and Lifestyle AI at Scale. It discusses the iterative nature of the workflow and some of the important components to be aware of when developing AI healthcare solutions, covers the different types of algorithms and when machine learning might be more appropriate than deep learning (or the other way around), and shares example use cases.
Galanz is a home appliance manufacturer; its product range includes microwave ovens, air conditioners, and other kitchen products.
The company is big but “young”; it is growing rapidly and wants to become “the world’s plant to global brand” to raise profits.
Presentation by Nozha Boujemaa (Dr, Inria) on Trustworthy Artificial Intelligence, including Responsible and Robust Artificial Intelligence – MIT Tech Review Innovation Leaders Summit "Breakthrough to Impact", Paris, November 30th, 2018.
Big Data & Analytics to Improve Supply Chain and Business Performance – Bristlecone SCC
Prof. David Simchi-Levi, Professor of Engineering Systems at MIT and Chairman of OPS Rules, spoke at Bristlecone Pulse 2017 about delivering customer value through digitization, analytics, and automation.
AI Basic, AI vs Machine Learning vs Deep Learning, AI Applications, Top 50 AI Game Changer Solutions, Advanced Analytics, Conversational Bots, Financial Services, Healthcare, Insurance, Manufacturing, Quality & Security, Retail, Social Impact, and Transportation & Logistics
Brains, Data, and Machine Intelligence (2014-04-14 London Meetup) – Numenta
Jeff will discuss brains, data, and machine intelligence; the Cortical Learning Algorithm he developed; and the Numenta Platform for Intelligent Computing (NuPIC).
Evaluating Real-Time Anomaly Detection: The Numenta Anomaly Benchmark – Numenta
Subutai Ahmad, VP of Research, presenting NAB and discussing the need to evaluate real-time anomaly detection algorithms. This presentation was delivered at MLConf (Machine Learning Conference) in San Francisco in 2015.
Why Do Neurons Have Thousands of Synapses? A Model of Sequence Memory in the Brain – Numenta
Presentation given by Yuwei Cui, Numenta Research Engineer, at Beijing Normal University, December 2015.
Collaborators: Jeff Hawkins, Subutai Ahmad, Chetan Surpur
SF Big Analytics 20170706: What the brain tells us about the future of streami... – Chester Chen
Much of the world’s data is becoming streaming, time-series data, and it is increasingly important to analyze streaming data in real time. Hierarchical Temporal Memory (HTM) is a detailed computational theory of the neocortex. At the core of HTM are time-based learning algorithms that store and recall spatial and temporal patterns. HTM is well suited to a wide variety of problems, particularly those involving streaming data and time-based patterns. Current HTM systems are able to learn the structure of streaming data, make predictions, and detect anomalies. HTM is distinguished from other techniques by its ability to learn continuously in a fully unsupervised manner. It has been tested and implemented in software, all of which is developed with best practices and is suitable for deployment in commercial applications. The core learning algorithms are fully documented and available in an open source project called NuPIC. HTM not only advances our understanding of how the brain may solve the sequence learning problem, but is also applicable to real-world sequence learning problems over continuous data streams.
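HTM itself is far too large for a snippet, but the kind of task the abstract describes, scoring each new point of a stream online against recently learned behavior, can be sketched with a much simpler stand-in. The rolling z-score below is my own illustrative baseline, not NuPIC's algorithm:

```python
from collections import deque
import math

class RollingZScore:
    """Streaming anomaly score: how far a new point deviates, in standard
    deviations, from a sliding window of recent history. A deliberately
    simple stand-in for the online, unsupervised scoring HTM performs."""

    def __init__(self, window=50):
        self.buf = deque(maxlen=window)

    def score(self, x):
        if len(self.buf) < 2:          # not enough history yet
            self.buf.append(x)
            return 0.0
        mean = sum(self.buf) / len(self.buf)
        var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
        z = abs(x - mean) / math.sqrt(var) if var > 0 else 0.0
        self.buf.append(x)             # learn continuously, point by point
        return z

detector = RollingZScore(window=20)
stream = [10.0, 10.1, 9.9] * 10 + [25.0]   # steady signal, then a spike
scores = [detector.score(x) for x in stream]
assert scores[-1] > 5.0   # the spike stands out clearly
```

Unlike this fixed-window statistic, HTM models temporal structure, so it can also flag points that are statistically unremarkable but arrive out of sequence; that is the gap the Numenta Anomaly Benchmark was built to measure.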
Speaker
Yuwei Cui is a Research Staff Member at Numenta, a company focused on machine intelligence. His professional interests are in artificial intelligence, computational neuroscience, computer vision, and machine learning. He became interested in AI while studying physics at the University of Science and Technology of China.
He later earned a PhD in computational neuroscience from the University of Maryland, College Park, specializing in how our visual system processes sensory inputs and contributes to perception. He became fascinated by the brain and by reverse engineering its underlying computational principles, and he has published numerous peer-reviewed scientific articles in neuroscience and AI.
Have We Missed Half of What the Neocortex Does? by Jeff Hawkins (12/15/2017) – Numenta
This was a presentation given on December 15, 2017 at the MIT Center for Brains, Minds + Machines as part of their Brains, Minds and Machines Seminar Series.
You can watch the recording of the presentation after Slide 1.
In this talk, Jeff describes a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose the second input is a representation of allocentric location. The allocentric location represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus of what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.
Jeff discusses material from these two papers. Others can be found at https://numenta.com/papers
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
URL: https://doi.org/10.3389/fncir.2017.00081
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in the Neocortex
URL: https://doi.org/10.3389/fncir.2016.00023
BAAI Conference 2021: The Thousand Brains Theory - A Roadmap for Creating Mac... – Numenta
Jeff Hawkins presented a talk on "The Thousand Brains Theory: A Roadmap to Machine Intelligence" at the Beijing Academy of Artificial Intelligence Conference on 1st June 2021. In this talk, he discussed the key components of The Thousand Brains Theory and Numenta's recent work.
Numenta Brain Theory Discoveries of 2016/2017 by Jeff Hawkins – Numenta
Jeff Hawkins discussed recent advances in cortical theory made by Numenta during the HTM Meetup on 11/03/2017. These discoveries are described in the recently published peer-reviewed paper, “A Theory of How Columns in the Neocortex Enable Learning the Structure of the World.” Jeff walked through the text and figures in the paper, as well as discussed the significance of these advances and the importance they play in AI and cortical theory.
The recording of the HTM Meetup is available at https://www.youtube.com/watch?v=c6U4yBfELpU&t=
Could A Model Of Predictive Voting Explain Many Long-Range Connections? by Su... – Numenta
These are slides from a workshop Subutai Ahmad hosted on March 5, 2018, at the Computational and Systems Neuroscience Meeting (Cosyne) 2018.
About:
This workshop on long-range cortical circuits focused on our peer-reviewed paper, “A Theory of How Columns in the Neocortex Enable Learning the Structure of the World.” Subutai discussed the inference mechanism introduced in the paper, our theory of location information, and how long-range connections allow columns to integrate inputs over space to perform object recognition.
Deep Learning - The Past, Present and Future of Artificial Intelligence – Lukas Masuch
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO, Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – physicist).
What sparked this new hype? How is Deep Learning different from previous approaches? Let’s look behind the curtain and unravel the reality. This talk will introduce the core concept of deep learning, explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why “deep learning is probably one of the most exciting things that is happening in the computer industry“ (Jen-Hsun Huang – CEO NVIDIA).
Does the neocortex use grid cell-like mechanisms to learn the structure of ob... (Numenta)
These are Jeff Hawkins' slides from the Computational Theories of the Brain Workshop held at the Simons Institute at UC Berkeley on April 17, 2018.
Abstract:
In this talk, I propose that the neocortex learns models of objects using the same methods that the entorhinal cortex uses to map environments. I propose that each cortical column contains cells that are equivalent to grid cells. These cells represent the location of sensor patches relative to objects in the world. As we move our sensors, the location of the sensor is paired with sensory input to learn the structure of objects. I explore the evidence for this hypothesis, propose specific cellular mechanisms that the hypothesis requires, and suggest how the hypothesis could be tested.
References:
“A Theory of How Columns in the Neocortex Enable Learning the Structure of the World” by Jeff Hawkins, Subutai Ahmad, YuWei Cui (2017)
“Place Cells, Grid Cells, and the Brain’s Spatial Representation System” by Edvard Moser, Emilio Kropff, May-Britt Moser (2008)
“Evidence for grid cells in a human memory network” by Christian Doeller, Caswell Barry, Neil Burgess (2010)
It is roughly 30 years since AI was not only a topic for science-fiction writers, but also a major research field surrounded by huge hopes and investments. But the over-inflated expectations ended in a crash, followed by a period of absent funding and interest – the so-called AI winter. However, the last 3 years changed everything – again. Deep learning, a machine learning technique inspired by the human brain, successfully crushed one benchmark after another, and tech companies like Google, Facebook and Microsoft started to invest billions in AI research. “The pace of progress in artificial general intelligence is incredibly fast” (Elon Musk – CEO Tesla & SpaceX), leading to an AI that “would be either the best or the worst thing ever to happen to humanity” (Stephen Hawking – Physicist).
What sparked this new hype? How is Deep Learning different from previous approaches? Are the advancing AI technologies really a threat to humanity? Let’s look behind the curtain and unravel the reality. This talk will explore why Sundar Pichai (CEO Google) recently announced that “machine learning is a core transformative way by which Google is rethinking everything they are doing” and explain why "Deep Learning is probably one of the most exciting things that is happening in the computer industry” (Jen-Hsun Huang – CEO NVIDIA).
Either a new AI “winter is coming” (Ned Stark – House Stark) or this new wave of innovation might turn out to be the “last invention humans ever need to make” (Nick Bostrom – AI Philosopher). Or maybe it’s just another great technology helping humans to achieve more.
Brains@Bay Meetup: A Primer on Neuromodulatory Systems - Srikanth Ramaswamy (Numenta)
Meetup page: https://www.meetup.com/Brains-Bay/events/284481247/
Neuromodulators are signalling chemicals in the brain, which control the emergence of adaptive learning and behaviour. Neuromodulators including dopamine, acetylcholine, serotonin and noradrenaline operate on a spectrum of spatio-temporal scales in tandem and opposition to reconfigure functions of biological neural networks and to regulate global cognition and state transition. Although neuromodulators are important in shaping cognition, their phenomenology is yet to be fully realized in deep neural networks (DNNs). In this talk, we will give an overview of the biological organizing principles of neuromodulators in adaptive cognition and highlight the competition and cooperation across neuromodulators.
Brains@Bay Meetup: How to Evolve Your Own Lab Rat - Thomas Miconi (Numenta)
Meetup page: https://www.meetup.com/Brains-Bay/events/284481247/
A hallmark of intelligence is the ability to learn new flexible, cognitive behaviors - that is, behaviors that require discovering, storing and exploiting novel information for each new instance of the task. In meta-learning, agents are trained with external algorithms to learn one specific cognitive task. However, animals are able to pick up such cognitive tasks automatically, as a result of their evolved neural architecture and synaptic plasticity mechanisms, including neuromodulation. Here we evolve neural networks, endowed with plastic connections and reward-based neuromodulation, over a sizable set of simple meta-learning tasks based on a framework from computational neuroscience. The resulting evolved networks can automatically acquire a novel simple cognitive task, never seen during evolution, through the spontaneous operation of their evolved neural organization and plasticity system. We suggest that attending to the multiplicity of loops involved in natural learning may provide useful insight into the emergence of intelligent behavior.
Brains@Bay Meetup: The Increasing Role of Sensorimotor Experience in Artifici... (Numenta)
We receive information about the world through our sensors and influence the world through our effectors. Such low-level data has gradually come to play a greater role in AI during its 70-year history. I see this as occurring in four steps, two of which are mostly past and two of which are in progress or yet to come.

The first step was to view AI as the design of agents which interact with the world and thereby have sensorimotor experience; this viewpoint became prominent in the 1980s and 1990s.

The second step was to view the goal of intelligence in terms of experience, as in the reward signal of optimal control and reinforcement learning. The reward formulation of goals is now widely used but rarely loved. Many would prefer to express goals in non-experiential terms, such as reaching a destination or benefiting humanity, but settle for reward because, as an experiential signal, reward is directly available to the agent without human assistance or interpretation. This is the pattern that we see in all four steps. Initially a non-experiential approach seems more intuitive, is preferred and tried, but ultimately proves a limitation on scaling; the experiential approach is more suited to learning and scaling with computational resources.

The third step in the increasing role of experience in AI concerns the agent’s representation of the world’s state. Classically, the state of the world is represented in objective terms external to the agent, such as “the grass is wet” and “the car is ten meters in front of me”, or with probability distributions over world states such as in POMDPs and other Bayesian approaches. Alternatively, the state of the world can be represented experientially in terms of summaries of past experience (e.g., the last four Atari video frames input to DQN) or predictions of future experience (e.g., successor representations).

The fourth step is potentially the biggest: world knowledge.
Classically, world knowledge has always been expressed in terms far from experience, and this has limited its ability to be learned and maintained. Today we are seeing more calls for knowledge to be predictive and grounded in experience. After reviewing the history and prospects of the four steps, I propose a minimal architecture for an intelligent agent that is entirely grounded in experience.
Brains@Bay Meetup: Open-ended Skill Acquisition in Humans and Machines: An Ev... (Numenta)
In this talk, I will propose a conceptual framework sketching a path toward open-ended skill acquisition through the coupling of environmental, morphological, sensorimotor, cognitive, developmental, social, cultural and evolutionary mechanisms. I will illustrate parts of this framework through computational experiments highlighting the key role of intrinsically motivated exploration in the generation of behavioral regularity and diversity. Firstly, I will show how some forms of language can self-organize out of generic exploration mechanisms without any functional pressure to communicate. Secondly, we will see how language — once invented — can be recruited as a cognitive tool that enables compositional imagination and bootstraps open-ended cultural innovation.
Brains@Bay Meetup: The Effect of Sensorimotor Learning on the Learned Represe... (Numenta)
Most current deep neural networks learn from a static data set without active interaction with the world. We take a look at how learning through a closed loop between action and perception affects the representations learned in a DNN. We demonstrate how these representations are significantly different from those of DNNs that learn supervised or unsupervised from a static dataset without interaction. These representations are much sparser and encode meaningful content in an efficient way. Even an agent that learned without any external supervision, purely through curious interaction with the world, acquires encodings of the high-dimensional visual input that enable it to recognize objects using only a handful of labeled examples. Our results highlight the capabilities that emerge from letting DNNs learn more like biological brains, through sensorimotor interaction with the world.
SBMT 2021: Can Neuroscience Insights Transform AI? - Lawrence Spracklen (Numenta)
Numenta's Director of ML Architecture Lawrence Spracklen presented a talk at the SBMT Annual Congress on July 10th, 2021. He talked about how neuroscience principles can inspire better machine learning algorithms.
FPGA Conference 2021: Breaking the TOPS ceiling with sparse neural networks -... (Numenta)
Nick Ni (Xilinx) and Lawrence Spracklen (Numenta) presented a talk at the FPGA Conference Europe on July 8th, 2021. In this talk, they presented a neuroscience approach to optimize state-of-the-art deep learning networks into sparse topology and how it can unlock significant performance gains on FPGAs without major loss of accuracy. They then walked through the FPGA implementation where they exploited the advantage of sparse networks with a unique Domain Specific Architecture (DSA).
Jeff Hawkins NAISys 2020: How the Brain Uses Reference Frames, Why AI Needs t... (Numenta)
Jeff Hawkins presents a talk on "How the Brain Uses Reference Frames to Model the World, Why AI Needs to do the Same." In this talk, he gives an overview of The Thousand Brains Theory and discusses how machine intelligence can benefit from working on the same principles as the neocortex.
This talk was first presented at the NAISys conference on November 10, 2020. You can find a re-recording of the talk here: https://youtu.be/mGSG7I9VKDU
OpenAI’s GPT-3 Language Model - guest Steve Omohundro (Numenta)
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.
Link to GPT-3 paper: https://arxiv.org/abs/2005.14165
Link to YouTube recording of Steve's talk: https://youtu.be/0ZVOmBp29E0
CVPR 2020 Workshop: Sparsity in the neocortex, and its implications for conti... (Numenta)
Numenta VP Research Subutai Ahmad presents a talk on "Sparsity in the Neocortex and its Implications for Continual Learning" at the virtual CVPR 2020 workshop. In this talk, he discusses how continuous learning systems can benefit from sparsity, active dendrites and other neocortical mechanisms.
The Thousand Brains Theory: A Framework for Understanding the Neocortex and B... (Numenta)
Recent advances in reverse engineering the neocortex reveal that it is a highly-distributed sensory-motor modeling system. Each cortical column learns complete models of observed objects through movement and sensation. The columns use long-range connections to vote on what objects are currently being observed. In this talk, we introduce the key elements of this theory and describe how these elements can be introduced into current machine learning techniques to improve their capabilities, robustness, and power requirements.
Jeff Hawkins Human Brain Project Summit Keynote: "Location, Location, Locatio... (Numenta)
Jeff Hawkins delivered this keynote presentation at the 2018 Human Brain Project Summit Open Day in Maastricht, the Netherlands on October 15, 2018. A screencast recording of the slides is also available at: https://numenta.com/resources/videos/jeff-hawkins-human-brain-project-screencast/
Location, Location, Location - A Framework for Intelligence and Cortical Comp... (Numenta)
Jeff Hawkins gave this presentation as part of the Johns Hopkins APL Colloquium Series on September 21, 2018.
View the video of the talk here: https://numenta.com/resources/videos/jeff-hawkins-johns-hopkins-apl-talk/
Have We Missed Half of What the Neocortex Does? A New Predictive Framework ... (Numenta)
Numenta VP of Research Subutai Ahmad delivered this presentation at the Centre for Theoretical Neuroscience, University of Waterloo on October 2, 2018.
The Biological Path Toward Strong AI by Matt Taylor (05/17/18) (Numenta)
These are Matt Taylor's slides from the AI Singapore Meetup on May 17, 2018.
Abstract:
Today’s wave of AI technology is still being driven by the ANN neuron pioneered decades ago. Hierarchical Temporal Memory (HTM) is a realistic biologically-constrained model of the pyramidal neuron reflecting today’s most recent neocortical research. This talk will describe and visualize core HTM concepts like sparse distributed representations, spatial pooling and temporal memory. Strong AI is a common goal of many computer scientists. So far, machine learning techniques have created amazing results in narrow fields, but haven’t produced something we could all call “intelligent”. Given recent advances in neuroscience research, we know a lot more about how neurons work together now than we did when ANNs were created. We believe systems with a more realistic neuronal model will be more likely to produce Strong AI. Hierarchical Temporal Memory is a theory of intelligence based upon neuroscience research. The neocortex is the seat of intelligence in the brain, and it is structurally homogeneous throughout. This means a common algorithm is processing all your sensory input, no matter which sense. We believe we have discovered some of the foundational algorithms of the neocortex, and we’ve implemented them in software. I’ll show you how they work with detailed dynamic visualizations of Sparse Distributed Representations, Spatial Pooling, and Temporal Memory.
The Predictive Neuron: How Active Dendrites Enable Spatiotemporal Computation... (Numenta)
This was a presentation given on February 8, 2018 at the European Institute for Theoretical Neuroscience (EITN)'s Dendritic Integration and Computation with Active Dendrites Workshop.
The workshop is aimed at putting together experiments, models and recent neuromorphic systems aiming at understanding the computational properties conferred by dendrites in neural systems. It is focused particularly on the excitable properties of dendrites and the type of computation they can implement.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. Afterwards we held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Principles of Hierarchical Temporal Memory - Foundations of Machine Intelligence
1. Numenta Workshop
October 17, 2014
Jeff Hawkins
jhawkins@Numenta.com
Principles of Hierarchical Temporal Memory
Foundations of Machine Intelligence
2. Numenta’s Mission
1) Discover operating principles of neocortex.
2) Create technology for machine intelligence based on neocortical principles.
3. Why Will Machine Intelligence be Based on Cortical Principles?
1) Cortex uses a common learning algorithm (vision, hearing, touch, behavior)
2) Cortical algorithm is incredibly adaptable (languages, engineering, science, arts …)
3) Network effects: hardware and software efforts will focus on the most universal solution
4. Talk Topics
- Cortical facts (easy)
- Cortical theory (HTM) (deep)
- Research roadmap
- Applications roadmap
- Thoughts on Machine Intelligence (easy)
5. What the Cortex Does
The neocortex learns a model from fast-changing sensory data: light via the retina, sound via the cochlea, and touch via somatic sensors all arrive as streams of patterns.
The model generates:
- predictions
- anomalies
- actions
Most sensory changes are due to your own movement.
The neocortex learns a sensory-motor model of the world.
7. Cortical Theory
Cortical facts:
- Hierarchy
- Cellular layers (2/3, 4, 5, 6)
- Mini-columns
- Neurons with 1000’s of synapses: 10% proximal, 90% distal
- Active distal dendrites
- Synaptogenesis
- Remarkably uniform, anatomically and functionally: a sheet of cells
HTM (Hierarchical Temporal Memory):
1) Hierarchy of identical regions
2) Each region learns sequences
3) Stability increases going up the hierarchy if input is predictable
4) Sequences unfold going down
Questions:
- What does a region do?
- What do the cellular layers do?
- How do neurons implement this?
- How does this work in a hierarchy?
8. Cellular Layers
Each layer implements a variation of a common sequence memory algorithm:
- Layer 2/3: sequence memory for inference
- Layer 4: sequence memory for inference
- Layer 5: sequence memory for motor
- Layer 6: sequence memory for attention
Feedforward and feedback pathways connect the layers.
9. Two Types of Inference (L4, L2/3)
Layer 4 receives sensor/afferent data together with a copy of motor commands and learns sensory-motor sequences. Layer 2/3 learns high-order sequences (e.g. A-B-C-D vs. X-B-C-Y: after A-B-C it predicts D, after X-B-C it predicts Y) and projects to the next higher region.
Predicted input produces stable representations; un-predicted changes pass through.
These are universal inference steps. They apply to all sensory modalities, and they produce receptive field properties seen in cortex.
11. Sparse Distributed Representations (SDRs): The Language of Intelligence
Dense Representations
• Few bits (8 to 128)
• All combinations of 1’s and 0’s
• Example: 8-bit ASCII: 01101101 = m
• Bits have no inherent meaning; arbitrarily assigned by programmer
Sparse Distributed Representations (SDRs)
• Many bits (thousands)
• Few 1’s, mostly 0’s
• Example: 2,000 bits, 2% active
  01000000000000000001000000000000000000000000000000000010000…………01000
• Each bit has semantic meaning
• Learned
12. SDR Properties
1) Similarity: shared bits = semantic similarity.
2) Store and Compare: store only the indices of the active bits (e.g. 40 indices for a 2%-sparse SDR); subsampling is OK (e.g. storing just 10 of the 40 indices still identifies the pattern).
3) Union membership: OR together many SDRs (e.g. ten 2%-sparse SDRs give a roughly 20%-dense union) and test whether a given SDR is a member. E.g. a cell can recognize many unique patterns on a single dendritic branch, using ten synapses from Pattern 1 through ten synapses from Pattern N.
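The three properties above can be sketched directly with sets of active-bit indices. This is a minimal illustration using the slide's sizes (2,000 bits, 2% active); all names below are invented for the example:

```python
# Illustrative sketch of the three SDR properties, using the slide's sizes
# (2,000 bits, 2% active). SDRs are stored as sets of active-bit indices.
import random

N_BITS, N_ACTIVE = 2000, 40      # 2% of bits active

def random_sdr():
    return frozenset(random.sample(range(N_BITS), N_ACTIVE))

# 1) Similarity: shared active bits indicate semantic similarity.
a, b = random_sdr(), random_sdr()
overlap = len(a & b)             # two unrelated SDRs share almost no bits

# 2) Store and Compare with subsampling: storing only 10 of the 40 active
#    indices is enough to re-identify the pattern with near certainty.
subsample = frozenset(random.sample(sorted(a), 10))
recognized = subsample <= a      # always True
false_match = subsample <= b     # astronomically unlikely to be True

# 3) Union membership: OR together ten SDRs (~20% of bits set) and test
#    whether a pattern is one of them, like a dendritic branch recognizing
#    many patterns with ten synapses each.
patterns = [random_sdr() for _ in range(10)]
union = frozenset().union(*patterns)
stored_ok = all(p <= union for p in patterns)   # every stored pattern matches
is_member = random_sdr() <= union               # a novel SDR almost never does
```

The robustness of subsampling and unions comes from the same combinatorics: with 2,000 bits at 2% sparsity, accidental matches are astronomically improbable.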
13. Neurons Recognize Hundreds of Patterns
A cell activates from dozens of feedforward patterns and predicts its activity in hundreds of contexts.
20. Learning Transitions
- This is a first-order sequence memory.
- It cannot learn A-B-C-D vs. X-B-C-Y.
- Mini-columns turn this into a high-order sequence memory.
Multiple predictions can occur at once: A-B, A-C, A-D.
21. Forming High-Order Representations
Feedforward input causes sparse activation of columns.
- No prediction: all cells in the column become active.
- With prediction: only the predicted cells in the column become active.
22. Representing High-order Sequences: A-B-C-D vs. X-B-C-Y
Before training, both sequences activate the same columns: B and C look identical regardless of whether the sequence began with A or X.
After training, the same columns are active, but only one cell is active per column (B’ vs. B’’, C’ vs. C’’), so the context is encoded.
IF 40 active columns and 10 cells per column, THEN 10^40 ways to represent the same input in different contexts.
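The column/cell mechanism above can be imitated with a toy model. This is a greatly simplified sketch of the idea (one column per input symbol, a deterministic "winner cell" rule), not Numenta's actual temporal memory algorithm:

```python
# Toy high-order sequence memory (greatly simplified from real HTM temporal
# memory; the names and mechanics here are illustrative assumptions).
CELLS_PER_COLUMN = 10

class ToyTM:
    def __init__(self):
        self.links = {}       # context (frozenset of cells) -> cells it predicts
        self.next_cell = {}   # per-column counter for assigning new cells
        self.prev = frozenset()

    def step(self, symbol):
        """Feed one input symbol (one active column); return the active cells."""
        predicted = {c for c in self.links.get(self.prev, set()) if c[0] == symbol}
        if predicted:
            active = frozenset(predicted)   # context recognized: one cell fires
        else:
            # Unpredicted input: in real HTM the whole column bursts; here we
            # just pick a fresh "winner" cell to represent this new context.
            i = self.next_cell.get(symbol, 0)
            self.next_cell[symbol] = (i + 1) % CELLS_PER_COLUMN
            active = frozenset({(symbol, i)})
            self.links.setdefault(self.prev, set()).update(active)
        self.prev = active
        return active

def run(tm, seq):
    tm.prev = frozenset()                   # reset at sequence start
    return [tm.step(s) for s in seq]

tm = ToyTM()
for _ in range(2):                          # train on both sequences
    run(tm, "ABCD")
    run(tm, "XBCY")

c_after_abc = run(tm, "ABC")[-1]            # cell for C in the A context
abc_predicts = {col for col, cell in tm.links.get(tm.prev, set())}   # {'D'}
c_after_xbc = run(tm, "XBC")[-1]            # cell for C in the X context
xbc_predicts = {col for col, cell in tm.links.get(tm.prev, set())}   # {'Y'}
# Same column ("C"), different cells: same input, different context,
# so the model correctly predicts D after A-B-C but Y after X-B-C.
```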
23. HTM Temporal Memory (aka Cellular Layer)
- Converts input to sparse activation of columns
- Recognizes and recalls high-order sequences
Properties:
- Continuous learning
- High capacity
- Local learning rules
- Fault tolerant
- No sensitive parameters
- Semantic generalization
HTM Temporal Memory is a building block of neocortex/machine intelligence, used by the inference, motor, and attention layers.
24. Research Roadmap
- High-order Inference (L2/3): theory 98%, extensively tested, commercial.
  Data: streaming. Capabilities: prediction, anomaly detection, classification. Applications: predictive maintenance, security, natural language processing.
- Sensory-motor Inference (L4): theory 80%, in development.
- Motor Sequences (L5): theory 50%.
- Attention/Feedback (L6): theory 10%.
25. Applications Using HTM High-order Inference
- Server anomalies: GROK, available on AWS
- Unusual human behavior
- Geospatial anomalies
- Natural language search/prediction: Cortical.IO
- Stock volume anomalies
Common pipeline: Data → Encoder → SDR → HTM High-Order Sequence Memory → Predictions / Anomalies.
All use the same HTM code base.
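The pipeline on this slide (data → encoder → SDR → sequence memory → predictions/anomalies) can be sketched end to end. Every component below is a simplified stand-in for the real NuPIC pieces; the encoder widths and the scoring rule are assumptions for illustration:

```python
# Toy end-to-end version of the pipeline: data -> encoder -> SDR ->
# sequence memory -> anomaly score. All components are simplified stand-ins.
def encode(value, n_bits=200, w=20, lo=0.0, hi=100.0):
    """Scalar encoder: a contiguous block of w active bits, positioned by value."""
    span = n_bits - w
    start = int(round((min(max(value, lo), hi) - lo) / (hi - lo) * span))
    return frozenset(range(start, start + w))

class FirstOrderMemory:
    """Remembers which SDR followed which, and scores surprise."""
    def __init__(self):
        self.next_of = {}
        self.prev = None

    def anomaly(self, sdr):
        """1.0 = fully unexpected input, 0.0 = fully predicted."""
        predicted = self.next_of.get(self.prev, frozenset())
        score = 1.0 - len(sdr & predicted) / len(sdr)
        self.next_of[self.prev] = sdr    # learn the observed transition
        self.prev = sdr
        return score

mem = FirstOrderMemory()
stream = [10, 20, 30, 10, 20, 30, 10, 20, 30, 90]   # 90 breaks the cycle
scores = [mem.anomaly(encode(v)) for v in stream]
# Once the 10-20-30 cycle has been seen, its values score 0.0 (predicted);
# the out-of-pattern value 90 scores 1.0 (anomalous).
```

Real HTM applications replace the first-order memory with the high-order temporal memory from slide 22, which is what lets them flag anomalies in context-dependent sequences as well.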
28. Research Roadmap
- High-order Inference (L2/3): theory 98%, extensively tested, commercial.
  Data: streaming. Capabilities: prediction, anomaly detection, classification. Applications: IT, security, natural language processing.
- Sensory-motor Inference (L4): theory 80%, in development.
  Data: static (with simple behavior). Capabilities: classification, prediction. Applications: vision image classification (with saccades and hierarchy), network classification.
- Motor Sequences (L5): theory 50%.
  Data: static and/or streaming. Capabilities: goal-oriented behavior. Applications: robotics, smart bots, proactive defense.
- Attention/Feedback (L6): theory 10%.
  Enables: multi-sensory modalities, multi-behavioral modalities.
29. Research Roadmap: Open and Transparent
- Algorithms are documented
- Multiple independent implementations
- Numenta’s software is open source (GPLv3): NuPIC, www.Numenta.org
- Active discussion groups for theory and implementation
- Numenta’s research code is posted daily
Collaborations: IBM Almaden Research (San Jose, CA); DARPA (Washington, D.C.); Cortical.IO (Austria)
Jhawkins@numenta.com / @Numenta
35. Machine Intelligence Landscape

               Cortical (e.g. HTM)       ANNs (e.g. Deep learning)  A.I. (e.g. Watson)
Premise        Biological                Mathematical               Engineered
Data           Spatial-temporal,         Spatial-temporal           Language,
               Behavior                                             Documents
Capabilities   Prediction,               Classification             NL Query
               Classification,
               Goal-oriented Behavior
Valuable?      Yes                       Yes                        Yes
Path to M.I.?  Yes                       Probably not               No
36. The Birth of Programmable Computing
1940’s: many approaches
- Analog vs. digital
- Decimal vs. binary
- Wired vs. memory-based programming
- Serial vs. random access memory
1950’s: one dominant paradigm
- Digital
- Binary
- Memory-based programming
- Two-tier memory
Why did one paradigm win? Network effects.
Why did this paradigm win? Most flexible, most scalable.
37. The Birth of Machine Intelligence
2010’s: many approaches
- Specific vs. universal algorithms
- Mathematical vs. memory-based
- Spatial vs. time-based patterns
- Batch vs. on-line learning
2020’s: one dominant paradigm
- Universal algorithms
- Memory-based
- Time-based patterns
- On-line learning
Why will one paradigm win? Network effects.
Why will this paradigm win? Most flexible, most scalable.
How do we know this is going to happen? The brain is the proof case, and we have made great progress.
38. What Can Be Done With Software
- 1 layer
- 30 msec per learning-inference-prediction step
- 2,048 columns, 65,000 neurons, 300M synapses
- Roughly 10⁻⁶ the scale of human cortex
39. Challenges and Opportunities for Neuromorphic HW
Challenges:
- Dendritic regions, active dendrites
- 1,000s of synapses, 10,000s of potential synapses
- Continuous learning
Opportunities:
- Low-precision memory (synapses)
- Fault tolerant: memory, connectivity, neurons, natural recovery
- Simple activation states (no spikes)
- Connectivity: very sparse, topological
40. Requirements for Online Learning
• Train on every new input
• If a pattern does not repeat, forget it
• If a pattern repeats, reinforce it
Learning is the formation of connections:
• Connection strength/weight is binary
• Connection permanence is a scalar
• Training changes permanence
• If permanence > threshold, then connected
[Diagram: connection permanence on a scale from 0 to 1; below the threshold a synapse is unconnected, above it connected; an example permanence of 0.2 is shown]
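The binary-weight, scalar-permanence rule above can be sketched in a few lines. The increment, decrement, and threshold values below are illustrative choices, not Numenta's actual defaults.

```python
# Sketch of the online-learning rule: connection strength is binary,
# but a scalar "permanence" is trained; crossing a threshold connects
# the synapse. Constants are illustrative, not NuPIC's defaults.

CONNECTED_THRESHOLD = 0.2   # permanence above this => connected
PERM_INC = 0.05             # reinforcement when a pattern repeats
PERM_DEC = 0.05             # decay when a pattern does not repeat

def is_connected(permanence):
    """Connection strength is binary: connected or not."""
    return permanence >= CONNECTED_THRESHOLD

def train_step(permanences, active_inputs):
    """Train on every new input: reinforce active synapses, decay the rest.

    permanences: dict mapping input index -> scalar permanence in [0, 1]
    active_inputs: set of input indices active on this step
    """
    for idx in permanences:
        if idx in active_inputs:
            # Pattern repeats: reinforce (clamped to 1.0).
            permanences[idx] = min(1.0, permanences[idx] + PERM_INC)
        else:
            # Pattern absent: forget (clamped to 0.0).
            permanences[idx] = max(0.0, permanences[idx] - PERM_DEC)
    return permanences
```

With repeated presentations an input's permanence climbs past the threshold and the connection forms; an input that stops repeating decays back to zero and is forgotten.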
41. Sequence Memory: Cortical Layers
[Diagram: two cortical regions, each with layers 1, 2/3, 4, 5, 6; one receives sensory input, the other drives motor output; sensory and kinesthetic signals arrive via the thalamus]
We believe all layers implement variations of the same learning algorithm: learning transitions in afferent data. Stable representations are formed for predicted transitions; unpredicted transitions are passed to the next layer.
- Layer 4: learns sensory/motor transitions
- Layer 3: learns high-order sequence transitions
- Layers 5 and 6: learn sequences for motor and attention
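The per-layer rule stated above (learn transitions, emit a stable code for predicted ones, pass unpredicted ones up) can be sketched as a toy class. The data structures here are illustrative stand-ins, not the actual HTM mechanism.

```python
# Minimal sketch of the per-layer rule: learn transitions in the input
# stream, emit a stable representation for predicted transitions, and
# pass unpredicted transitions to the next layer.

class TransitionLayer:
    def __init__(self):
        self.transitions = set()   # (prev, current) pairs seen before
        self.prev = None

    def step(self, current):
        """Process one input; returns None on the very first input,
        ('stable', code) if the transition was predicted, or
        ('pass_up', transition) if it was not."""
        result = None
        if self.prev is not None:
            t = (self.prev, current)
            if t in self.transitions:
                # Predicted: a stable code stands in for the learned transition.
                result = ('stable', hash(t))
            else:
                # Unpredicted: learn it, and hand it to the next layer.
                self.transitions.add(t)
                result = ('pass_up', t)
        self.prev = current
        return result
```

On a first pass over a sequence every transition is novel and passed up; on a repeat pass the transitions are predicted and the layer's output stabilizes.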
42. Natural Language
- A document corpus (e.g. Wikipedia) is encoded into 100K “Word SDRs”, each 128 x 128 bits
- SDR arithmetic example: “Apple” − “Fruit” = Computer, Macintosh, Microsoft, Mac, Linux, Operating system, …
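The word-SDR arithmetic above can be sketched with tiny hand-made bit sets. These few-bit SDRs are illustrative stand-ins for real 128 x 128 semantic-folding SDRs, where overlap of active bits measures semantic similarity.

```python
# Sketch of word-SDR arithmetic: SDRs are sparse sets of active bit
# positions; subtracting "Fruit" from "Apple" strips the fruit-like bits
# and leaves the computer-like bits. Bit sets below are illustrative.

def overlap(a, b):
    """Semantic similarity = number of shared active bits."""
    return len(a & b)

def subtract(a, b):
    """Remove b's active bits from a, keeping the residual meaning."""
    return a - b

# Toy SDRs (a real Word SDR has 128*128 = 16,384 bit positions):
apple     = {1, 2, 3, 10, 11, 12}   # fruit bits + computer bits
fruit     = {1, 2, 3, 4, 5}         # fruit bits only
macintosh = {10, 11, 12, 20}        # computer bits

residual = subtract(apple, fruit)   # the computer-like remainder of "apple"
```

After the subtraction, `residual` overlaps `macintosh` strongly and `fruit` not at all, mirroring the slide's “Apple” − “Fruit” example.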
43. Sequences of Word SDRs
Training set (Word 1, Word 2, Word 3), fed to the HTM:
frog eats flies
cow eats grain
elephant eats leaves
goat eats grass
wolf eats rabbit
cat likes ball
elephant likes water
sheep eats grass
cat eats salmon
wolf eats mice
lion eats cow
dog likes sleep
elephant likes water
cat likes ball
coyote eats rodent
coyote eats rabbit
wolf eats squirrel
dog likes sleep
cat likes ball
44. Sequences of Word SDRs
The HTM, trained on the same word-SDR sequences (slide 43), is queried with a word that never appears in the training set: “fox” eats → ?
45. Sequences of Word SDRs
Queried with “fox” eats, the HTM (trained on the same word-SDR sequences as slide 43) answers: rodent. Because “fox” never appears in the training set, the prediction comes from the semantic overlap of its Word SDR with words like “wolf” and “coyote”. This demonstrates:
1) Unsupervised learning
2) Semantic generalization
3) Many applications
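The generalization shown on this slide can be sketched with toy SDRs: “fox” never appears in training, so the prediction falls back on the transition learned for the most similar known word. The tiny bit sets and nearest-neighbour lookup below are illustrative stand-ins for the actual HTM mechanism.

```python
# Sketch of semantic generalization over Word SDRs: an unseen word
# ("fox") borrows the learned transition of its most similar known word.
# SDRs and the lookup are illustrative, not the real HTM algorithm.

def overlap(a, b):
    """Semantic similarity = number of shared active bits."""
    return len(a & b)

# Toy Word SDRs: canine words share bits, so "fox" resembles "coyote".
sdrs = {
    'wolf':   {1, 2, 3, 4},
    'coyote': {1, 2, 3, 5},
    'fox':    {1, 2, 3, 5, 6},   # unseen in training, but canine-like
    'cow':    {7, 8, 9, 10},
}

# (subject, verb) -> object transitions learned from the training set.
transitions = {
    ('wolf', 'eats'): 'rabbit',
    ('coyote', 'eats'): 'rodent',
    ('cow', 'eats'): 'grain',
}

def predict(word, verb):
    """Generalize: reuse the transition of the most SDR-similar known word."""
    best = max((w for w, _ in transitions),
               key=lambda w: overlap(sdrs[w], sdrs[word]))
    return transitions[(best, verb)]
```

Here `predict('fox', 'eats')` returns `'rodent'` because the “fox” SDR overlaps “coyote” most strongly, mirroring the slide's answer.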