These slides use concepts from my (Jeff Funk) course, Analyzing Hi-Tech Opportunities, to show how AHaH (Anti-Hebbian and Hebbian) computing is becoming economically feasible. Traditional computing with the von Neumann architecture requires constant interaction between the processor and the memory (usually DRAM), and memory access times are improving at a much slower rate than microprocessor speeds. This widening performance gap has become a bottleneck for von Neumann based computers. AHaH computing, like synaptic computing (http://www.slideshare.net/Funk98/neurosynaptic-chips), addresses this bottleneck with a different architecture that mimics the processing of the brain. AHaH computing differs further from von Neumann (and synaptic) architectures in that it reduces the number of interactions between memory and processor by combining aspects of memory and processing on the same chip. It does this with memristors, naturally adaptive devices that are experiencing rapid improvements in cost and storage density. With memristors, widely used pathways become stronger and less widely used pathways become weaker, which facilitates machine learning. Machine learning can also be done in software, but memristors allow AHaH computing to perform machine learning at the hardware level. Much of the optimism for AHaH computing comes from these rapid improvements in memristors, which are steadily improving its economics.
1. Anti-Hebbian and Hebbian (AHaH) Computing
MT5009: Analyzing Hi-Tech Opportunities
1. Chow Ka Yau Daniel A0145207M
2. Muhammad Dzahir Bin Mohamed Zain Affandi A0129428Y
3. Gregory Chee Ken Khyun A0132405W
4. Jayapathma Herath Madhushanka Meranjan A0132398Y
5. Lim Yee Hao Marcus A0132390N
http://www.riken.jp/en/research/rikenresearch/highlights/7918/
2. • Current worldwide buzz: Big Data analytics
• The future: not just analytics but also solutions, i.e., predictive and prescriptive analytics
• We need:
- Machine learning
- Intelligent computing
- Large-scale simulations
• Problem: the current von Neumann computing bottleneck
We believe AHaH computing can break the von Neumann bottleneck and open up a new era of big data analytics.
Problem Statement
3. CONTENTS
1. The von Neumann Architecture and Limitations
2. Solutions to break the von Neumann bottleneck
3. AHaH Phenomenon
4. AHaH Computing Architecture
5. Comparison of AHaH, VN and Neuromorphic Computing
6. AHaH Advantages
7. Memristor – Memory Trends
8. Big Data bottleneck, Model, Market and Challenges
9. Big Data Analytics Applications
10. Conclusion
5. CONTENTS
1. The von Neumann Architecture and Limitations
2. Solutions to break the von Neumann bottleneck
3. AHaH Phenomenon
4. AHaH Computing Architecture
5. Comparison of AHaH, VN and Neuromorphic Computing
6. AHaH Advantages
7. Memristor – Memory Trends
8. Big Data bottleneck, Model, Market and Challenges
9. Big Data Analytics Applications
10. Conclusion
6. Solutions to break VN bottleneck
1. Algorithmic Approach: In-Memory Computing
- Keeps data in a server's RAM instead of on hard disks or flash devices
- Massive parallelization for faster processing speeds
- An inexpensive way to speed up enterprise software applications, including but not limited to analytics
Challenges:
- Requires lots of RAM
- An unsustainable brute-force method, since data volumes continue to explode in a big data world
- High power consumption
- Processor and memory are still separated
http://www.gridgain.com/wp-content/uploads/2014/09/insideBIGDATA-Guide-to-In-Memory-Computing.pdf
http://www.toddmace.io/
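To ground the idea, here is a minimal sketch of the in-memory pattern: keep the working set resident in RAM and parallelize the scan over it. This is our illustration only; the dataset, the 8-way split, and the aggregation are invented.

```python
from concurrent.futures import ProcessPoolExecutor

# The working set lives in RAM, not on disk or flash.
dataset = [float(i % 97) for i in range(1_000_000)]

def analyze(chunk):
    return sum(chunk)  # stand-in for a real analytic query

if __name__ == "__main__":
    # Massive parallelization: split the resident data and scan it concurrently.
    chunks = [dataset[i::8] for i in range(8)]
    with ProcessPoolExecutor(max_workers=8) as pool:
        total = sum(pool.map(analyze, chunks))
    print(total)
```

The challenge bullets above still apply: every worker here is burning RAM and power, and the processor and memory remain physically separate.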
8. Solutions to break VN bottleneck
2. Neuromorphic Approach
IBM's TrueNorth system incorporates the largest neuromorphic chip in the world, but the chip is not capable of learning on its own.
Limitations:
- No active machine learning on the chip
- No unsupervised learning
- Requires a supercomputer
http://www.slideshare.net/Funk98/neurosynaptic-chips
9. The Future of Computing
Aim: to reach the level where a computer can do what the brain does, with processing and memory performed by the same component.
https://upload.wikimedia.org/wikipedia/commons/d/df/PPTExponentialGrowthof_Computing.jpg
http://www.slideshare.net/Funk98/neurosynaptic-chips
10. Solutions to break VN bottleneck
3. Self-Organizational Approach: Anti-Hebbian and Hebbian (AHaH) Computing
AHaH computing is more than just the integration of memory and processing!
12. AHaH Phenomenon
Examples of natural adaptive systems:
• Creation of a conductive path through a common medium
• It happens naturally (self-organization): no external control is needed to produce the path of conduction
• Occurs in rivers, air, the blood system, etc. (see the bifurcation video at http://knowm.org/blog/)
Energy-dissipating pathways compete for resources, and that competition is itself a learning process.
13. AHaH Phenomenon
Biology: the brain synapse
- Strong spikes strengthen the connection and open up new pathways: Hebbian learning
- Weak spikes (misfiring) weaken the connection and prune pathways: Anti-Hebbian learning
Model:
• Firing strengthens a pathway by increasing its synaptic weight; this adaptation is called Hebbian learning
• Misfiring weakens a pathway by decreasing its synaptic weight; this adaptation is called Anti-Hebbian learning
Which component can mimic this synaptic adaptation? MEMRISTORS
http://www.smashinglists.com/top-10-amazing-facts-about-the-human-brain/
http://www.intechopen.com/books/reinforcement_learning/interaction_between_the_spatio-temporal_learning_rule__non_hebbian__and_hebbian_in_single_cells__a_c
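To make the adaptation rule concrete, here is a minimal sketch in Python. It is our illustration, not Knowm's implementation: the two-phase read/feedback structure, the learning rates, and the sign conventions are all assumptions.

```python
import numpy as np

def ahah_cycle(w, x, feedback=None, hebbian=0.02, anti_hebbian=0.01):
    """One illustrative read/feedback cycle of an AHaH node.

    w        : synaptic weight vector (one weight per input line)
    x        : binary spike pattern marking the active inputs
    feedback : +1/-1 supervised label, or None for unsupervised mode,
               in which the node reinforces its own decision
    """
    y = float(np.dot(w, x))                # read phase: evaluate the node
    s = np.sign(y) if feedback is None else feedback
    w = w - anti_hebbian * x * np.sign(y)  # reading disturbs the state (Anti-Hebbian)
    w = w + hebbian * x * s                # feedback reinforces a decision (Hebbian)
    return w, y

# A recurring spike pattern strengthens its own pathway over repeated cycles:
rng = np.random.default_rng(0)
w = 0.01 * rng.standard_normal(16)
pattern = (rng.random(16) < 0.25).astype(float)
for _ in range(100):
    w, y = ahah_cycle(w, pattern)          # |y| grows as the pathway strengthens
```

Because the Hebbian term outweighs the Anti-Hebbian read disturbance, the pathway serving a repeated pattern wins; a rarely used pathway is slowly eroded by reads alone.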
14. AHaH Phenomenon
Memristors mimic synaptic adaptation. They are analogous to an adaptive water pipeline:
1) High pressure difference -> more water flow -> the diameter gets bigger
2) Cut the water supply -> the diameter stays the same; the pipe remembers how much water has flowed
3) Lower pressure difference -> less water flow -> the diameter gets smaller
4) Water can also flow in the opposite direction, which shrinks the diameter and erases the path; the flow is bi-directional
Replace the water flow with current flow and the pressure with voltage:
- Synaptic adaptation: a memristor adapts its resistance as current flows through it
- Non-volatile memory: a memristor remembers how much current has flowed through it
http://cacm.acm.org/news/33675-memristor-minds-the-future-of-artificial-intelligence/fulltext
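A toy model of this behavior, assuming a simple linear-drift law (our assumption; real devices are considerably more complex):

```python
class Memristor:
    """Toy memristor: conductance drifts with the charge that has passed through."""

    def __init__(self, g=1e-3, g_min=1e-4, g_max=1e-2, k=10.0):
        self.g, self.g_min, self.g_max, self.k = g, g_min, g_max, k

    def step(self, v, dt=1e-3):
        i = self.g * v                                      # current at applied voltage v
        self.g += self.k * i * dt                           # passed charge moves the state
        self.g = min(max(self.g, self.g_min), self.g_max)   # clamp to physical bounds
        return i

m = Memristor()
for _ in range(100):
    m.step(+1.0)   # forward "pressure": conductance (pipe diameter) grows
m.step(0.0)        # no voltage applied: the state is retained (non-volatile memory)
for _ in range(100):
    m.step(-1.0)   # reversed flow: conductance shrinks, erasing the path
```

The four pipeline behaviors map directly: drive, remember, decay, and bi-directional erase.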
16. AHaH Computing Architecture
Overall architecture, from device to system:
• Connected memristors form a KT-Synapse
• KT-Synapses connected in parallel form an AHaH node
• A KT-Core connects AHaH nodes in parallel, with a column decoder and a row decoder to select AHaH nodes and a controller to direct the instruction flow into them
• Many KT-Cores are mapped into a RAM architecture
(Figure: basic RAM architecture beside an AHaH node with its inputs and output)
http://www.hpcwire.com/2015/09/09/knowm-snaps-in-final-piece-of-memristor-puzzle/
http://i.cmpnet.com/pldesignline/2005/07/zeidmanfigure1.gif
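A structural sketch of this hierarchy, reusing the toy Memristor class above. The differential-pair reading of a KT-Synapse, where the signed weight is the difference of two memristor conductances, follows Knowm's published description, but the code itself is our illustration:

```python
class KTSynapse:
    """Differential pair of memristors; the signed weight is g_a - g_b."""
    def __init__(self):
        self.a, self.b = Memristor(), Memristor()

    @property
    def weight(self):
        return self.a.g - self.b.g

class AHaHNode:
    """KT-Synapses in parallel; the output sums the weights on active input lines."""
    def __init__(self, n_inputs):
        self.synapses = [KTSynapse() for _ in range(n_inputs)]

    def read(self, spikes):
        # spikes: indices of the active input lines (a spike pattern)
        return sum(self.synapses[i].weight for i in spikes)
```

A KT-Core would then be an addressable array of such nodes behind row/column decoders, exactly as a RAM array addresses its cells.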
17. AHaH Unites Memory and Processing
Example: computing A . B = C.
Von-Neumann: A and B are read out of memory through the row and column decoders, transferred to the CPU, combined in the ALU, and C is written back, with data moving back and forth between CPU and memory at every step.
AHaH (KT-RAM):
1) The inputs are connected to multiple KT-Cores
2) An activation instruction activates the KT-Cores; during activation, the AHaH controller connects to multiple AHaH nodes. The act of accessing the memory becomes the act of configuring the memory: the weight change occurs inside the memristors
3) An adaptation instruction implements the logic (here, AND)
4) Data out: the output C
• Logic is created within the memory using a sequential instruction flow
• Adaptation happens for free, because memristors adapt as we use them
• No back-and-forth data transfer between memory and CPU
(Figure: the Von-Neumann CPU/memory round trip contrasted with the KT-RAM instruction flow)
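As a rough illustration of "accessing is configuring" (step 2), the sketch below makes every read of an AHaHNode also adapt its memristor pairs. The function name, the drive voltages, and the feedback semantics are hypothetical; they are not Knowm's instruction set.

```python
def ktram_access(node, spikes, feedback=None):
    """Hypothetical KT-RAM access: reading the node also reconfigures it."""
    y = node.read(spikes)
    # Unsupervised mode reinforces the node's own decision (Hebbian);
    # a supervised +1/-1 feedback value overrides it.
    s = feedback if feedback is not None else (1.0 if y >= 0 else -1.0)
    for i in spikes:
        syn = node.synapses[i]
        syn.a.step(+0.5 * s)   # drive one memristor of the pair up ...
        syn.b.step(-0.5 * s)   # ... and its partner down: the weight moves toward s
    return y
```

Note that there is no separate "write" path: the weight update is a side effect of the access itself, which is the property the slide is pointing at.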
19. Comparison of AHaH, Von-Neumann and Neuromorphic Computing

                       | Conventional Computing | Neuromorphic Computing                  | AHaH Computing
Architecture           | Von Neumann            | Neural network                          | AHaH architecture
Computing unit         | CPU                    | Synaptic chip                           | Synaptic chip
Storing unit           | Memory                 | Synaptic chip                           | Synaptic chip
Storing element        | DRAM                   | DRAM/SRAM                               | Memristors
Suitability            | Logical and analytical | Machine learning (pattern recognition)  | Logical/analytical and machine learning (pattern recognition)
Processing             | Serial (multi-core)    | Parallel                                | Parallel
Backward compatibility | Von-Neumann only       | Not directly usable in a Von-Neumann architecture (requires a supercomputer) | Usable in both Von-Neumann and AHaH architectures
Power consumption      | High                   | Low                                     | Ultra low
Speed                  | Slow                   | Fast                                    | Fast
21. AHaH Advantages: Lower Power
• A barrier potential inherently exists between the CPU and RAM, especially when they are physically separated
• Sufficient energy must be applied to overcome this barrier potential
• The power spent moving data between the CPU and RAM depends heavily on the distance between them
• Each read operation lowers the switch barriers, so the act of accessing the memory becomes the act of configuring the memory over time
• In the brain, the distance d = 0, which means large power savings
https://www.youtube.com/watch?v=CFSrC7kjbJo
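To see the scale of this distance cost, here is a back-of-the-envelope comparison using commonly cited ballpark energies. The numbers are our illustrative assumptions, not figures from the slides:

```python
# Rough per-operation energies (order-of-magnitude assumptions):
dram_read_pj = 640.0   # fetch one 32-bit word from off-chip DRAM
fp_op_pj = 4.0         # one 32-bit floating-point operation on-chip

print(f"data movement costs ~{dram_read_pj / fp_op_pj:.0f}x the compute")  # ~160x
```

Shrinking the CPU-memory distance toward d = 0, as AHaH does by computing inside the memory, attacks exactly this dominant term.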
22. AHaH Advantages: Power and Time Reduction
Fewer processing requests and replies between the CPU and memory, because the machine learning happens internally: where Von-Neumann needs many back-and-forth operations, AHaH returns a single-package reply.
https://www.youtube.com/watch?v=CFSrC7kjbJo
25. AHaH Advantages: Machine Learning (Example)
Sequence 1: Supervised learning
Sequence 2: Assign labels to the KT-Cores ("rich" and "poor")
Sequence 3: Unsupervised learning
Example model with spike inputs: a rich person has a high confidence level of having a high education, and a high confidence level of being old.
https://www.youtube.com/watch?v=CFSrC7kjbJo
http://knowm.org/thermodynamic-ram-technology-stack-published/
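A toy run of this three-sequence flow, reusing the ahah_cycle sketch from the AHaH Phenomenon section. The feature names and all numbers are invented for illustration:

```python
import numpy as np

features = ["high_education", "old", "owns_property", "low_income"]
rich = 0.01 * np.random.default_rng(1).standard_normal(len(features))

# Sequences 1-2: supervised passes teach the node labeled "rich"
rich_pattern = np.array([1.0, 1.0, 1.0, 0.0])
for _ in range(50):
    rich, _ = ahah_cycle(rich, rich_pattern, feedback=+1)

# Sequence 3: unsupervised; unlabeled data keeps adapting the weights
probe = np.array([1.0, 0.0, 0.0, 0.0])        # "high_education" alone
rich, confidence = ahah_cycle(rich, probe)
print(confidence > 0)   # True: education spikes support the "rich" label
```

The positive read-out is the "high confidence level" the slide describes: once labeled, the core keeps refining that association on its own.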
26. AHaH Advantages: Flexibility of AHaH
• AHaH circuits can be integrated with existing integrated-circuit technology by applying them at the very end of the process line
• Front end of line: the current integrated circuits
• Back end of line: the AHaH circuits, fabricated on top of them
• This takes advantage of an existing process that already works very well and brings it to the next level of performance
• It creates new computers that adapt as they are used
(Figure: AHaH circuits stacked on top of current integrated circuits)
https://www.youtube.com/watch?v=w7q07eKPM9U
28. AHaH Enabler: the Memristor
The memristor is the common building block behind ReRAM, AHaH computing, and NVRAM.
(Figure: memristor circuit symbol)
29. HP's Memristor Problem
"Most memristors that I have seen do not behave like fast, binary, non-volatile, deterministic switches. This is a problem because this is how HP wants them to behave."
- Alex Nugent, lead inventor and CEO of Knowm
The incumbents are limiting the applications of memristors to the existing memory technology framework, restricting how the memristor can be used.
http://knowm.org/the-problem-is-not-memristors-its-how-hp-is-trying-to-use-them/
34. Future Opportunity for Memristor Production
From 2015 onwards, mass manufacturing of memristors is predicted to become available. This will drive further improvement of the AHaH computing architecture, as AHaH synaptic chips can then be produced from memristors.
http://www.reram-forum.com/2013/03/21/predicting-the-reram-roadmap/
37. Definition of Big Data: Volume, Variety & Velocity
• Volume: global data production is expanding and will reach ~40 ZB by 2020
• Variety: more than 85% of an organization's data is unstructured, and processing unstructured data is time- and energy-consuming
• Velocity: data velocity is measured against time; handling it requires real-time stream processing
http://www.csc.com/insights/flxwd/78931-big_data_universe_beginning_to_explode
https://www.capgemini.com/blog/capping-it-off/2014/07/are-you-effectively-using-big-data
https://web-assets.domo.com/blog/wp-content/uploads/2014/04/DataNeverSleeps_2.0_v2.jpg
39. Big Data Analytics
• Business intelligence comprises the tools and methodology for analyzing data; data and big data analytics can be grouped under business intelligence
• Past in nature: descriptive and diagnostic analytics
• Future in nature: predictive and prescriptive analytics
http://www.fyisolutions.com/blog/advanced-analytics-seminar/
40. The Big Hurdle
The bottleneck is in the technology: we need not only new algorithms and techniques but a breakthrough computing architecture.
http://image.slidesharecdn.com/finalpresentation-150305004602-conversion-gate01/95/presentation-on-big-data-analytics-15-638.jpg?cb=1425516438
42. Big Data Analytics Applications
• Healthcare industry: gives governments the agility to combat a flu epidemic by matching vaccine production and delivery rates against outbreak numbers across states
• Autonomous vehicles: Google's AV must recognize and anticipate, in real time, what might be coming at a junction
• Oil & gas industry: Chevron needs to analyze 50 terabytes of seismic data, and a missed drilling decision costs around USD 100M
• Retail industry: Starbucks aligns its marketing strategy to real-time data and responses
http://ayata.com/stage/wp-content/uploads/ayata-infographic-2012-09-04.jpg
44. CONCLUSION
• Big market and strong growth for Big Data applications
• The Von-Neumann architecture bottleneck is hitting its limits
• AHaH computing is a cutting-edge architecture:
- Real-time processing (integrated memory and processing)
- Ultra-low power consumption and less heat generated
- A self-organized approach
http://ekvv.uni-bielefeld.de/bilddb/bild?id=87240
45. ANY QUESTIONS?
For further reading on AHaH computing, please visit www.knowm.org (the startup behind AHaH, launched in July 2015).