Is artificial superintelligence (ASI) imminent? Adam Ford will assess the evidence for, and the ethical importance of, artificial intelligence, including its opportunities and risks. Drawing on the history of progress in AI, and on how AI today surpasses peak human capability in some domains, he will present forecasts about further progress.
"Progress in AI will likely be explosive; even more significant than both the agricultural and industrial revolutions." Adam will explore the notion of intelligence: which aspects are currently missing in AI, how 'understanding' arises in biological intelligence, and how it could be realised in AI over the next decade or two. He will conclude with his views on ideal AI outcomes and some recommendations for increasing the likelihood of achieving them.
BIO: Adam Ford (Master of IT, RMIT) is an IEET Affiliate Scholar and futurologist who works as a data/information architect, data analyst and data engineer. He has co-organised a variety of conferences in Australia, the USA and China. Adam also convenes the global 'Future Day' effort, which seeks to dedicate a specific day to focusing on the future. He is a grassroots journalist who has interviewed many experts on the future, and is currently working on a documentary project on preparing for the future of artificial intelligence.
The Technological Singularity - Risks & Opportunities - Monash University (Adam Ford)
Why focus on the risks and opportunities of Strong AI? What could go wrong? I will draw on three main theses from Nick Bostrom's book Superintelligence and talk about possible failure modes. I will also briefly talk about what could go really well if we end up with Friendly AI.
Artificial Intelligence and Machine Learning (Aditya Singh)
Presented by the JBIMS Marketing Batch (2017-2020).
Application of Artificial Intelligence in MIS (Management Information Systems). Presented by Trilok Prabhakaran, Aditya Singh, Shashi Yadav and Vaibhav Rokade. The presentation includes live cases from two different industries.
Dan Faggella - TEDx Slides 2015 - Artificial Intelligence and Consciousness (Daniel Faggella)
URL of the original TEDx Talk: https://www.youtube.com/watch?v=PjiZbMhqqTM
Notes from my 2015 TEDx presentation, titled: "We Should Wake Up Before The Machines Do," on the topic of artificial intelligence and consciousness.
Speaker: Daniel Faggella
Location: Southern New Hampshire University
Startup 101 for the freshman and sophomore class of Computer Innovation Engineering (International class) at King Mongkut's Institute of Technology Ladkrabang (KMITL)
INTERNATIONAL FORUM TOWARD AI NETWORK SOCIETY (Dwango Artificial Intelligence Laboratory)
Panel Discussion: Management of Risks Brought about by AI Networking
In a desirable future, the happiness of all humans will be balanced against the survival of humankind under the support of super intelligence. In that future, society will be an ecosystem formed by augmented human beings and various public AIs, in what I dub an ecosystem of shared intelligent agents (EcSIA).
Although no human can completely understand EcSIA—it is too complex and vast—humans can control its basic directions. In implementing such control, the grace and wealth that EcSIA affords need to be properly distributed to everyone.
A presentation I created for class that tries to explain the different approaches in developing Artificial Intelligence through explanation and examples.
Brain-inspired AI as a way to desired general intelligence (Dwango Artificial Intelligence Laboratory)
Brain-inspired AI as a way to desired general intelligence
Hiroshi Yamakawa* Naoya Arakawa* Koichi Takahashi*
*Whole Brain Architecture Initiative
Presented at the "Gatsby-Kakenhi Joint Workshop on AI and Neuroscience", May 12, 2017
Abstract:
It is critical to give artificial intelligence (AI) a general-purpose problem-solving ability, as artificial general intelligence (AGI) with such functionality will bring about unprecedented intelligence development through its capacity for self-improvement.
With the advent of deep learning, Moravec's paradox has begun to be overcome, and current AI is progressing in a bottom-up approach following the paths of phylogeny and ontogeny. However, it still struggles with the intuitive physics that infants less than a year old can grasp. From this viewpoint, the realization of AGI seems far away.
Meanwhile, from the viewpoint of AI that draws on the brain, the realization of AGI does not seem so far away, as brain functions have been partially realized with artificial neural networks (ANNs). For example, general object recognition with a convolutional neural network (CNN) can be used to model the visual pathway of the temporal lobe, and the delayed-reward calculation of reinforcement learning can model the basal ganglia. Similar cases can be made for other brain regions such as the auditory cortex and cerebellum. These functional models will gradually be integrated into a whole-brain architecture using mesoscopic connectomic information.
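The "delayed-reward calculation of reinforcement learning" mentioned above can be illustrated with a minimal sketch (not taken from the talk itself): tabular TD(0) learning, in which a dopamine-like prediction error propagates reward information backward to earlier states, a mechanism commonly associated with the basal ganglia. All names and the toy three-state chain are illustrative assumptions.

```python
# Minimal TD(0) sketch: a reward-prediction error (analogous to a dopamine
# signal) nudges each state's value estimate toward the discounted value
# of its successor, so delayed reward propagates backward over time.

def td0_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state)."""
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += alpha * td_error
    return td_error

# Toy chain: s0 -> s1 -> s2, with reward 1.0 only on reaching terminal s2.
values = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(200):
    td0_update(values, 0, 1, 0.0)   # no immediate reward at s0
    td0_update(values, 1, 2, 1.0)   # reward arrives on the final transition

# Value propagates backward: V(s1) ≈ 1.0, V(s0) ≈ gamma * V(s1) ≈ 0.9
print(round(values[1], 2), round(values[0], 2))
```

The key point for the brain analogy is that only the prediction error is needed to learn about rewards that arrive later, which is why this computation maps naturally onto a dedicated subsystem.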
As the realization of brain-inspired AGI becomes more realistic, we must have a broad perspective in research and development, for its impact on society will be extensive.
Even if AGI goes beyond our intelligence, it will be relatively easy to understand if it operates on the same architecture as our brain. AGI based on the brain's architecture is more likely to become a common property of mankind, because the architecture can be agreed upon by many people and used in a widely shared development platform. Thus, in order to build AGI in harmony with human beings, it would be desirable to strengthen cooperation between the neuroscience and AI communities and to promote the open development of brain-inspired AI.
Abstract:
Artificial intelligence is progressing rapidly, and AI with versatility and autonomy could have a major impact on humanity. Against this background, discussions on building the relationship between AI and human beings have become active.
Well-known examples include the Partnership on AI, the Future of Humanity Institute (FHI) and the IEEE Global Initiative. In Japan, the Ethics Committee of the Japanese Society for Artificial Intelligence, the AI and Society Study Group, Acceptable Intelligence with Responsibility (AIR), the Robotics Study Group and the General AI Research Committee are also active.
Under these circumstances, our NPO WBAI promotes open co-creation through the whole-brain architecture approach, with the vision of "a world in which artificial intelligence is harmonized with humanity". Its mission is to "learn from the architecture of the whole brain and create human-like artificial general intelligence".
The main reasons for learning from the brain are to make AGI behave more like human beings, to facilitate the creation of AGI, and to encourage collaboration around a brain-based AGI architecture that can be shared by many people.
WBAI intends to broaden its influence by spreading these ideas and to contribute to the open, global development of AGI.
----
This is the presentation material from the "Superhumanity: Post-Labor, Psychopathology, Plasticity" International Symposium, Seoul, Oct 27th, 2017.
http://www.mmca.go.kr/eng/pr/newsDetail.do?menuId=6010000000&bdCId=201709290006033&searchBmCid=200904130000001
AI – Risks, Opportunities and Ethical Issues, April 2023 (Adam Ford)
If we turn our gaze to our own history, we find that our ascension from mid-level scavengers to the architects of our own global ecosystem is the result of a singular, powerful phenomenon: intelligence. Intelligence, in its myriad forms, has sculpted the modern civilization that we experience today. In recent years, the torch of intelligence has been partially handed off from biological to artificial substrates, reshaping the landscape of capability in unprecedented ways.
Artificial intelligence, with relentless and inhuman efficiency, is fast surpassing human ability in numerous narrow domains, offering a tantalising or perhaps chilling hint of the future. We're not in the realm of science fiction, but in the foothills of a towering reality, where machines might not only equal but surpass human capabilities in most areas of economic and societal value. This isn't a hazy, distant future, but a rapidly approaching dawn that could eclipse our most sophisticated abilities.
The impact of superhuman AI will be immense, altering our world in ways that we are only beginning to understand. However, it is by no means certain that these changes will be beneficial for us or for other sentient animals. The chilling narrative of an AI out of control isn't merely a whimsical construct of fiction, it is a plausible scenario that needs serious consideration.
Increasingly, those at the cutting edge of AI research, the engineers and scientists who build and understand these systems, are expressing growing concern. The horizon of transformative AI, once considered a distant and abstract prospect, may be closer than we once assumed. This proximity is a cause not just for excitement, but for profound concern.
In this emerging epoch of AI, skepticism must become more than an intellectual indulgence. We are potentially at the cusp of a revolution, set to radically reshape our world, and our ethical responsibilities demand that we critically analyze and steer this transformation. Trust in AI can't be presupposed; it should be cultivated scientifically, subjected to constant scrutiny and understanding. We should not passively accept the expanding role of AI but diligently evaluate its implications.
Black box algorithms that elude our comprehension cannot satisfy our need for transparency. Our commitment to accountability in AI must extend to understanding its mechanisms, a challenge only skepticism can meet.
Moreover, we must steel ourselves against the seductive allure of utopian promises and the paralysing dread of Hollywood-inspired dystopian visions of AI. Our skeptical toolkit allows us to discern reality from hype and fosters a balanced perspective.
In the rush to keep abreast of our AI creations, skepticism is our indispensable ally, guiding us towards a nuanced comprehension of their ethical and safety contours. Devoid of skepticism's unflinching gaze, we stand vulnerable, teetering on the edge of being outpaced by these emergent intelligences.
AI – Risks, Opportunities and Ethical Issues (Adam Ford)
Adam Ford, data professional and futurologist, will assess the prospect of transformative artificial intelligence (AI) in the near and mid-term future, and discuss the ethical importance of building an AI that is smarter and more capable than humanity.
He will explore how AI may outpace human capability in most (or possibly all) areas of economic usefulness – i.e. automated science, engineering and technological development – resulting in explosive progress that is likely to be more significant than the agricultural and industrial revolutions of the past.
Adam will discuss the notion of intelligence, what is missing in AI now, and the importance of ‘understanding’ in biological intelligence. He will address a variety of expert opinions on progress in AI capability and timelines, and make recommendations for achieving good outcomes.
Superintelligence: How Afraid Should We Be? (David Wood)
Superintelligence: How afraid should we be? Presentation by David Wood at the Computational Intelligence Unconference UK, 26th July 2014. Reviews ideas in three recent books: Superintelligence, by Nick Bostrom; Our Final Invention, by James Barrat; and Intelligence Unbound, edited by Russell Blackford and Damien Broderick.
Please contact the author to invite him to present animated and/or extended versions of these slides in front of an audience of your choosing. (Commercial rates will apply for commercial settings.)
A branch of computer science that develops machines and software with human-like intelligence.
Artificial Intelligence (AI) is one of the hottest topics in the tech and startup world at the moment. The field of AI and its associated technologies present a range of opportunities – as well as challenges – for corporates. Learn more about what Artificial Intelligence means for your organization.
Will the machines save us or kill us all? That is the question. While many are thrilled with the latest AI breakthroughs and dream of a shining AI-powered world, others, like Bill Gates, Elon Musk, Steve Wozniak and the late and legendary Stephen Hawking, have expressed concerns about the evolution of the machines and warned of an apocalyptic future.
http://www.altitude.com/
AI and the Future of Work [TUG-CO, 11/15/23] (Matt Small)
This talk was given to the Tech User Group, Central Oregon on November 15th, 2023.
AI revolutionizes the way we interact with information. It will change the way we work. In this talk, we will introduce the fundamental concepts and the latest research and policies in the field. We will then explore numerous opportunities and societal challenges related to technology adoption, work augmentation and identity. Finally, we will introduce Personal AI and the mission of Kwaai Lab.
Kwaai Lab, a non-profit, volunteer-based open-source AI lab, is composed of researchers, architects, developers, and philosophers. Our goal is to design and implement the tools, fundamentals, and policies that empower us all to own our own Personal AI assistants.
A talk about Artificial Intelligence and its impacts, and how it relates to Creativity: can artificial intelligence be creative? Does it have a sense of ethics or morals? Is it all simply a simulation?
Many questions arise around this topic: What is Artificial Intelligence and what isn't? What is possible today? How can my organisation use AI? Will this replace my job? What can we expect in the future?
We will answer these and more in our presentation. We help you understand the impact of digital on your business and give you concrete steps to start taking action.
Artificial intuition is the theoretical capacity of an artificial object or software system to function with the factor of consciousness known as intuition: a machine-based system with some capacity to operate analogously to human intuition.
This guide demystifies AI and democratizes AI knowledge on how it creates and delivers value. It provides an essential understanding of AI to anyone with varying technical knowledge, curiosity, and interest in the technology.
Similar to Will Super-Intellligent AI Transform Our Future? - Adam Ford - 2022-01 (20)
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
Will Super-Intellligent AI Transform Our Future? - Adam Ford - 2022-01
1. Will Super-Intelligent AI Transform our Future?
Adam Ford • 2022-01 • @adam_ford
Can we intelligently design intelligence?
Will AI be able to self-improve?
Can AI understand stuff?
What role might causal learning play?
2. DeepMind’s Deep Q-Learning, circa 2015
The agent is fed sensory input (a pixel stream) with the goal of maximising the on-screen score - no rules or domain knowledge are given.
It starts off clumsy... is much better after 2 hours... and after 4 hours it learns to dig a tunnel through to the top side.
3. Stepping into the Future online conference, April 23-24th 2022
Speakers/panelists: Anders Sandberg (FHI), Andres E. Gomez (Qualia Research Institute), Ben Goertzel (OpenCog), David Pearce (The Hedonistic Imperative), James Hughes (IEET), Mike Johnson (Qualia Research Institute), PJ Manney (IEET), Stuart Armstrong (FHI).
5. A faster future than we think - are we ready?
1. I will demonstrate there is a substantial chance of Human-Level AI (HLAI*) before 2100;
2. If HLAI, then likely Super Intelligence (SI) will follow via an Intelligence Explosion (IntExp);
3. Therefore, an AI-driven productivity explosion;
4. An uncontrolled IntExp could destroy everything we value, but a controlled IntExp would benefit life enormously - shaping our future lightcone.
*HLAI = Human-Level AI - the term ‘human-level’ may be a misnomer, as I will discuss later.
[The Intelligence Explosion: Evidence and Import, 2012]
6. Will AI Surpass HLI?
So what if it does? Intelligence is powerful.
(HLI = human-level intelligence)
7. Intelligence is Powerful (and potentially dangerous)
Intelligence got our ancestors from mid-level scavengers to apex predators and beyond:
● from hand axes → the Accuracy International L115A3 sniper rifle → fully automated drones
● from barefoot meandering → heavier-than-air flight → escape velocity → landing robots on Mars, probes escaping our solar system...
Assumption: if we substituted human minds with better-designed minds running at 1 million times human clock speed, we would likely see a much faster rate of progress in technoscience.
13. Force Multipliers
Force multipliers in multiple areas of science and technology are setting the stage for an AI-driven productivity explosion.
Examples: higher-resolution brain scanners, faster hardware, exotic forms of computation, improved algorithms, improved philosophy.
● AI is a force-multiplier feedback loop for progress in the above examples.
14. Evolution vs Intelligent Design of Intelligent Processes
Human intelligence, the result of "blind" evolutionary processes, has only recently developed the ability for planning and forward thinking - deliberation. Almost all our cognitive tools were the result of ancestral selection pressures, which form the roots of almost all our behavior. As such, when considering the design of complex systems where the intelligent designer - us - collaborates with the system being constructed, we are faced with a fundamentally new way to achieve General Intelligence, completely different from the process that gave birth to our brains.
15. AI is now powerful - more so in the future
Syllogism 1: intelligence is powerful; AI is intelligent; therefore AI is powerful.
Syllogism 2: technological progress is a much faster force multiplier than evolutionary progress; AI is subject to technological progress and HI is subject to evolutionary progress; therefore AI will become smarter faster than HI.
Syllogism 3: more intelligence is more powerful than less intelligence; AI will overtake HI; therefore AI will be more powerful than HI.
17. Intelligence Explosion
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." - I. J. Good
18. Intelligence Explosion
The purest case - an AI rewriting its own source code.
● Improving intelligence accelerates
● It passes a tipping point
● ‘Gravity’ does the rest
[Image: Sisyphus & robot - pushing uphill, then gravity pulling]
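The push-then-pull dynamic above can be sketched as a toy simulation; every parameter below is a made-up illustrative value, not a claim about any real AI system:

```python
# Toy model of recursive self-improvement (hypothetical numbers):
# each iteration, "intelligence" grows in proportion to its current level,
# and past a tipping point the growth rate doubles - the 'gravity' phase.
def self_improvement(i0=1.0, gain=0.1, tipping_point=2.0, steps=30):
    levels = [i0]
    for _ in range(steps):
        i = levels[-1]
        rate = gain * (2.0 if i >= tipping_point else 1.0)
        levels.append(i * (1 + rate))  # compounding: smarter -> faster gains
    return levels

trajectory = self_improvement()
# The curve is slow at first ('push'), then runs away after the tipping point.
```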
19. Is AI > HLI likely?
What happens when machines become more intelligent than humans? Philosopher David Chalmers offers this syllogism:
1. There will be AI (before long, absent defeaters).
2. If there is AI, there will be AI+ (soon after, absent defeaters).
3. If there is AI+, there will be AI++ (soon after, absent defeaters).
4. Therefore, there will be AI++ (before too long, absent defeaters).
The Singularity: A Philosophical Analysis: http://consc.net/papers/singularity.pdf
20. Mathematician and Hugo Award-winning SciFi writer Vernor Vinge coined the term "Technological Singularity".
“The best answer to the question, ‘Will computers ever be as smart as humans?’ is probably ‘Yes, but only briefly.’” - Vernor Vinge
Vinge postulated: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." (1993) https://edoras.sdsu.edu/~vinge/misc/singularity.html
21. Closing the Loop - Positive Feedback
A positive feedback loop is achieved through an agent's ability to recursively optimize its ability to optimize itself!
AI optimizes its source code → better source code = more optimization power* → repeat.
*A smarter AI will be even better at becoming smarter.
Bottlenecks? Other entities? Unlike the brain, an AI's substrate is expandable, and its grey-box (or ideally white-box) source code can be engineered and manipulated directly. The brain is black-box code with no manual... it could possibly be improved through biotech, but that is not trivial.
22. Moore’s Law: the Actual, the Colloquial & the Rhetorical
● CPU - transistors per CPU no longer doubling?
● GPU - speed still increasing
● Economic drivers are motivated by the general advantages of having faster computers.
● Even if there is a slowdown, small advantages in speed can aid competitors in race conditions - and surges in speedups usually follow. It is not a smooth curve upwards.
● No, speedups don’t have to come from increases in the number of transistors in CPU cores.
Surface-level abstractions are used to make specific predictions about when the singularity will happen.
[Figure: a plot (logarithmic scale) of MOS transistor counts for microprocessors against dates of introduction, nearly doubling every two years. See https://ourworldindata.org/technological-progress]
Though Moore’s law has a specific definition, the term has been used colloquially to indicate a general trend of increasing computing power.
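The colloquial doubling arithmetic can be sketched in a few lines; the starting count and period below are illustrative assumptions, not data read off the plot:

```python
# A quantity that doubles every `doubling_period_years` grows by a factor
# of 2**(years / doubling_period_years) over `years` years.
def doubling_growth(start, years, doubling_period_years=2.0):
    return start * 2 ** (years / doubling_period_years)

# Illustrative: 1,000 transistors doubled every 2 years for 40 years
projected = doubling_growth(1_000, 40)  # 20 doublings -> about a billion
```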
23. AI: Why should we be concerned?
● Value Alignment: “The concern is not that [AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel, but rather that it would be very competently pursuing an objective that differs from what we really want.” - Nick Bostrom
..but what do we really want? Do we really know? What would we really want if we were wiser/smarter/kinder?
● Race Conditions: Intelligence is powerful - the first to create powerful AGI gets massive first-mover advantages - so safety measures may be discarded in favor of speeding up development.
24. Concern: Political and class struggle
● The Artilect War
○ Cosmists - those who favor building AGI
○ Terrans - those who don’t
“I believe that the ideological disagreements between these two groups on this issue will be so strong, that a major 'artilect' war, killing billions of people, will be almost inevitable before the end of the 21st century." - Hugo de Garis
25. Will HLI rise with AI, at the pace of AI?
The "rapture of the nerds" - where HLI rises with AI as humans merge with it, the true believers break with human history and merge with the machine, and the unaugmented are left in a defiled state...
● No.
● Biological bottlenecks - the kernel of human intelligence.
● Merging with AI enough to keep pace means becoming posthuman - the defining characteristics of humans would be drowned out like a drop of ink in an ocean.
28. Belief shouldn’t be On / Off
● Degrees of belief
● Confidence intervals, credence levels
● Epistemic humility
● What's the likelihood of smarter-than-HLI AI by 2050? Or 2075? Or 2100?
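Treating belief as a number rather than On/Off can be sketched with a minimal Bayesian update; the prior and likelihoods are made-up illustrative values:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
def update_credence(prior, p_evidence_if_true, p_evidence_if_false):
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# A 30% credence in "smarter-than-HLI AI by 2050", updated on evidence
# judged 3x as likely if the hypothesis is true:
posterior = update_credence(0.30, 0.6, 0.2)  # rises, but stays short of 1.0
```

Credence moves up or down in proportion to the strength of the evidence - never snapping to 0 or 1.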
33. AGI & Narrow AI
Strong AI or "Artificial General Intelligence" (AGI) is AI that has the ability to perform "general intelligent action".
Distinction: Narrow AI (or Weak AI / Applied AI) is used to accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities.
Gradient between Narrow & General - what lies in between?
● Narrow: Deep Blue (Chess), Watson (Jeopardy)
● In between: AlphaGo, AlphaStar, Humans, Dolphins, Bonobos
● Optimally General: No such thing. Computable AIXItl? Not foreseeably.
[Diagram: gradient from Narrow AI to AGI]
34. Getting closer to solving key building blocks of AI
● Unsupervised Learning (without human direction, on unlabelled data) - DeepMind’s AlphaGo and AlphaStar, now in many other systems
● Transfer Learning (knowledge in one domain is applied to another)
35. One-Shot or Few-Shot Learning
● GPT-3 - few-shot learning
Transfer Learning:
"I think transfer learning is the key to general intelligence. And I think the key to doing transfer learning will be the acquisition of conceptual knowledge that is abstracted away from perceptual details of where you learned it from." - Demis Hassabis
36. Transfer Learning
Knowledge in one domain is applied to another.
1. IMPALA simultaneously performs multiple tasks - in this case, playing 57 Atari games - and attempts to share learning between them. It showed signs of transferring what was learned from one game to another.
https://www.technologyreview.com/f/610244/deepminds-latest-ai-transfers-its-learning-to-new-tasks/
2. REGAL - the key idea is to combine a neural network policy with a genetic algorithm, the Biased Random-Key Genetic Algorithm (BRKGA)... REGAL achieves on average 3.56% lower peak memory than BRKGA on previously unseen graphs, outperforming all the algorithms we compare to, and giving a 4.4x bigger improvement than the next best algorithm.
https://deepmind.com/research/publications/REGAL-Transfer-Learning-For-Fast-Optimization-of-Computation-Graphs
38. Human INT isn’t a universal standard
Human intelligence is a small fraction of the much larger mind design space.
Misconception: intelligence is a single dimension.
39. The space of possible minds
● Humans, not being the only possible mind, are similar to each other because of the nature of sexual reproduction
● Transhuman minds, while still being vestigially human, could be smarter or more conscious than unaugmented humans
● Posthuman minds may exist all over the spectrum - occupying a wider possibility space
● It may be very hard to guess what an AI superintelligence would be like, since it could end up anywhere in this spectrum of possible minds
42. Recap
Intelligence is powerful - it’s important to track progress in AI.
AI doesn’t need to be a reimplementation of human intelligence for it to be powerful - we need a wide view of intelligence.
Distinguish between evaluating levels of intelligence in:
● AI vs Humans and
● Humans vs Humans
Therefore:
● Avoid anthropomorphism
● We shouldn’t limit measuring progress in AI to the standard of human intelligence alone
45. Machine Understanding
It really matters what kinds of superminds emerge!
There is potential for machine understanding (contrast with explainable AI) this decade or next - see Yoshua Bengio on causal representation learning, and Judea Pearl too.
Assumptions:
● ethics isn't arbitrary
● ethics is strongly informed by real-world states of affairs
● understanding machines will be better positioned than humans to solve currently intractable problems in ethics
Therefore: investigate the safe development of understanding machines.
See also Henk de Regt: what is scientific understanding? (I think machines can do it.)
46. What is understanding? Understanding by Design: Six Facets
● Can explain: provide thorough, supported, and justifiable accounts of phenomena, facts, and data.
● Can interpret: tell meaningful stories; offer apt translations; provide a revealing historical or personal dimension to ideas and events; make it personal or accessible through images, anecdotes, analogies, and models.
● Can apply: effectively use and adapt what is known in diverse contexts.
● Have perspective: see and hear points of view through critical eyes and ears (or sensors).
● Can empathize: find value in what others might find odd, alien, or implausible; perceive sensitively on the basis of prior direct experience.
● Have self-knowledge: perceive the personal style, prejudices, and habits of mind that both shape and impede our own understanding; be aware of what we do not understand and why understanding is so hard.
(Source: Wiggins, G. and McTighe, J. (1998) Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.)
47. What’s missing in modern AI systems?
Deep learning works via mass correlation on big data - great if you have enough control over your input data relative to the problem, and the search space is confined.
Though (*most of) the understanding isn’t in the system - it’s in the experts who develop the AI systems and interpret the output.
*Arguably some aspects of understanding exist in these systems, though not all of them.
48. Training: Black Box vs an Understanding Agent
Because of ML's opaque nature, we can’t see the under-the-hood effects of our training - all we can see is the accuracy of its predictions. So we strive for “predictive accuracy” - we train AIs by blindly groping towards reward.
Yet the AI may be harbouring ‘malevolent motives’ that aren’t recognisable by merely looking at the output - at some stage the AI may then unpredictably make a ‘treacherous turn’.
How can we tell a black-box AI hasn’t developed a taste for wire-heading / reward hacking?
49. The Importance of Knowing Why
Why is one of the first questions children ask (oft recursively).
The new science of Causal Inference helps us understand cause-and-effect relationships.
Current AI is bad at causal learning - but really good at correlating across big data.
Judea Pearl believes causal AI will happen within a decade.
50. Why Machine Understanding?
To avoid unintended consequences!
● Value alignment: the AI’s goals are compatible with our *useful/good values
● Verifiability / Intelligibility: the AI’s source code / framework is verifiably beneficial (or highly likely to be), and its goals and means are intelligible
● Ethical Standards: AI will have to understand ethics - and its understanding ought to be made intelligible to us
● Efficiency: far less training data and computation - efficiency is required for time-sensitive decisiveness
51. Why? - Value Alignment
● AI should understand and employ common sense about:
○ What you are asking - did you state your wish poorly?
○ Your motivations - why you are asking it
○ The context
■ where desirable solutions are not achieved at any cost
■ Is your wish counterproductive? Would it harm you or others?
■ E.g. after summoning a genie, using the 3rd wish to undo the first 2 wishes; King Midas - a badly stated wish.
53. AI needs to Understand Ethics
Intelligible Ethical Standards
● Rule-based systems are fragile - Asimov’s novels explore the failure modes
● Convincing predictive ethical models are not enough - predictions fail, and it’s bad when it’s not known why
● A notion of cause and effect is imperative in understanding ethical problems - obviously, trolley problems :D
54. Why? - Efficiency
Overhang resulting from far more efficient algorithms - allowing more computation to be done with less hardware.
More computation used *wisely could realise wonderful things.
*Wisdom is hard won through understanding.
We have a preference for X before we die - where X could be: a beneficial singularity, radical longevity medicine, robotic companions, becoming posthuman, etc.
55. Why? - Wisdom
Simplistically, wisdom is hard won through understanding.
Decisions are less risky when knowledge is understood, and more risky when based on ignorant interpretations of data.
56. Why Not Machine Understanding?
Machine understanding could be disastrous if:
1. Adding the right ingredients to AI makes it really cheap and easy to use;
2. AI becomes extremely capable (intelligence is powerful before and after an intelligence explosion);
3. It takes only a few bad eggs or misguided people who intentionally/unintentionally use the AI for diabolical ends for there to be devastating consequences.
57. How - Achieving AI Understanding - Intelligibility
Take hints from Lakatos Award winner Henk de Regt’s arguments against the common theory that scientific explanation/understanding is achieved through building models:
● The scientists building the models need certain skills - scientific understanding is fundamentally a skillful act
● He also emphasises intelligibility in scientific explanation - something sorely lacking in the Deep Learning models that AI develops
58. Causal AI - the logical next step
4 rungs:
1) Learning by association (deep learning, massive correlation - AlphaGo, some animals apparently, and sales managers)
a) Lacks flexibility and adaptability
2) Taking action: "what ifs", taking actions to influence outcomes - interventions require a causal model
3) Identifying counterfactuals ("but-for" causation) - e.g. to a computer, a match and oxygen may play an equal role in starting a fire
4) Confounding bias - associated with the 2nd level - citrus fruits prevent scurvy... but why? A long time ago people thought it was the acidity; it turns out it is vitamin C
https://ssir.org/articles/entry/the_case_for_causal_ai
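The gap between rung 1 (association) and rung 2 (intervention) can be sketched with a toy structural causal model - a hypothetical example, not one from the talk - where a confounder Z makes X and Y correlate even though X does not cause Y:

```python
import random

# Z is a confounder that causes both X and Y; X has no effect on Y.
def sample(do_x=None):
    z = random.random() < 0.5            # hidden common cause
    x = z if do_x is None else do_x      # observe X, or intervene: do(X)
    y = z                                # Y depends only on Z
    return x, y

random.seed(0)
obs = [sample() for _ in range(10_000)]
# Rung 1 (seeing): P(Y=1 | X=1) looks like a strong "effect"
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, _ in obs)
# Rung 2 (doing): P(Y=1 | do(X=1)) reveals X is causally inert
intv = [sample(do_x=True) for _ in range(10_000)]
p_y_do_x1 = sum(y for _, y in intv) / len(intv)
```

Here the observational estimate is 1.0 while the interventional one is roughly 0.5 (the base rate of Z) - exactly the correlation-versus-causation gap that a causal model is needed to detect.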
59. Causal Learning in AI - a key to Machine Understanding
Opening up a world of possibilities for science:
- Important for humans in making discoveries - if we don’t ask why, then we don’t make new discoveries
- Big data can’t answer causal questions alone - emerging causal AI will tease apart correlation and causation, identifying which variables are mediators and which are confounders
60. Causal Representation Learning
Great paper, ‘Towards Causal Representation Learning’: “A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low-level observations.”
61. Causation & Moral Understanding
Consequentialism - the consequences of an act are either 1) its causal consequences (computationally tractable), or 2) the whole state of the universe following the act (practically incomputable).
Zero-shot learning for ethics: being able to understand the salient causal factors in novel moral quandaries (problems including confounders and out-of-training-sample elements, i.e. interventions) increases the chance of AI solving these moral quandaries on the first try, and then explaining how and why its solution worked.
Some problems, including solving ethical Superintelligence, may need to be done correctly on the first try.
62. Understanding Understanding
● Philosophy / Epistemology
● Human Understanding
● Machine Understanding
● Understanding & General Intelligence
● AI Impacts / Opportunities & Risks
● Ethics & Zero-Shot Solving of Moral Quandaries
Conference in planning for some time after the coronavirus restrictions ease - let me know if you are interested - tech101 [at] gmail [dot] com, Twitter @adam_ford, YouTube: YouTu.be/TheRationalFuture
64. Evaluating measurable claims about intelligence
Cognitive modelling (does the AI think & learn similarly to humans?) - comparing AI vs human reaction times, error rates, response quality, etc.
66. Full understanding of human intelligence is not required for AI
● For most of human history, we didn’t really understand fire - but made good use of it
● Today we don’t fully understand biology, though we are able to develop effective medicine
● We don’t fully understand the brain - yet we have developed intelligent algorithms, some neuromorphic (brain-inspired)
67. Peaks & Valleys in Cognitive Capability
● Peaks of AI cognitive capabilities have already crossed over valleys in human cognitive capabilities.
● Will we see a rising tide in general AI cognitive capability?
● Will the valleys in AI cognitive ability rise above the peaks in human cognitive capability?
68.-69. AI vs HLI Ability - Peaks & Troughs
[Charts - not representative of all human or AI abilities]
70. Turing Tests & Winograd Schemas
● The Turing test has arguably been passed
● A Winograd Schema success rate of 90% would be considered impressive by people in NLP - UPDATE: GPT-3 recently scored 88.3% without fine-tuning; see 'Language Models are Few-Shot Learners'
● Conversational AI is getting better - fast (see earlier example) / OpenAI’s GPT-2, now GPT-3 - https://philosopherai.com & https://play.aidungeon.io
● Also check out NVIDIA's new sensation :D https://www.zdnet.com/article/nvidias-ai-advance-natural-language-processing-gets-faster-and-better-all-the-time
71. What won’t AI be able to do within the next 50 years?
Best predictions? What Computers Can’t Do - Hubert Dreyfus
○ What computers still can’t do:
■ Make coffee
■ Beat humans at physical sport
■ Write funny jokes?
■ Write poetry? (and novels)
■ Formulate creative strategies - the ability to formulate the kind of creative strategies that, for instance, define a great lawyer’s ability to form unique arguments or a top CEO's ability to lead his or her company in bold new directions. This isn’t just about analyzing data; it’s about venturing into unstructured problem-solving tasks, and deciding which pieces of information are relevant and which can be safely ignored.
https://www.digitaltrends.com/cool-tech/things-machines-computers-still-cant-do-yet/
72. How to evaluate evidence?
● Trend extrapolation - black to dark box
○ Some views on trend extrapolation favor near-term AGI
○ Some views don’t
○ Strange how people carve up what’s useful trend extrapolation and what isn’t to support their views
● Inside view - white to translucent box
○ Weak - tenable - loose qualitative views on issues with lopsided support
○ Strong - difficult
● Appeal to authority :D Pick your authority!
73. Trend Extrapolation: Accelerating Change
● Kurzweil's graphs
○ Not a discrete event
○ Despite being smooth and continuous, explosive once the curve reaches transformative levels (the knee of the curve)
● Consider the Internet. When the Arpanet went from 10,000 nodes to 20,000 in one year, and then to 40,000 and then 80,000, it was of interest only to a few thousand scientists. When ten years later it went from 10 million nodes to 20 million, and then 40 million and 80 million, the appearance of this curve looks identical (especially when viewed on a log plot), but the consequences were profoundly more transformative.
● Q: How many devices do you think are connected to the internet today?
74. This image is old: a 128GB SanDisk card was $30, and a 400GB card was $99, on eBay (circa Sept 2019). A 512GB microSD card was $99 AUD in Jan 2022 at Scorptec.
77. Weeeee! Enjoy the ride!
Math and deep insights (especially probability) can be powerful relative to trend fitting and crude analogies.
Long-term historical trends are weakly suggestive of future events.
BUT - humans aren’t good at math, and aren’t all that convinced by it - nice pretty graphs or animated GIFs seem to do a better job.
Difficulty: convincing humans is important - since they will drive the trajectory of AI development... so convincing people of good causes based on less-accurate/less-sound yet convincing arguments is important.
78. Survey on AI timelines
AI expert poll in 2016:
● 3 years for championship Angry Birds
● 4 years for the World Series of Poker (some poker solved in 2019)
● 6 years for StarCraft (now solved)
● 6 years for folding laundry
● 7–10 years for expertly answering 'easily Googleable' questions
● 8 years for average speech transcription (sort of solved)
● 9 years for average telephone banking (sort of solved)
● 11 years for expert songwriting (sort of solved)
● over 30 years for writing a New York Times bestseller or winning the Putnam math competition
https://aiimpacts.org/miri-ai-predictions-dataset
79. Surveys of AI Researchers
1) In 2009, 21 AI experts participating in the AGI-09 conference were surveyed. Experts believed AGI will occur around 2050, and plausibly sooner. You can see their estimates regarding specific AI achievements: passing the Turing test, passing third grade, accomplishing Nobel-worthy scientific breakthroughs, and achieving superhuman intelligence.
2) In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based on the survey results, experts estimate that there’s a 50% chance that AGI will occur before 2060. However, there’s a significant difference of opinion based on geography: Asian respondents expect AGI in 30 years, whereas North Americans expect it in 74 years.
Some significant job functions that are expected to be automated by 2030: call center reps, truck driving, retail sales.
https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing
80. 2019 - When Will We Reach the Singularity? A Timeline Consensus from AI Researchers
https://emerj.com/ai-future-outlook/when-will-we-reach-the-singularity-a-timeline-consensus-from-ai-researchers/
81. Recap..
1. Intelligence is powerful
2. Human intelligence is a small fraction of a huge space of all possible mind designs
3. We already have AI smarter than HLI in specific domains (too many examples to list) - AI has surged past HLI there, and you can already see it outperform us
4. An AI that could self-improve could become very powerful very fast
5. Timeline consensuses from AI researchers suggest AGI before 2050 or 2060
83. 3 AI Boosters
1. Overhang resulting from:
a. algorithmic performance gains (more efficient algorithms require less computing power per unit of value)
b. or economic gains (Microsoft donating a billion $$ to OpenAI)
c. cheaper cloud infrastructure (drops in price/performance of already connected cloud)
2. Self-improving systems - a feedback loop where gains from previous iterations boost the performance of subsequent iterations
3. Economic/political will - the race to first-mover advantage
84. Hardware Overhang
● New programming methods use available computing power more efficiently.
● New / improved algorithms can exploit existing computing power far more efficiently than previous suboptimal algorithms...
○ Result: AI/AGI can run on smaller amounts of hardware. This could lead to an intelligence explosion, or to a massive increase in the number of AGIs, as they could be easily copied to run on countless computers.
● Examples:
○ In 2010, the President's Council of Advisors on Science and Technology reported on a benchmark production planning model having become faster by a factor of 43 million between 1988 and 2003 - where only a factor of roughly 1,000 was due to better hardware, while a factor of 43,000 came from algorithmic improvements.
○ Sudden advances in capability gains - DeepMind’s AlphaGo, AlphaFold, GPT-2
● Estimates of the time to reach the computing power required for whole brain emulation: ~a decade away
○ BUT: it is very unlikely that human brain algorithms are anywhere near the most computationally efficient for producing AI.
○ WHY? Our brains evolved during a natural selection process and thus weren't deliberately created with the goal of being modeled by AI.
https://wiki.lesswrong.com/wiki/Computing_overhang
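The PCAST figure quoted above decomposes multiplicatively - a quick sanity check of the arithmetic:

```python
# Overall speedup (1988-2003) = hardware factor x algorithmic factor
hardware_factor = 1_000
algorithm_factor = 43_000
overall = hardware_factor * algorithm_factor      # 43,000,000
# Algorithmic improvements account for a 43x larger share than hardware
algorithmic_dominance = algorithm_factor / hardware_factor
```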
85. Economic Gains
In 2016 AI investment was up to $39 billion, 3x the 2013 level.
See the McKinsey report, 'Artificial Intelligence: The Next Digital Frontier?'
86. AI Index 2019 Annual Report
● AI is the most popular area for computer science PhD specialization, and in 2018, 21% of graduates specialized in machine learning or AI.
● From 1998 to 2018, peer-reviewed AI research grew by 300%.
● In 2019, global private AI investment was over $70 billion, with startup investment at $37 billion, M&A at $34 billion, IPOs at $5 billion, and minority stakes at $2 billion. Autonomous vehicles led global investment in the past year ($7 billion), followed by drug and cancer research, facial recognition, video content, fraud detection, and finance.
https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
87. AI Index 2019 Annual Report
● China now publishes as many AI journal and conference papers per year as Europe, having passed the U.S. in 2006.
● More than 40% of AI conference paper citations are attributed to authors from North America, and about 1 in 3 come from East Asia.
● Singapore, Brazil, Australia, Canada and India experienced the fastest growth in AI hiring from 2015 to 2019.
● The vast majority of AI patents filed between 2014-2018 were filed in nations like the U.S. and Canada, and 94% of patents were filed in wealthy nations.
● Between 2010 and 2019, the total number of AI papers on arXiv increased 20 times.
https://venturebeat.com/2019/12/11/ai-index-2019-assesses-global-ai-research-investment-and-impact/
88. Kinetics of an Intelligence Explosion
Intelligence level: not a continuous surge upwards. ‘Crossover’ refers to
the point where the AI can handle further growth by itself.
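A toy sketch of why the curve is not a straight line (my illustration, loosely following Bostrom's rate-of-growth framing, not from the slides): model improvement rate as optimization power divided by recalcitrance, where human research effort is constant but the AI's own contribution grows with its intelligence. Once its own contribution dominates - the crossover - growth accelerates:

```python
# Toy model: dI/dt = optimization_power / recalcitrance (after Bostrom).
# Human effort is fixed; the AI's own contribution scales with its level.
def simulate(steps=60, dt=0.1, human_power=1.0, recalcitrance=1.0):
    intelligence = 1.0
    trajectory = [intelligence]
    for _ in range(steps):
        ai_power = 0.5 * intelligence  # grows as the system improves
        optimization_power = human_power + ai_power
        intelligence += dt * optimization_power / recalcitrance
        trajectory.append(intelligence)
    return trajectory

traj = simulate()
growth = [b - a for a, b in zip(traj, traj[1:])]
# per-step growth itself keeps increasing: a surge, not a steady climb
```

The coefficients here are arbitrary; the qualitative shape (accelerating per-step gains) is the point.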
90. Bill Gates
"Reverse Engineering the Brain is Within
Reach" -“I am in the camp that is concerned
about super intelligence,” Bill Gates said
during an “Ask Me Anything” session on
Reddit. “First, the machines will do a lot of jobs
for us and not be super intelligent. That should
be positive if we manage it well. A few
decades after that, though, the intelligence is
strong enough to be a concern. I agree with
Elon Musk and some others on this and don’t
understand why some people are not
concerned.”
91. Stuart Russell
If human beings are losing every
time, it doesn't matter whether
they're losing to a conscious
machine or a completely non-conscious
machine, they still lost.
The singularity is about the quality of
decision-making, which is not
consciousness at all.
92. ‘Expert’ predictions of Never...
(On Nuclear physics) “The consensus view as expressed by Ernest
Rutherford on September 11th, 1933, was that it would never be
possible to extract atomic energy from atoms. So, his prediction was
‘never,’ but what turned out to be the case was that the next morning
Leo Szilard read Rutherford’s speech, became annoyed by it, and
invented a nuclear chain reaction mediated by neutrons!
Rutherford’s prediction was ‘never’ and the truth was about 16 hours
later. In a similar way, it feels quite futile for me to make a
quantitative prediction about when these breakthroughs in AGI will
arrive.” - Stuart Russell, co-author of the foundational textbook on AI
94. AI Capability: Optimal
● Tic-Tac-Toe
● Connect Four: 1988
● Checkers (aka 8x8 draughts): Weakly solved (2007)
● Rubik's Cube: Mostly solved (2010)
● Heads-up limit hold'em poker: Statistically optimal in the sense that "a
human lifetime of play is not sufficient to establish with statistical
significance that the strategy is not an exact solution" (2015) - see
https://www.deepstack.ai/
These search spaces are very small - though heads-up limit hold’em poker is interesting: like most
other neural-network-style AI, the majority of the behaviour was learned via self-play in a
reinforcement learning environment
https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence#Current_performance
95. AI Capability: Super-human
● Othello (aka reversi): c. 1997
● Scrabble: 2006
● Backgammon: c. 1995-2002
● Chess: Supercomputer (c. 1997); Personal computer (c.
2006); Mobile phone (c. 2009); Computer defeats human +
computer (c. 2017)
● Jeopardy!: Question answering, although the machine did not
use speech recognition (2011)
● Shogi: c. 2017
● Arimaa: 2015
● Go: 2017 <- AlphaGo; its successor AlphaGo Zero learned how to
play from scratch
● Heads-up no-limit hold'em poker: 2017
State of play in AI progress - some areas are already super-human.
Board games and some computer games are well-defined
problems, so competence against them is easier to measure.
96. AI Capability: Par-human
● Optical character recognition for ISO 1073-1:1976 and similar special characters.
● Classification of images
● Handwriting recognition
AI Capability: High-human
● Crosswords: c. 2012
● Dota 2: 2018
● Bridge card-playing: According to a 2009 review, "the best programs are
attaining expert status as (bridge) card players", excluding bidding.
● StarCraft II: 2019 <- AlphaStar at grandmaster level
AlphaGo vs AlphaStar - self-play learning
Wider search spaces - Dota 2 and StarCraft involve incomplete
information and long-term planning. AlphaStar makes use of
‘autoregressive policies’.
97. AlphaZero / AlphaStar
If you were a blank slate, and were given a Go board, and were told you had 24 hours to
teach yourself to become the best Go player on the planet, and you did exactly that, and
then you did the same thing with Chess and Shogi (which is more complex than Chess), and
later StarCraft II, and then protein folding - wouldn't you say that proved you had the ability
to adapt to novel situations? Well, AlphaZero and its DeepMind siblings did all that, and they
didn't use brute force to do it either.
Stockfish is another chess program, but unlike AlphaZero it didn't teach itself - humans
hand-crafted its knowledge. Stockfish could easily beat any human player, but it couldn't beat
AlphaZero, despite the fact that Stockfish was running on a faster computer and could
evaluate 70,000,000 positions a second while AlphaZero's much smaller computer could only
do 80,000.
And AI is becoming less narrow and brittle every day. Go is played on a 19 by 19 grid, but if
you changed it to 20 by 20 or 18 by 18 and gave it another 24 hours to teach itself, AlphaZero
would be the best player in the world at that new game too - without any human help.
AlphaZero is not infinitely adaptable, but then humans aren't either.
99. AI Capability: Sub-human
● Optical character recognition for printed text (nearing par-human for Latin-script typewritten text)
● Object recognition
● Facial recognition: Low to mid human accuracy (as of 2014)
● Visual question answering, such as the VQA 1.0
● Various robotics tasks that may require advances in robot hardware as well as AI, including:
○ Stable bipedal locomotion: Bipedal robots can walk, but are less stable than human walkers
(as of 2017)
○ Humanoid soccer
● Speech recognition: "nearly equal to human performance" (2017)
● Explainability. Current medical systems can diagnose certain medical conditions well, but cannot
explain to users why they made the diagnosis.
● Stock market prediction: Financial data collection and processing using Machine Learning
algorithms
● Various tasks that are difficult to solve without contextual knowledge, including:
○ Translation
○ Word-sense disambiguation
○ Natural language processing
Though AI is classed as par- or sub-human on these tasks, for most
of them AI capability is improving far faster than human capability can.
100. https://talktotransformer.com
OpenAI’s GPT-2
Better Language Models
and Their Implications
“We’ve trained a large-scale unsupervised language
model which generates coherent paragraphs of text,
achieves state-of-the-art performance on many
language modeling benchmarks, and performs
rudimentary reading comprehension, machine
translation, question answering, and
summarization—all without task-specific training.”
Models can be used to generate data by successively
guessing what will come next, feeding in a guess as
input and guessing again. Language models, where
each word is predicted from the words before it, are
perhaps the best known example: these models power
the text predictions that pop up on some email and
messaging apps. Recent advances in language
modelling have enabled the generation of strikingly
plausible passages, such as OpenAI’s GPT-2.
Below is an example from a cut-down version of GPT-2.
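To make "each word is predicted from the words before it" concrete, here is a minimal sketch (my illustration, not OpenAI's code) in which a bigram count table stands in for the neural network - the generation loop itself is the same autoregressive predict-append-repeat described above:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6):
    """Autoregressive generation: feed each guess back in as input."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # greedy decoding
    return " ".join(words)

print(generate("the"))
```

GPT-2 replaces the bigram table with a large transformer and greedy decoding with sampling, but the loop - predict the next token, append it, predict again - is unchanged.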
101. GPT-3 / GPT-2
Fire your dungeon master -
let the AI create your
fantasy adventure while
you play it!
https://aidungeon.io
103. 'There's a Wide-Open Horizon of Possibility.' Musicians Are
Using AI to Create Otherwise Impossible New Songs
https://time.com/5774723/ai-music/
104. Recap..
1. Intelligence is powerful
2. Human intelligence is a small fraction of the space of all possible
mind designs
3. An AI that could self-improve could become very powerful very fast
4. Look for trends as well as inside views to help gauge likelihood
5. Some AI is becoming more general (moving toward AGI)
○ Unsupervised learning
○ Transfer learning
6. Experts in AI research think it is likely that AGI could be developed
by mid-century
7. We already have AI smarter than human-level intelligence (HLI) -
too many examples to list
○ AI has surged past HLI in specific domains
113. Accelerating returns - a shortening of timescales between salient
events
[Chart: events expressed as Time before Present (years) on the X axis
and Time to Next Event (years) on the Y axis, logarithmic plot,
beginning with early life ~4 billion years ago.]
114. Evolutionary History of Mind
Daniel Dennett's Tower of Generate and Test
● "Darwinian creatures", which were simply selected by trial and error on the merits of
their bodies' ability to survive; their "thought" processes being entirely genetic.
● "Skinnerian creatures", which were also capable of independent action and therefore
could enhance their chances of survival by finding the best action (conditioning
overcame the genetic trial and error of Darwinian creatures); phenotypic plasticity; a bodily
tribunal of evolved, though sometimes outdated, wisdom.
● "Popperian creatures", which can play out an action internally in a simulated environment
before performing it in the real environment and can therefore reduce the chances of
negative effects - this allows "our hypotheses to die in our stead" - Popper.
● "Gregorian creatures", which are tool-enabled, in particular mastering the tool of
language - using tools (e.g. words) permits learning from others; the possession of learned
predispositions to carry out particular actions.
115. Evolutionary History of Mind
●To add to Dennett's metaphor: we are building a new
floor to the tower
●"Turingian creatures", which create tool-enabled
tool-enablers - in particular, autonomous
agents that create even greater tools (e.g. minds with
further-optimised cognitive abilities) - permitting insightful
engineering of increasingly insightful agents,
resulting in an Intelligence Explosion.
116. References
● "Kinds of Minds" - Daniel Dennett:
http://www.amazon.com/Kinds-Minds-Understanding-Consciousness-Science/dp/0465073514
● Cascades, Cycles, Insight...: http://lesswrong.com/lw/w5/cascades_cycles_insight
● Recursive Self Improvement: http://lesswrong.com/lw/we/recursive_selfimprovement
● Terry Sejnowski : http://www.scientificamerican.com/article.cfm?id=when-build-brains-like-ours
● Nanotech : http://e-drexler.com
● Justin Rattner - Intel touts progress towards intelligent computers:
http://news.cnet.com/8301-1001_3-10023055-92.html#ixzz1G9Iy1cXe
● Bill Gates speaking about AI Risk on Reddit:
https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third
● Superintelligence: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
● Smarter Than Us: http://lesswrong.com/lw/jrb/smarter_than_us_is_out