Deep learning techniques have achieved great success in recent years, but the resulting systems still lack general intelligence. Several approaches may lead to artificial general intelligence (AGI):
1. Scaling up current deep learning methods, such as supervised learning on vast amounts of labeled data or unsupervised learning with huge generative models. However, it is unclear whether these narrow techniques can develop general intelligence without further changes.
2. Formal approaches like AIXI aim to mathematically define optimal intelligent behavior, but suffer from computational intractability in practice.
3. Brain simulation attempts to reverse engineer the human brain, but faces challenges in modeling its high-dimensional state and dynamics at an appropriate level of abstraction.
4. Artificial life approaches evolve populations of agents in simulated environments, in the hope that open-ended evolution gives rise to increasingly general behavior; to date, however, such systems have produced only simple behaviors.
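The intractability of formal approaches like AIXI (point 2) is easy to make concrete. AIXI builds on Solomonoff induction, which weights every candidate program for the observed data by a prior of 2^-length, so the space to enumerate doubles with each extra bit of program length. A toy sketch (bit-string "programs" stand in for real programs; this is an illustration of the blow-up, not an AIXI implementation):

```python
from itertools import product

def enumerate_programs(max_len):
    """Enumerate every bit-string "program" up to max_len bits and
    assign it the Solomonoff-style prior weight 2**-length."""
    programs = []
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            programs.append(("".join(bits), 2.0 ** -n))
    return programs

# The candidate space doubles with every extra bit of program length:
# 2 + 4 + ... + 2**10 = 2**11 - 2 candidates for programs of just 10 bits.
progs = enumerate_programs(10)
print(len(progs))  # 2046
```

Even at ten bits there are over two thousand candidates to weigh; realistic programs are thousands of bits long, which is why exact AIXI is uncomputable in practice and only bounded approximations exist.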
Check out how IBM is fostering a sustainable culture of design at IBM.
You will learn about the IBM Design heritage and how a bootstrap team refactored IBM Design in 2013 with the mission of creating a design culture.
You will also learn about the Core77 Award-winning IBM Design Education + Activation program, which is at the core of scaling design throughout a 430,000-employee company.
Eszter Debreczeni: The Future of Work and the Augmented Enterprise: How to prepare together to thrive in the age of Artificial Intelligence? (Edunomica)
People Analytics Conference 2022 Winter
Website: https://pacamp.org
Youtube: https://www.youtube.com/channel/UCeHtPZ_ZLZ-nHFMUCXY81RQ
FB: https://www.facebook.com/pacamporg
Artificial Intelligence is back, Deep Learning Networks and Quantum possibili... (John Mathon)
AI has gone through a number of mini boom-and-bust periods. The current one may be short-lived as well, but I have reasons to think AI is finally making sustained progress that will find its way into mainstream technology.
"You Can Do It" by Louis Monier (Altavista Co-Founder & CTO) & Gregory Renard (CTO & Artificial Intelligence Lead Architect at Xbrain) for Deep Learning keynote #0 at Holberton School (http://www.meetup.com/Holberton-School/events/228364522/)
If you want to attend similar keynotes for free, check out http://www.meetup.com/Holberton-School/
This document provides an introduction to artificial intelligence and discusses key concepts in the field. It explores what constitutes an intelligent system, references important milestones in AI like the Turing Test, and examines examples of AI applications such as chess-playing systems and pattern recognition. The document also discusses techniques used in AI research, including artificial neural networks and fuzzy logic. It poses questions about the capabilities and limitations of machine intelligence and investigates approaches to developing intelligent systems.
This document provides an introduction to the CS 188: Artificial Intelligence course at UC Berkeley. It discusses key topics that will be covered in the course, including rational decision making, computational rationality, a brief history of AI, current capabilities in areas like natural language processing, computer vision, robotics, and game playing. The course will cover general techniques for designing rational agents and making decisions under uncertainty, with applications to domains like language, vision, games, and more. Students will learn how to apply existing AI techniques to new problem types.
This presentation gives an introduction to the history of Artificial Intelligence and the subjective debates around it. Its primary goal is to provide a deep enough understanding of Artificial Narrow Intelligence and Artificial General Intelligence that people can appreciate the strengths and weaknesses of AI. The presentation also includes a classification (the main domains of AI) and the most relevant examples from the past decades. The second part provides some statistics, possible future applications, and forecasts.
Artificial Intelligence or the Brainization of the Economy (Willy Braun)
Sixty years ago, John McCarthy used the term “Artificial Intelligence” for the first time. What does it mean, and how has it evolved since 1956?
This is what daphni tried to answer in this in-depth report about AI. We’ve interviewed some of the brightest minds in the field: Bruno Maisonnier (founder of Aldebaran robotics), Massimiliano Versaca (CEO Neurala), Alexandre Lebrun (co-founder of wit.ai), Luc Julia (VP Innovation Samsung).
By Paul Bazin and Pierre-Eric Leibovici
The document provides an overview of deep learning, including its past, present, and future. It discusses the concepts of artificial general intelligence, artificial superintelligence, and predictions about their development from experts like Hawking, Musk, and Gates. Key deep learning topics are summarized, such as neural networks, machine learning approaches, important algorithms and researchers, and how deep learning works.
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/sept-2017-alliance-vitf-samek
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Dr. Wojciech Samek of the Fraunhofer Heinrich Hertz Institute delivers the presentation "Methods for Understanding How Deep Neural Networks Work" at the Embedded Vision Alliance's September 2017 Vision Industry and Technology Forum. In his presentation, Dr. Samek covers the following topics:
▪ Unbeatable AI systems
▪ Deep neural network overview
▪ Opening the "black box"
▪ Summary
The document provides an introduction to knowledge graphs. It discusses how knowledge graphs are being used by large enterprises and intelligent agents to capture concepts, entities, and relationships within domains to drive business, generate insights, and enhance relationships. The presentation will cover an overview of what knowledge graphs are, who uses them, why they are used, and how to use them. It then provides some examples of how knowledge graphs are applied, including in intelligent agents, semantic web, search engines, social networks, biology, enterprise knowledge management, and more.
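At its simplest, a knowledge graph of the kind described above is a set of subject-predicate-object triples plus pattern-matching queries over them. A minimal sketch (the entities and predicates below are invented for illustration):

```python
# A knowledge graph reduced to its core data structure:
# a set of (subject, predicate, object) triples.
triples = {
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [(a, b, c) for (a, b, c) in triples
            if (s is None or a == s)
            and (p is None or b == p)
            and (o is None or c == o)]

print(query(p="designed"))  # [('Charles Babbage', 'designed', 'Analytical Engine')]
```

Real systems add indexes, ontologies, and query languages such as SPARQL on top, but the triple pattern-matching shown here is the conceptual core.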
The Unreasonable Benefits of Deep Learning (indico data)
Dan Kuster led a talk at Sentiment Analysis Symposium discussing why businesses should consider adopting deep learning solutions. Key takeaways include simplicity, accuracy, flexibility, and some hacks for working with the tech.
About the Session:
Machine learning is becoming the tool of choice for analyzing text and image data. While traditional text processing solutions rely on the ability of experts to encode domain knowledge, machine learning models learn this directly from the data. Deep learning is a branch of machine learning that, like the human brain, quickly learns hierarchical representations of concepts, and it has been key to unlocking state-of-the-art results on a range of text and image classification tasks such as sentiment analysis.
In this session, we will show the impact of a deep-learning-based approach over traditional NLP and machine-learning-based methods for text analysis across key dimensions such as accuracy, flexibility, and the amount of required training data. Specifically, we will discuss how deep learning models are now setting records for state-of-the-art accuracy in sentiment analysis. We will also demonstrate the flexibility of this approach by showing how the features learned by one model can easily be reused in different domains (e.g., handling additional languages, or predicting new categories) to drastically reduce time to deployment. Finally, we will touch on the ability of this method to handle additional types of data beyond text, e.g., images, for maximum insight.
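The feature-reuse idea described in that session can be sketched without any deep learning framework: keep a "pretrained" feature extractor frozen and retrain only a small linear head on new-domain data. The toy task, data, and perceptron update below are purely illustrative, not the session's actual method:

```python
def features(x):
    """Stand-in for frozen pretrained layers: fixed, never updated."""
    return [x, x * x, 1.0]

def train_head(samples, epochs=100, lr=0.1):
    """Train only the linear head's weights on top of frozen features."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            f = features(x)
            pred = 1.0 if sum(wi * fi for wi, fi in zip(w, f)) > 0 else 0.0
            err = y - pred  # perceptron-style update on the head only
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
    return w

# "New domain" task: label 1 when |x| > 1, which is linearly
# separable in the frozen feature space [x, x^2, 1].
data = [(-2, 1), (-0.5, 0), (0.3, 0), (1.5, 1)]
w = train_head(data)
predict = lambda x: 1 if sum(wi * fi for wi, fi in zip(w, features(x))) > 0 else 0
print([predict(x) for x, _ in data])  # [1, 0, 0, 1]
```

Because only the head's three weights are trained, adapting to a new domain needs far less data and time than retraining the whole model, which is the point the session makes about reduced time to deployment.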
New Artificial Intelligence that can predict Human Actions (Shreya Shetty)
The document discusses recent breakthroughs in developing artificial intelligence that can predict human actions. Scientists created an algorithm using videos from YouTube and TV shows that can predict whether two people will hug, kiss, or shake hands with over 43% accuracy. The algorithm employs deep learning techniques to analyze patterns in massive amounts of video data to generate predictions of future actions and objects. This new capability for predictive vision in AI exceeds the accuracy of previous systems.
Introduction to the Artificial Intelligence and Computer Vision revolution (Darian Frajberg)
Deep learning and computer vision have revolutionized artificial intelligence. Deep learning uses artificial neural networks inspired by the human brain to learn from large amounts of data without being explicitly programmed. Computer vision gives computers the ability to understand digital images and videos. Key breakthroughs include AlexNet achieving unprecedented accuracy on ImageNet in 2012, demonstrating the power of deep convolutional neural networks for computer vision tasks. Challenges remain around ensuring AI systems are beneficial to society, avoiding data biases, and increasing transparency.
Deep Water - Bringing TensorFlow, Caffe, MXNet to H2O (Sri Ambati)
Arno Candel introduces Deep Water, which brings Tensorflow, Caffe, Mxnet to H2O. It also brings support for GPUs, image classification, NLP and much more to H2O.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
The document discusses the founders of Joost and their history of successful startups. It then covers various topics related to cognitive technology, including the brain's processing of information through invariant representations and analogies rather than as a computer. The brain predicts memories to generate expectations and understands through anticipation. The document argues we are close to understanding the organizing principles of the human mind and are now in a race for developing cognitive technology applications, though the work will be challenging. Entrepreneurship opportunities exist in Brazil for this field.
Artificial Intelligence: the discipline that attempts to build entities that perceive, understand, predict, and manipulate the world around us, a world manifestly much larger than ourselves.
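That definition maps directly onto the classic agent loop: perceive the state, predict the effect of each available action, and act to manipulate the world toward a goal. A toy sketch (the one-dimensional environment and all names are hypothetical, invented only to make the loop concrete):

```python
def perceive(env):
    """Read the agent's observation from the environment."""
    return env["position"]

def predict(state, action):
    """The agent's internal model of how the world responds to an action."""
    return state + action

def act(env, goal):
    """One perceive-predict-manipulate cycle toward the goal."""
    state = perceive(env)
    # Choose the action whose predicted outcome is closest to the goal.
    best = min((-1, 0, 1), key=lambda a: abs(goal - predict(state, a)))
    env["position"] = predict(state, best)
    return env["position"]

env = {"position": 0}
for _ in range(3):
    act(env, goal=3)
print(env["position"])  # moves one step per cycle: reaches 3
```

Everything from thermostats to chess engines fits this loop; what varies is the richness of the perception, the accuracy of the predictive model, and the space of actions.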
by Samantha Adams, Met Office.
Originally purely academic research fields, Machine Learning and AI are now definitely mainstream and frequently mentioned in the Tech media (and regular media too).
We’ve also seen the explosion of Data Science, which encompasses these fields and more. There are a lot of interesting things going on, and a lot of positive as well as negative hype. The terms ML and AI are often used interchangeably, and techniques are often described as being inspired by the brain.
In this talk I will explore the history and evolution of these fields, current progress, and the challenges in making artificial brains.
From the FreshTech 2017 conference by TechExeter
www.techexeter.uk
Computers are plain stupid (but that's just common sense). (Pim Nauts)
I was invited to give a guest lecture on my thesis topic for fellow students in June 2008. The presentation was prepared well before the actual game went online.
The document provides an introduction to artificial intelligence (AI), including a brief history and the four phases of its development. It discusses what AI is, how it works by collecting and processing data through machine learning algorithms to make inferences. The key domains of AI are described as natural language processing, computer vision, speech recognition, and data. The types of AI are defined based on capabilities as artificial narrow intelligence, artificial general intelligence, and potential future artificial super intelligence. Related fields like machine learning, neural networks, data science, expert systems, and robotics are also outlined. Advantages, disadvantages, relevance to daily life, future possibilities, ethical concerns are presented at a high level.
Deep Learning - The Past, Present and Future of Artificial Intelligence (Lukas Masuch)
The document provides an overview of deep learning, including its history, key concepts, applications, and recent advances. It discusses the evolution of deep learning techniques like convolutional neural networks, recurrent neural networks, generative adversarial networks, and their applications in computer vision, natural language processing, and games. Examples include deep learning for image recognition, generation, segmentation, captioning, and more.
The document discusses the history and various approaches to artificial intelligence, including neural networks, expert systems, and genetic programming. It also examines applications such as speech recognition, game playing, and pattern recognition. Additionally, it addresses potential dangers of advanced AI, such as androids displacing human jobs or nanomachines achieving superintelligent computing power. The document concludes by considering whether developing powerful AI technologies is something researchers "should" pursue.
Artificial Intelligence for Undergrads is a textbook by J. Berengueres that introduces key concepts in artificial intelligence. It covers topics like spell checking algorithms, machine translation, game playing, and Monte Carlo tree search. The book also discusses early pioneers in AI like Marco Dorigo and his work on ant colony optimization algorithms. It aims to explain complex AI concepts in a simple way for undergraduate students new to the field.
This document provides biographical information about Şaban Dalaman and summaries of key concepts in artificial intelligence and machine learning. It summarizes Şaban Dalaman's educational and professional background, then discusses Alan Turing's universal machine concept, the 1956 Dartmouth workshop proposal that helped define the field of AI, and definitions of AI, machine learning, deep learning, and data science. It also lists different tribes and algorithms within machine learning.
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
6. CS231n: Convolutional Neural Networks for Visual Recognition
(Stanford Class)
2015: 150 students
2016: 330 students
2017: 750 students
2018: ??? (max students per class is capped at 999)
11. Convenient properties of Go:
1. Deterministic. No noise in the game.
2. Fully observed. Each player has complete information.
3. Discrete action space. Finite number of actions possible.
4. Perfect simulator. The effect of any action is known exactly.
5. Short episodes. ~200 actions per game.
6. Clear + fast evaluation. According to Go rules.
7. Huge dataset available. Human vs human games.
12. Q: “Can we run AlphaGo on a robot for the Amazon Picking Challenge”?
13. Q: “Can we run AlphaGo on a robot for the Amazon Picking Challenge”?
A:
14. 1. Deterministic. No noise in the game.
2. Fully observed. Each player has complete information.
3. Discrete action space. Finite number of actions possible.
4. Perfect simulator. The effect of any action is known exactly.
5. Short episodes. ~200 actions per game.
6. Clear + fast evaluation. According to Go rules.
7. Huge dataset available. Human vs human games.
15. 1. Deterministic. No noise in the game. (OK)
2. Fully observed. Each player has complete information. (OKish)
3. Discrete action space. Finite number of actions possible. (OK)
4. Perfect simulator. The effect of any action is known exactly. (TROUBLE.)
5. Short episodes. ~200 actions per game. (challenge)
6. Clear + fast evaluation. According to Go rules. (challenge)
7. Huge dataset available. Human vs human games. (not good)
16. Summary so far:
1. Huge increase in interest in AI
2. AI is still narrow
3. AI tech works in some cases and can be repurposed
17. “What if we succeed in making it not narrow?”
Nick Bostrom
Stephen Hawking
Bill Gates
Elon Musk
Sam Altman
Stuart Russell
Eliezer Yudkowsky
...
~2014+
20. “AGI imminent.”
Meanwhile, in Academia...
“Oh no, AI winter imminent. My funding is about to dry up again.”
21. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
22. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
40. The low-level gestalt is right, but the high-level,
long-term structure is missing. This is mitigated
with more data / larger models.
41. AIs in this approach…
- Imitate/generate human-like actions
- Can these AIs be creative?
- Can they assemble a room of chairs/tables?
- Can they make human domination schemes?
42. AIs in this approach…
- Imitate/generate human-like actions
- Can these AIs be creative? (Kind of)
- Can they assemble a room of chairs/tables? (Yes)
- Can they make human domination schemes? (No.)
43. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
44. Unsupervised Learning: Big generative models.
1. Initialize a Big Neural Network
2. Train it to compress a huge amount of
data on the internet
3. ???
4. Profit
45. Example 2: (variational) autoencoders
Also see: autoregressive models, Generative Adversarial Networks, etc.
[Diagram: a network trained to be the identity function, with an information bottleneck of 30 numbers: it must compress the data to 30 numbers to reconstruct it later.]
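The bottleneck idea can be sketched in code. A minimal sketch, assuming a *linear* autoencoder (whose optimum has a closed form via PCA/SVD) in place of the deep variational autoencoders the slide is about; the synthetic data and the bottleneck width 3 are invented for illustration:

```python
# Toy bottleneck: 10-D data that secretly lives on a 3-D subspace can be
# compressed to 3 numbers and reconstructed almost perfectly.
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 10))
X = rng.normal(size=(200, 3)) @ basis      # 200 points, 10-D, intrinsic dim 3

k = 3                                      # the "information bottleneck" width
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ Vt[:k].T           # compress: 10 numbers -> k numbers

def decode(z):
    return z @ Vt[:k] + mean               # reconstruct the input from k numbers

mse = float(np.mean((decode(encode(X)) - X) ** 2))
print(mse)   # ~0: the identity function survives the 3-number bottleneck
```

With k smaller than the intrinsic dimension the reconstruction error becomes nonzero, which is exactly the pressure that forces the network to learn a compressed representation.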
47. Work at OpenAI: “Unsupervised Sentiment Neuron”
(Alec Radford et al.)
Another example:
1. Train a large char-rnn on a large corpus of unlabeled reviews from Amazon
2. One of the neurons automagically “discovers” a small sentiment classifier (this
high-level feature must help predict the next character)
(char-rnn also optimizes compression of data; prediction and compression are closely linked.)
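The prediction/compression link in that last parenthetical can be made concrete. A small sketch, assuming a character bigram/trigram count model as a stand-in for the char-rnn (the corpus is invented); the average -log2 probability is the number of bits per character an arithmetic coder driven by that model would need:

```python
# Better next-character prediction == fewer bits per character.
import math
from collections import Counter

text = "good book . good read . bad book . good book . " * 50

def bits_per_char(text, context):
    """Average -log2 p(next char | previous `context` chars), with counts
    fitted on the text itself (i.e. a two-pass compressor)."""
    pair, ctx = Counter(), Counter()
    for i in range(context, len(text)):
        c = text[i - context:i]
        pair[(c, text[i])] += 1
        ctx[c] += 1
    total = sum(-math.log2(pair[(text[i - context:i], text[i])] /
                           ctx[text[i - context:i]])
                for i in range(context, len(text)))
    return total / (len(text) - context)

no_model = math.log2(len(set(text)))       # fixed-length code, no prediction
print(no_model, bits_per_char(text, 1), bits_per_char(text, 2))
```

Each extra character of context improves prediction and therefore shrinks the code; a char-rnn is the same idea with a learned, much longer effective context.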
49. What would this AI look like?
- The neural network has a powerful “brain state”:
- Given any input data, we could get e.g. 10,000 numbers of the network’s “thoughts” about the data.
- Given any vector of 10,000 numbers, we
could maybe ask the network to generate
samples of data that correspond.
- Does it want to take over the world? (no; has no
agency, no planning, etc.)
50. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
51. AIXI
- Algorithmic information theory applied to general artificial
intelligence. (Marcus Hutter)
- Allows for a formal definition of “Universal Intelligence”
(Shane Legg)
- Bayesian Reinforcement Learning agent over the
hypothesis space of all Turing machines.
52. [Two plots over the space of Turing machines:]
- Prior probability P: “simpler worlds” are more likely.
- Likelihood probability P: which TMs are consistent with my experience so far?
System identification: which Turing machine am I in? If I knew, I could plan perfectly.
Multiply vertically to get a posterior.
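The recipe on this slide (prior times likelihood, multiplied "vertically" over the hypothesis space) can be run for a toy hypothesis class. An assumption throughout: the space of all Turing machines is replaced by tiny enumerable "programs" that repeat a fixed bit pattern, with prior 2^-(pattern length) standing in for 2^-(description length):

```python
# Bayesian mixture over a toy "space of programs".
from itertools import product

def hypotheses(max_len=4):
    """Every program: repeat a fixed bit pattern forever."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def run(pattern, steps):
    return "".join(pattern[i % len(pattern)] for i in range(steps))

def posterior(observed):
    # prior: simpler worlds (shorter patterns) are more likely;
    # likelihood: 1 if the program reproduces the experience so far, else 0.
    scores = {h: 2.0 ** -len(h) for h in hypotheses()
              if run(h, len(observed)) == observed}
    z = sum(scores.values())               # normalize the posterior
    return {h: s / z for h, s in scores.items()}

post = posterior("0101")
print(max(post, key=post.get), post)       # "01" beats "0101" via its prior
```

Both "01" and "0101" explain the observations perfectly; the shorter program wins on prior mass (0.8 vs 0.2), which is the "simpler worlds are more likely" half of the slide.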
53. We can write down the optimal agent’s action at time t
(from http://www.vetta.org/documents/Machine_Super_Intelligence.pdf)
54. Reading the equation term by term:
- the complete history of interactions up to this point (time t)
- from time t out to the horizon at time m: all possible future action-state sequences
- a weighted average of the total discounted reward, across all possible Turing Machines
- the weights are [prior] x [likelihood] for each Turing machine, where the prior comes from the description length of the TM (number of bits)
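The equation these annotations describe appears only as an image in the slides; as given in the cited Machine Super Intelligence (Legg), the AIXI action at time t is:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \big( r_t + \cdots + r_m \big)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, the inner sum is the [prior] x [likelihood] weight (2^{-\ell(q)} summed over programs q consistent with the interaction history), and the alternating max/sum expectimax runs from time t to the horizon m.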
57. Attempts have been made...
I like “A Monte-Carlo AIXI Approximation” from Veness et al. 2011,
https://www.aaai.org/Papers/JAIR/Vol40/JAIR-4004.pdf
58. What would this agent look like?
- We need to feed it a reward signal. Might be very hard to write
down. Might lead to “perverse instantiations” (e.g. paper clip
maximizers etc.)
- Or maybe humans have a dial that gives the reward. But its
actions might not be fully observable to humans.
- Very computationally intractable. Also, people are really not
good at writing complex code. (e.g. for “AIXI approximation”).
- This agent could be quite scary. Definitely has agency.
59. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
61. Brain simulation
- How to measure a complete brain state?
- At what level of abstraction?
- How to model the dynamics?
- How do you simulate the “environment” to
feed into senses?
- Various ethical dilemmas
- Timescale-bearish neuroscientists.
62. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
64. We don’t have to redo 4B years of evolution.
- Work at a higher level of abstraction. We don’t have to
simulate chemistry etc. to get intelligent networks.
- Intelligent design. We can meddle with the system and
initialize with RL agents, etc.
65. Intelligence is the ability to win, in the face of world dynamics
and a changing population of other intelligent agents with
similar goals.
66. Intelligence: the “cognitive toolkit” includes but is not limited to:
● Attention: the at-will ability to selectively "filter out" parts of the input that are judged not to be relevant to a current top-down goal, e.g. the "cocktail party effect".
● Working memory: structures/processes that temporarily store and manipulate information (7 +/- 2 items). Related: the phonological loop, a special part of working memory dedicated to storing a few seconds of sound (e.g. when you repeat a 7-digit phone number in your mind to keep it in memory); also the visuospatial sketchpad and an episodic buffer.
● Long-term memory of quite a few suspected different types: procedural memory (e.g. driving a car), semantic memory (e.g. the name of the current President), episodic memory (for autobiographical sequences of events, e.g. where one was during 9/11).
● Knowledge representation: the ability to rapidly learn and incorporate facts into some "world model" that can be inferred over in what looks to be approximately Bayesian ways; the ability to detect and resolve contradictions, or propose experiments that disambiguate cases; the ability to keep track of what source provided a piece of information and later down-weigh its confidence if the source is suddenly judged not trustworthy.
● Spatial reasoning: some crude "game engine" model of a scene and its objects and attributes, with all the complex built-in biases that only get properly revealed by optical illusions. Spatial memory: cells in the brain that keep track of the connectivity of the world and do something like an automatic "SLAM", putting together information from different senses to position the brain in the world.
● Reasoning by analogy, e.g. applying a proverb such as "that’s locking the barn door after the horse has gone" to a current situation.
● Emotions: heuristics that make our genes more likely to spread, e.g. frustration.
● A forward simulator, which lets us roll forward and consider abstract events and situations.
● Various skill-acquisition heuristics: practicing something repeatedly, including the abstract idea of "resetting" an experiment, deciding when an experiment is finished, or what its outcomes were; the heuristic inclination for "fun", experimentation, and curiosity; the heuristic of empowerment, i.e. that it is better to take actions that leave more options available in the future.
● Consciousness / theory of mind: the understanding that other agents are like me but also slightly different in unknown ways; empathy (e.g. the cringy feeling when seeing someone else get hurt); imitation learning, the heuristic of paying attention to, and later repeating, what the other agents are doing.
67. Conclusion: we need to create environments that
incentivize the emergence of a cognitive toolkit.
68. Conclusion: we need to create environments that incentivize the emergence of a cognitive toolkit.
Doing it wrong: incentivizes a lookup table of correct moves.
69. Conclusion: we need to create environments that incentivize the emergence of a cognitive toolkit.
Doing it wrong: incentivizes a lookup table of correct moves.
Doing it right: incentivizes cognitive tools.
70. Benefits of multi-agent environments:
- variety - the environment is parameterized by its agent
population, so an optimal strategy must be dynamically
derived, and cannot be statically “baked” as behaviors /
reflexes into a network.
- natural curriculum - the difficulty of the environment is
determined by the skill of the other agents.
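The "variety" bullet above can be sketched in a few lines. Everything here is invented for illustration (rock-paper-scissors as the environment, a frequency-counting best responder as the adaptive agent): a strategy statically "baked in" as a reflex is exploited by any agent that derives its strategy from the population it actually faces.

```python
# A reflex agent vs. an agent that adapts to the observed population.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def static_agent(history):
    return "rock"                          # reflex "baked" into the agent

def adaptive_agent(history):
    # Best-respond to the opponent's empirical play so far.
    if not history:
        return "rock"
    opp = [a for a, b in history]          # opponent = the static agent
    return BEATS[max(set(opp), key=opp.count)]

history = []
wins = 0
for _ in range(100):
    a, b = static_agent(history), adaptive_agent(history)
    history.append((a, b))
    wins += BEATS[a] == b                  # adaptive agent wins this round
print(wins)   # 99: a tie in round one, then exploitation forever
```

The reflex may have been optimal against some past population, but because the environment is parameterized by its current agents, only the dynamically derived strategy keeps winning.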
71. Why? Trends.
Q: What about the optimization?
A: Optimize over the whole thing: the architecture, the initialization, the learning rule. Write very little (or no) explicit code.
[Figure: example small tensorflow graph]
72. In Computer Vision...
[Scatter chart: datasets (how large they are) vs. models (how well they work), with a "possibility frontier" and a zone of "not going to happen":]
- 70s - 90s: Lena (10^0; single image); Hard Coded (edge detection etc., no learning)
- 90s - 2012: Caltech 101 (~10^4 images), Pascal VOC (~10^5 images); Image Features (SIFT etc., learning linear classifiers on top)
- 2013: ImageNet (~10^6 images); ConvNets (learn the features, structure hard-coded)
- 2017: Google/FB, images on the web (~10^9+ images)
- projection: CodeGen (learn the weights and the structure)
73. In Reinforcement Learning...
[Scatter chart: environments (how much they measure / incentivise general intelligence; more multi-agent / non-stationary / real-world-like) vs. agents (how impressive they are; more learning, more compute), with a "possibility frontier" and a zone of "not going to happen":]
- 70s - 90s: BlocksWorld (SHRDLU etc); Hard Coded (LISP programs, no learning)
- 90s - 2012: Cartpole etc. (and bandits, gridworld, ...few toy tasks); Value Iteration etc. (~discrete MDPs, linear function approximators)
- 2013: MuJoCo/ATARI/Universe (~few dozen envs); DQN, PG (deep nets, hard-coded various tricks)
- 2017: simple multi-agent envs; RL^2 (learn the RL algorithm, structure fixed)
- projection: Digital worlds (complex multi-agent envs), then Reality; CodeGen (learn structure and learning algorithm)
74. With increasing computational resources, the trend is towards more learning/optimization, and less explicit design.
1970: one of many explicit (LISP) programs that made up SHRDLU.
50 years later: “NEURAL ARCHITECTURE SEARCH WITH REINFORCEMENT LEARNING” (Zoph & Le); “Large-Scale Evolution of Image Classifiers”.
75. “Learning to Cooperate, Compete, and Communicate”
OpenAI blog post, 2017
- 4 red agents cooperate to
chase 2 green agents
- 2 green agents want to
reach blue “water”
76. What would this look like?
- Achieve completely uninterpretable “proto-AIs” first, similar
to simple animals, but with fairly complete cognitive toolkits.
- Evolved AIs are a synthetic species that lives among us.
- We will shape them to love humans, similar to how we
shaped dogs.
- “AI safety” will become a primarily empirical discipline, not a
mathematical one as it is today.
- Some might try to evolve bad AIs, equivalent to combat dogs.
- We might have to make it illegal to evolve AI strains, or
upper bound the amount of computation per person and
closely track all computational resources on Earth.
77. Talk Outline:
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
- Artificial Life - “just do what nature did.”
- Something not on our radar
Where could AGI come from?
79. Combination of some of the above?
- E.g. take the artificial life
approach, but allow agents to
access the high-level
representations of a big,
pre-trained generative model.
80. In order of promisingness:
- Artificial Life - “just do what nature did.”
- Something not on our radar
- Supervised learning - “it works, just scale up!”
- Unsupervised learning - “it will work, if we only scale up!”
- AIXI - “guys, I can write down optimal AI.”
- Brain simulation - “this will work one day, right?”
Conclusion
81. What do you think?
(Thank you!)
SL UL AIXI
BrainSim ALife Other
http://bit.ly/2r54rfe
82. Cool Related Pointers
Sebastian’s post, which inspired the title of this talk
http://www.nowozin.net/sebastian/blog/where-will-artificial-intelligence-come-from.html
Rodney Brooks paper
https://www.researchgate.net/publication/222486990_Intelligence_Without_Representation