This document provides an overview of deep learning 1.0 and discusses potential directions for deep learning 2.0. It summarizes limitations of deep learning 1.0, such as the lack of reasoning abilities, and discusses how incorporating memory and reasoning capabilities could help address them. The document outlines several approaches being explored for neural memory and reasoning, including memory networks, neural Turing machines, and self-attentive associative memories. It argues that memory and reasoning will be important for developing more human-like artificial general intelligence.
Deep Learning has taken the digital world by storm. As a general-purpose technology, it is now present in all walks of life. Although fundamental developments in methodology have been slowing down in the past few years, applications are flourishing, with major breakthroughs in Computer Vision, NLP and Biomedical Sciences. The primary successes can be attributed to the availability of large labelled data, powerful GPU servers and programming frameworks, and advances in neural architecture engineering. This combination enables rapid construction of large, efficient neural networks that scale to the real world. But the fundamental questions of unsupervised learning, deep reasoning, and rapid contextual adaptation remain unsolved. We shall call what we currently have Deep Learning 1.0, and the next possible breakthroughs Deep Learning 2.0.
This is part 1 of the Tutorial delivered at IEEE SSCI 2020, Canberra, December 1st (Virtual).
This is the talk given at the Faculty of Information Technology, Monash University on 19/08/2020. It covers our recent research on learning to reason, including dual-process theory, visual reasoning and neural memories.
This talk describes the latest research in visual reasoning, in particular visual question answering, covering both images and videos, with a dual-process-theory approach and relational memory.
The current deep learning revolution has brought unprecedented changes to how we live, learn, interact with the digital and physical worlds, run business and conduct science. These are made possible thanks to the relative ease of construction of massive neural networks that are flexible to train and scale up to the real world. But the flexibility is hitting its limits due to the excessive demand for labelled data, the narrowness of the tasks, the failure to generalize beyond surface statistics to novel combinations, and the lack of the key mental faculty of deliberate reasoning. In this talk, I will present a multi-year research program to push deep learning to overcome these limitations. We aim to build dynamic neural networks that can train themselves with little labelled data, compress on-the-fly in response to resource constraints, and respond to arbitrary queries about a context. The networks are equipped with the capability to make use of external knowledge, and operate at the high level of objects and relations. The long-term goal is to build persistent digital companions that co-live with us and other AI entities, understand our needs and intentions, and share our human values and norms. They will be capable of having natural conversations, remembering lifelong events, and learning in an open-ended fashion.
An introduction to research on machine reasoning at our Applied AI Institute, Deakin University, Australia, covering visual & social reasoning, neural Turing machines and System 2.
TL;DR: This tutorial was delivered at KDD 2021. Here we review recent developments to extend the capacity of neural networks to “learning to reason” from data, where the task is to determine if the data entails a conclusion.
The rise of big data and big compute has brought modern neural networks to many walks of digital life, thanks to the relative ease of construction of large models that scale to the real world. Current successes of Transformers and self-supervised pretraining on massive data have led some to believe that deep neural networks will be able to do almost everything whenever we have data and computational resources. However, this might not be the case. While neural networks are fast to exploit surface statistics, they fail miserably to generalize to novel combinations. Current neural networks do not perform deliberate reasoning – the capacity to deliberately deduce new knowledge out of the contextualized data. This tutorial reviews recent developments to extend the capacity of neural networks to “learning to reason” from data, where the task is to determine if the data entails a conclusion. This capacity opens up new ways to generate insights from data through arbitrary querying using natural language, without the need to predefine a narrow set of tasks.
Deep learning 1.0 and Beyond, Part 2
1. 16/11/2020 1
A/Prof Truyen Tran
With contributions from Vuong Le, Hung Le,
Thao Le, Tin Pham & Dung Nguyen
Deakin University
December 2020
Deep learning 1.0 and Beyond
A tutorial
Part II
@truyenoz
truyentran.github.io
truyen.tran@deakin.edu.au
letdataspeak.blogspot.com
goo.gl/3jJ1O0
linkedin.com/in/truyen-tran
2. 16/11/2020 2
“[By 2023] …
Emergence of the
generally agreed upon
"next big thing" in AI
beyond deep learning.”
Rodney Brooks
rodneybrooks.com
“[…] general-purpose computer
programs, built on top of far richer
primitives than our current
differentiable layers—[…] we will
get to reasoning and abstraction,
the fundamental weakness of
current models.”
Francois Chollet
blog.keras.io
“Software 2.0 is written in
neural network weights”
Andrej Karpathy
medium.com/@karpathy
3. DL 1.0 has been fantastic, but has serious limitations
(but not always its fault)
DL builds glorified function
approximators using gradient
descent
Great at interpolating. Think GPT-X.
One-step input/output mapping
Require differentiability
Little systematic generalization
#REF: Marcus, Gary. "Deep learning: A critical appraisal." arXiv preprint arXiv:1801.00631 (2018).
Data hungry to cover all possible
patterns
Computation demanding to process large
data
Energy inefficient
Prohibitive for small labs to compete
Engineering effort is huge → Technical debt
A little too much heuristic. Lack of
theory.
4. DL 1.0 has been fantastic, but has serious limitations
(but not always its fault) (cont.)
#REF: Marcus, Gary. "Deep learning: A critical appraisal." arXiv preprint arXiv:1801.00631 (2018).
Lack natural mechanism to
incorporate prior knowledge, e.g.,
common sense
Assume stationarity
Changes cause trouble → Expensive retraining
No causality → Random correlations can be “learnt”
Sensitive to adversarial attacks
Lack of reasoning → Pure pattern recognizer
Little explainability → Trust issue
To be fair, many of these problems are
common issues of statistical learning!
5. DL 1.0 is great, but it struggles to solve many
AI/ML problems
Learn to organize and remember ultra-long sequences
Learn to generate arbitrary objects, with
zero supports
Reasoning about object, relation,
causality, self and other agents
Imagine scenarios, act on the world and
learn from the feedbacks
Continual learning, never-ending, across
tasks, domains, representations
Learn by socializing
Learn just by observing and self-prediction
Organizing and reasoning about (common-sense) knowledge
Automated discovery of physical laws
Solve genetics, neuroscience and
healthcare
Automate physical sciences
Automate software engineering
6. Neural memories
Theory of mind
Neural reasoning
A system view
Deep learning 2.0
16/11/2020 6
Classic models
Transformers
Graph neural networks
Unsupervised learning
Deep learning 1.0
Agenda
7. 1960s-1990s
Hand-crafting rules,
domain-specific, logic-
based
High in reasoning
Can’t scale.
Fail on unseen cases.
16/11/2020 7
2020s-2030s
Learning + reasoning, general
purpose, human-like
Has contextual and common-sense reasoning
Requires less data
Adapt to change
Explainable
1990s-present
Machine learning, general
purpose, statistics-based
Low in reasoning
Needs lots of data
Less adaptive
Little explanation
Photo credit: DARPA
8. 8
System 1:
Intuitive
• Fast
• Implicit/automatic
• Pattern recognition
• Multiple
System 2:
Analytical
• Slow
• Deliberate/rational
• Careful analysis
• Single, sequential
• Hypothetical thought
• Decoupled from data rep
Single
Memory
• Facts
• Semantics
• Events and relational
associations
• Working space –
temporal buffer
Pattern
recognition
Reasoning
9. Current neural network offerings
16/11/2020 9
No storage of intermediate results
Little choices over what to compute and what to use
Lack of conditional computation
Little support for complex chained reasoning
Little support for rapid switching of tasks
Credit: hexahedria
10. What is missing? A memory
Use multiple pieces of information
Store intermediate results (RAM like)
Episodic recall of previous tasks (Tape like)
Encode/compress & generate/decompress
long sequences
Learn/store programs (e.g., fast weights)
Store and query external knowledge
Spatial memory for navigation
16/11/2020 10
Rare but important events (e.g., snake
bite)
Needed for complex control
Short-cuts for ease of gradient
propagation = constant path length
Division of labour: program, execution
and storage
Working memory is an indicator of IQ in
humans
11. Memory enables reasoning
Expert reasoning was enabled by a large long-term
memory, acquired through experience
Working memory for analytic reasoning
WM is a system to support information binding to a coordinate
system
Reasoning as deliberative hypothesis testing → memory-retrieval-based
hypothesis generation
Higher-order cognition = creating & manipulating relations over
representations of premises, temporarily stored in WM.
Reasoning over concepts & relations requires semantic
memory
Memory is critical for episodic future thinking (mental
simulation)
16/11/2020 11
“[…] one cannot hope to
understand reasoning
without understanding the
memory processes […]”
(Thompson and Feeney, 2014)
12. Neural memories
Theory of mind
Neural reasoning
A system view
Deep learning 2.0
16/11/2020 12
Classic models
Transformers
Graph neural networks
Unsupervised learning
Deep learning 1.0
Agenda
13. Recall: Memory networks
Input is a set → Load into memory,
which is NOT updated.
State is an RNN with attention reading
from inputs.
Concepts: query, key and content +
content addressing.
Deep models, but constant path length
from input to output.
Equivalent to an RNN with a shared input
set.
16/11/2020 13
Sukhbaatar, Sainbayar, Jason Weston, and Rob
Fergus. "End-to-end memory networks." Advances in
neural information processing systems. 2015.
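To make the read operation concrete, here is a minimal sketch (my own illustration, not the authors' code) of one memory "hop": content-based addressing with a query over a fixed memory, followed by an attention-weighted read that refines the state. All names and dimensions are illustrative.

```python
# One end-to-end memory network "hop" (in the spirit of
# Sukhbaatar et al., 2015): content-based addressing over a fixed
# memory, then an attention-weighted read added to the state.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(query, mem_keys, mem_values):
    """Attend over memory slots with the query (content addressing),
    then add the retrieved content to the state for the next hop."""
    attn = softmax(mem_keys @ query)   # address distribution over slots
    read = attn @ mem_values           # weighted sum of slot contents
    return query + read                # refined state (constant path length)

rng = np.random.default_rng(0)
q = rng.normal(size=8)                                           # query embedding
keys, values = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))  # 5 memory slots
state = memory_hop(q, keys, values)
```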
14. MANN: Memory-Augmented Neural Networks
(a constant path length)
Long-term dependency
E.g., outcome depends on the far past
Memory is needed (e.g., as in LSTM)
Complex program requires multiple computational steps
Each step can be selective (attentive) to certain memory cell
Operations: Encoding | Decoding | Retrieval
15. 16/11/2020 15
Learning a Turing machine
Can we learn a (neural)
program that learns to
program from data?
Visual reasoning is a
specific program of two
inputs (visual, linguistic)
16. Neural Turing machine (NTM)
(simulating a differentiable Turing machine)
A controller that takes
input/output and talks to an
external memory module.
Memory has read/write
operations.
The main issue is where to write,
and how to update the memory
state.
All operations are differentiable.
Source: rylanschaeffer.github.io
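The sketch below illustrates the two core differentiable operations just described: content-based addressing (cosine similarity sharpened by a key strength) and an erase/add write. It is a simplified illustration of the NTM memory module only; location-based addressing, interpolation and sharpening are omitted, and all names are mine, not the paper's.

```python
# Simplified NTM-style memory: content addressing plus a
# differentiable erase/add write, both weighted by soft attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_address(memory, key, beta):
    """Attention over slots by sharpened cosine similarity to the key."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    return softmax(beta * sims)

def write(memory, w, erase, add):
    """Differentiable write: erase then add, weighted by attention w."""
    memory = memory * (1.0 - np.outer(w, erase))   # erase vector in (0,1)^d
    return memory + np.outer(w, add)

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 4))                        # 8 slots x 4 dims
key, add_vec = rng.normal(size=4), rng.normal(size=4)
w = content_address(M, key, beta=5.0)
M = write(M, w, erase=np.full(4, 0.5), add=add_vec)
r = w @ M                                          # read vector
```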
18. 16/11/2020 18
NTM unrolled in time with LSTM as controller
#Ref: https://medium.com/snips-ai/ntm-lasagne-a-library-for-neural-turing-machines-in-lasagne-2cdce6837315
19. MANN for reasoning
Three steps:
Store data into memory
Read query, process sequentially, consult memory
Output answer
Behind the scene:
Memory contains data & results of intermediate steps
Drawbacks of current MANNs:
No memory of controllers → Less modularity and
compositionality when the query is complex
No memory of relations → Much harder to chain predicates.
16/11/2020 19
Source: rylanschaeffer.github.io
20. Failures of item-only MANNs for reasoning
Relational representation is NOT stored → Can't reuse it later in the
chain
A single memory of items and relations → Can't understand how
relational reasoning occurs
The memory-memory relationship is coarse, since it is represented as
either a dot product or a weighted sum.
16/11/2020 20
21. Self-attentive associative memories (SAM)
Learning relations automatically over time
16/11/2020 21
Hung Le, Truyen Tran, Svetha Venkatesh, “Self-
attentive associative memory”, ICML'20.
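As a rough intuition for how relations can be constructed automatically from items, the sketch below self-attends over an item memory twice and forms outer products of the two read-outs, yielding a second-order (relational) tensor. This is only a schematic of the idea; the actual SAM model differs in its details, and all weights here are random stand-ins.

```python
# Schematic of building a relational (second-order) memory from an
# item memory via self-attention and outer products.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attend(items, Wq, Wk, Wv):
    Q, K, V = items @ Wq, items @ Wk, items @ Wv
    return softmax(Q @ K.T / np.sqrt(K.shape[1])) @ V

rng = np.random.default_rng(1)
n, d, h = 5, 6, 4
items = rng.normal(size=(n, d))                    # item memory (n items)
Wq1, Wk1, Wv1, Wq2, Wk2, Wv2 = (rng.normal(size=(d, h)) for _ in range(6))
A = self_attend(items, Wq1, Wk1, Wv1)              # first attended view
B = self_attend(items, Wq2, Wk2, Wv2)              # second attended view
relations = np.einsum('ni,nj->nij', A, B)          # (n, h, h) relational tensor
```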
24. Neural memories
Theory of mind
Neural reasoning
A system view
Deep learning 2.0
16/11/2020 24
Classic models
Transformers
Graph neural networks
Unsupervised learning
Deep learning 1.0
Agenda
25. 25
What color is the thing with the same
size as the blue cylinder?
green
• Requires multi-step
reasoning: find blue cylinder
➔ locate other object of the
same size ➔ determine its
color (green).
A testbed: Visual QA
26. 26
[Concept map] Visual QA sits at the intersection of Computer Vision, Natural Language Processing and Machine Learning, drawing on: reasoning (qualitative spatial reasoning; relational, temporal inference; commonsense), object recognition, scene graphs, parsing, symbol binding, systematic generalisation, learning to classify, entailment, unsupervised learning, reinforcement learning, program synthesis, action graphs, event detection, and object discovery.
27. Learning to reason
Learning is to improve itself by experiencing ~ acquiring
knowledge & skills
Reasoning is to deduce knowledge from previously
acquired knowledge in response to a query (or a cues)
Learning to reason is to improve the ability to decide if a
knowledge base entails a predicate.
E.g., given a video f, determines if the person with the hat turns
before singing.
Hypotheses:
Reasoning as just-in-time program synthesis.
It employs conditional computation.
16/11/2020 27
Khardon, Roni, and Dan Roth. "Learning to reason." Journal of the ACM
(JACM) 44.5 (1997): 697-725.
(Dan Roth; ACM
Fellow; IJCAI John
McCarthy Award)
28. Why neural reasoning?
Reasoning is not necessarily achieved by making
logical inferences
There is a continuity between [algebraically rich
inference] and [connecting together trainable
learning systems]
Central to reasoning is composition rules to guide
the combinations of modules to address new tasks
16/11/2020 28
“When we observe a visual scene, when
we hear a complex sentence, we are
able to explain in formal terms the
relation of the objects in the scene, or
the precise meaning of the sentence
components. However, there is no
evidence that such a formal analysis
necessarily takes place: we see a scene,
we hear a sentence, and we just know
what they mean. This suggests the
existence of a middle layer, already a
form of reasoning, but not yet formal
or logical.”
Bottou, Léon. "From machine learning to machine
reasoning." Machine learning 94.2 (2014): 133-149.
29. The two approaches to neural reasoning
Implicit chaining of predicates through recurrence:
Step-wise query-specific attention to relevant concepts & relations.
Iterative concept refinement & combination, e.g., through a working
memory.
Answer is computed from the last memory state & question embedding.
Explicit program synthesis:
There is a set of modules, each performing a pre-defined operation.
The question is parsed into a symbolic program.
The program is implemented as a computational graph constructed by
chaining separate modules.
The program is executed to compute an answer.
16/11/2020 29
30. MACNet: Composition-Attention-
Control
(reasoning by progressive refinement
of selected data)
16/11/2020 30
Hudson, Drew A., and Christopher D. Manning.
"Compositional attention networks for machine
reasoning." arXiv preprint arXiv:1803.03067 (2018).
31. LOGNet: Relational object reasoning with language binding
31
• Key insight: Reasoning is chaining of relational predicates to arrive
at a final conclusion
→ Needs to uncover spatial relations, conditioned on query
→ Chaining is query-driven
→ Objects/language needs binding
→ Object semantics is query-dependent
→ Everything is end-to-end differentiable
System 1: visual
representation
System 2: High-level
reasoning
Thao Minh Le, Vuong Le, Svetha Venkatesh, and
Truyen Tran, “Dynamic Language Binding in
Relational Visual Reasoning”, IJCAI’20.
32. 32
Language-binding Object Graph Network for VQA
Thao Minh Le, Vuong Le,
Svetha Venkatesh, and
Truyen Tran, “Dynamic
Language Binding in
Relational Visual
Reasoning”, IJCAI’20.
34. Transformer as implicit reasoning
Reasoning as (free-) energy minimisation
The classic Belief Propagation algorithm is a minimization algorithm of
the Bethe free energy!
Transformer performs relational, iterative state refinement, which makes
it a great candidate for implicit relational reasoning.
16/11/2020 34
Heskes, Tom. "Stable fixed points of loopy belief propagation are local minima of the bethe free
energy." Advances in neural information processing systems. 2003.
Ramsauer, Hubert, et al. "Hopfield networks is all you need." arXiv preprint
arXiv:2008.02217 (2020).
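The connection can be made concrete in a few lines: the retrieval update of a modern continuous Hopfield network is exactly a softmax attention read over the stored patterns, so iterating it is an energy-minimisation loop (Ramsauer et al., 2020). A minimal sketch with illustrative dimensions:

```python
# One Hopfield retrieval step is a softmax attention read; iterating
# it descends the associated energy and recovers a stored pattern.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hopfield_update(state, patterns, beta=2.0):
    """One step: state <- patterns^T softmax(beta * patterns @ state)."""
    return softmax(beta * patterns @ state) @ patterns

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 16))            # 10 stored patterns, dim 16
xi = X[3] + 0.3 * rng.normal(size=16)    # noisy query
for _ in range(3):                       # a few iterations typically converge
    xi = hopfield_update(xi, X)          # xi should approach X[3]
```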
36. 16/11/2020 36
Anonymous, “Neural spatio-temporal reasoning with object-centric self-
supervised learning”, https://openreview.net/pdf?id=rEaz5uTcL6Q
37. 38
Mao, Jiayuan, et al. "The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences
From Natural Supervision." International Conference on Learning Representations. 2019.
NS-CL: Neuro-Symbolic Concept Learner
Question
parser
38. Extract object proposals from the image, from which a feature vector is obtained using RoI Align. Each
object feature is denoted as $o_i$.
Object concepts of the same attribute are mapped into an embedding space. For example, sphere, cube, and
cylinder are mapped into the shape embedding space. This mapping is a classification problem:
$p(\text{cube} \mid o_i) = \sigma\left(\left(\langle \mathrm{ShapeOf}(o_i),\, v^{\text{cube}} \rangle - \gamma\right)/\tau\right)$
where
$\mathrm{ShapeOf}(\cdot)$ is a neural network,
$v^{\text{cube}}$ is the concept embedding of cube, to be learned,
$\sigma$ is the sigmoid function,
$\gamma$ and $\tau$ are scaling constants. 39
Concept learner
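The classification score above translates directly into code. In this sketch, ShapeOf is a stand-in linear map, the inner product is taken as a cosine-style similarity, and the gamma/tau values are arbitrary illustrations rather than the paper's settings.

```python
# Concept-classification score: p(cube | o_i) from the formula above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def concept_prob(o_i, shape_of, v_concept, gamma=0.2, tau=0.1):
    """p(concept | o_i) = sigmoid((<ShapeOf(o_i), v_concept> - gamma) / tau)."""
    s = shape_of(o_i)                    # object feature -> attribute space
    sim = s @ v_concept / (np.linalg.norm(s) * np.linalg.norm(v_concept))
    return sigmoid((sim - gamma) / tau)

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 8))             # toy stand-in "ShapeOf" network
v_cube = rng.normal(size=8)              # learned concept embedding of cube
p = concept_prob(rng.normal(size=16), lambda o: o @ W, v_cube)
```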
39. Program execution
Work on object-based visual
representation
An intermediate set of objects is
represented by a vector, as attention mask
over all object in the scene. For example,
Filter(Green_cube) outputs a mask
(0,1,0,0).
The output mask is fed into the next
module (e.g., Relate)
40
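A minimal sketch of this mask-passing execution, with hypothetical Filter and Relate modules mirroring the slide's example: each module consumes a soft attention mask over the scene's objects and emits a new one, so a chained program remains differentiable end to end.

```python
# Quasi-symbolic program execution on soft attention masks over objects.
import numpy as np

def Filter(mask, concept_probs):
    """Keep attended objects that match a concept, e.g. p(green cube)."""
    return mask * concept_probs

def Relate(mask, relation):
    """Shift attention through a pairwise relation, e.g. 'left of'."""
    return np.clip(relation.T @ mask, 0.0, 1.0)

n = 4
scene = np.ones(n)                              # attend to all objects first
p_green_cube = np.array([0.0, 1.0, 0.0, 0.0])   # like the (0,1,0,0) mask above
left_of = np.random.rand(n, n)                  # toy pairwise relation scores
out_mask = Relate(Filter(scene, p_green_cube), left_of)
```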
40. Neural memories
Theory of mind
Neural reasoning
A system view
Deep learning 2.0
16/11/2020 41
Classic models
Transformers
Graph neural networks
Unsupervised learning
Deep learning 1.0
Agenda
41. Contextualized recursive reasoning
Thus far, QA tasks are straightforward and
objective:
Questioner: I will ask about what I don’t know.
Answerer: I will answer what I know.
Real life can be tricky, more subjective:
Questioner: I will ask only questions I think
they can answer.
Answerer 1: This is what I think they want from
an answer.
Answerer 2: I will answer only what I think
they think I can.
16/11/2020 42
Source: religious studies project
We need Theory of Mind to function socially.
42. Sally and Anne
Sally Anne
Sally puts her cake
into her basket
Sally’s basket Anne’s box
Sally goes out of
the room.
Anne takes Sally’s
cake out of Sally’s
basket and put this
cake into Anne’s box
Sally comes back to
the room
1
2
4
5
3
Photo: wikipedia
43. Social dilemma: Stag Hunt games
Difficult decision: individual outcomes (selfish) or group outcomes
(cooperative).
Together hunt Stag (both are cooperative): Both have more meat.
Solely hunt Hare (both are selfish): Both have less meat.
One hunts Stag (cooperative), the other hunts Hare (selfish): Only the one
hunting Hare has meat.
Human evidence: Self-interested but considerate of others
(cultures vary).
Idea: Belief-based guilt-aversion
One experiences loss if it lets other down.
Necessitates Theory of Mind: reasoning about other’s mind.
44. A neural theory of mind
[Figure] The network predicts the agent's next-step action probability, goal, and successor representations.
Rabinowitz, Neil C., et al.
"Machine theory of
mind." arXiv preprint
arXiv:1802.07740 (2018).
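A bare-bones sketch of this setup: a "character net" (here just a placeholder linear map) embeds an agent's past trajectories into a mental-state vector, which then conditions a prediction net over next-step actions. This only illustrates the interface of the approach, not the paper's actual architecture.

```python
# Machine-theory-of-mind interface: embed observed behaviour, then
# predict the agent's next action conditioned on that embedding.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(5)
d_obs, d_char, n_actions = 12, 8, 5
W_char = rng.normal(size=(d_obs, d_char))              # stand-in character net
W_pred = rng.normal(size=(d_obs + d_char, n_actions))  # stand-in prediction net

past_traj = rng.normal(size=(20, d_obs))               # the agent's observed behaviour
e_char = np.tanh(past_traj @ W_char).mean(axis=0)      # mental-state embedding

obs = rng.normal(size=d_obs)                           # current situation
action_probs = softmax(np.concatenate([obs, e_char]) @ W_pred)
```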
45. Theory of Mind Agent with Guilt Aversion (ToMAGA)
Update Theory of Mind
Predict whether other’s behaviour are
cooperative or uncooperative
Updated the zero-order belief (what other will
do)
Update the first-order belief (what other think
about me)
Guilt Aversion
Compute the expected material reward of
other based on Theory of Mind
Compute the psychological rewards, i.e.
“feeling guilty”
Reward shaping: subtract the expected loss of
the other.
Nguyen, Dung, et al. "Theory of Mind with Guilt
Aversion Facilitates Cooperative Reinforcement
Learning." Asian Conference on Machine Learning.
PMLR, 2020.
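The reward-shaping step can be sketched as follows, under the stated idea of belief-based guilt aversion: the agent's material reward is reduced by a penalty proportional to how far the other's realised reward falls below what (the agent believes) the other expected. The functional form and guilt_weight here are illustrative assumptions, not the paper's exact formulation.

```python
# Guilt-averse reward shaping: subtract a "feeling guilty" penalty when
# the other agent is let down relative to its (believed) expectation.
def shaped_reward(material_reward,
                  other_expected_reward,   # from the theory-of-mind beliefs
                  other_actual_reward,
                  guilt_weight=0.5):       # illustrative guilt coefficient
    guilt = max(0.0, other_expected_reward - other_actual_reward)
    return material_reward - guilt_weight * guilt

# Stag Hunt flavour: hunting Hare while the other expected cooperation
# yields material gain but psychological guilt.
r = shaped_reward(material_reward=3.0,
                  other_expected_reward=4.0,
                  other_actual_reward=0.0)
```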
46. 47
System 1:
Intuitive
• Fast
• Implicit/automatic
• Pattern recognition
• Multiple
System 2:
Analytical
• Slow
• Deliberate/rational
• Careful analysis
• Single, sequential
• Hypothetical thought
• Decoupled from data rep
Single
Memory
• Facts
• Semantics
• Events and relational
associations
• Working space –
temporal buffer
Pattern
recognition
Reasoning
47. Neural memories
Theory of mind
Neural reasoning
A system view
Deep learning 2.0
16/11/2020 48
Classic models
Transformers
Graph neural networks
Unsupervised learning
Deep learning 1.0
Summary
49. References
Anonymous, “Neural spatio-temporal reasoning with object-centric self-supervised learning”,
https://openreview.net/pdf?id=rEaz5uTcL6Q
Bello, Irwan, et al. "Neural optimizer search with reinforcement learning." arXiv preprint arXiv:1709.07417 (2017).
Bengio, Yoshua, Aaron Courville, and Pascal Vincent. "Representation learning: A review and new perspectives." IEEE
transactions on pattern analysis and machine intelligence 35.8 (2013): 1798-1828.
Bottou, Léon. "From machine learning to machine reasoning." Machine learning 94.2 (2014): 133-149.
Dehghani, Mostafa, et al. "Universal Transformers." International Conference on Learning Representations. 2018.
Kien Do, Truyen Tran, and Svetha Venkatesh. "Graph Transformation Policy Network for Chemical Reaction
Prediction." KDD’19.
Kien Do, Truyen Tran, Svetha Venkatesh, “Learning deep matrix representations”, arXiv preprint arXiv:1703.01454
Gilmer, Justin, et al. "Neural message passing for quantum chemistry." arXiv preprint arXiv:1704.01212 (2017).
Ha, David, Andrew Dai, and Quoc V. Le. "Hypernetworks." arXiv preprint arXiv:1609.09106 (2016).
Heskes, Tom. "Stable fixed points of loopy belief propagation are local minima of the bethe free energy." Advances in
neural information processing systems. 2003.
Hudson, Drew A., and Christopher D. Manning. "Compositional attention networks for machine reasoning." arXiv preprint
arXiv:1803.03067 (2018).
Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of gans for improved quality, stability, and
variation. arXiv preprint arXiv:1710.10196.
Khardon, Roni, and Dan Roth. "Learning to reason." Journal of the ACM (JACM) 44.5 (1997): 697-725.
Hung Le, Truyen Tran, Svetha Venkatesh, “Self-attentive associative memory”, ICML'20.
Hung Le, Truyen Tran, Svetha Venkatesh, “Neural stored-program memory”, ICLR'20.
16/11/2020 50
50. Thao Minh Le, Vuong Le, Svetha Venkatesh, and Truyen Tran, “Dynamic Language Binding in Relational Visual
Reasoning”, IJCAI’20.
Le-Khac, Phuc H., Graham Healy, and Alan F. Smeaton. "Contrastive Representation Learning: A Framework and
Review." arXiv preprint arXiv:2010.05113 (2020).
Liu, Xiao, et al. "Self-supervised learning: Generative or contrastive." arXiv preprint arXiv:2006.08218 (2020).
Marcus, Gary. "Deep learning: A critical appraisal." arXiv preprint arXiv:1801.00631 (2018).
Mao, Jiayuan, et al. "The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural
Supervision." International Conference on Learning Representations. 2019.
Nguyen, Dung, et al. "Theory of Mind with Guilt Aversion Facilitates Cooperative Reinforcement Learning." Asian
Conference on Machine Learning. PMLR, 2020.
Penmatsa, Aravind, Kevin H. Wang, and Eric Gouaux. "X-ray structure of dopamine transporter elucidates antidepressant
mechanism." Nature 503.7474 (2013): 85-90.
Pham, Trang, et al. "Column Networks for Collective Classification."AAAI. 2017.
Ramsauer, Hubert, et al. "Hopfield networks is all you need." arXiv preprint arXiv:2008.02217 (2020).
Rabinowitz, Neil C., et al. "Machine theory of mind." arXiv preprint arXiv:1802.07740 (2018).
Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. "End-to-end memory networks." Advances in neural information
processing systems. 2015.
Tay, Yi, et al. "Efficient transformers: A survey." arXiv preprint arXiv:2009.06732 (2020).
Xie, Tian, and Jeffrey C. Grossman. "Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable
Prediction of Material Properties." Physical review letters 120.14 (2018): 145301.
You, Jiaxuan, et al. "GraphRNN: Generating realistic graphs with deep auto-regressive models." ICML (2018).
16/11/2020 51
References (cont.)