The document describes DeepMoD, a tool that combines neural networks and regression to discover interpretable models for quantitative science. DeepMoD trains a neural network on noisy data to learn an underlying function, while also discovering a symbolic representation of that function in the form of an equation. It does this by including the symbolic representation in the neural network's cost function and optimizing both the network weights and the symbolic form simultaneously.
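The symbolic-selection step DeepMoD couples to network training can be illustrated in miniature. The sketch below is not the authors' implementation (which trains a neural network jointly); it only shows the sparse-regression idea, using sequential thresholded least squares over a small library of candidate terms to recover du/dt = -2u from samples.

```python
import math

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for small systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lstsq(X, y):
    # Least squares via the normal equations X^T X w = X^T y.
    n, m = len(X[0]), len(X)
    XtX = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    Xty = [sum(X[k][i] * y[k] for k in range(m)) for i in range(n)]
    return solve(XtX, Xty)

def threshold_lstsq(X, y, tol=0.1, iters=5):
    # Repeatedly fit, then drop candidates with small coefficients.
    active = list(range(len(X[0])))
    w = [0.0] * len(X[0])
    for _ in range(iters):
        Xa = [[row[j] for j in active] for row in X]
        wa = lstsq(Xa, y)
        w = [0.0] * len(X[0])
        for j, c in zip(active, wa):
            w[j] = c
        active = [j for j in active if abs(w[j]) > tol]
        if not active:
            break
    return w

# Samples from du/dt = -2u, with candidate terms 1, u, u^2.
ts = [i * 0.05 for i in range(40)]
u = [math.exp(-2 * t) for t in ts]
dudt = [-2 * ui for ui in u]
library = [[1.0, ui, ui * ui] for ui in u]
coeffs = threshold_lstsq(library, dudt)
print(coeffs)  # only the u-term survives, with coefficient ~ -2
```

DeepMoD's distinguishing move is that a term like this sits inside the network's cost function, so the fit and the equation are discovered together rather than sequentially.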
Honey, I Deep-shrunk the Sample Covariance Matrix! by Erk Subasi at QuantCon ...Quantopian
Since the seminal work of Markowitz, covariance estimation has been of prime importance for portfolio construction. Running naive portfolio optimizations on sample covariance estimates can be hazardous to the health of one's portfolio, though. Recent developments in machine learning, in particular in deep learning, suggest that high-level abstractions and deep architectural representations are key to success when dealing with non-linear, noisy real-life data. Motivated by this, we demonstrate a novel form of robust covariance estimation based on ideas borrowed from the deep-learning domain. In a pedagogical setting, we show how to use TensorFlow, a recently open-sourced deep-learning library from Google, to build a robust covariance estimator via denoising autoencoders.
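The intuition behind the talk can be sketched without TensorFlow: a linear autoencoder with a narrow bottleneck recovers the top principal subspace, so a low-rank reconstruction plus a diagonal residual is a stand-in for what the denoising autoencoder learns. The code below is purely illustrative (simulated one-factor returns, power iteration instead of a trained network), not the talk's estimator.

```python
import random, math

random.seed(0)
n_assets, n_days = 5, 500
beta = [0.5 + 0.2 * i for i in range(n_assets)]  # factor loadings

# Simulate one-factor returns with idiosyncratic noise.
rets = []
for _ in range(n_days):
    f = random.gauss(0, 1)
    rets.append([beta[i] * f + random.gauss(0, 0.5) for i in range(n_assets)])

def sample_cov(X):
    n, m = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(m)]
    return [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in X) / (n - 1)
             for j in range(m)] for i in range(m)]

def top_eigpair(C, iters=200):
    # Power iteration: dominant eigenvalue/eigenvector of C.
    v = [1.0] * len(C)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(len(C))) for i in range(len(C))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(len(C))) for i in range(len(C)))
    return lam, v

C = sample_cov(rets)
lam, v = top_eigpair(C)
# Keep the dominant factor; replace the noisy off-diagonal residue
# with zeros and retain each asset's own variance on the diagonal.
shrunk = [[lam * v[i] * v[j] + ((C[i][i] - lam * v[i] * v[i]) if i == j else 0.0)
           for j in range(n_assets)] for i in range(n_assets)]
```

A denoising autoencoder generalizes this: corrupt the inputs, train a non-linear encoder/decoder to reconstruct them, and estimate the covariance from the reconstructions.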
Human-in-the-loop: a design pattern for managing teams which leverage ML by P...Big Data Spain
Human-in-the-loop is an approach which has been used for simulation, training, UX mockups, etc.
https://www.bigdataspain.org/2017/talk/human-in-the-loop-a-design-pattern-for-managing-teams-which-leverage-ml
Big Data Spain Conference
16th -17th November - Kinépolis Madrid
This document summarizes a project between IBM and Cycorp to build a prototype question answering system called PIQUANT that integrates information retrieval, natural language processing, and knowledge representation. The system will explore how to best combine these technologies by balancing knowledge stored in structured databases, unstructured text, and a large common-sense knowledge base. It will also develop strategies for locating answers from different knowledge sources and handling different question types. The goal is to begin exploring how to build intelligent question answering systems that can understand the meaning behind text, not just keywords.
Why big data didn’t end causal inference by Totte Harinen at Big Data Spain 2017Big Data Spain
Ten years ago there were rumours of the death of causal inference. Big data was supposed to enable us to rely on purely correlational data to predict and control the world.
https://www.bigdataspain.org/2017/talk/why-big-data-didnt-end-causal-inference
Big Data Spain 2017
November 16th - 17th Kinépolis Madrid
The current deep learning revolution has brought unprecedented changes to how we live, learn, interact with the digital and physical worlds, run businesses, and conduct science. These are made possible by the relative ease of constructing massive neural networks that are flexible to train and scale up to the real world. But this flexibility is hitting its limits due to the excessive demand for labelled data, the narrowness of the tasks, the failure to generalize beyond surface statistics to novel combinations, and the lack of the key mental faculty of deliberate reasoning. In this talk, I will present a multi-year research program to push deep learning to overcome these limitations. We aim to build dynamic neural networks that can train themselves with little labelled data, compress on the fly in response to resource constraints, and respond to arbitrary queries about a context. The networks are equipped with the capability to make use of external knowledge and to operate at the high level of objects and relations. The long-term goal is to build persistent digital companions that co-live with us and other AI entities, understand our needs and intentions, and share our human values and norms. They will be capable of having natural conversations, remembering lifelong events, and learning in an open-ended fashion.
This is the talk given at the Faculty of Information Technology, Monash University on 19/08/2020. It covers our recent research on the topics of learning to reason, including dual-process theory, visual reasoning and neural memories.
Paul Groth: Data Analysis in a Changing Discourse: The Challenges of Scholarl...COST Action TD1210
Paul Groth (Elsevier) “Data Analysis in a Changing Discourse: The Challenges of Scholarly Communication“
Presentation at the KnoweScape workshop "Evolution and variation of classification systems" March 4-5, 2015 Amsterdam
The recent series of innovations in deep learning has shown enormous potential to impact individuals and society, both positively and negatively. Deep learning models utilizing massive computing power and enormous datasets have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for interpretability and explainability. Furthermore, deep learning methods have not yet proven their ability to effectively utilize the relevant domain knowledge and experience critical to human understanding. This aspect is missing from early data-focused approaches and has motivated knowledge-infused learning and other strategies for incorporating computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we outline how knowledge, provided as a knowledge graph, is incorporated into deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference to the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains.
The current state of prediction in neuroimagingSaigeRutherford
This document summarizes the current state of using machine learning to predict traits and behaviors from brain images. It discusses typical machine learning workflows and a favorite predictive model called the Brain Basis Set. It reviews what traits have been successfully predicted from brain images so far. It also discusses characteristics of successful predictive models, the role of large datasets, and ways prediction could be improved, such as through better data preprocessing and addressing bias. Throughout, it emphasizes the importance of transparency, reproducibility, and collaboration.
IRJET- Improved Model for Big Data Analytics using Dynamic Multi-Swarm Op...IRJET Journal
The document proposes an improved model for big data analytics using dynamic multi-swarm optimization and unsupervised learning algorithms. It develops an algorithm called DynamicK-reference Clustering that combines dynamic multi-swarm optimization with a k-reference clustering algorithm. The k-reference clustering algorithm uses reference distance weighting, Euclidean distance, and chi-square relative frequency to cluster mixed datasets. It was tested on several datasets from a machine learning repository and was shown to more efficiently cluster large, mixed datasets than other clustering algorithms like k-means and particle swarm optimization. The dynamic multi-swarm optimization helps guide the clustering algorithm to obtain more accurate cluster formations by providing the best initial value of k clusters.
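The key ingredient the summary names, a distance that mixes Euclidean terms for numeric features with frequency-weighted terms for categorical ones, can be sketched as follows. This is an illustrative stand-in; the paper's exact chi-square relative-frequency weighting may differ.

```python
import math
from collections import Counter

def mixed_distance(a, b, numeric_idx, cat_idx, cat_freq):
    # Numeric part: plain Euclidean distance.
    num = math.sqrt(sum((a[i] - b[i]) ** 2 for i in numeric_idx))
    # Categorical part: a mismatch costs more when the values are rare
    # (weight = inverse of the mean relative frequency of the two values).
    cat = 0.0
    for i in cat_idx:
        if a[i] != b[i]:
            f = (cat_freq[i][a[i]] + cat_freq[i][b[i]]) / 2
            cat += 1.0 / max(f, 1e-6)
    return num + cat

rows = [
    [1.0, 2.0, "red"],
    [1.5, 2.5, "red"],
    [8.0, 9.0, "blue"],
]
numeric_idx, cat_idx = [0, 1], [2]
# Relative frequency of each categorical value, per column.
cat_freq = {i: {k: c / len(rows) for k, c in Counter(r[i] for r in rows).items()}
            for i in cat_idx}

d_same = mixed_distance(rows[0], rows[1], numeric_idx, cat_idx, cat_freq)
d_diff = mixed_distance(rows[0], rows[2], numeric_idx, cat_idx, cat_freq)
```

A k-means-style loop over such a distance is what lets the algorithm cluster mixed datasets directly, without one-hot encoding the categorical columns away.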
The document discusses recent advances in artificial neural networks and deep learning applications. It describes how deep learning models have achieved human-level performance on tasks like image recognition, protein folding, and game playing. Large language models like GPT-3 and DALL-E 2 can generate coherent text and images from text prompts. Reinforcement learning agents can now generalize skills to new tasks without additional training. However, challenges remain in areas like video understanding, long-term coherence, deductive reasoning, and discovering new scientific laws. Future progress may require moving beyond gradient descent learning and developing systems that can perform deductive reasoning from few examples.
Get hands-on with Explainable AI at Machine Learning Interpretability(MLI) Gym!Sri Ambati
This meetup took place in Mountain View on January 24th, 2019.
Description:
With effort and contributions from researchers and practitioners in academia and industry, machine learning interpretation has become a young sub-field of ML. However, the norms around its definition and understanding are still in their infancy, and numerous different approaches are emerging rapidly. There is as yet no consistent explanation framework for evaluating and benchmarking these algorithms against the interpretability, completeness, and consistency of their explanations.
The idea of the gym is to provide a controlled, interactive environment for all forms of machine learning algorithms, initially focusing on supervised predictive modeling problems, that allows analysts and data scientists to explore, debug, and generate insightful understanding of their models through:
1. Model Validation: ways to explore and validate black-box ML systems, enabling model comparison both globally and locally and identifying biases in the training data through interpretation.
2. What-If Analysis: an interactive environment where communication can happen, i.e. learning through interaction, with the user able to conduct "what-if" analysis of the effect of single or multiple features and their interactions.
3. Model Debugging: ways to analyze model misbehavior by exploring counterfactual examples (adversarial examples and training).
4. Interpretable Models: the ability to build natively interpretable models, with the goal of simplifying complex models to enable better understanding.
The central concept of the MLI gym is an interactive environment where one can explore and simulate variations in the world (the world after a model is operationalized) beyond point estimates of the usual model metrics, e.g. ROC-AUC, confusion matrix, RMSE, and R² score.
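A what-if probe of the kind described in item 2 is just: hold a trained model fixed, vary one feature, and watch the prediction move. The scorer below is a hand-set logistic model with hypothetical coefficients, purely for illustration; in practice it would be the operationalized model.

```python
import math

def predict(income, debt_ratio):
    # Hypothetical logistic credit scorer (coefficients made up for the demo).
    z = 0.00005 * income - 3.0 * debt_ratio + 0.5
    return 1 / (1 + math.exp(-z))

base = predict(income=50_000, debt_ratio=0.4)
# What if the applicant halved their debt ratio, all else equal?
counterfactual = predict(income=50_000, debt_ratio=0.2)
effect = counterfactual - base
print(base, counterfactual, effect)
```

Sweeping one feature over a grid while freezing the rest gives the individual-conditional-expectation style curves such a gym would plot interactively.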
Speaker's Bio:
Pramit is a Lead Data Scientist at H2O.ai. His areas of interest include building statistical/machine learning models (Bayesian and frequentist modeling techniques) to help businesses realize their data-driven goals.
Currently, he is exploring model interpretation as a means to efficiently understand the true nature of predictive models, enabling model robustness and security. He believes effective model inference coupled with adversarial training could lead to trustworthy models with known blind spots. He has started an open-source project, Skater (https://github.com/datascienceinc/Skater), to address the need for model inference. The project is still in its early stages of development, but check it out; he is always eager for feedback.
A Comparative Study of Various Data Mining Techniques: Statistics, Decision T...Editor IJCATR
In this paper we focus on techniques for solving data mining tasks: statistics, decision trees, and neural networks. The new approach has succeeded in defining new criteria for the evaluation process and has obtained valuable results based on what each technique is, the environment in which each technique is used, the advantages and disadvantages of each technique, the consequences of choosing any of these techniques to extract hidden predictive information from large databases, and the methods of implementation of each technique. Finally, the paper presents some valuable recommendations in this field.
ARTIFICIAL INTELLIGENT ( ITS / TASK 6 ) done by Wael Saad Hameedi / P71062Wael Alawsey
This document provides an overview of artificial intelligence and several AI techniques. It discusses neural networks, genetic algorithms, expert systems, fuzzy logic, and the suitability of AI for solving transportation problems. Neural networks can be used to perform tasks like optical character recognition by analyzing images. Genetic algorithms use principles of natural selection to arrive at optimal solutions. Expert systems mimic human experts to provide advice. Fuzzy logic allows for gradual membership in sets rather than binary membership. Complexity and uncertainty make transportation well-suited for AI approaches.
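The "gradual membership" that distinguishes fuzzy logic from binary sets can be shown in a few lines. The triangular membership function below for a "moderate traffic" variable is a hypothetical example chosen for illustration.

```python
def triangular(x, lo, peak, hi):
    # Membership rises linearly lo -> peak, falls peak -> hi, 0 outside.
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# "Moderate traffic", peaking at 50 vehicles/min: membership is a
# degree in [0, 1], not a yes/no answer.
for rate in (10, 30, 50, 70):
    print(rate, triangular(rate, 20, 50, 80))
```

A fuzzy controller combines several such membership degrees through rules ("if traffic is moderate and visibility is low, then...") instead of crisp thresholds, which is why it suits the uncertain transportation settings the overview describes.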
Semantic, Cognitive, and Perceptual Computing – three intertwined strands of ...Amit Sheth
Keynote at Web Intelligence 2017: http://webintelligence2017.com/program/keynotes/
Video: https://youtu.be/EIbhcqakgvA Paper: http://knoesis.org/node/2698
Abstract: While Bill Gates, Stephen Hawking, Elon Musk, Peter Thiel, and others engage in OpenAI discussions of whether or not AI, robots, and machines will replace humans, proponents of human-centric computing continue to extend work in which humans and machine partner in contextualized and personalized processing of multimodal data to derive actionable information.
In this talk, we discuss how maturing towards the emerging paradigms of semantic computing (SC), cognitive computing (CC), and perceptual computing (PC) provides a continuum through which to exploit the ever-increasing and growing diversity of data that could enhance people’s daily lives. SC and CC sift through raw data to personalize it according to context and individual users, creating abstractions that move the data closer to what humans can readily understand and apply in decision-making. PC, which interacts with the surrounding environment to collect data that is relevant and useful in understanding the outside world, is characterized by interpretative and exploratory activities that are supported by the use of prior/background knowledge. Using the examples of personalized digital health and a smart city, we will demonstrate how the trio of these computing paradigms form complementary capabilities that will enable the development of the next generation of intelligent systems. For background: http://bit.ly/PCSComputing
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data are hard to find and to productively reuse, because data and their metadata (i) are wholly inaccessible, (ii) are in non-standard or incompatible representations, (iii) do not conform to community standards, or (iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready, the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
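Knowledge wrangled into a "simple but effective" semantic representation often reduces, at its smallest, to subject-predicate-object triples with pattern queries. The toy store below (with made-up drug facts) is only a stand-in for the knowledge-graph infrastructure the talk describes, not its actual platform.

```python
# A minimal triple store with one-hop pattern matching.
triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("ibuprofen", "treats", "headache"),
]

def query(s=None, p=None, o=None):
    # Match triples against an optional subject/predicate/object pattern.
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Which drugs treat headache, per the graph?
headache_drugs = [s for s, _, _ in query(p="treats", o="headache")]
print(headache_drugs)
```

In a neurosymbolic setting, answers retrieved this way can constrain or explain a neural model's predictions, which is exactly the kind of plausible-explanation machinery the abstract points to.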
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
M.Sc. Thesis Topics and Proposals @ Polimi Data Science Lab - 2024 - prof. Br...Marco Brambilla
The document outlines several thesis proposal topics in the areas of explainable artificial intelligence (AI), natural language processing (NLP), image analysis, security, and data science. Some key proposals include developing techniques for semantic clustering of words to label images for explainability, using large language models to guide investigative decisions, and generating realistic simulated data to test national security analysis tools. The proposals cover a range of technologies including deep learning, computer vision, NLP, and generative models.
Innovations in technology have revolutionized financial services to the extent that large financial institutions like Goldman Sachs now claim to be technology companies! It is no secret that technological innovations like data science and AI are fundamentally changing how financial products are created, tested, and delivered. While it is exciting to learn about the technologies themselves, there is very little guidance on how companies and financial professionals should retool and gear themselves towards the upcoming revolution.
In this master class, we will discuss key innovations in Data Science and AI and connect applications of these novel fields in forecasting and optimization. Through case studies and examples, we will demonstrate why now is the time you should invest to learn about the topics that will reshape the financial services industry of the future!
AI in Finance
This talk presents areas of investigation underway at the Rensselaer Institute for Data Exploration and Applications. First presented at Flipkart, Bangalore India, 3/2015.
Machine learning (ML) is the scientific study of the algorithms and statistical models that computer systems use to progressively improve their performance on a specific task. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in applications such as email filtering, detection of network intruders, and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory, and application domains to the field of machine learning. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning. Applied across business problems, machine learning is the study of computer systems that learn from data and experience. It is applied in an incredibly wide variety of areas, from medicine to advertising and from the military to the pedestrian. Any area in which you need to make sense of data is a potential customer of machine learning.
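"Learning from training data without explicit instructions" has a very small concrete instance: a 1-nearest-neighbour classifier, which predicts by analogy to stored examples rather than by hand-written rules. The spam/ham feature vectors below are invented for illustration.

```python
def nearest_neighbour(train, query):
    # train: list of (features, label) pairs; predict the label of the
    # training example closest to the query in squared Euclidean distance.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], query))[1]

train = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
         ((5.0, 5.0), "ham"), ((5.5, 4.5), "ham")]
print(nearest_neighbour(train, (1.1, 0.9)))  # -> spam
print(nearest_neighbour(train, (5.2, 5.1)))  # -> ham
```

Everything else in the paragraph, email filtering, intrusion detection, computer vision, scales this same pattern up: richer features, bigger training sets, and more expressive models in place of the distance lookup.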
Discover How Scientific Data is Used for the Public Good with Natural Languag...BaoTramDuong2
This document discusses using natural language processing techniques like n-grams, deep learning models, and named entity recognition to analyze scientific publications and identify references to datasets. It evaluates classifiers like recurrent neural networks and convolutional neural networks to perform sequence labeling and extract dataset citations. The goal is to help government agencies and researchers quickly find datasets, measures, and experts by automating the analysis of research articles.
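Before reaching for RNN or CNN sequence labellers, the dataset-mention task above can be approximated with a rule-based baseline: flag Title-Case spans ending in a dataset-like head word. The pattern and example sentence are assumptions for illustration, not the document's actual system.

```python
import re

# Hypothetical pattern: one or more Title-Case words followed by a
# dataset-ish head word (Survey/Study/Dataset).
PATTERN = re.compile(r"((?:[A-Z][a-z]+ )+(?:Survey|Study|Dataset))")

text = ("We link respondents from the National Longitudinal Survey "
        "to records in the Current Population Survey.")
mentions = PATTERN.findall(text)
print(mentions)
```

Trained taggers improve on this baseline mainly by catching acronyms, unusual names, and context-dependent references that no fixed head-word list covers.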
This document analyzes the patterns of scientific collaboration in competitive intelligence studies from 1995 to 2012. It finds that the collaboration network exhibits isolated authors and groups with weak links. Topics studied have diversified from early focuses on technology and management to newer areas like open sources, economic intelligence, and visualization. However, the field lacks common descriptors and consolidated channels of communication like academic journals. Overall, the scientific community in this area demonstrates weak interconnectivity and dispersion.
This document provides an overview of machine learning concepts including supervised learning, unsupervised learning, and reinforcement learning. It discusses common machine learning applications and challenges. Key topics covered include linear regression, classification, clustering, neural networks, bias-variance tradeoff, and model selection. Evaluation techniques like training error, validation error, and test error are also summarized.
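The train/validation/test discipline summarized above fits in a few lines: fit candidate models on the training split, pick the winner by validation error, and only then score it on held-out test data. This sketch compares a mean predictor against a least-squares line on made-up data.

```python
def linear_fit(xs, ys):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Data follows y = 2x + 1 exactly; split into train/validation/test.
xs = [float(i) for i in range(20)]
ys = [2 * x + 1 for x in xs]
train_x, val_x, test_x = xs[:10], xs[10:15], xs[15:]
train_y, val_y, test_y = ys[:10], ys[10:15], ys[15:]

mean_y = sum(train_y) / len(train_y)
models = {"mean": lambda x: mean_y}
a, b = linear_fit(train_x, train_y)
models["linear"] = lambda x: a * x + b

# Model selection on validation error; final report on test error.
best = min(models, key=lambda name: mse(models[name], val_x, val_y))
test_error = mse(models[best], test_x, test_y)
print(best, test_error)
```

Reusing the test split for selection would leak information into the choice of model, which is exactly why the summary distinguishes training, validation, and test error.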
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
The current state of prediction in neuroimagingSaigeRutherford
This document summarizes the current state of using machine learning to predict traits and behaviors from brain images. It discusses typical machine learning workflows and a favorite predictive model called the Brain Basis Set. It reviews what traits have been successfully predicted from brain images so far. It also discusses characteristics of successful predictive models, the role of large datasets, and ways prediction could be improved, such as through better data preprocessing and addressing bias. Throughout, it emphasizes the importance of transparency, reproducibility, and collaboration.
IRJET- Improved Model for Big Data Analytics using Dynamic Multi-Swarm Op...IRJET Journal
The document proposes an improved model for big data analytics using dynamic multi-swarm optimization and unsupervised learning algorithms. It develops an algorithm called DynamicK-reference Clustering that combines dynamic multi-swarm optimization with a k-reference clustering algorithm. The k-reference clustering algorithm uses reference distance weighting, Euclidean distance, and chi-square relative frequency to cluster mixed datasets. It was tested on several datasets from a machine learning repository and was shown to more efficiently cluster large, mixed datasets than other clustering algorithms like k-means and particle swarm optimization. The dynamic multi-swarm optimization helps guide the clustering algorithm to obtain more accurate cluster formations by providing the best initial value of k clusters.
The document discusses recent advances in artificial neural networks and deep learning applications. It describes how deep learning models have achieved human-level performance on tasks like image recognition, protein folding, and game playing. Large language models like GPT-3 and DALL-E 2 can generate coherent text and images from text prompts. Reinforcement learning agents can now generalize skills to new tasks without additional training. However, challenges remain in areas like video understanding, long-term coherence, deductive reasoning, and discovering new scientific laws. Future progress may require moving beyond gradient descent learning and developing systems that can perform deductive reasoning from few examples.
Get hands-on with Explainable AI at Machine Learning Interpretability(MLI) Gym!Sri Ambati
This meetup took place in Mountain View on January 24th, 2019.
Description:
With the effort and contributions from researchers and practitioners from academia and industry, Machine Learning Interpretation has become a young sub-field of ML. However, the norms around its definition and understanding is still in its infancy and there are numerous different approaches emerging rapidly. However, there seems to be a lack of a consistent explanation framework to evaluate and consistently benchmark different algorithms - evaluating against interpretation, completeness and consistency of the algorithms.
The idea with the gym is to provide a controlled interactive environment for all forms of Machine Learning algorithms, - initially focusing on supervised predictive modeling problems, to allow analysts and data-scientists to explore, debug and generate insightful understanding of the models by
1.Model Validation: Ways to explore and validate black box ML systems enabling model comparison both globally and locally - identifying biases in the training data through interpretation.
2.What-if Analysis: An interactive environment where communication can happen i.e. enable learning through interactions. User having the ability to conduct "What-If" analysis - effect of single or multiple features and their interactions
3.Model Debugging: Ways to analyze the misbehavior of the model by exploring counterfactual examples(adversarial examples and training)
4. Interpretable Models: Ability to build natively interpretable models - with the goal to simplify complex models to enable better understanding.
The central concept with MLI gym is to have an interactive environment where one could explore and simulate variations in the world(a world post a model is operationalized) beyond the defined model metrics point estimates - e.g. ROC-AUC, confusion matrix, RMSE, R2 score and others.
Speaker's Bio:
Pramit is a Lead Data Scientist/ at H2O.ai. His area of interests is building Statistical/Machine Learning models(Bayesian and Frequentist Modeling techniques) to help the business realize their data-driven goals.
Currently, he is exploring "Model Interpretation" as a means to efficiently understand the true nature of predictive models and to enable model robustness and security. He believes effective model inference coupled with adversarial training could lead to trustworthy models with known blind spots. He has started an open-source project, Skater (https://github.com/datascienceinc/Skater), to address the need for model inference. The project is still in its early stages of development, but check it out; feedback is always welcome.
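Skater provides model-agnostic interpretation techniques; one of the simplest such techniques, permutation feature importance, can be sketched in a few lines. This is an illustrative, self-contained version (the function and the toy model below are assumptions for demonstration, not Skater's actual API):

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Shuffle one feature at a time and record how much the error grows:
    a large increase means the model relies on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])       # break the feature/target link
            increases.append(metric(y, model(X_perm)) - baseline)
        importances[j] = np.mean(increases)
    return importances

# Toy black box: the target depends on feature 0 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]             # stand-in for any fitted model
mse = lambda y_true, y_pred: np.mean((y_true - y_pred) ** 2)

imp = permutation_importance(model, X, y, mse)
# imp[0] is large; imp[1] and imp[2] are exactly zero for this model
```

Because the technique only needs predictions, it applies to any black-box model, which is exactly the model-validation use case described above.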
A Comparative Study of Various Data Mining Techniques: Statistics, Decision T...Editor IJCATR
In this paper we focus on techniques for solving data mining tasks: statistics, decision trees, and neural networks. The approach defines new criteria for the evaluation process and obtains valuable results based on what each technique is, the environment in which it is used, its advantages and disadvantages, the consequences of choosing it to extract hidden predictive information from large databases, and its methods of implementation. Finally, the paper presents some valuable recommendations in this field.
ARTIFICIAL INTELLIGENT ( ITS / TASK 6 ) done by Wael Saad Hameedi / P71062Wael Alawsey
This document provides an overview of artificial intelligence and several AI techniques. It discusses neural networks, genetic algorithms, expert systems, fuzzy logic, and the suitability of AI for solving transportation problems. Neural networks can be used to perform tasks like optical character recognition by analyzing images. Genetic algorithms use principles of natural selection to arrive at optimal solutions. Expert systems mimic human experts to provide advice. Fuzzy logic allows for gradual membership in sets rather than binary membership. Complexity and uncertainty make transportation well-suited for AI approaches.
Semantic, Cognitive, and Perceptual Computing – three intertwined strands of ...Amit Sheth
Keynote at Web Intelligence 2017: http://webintelligence2017.com/program/keynotes/
Video: https://youtu.be/EIbhcqakgvA Paper: http://knoesis.org/node/2698
Abstract: While Bill Gates, Stephen Hawking, Elon Musk, Peter Thiel, and others engage in OpenAI discussions of whether or not AI, robots, and machines will replace humans, proponents of human-centric computing continue to extend work in which humans and machine partner in contextualized and personalized processing of multimodal data to derive actionable information.
In this talk, we discuss how maturing towards the emerging paradigms of semantic computing (SC), cognitive computing (CC), and perceptual computing (PC) provides a continuum through which to exploit the ever-increasing diversity of data that could enhance people’s daily lives. SC and CC sift through raw data to personalize it according to context and individual users, creating abstractions that move the data closer to what humans can readily understand and apply in decision-making. PC, which interacts with the surrounding environment to collect data that is relevant and useful in understanding the outside world, is characterized by interpretative and exploratory activities that are supported by the use of prior/background knowledge. Using the examples of personalized digital health and a smart city, we will demonstrate how the trio of these computing paradigms forms complementary capabilities that will enable the development of the next generation of intelligent systems. For background: http://bit.ly/PCSComputing
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
M.Sc. Thesis Topics and Proposals @ Polimi Data Science Lab - 2024 - prof. Br...Marco Brambilla
The document outlines several thesis proposal topics in the areas of explainable artificial intelligence (AI), natural language processing (NLP), image analysis, security, and data science. Some key proposals include developing techniques for semantic clustering of words to label images for explainability, using large language models to guide investigative decisions, and generating realistic simulated data to test national security analysis tools. The proposals cover a range of technologies including deep learning, computer vision, NLP, and generative models.
Innovations in technology have revolutionized financial services to such an extent that large financial institutions like Goldman Sachs are claiming to be technology companies! It is no secret that technological innovations like data science and AI are fundamentally changing how financial products are created, tested, and delivered. While it is exciting to learn about the technologies themselves, there is very little guidance on how companies and financial professionals should retool and gear themselves for the upcoming revolution.
In this master class, we will discuss key innovations in data science and AI and connect these novel fields to applications in forecasting and optimization. Through case studies and examples, we will demonstrate why now is the time to invest in learning about the topics that will reshape the financial services industry of the future!
AI in Finance
This talk presents areas of investigation underway at the Rensselaer Institute for Data Exploration and Applications. First presented at Flipkart, Bangalore India, 3/2015.
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in applications such as email filtering, detection of network intruders, and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory, and application domains to the field of machine learning. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning. Applied to business problems, machine learning is the study of computer systems that learn from data and experience. It is used in an incredibly wide variety of areas, from medicine to advertising and from military applications to everyday ones. Any area in which you need to make sense of data is a potential customer of machine learning.
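The "training data → model → predictions" loop described above can be made concrete with a minimal least-squares fit (a generic numpy sketch, not tied to any particular ML library; the data-generating line y = 3x + 0.5 is an arbitrary choice for illustration):

```python
import numpy as np

# Training data: inputs X with noisy targets generated by y = 3x + 0.5.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * X + 0.5 + 0.05 * rng.normal(size=200)

# "Learning": fit slope and intercept by least squares on the sample data.
A = np.column_stack([X, np.ones_like(X)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# "Prediction": apply the learned model to inputs it has never seen.
x_new = np.array([0.3, -0.7])
y_new = slope * x_new + intercept
```

The fitted parameters approximate the true ones (3 and 0.5) despite the noise, which is the essence of learning from data rather than being explicitly programmed.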
Discover How Scientific Data is Used for the Public Good with Natural Languag...BaoTramDuong2
This document discusses using natural language processing techniques like n-grams, deep learning models, and named entity recognition to analyze scientific publications and identify references to datasets. It evaluates classifiers like recurrent neural networks and convolutional neural networks to perform sequence labeling and extract dataset citations. The goal is to help government agencies and researchers quickly find datasets, measures, and experts by automating the analysis of research articles.
This document analyzes the patterns of scientific collaboration in competitive intelligence studies from 1995 to 2012. It finds that the collaboration network exhibits isolated authors and groups with weak links. Topics studied have diversified from early focuses on technology and management to newer areas like open sources, economic intelligence, and visualization. However, the field lacks common descriptors and consolidated channels of communication like academic journals. Overall, the scientific community in this area demonstrates weak interconnectivity and dispersion.
This document provides an overview of machine learning concepts including supervised learning, unsupervised learning, and reinforcement learning. It discusses common machine learning applications and challenges. Key topics covered include linear regression, classification, clustering, neural networks, bias-variance tradeoff, and model selection. Evaluation techniques like training error, validation error, and test error are also summarized.
Similar to DeepMoD: Deep Learning Model discovery (20)
This presentation briefly explores the structural and functional attributes of nucleotides, the structure and function of genetic materials, and the impact of UV rays and pH upon them.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of more than 200 years, thermodynamics R&D and application benefited from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at both micro and macro levels.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
This presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test detects micronuclei formation inside the cells of nearly every multicellular organism. Micronuclei form during chromosomal separation at metaphase.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Nucleophilic Addition of carbonyl compounds.pptxSSR02
Nucleophilic addition is the most important reaction of carbonyls: not just aldehydes and ketones, but carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and a weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or a modified gravity theory is mitigated, at least in part.
1. Gert-Jan Both, Remy Kusters
CRI Research, Université Paris Descartes, Paris, France
DeepMoD
Model discovery
neural networks
@GJ_Both
@RemyKusters
2. AI, ML and statistics
Statistical ML vs. Symbolic ML
@sandserif
Main criticisms …
ML provides uninterpretable solutions
3. Goal: develop tools to discover interpretable models for quantitative science.
Approach: combine the statistical power of neural networks with the symbolic power of regression.
4. Goal: develop tools to discover interpretable models for quantitative science.
Approach: combine the statistical power of neural networks with the symbolic power of regression.
[Figure: noisy data in space and time, sampled from an underlying distribution $P(x, t)$, together with the governing differential equation.]
9. Our approach …
Input: the space and time coordinates $(\vec{x}, t)$ are fed to the neural network, whose output is the inferred solution $\hat{u}$.
Construct the library w.r.t. the inferred solution:
$$\hat{\Theta} = \begin{pmatrix} \hat{u} & \hat{u}_x & \hat{u}\,\hat{u}_x & \cdots \\ \hat{u}_{xx} & \hat{u}\,\hat{u}_{xx} & \hat{u}_x\,\hat{u}_{xx} & \cdots \end{pmatrix}$$
Include the regression task within the neural network
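The two steps above, building a library of candidate terms from the inferred solution and regressing the time derivative onto it, can be sketched as follows. This is an illustrative stand-alone version using finite differences and sequentially thresholded least squares on a known diffusion process; DeepMoD itself obtains the derivatives by automatic differentiation of the trained network and performs the regression inside the training loop, and the grid and the 0.05 threshold here are arbitrary choices:

```python
import numpy as np

# Exact Gaussian solution of the diffusion equation u_t = nu * u_xx.
nu = 0.5
x = np.linspace(-5.0, 5.0, 201)
t = np.linspace(0.25, 2.0, 81)
T, X = np.meshgrid(t, x, indexing="ij")          # arrays of shape (nt, nx)
u = np.exp(-X**2 / (4 * nu * T)) / np.sqrt(4 * np.pi * nu * T)

# Derivatives by finite differences (DeepMoD would use autodiff here).
dt, dx = t[1] - t[0], x[1] - x[0]
u_t = np.gradient(u, dt, axis=0)
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)

# Drop boundary points where the finite differences are least accurate.
core = lambda a: a[2:-2, 2:-2].ravel()

# Candidate library: Theta = [1, u, u_x, u*u_x, u_xx, u*u_xx].
Theta = np.column_stack([core(np.ones_like(u)), core(u), core(u_x),
                         core(u * u_x), core(u_xx), core(u * u_xx)])
target = core(u_t)

# Sequentially thresholded least squares: refit, zero small coefficients.
xi, *_ = np.linalg.lstsq(Theta, target, rcond=None)
for _ in range(10):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    xi[~small], *_ = np.linalg.lstsq(Theta[:, ~small], target, rcond=None)
```

On this exact solution the procedure should recover u_t ≈ 0.5 · u_xx, i.e. only the diffusion term (library column 4) survives the thresholding, with a coefficient close to nu.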
10. How do we train a neural network?
Optimise the loss function:
$$\mathcal{L} = \mathcal{L}_{MSE} + \mathcal{L}_{Reg} + \mathcal{L}_{L_1}$$
Learn the mapping $(\vec{x}, t) \to \hat{u}$: approximate the data as well as possible.
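The combined loss from the slide can be written out explicitly. A minimal numpy sketch follows (the function name and the λ weight are illustrative assumptions; DeepMoD evaluates these terms inside a deep-learning framework so that they can be differentiated through the network):

```python
import numpy as np

def deepmod_style_loss(u_data, u_pred, u_t, Theta, xi, lam=1e-5):
    """The three terms from the slide: data fit (MSE), regression residual
    of the candidate PDE u_t = Theta @ xi, and an L1 sparsity penalty."""
    L_mse = np.mean((u_pred - u_data) ** 2)   # network output vs. noisy data
    L_reg = np.mean((u_t - Theta @ xi) ** 2)  # how well the PDE is satisfied
    L_l1 = lam * np.sum(np.abs(xi))           # push the coefficient vector
    return L_mse + L_reg + L_l1               # towards a sparse equation

# Degenerate check: with a perfect data fit and an exactly satisfied PDE,
# only the L1 term remains, lam * |xi|_1 = 1e-5 * 0.5.
rng = np.random.default_rng(0)
u_data = rng.normal(size=50)
Theta = rng.normal(size=(50, 3))
xi = np.array([0.0, 0.5, 0.0])
loss = deepmod_style_loss(u_data, u_data, Theta @ xi, Theta, xi)
```

Minimizing the first term alone would merely denoise the data; the second and third terms are what force the network's solution to also obey a sparse, interpretable equation.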
11. How do we train a neural network
Optimise loss function:
$L = L_{MSE} + L_{Reg} + L_{L1}$
Learn the mapping
$(\vec{x}, t) \rightarrow \vec{u}$
Approximate the data as well as possible …
Regression
(Seek the PDE)
Discover the differential equation
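The combined loss above can be sketched in a few lines. This is an illustrative sketch, not DeepMoD's actual API: the names `theta` (the library of candidate terms, e.g. $[1, u, u_x, u_{xx}, u u_x]$ evaluated on the network output) and `xi` (the sparse coefficient vector) are assumptions for the example.

```python
import numpy as np

# Hedged sketch of a DeepMoD-style loss L = L_MSE + L_Reg + L_L1.
# u_pred: network output, u_data: noisy observations,
# u_t:    time derivative of the network output,
# theta:  library of candidate terms evaluated at the sample points,
# xi:     coefficient vector selecting the active terms (assumed names).
def deepmod_loss(u_pred, u_data, u_t, theta, xi, lam=1e-5):
    l_mse = np.mean((u_pred - u_data) ** 2)   # fit the data
    l_reg = np.mean((u_t - theta @ xi) ** 2)  # satisfy the candidate PDE
    l_l1 = lam * np.sum(np.abs(xi))           # promote sparsity in xi
    return l_mse + l_reg + l_l1
```

Both the network weights (through `u_pred`, `u_t`, and `theta`) and the coefficients `xi` are optimised against this single scalar.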
12. L1 Penalty
Promoting sparsity
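Why the L1 penalty promotes sparsity: for a single coefficient, minimising $\frac{1}{2}(x - a)^2 + \lambda |x|$ has the closed-form "soft-thresholding" solution, which sets small coefficients exactly to zero rather than merely shrinking them (as an L2 penalty would). A minimal illustration:

```python
import numpy as np

# Soft-thresholding: the proximal operator of the L1 penalty.
# Coefficients with |a| <= lam are set exactly to zero.
def soft_threshold(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

coeffs = np.array([0.8, 0.05, -0.3, 0.01])
sparse = soft_threshold(coeffs, 0.1)  # small entries collapse exactly to zero
```

Applied to the candidate-term coefficients, this is what prunes the library down to the few terms that make up the discovered PDE.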
17. Resilience to noise / sampling size
Relative error on coefficients
Burgers’ equation
Applications in physics
Higher order PDEs
[Figure: ground truth, sampled, and inferred fields at t = 5, 7, 9]
Coupled equations
u: particle density
w: attractant density
2D equations
18. Raw images → density field
$u_t = 0.3\,u_y + 0.01\,u_{xx} + 0.01\,u_{yy}$
Underlying advection-diffusion (AD) equation
Apply to real data …
Model discovery from experimental data
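To see what the discovered equation $u_t = 0.3\,u_y + 0.01\,u_{xx} + 0.01\,u_{yy}$ describes, one can step it forward with explicit finite differences. This is an illustrative sketch of the dynamics, not part of the authors' pipeline; periodic boundaries and the grid/step sizes are assumptions.

```python
import numpy as np

# Explicit Euler step for the discovered 2D advection-diffusion equation
#   u_t = 0.3 u_y + 0.01 u_xx + 0.01 u_yy
# on a periodic grid (axis 0 = y, axis 1 = x).
def step(u, dx, dt, v=0.3, D=0.01):
    u_y = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    u_xx = (np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1)) / dx**2
    u_yy = (np.roll(u, -1, axis=0) - 2 * u + np.roll(u, 1, axis=0)) / dx**2
    return u + dt * (v * u_y + D * (u_xx + u_yy))
```

The advection term transports the density along y at speed 0.3 while the two diffusion terms smooth it; with periodic boundaries each step conserves total mass.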
21. Density estimation: Temporal normalizing flows
[Figure: positional data x(t) mapped from real space (x, t) to latent space (z, t)]
Temporal Normalizing Flows (tNFs) extend NFs with a temporal
component to estimate a time-dependent density evolution
Code available on Github:
https://github.com/PhIMaL
Paper available on arXiv:
1912.09092
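The tNF idea can be sketched with a toy time-conditioned flow. The affine parameterisation below (`mu`, `sigma` as functions of t) is a hypothetical stand-in, not the paper's architecture; the point is that the change-of-variables formula yields a properly normalised, time-dependent density.

```python
import numpy as np

# Toy temporal normalizing flow: a time-conditioned affine map
# x -> z = (x - mu(t)) / sigma(t) with a standard-normal latent density.
# Change of variables: p(x, t) = p_z(z) * |dz/dx|, with |dz/dx| = 1/sigma(t).
def tnf_density(x, t, mu=lambda t: 0.5 * t, sigma=lambda t: 1.0 + 0.1 * t):
    z = (x - mu(t)) / sigma(t)                       # map to latent space
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)   # standard normal in z
    log_det = -np.log(sigma(t))                      # Jacobian of the map
    return np.exp(log_pz + log_det)
```

At every t the resulting density integrates to one, which is exactly the conservation property a real tNF fits to positional data.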
22. Density estimation: Temporal normalizing flows
tNFs allow discovery of PDEs with conservation
constraints, e.g. Fokker-Planck, …
23. Code available on Github:
https://github.com/PhIMaL
DeepMoD
Model discovery
neural networks
Paper available on arXiv:
1904.08406
T. Chrysostomou (Intern)
C. Le Scao (Intern)
A. Brandon-Bravo (Engineer)
G.-J. Both (PhD)
@GJ_Both
@RemyKusters