"Statistical Physics Studies of Machine Learning Problems" by Lenka Zdeborová, Researcher @ CNRS
Abstract: We will offer some insight into the following questions: What makes the problems studied in machine learning and statistical physics related? How can this relation be used to better understand the performance and limitations of machine learning systems? What happens when a phase transition is found in a computational problem? How do phase transitions influence algorithmic hardness?
2. PHYSICS IN MACHINE LEARNING
Long history of physics influencing machine learning. Examples:
- Gibbs-Bogoliubov-Feynman ('60s): the physics behind variational inference.
- Hopfield model ('82); spin-glass models of neural networks: Amit, Gutfreund, Sompolinsky ('85).
- Boltzmann machine: Hinton, Sejnowski ('86), named after the Boltzmann distribution.
- Gardner ('87): maximum storage capacity of neural networks (related to the VC dimension).
- SVMs by Boser, Guyon, Vapnik ('92), inspired by Krauth, Mézard ('87).
- Many papers on neural networks in physics in the '80s and '90s.
5. THE PUZZLE OF GENERALIZATION
According to PAC bounds (via VC dimension, Rademacher complexity), neural networks that generalize well should not be able to fit random labels. (ICLR'16)
6. THEORETICAL QUESTIONS IN DEEP LEARNING
Why the lack of overfitting? "More parameters = more overfitting" does not seem to hold in deep learning.
7. SAMPLE COMPLEXITY
CIFAR-10: 50,000 samples. How many samples are really needed?
How low is the optimal sample complexity? Are we achieving it? If not, is it because of architectures or algorithms?
9. THEORETICAL-PHYSICS ROADMAP
1. Experimental observation or fundamental hypothesis.
2. Unreasonably simple model for which the toughest questions can be understood mathematically.
3. Generalize to more realistic models; this relies on universality (= important laws of nature rarely depend on many details).
10. MODELS
Ising model (magnetism of materials):
H = -J \sum_{(ij) \in E} S_i S_j,    P(\{S_i\}_{i=1,...,N}) = \frac{e^{-\beta H}}{Z}
In data science, models are used to fit the data (e.g. linear regression: what is the best straight line that captures the dependence of y on x?).
In physics, models are the main tool for understanding.
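The Gibbs measure above can be sampled numerically. A minimal sketch in Python (a 1D ring of spins and single-spin-flip Metropolis updates are illustrative choices, not from the slides):

```python
import numpy as np

def ising_energy(S, J, edges):
    """Ising Hamiltonian H = -J * sum_{(i,j) in E} S_i S_j."""
    return -J * sum(S[i] * S[j] for i, j in edges)

def metropolis_ising(n_spins=20, J=1.0, beta=0.6, n_steps=20000, seed=0):
    """Draw an approximate sample from P({S_i}) = exp(-beta*H) / Z
    on a ring of n_spins spins, via single-spin-flip Metropolis."""
    rng = np.random.default_rng(seed)
    edges = [(i, (i + 1) % n_spins) for i in range(n_spins)]  # ring lattice
    S = rng.choice([-1, 1], size=n_spins)
    E = float(ising_energy(S, J, edges))
    for _ in range(n_steps):
        i = int(rng.integers(n_spins))
        # flipping S_i only changes the two bonds touching spin i
        dE = 2.0 * J * S[i] * (S[(i - 1) % n_spins] + S[(i + 1) % n_spins])
        if rng.random() < np.exp(-beta * dE):  # Metropolis acceptance rule
            S[i] = -S[i]
            E += dE
    return S, E
```

Note that the normalization Z never needs to be computed: Metropolis only uses energy differences, which is exactly why Gibbs measures are tractable to sample even when Z is not.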
11. MODELS
In data science, models are used to fit the data (e.g. linear regression: what is the best straight line that captures the dependence of y on x?).
In physics, models are the main tool for understanding.
p-spin model (glass transition):
H = -\sum_{(ijk) \in E} J_{ijk} S_i S_j S_k,    J_{ijk} \sim N(0, 1),    P(\{S_i\}_{i=1,...,N}) = \frac{e^{-\beta H}}{Z}
12. IS THIS USEFUL IN MACHINE LEARNING?
Example: a single-layer neural network = generalized linear regression.
Given data X = (X_{\mu i}), \mu = 1, ..., n, i = 1, ..., p, and labels y, find weights w such that
y_\mu = \varphi\Big( \sum_{i=1}^{p} X_{\mu i} w_i \Big),
where \varphi is a (noisy) activation function.
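The generalized linear model on this slide can be written out directly. A minimal sketch (the choice \varphi = sign and the Gaussian data are illustrative assumptions, corresponding to the noiseless perceptron case):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 50                    # n samples, dimension p
X = rng.standard_normal((n, p))   # data matrix, rows indexed by mu
w = rng.standard_normal(p)        # weights
phi = np.sign                     # activation function (noiseless choice here)
y = phi(X @ w)                    # labels: y_mu = phi(sum_i X_{mu i} w_i)
```

Other choices of phi recover other classics: the identity gives linear regression, a sigmoid plus Bernoulli noise gives logistic regression.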
13. TEACHER-STUDENT MODEL
Take data X_{\mu i} random iid Gaussian and teacher weights w*_i random iid from a prior P_w. Create labels
y_\mu = \varphi\Big( \sum_{i=1}^{p} X_{\mu i} w*_i \Big).
Goal: compute the best possible generalisation error achievable with n samples of dimension p.
High-dimensional regime: p \to \infty, n \to \infty, n/p = \Omega(1).
[Gardner, Derrida '89; Györgyi '90]
14. TEACHER-STUDENT MODEL
Same setting: X_{\mu i} iid Gaussian, w*_i iid from P_w, y_\mu = \varphi( \sum_{i=1}^{p} X_{\mu i} w*_i ); p \to \infty, n \to \infty, n/p = \Omega(1).
What did we win? The posterior P(w | X, y) is tractable with the replica and cavity methods, developed in the theory of spin glasses.
[Gardner, Derrida '89; Györgyi '90]
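The slides obtain the Bayes-optimal error via the replica method; as a purely numerical illustration of the teacher-student setup and the role of \alpha = n/p, here is a sketch with a linear teacher and a ridge-regression student (the linear \varphi, the noise level, and the ridge penalty are illustrative assumptions, not the Bayes-optimal estimator):

```python
import numpy as np

def teacher_student(p=200, alpha=2.0, noise=0.1, ridge=0.1, seed=0):
    """Teacher-student experiment: y = X w* + noise, student = ridge regression.
    High-dimensional regime: n = alpha * p, both large."""
    rng = np.random.default_rng(seed)
    n = int(alpha * p)
    w_star = rng.standard_normal(p)               # teacher weights, iid P_w = N(0,1)
    X = rng.standard_normal((n, p)) / np.sqrt(p)  # iid Gaussian data, rows of norm ~1
    y = X @ w_star + noise * rng.standard_normal(n)
    # student: ridge estimate, a simple stand-in for the Bayes posterior mean
    w_hat = np.linalg.solve(X.T @ X + ridge * np.eye(p), X.T @ y)
    # generalization error on fresh data from the same teacher
    X_test = rng.standard_normal((1000, p)) / np.sqrt(p)
    return np.mean((X_test @ (w_hat - w_star)) ** 2)

# the error should drop as alpha = n/p grows
errs = [teacher_student(alpha=a) for a in (0.5, 2.0, 8.0)]
```

Sweeping alpha in such an experiment and comparing against the replica prediction is exactly the kind of check the theory enables.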
15. NEW W.R.T. 1990
- Optimal generalisation error for any non-linearity and prior on weights.
- Proof of the replica formula for the optimal generalisation error.
- Approximate message passing (AMP) provably reaching the optimal generalization error (outside of the hard region).
[Barbier, Krzakala, Macris, Miolane, LZ, COLT'18, arXiv:1708.03395]
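The AMP referred to here is the generalized AMP for the GLM with the Bayes-optimal denoiser; as a self-contained illustration of the message-passing idea, here is the classic AMP iteration for a sparse linear model with a soft-threshold denoiser (the threshold parameter theta and the problem sizes are illustrative assumptions):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_sparse(A, y, theta=1.5, n_iter=50):
    """Approximate message passing for y = A x + noise with a sparse x
    (Donoho-Maleki-Montanari iteration with soft-threshold denoiser)."""
    n, p = A.shape
    x = np.zeros(p)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(n)  # effective noise level estimate
        x = soft(x + A.T @ z, theta * tau)    # denoising step on the pseudo-data
        # residual with the Onsager reaction term: (1/n) * number of
        # active coordinates, i.e. the average derivative of the denoiser
        z = y - A @ x + (z / n) * np.count_nonzero(x)
    return x
```

The Onsager term is what distinguishes AMP from plain iterative thresholding: it is the cavity-method correction that makes the effective noise Gaussian and the iteration exactly trackable by state evolution.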
20. INCLUDING HIDDEN VARIABLES
Committee machine: p input units, K hidden units, one output unit (L = 3 layers); data X, labels y; n training samples. First-layer weights w are learned; second-layer weights v1 & v2 are fixed.
Limit: K = O(1), p \to \infty, n \to \infty, \alpha = n/p = \Omega(1).
Model from Schwarze '92.
Proof of the replica formula, and approximate message passing: Aubin, Maillard, Barbier, Macris, Krzakala, LZ '19, spotlight at NeurIPS'18.
21. PHASE TRANSITONS
Specialization phase transition
= hidden units specialise to
correlate with specific features.
K=2
sign(0) = 0
[Figure (K = 2): generalization error ϵg(α) and overlaps q00, q01 as functions of α, comparing AMP with state evolution (SE); the specialization transition is marked.]
yμ = sign[ sign(∑i Xμ,i wi,1) + sign(∑i Xμ,i wi,2) ]
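The K = 2 labelling rule above can be sketched in a few lines of numpy (the names X, w, y and the sizes n, p are illustrative assumptions, not from the slides; np.sign follows the sign(0) = 0 convention stated earlier):

```python
import numpy as np

rng = np.random.default_rng(0)

p, n, K = 100, 400, 2             # input size, # of samples (alpha = n/p = 4), hidden units

X = rng.standard_normal((n, p))   # data matrix X_{mu,i}
w = rng.standard_normal((p, K))   # teacher weights w_{i,a}

# y_mu = sign[ sign(sum_i X_{mu,i} w_{i,1}) + sign(sum_i X_{mu,i} w_{i,2}) ]
# np.sign uses the slide's convention sign(0) = 0.
y = np.sign(np.sign(X @ w).sum(axis=1))

print(y.shape)   # (400,)
```

Each label is the sign of a vote between the two hidden units, so y takes values in {-1, 0, +1}, with 0 only on the tie set of measure zero induced by sign(0) = 0.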
22. Large algorithmic gap (K ≫ 1):
yμ = sign[ ∑_{a=1..K} sign(∑i Xμ,i wi,a) ]
IT threshold: n > 7.65 K p
Algorithmic threshold: n > const · K² p
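A rough sketch of the gap implied by these two scalings (the constant in the algorithmic threshold is not given on the slide; const = 1 below is an arbitrary placeholder, so only the K-dependence of the ratio is meaningful):

```python
# Sample-complexity thresholds for a committee machine with K hidden
# units and input dimension p, as stated on the slide.
def it_threshold(K, p):
    """Information-theoretic threshold: n > 7.65 * K * p."""
    return 7.65 * K * p

def algo_threshold(K, p, const=1.0):
    """Conjectured algorithmic threshold: n > const * K**2 * p.
    const is unknown; 1.0 is a placeholder for illustration."""
    return const * K**2 * p

p = 1000
for K in (2, 10, 100):
    ratio = algo_threshold(K, p) / it_threshold(K, p)   # grows like K / 7.65
    print(f"K={K:3d}  IT: n > {it_threshold(K, p):.0f}  "
          f"algo: n > {algo_threshold(K, p):.0f}  gap x{ratio:.2f}")
```

The ratio of the two thresholds grows linearly in K, which is why the algorithmic gap becomes large for wide committee machines.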
[Figure: Bayes-optimal and AMP generalization error ϵg(α), with α = (# of samples)/(# hidden units × input size); non-specialized vs specialized hidden units; the computational gap and the discontinuous specialization transition are marked.]
23. [Schematic: as the # of samples grows, reaching good generalisation error goes from impossible → hard → doable today → doable.]
Our goal: Quantify this in more realistic models.
Design algorithms working in the doable region.
LZ, F. Krzakala, Statistical Physics of Inference: Thresholds and Algorithms, Advances in Physics (2016), arXiv:1511.02476.
J. Barbier, N. Macris, L. Miolane, F. Krzakala, LZ, Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models, arXiv:1708.03395, COLT'18.
B. Aubin, A. Maillard, J. Barbier, F. Krzakala, N. Macris, LZ, The committee machine: Computational to statistical gaps in learning a two-layers neural network, arXiv:1806.05451, NeurIPS'18.
REFERENCES
25. Thank you for your attention!