Presentation at SIAM Conference on Applied Algebraic Geometry (AG21), Aug. 2021.
Abstract. The study of phase transitions using data-driven approaches is challenging, especially when little prior knowledge of the system is available. Topological data analysis is an emerging framework for characterizing the shape of data and has recently achieved success in detecting structural transitions in materials science, such as the glass--liquid transition. However, data obtained from physical states may not have explicit shapes as structural materials do. We thus propose a general framework, termed "topological persistence machine," to construct the shape of data from correlations in states, so that we can subsequently decipher phase transitions via qualitative changes in that shape. Our framework enables an effective and unified approach to phase transition analysis without prior knowledge of the phases and without requiring the investigation of large system sizes. We demonstrate the efficacy of the approach in detecting the Berezinskii--Kosterlitz--Thouless phase transition in the classical XY model and quantum phase transitions in the transverse-field Ising and Bose--Hubbard models. Interestingly, while these phase transitions have proven notoriously difficult to analyze using traditional methods, they can be characterized through our framework without prior knowledge of the phases. Our approach is thus expected to be widely applicable and of practical interest for exploring the phases of experimental physical systems.
CCS2019 - Topological time-series analysis with delay-variant embedding (Ha Phuong)
Q. H. Tran and Y. Hasegawa, Topological time-series analysis with delay-variant embedding, Oral Presentation at Conference on Complex Systems, Singapore, Singapore, Oct. 2019.
Topological Data Analysis: visual presentation of multidimensional data sets (DataRefiner)
Topological data analysis (TDA) is an unsupervised approach that may revolutionise the way data are mined and eventually drive the next generation of analytical tools. The idea behind TDA is to "measure" the shape of data and find a compressed combinatorial representation of that shape. As in ordinary topology, these combinatorial representations provide a compressed description of high-dimensional data sets that retains information about the geometric relationships between data points. TDA can also be used as a very powerful clustering technique. Edward will compare TDA with other dimension-reduction algorithms such as PCA, LLE, Isomap, MDS, and Spectral Embedding.
Introduction to Topological Data Analysis (Mason Porter)
Here are slides for my 3/14/21 talk on an introduction to topological data analysis.
This is the first talk in our Short Course on topological data analysis at the 2021 American Physical Society (APS) March Meeting: https://march.aps.org/program/dsoft/gsnp-short-course-introduction-to-topological-data-analysis/
Slides describing the following paper:
Octavian-Eugen Ganea et al., "Hyperbolic Neural Networks", NeurIPS 2018
(NeurIPS 2018 paper reading at PFN, Jan. 26th, 2019)
Probabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian, so we can specify priors to inform and constrain our models and get uncertainty estimation in the form of a posterior distribution. Using MCMC sampling algorithms, we can draw samples from this posterior to estimate these models very flexibly. PyMC3 and Stan are the current state-of-the-art tools for constructing and estimating these models.
One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why, more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms fit a distribution (e.g. normal) to the posterior, turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as in a new package called Edward, which is mainly concerned with variational inference.
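As a toy illustration of the MCMC idea described above (a pure-Python sketch of a random-walk Metropolis sampler, not the PyMC3 or Stan API; the model and all numbers are hypothetical), sampling the posterior of a single normal mean might look like:

```python
import math
import random

def log_posterior(mu, data, prior_sd=10.0, noise_sd=1.0):
    """Log of (Gaussian prior on mu) plus Gaussian log-likelihood of the data."""
    log_prior = -0.5 * (mu / prior_sd) ** 2
    log_lik = sum(-0.5 * ((x - mu) / noise_sd) ** 2 for x in data)
    return log_prior + log_lik

def metropolis(data, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose mu' ~ N(mu, step) and accept with
    probability min(1, posterior(mu') / posterior(mu))."""
    rng = random.Random(seed)
    mu = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = mu + rng.gauss(0.0, step)
        log_ratio = log_posterior(proposal, data) - log_posterior(mu, data)
        if math.log(rng.random()) < log_ratio:
            mu = proposal
        samples.append(mu)
    return samples

data = [1.8, 2.1, 2.4, 1.9, 2.2]  # observations with true mean near 2
samples = metropolis(data)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

Variational inference replaces this loop with an optimisation over the parameters of an approximating distribution, which is why it scales better in high dimensions.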
Using Large Language Models in 10 Lines of Code (Gautier Marti)
Modern NLP models can be daunting: no more bag-of-words, but complex neural network architectures with billions of parameters. Engineers, financial analysts, entrepreneurs, and mere tinkerers, fear not! You can get started with as few as 10 lines of code.
Presentation prepared for the Abu Dhabi Machine Learning Meetup Season 3 Episode 3 hosted at ADGM in Abu Dhabi.
An introduction to the theory behind Bayesian Deep Learning, which has recently become a hot topic, and its recent applications. The talk briefly explains the theory of Bayesian inference and introduces the theory and applications of Yarin Gal's Monte Carlo Dropout.
Scaling Instruction-Finetuned Language Models (taeseon ryu)
This paper explores methods for fine-tuning language models, focusing in particular on scaling the number of tasks, the model size, and the chain-of-thought data. As a result, it demonstrates substantial improvements in performance across a variety of model classes and evaluation benchmarks, and in generalization to unseen tasks.
The paper also releases the Flan-T5 checkpoints, which achieve strong few-shot performance. Instruction fine-tuning is a general method for improving the performance and usability of pretrained language models.
The paper presents a new approach to fine-tuning language models, showing that it can improve performance on a variety of language tasks in a more efficient way.
Thanks to Park San-hee from the NLP team for the detailed review of today's paper. Thank you in advance for your interest!
https://youtu.be/lta-rKYtVbg
Uncertainty in Deep Learning, Gal (2016)
Representing Inferential Uncertainty in Deep Neural Networks Through Sampling, McClure & Kriegeskorte (2017)
Uncertainty-Aware Reinforcement Learning from Collision Avoidance, Khan et al. (2016)
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, Lakshminarayanan et al. (2017)
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, Kendall & Gal (2017)
Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling, Choi et al. (2017)
Bayesian Uncertainty Estimation for Batch Normalized Deep Networks, Anonymous (2018)
Methods of Optimization in Machine LearningKnoldus Inc.
In this session we will discuss various methods to optimise a machine learning model and how we can adjust the hyper-parameters to minimise the cost function.
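As a minimal, hypothetical illustration of the kind of optimisation discussed (plain gradient descent on a one-dimensional cost, with the learning rate as the hyper-parameter being adjusted; the cost function here is made up for the example):

```python
def gradient_descent(grad, x0, learning_rate=0.1, n_steps=100):
    """Generic gradient descent: repeatedly step against the gradient.
    The learning rate is the key hyper-parameter: too large and the
    iteration diverges, too small and it converges slowly."""
    x = x0
    for _ in range(n_steps):
        x = x - learning_rate * grad(x)
    return x

# Minimise the cost J(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w_opt = gradient_descent(lambda w: 2 * (w - 3), x0=0.0)
```

Each step multiplies the distance to the minimum by (1 - 2 * learning_rate) for this quadratic cost, which makes the effect of the learning-rate hyper-parameter easy to see.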
This presentation was created for a first year physics project at Imperial.
A presentation describing some of the applications of quantum entanglement, for example quantum clocks, quantum computing, teleportation, and quantum cryptography. It refers to a specific teleportation experiment carried out by NIST using time-bin encoding.
Fuzzy logic is often heralded as a technique for handling problems with large amounts of vagueness or uncertainty. Since its inception in 1965 it has grown from an obscure mathematical idea to a technique used in a wide variety of applications from cooking rice to controlling diesel engines on an ocean liner.
This talk will give a layman's introduction to the topic and explore some of the real-world applications in control and human decision making. Examples might include household appliances, control of large industrial plants, and health monitoring systems for the elderly. We will look at where the field might be going over the next ten years, highlighting areas where DMU's specialist expertise leads the way.
Universal Approximation Property via Quantum Feature Maps
---
The quantum Hilbert space can be used as a quantum-enhanced feature space in machine learning (ML), with a quantum feature map encoding classical data into quantum states. We prove that quantum ML models based on typical quantum feature maps can approximate any continuous function, with an optimal approximation rate.
---
Contributed talk at Quantum Techniques in Machine Learning 2021, Tokyo, November 8-12 2021.
By Quoc Hoan Tran, Takahiro Goto and Kohei Nakajima
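As a toy, hypothetical illustration (not the construction from the talk), a single-qubit rotation feature map already shows how encoding classical data into a quantum state yields trigonometric basis functions, the raw material of such approximation results:

```python
import math

def feature_map(x):
    """Single-qubit rotation encoding: |psi(x)> = RY(x)|0> = (cos(x/2), sin(x/2))."""
    return (math.cos(x / 2), math.sin(x / 2))

def model(x, theta):
    """Expectation <psi(x)| M |psi(x)> of a symmetric 2x2 observable
    M = [[t00, t01], [t01, t11]] in the encoded state. Expanding the
    quadratic form shows the model spans {1, cos(x), sin(x)}."""
    a, b = feature_map(x)
    t00, t01, t11 = theta
    return t00 * a * a + 2 * t01 * a * b + t11 * b * b

# With M = |0><0| the model reduces to cos^2(x/2) = (1 + cos(x)) / 2.
value = model(0.7, (1.0, 0.0, 0.0))
```

Training such a model means optimising the observable (or circuit parameters) over this trigonometric function class; richer feature maps enlarge the class toward universal approximation.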
E. Canay and M. Eingorn
Physics of the Dark Universe 29 (2020) 100565
DOI: 10.1016/j.dark.2020.100565
https://authors.elsevier.com/a/1aydL7t6qq5DB0
https://arxiv.org/abs/2002.00437
Two distinct perturbative approaches have been recently formulated within General Relativity, arguing for the screening of gravity in the ΛCDM Universe. We compare them and show that the offered screening concepts, each characterized by its own interaction range, can peacefully coexist. Accordingly, we advance a unified scheme, determining the gravitational potential at all scales, including regions of nonlinear density contrasts, by means of a simple Helmholtz equation with the effective cosmological screening length. In addition, we claim that cosmic structures may not grow at distances above this Yukawa range and confront its current value with the dimensions of the largest known objects in the Universe.
Fixed Point Results for Weakly Compatible Mappings in Convex G-Metric Space (inventionjournals)
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of mathematics and statistics. IJMSI publishes research articles and reviews across the whole field of mathematics and statistics, covering new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Andrey V. Savchenko - Sequential Hierarchical Image Recognition based on the ... (AIST)
Andrey V. Savchenko (National Research University Higher School of Economics), Vladimir Milov (N. Novgorod State Technical University), Natalya Belova (NRU HSE, Moscow) - Sequential Hierarchical Image Recognition based on the Pyramid Histograms of Oriented Gradients with Small Samples
AIST Conference 2015 http://aistconf.org
ARRAY FACTOR OPTIMIZATION OF AN ACTIVE PLANAR PHASED ARRAY USING EVOLUTIONARY... (jantjournal)
Evolutionary algorithms (EAs) have the potential to handle complex, multi-dimensional optimization problems in the field of phased arrays. Among the different EAs, particle swarm optimization (PSO) is a popular choice. In a phased array, antenna element failure is a common phenomenon, and it leads to degradation of the array factor (AF) pattern, primarily in terms of increased side lobe levels (SLLs), displacement of nulls, and reduction in the null depths. The recovery of a degraded pattern using a cost- and time-effective approach is in demand. In this context, an attempt is made to obtain an optimized AF pattern after faults in a 49-element quasi-circular aperture, equilateral triangular grid, active planar phased array using PSO. In the paper, multiple recovery cases with up to 20% element failure are discussed. Each recovery is further evaluated by different statistical analyses. A dedicated software tool was developed to carry out the work presented in this paper.
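As a rough sketch of the PSO update rule used in work like this (a generic toy implementation minimising the sphere function, not the array-factor cost from the paper; all parameter values are illustrative):

```python
import random

def pso(cost, dim, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation: each particle's velocity blends
    inertia (w), attraction to its personal best (c1), and attraction to
    the swarm's global best (c2)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy objective: the sphere function, minimised at the origin.
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the array-recovery setting, the cost function would instead score an excitation vector by its resulting AF pattern (SLL, null positions and depths), with the failed elements held at zero.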
Similar to SIAM-AG21 - Topological Persistence Machine of Phase Transition
018 20160902 Machine Learning Framework for Analysis of Transport through Com... (Ha Phuong)
• Proposes a data-driven framework to study the relationship between fluid flow at the macro scale and the internal pore structure, across the micro and mesoscales, in porous, granular media.
• Quantifies a hypothesized link between high permeability and efficient shortest paths that thread through relatively large pore bodies connected to each other by high-conductance pore throats, embodying connectivity and pore structure.
017_20160826 Thermodynamics of Stochastic Turing Machines (Ha Phuong)
Shows how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine): discrete-state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation.
The variational Gaussian process (VGP) is a Bayesian nonparametric model which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random non-linear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity.
(May 29th, 2024) Advancements in Intravital Microscopy: Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space as they unfold in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization, and tumor metastasis with exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
This presentation briefly explores the structural and functional attributes of nucleotides and the structure and function of genetic materials, along with the impact of UV rays and pH upon them.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... (Ana Luísa Pinho)
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes
on Io’s surface have been monitored from both spacecraft and ground-based telescopes.
Here, we present the highest spatial resolution images of Io ever obtained from a groundbased telescope. These images, acquired by the SHARK-VIS instrument on the Large
Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images
show that a plume deposit from a powerful eruption at Pillan Patera has covered part
of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive
optics at visible wavelengths.
Multi-source connectivity as the driver of solar wind variability in the heli...Sérgio Sacani
The ambient solar wind that flls the heliosphere originates from multiple
sources in the solar corona and is highly structured. It is often described
as high-speed, relatively homogeneous, plasma streams from coronal
holes and slow-speed, highly variable, streams whose source regions are
under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify
solar wind sources and understand what drives the complexity seen in the
heliosphere. By combining magnetic feld modelling and spectroscopic
techniques with high-resolution observations and measurements, we show
that the solar wind variability detected in situ by Solar Orbiter in March
2022 is driven by spatio-temporal changes in the magnetic connectivity to
multiple sources in the solar atmosphere. The magnetic feld footpoints
connected to the spacecraft moved from the boundaries of a coronal hole
to one active region (12961) and then across to another region (12957). This
is refected in the in situ measurements, which show the transition from fast
to highly Alfvénic then to slow solar wind that is disrupted by the arrival of
a coronal mass ejection. Our results describe solar wind variability at 0.5 au
but are applicable to near-Earth observatories.
Richard's entangled aventures in wonderlandRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
In silico drugs analogue design: novobiocin analogues.pptx
SIAM-AG21-Topological Persistence Machine of Phase Transition
1. Topological Persistence Machine
of Phase Transition
The University of Tokyo
Tran Quoc Hoan
(joint work with M. Chen and Y. Hasegawa)
MS: Applications of Persistent Homology to Phase Transitions
2. Motivation
Data-driven approaches for physical systems with rich geometric and topological structure:
quantum quench dynamics, spin configurations, interaction networks.
“It’s black but not a black box.”
Algebraic Topological Machine → Topological Features
Topological Data Analysis = Model the Shape of Data
2/30
3. Persistent Homology
Main idea: vary a proximity parameter and track the appearance and disappearance of topological features.
Filtration → Apply Homology → Barcodes / Persistence Diagram
Significant features persist.
S. Barannikov+ (1994), H. Edelsbrunner+ (2002), A. Zomorodian and G. Carlsson (2004), …
3/30
5. Persistent Homology
S. Barannikov+ (1994), H. Edelsbrunner+ (2002), A. Zomorodian and G. Carlsson (2004), …
Simplicial complex (geometrical object) → Chain complex (algebraic object) → Homology group (algebraic object) = algebraic holes
H_p = Ker(∂_p) / Im(∂_{p+1})
Image by Yasuaki Hiraoka via http://www.wpi-aimr.tohoku.ac.jp/hiraoka_labo/index-english.html
Standard computational time: O(M³), M = number of simplices
Nina Otter+, A roadmap for the computation of persistent homology, EPJ Data Science, 2017

Complex K              Size of K                         Theoretical guarantee
Čech                   2^O(|P|)                          Nerve theorem
Vietoris-Rips (VR)     2^O(|P|)                          Approximates the Čech complex
Alpha                  |P|^O(⌈d/2⌉) (N points in ℝ^d)    Nerve theorem
Witness                2^O(|L|) (subset L)               For curves and surfaces in Euclidean space
Graph-induced complex  2^O(|Q|) (subsample Q)            Approximates the VR complex
Sparsified Čech        O(|P|)                            Approximates the Čech complex
Sparsified VR          O(|P|)                            Approximates the VR complex
5/30
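The quotient H_p = Ker(∂_p)/Im(∂_{p+1}) becomes concrete for small complexes. Below is a minimal sketch (not the tooling used in the talk) that computes Betti numbers β_p = dim Ker(∂_p) − dim Im(∂_{p+1}) over GF(2) by Gaussian elimination on boundary matrices; the elimination is the source of the O(M³) cost quoted above, and all function names are illustrative.

```python
import numpy as np

def boundary_matrix(simplices_k, simplices_km1):
    """Boundary matrix over GF(2): column j marks the (k-1)-faces of simplex j."""
    index = {s: i for i, s in enumerate(simplices_km1)}
    D = np.zeros((len(simplices_km1), len(simplices_k)), dtype=np.uint8)
    for j, s in enumerate(simplices_k):
        for v in range(len(s)):
            face = s[:v] + s[v + 1:]          # drop one vertex to get a face
            D[index[face], j] = 1
    return D

def rank_gf2(M):
    """Rank of a binary matrix via Gaussian elimination over GF(2)."""
    M = M.copy() % 2
    nrows, ncols = M.shape
    rank = 0
    for col in range(ncols):
        pivot = next((r for r in range(rank, nrows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap pivot row up
        for row in range(nrows):
            if row != rank and M[row, col]:
                M[row] ^= M[rank]             # eliminate column below/above
        rank += 1
    return rank

def betti(vertices, edges, triangles):
    """beta_p = dim C_p - rank(boundary_p) - rank(boundary_{p+1})."""
    d1 = boundary_matrix(edges, vertices)
    d2 = (boundary_matrix(triangles, edges) if triangles
          else np.zeros((len(edges), 0), np.uint8))
    r1, r2 = rank_gf2(d1), rank_gf2(d2)
    b0 = len(vertices) - r1                   # rank of boundary_0 is zero
    b1 = len(edges) - r1 - r2
    return b0, b1
```

A hollow square (four vertices, four edges) gives one connected component and one loop, while a filled triangle gives one component and no loop.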
6. Persistent Homology
Persistent homology encodes both the global and local topology of a dataset into a computational feature set.
[F. C. Motta+, Measures of order for nearly hexagonal lattices, Physica D (2018)]
Thanks to Henry Adams for providing this example.
6/30
7. Persistent Homology and the Physics of Intelligence
Questions:
◼ Given observations from two groups/phases in a physical system, what makes them truly different?
◼ Can we identify the “key” parameters underlying the observations?
◼ Can we interpret the phase transition from features of the observations?
◼ Can we predict/infer an unknown phase transition from limited observations?
Let Persistent Homology do it + Let Statistical Tools and Machine Learning do it
7/30
8. Persistent Homology and Phase Transitions (MS72, MS83)
◼ Hoan Tran - University of Tokyo, Japan - Topological Persistence Machine of Phase Transitions
◼ Bart Olsthoorn - Nordic Institute for Theoretical Physics, Sweden - Mapping Complex Phase Diagrams in Spin Models
◼ Alex Cole - University of Amsterdam, Netherlands - Quantitative and Interpretable Order Parameters for Phase Transitions from Persistent Homology
◼ Nick Sale - Swansea University, United Kingdom - Quantitative Analysis of Phase Transitions using Persistent Homology
◼ Irene Donato - Nextatlas, Italy - Persistent Homology Analysis of Phase Transitions
◼ Kouji Kashiwa - Fukuoka Institute of Technology, Japan - Exploring the Phase Structure of QCD Effective Models with Persistent Homology
◼ Daniel Spitz - Universität Heidelberg, Germany - Universal Dynamics in Quantum Many-Body Systems via Persistent Homology
◼ Willem Elbers - Durham University, United Kingdom - Topological Signatures of Cosmic Reionization and the First Galaxies
8/30
9. Topological Persistence Machine
(1) Decide observations (2) Embedding (3) Make filtration
(4) Compute diagrams for all observations (each observation = one dataset)
(5) Point summaries or kernel trick
Q. H. Tran et al., Topological persistence machine of phase transition, PRE (2021)
9/30
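As a toy illustration of steps (3)-(4), the H_0 (connected-component) persistence of a Vietoris-Rips filtration can be computed with a single union-find pass over edges sorted by length. This is a self-contained sketch, not the authors' implementation:

```python
import numpy as np

def h0_persistence(dist):
    """H0 persistence diagram of the Vietoris-Rips filtration of a distance
    matrix: every point is born at 0; a component dies at the edge length at
    which it merges into another component."""
    n = dist.shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path compression
            x = parent[x]
        return x

    # process edges in order of increasing length (the filtration parameter)
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    diagram = []
    for r, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            diagram.append((0.0, r))          # one component dies at radius r
    diagram.append((0.0, np.inf))             # the last component never dies
    return diagram
```

For two well-separated clusters, one finite bar is much longer than the rest, which is exactly the "significant features persist" signal.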
10. Point Summaries of Diagrams
Further compressed features from persistence diagrams:
◼ Maximum lifetime: P_max(D) = max_{(b,d) ∈ D} |d − b|
◼ γ-norm: P_γ(D) = ( Σ_{(b,d) ∈ D} |d − b|^γ )^{1/γ} (Cohen-Steiner+, 2010)
◼ Normalized entropy: E(D) = −(1 / log P_1(D)) Σ_{(b,d) ∈ D} (|d − b| / P_1(D)) log(|d − b| / P_1(D)) (Chintakunta+, 2015; Myers+, 2019)
10/30
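The three summaries translate directly into code. The sketch below follows the formulas literally, with a diagram represented as a list of (birth, death) pairs; the entropy normalization assumes P_1(D) > 1 so that log P_1(D) is positive.

```python
import numpy as np

def max_lifetime(D):
    """P_max(D): longest bar in the diagram."""
    return max(abs(d - b) for b, d in D)

def gamma_norm(D, gamma=1.0):
    """P_gamma(D): gamma-norm of the vector of lifetimes."""
    return sum(abs(d - b) ** gamma for b, d in D) ** (1.0 / gamma)

def normalized_entropy(D):
    """E(D): entropy of the lifetime distribution, normalized by log P_1(D)."""
    total = gamma_norm(D, 1.0)                       # P_1(D): total lifetime
    p = np.array([abs(d - b) / total for b, d in D])
    return float(-np.sum(p * np.log(p)) / np.log(total))
```

Each summary maps a whole diagram to a single scalar, so a family of diagrams indexed by temperature becomes an ordinary curve.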
11. The Space of Persistence Diagrams
◼ Not a vector space: AVG(D_1, D_2, D_3) is meaningless.
◼ Cannot define an inner product.
◼ Difficult to use in statistical-learning tasks (e.g., classification, regression).
Kernel Trick for Persistence Diagrams. Idea: map persistence diagrams into a Hilbert space.
11/30
12. Kernel Trick for Persistence Diagrams
◼ A map k: Ω × Ω → ℝ is called a kernel if there is a Hilbert space (H_b, ⟨·,·⟩) and a feature map Φ: Ω → H_b such that
⟨Φ_{D_1}, Φ_{D_2}⟩_{H_b} = k(D_1, D_2) for all D_1, D_2 ∈ Ω.
◼ Feature mapping Φ with H_b = L²(ℝ²), an ∞-dimensional L² space.
◼ Can define an inner product.
◼ Usable in (linear) statistical-learning tasks (e.g., SVM).
Reininghaus+, 2015; Bubenik+, 2015; Kusano+, 2016; Chachólski+, 2017; Adams+, 2017; Carrière+, 2017; Le+, 2018; Corbet+, 2019; …
12/30
13. Kernel Method
(Corbet+, 2019) The feature map Φ: Ω → L²(ℝ²) is often given by
D ⟼ Σ_{p ∈ D} w(p) f(·, p)
with a weight w and a peak function f (e.g., Gaussian).
Examples: Persistence Scale Space Kernel (Reininghaus+, 2015); Persistence Weighted Gaussian Kernel (Kusano+, 2016); Persistence Images (Adams+, 2017).
The kernel is given by the inner product:
k(D_1, D_2) = ∫_{ℝ²} Φ_{D_1}(p) Φ_{D_2}(p) dp
13/30
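For a Gaussian peak function, the L² inner product of two feature maps has a closed form (the integral of a product of two isotropic Gaussians), so k(D_1, D_2) reduces to a double sum over diagram points. The sketch below uses a simple lifetime weight w(p) = d − b for illustration (the weight in Kusano+ is an arctan of the lifetime); names and defaults are assumptions.

```python
import numpy as np

def weighted_gaussian_kernel(D1, D2, sigma=1.0, weight=lambda p: p[1] - p[0]):
    """k(D1, D2) = <Phi_D1, Phi_D2>_{L2} with Gaussian peaks and a weight w.
    Uses the closed form
      int N(x; p, s^2 I) N(x; q, s^2 I) dx = exp(-|p-q|^2 / (4 s^2)) / (4 pi s^2)
    over R^2, so no numerical integration is needed."""
    D1, D2 = np.asarray(D1, float), np.asarray(D2, float)
    w1 = np.array([weight(p) for p in D1])
    w2 = np.array([weight(q) for q in D2])
    sq = ((D1[:, None, :] - D2[None, :, :]) ** 2).sum(-1)   # pairwise |p-q|^2
    G = np.exp(-sq / (4.0 * sigma ** 2)) / (4.0 * np.pi * sigma ** 2)
    return float(w1 @ G @ w2)
```

Weighting by lifetime downplays near-diagonal points, which are typically noise.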
14. Kernel Method
We can embed Φ_D into a point of the space of probability densities:
ρ_D = (1/Z) Σ_{p ∈ D} N(p, νI) ∈ ℙ = {ρ | ∫_{ℝ²} ρ(x) dx = 1, ρ(x) ≥ 0}
Discretized, ℙ is the probability simplex {x = (x_1, …, x_n) ∈ ℝ^n | Σ_i x_i = 1, x_i ≥ 0}, and the map h(x) = (√x_1, …, √x_n) sends it onto the positive orthant of the sphere, 𝕊₊ = {χ | ∫_{ℝ²} χ²(x) dx = 1, χ(x) ≥ 0}.
Fisher information metric: d_F(ρ_{D_1}, ρ_{D_2}) = arccos⟨h(ρ_{D_1}), h(ρ_{D_2})⟩
The kernel is given by k(D_1, D_2) = exp(−α d_F(ρ_{D_1}, ρ_{D_2}))
Persistence Fisher Kernel (Le+, 2018)
14/30
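A discretized sketch of the Persistence Fisher kernel: smooth each diagram into a density on a shared rectangular grid, take square roots, and use the arccos of their inner product as the Fisher distance. The grid size and bandwidth ν are illustrative choices, not the parameters used in the talk.

```python
import numpy as np

def fisher_kernel(D1, D2, nu=0.5, alpha=1.0, grid_size=40):
    """Persistence Fisher kernel on a common grid covering both diagrams."""
    pts = np.vstack([D1, D2]).astype(float)
    lo, hi = pts.min() - 3 * nu, pts.max() + 3 * nu
    xs = np.linspace(lo, hi, grid_size)
    X, Y = np.meshgrid(xs, xs)

    def density(D):
        rho = np.zeros_like(X)
        for b, d in D:                        # Gaussian bump at each point
            rho += np.exp(-((X - b) ** 2 + (Y - d) ** 2) / (2 * nu ** 2))
        return rho / rho.sum()                # normalize to a discrete density

    r1, r2 = density(D1), density(D2)
    inner = np.clip(np.sum(np.sqrt(r1 * r2)), -1.0, 1.0)
    d_fisher = np.arccos(inner)               # geodesic distance on the sphere
    return float(np.exp(-alpha * d_fisher))
```

Because identical diagrams give d_F = 0, the kernel of a diagram with itself is exactly 1.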
15. Applications of the Kernel Method
◼ Kernel PCA, Kernel SVM
◼ Kernel change-point detection with the Kernel Fisher Discriminant Ratio (KFDR) (Kusano+, 2015; Tran+, 2019)
➢ Diagrams along an index s: D_s, s = 1, …, M
➢ For each s > 1, two classes are defined by the data before and after s; compute
μ̂_1 = (1/(s−1)) Σ_{i=1}^{s−1} Φ_{D_i},  μ̂_2 = (1/(M−s+1)) Σ_{i=s}^{M} Φ_{D_i}
Σ_1 = (1/(s−1)) Σ_{i=1}^{s−1} (Φ_{D_i} − μ̂_1) ⊗ (Φ_{D_i} − μ̂_1),  Σ_2 = (1/(M−s+1)) Σ_{i=s}^{M} (Φ_{D_i} − μ̂_2) ⊗ (Φ_{D_i} − μ̂_2)
Σ = ((s−1)/M) Σ_1 + ((M−s+1)/M) Σ_2
KFDR_{M,s,γ} = ((s−1)(M−s+1)/M) ⟨μ̂_2 − μ̂_1, (Σ + γI)^{−1} (μ̂_2 − μ̂_1)⟩_{H_b}
➢ Find the change point as s_c = argmax_{s>1} KFDR_{M,s,γ}
➢ Change-point regression with Φ_{D_1}, …, Φ_{D_M}
Kernel Change-point Analysis (Harchaoui+, 2009)
15/30
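The KFDR scan can be sketched in finite dimensions, with explicit feature vectors standing in for the RKHS elements Φ_{D_i} (in practice one works through the Gram matrix instead); the means and covariances below follow the formulas above term by term.

```python
import numpy as np

def kfdr_change_point(features, gamma=1e-3):
    """Scan candidate split indices s and return the one maximizing the Kernel
    Fisher Discriminant Ratio between the segments before and after s.
    `features[i]` plays the role of Phi_{D_{i+1}}."""
    X = np.asarray(features, float)
    M, d = X.shape
    best_s, best_val = None, -np.inf
    for s in range(2, M + 1):                 # classes {1,...,s-1} and {s,...,M}
        A, B = X[: s - 1], X[s - 1:]
        mu1, mu2 = A.mean(0), B.mean(0)
        S1 = (A - mu1).T @ (A - mu1) / len(A)
        S2 = (B - mu2).T @ (B - mu2) / len(B)
        S = ((s - 1) * S1 + (M - s + 1) * S2) / M
        delta = mu2 - mu1
        coef = (s - 1) * (M - s + 1) / M
        val = coef * (delta @ np.linalg.solve(S + gamma * np.eye(d), delta))
        if val > best_val:
            best_s, best_val = s, val
    return best_s, best_val
```

On a sequence that jumps from one cluster of feature vectors to another, the maximizer lands on the first index of the new regime.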
16. Topological Persistence Machine
(1) Decide observations (2) Embedding (3) Make filtration
(4) Compute diagrams for all observations (each observation = one dataset)
(5) Point summaries or kernel trick
Q. H. Tran et al., Topological persistence machine of phase transition, PRE (2021)
16/30
17. Case Study: 2D XY Model
βH = −(J/(k_B T)) Σ_{⟨i,j⟩} S_i · S_j
(no discontinuity in any observable such as magnetization or energy; an infinite-order transition)
Low T: it takes infinite energy to excite a single vortex, but thermal fluctuations can create bound vortex-antivortex pairs.
High T: it is entropically favorable for vortices to separate.
BKT transition: T_c/J ≅ 0.89
Image by Matthew Beach via https://mbeach42.github.io/
17/30
18. Topological Defect
w = 0, w = 1, w = −1, w = 2
◼ A topological defect is a group of spins whose topology differs from that of spins pointing in only one direction.
◼ A vortex is a special type of topological defect with a non-zero winding number.
➢ A spin configuration with defects cannot be smoothly transformed into the ferromagnetic ground state where all spins are aligned.
(w = 1: vortex; w = −1: anti-vortex; w = 2: vortex)
18/30
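The winding number w can be measured directly from the spin angles visited along a closed loop: sum the consecutive angle differences, each wrapped to [−π, π), and divide by 2π. A minimal sketch:

```python
import numpy as np

def winding_number(angles_on_loop):
    """Winding number of spin angles visited along a closed loop."""
    a = np.asarray(angles_on_loop, float)
    diffs = np.diff(np.append(a, a[0]))               # close the loop
    wrapped = (diffs + np.pi) % (2 * np.pi) - np.pi   # wrap each step to [-pi, pi)
    return int(round(wrapped.sum() / (2 * np.pi)))
```

A spin field that rotates once as the loop is traversed gives w = 1 (vortex), the opposite rotation gives w = −1 (anti-vortex), and a uniform field gives w = 0.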
19. XY Model: Topological Order
◼ Observations (l-th sample): S_i^(l) = (cos θ_i^(l), sin θ_i^(l)), drawn from ρ({θ_i}) ∝ e^{−E({θ_i})/T} with E({θ_i}) = −J Σ_{⟨i,j⟩} cos(θ_i − θ_j)
◼ Initialize topological defects at T = 0:
For the 1D XY model: θ_i^(l) = 2πν^(l) i/N + δθ_i^(l) + θ̄^(l)
(winding number ν, spin fluctuation δθ, global rotation θ̄)
For the 2D XY model: θ_{(i,j)}^(l) = 2πν_x^(l) i/N_x + 2πν_y^(l) j/N_y + δθ_{(i,j)}^(l) + θ̄^(l)
with winding numbers (ν_x, ν_y)
(Rodriguez-Nieva+, Nat. Phys., 2019)
19/30
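The 2D initialization above translates directly into code; a sketch, where the fluctuation scale and random seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def xy_configuration(Nx, Ny, vx, vy, fluct=0.1):
    """theta_(i,j) = 2*pi*vx*i/Nx + 2*pi*vy*j/Ny + fluctuation + global rotation,
    following the 2D XY initialization with winding numbers (vx, vy)."""
    i, j = np.meshgrid(np.arange(Nx), np.arange(Ny), indexing="ij")
    theta = 2 * np.pi * vx * i / Nx + 2 * np.pi * vy * j / Ny
    theta = theta + fluct * rng.standard_normal((Nx, Ny)) + rng.uniform(0, 2 * np.pi)
    return theta % (2 * np.pi)
```

With fluctuations switched off, traversing a full row (or column) of the lattice recovers exactly ν_x (or ν_y) as the winding number, which is how the topological sector can be checked.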
20. 2D XY Model: Topological Order
◼ Initial purpose: use persistent homology to identify topological sectors from spin configurations
(ν_x, ν_y) ∈ {(0, 0), (0, 2), (1, 1), (2, −1)}
N × N spins, m_ν = 100 samples per (ν_x, ν_y); total = 400 samples at each temperature
◼ Use the Metropolis algorithm to thermalize samples to temperature T
(many thanks to J. F. Rodriguez-Nieva for his instruction)
Topological Persistence Machine → (ν_x, ν_y) = ?
20/30
21. 2D XY Model: Topological Order
◼ Embedding: we define the distance between spins i and j on the N × N lattice as
d(s_i, s_j) = ξ d(S_i, S_j) + (1 − ξ) d(r_i, r_j) = ξ |θ_i − θ_j| + (1 − ξ) ‖r_i − r_j‖
(angle distance and lattice distance)
◼ Compute persistence diagrams of loops: D_l^(T) for the l-th sample (l = 1, 2, …, 400)
◼ Compute the Gram matrix {k_{ll′}} from the kernel k(D_l^(T), D_{l′}^(T)) at each T
◼ Dimensionality reduction on the Gram matrix to see the difference in topological order
21/30
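The embedding distance becomes a dense matrix over the N² lattice sites. The sketch below follows the formula above literally (in particular, it does not wrap the angle difference periodically):

```python
import numpy as np

def spin_distance_matrix(theta, xi=0.5):
    """d(s_i, s_j) = xi * |theta_i - theta_j| + (1 - xi) * ||r_i - r_j||
    for all pairs of sites on an N x N lattice of spin angles `theta`."""
    N = theta.shape[0]
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    pos = np.stack([i.ravel(), j.ravel()], axis=1).astype(float)
    ang = np.abs(theta.ravel()[:, None] - theta.ravel()[None, :])
    geo = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return xi * ang + (1.0 - xi) * geo
```

This matrix is exactly the input a Vietoris-Rips filtration needs, so it feeds directly into step (3) of the pipeline.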
22. 2D XY Model: Topological Order
Results from Kernel PCA, colored by (ν_x, ν_y):
at low T the topological sectors are distinguishable by winding number; at high T they become indistinguishable, i.e., topological order is lost.
We can perform unsupervised learning of the topological phase transition by detecting the value of T at which identifying the topological sectors fails.
What more can the persistence diagrams tell us?
22/30
23. 2D XY Model: Topological Phase Transition
(ν_x, ν_y) = (−1, 2), m_ν = 10 samples per (ν_x, ν_y) at each T
T = {0.30, 0.31, …, 1.50}; total = 1210 samples
We focus on a single topological sector and track how the diagrams vary with temperature.
One cluster of diagrams corresponds to groups of well-ordered spins; another to groups of spins that form vortices or antivortices.
At high T, vortices and antivortices easily appear in many places in the spin configuration.
23/30
24. 2D XY Model: Topological Phase Transition
Dimensionality reduction on the Gram matrix using UMAP (McInnes+, 2018), followed by kernel spectral clustering; m_ν = 10 samples per (ν_x, ν_y) at each T.
The transition in the proportion of diagrams belonging to each cluster, i.e., the number of diagrams grouped into the cluster of the low-temperature regime, locates T/J ≈ 0.89.
Q. H. Tran et al., Topological persistence machine of phase transition, PRE (2021)
24/30
25. Case Study: Quantum Many-Body Systems
One-dimensional transverse Ising model:
H_I = −J Σ_{i=1}^{L−1} σ̂_i^z σ̂_{i+1}^z − Jg Σ_{i=1}^{L} σ̂_i^x
Ordered phase at T = 0 with domain-wall quasiparticles; quantum critical region; flipped-spin quasiparticles; QPT at the ground state, g_c = 1.0.
One-dimensional Bose-Hubbard model:
H_B = −t Σ_{i=1}^{L−1} (b_i† b_{i+1} + b_{i+1}† b_i) + (U/2) Σ_{i=1}^{L} n̂_i (n̂_i − 𝟙) − μ Σ_{i=1}^{L} n̂_i, with n̂_i = b_i† b_i
L = 51
L. D. Carr et al., Mesoscopic effects in quantum phases of ultracold quantum gases in optical lattices, PRA (2010)
25/30
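For small chains, H_I can be built densely with Kronecker products and diagonalized exactly (the slide's L = 51 is far beyond exact diagonalization and needs DMRG/MPS methods instead). A sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def op_at(op, site, L):
    """Embed a single-site operator at `site` into the L-site Hilbert space."""
    out = np.eye(1)
    for k in range(L):
        out = np.kron(out, op if k == site else np.eye(2))
    return out

def transverse_ising(L, J=1.0, g=1.0):
    """H = -J sum_i sz_i sz_{i+1} - J*g sum_i sx_i on an open chain."""
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H -= J * op_at(sz, i, L) @ op_at(sz, i + 1, L)
    for i in range(L):
        H -= J * g * op_at(sx, i, L)
    return H

# exact ground state of a small chain
vals, vecs = np.linalg.eigh(transverse_ising(6, g=1.0))
E0, psi0 = vals[0], vecs[:, 0]
```

At g = 0 the ground state is fully aligned, so the open-chain ground energy is −J(L − 1), which is a convenient sanity check.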
26. Quantum Many-Body Systems
◼ Observations from a quantum many-body system may not have explicit shapes.
◼ At the ground state ρ̂, we define the quantum mutual information matrix
ℒ_ij = (1/2)(S_i + S_j − S_ij) for i ≠ j; ℒ_ii = 0,
where S_i = −Tr[ρ̂_i log ρ̂_i], S_ij = −Tr[ρ̂_ij log ρ̂_ij], ρ̂_i = Tr_{k≠i} ρ̂, and ρ̂_ij = Tr_{k≠i,j} ρ̂.
◼ Consider {ℒ_ij} as a weighted graph.
◼ Define the distance d(i, j) = 1 − r_ij², where r_ij is the Pearson correlation coefficient.
M. A. Valdez et al., Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks, PRL (2017)
26/30
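For a pure ground state |ψ⟩, the mutual information matrix follows from partial traces of |ψ⟩⟨ψ|. A self-contained numpy sketch for a chain of qubits (function names are illustrative):

```python
import numpy as np

def reduced_density(psi, keep, L):
    """Partial trace of |psi><psi| over all sites not in `keep` (L qubits)."""
    t = psi.reshape((2,) * L)
    rest = [k for k in range(L) if k not in keep]
    t = np.transpose(t, list(keep) + rest).reshape(2 ** len(keep), -1)
    return t @ t.conj().T

def entropy(rho):
    """Von Neumann entropy S = -Tr[rho log rho] from the eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

def mutual_information_matrix(psi, L):
    """L_ij = (S_i + S_j - S_ij)/2 for i != j, with zero diagonal."""
    S1 = [entropy(reduced_density(psi, [i], L)) for i in range(L)]
    M = np.zeros((L, L))
    for i in range(L):
        for j in range(i + 1, L):
            Sij = entropy(reduced_density(psi, [i, j], L))
            M[i, j] = M[j, i] = 0.5 * (S1[i] + S1[j] - Sij)
    return M
```

On a three-qubit GHZ state, every pair has S_i = S_j = S_ij = log 2, so each off-diagonal entry equals (log 2)/2.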
28. QPT in the Bose-Hubbard Model
For loops (diagram axis: Death − Birth):
◼ Fitting y(L) = y(∞) + αL^{−β} for L → ∞
Ours: (t/U)_{BKT} = 0.289 ± 0.001
Density-matrix renormalization group: (t/U)_{BKT} = 0.29 ± 0.01
(T. D. Kühner et al., One-dimensional Bose-Hubbard model with nearest-neighbor interaction, PRB (2000))
For connected components:
◼ For small L: y(L) = (t/U)_{BKT} ≈ 0.2
Q. H. Tran et al., Topological persistence machine of phase transition, PRE (2021)
28/30
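The finite-size extrapolation y(L) = y(∞) + αL^{−β} is nonlinear only through β, so one practical sketch is to scan β over a grid and solve the remaining linear least-squares problem for (y(∞), α); the grid below is an illustrative choice, not the fitting procedure of the paper.

```python
import numpy as np

def fit_power_law(Ls, ys, betas=np.linspace(0.1, 3.0, 291)):
    """Fit y(L) = y_inf + alpha * L**(-beta) by scanning beta and solving a
    linear least-squares problem in (y_inf, alpha) for each candidate."""
    Ls, ys = np.asarray(Ls, float), np.asarray(ys, float)
    best = None
    for beta in betas:
        A = np.stack([np.ones_like(Ls), Ls ** (-beta)], axis=1)
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        err = np.sum((A @ coef - ys) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], beta)
    _, y_inf, alpha, beta = best
    return y_inf, alpha, beta
```

The extrapolated intercept y(∞) is the quantity compared against the DMRG value above.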
29. Summary
◼ We apply persistent homology to the raw data of physical states to identify phases of matter with an appropriate interpretation.
◼ Without prior knowledge, this approach shows potential for general systems where the Hamiltonian may be unknown.
◼ The indicator from persistent homology represents only a necessary, not a sufficient, condition.
◼ It would be interesting to extend this approach toward “model explainability”.
29/30