The document discusses modeling a logistics network as a hybrid system. Key points:
- A logistics network with production sites, distribution centers, suppliers and customers can be modeled as a hybrid system with both continuous and discrete dynamics.
- The state of each logistics location (stock level) changes continuously during production but jumps discretely when goods are transported.
- The overall network is modeled as an interconnection of the individual location hybrid systems.
- Analyzing the stability of such hybrid systems is important to minimize costs and unsatisfied orders in the logistics network.
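The continuous/discrete split described above can be sketched in a few lines. This is a toy illustration, not the thesis model: the production rate, shipping threshold, and batch size below are made-up parameters.

```python
# One logistics location as a hybrid system: stock grows continuously at a
# production rate and jumps down by a batch size when it hits a threshold.

def simulate_location(rate=2.0, threshold=10.0, batch=8.0, t_end=20.0, dt=0.01):
    """Euler integration of the continuous flow, with discrete transport jumps."""
    stock, t, jumps = 0.0, 0.0, 0
    while t < t_end:
        stock += rate * dt          # continuous dynamics: production fills the stock
        if stock >= threshold:      # guard condition triggers a discrete transition
            stock -= batch          # jump: a transported batch leaves the location
            jumps += 1
        t += dt
    return stock, jumps
```

Stability analysis of the network then asks whether such stock trajectories remain bounded when many of these locations are interconnected.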
Efficient Analysis of high-dimensional data in tensor formats, by Alexander Litvinenko
We solve a PDE with uncertain coefficients. The solution is approximated in the Karhunen-Loève/PCE basis. How can we compute the maximum, the frequency, or the probability density function with almost linear complexity? We offer various methods.
Recommendation System -- Theory and Practice, by Kimikazu Kato
Survey on recommendation systems presented at IMI Colloquium, Kyushu University, Feb 18, 2015.
An overview of recent research trends in recommendation systems, presented at the IMI Colloquium, Kyushu University, on February 18, 2015. The slides are in English, but the talk was given in Japanese.
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017), by Alexander Litvinenko
Overview of our latest work in applying low-rank tensor techniques to a) solving PDEs with uncertain coefficients (or multi-parametric PDEs), b) postprocessing high-dimensional data, and c) computing the largest element, level sets, and the top 5% of elements.
Small updates of matrix functions used for network centrality, by Francesco Tudisco
Many relevant measures of importance for nodes and edges of a network are defined in terms of suitable entries of matrix functions $f(A)$, for different choices of $f$ and $A$. Addressing the entries of $f(A)$ can be computationally challenging and this is particularly prohibitive when $A$ undergoes a perturbation $A+\delta A$ and the entries of $f(A)$ have to be updated. Given the adjacency matrix $A$ of a graph $G=(V,E)$, in this talk we consider the case where $\delta A$ is a sparse matrix that yields a small perturbation of the edge structure of $G$.
In particular, we present a bound showing that the variation of the entry $f(A)_{u,v}$ decays exponentially with the distance in $G$ that separates either $u$ or $v$ from the set of nodes touched by the edges that are perturbed. Our bound depends only on the distances in the original graph $G$ and on the field of values of the perturbed matrix $A+\delta A$. We show several numerical examples in support of the proposed result.
Talk presented at the IMA Numerical Analysis and Optimization conference, Birmingham 2018
The talk is based on the paper:
S. Pozza and F. Tudisco, On the stability of network indices defined by means of matrix functions, SIMAX, 2018
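The decay behaviour in the bound can be observed numerically. The following is a small illustration (not the paper's code) using f(A) = exp(A) on a path graph: deleting one edge perturbs nearby entries of f(A) far more than distant ones.

```python
# Delete the edge (0, 1) of a path graph and compare the diagonal entries
# of exp(A) before and after; the change shrinks rapidly with the distance
# from the perturbed edge, as the bound in the talk predicts.

import numpy as np
from scipy.linalg import expm

n = 12
A = np.zeros((n, n))
for i in range(n - 1):                     # adjacency matrix of a path graph
    A[i, i + 1] = A[i + 1, i] = 1.0

dA = np.zeros_like(A)                      # sparse perturbation: delete edge (0, 1)
dA[0, 1] = dA[1, 0] = -1.0

diff = np.abs(expm(A + dA) - expm(A))      # entrywise change of f(A) = exp(A)
changes = [diff[i, i] for i in range(n)]   # diagonal entries, by distance from node 0
```

The change at nodes far from the deleted edge is orders of magnitude smaller than at its endpoints, consistent with the exponential decay described above.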
Tutorial on Belief Propagation in Bayesian Networks, by Anmol Dwivedi
The goal of this mini-project is to implement belief propagation algorithms for posterior probability inference and most probable explanation (MPE) inference in a Bayesian network with binary-valued variables, in which the conditional probability table for each random variable/node is given.
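As a flavour of that setting, here is a toy sketch (the CPT values are hypothetical, not from the project): for a two-node binary network A -> B, the message passed from B back to A reduces to Bayes' rule.

```python
# Posterior inference P(A | B = b) in a two-node binary Bayesian network
# A -> B with given prior and conditional probability table (CPT).

p_a = {0: 0.6, 1: 0.4}                        # prior P(A)
p_b_given_a = {0: {0: 0.9, 1: 0.1},           # CPT rows: P(B | A = 0)
               1: {0: 0.2, 1: 0.8}}           #           P(B | A = 1)

def posterior_a_given_b(b):
    """Message from B to A: multiply prior by likelihood, then normalize."""
    unnorm = {a: p_a[a] * p_b_given_a[a][b] for a in (0, 1)}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

post = posterior_a_given_b(1)   # P(A | B = 1)
```

On larger networks the same multiply-and-normalize step is propagated as messages along the edges, which is what the belief propagation algorithms in the project implement.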
We are interested in finding a permutation of the entries of a given square matrix so that the maximum number of its nonzero entries is moved to one of the corners in an L-shaped fashion.
If we interpret the nonzero entries of the matrix as the edges of a graph, this problem boils down to the so-called core–periphery structure, consisting of two sets: the core, a set of nodes that is highly connected across the whole graph, and the periphery, a set of nodes that is well connected only to the nodes that are in the core.
Matrix reordering problems have applications in sparse factorizations and preconditioning, while revealing core–periphery structures in networks has applications in economic, social and communication networks.
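As a crude illustration of the reordering idea (a naive degree-based heuristic, not the talk's method), sorting nodes by degree already concentrates the nonzeros of a planted core-periphery adjacency matrix in the top-left corner:

```python
# Plant a core-periphery graph, hide it with a random permutation, then
# reorder nodes by decreasing degree to recover nonzeros in the corner.

import numpy as np

rng = np.random.default_rng(0)
n, core = 20, 5
A = np.zeros((n, n), dtype=int)
A[:core, :] = 1                        # core nodes connect to every node
A[:, :core] = 1
np.fill_diagonal(A, 0)                 # no self-loops

perm = rng.permutation(n)              # hide the structure
B = A[np.ix_(perm, perm)]

order = np.argsort(-B.sum(axis=0))     # sort nodes by degree, descending
C = B[np.ix_(order, order)]
corner_nnz = int(C[:core, :core].sum())   # nonzeros recovered in the corner block
```

Here all 20 off-diagonal entries of the core block land back in the leading corner; real core-periphery detection must of course handle graphs where the split is far less clean.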
ABC with data cloning for MLE in state space models, by Umberto Picchini
An application of the "data cloning" method for parameter estimation via MLE aided by Approximate Bayesian Computation. The relevant paper is http://arxiv.org/abs/1505.06318
Linear Discriminant Analysis (LDA) Under f-Divergence Measures, by Anmol Dwivedi
For more details, please have a look at:
1. https://www.mdpi.com/1099-4300/24/2/188
2. https://ieeexplore.ieee.org/document/9518004
Abstract:
In statistical inference, the information-theoretic performance limits can often be expressed in terms of a notion of divergence between the underlying statistical models (e.g., in binary hypothesis testing, the total error probability is equal to the total variation between the models). As the data dimension grows, computing the statistics involved in decision-making and the attendant performance limits (divergence measures) face complexity and stability challenges. Dimensionality reduction addresses these challenges at the expense of compromising the performance (divergence reduces due to the data processing inequality for divergence). This paper considers linear dimensionality reduction such that the divergence between the models is \emph{maximally} preserved. Specifically, the paper focuses on the Gaussian models and characterizes an optimal projection of the data onto a lower-dimensional subspace with respect to four $f$-divergence measures (Kullback-Leibler, $\chi^2$, Hellinger, and total variation). There are two key observations. First, projections are not necessarily along the dominant modes of the covariance matrix of the data, and even in some situations, they can be along the least dominant modes. Secondly, under specific regimes, the optimal design of subspace projection is identical under all the $f$-divergence measures considered, rendering a degree of universality to the design independent of the inference problem of interest.
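The data processing inequality invoked in the abstract is easy to check numerically. The following sketch (a toy example, not the paper's code) compares the KL divergence between two Gaussian models before and after a fixed one-dimensional projection:

```python
# Projecting two Gaussian models onto a line can only reduce their
# KL divergence (data processing inequality for f-divergences).

import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) for d-dimensional Gaussians."""
    d = len(m0)
    S1inv = np.linalg.inv(S1)
    dm = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

m0, m1 = np.zeros(2), np.array([1.0, 0.5])
S0, S1 = np.eye(2), np.diag([2.0, 0.5])

w = np.array([1.0, 0.0])               # project onto the first coordinate
kl_full = kl_gauss(m0, S0, m1, S1)
kl_proj = kl_gauss(np.array([w @ m0]), np.array([[w @ S0 @ w]]),
                   np.array([w @ m1]), np.array([[w @ S1 @ w]]))
```

The paper's contribution is to choose the projection so that the divergence loss (`kl_full - kl_proj` here) is as small as possible, for several f-divergences at once.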
Structured prediction or structured learning refers to supervised machine learning techniques that involve predicting structured objects, rather than single labels or real values. For example, the problem of translating a natural language sentence into a syntactic representation such as a parse tree can be seen as a structured prediction problem in which the structured output domain is the set of all possible parse trees.
A 3hrs intro lecture to Approximate Bayesian Computation (ABC), given as part of a PhD course at Lund University, February 2016. For sample codes see http://www.maths.lu.se/kurshemsida/phd-course-fms020f-nams002-statistical-inference-for-partially-observed-stochastic-processes/
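For readers new to ABC, the core rejection scheme such a lecture builds on can be sketched in a few lines (a toy Gaussian example, not from the course material):

```python
# Rejection ABC: draw a parameter from the prior, simulate data, and keep
# the draw if a summary statistic of the simulation is close to the data's.

import random
import statistics

random.seed(1)
theta_true = 2.0
observed = [random.gauss(theta_true, 1.0) for _ in range(30)]
s_obs = statistics.mean(observed)          # summary statistic: sample mean

accepted = []
for _ in range(5000):
    theta = random.uniform(-5.0, 5.0)      # draw from a flat prior
    sim = [random.gauss(theta, 1.0) for _ in range(30)]
    if abs(statistics.mean(sim) - s_obs) < 0.1:   # tolerance on the summary
        accepted.append(theta)

posterior_mean = statistics.mean(accepted)
```

The accepted draws approximate the posterior; shrinking the tolerance trades acceptance rate for accuracy, which is the central tuning issue ABC methods address.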
Optimal interval clustering: Application to Bregman clustering and statistica... by Frank Nielsen
We present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means, k-medoids, k-medians, k-centers, etc. We extend the method to incorporate cluster size constraints and show how to choose the appropriate k by model selection. Finally, we illustrate and refine the method on two case studies: Bregman clustering and statistical mixture learning maximizing the complete likelihood.
http://arxiv.org/abs/1403.2485
Information-theoretic clustering with applications, by Frank Nielsen
Abstract: Clustering is a fundamental and key primitive to discover structural groups of homogeneous data in data sets, called clusters. The most famous clustering technique is the celebrated k-means clustering that seeks to minimize the sum of intra-cluster variances. k-Means is NP-hard as soon as the dimension and the number of clusters are both greater than 1. In the first part of the talk, we first present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means but also other kinds of clustering algorithms like the k-medoids, the k-medians, the k-centers, etc.
We extend the method to incorporate cluster size constraints and show how to choose the appropriate number of clusters using model selection. We then illustrate and refine the method on two case studies: 1D Bregman clustering and univariate statistical mixture learning maximizing the complete likelihood. In the second part of the talk, we introduce a generalization of k-means to cluster sets of histograms that has become an important ingredient of modern information processing due to the success of the bag-of-word modelling paradigm.
Clustering histograms can be performed using the celebrated k-means centroid-based algorithm. We consider the Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and investigate the computation of Jeffreys centroids. We prove that the Jeffreys centroid can be expressed analytically using the Lambert W function for positive histograms. We then show how to obtain a fast guaranteed approximation when dealing with frequency histograms and conclude with some remarks on the k-means histogram clustering.
References: - Optimal interval clustering: Application to Bregman clustering and statistical mixture learning IEEE ISIT 2014 (recent result poster) http://arxiv.org/abs/1403.2485
- Jeffreys Centroids: A Closed-Form Expression for Positive Histograms and a Guaranteed Tight Approximation for Frequency Histograms.
IEEE Signal Process. Lett. 20(7): 657-660 (2013) http://arxiv.org/abs/1303.7286
http://www.i.kyoto-u.ac.jp/informatics-seminar/
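The closed form mentioned above can be sketched with SciPy's Lambert W. This is one reading of the first-order condition for the Jeffreys centroid of positive histograms (coordinate-wise c_i = a_i / W(e a_i / g_i), with a and g the arithmetic and geometric means); treat it as an illustration rather than the paper's reference code:

```python
# Jeffreys centroid of positive histograms via the Lambert W function:
# setting the gradient of sum_j J(c, h_j) to zero gives, per coordinate,
# log(c/g) + 1 - a/c = 0, solved by c = a / W(e * a / g).

import numpy as np
from scipy.special import lambertw

def jeffreys_centroid(H):
    """Closed-form centroid coordinates c_i = a_i / W(e * a_i / g_i)."""
    a = H.mean(axis=0)                        # arithmetic mean histogram
    g = np.exp(np.log(H).mean(axis=0))        # geometric mean histogram
    return a / lambertw(np.e * a / g).real    # principal real branch of W

H = np.array([[1.0, 2.0, 3.0],                # two positive histograms
              [2.0, 1.0, 4.0]])
c = jeffreys_centroid(H)
```

Since a >= g coordinate-wise, the argument of W is at least e, so the principal branch is real and the centroid is well defined.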
Inference for stochastic differential equations via approximate Bayesian comp... by Umberto Picchini
Despite the title the methods are appropriate for more general dynamical models (including state-space models). Presentation given at Nordstat 2012, Umeå. Relevant research paper at http://arxiv.org/abs/1204.5459 and software code at https://sourceforge.net/projects/abc-sde/
Efficient end-to-end learning for quantizable representations, by NAVER Engineering
Presenter: 정연우 (PhD student, Seoul National University)
Date: July 2018
To retrieve similar images, a neural network is trained to learn image embeddings. Prior work speeds up retrieval using the Hamming distance between binary codes, but the entire dataset still has to be scanned and accuracy suffers. This paper learns sparse binary codes to build a hash table that improves retrieval speed without sacrificing accuracy. It also shows that the optimal sparse binary codes within a mini-batch can be found by solving a minimum cost flow problem. The method achieves state-of-the-art retrieval accuracy in precision@k and NMI on Cifar-100 and ImageNet, with retrieval speedups of 98× and 478×, respectively.
On the Family of Concept Forming Operators in Polyadic FCA, by Dmitrii Ignatov
Triadic Formal Concept Analysis (3FCA) was introduced by Lehmann and Wille almost two decades ago, and many researchers in Data Mining and Formal Concept Analysis work with the notions of closed sets, Galois and closure operators, and closure systems. However, to date, even though different researchers actively work on mining triadic and n-ary relations, a proper closure operator for the enumeration of triconcepts, i.e. maximal triadic cliques of tripartite hypergraphs, has not been introduced. In this talk we show that the previously introduced operators for obtaining triconcepts are not always consistent, describe their family, and study their properties. We also introduce the notion of a maximal switching generator to explain why such concept-forming operators are not closure operators, due to violation of the monotonicity property.
Research internship on optimal stochastic theory with financial application u... by Asma Ben Slimene
This is a presentation of my second-year internship on optimal stochastic theory, how it can be applied to some financial applications, and how such problems can be solved using finite difference methods.
Enjoy it!
Presentation on stochastic control problem with financial applications (Merto... by Asma Ben Slimene
This is an introduction to optimal stochastic control theory with two applications in finance, the Merton portfolio problem and the investment/consumption problem, with numerical results using a finite differences approach.
Digital Signal Processing [ECEG-3171] - Ch1_L03, by Rediet Moges
This Digital Signal Processing lecture material is the property of the author (Rediet M.). It is not for publication, nor is it to be sold or reproduced.
Reinforcement Learning: Hidden Theory and New Super-Fast Algorithms, by Sean Meyn
A tutorial, and very new algorithms -- more details on arXiv and at NIPS 2017 https://arxiv.org/abs/1707.03770
Part of the Data Science Summer School at École Polytechnique: http://www.ds3-datascience-polytechnique.fr/program/
---------
2018 Updates:
See Zap slides from ISMP 2018 for new inverse-free optimal algorithms
Simons tutorial, March 2018 [one month before most discoveries announced at ISMP]
Part I (Basics, with focus on variance of algorithms)
https://www.youtube.com/watch?v=dhEF5pfYmvc
Part II (Zap Q-learning)
https://www.youtube.com/watch?v=Y3w8f1xIb6s
Big 2017 survey on variance in SA:
Fastest convergence for Q-learning
https://arxiv.org/abs/1707.03770
You will find the infinite-variance Q result there.
Our NIPS 2017 paper is distilled from this.
Seminar on U.V. Spectroscopy, by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
Richard's entangled adventures in wonderland, by Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Cancer cell metabolism: special reference to the lactate pathway, by AADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy needed to function.
Energy is stored in the bonds of glucose and when glucose is broken down, much of that energy is released.
Cell utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to "burn" the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Krebs cycle - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Introduction to the Warburg phenomenon:
Cancer cells are usually highly glycolytic ("glucose addiction") and take up more glucose from outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 - 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his "discovery of the nature and mode of action of the respiratory enzyme".
Warburg effect: the tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis). Warburg observed that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
This PDF is about schizophrenia.
For more details, visit SELF-EXPLANATORY on YouTube:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... by Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io's surface using adaptive optics at visible wavelengths.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN, by Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for the ultra-fast high-resolution imaging of cellular processes over time and space and were studied in its natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provide insights into the progression of disease, response to treatments or developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enables researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction as well as vascularization and tumor metastasis with exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allows for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancements of novel therapeutic strategies.
Centre for Industrial Mathematics
Outline
1 Motivation
2 Hybrid system
3 Extension of hybrid time domain
4 Conclusion and outlook
2 / 40 | Motivation | Hybrid system | Hybrid domain | Summary
Logistics network as a hybrid system (from the PhD thesis):
3 production sites (D, F, E) for liquid-ring vacuum (LRVP), industrial (IND) and side-channel (SC) pumps
5 distribution centers (D, NL, B, F, E)
33 first- and second-tier suppliers for the production of pumps
90 suppliers for components that are needed for the assembly of pump sets
More than 1000 customers
Instability of the network leads to:
High inventory costs
Large number of unsatisfied orders
Loss of customers
Analysis steps:
1 Mathematical modelling
2 Model reduction, if the model size is large
3 Stability analysis
Modelling approaches
Discrete system:
Decentralized supply chain
Re-entrant/queueing system
”Bucket brigade”
Continuous system:
Ordinary differential equations
- Damped oscillator model
Multilevel network
Partial differential equations
Hybrid model:
Hybrid system
Switched system
Stochastic system:
Stochastic system
Queueing/fluid network
Hybrid system
xi - state of logistics location Σi (stock level)
u - external input (customer orders, raw material)
State xi changes continuously during production.
[Figure: stock level xi(t) of location Σi flowing continuously from xi(0) at time t0.]
When a truck picks up finished material, state xi ”jumps”.
[Figure: stock level xi(t) jumping downwards at time t1.]
After the jump, state xi again changes continuously.
[Figure: stock level xi(t) flowing again after the jump at t1.]
Hybrid dynamics of location Σi:
ẋi = fi(x1, . . . , xn, u),   (x1, . . . , xn, u) ∈ Ci   (production)
xi⁺ = gi(x1, . . . , xn, u),   (x1, . . . , xn, u) ∈ Di   (transportation)
[Figure: stock level xi(t) with jumps at times t1, t2, t3.]
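The production/transportation dynamics above can be sketched in code: the stock of one location grows at a constant production rate between pickups and jumps down when a truck collects a batch. The production rate, capacity threshold, batch size and Euler stepping below are illustrative assumptions, not values from the thesis.

```python
# Minimal hybrid simulation of one logistics location (illustrative values).
# Flow:  x' = rate            while x < capacity   (flow set C)
# Jump:  x+ = x - batch       when  x >= capacity  (jump set D)

def simulate(x0=0.0, rate=2.0, capacity=10.0, batch=8.0, T=10.0, dt=0.01):
    t, k, x = 0.0, 0, x0            # k counts jumps (truck pickups)
    trajectory = [(t, k, x)]
    while t < T:
        if x >= capacity:           # jump: a truck picks up a batch
            x -= batch
            k += 1
        else:                       # flow: continuous production
            x += rate * dt
            t += dt
        trajectory.append((t, k, x))
    return trajectory

traj = simulate()
jumps = traj[-1][1]
```

With these sample numbers the stock reaches the capacity around t = 5 and t = 9, so the trajectory contains exactly two jumps over the horizon.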
Hybrid system
The whole network Σ is given by the interconnection of the individual locations:
Hybrid dynamics of the whole logistics network Σ:
ẋ = ?,   (x1, . . . , xn, u) ∈ C = ?   (overall production)
x⁺ = ?,   (x1, . . . , xn, u) ∈ D = ?   (overall transportation)
Hybrid system (Teel’s framework)
Σi:
ẋi = fi(x1, . . . , xn, u),   (x1, . . . , xn, u) ∈ Ci
xi⁺ = gi(x1, . . . , xn, u),   (x1, . . . , xn, u) ∈ Di
xi ∈ χi ⊂ R^{Ni},  fi : Ci → R^{Ni},  gi : Di → χi,  Ci, Di ⊂ χ1 × . . . × χn × U1 × . . . × Un.
Basic regularity conditions for existence of solutions (Goebel & Teel 2006):
fi, gi are continuous;
Ci, Di are closed.
A solution xi(t, k) is defined on the hybrid time domain
dom xi := ∪k [tk, tk+1] × {k},
where t is the (continuous) time and k is the number of the last jump.
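Concretely, a hybrid time domain can be stored as the list of flow intervals [tk, tk+1] indexed by the jump counter k; the small helper below (an illustrative sketch, not part of the talk) tests whether a pair (t, k) belongs to the domain.

```python
# Hybrid time domain as a list of flow intervals: domain[k] = (t_k, t_{k+1}).
# A point (t, k) belongs to dom x iff t_k <= t <= t_{k+1}.

def in_domain(domain, t, k):
    if k < 0 or k >= len(domain):
        return False
    t_k, t_k1 = domain[k]
    return t_k <= t <= t_k1

# Example: two jumps, at t = 1 and t = 3, on the horizon [0, 5].
dom = [(0.0, 1.0), (1.0, 3.0), (3.0, 5.0)]
```

Note that a jump time belongs to two slices of the domain: both (1, 0) and (1, 1) are in dom x, which is exactly how the hybrid time domain records the pre- and post-jump states.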
Stability notions
Global stability (GS)
A hybrid system is called globally stable (GS) if there exists σ ∈ K∞ such that any solution x satisfies
|x(t, k)| ≤ σ(|x0|),   ∀(t, k) ∈ dom x.
Global attractivity (GA)
A hybrid system is called globally attractive (GA) if for each ε > 0 and r > 0 there exists T > 0 such that, for any solution x,
|x(0, 0)| ≤ r, (t, k) ∈ dom x, and t + k ≥ T imply |x(t, k)| ≤ ε.
Global asymptotic stability (GAS)
A hybrid system is called globally asymptotically stable (GAS) if it is both GS and GA.
Example: Bouncing ball
Figure: Trajectory of the bouncing ball.
Figure: Time domain of the bouncing ball.
Zeno solutions
Definition (Ames et al. 2006)
The solution (x, u) possesses:
chattering Zeno behaviour, if there exists a finite K ≥ 0 such that (t, k), (t, k + 1) ∈ dom x for all k ≥ K;
genuinely Zeno behaviour, if there exists a finite T ≥ 0 such that for all (s, k), (t, k + 1) ∈ dom x, s < t < T.
The bouncing ball possesses genuinely Zeno behaviour with
T = tmax = √(2h/γ) + (2λ/(1 − λ)) √(2h/γ).
Furthermore, it is GAS (Sanfelice and Teel 2008).
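The accumulation time can be sanity-checked numerically: the flight times after successive bounces form a geometric series, so their partial sums should converge to the closed-form T above. The values of h, γ and λ below are arbitrary test values.

```python
import math

# Zeno accumulation time of the bouncing ball: dropped from height h,
# gravity gamma, restitution coefficient lam in (0, 1).
h, gamma, lam = 1.0, 9.81, 0.8

t0 = math.sqrt(2 * h / gamma)          # time until the first impact
T = t0 + (2 * lam / (1 - lam)) * t0    # closed-form accumulation time

# After the k-th bounce the flight lasts 2 * lam**k * t0 (geometric series),
# so the partial sums of the flight times converge to T.
partial = t0 + sum(2 * lam**k * t0 for k in range(1, 200))
```

Summing the first 200 flight times already agrees with T far below floating-point noise, which illustrates why infinitely many jumps fit into the finite time tmax.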
Interconnection of hybrid systems
Σ:
ẋ = f(x, u),   (x, u) ∈ C
x⁺ = g(x, u),   (x, u) ∈ D
χ := χ1 × . . . × χn,  x := (x1^T, . . . , xn^T)^T ∈ χ ⊂ R^N,  N := N1 + · · · + Nn,  C := ∩i Ci,  D := ∪i Di,  f := (f1^T, . . . , fn^T)^T,  g := (ḡ1^T, . . . , ḡn^T)^T with
ḡi(x, u) := gi(x, ui), if (x, u) ∈ Di;  xi, otherwise.
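The extended jump maps ḡi, which apply gi only where subsystem i is in its jump set and leave xi unchanged otherwise, can be sketched generically; the two scalar toy subsystems below are assumptions for illustration only.

```python
# Interconnection of n hybrid subsystems: the overall jump map applies
# g_i where the state lies in D_i and keeps x_i unchanged elsewhere.

def make_jump_map(g_list, D_list):
    """g_list[i]: jump map of subsystem i; D_list[i]: membership test for D_i."""
    def g(x, u):
        return [g_list[i](x, u) if D_list[i](x, u) else x[i]
                for i in range(len(g_list))]
    return g

# Two scalar toy subsystems: subsystem i jumps (halves its state)
# whenever x_i >= 1.
g_list = [lambda x, u: x[0] / 2, lambda x, u: x[1] / 2]
D_list = [lambda x, u: x[0] >= 1, lambda x, u: x[1] >= 1]
g = make_jump_map(g_list, D_list)
```

For example, from x = [2.0, 0.5] only the first subsystem jumps and the map returns [1.0, 0.5].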
Example of interconnection of hybrid systems
two bouncing balls with states x^1 ∈ R^2 and x^2 ∈ R^2, respectively;
they are interconnected by an elastic spring with spring coefficient k ≥ 0;
the horizontal distance between the balls is neglected;
the balls do not hit each other.
The interaction force due to the elastic spring is given by ±k(x^1_1 − x^2_1).
Example of interconnection of hybrid systems
Dynamics of the interconnection
ż = f(z),   z ∈ C := C1 ∩ C2,
z⁺ = g(z),   z ∈ D := D1 ∪ D2 ⊂ R^4,
where z := (x^{1T}, x^{2T})^T,  f(z) := (f^{1T}, f^{2T})^T,  g(z) := (ḡ^{1T}, ḡ^{2T})^T,
ḡ^i(z) = ḡ^i(x^1, x^2) := g^i(x^1, x^2), if (x^1, x^2) ∈ Di;  x^i, otherwise.
Drawback
Taking x^1_1(0) = x^1_2(0) = 0 and x^2_1(0) = h > 0, x^2_2(0) = v ∈ R, the hybrid arc
x^1_1(t, j) = x^1_2(t, j) = 0,  x^2_1(t, j) = h,  x^2_2(t, j) = v
is a solution with hybrid time domain {(0, j)}_{j=0}^∞.
Thus tmax = 0 and the system jumps infinitely many times from a non-zero state to the same state ⇒ the solution possesses chattering Zeno behaviour ⇒ no physical meaning! Furthermore, the system is no longer GAS.
Our approach
We take into account which of the subsystems can flow or jump via the index sets
IC(x, u) := {i : (x, u) ∈ Ci},   ID(x, u) := {i : (x, u) ∈ Di}.
Then the dynamics are given by
ẋi = fi(x, u),   i ∈ IC(x, u),
xi⁺ = gi(x, u),   i ∈ ID(x, u).
The hybrid time domain counts the jumps of the subsystems separately:
dom_{k1,...,kn} := ∪ [tk, tk+1] × {(k1, . . . , kn)} ⊂ R+ × N^n_+,
where k = k1 + · · · + kn, ki ∈ N+ counts the jumps of the i-th subsystem, and t^i_max denotes the maximal flow time of the i-th subsystem.
Solution
(i) (x(0, 0), u(0, 0)) ∈ Ci ∪ Di for all i;
(ii) for i ∈ IC(x1(t, k), . . . , xn(t, k), u(t, k)),
ẋi(t, k) = fi(x1(min{t, t^1_max}, k), . . . , xn(min{t, t^n_max}, k), u(t, k));
(iii) for all (t, k) ∈ dom_k x with (t, k + p) ∈ dom_k x, p ≥ 1, and for i ∈ ID(x1(t, k), . . . , xn(t, k), u(t, k)),
xi⁺(min{t, t^i_max}, k + p) = gi(x1(min{t, t^1_max}, k), . . . , xn(min{t, t^n_max}, k), u(t, k)),
where k = (k1, . . . , kn) ∈ N^n_+.
Application to an example
The sets IC(x) and ID(x) in the example of an interconnection of bouncing balls are given by
IC(x) = {1, 2}, ID(x) = ∅,      if x^1_1 > 0, x^2_1 > 0,
IC(x) = {1, 2}, ID(x) = {1},    if x^1_1 = 0, x^2_1 > 0,
IC(x) = {1, 2}, ID(x) = {2},    if x^1_1 > 0, x^2_1 = 0,
IC(x) = {1, 2}, ID(x) = {1, 2}, if x^1_1 = 0, x^2_1 = 0.
Then the arc
x^1_1(t, j) = x^1_2(t, j) = 0,  x^2_1(t, j) = h,  x^2_2(t, j) = v
is not a solution, because it corresponds to IC = {1, 2}, ID = {1}, i.e., the second subsystem is not allowed to jump.
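The case distinction above is straightforward to encode: every subsystem may always flow (heights are nonnegative), and subsystem i may jump exactly when its height x^i_1 is zero. This is a sketch following the example's state layout x^i = (height, velocity).

```python
# Index sets for the two-ball interconnection: both subsystems may always
# flow, and subsystem i may jump only when its height h_i is zero.

def index_sets(h1, h2):
    I_C = {1, 2}
    I_D = {i for i, h in ((1, h1), (2, h2)) if h == 0}
    return I_C, I_D
```

The chattering arc from the previous slide has h1 = 0 and h2 = h > 0, which gives ID = {1}: only the first ball may jump, so the arc in which both states jump forever is excluded.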
GAS
Adaptation of Matrosov’s theorem
Let a hybrid system be GS. Then it is GAS if there exist m ∈ N and, for each 0 < δ < ∆:
a number µ > 0;
continuous functions wc,j : (∪i C̄i) ∩ Ω_IC(x)(δ, ∆) → R and wd,j : (∪i D̄i) ∩ Ω_ID(x)(δ, ∆) → R, j ∈ {1, . . . , m};
functions Vj : R^N → R, j ∈ {1, . . . , m}, that are C¹ on an open set containing (∪i C̄i) ∩ Ω_IC(x)(δ, ∆),
Adaptation of Matrosov’s theorem (continued)
such that, for each j ∈ {1, . . . , m}:
i) |Vj(x)| ≤ µ for all x ∈ ((∪i C̄i) ∪ (∪i Di) ∪ (∪i gi(Di))) ∩ Ω(δ, ∆);
ii) ⟨∇Vj(x), (f1^T, . . . , fn^T)^T⟩, restricted to the components in IC(x), is ≤ wc,j(x) for all x ∈ (∪i Ci) ∩ Ω_IC(x)(δ, ∆);
iii) Vj((g1^T, . . . , gn^T)^T(x̃)) − Vj(x̃) ≤ wd,j(x) for all x ∈ (∪i Di) ∩ Ω_ID(x)(δ, ∆);
and with wc,0, wd,0 : R^N → {0} and wc,m+1, wd,m+1 : R^N → {1} such that for each l ∈ {0, . . . , m}:
1) if x ∈ (∪i C̄i) ∩ Ω(δ, ∆) and wc,j(x) = 0 for all j ∈ {0, . . . , l}, then wc,l+1(x) ≤ 0;
2) if x ∈ (∪i D̄i) ∩ Ω_ID(x)(δ, ∆) and wd,j(x) = 0 for all j ∈ {0, . . . , l}, then wd,l+1(x) ≤ 0.
Application of Matrosov’s theorem to an interconnection of bouncing balls
The system is GS (Sanfelice and Teel 2008).
Define z := (x^1, x^2)^T and
V1(z) := V1(x^1, x^2) := (1/2)((x^1_2)^2 + (x^2_2)^2) + γ x^1_1 + γ x^2_1 + (k/2)(x^2_1 − x^1_1)^2,
V2(z) := V2(x^1, x^2) := γ x^1_2 + γ x^2_2.
Consider the following four cases:
both components of the state flow continuously:
V̇1(z) = x^1_2 (−k x^1_1 + k x^2_1 − γ) + x^2_2 (k x^1_1 − k x^2_1 − γ) + γ x^1_2 + γ x^2_2 + k (x^2_1 − x^1_1)(x^2_2 − x^1_2) = 0,
V̇2(z) = −2γ^2.
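The claimed cancellation in V̇1 can be verified mechanically by evaluating the flow derivative term by term along the coupled dynamics; the numerical values of γ, k and the sample states are arbitrary test points.

```python
# Flow derivative of V1 and V2 along the spring-coupled dynamics:
#   hdot_i = v_i,  vdot_1 = -gamma + k*(h2 - h1),  vdot_2 = -gamma - k*(h2 - h1)

gamma, k = 9.81, 3.0

def vdots(h1, h2):
    return -gamma + k * (h2 - h1), -gamma - k * (h2 - h1)

def V1_dot(h1, v1, h2, v2):
    a1, a2 = vdots(h1, h2)
    # d/dt [ (v1^2 + v2^2)/2 + gamma*h1 + gamma*h2 + (k/2)*(h2 - h1)^2 ]
    return v1 * a1 + v2 * a2 + gamma * v1 + gamma * v2 \
        + k * (h2 - h1) * (v2 - v1)

def V2_dot(h1, h2):
    a1, a2 = vdots(h1, h2)
    return gamma * (a1 + a2)   # d/dt [ gamma*v1 + gamma*v2 ]
```

At every state the coupling terms cancel, so V̇1 is identically 0 and V̇2 is identically −2γ², matching the expressions above.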
Application of Matrosov’s theorem to an interconnection of bouncing balls (continued)
both components of the state jump:
V1(z⁺) − V1(z) = −(1/2)(1 − λ^2)(x^1_2)^2 − (1/2)(1 − λ^2)(x^2_2)^2,
V2(z⁺) − V2(z) = −(1 + λ)γ(x^1_2 + x^2_2);
the first component of the state jumps and the second flows continuously:
V1(z⁺) − V1(z) = −(1/2)(1 − λ^2)(x^1_2)^2,
V2(z⁺) − V2(z) = −(1 + λ)γ x^1_2;
the first component of the state flows continuously and the second jumps:
V1(z⁺) − V1(z) = −(1/2)(1 − λ^2)(x^2_2)^2,
V2(z⁺) − V2(z) = −(1 + λ)γ x^2_2.
Then the interconnected system is GAS.
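The jump differences of V1 and V2 can likewise be checked numerically, assuming the standard bouncing-ball jump map in which the velocity of the jumping ball is rescaled to −λ·v while all other state components stay fixed; the sample state is arbitrary.

```python
gamma, k, lam = 9.81, 3.0, 0.8   # illustrative parameter values

def V1(h1, v1, h2, v2):
    return 0.5 * (v1**2 + v2**2) + gamma * h1 + gamma * h2 \
        + 0.5 * k * (h2 - h1)**2

def V2(v1, v2):
    return gamma * v1 + gamma * v2

# Ball 1 jumps: its height stays at 0 and its velocity maps to -lam * v1.
h1, v1, h2, v2 = 0.0, -3.0, 1.0, 0.5
dV1 = V1(h1, -lam * v1, h2, v2) - V1(h1, v1, h2, v2)
dV2 = V2(-lam * v1, v2) - V2(v1, v2)
```

The computed differences reproduce the closed-form drops −(1/2)(1 − λ²)(x^1_2)² and −(1 + λ)γ x^1_2 for the case in which only the first ball jumps.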
Existence of solutions
Basic regularity conditions:
1. χi is open, U is closed, and Ci, Di ⊂ χ1 × · · · × χn × U are relatively closed in χ1 × · · · × χn × U;
2. fi, gi are continuous.
Theorem (Existence of solutions)
Assume the basic regularity conditions hold. If one of the following conditions holds:
(i) (x0, u0) ∈ Di for all i ∈ {1, . . . , n};
(ii) (x0, u0) ∈ Ci and, for some neighborhood P of (x0, u0) and all (x′, u0) ∈ P ∩ Ci, T_Ci(x′, u0) ∩ fi(x′, u0) ≠ ∅, for all i ∈ {1, . . . , n};
(iii) 1 ≤ |IC(x0, u0)| < n, 1 ≤ |ID(x0, u0)| < n, and for some neighborhood P of (x0, u0), for all i ∈ IC(x0, u0) and all (x′, u0) ∈ P ∩ Ci, T_Ci(x′, u0) ∩ fi(x′, u0) ≠ ∅,
Existence of solutions
Theorem (Existence of solutions, continued)
then there exists a solution pair (x, u) of the hybrid system with (t, k̄) ∈ dom x for some t > 0 or k̄ ≠ (0, . . . , 0)^T ∈ N^n_+.
Furthermore, if gi(Di) ⊂ Ci ∪ Di for all i ∈ {1, . . . , n}, then there exists a solution with t > 0, k̄ ∈ N^n_+ such that (x(t, k̄), u(t, k̄)) ∈ Ci ∪ Di.
Proof
We consider all the cases separately and construct solutions for each of them explicitly.
Another problem (semi-genuinely Zeno solutions)
Example:
two balls with masses m1 and m2 such that m2 > m1;
the first ball is launched at an angle 0 < θ < π/2 to the horizontal and then bounces with initial velocity v1 towards the second ball;
the second ball rolls with constant velocity v2 towards the first ball.
Dynamics in C
For (x1, y1, v1, x2, y2, v2, q) ∈ C = {(x1, y1, v1, x2, y2, v2, q) ∈ R^7 : y1 ≥ 0}:
ẋ1 = v1 cos θ, if q = 0;  −v1, if q = 1
ẏ1 = v1 sin θ − g t, if q = 0;  0, if q = 1
v̇1 = 0
ẋ2 = −v2, if q = 0;  v2, if q = 1
ẏ2 = 0
v̇2 = 0
q̇ = 0
Here q is a logical variable that indicates whether the balls have already collided.
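The piecewise flow map on C can be transcribed directly as a function of the state, with q selecting the pre- or post-collision branch; the values of the gravitational acceleration and the launch angle below are assumed sample values.

```python
import math

# Flow map on C = {y1 >= 0}; state layout: (x1, y1, v1, x2, y2, v2, q).
# q = 0: before the collision, q = 1: after it.

g_acc, theta = 9.81, math.pi / 4   # illustrative sample parameters

def flow(t, state):
    x1, y1, v1, x2, y2, v2, q = state
    if q == 0:   # ball 1 in ballistic flight, ball 2 rolling towards it
        return (v1 * math.cos(theta), v1 * math.sin(theta) - g_acc * t,
                0.0, -v2, 0.0, 0.0, 0.0)
    else:        # after the collision the balls move apart horizontally
        return (-v1, 0.0, 0.0, v2, 0.0, 0.0, 0.0)

pre = flow(0.0, (0.0, 0.0, 2.0, 5.0, 0.0, 1.0, 0))
post = flow(0.0, (0.0, 0.0, 2.0, 5.0, 0.0, 1.0, 1))
```

Switching q flips the horizontal directions of both balls, which is exactly the branch structure written above.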
Dynamics in D = D1 ∪ D2
At a bounce, i.e. for (x1, y1, v1, x2, y2, v2, q) ∈ D1 = {(x1, y1, v1, x2, y2, v2, q) ∈ R^7 : y1 = 0, v1 ≥ 0, q = 0}:
x1⁺ = x1,  y1⁺ = y1,  v1⁺ = µ v1,
x2⁺ = x2,  y2⁺ = y2,  v2⁺ = v2,
q⁺ = q.
Trajectories
Figure: Trajectories before the stop of the first ball.
The first ball "stops" earlier due to the loss of energy, i.e. at t^1_max < t^2_max.
Trajectories
Figure: Trajectories before the collision.
After some time the second ball reaches the first, i.e. (x1, y1, v1, x2, y2, v2, q) ∈ D2, and they collide.
Trajectories
Figure: Trajectories after the collision.
As m2 > m1, the first ball begins to move again, in the direction opposite to the second ball.
Thus the hybrid time domain of the first ball is extended beyond t^1_max!
Possible solution
Instead of considering min{t, t^i_max}, we consider intervals
[t^{i,ki}_min, t^{i,ki}_max],   ki = 0, 1, 2, . . . ,
where system i is "active", and intervals
[t^{i,ki}_max, t^{i,ki+1}_min],
where system i is "passive" (in the physical sense).
θi(t) := t, if t^{i,ki}_min < t < t^{i,ki}_max;  t^{i,ki}_max, if t^{i,ki}_max < t < t^{i,ki+1}_min
identifies whether at time t the system i is "active" or "passive".
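The clock θi(t) can be implemented as a piecewise function of the active intervals of subsystem i: it runs with t while the system is active and stays frozen at the last t^{i,ki}_max while the system is passive. The intervals used below are illustrative numbers, not values computed from the ball dynamics.

```python
# theta_i freezes the time of subsystem i while it is "passive":
# within an active interval [t_min, t_max] it returns t itself;
# between t_max and the next activation it returns t_max.

def make_theta(active_intervals):
    """active_intervals: sorted list of (t_min, t_max) where the system flows."""
    def theta(t):
        frozen = active_intervals[0][0]
        for t_min, t_max in active_intervals:
            if t < t_min:
                return frozen          # passive: clock frozen at last t_max
            if t <= t_max:
                return t               # active: clock runs with t
            frozen = t_max
        return frozen
    return theta

# Ball 1 in the example: active while bouncing on [0, 2], passive until the
# collision at t = 5, then active forever (illustrative numbers).
theta1 = make_theta([(0.0, 2.0), (5.0, float("inf"))])
```

During the passive phase the frozen clock keeps the state of the resting ball constant in the solution definition above, until the collision reactivates it.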
Definition of solution
Hybrid arc x and hybrid input u are a solution pair if
(i) (x(0, 0), u(0, 0)) ∈ Ci ∪ Di for all i;
(ii) for i ∈ IC(x1(t, k), . . . , xn(t, k), u(t, k)),
ẋi(t, k) = fi(x1(θ1(t), k), . . . , xn(θn(t), k), u(t, k));
(iii) for all (t, k) ∈ dom_k x with (t, k + p) ∈ dom_k x, where p ≥ 1, and for i ∈ ID(x1(t, k), . . . , xn(t, k), u(t, k)),
xi⁺(θi(t), k + p) = gi(x1(θ1(t), k), . . . , xn(θn(t), k), u(t, k)),
where k := (k1, . . . , kn) ∈ N^n_+.
Application to an example
1. The second ball is always "active" ⇒ k2 = 0, t^{2,k2}_min = 0, t^{2,k2}_max = ∞.
2. The first ball first bounces from t^{1,0}_min = 0 till t^{1,0}_max.
3. Then it lies on the floor.
4. At t^{1,1}_min the two balls collide.
5. The first ball then flows till t^{1,1}_max = ∞ ⇒ thus k1 = 1.
Existence of solutions
Theorem (Existence of solutions, continued)
then there exists a solution pair (x, u) of the hybrid system with (t, k̄) ∈ dom x for some t > 0 or k̄ ≠ (0, . . . , 0)^T ∈ N^n_+.
Furthermore, if gi(Di) ⊂ Ci ∪ Di for all i ∈ {1, . . . , n}, then there exists a solution with t > 0, k̄ ∈ N^n_+ such that (x(t, k̄), u(t, k̄)) ∈ Ci ∪ Di.
Proof
For each interval [t^{i,ki}_min, t^{i,ki+1}_min] we consider all the cases separately and construct solutions explicitly. Then we concatenate the solutions on the intervals.
Conclusion
Hybrid systems make it possible to describe complex dynamics such as the behaviour of logistics networks.
Hybrid systems may possess solutions with no physical meaning (Zeno behaviour) ⇒ they have to be used very carefully.
If subsystems in interconnections of hybrid systems possess Zeno behaviour, one can extend the notion of the time domain to exclude such solutions.