IJERA (International Journal of Engineering Research and Applications) is an international, online, peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Turning Krimp into a Triclustering Technique on Sets of Attribute-Condition P... - Dmitrii Ignatov
Mining ternary relations or triadic Boolean tensors is one of the recent trends in knowledge discovery that allows one to take into account various modalities of input object-attribute data.
For example, in movie databases like IMDb, an analyst may find not only movies grouped by specific genres but also see their common keywords. In so-called folksonomies, users can be grouped according to their shared resources and the tags they use. In gene expression analysis, genes can be grouped along with tissue samples and time intervals, providing comprehensible patterns. However, pattern explosion effects are seriously aggravated by even one more dimension. In this paper, we continue our previous study on searching for a smaller collection of "optimal" patterns in triadic data with respect to a set of quality criteria such as pattern cardinality, density, diversity, coverage, etc. We show how a simple data preprocessing step enabled us to use a frequent itemset mining algorithm.
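The preprocessing idea can be sketched in a few lines: flatten each (object, attribute, condition) triple into a transaction over attribute-condition pairs, so that an off-the-shelf two-dimensional frequent itemset miner applies to originally triadic data. The triples and the naive support count below are illustrative assumptions, not the paper's Krimp pipeline:

```python
from collections import Counter

# Hypothetical (object, attribute, condition) triples, e.g. (movie, genre, keyword)
triples = [
    ("movie1", "comedy", "road-trip"),
    ("movie1", "comedy", "friendship"),
    ("movie2", "comedy", "road-trip"),
    ("movie2", "drama", "friendship"),
]

# Flatten: each object becomes one transaction over attribute-condition pairs,
# so a binary (2D) itemset miner can be run on the 3D relation.
transactions = {}
for obj, attr, cond in triples:
    transactions.setdefault(obj, set()).add((attr, cond))

# Naive support count over single pairs (a stand-in for a real miner)
support = Counter(item for items in transactions.values() for item in items)
print(support[("comedy", "road-trip")])  # the pair occurs in both transactions: 2
```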
A publish/subscribe approach for implementing GAG's distributed collaborative business processes with high data availability
Maurice Tchoupé Tchendji and Joskel Ngoufo Tagueu
[AAAI-16] Tiebreaking Strategies for A* Search: How to Explore the Final Fron... - Asai Masataro
This is a presentation used in the oral session at AAAI-16. The original paper is available at http://guicho271828.github.io/publications/ .
# Abstract
Despite recent improvements in search techniques for cost-optimal classical planning, the exponential growth of the size of the search frontier in A* is unavoidable. We investigate tiebreaking strategies for A*, experimentally analyzing the performance of standard tiebreaking strategies that break ties according to the heuristic value of the nodes. We find that tiebreaking has a significant impact on search algorithm performance when there are zero-cost operators that induce large plateau regions in the search space. We develop a new framework for tiebreaking based on a depth metric which measures distance from the entrance to the plateau, and propose a new, randomized strategy which significantly outperforms standard strategies on domains with zero-cost actions.
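The effect being studied can be illustrated with a minimal A* in which ties on f = g + h are broken either by smaller h (the standard strategy the abstract mentions) or by FIFO order. This is a generic sketch, not the authors' code; the graph, function names, and FIFO fallback are illustrative:

```python
import heapq

def astar(start, goal, neighbors, h, tiebreak="h"):
    """A* search; ties on f = g + h broken by smaller h, or by FIFO order."""
    counter = 0  # insertion order, used as the FIFO tiebreaker
    f0 = h(start)
    key = (f0, f0) if tiebreak == "h" else (f0, counter)
    frontier = [(key, start, 0)]
    best_g = {start: 0}
    while frontier:
        _, node, g = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale entry
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                counter += 1
                hv = h(nxt)
                key = (ng + hv, hv) if tiebreak == "h" else (ng + hv, counter)
                heapq.heappush(frontier, (key, nxt, ng))
    return None

# Tiny example: A -> B -> C with unit costs and a blind heuristic
graph = {"A": [("B", 1)], "B": [("C", 1)], "C": []}
cost = astar("A", "C", lambda n: graph[n], lambda n: 0)
print(cost)  # optimal cost is 2
```

With zero-cost edges, many nodes share the same f value, so which secondary key orders the heap dominates how the plateau is explored.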
Digital Signal Processing [ECEG-3171] - Ch1_L06 - Rediet Moges
This Digital Signal Processing lecture material is the property of the author (Rediet M.). It is not for publication, nor is it to be sold or reproduced.
Many biological systems exhibit heterogeneity at the population level. This heterogeneity can be captured by describing the temporal evolution of the probability of an individual in the population being in a certain state as a partial differential equation. To tune the parameters of such a partial differential equation to experimental data, a partial differential equation constrained optimisation problem has to be solved. Hence, for biological systems with a large number of states, a high-dimensional partial differential equation has to be solved. This can easily render the optimisation problem intractable, as there are no well-established, efficient integration schemes available for high-dimensional partial differential equations. In this talk we will present techniques to translate the partial differential equation constrained optimisation problem into a hierarchical, ordinary differential equation constrained optimisation problem given a certain set of assumptions. We will present these assumptions as well as the derivation of the hierarchical, ordinary differential equation constrained optimisation problem. Moreover, we will present numerical schemes for the computation of the respective objective function and its gradient. Eventually, we will also present numerical schemes to solve the constrained optimisation problem and apply these techniques to small- and large-scale biological applications for which experimental data is available.
Bayesian inference for mixed-effects models driven by SDEs and other stochast... - Umberto Picchini
An important and well-studied class of stochastic models is given by stochastic differential equations (SDEs). In this talk, we consider Bayesian inference based on measurements from several individuals, to provide inference at the "population level" using mixed-effects modelling. We consider the case where dynamics are expressed via SDEs or other stochastic (Markovian) models. Stochastic differential equation mixed-effects models (SDEMEMs) are flexible hierarchical models that account for (i) the intrinsic random variability in the latent state dynamics, (ii) the variability between individuals, and (iii) measurement error. This flexibility gives rise to methodological and computational difficulties.
Fully Bayesian inference for nonlinear SDEMEMs is complicated by the typical intractability of the observed data likelihood, which motivates the use of sampling-based approaches such as Markov chain Monte Carlo. A Gibbs sampler is proposed to target the marginal posterior of all parameters of interest. The algorithm is made computationally efficient through careful use of blocking strategies, particle filters (sequential Monte Carlo) and correlated pseudo-marginal approaches. The resulting methodology is flexible and general, and is able to deal with a large class of nonlinear SDEMEMs [1]. In more recent work [2], we also explored ways to make inference even more scalable to an increasing number of individuals, while also dealing with state-space models driven by stochastic dynamic models other than SDEs, e.g. Markov jump processes and nonlinear solvers typically used in systems biology.
[1] S. Wiqvist, A. Golightly, A. T. McLean, U. Picchini (2020). Efficient inference for stochastic differential mixed-effects models using correlated particle pseudo-marginal algorithms. Computational Statistics & Data Analysis. https://doi.org/10.1016/j.csda.2020.107151
[2] S. Persson, N. Welkenhuysen, S. Shashkova, S. Wiqvist, P. Reith, G. W. Schmidt, U. Picchini, M. Cvijovic (2021). PEPSDI: Scalable and flexible inference framework for stochastic dynamic single-cell models. bioRxiv. doi:10.1101/2021.07.01.450748
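The correlated pseudo-marginal idea referenced above can be illustrated in a few lines: instead of refreshing all auxiliary random numbers behind the likelihood estimator at every Metropolis-Hastings step, the proposal reuses them through an autoregression with correlation ρ close to 1, so successive likelihood estimates are positively correlated and the acceptance ratio is less noisy. The estimator below is a deliberately crude Monte Carlo toy (not the particle filter of [1]); all names and settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def loglik_hat(theta, u):
    """Toy Monte Carlo log-likelihood estimate driven by auxiliary normals u
    (a crude stand-in for a particle filter; purely illustrative)."""
    y = 1.0
    samples = np.sqrt(1.0 + theta**2) * u       # draws from N(0, 1 + theta^2)
    w = np.exp(-0.5 * (y - samples) ** 2) / np.sqrt(2.0 * np.pi)
    return np.log(np.mean(w) + 1e-300)

def correlated_pm_mh(theta0, n_iter=2000, rho=0.99, n_u=64, step=0.3):
    theta, u = theta0, rng.standard_normal(n_u)
    ll = loglik_hat(theta, u)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        theta_prop = theta + step * rng.standard_normal()
        # Correlate the proposed auxiliary variables with the current ones:
        u_prop = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_u)
        ll_prop = loglik_hat(theta_prop, u_prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # flat prior, MH accept step
            theta, u, ll = theta_prop, u_prop, ll_prop
        chain[i] = theta
    return chain

chain = correlated_pm_mh(0.0)
```

Setting rho = 0 recovers the uncorrelated pseudo-marginal sampler, which typically mixes worse when the likelihood estimates are noisy.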
Hierarchical Deterministic Quadrature Methods for Option Pricing under the Ro... - Chiheb Ben Hammouda
Conference talk at the SIAM Conference on Financial Mathematics and Engineering, held in virtual format, June 1-4, 2021, about our recently published work "Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model".
Link to the paper: https://www.tandfonline.com/doi/abs/10.1080/14697688.2020.1744700
To describe the dynamics taking place in networks that structurally change over time, we propose an approach to search for attributes whose value changes impact the topology of the graph. In several applications, it appears that the variations of a group of attributes are often followed by structural changes in the graph that they may be assumed to generate. We formalize the triggering pattern discovery problem as a method jointly rooted in sequence mining and graph analysis. We apply our approach to three real-world dynamic graphs of different natures - a co-authoring network, an airline network, and a social bookmarking system - assessing the relevance of the triggering pattern mining approach.
Incremental and Multi-feature Tensor Subspace Learning applied for Background... - ActiveEon
ICIAR'14 - International Conference on Image Analysis and Recognition. Incremental and Multi-feature Tensor Subspace Learning applied for Background Modeling and Subtraction.
Accelerating Pseudo-Marginal MCMC using Gaussian Processes - Matt Moores
The grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms are pseudo-marginal methods used to perform Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties, but less theoretical support. In this paper we accelerate the GIMH method by using a Gaussian process (GP) approximation to the log-likelihood and train this GP using a short pilot run of the MCWM algorithm. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility model and a gene network model. Our approach produces reasonable estimates of the univariate and bivariate posterior distributions, and of the posterior correlation matrix, in these examples, with at least an order of magnitude improvement in computing time.
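The core of the GP surrogate step (regress noisy pilot-run log-likelihood estimates on the parameter, then use the smooth surrogate inside GIMH) can be sketched with a hand-rolled GP. The quadratic toy log-likelihood, kernel hyperparameters, and function names are illustrative assumptions, not the paper's model:

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ell=1.0, sf2=1.0, noise=1e-4):
    """Minimal GP regression (squared-exponential kernel): posterior mean at Xs."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))  # jitter for numerical stability
    alpha = np.linalg.solve(K, y)
    return k(Xs, X) @ alpha

# Pilot-run pairs (theta, noisy log-likelihood estimate); the quadratic shape
# is a synthetic stand-in for estimates collected during a short MCWM run.
theta = np.linspace(-2.0, 2.0, 15)
loglik = -0.5 * theta**2 + 0.01 * np.random.default_rng(0).standard_normal(15)

grid = np.linspace(-2.0, 2.0, 101)
mu = gp_fit_predict(theta, loglik, grid)  # smooth surrogate log-likelihood
print(grid[np.argmax(mu)])  # surrogate maximum should land near theta = 0
```

The surrogate mean replaces expensive likelihood estimates during the subsequent GIMH run, which is where the reported speed-up comes from.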
My talk entitled "Numerical Smoothing and Hierarchical Approximations for Efficient Option Pricing and Density Estimation", that I gave at the "International Conference on Computational Finance (ICCF)", Wuppertal June 6-10, 2022. The talk is related to our recent works "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://arxiv.org/abs/2111.01874) and "Multilevel Monte Carlo combined with numerical smoothing for robust and efficient option pricing and density estimation" (link: https://arxiv.org/abs/2003.05708). In these two works, we introduce the numerical smoothing technique that improves the regularity of observables when approximating expectations (or the related integration problems). We provide a smoothness analysis and we show how this technique leads to better performance for the different methods that we used (i) adaptive sparse grids, (ii) Quasi-Monte Carlo, and (iii) multilevel Monte Carlo. Our applications are option pricing and density estimation. Our approach is generic and can be applied to solve a broad class of problems, particularly for approximating distribution functions, financial Greeks computation, and risk estimation.
Toxic effects of heavy metals: Lead and Arsenic - sanjana502982
Heavy metals are naturally occurring metallic chemical elements that have relatively high density and are toxic even at low concentrations. In toxicology, all toxic metals are often termed heavy metals irrespective of their atomic mass and density, e.g. arsenic, lead, mercury, cadmium, thallium, and chromium.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... - Wasswaderrick3
In this book, we use conservation-of-energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our energy-conservation techniques to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
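As a concrete instance of the terminal-velocity result, Stokes' law for creeping flow gives v_t = 2r²g(ρ_s − ρ_f)/(9μ). The sketch below evaluates it for an illustrative sphere-in-glycerine example; the material values are rounded textbook figures, not taken from the book:

```python
def stokes_terminal_velocity(r, rho_s, rho_f, mu, g=9.81):
    """Stokes' law for creeping flow: v_t = 2 r^2 g (rho_s - rho_f) / (9 mu)."""
    return 2.0 * r**2 * g * (rho_s - rho_f) / (9.0 * mu)

# Illustrative: a 0.1 mm radius steel sphere (7850 kg/m^3) in glycerine
# (density ~1260 kg/m^3, dynamic viscosity ~1.41 Pa*s)
v = stokes_terminal_velocity(r=1e-4, rho_s=7850.0, rho_f=1260.0, mu=1.41)
print(v)  # about 1e-4 m/s, i.e. roughly 0.1 mm/s
```

The formula applies only at low Reynolds number; for faster flows the turbulent drag law discussed in the book takes over.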
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... - University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol... - Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to have this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and the solution of frictionless reproducibility, calling on the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Seminar on U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
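The quantitative side of that statement is the Beer-Lambert law, A = εlc with A = −log10(I/I0). A small sketch; the molar absorptivity and concentration are illustrative values, not from the seminar:

```python
def absorbance(epsilon, path_cm, conc_molar):
    """Beer-Lambert law: A = epsilon * l * c."""
    return epsilon * path_cm * conc_molar

def transmittance(A):
    """A = -log10(I/I0), so I/I0 = 10**(-A)."""
    return 10.0 ** (-A)

# Illustrative values: molar absorptivity 6220 L/(mol*cm), 1 cm cuvette, 5e-5 M
A = absorbance(6220.0, 1.0, 5e-5)
T = transmittance(A)
print(A, T)  # A = 0.311, so about 49% of the light is transmitted
```

Linearity of A in concentration is what makes UV-Vis useful for quantitative analysis, though it breaks down at high absorbance.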
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... - Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects of interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior composed of discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... - Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io's surface using adaptive optics at visible wavelengths.
1. Unexpected Default in an Information based model
Dr. Matteo L. BEDINI
Intesa Sanpaolo - DRFM, Derivatives
Milano, 27 January 2016
2. Summary
The work provides sufficient conditions for a default time τ to be totally inaccessible in a framework where market information is modelled explicitly through a Brownian bridge between 0 and 0 on the random time interval [0, τ].
This talk is based on a joint work with Prof. Dr. Rainer Buckdahn and
Prof. Dr. Hans-Jürgen Engelbert:
MLB, R. Buckdahn, H.-J. Engelbert, Unexpected Default in an
Information Based model, Preprint, 2016 (Submitted). Available at
https://arxiv.org/abs/1611.02952.
Disclaimer
The opinions expressed in these slides are solely of the author and do not
necessarily represent those of the present or past employers.
Work partially supported by the European Community’s FP 7 Programme
under contract PITN-GA-2008-213841, Marie Curie ITN "Controlled
Systems".
3. Outline
1 Objective and Motivation
2 The Information process
3 Main result and its proof
4 Further developments and Bibliography
4. Objective and Motivation
Outline
1 Objective and Motivation
2 The Information process
3 Main result and its proof
4 Further developments and Bibliography
M. L. Bedini (ISP - DRFM) Unexpected Default & Information QFW, UniMiB, 27/01/2017 4 / 23
5. Objective and Motivation The flow of information on a default: reduced-form models
In most of the credit-risk models used by practitioners, the information on a default time τ is modelled by H = (Ht)t≥0, the smallest filtration making τ a stopping time, which is generated by the single-jump process occurring at τ, meaning that market agents just know whether the default has occurred or not.
Figure: The minimal filtration making τ a stopping time.
6. Objective and Motivation Explicitly modelling the information on a default ([BBE])
Financial reality can be more complex: there are periods where the default is more likely to happen than in others. For this reason, in the information-based approach the flow of market information on the default is modelled by Fβ = (Fβt)t≥0, the filtration generated by β = (βt, t ≥ 0), a Brownian bridge between 0 and 0 on the random time interval [0, τ].
Figure: The filtration generated by the information process β.
7. Objective and Motivation Earlier works
Key question
Is the default time a predictable, accessible or totally inaccessible stopping
time?
Structural approach to credit risk (see, e.g., [M74]). Default time is predictable (as any stopping time in a Brownian filtration).
Reduced-form models (see, e.g., [DSS] or [EJY]). Key result of
Dellacherie and Meyer ([DM]): if the law of τ is diffuse, then τ is a
totally inaccessible stopping time with respect to H.
The fact that financial markets cannot foresee the time of default of a
company (non-negligible credit-spread even for short maturities) makes the
reduced-form models well accepted by practitioners. In this sense, totally
inaccessible default times seem to be the best candidates for modelling
times of bankruptcy.
See, e.g. Jarrow and Protter [JP] and Giesecke [G] on the relations
between financial information and the properties of the default time.
8. Objective and Motivation Main result
Our focus is on the classification of the default time with respect to the filtration Fβ generated by the information process, and our main result is the following: if the distribution of the default time τ admits a density f with respect to the Lebesgue measure, then τ is a totally inaccessible stopping time and its compensator K = (Kt, t ≥ 0) is given by

Kt = ∫₀^(t∧τ) [ f(s) / ∫_(s,+∞) √( v / (2πs(v−s)) ) f(v) dv ] dLβ(s, 0),

where Lβ(t, 0) is the local time of the information process β at level 0 up to time t.
Main features
Common assumption that τ admits a continuous density with respect to the
Lebesgue measure.
The default time is a totally-inaccessible stopping time.
The model for the flow of market information on the default is more
sophisticated than the standard approach.
9. The Information process
Outline
1 Objective and Motivation
2 The Information process
3 Main result and its proof
4 Further developments and Bibliography
10. The Information process Brownian bridges on random intervals
(Ω, F, P) complete probability space, N collection of (P, F)-null sets,
W = (Wt, t ≥ 0) a B.m., τ : Ω → (0, +∞) a r.v. independent of W .
Given r ∈ (0, +∞), a standard Brownian bridge βr = (βr_t, t ≥ 0) between 0 and 0 on [0, r] is given by:

βr_t := Wt − (t / (r ∨ t)) W(r∨t), t ≥ 0

(see, e.g., [KS] Section 5.6.B).
Definition (see [BBE], Def. 3.1)
The process β = (βt, t ≥ 0) defined by

βt := Wt − (t / (τ ∨ t)) W(τ∨t), t ≥ 0

is called the information process.
Fβ = (Fβt := σ(βs, 0 ≤ s ≤ t) ∨ N)t≥0 is right-continuous and complete (see [BBE] Cor. 6.1).
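The definition above is easy to simulate on a grid: draw τ independently of W, then set βt = Wt − (t/(τ∨t)) W(τ∨t); after τ the two terms cancel and the process sticks at 0, which is exactly the property {βt = 0} = {τ ≤ t} used later. A minimal sketch (Euler grid, exponential toy prior for τ; all choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate beta_t = W_t - (t / (tau v t)) * W_{tau v t} on a grid,
# with tau drawn independently of W (Exp(1) is an illustrative prior).
dt, T = 1e-3, 2.0
t = np.arange(0.0, T + dt, dt)
dW = np.sqrt(dt) * rng.standard_normal(len(t) - 1)
W = np.concatenate([[0.0], np.cumsum(dW)])

tau = rng.exponential(1.0)
W_tau = W[min(int(tau / dt), len(t) - 1)]  # grid approximation of W at tau

beta = W - (t / np.maximum(tau, t)) * np.where(t < tau, W_tau, W)
# For t >= tau the bridge term equals W_t itself, so beta sticks at 0.
```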
11. The Information process Conditional law
The law of βt, conditional on τ = r ∈ (0, +∞), is the same as that of a standard Brownian bridge βr between 0 and 0 on the deterministic time interval [0, r] ([BBE], Lem. 2.4, Cor. 2.2):

P(βt ∈ · | τ = r) = N(0, t(r − t)/r).

Denote by p(t, x, y) the Gaussian density with mean y and variance t:

p(t, x, y) := (1/√(2πt)) exp(−(x − y)² / (2t)), x ∈ R. (1)

The conditional density of βt, knowing τ = r, is equal to the density ϕt(r, x) of a standard Brownian bridge βr given by

ϕt(r, x) := p(t(r−t)/r, x, 0) for 0 < t < r, and ϕt(r, x) := 0 for r ≤ t, x ∈ R. (2)
12. The Information process Main properties
For all t > 0, {βt = 0} = {τ ≤ t} , P-a.s ([BBE], Prop. 3.1).
τ is an Fβ-stopping time ([BBE], Cor. 3.1).
β is an Fβ-Markov process ([BBE], Theo. 6.1).
Define the a-posteriori density function of τ as

φt(r, x) := ϕt(r, x) / ∫_(t,+∞) ϕt(v, x) dF(v), (r, t) ∈ (0, +∞) × R+, x ∈ R. (3)

Let t > 0 and let g : R+ → R be a Borel function s.t. E[|g(τ)|] < +∞. Then

E[ g(τ) I{t<τ} | Fβt ] = ∫_(t,+∞) g(r) φt(r, βt) dF(r) · I{t<τ}, P-a.s. (4)

([BBE], Theo. 4.1, Cor. 4.1 and Cor. 6.1).
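Formula (3) is straightforward to evaluate numerically: given βt = x, weight each candidate default time r > t by the bridge density ϕt(r, x) under a discretized prior for τ. The sketch below uses an Exp(1) toy prior (an illustrative choice, not from the slides); observing βt = 0 pulls posterior mass toward default times just after t:

```python
import numpy as np

def p(var, x):
    # Gaussian density with mean 0 and variance var, cf. (1)
    return np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

t, x = 0.5, 0.0                         # observation time, observed value beta_t
r = np.linspace(t + 1e-3, 10.0, 4000)   # candidate default times r > t
dr = r[1] - r[0]
prior = np.exp(-r)                      # Exp(1) prior density for tau (toy choice)
phi = p(t * (r - t) / r, x)             # bridge density phi_t(r, x), cf. (2)

post = phi * prior                      # unnormalized a-posteriori density, cf. (3)
post /= post.sum() * dr                 # normalize on the grid

mean_tau = (r * post).sum() * dr        # posterior mean of tau given beta_t = 0
```

Since ϕt(r, 0) decreases in r, the posterior mean sits below the conditional prior mean, matching the intuition that an information process near 0 signals imminent default.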
13. The Information process: Semimartingale decomposition & local time
Define
    u(s, x) := E[(β_s / (τ − s)) I_{s<τ} | β_s = x],   s ∈ R+, x ∈ R.   (5)
Theorem ([BBE], Theo. 7.1)
The process b = (b_t, t ≥ 0) given by
    b_t := β_t + ∫_0^t u(s, β_s) ds
is an F^β-Brownian motion stopped at τ, and β is an F^β-semimartingale (local martingale + bounded variation).
Since β is a semimartingale, its (right) local time L^β(t, x) at level x up to time t is defined through Tanaka's formula (see, e.g., [RY], Theo. VI.(1.2)):
    L^β(t, x) = |β_t − x| − |β_0 − x| − ∫_0^t sign(β_s − x) dβ_s,   t ≥ 0,
where sign(x) := 1 if x > 0 and sign(x) := −1 if x ≤ 0.
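For plain Brownian motion the two standard descriptions of local time at 0 can be compared numerically: the (discretized) Tanaka formula above and the occupation-density estimate (2ε)⁻¹ · Leb{s ≤ t : |W_s| < ε}. This is an illustrative sketch for W rather than for β (parameters are arbitrary); both sample means should sit near E|W_1| = √(2/π) ≈ 0.798, since L(1, 0) has the law of |W_1| by Lévy's theorem.

```python
import numpy as np

rng = np.random.default_rng(3)

n_paths, n_steps, T, eps = 1000, 1000, 1.0, 0.05
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)
W_prev = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # left endpoints

# Discretized Tanaka formula: L(1, 0) ~ |W_1| - sum sign(W_s) dW_s,
# with sign(0) := -1 as on the slide.
sign = np.where(W_prev > 0, 1.0, -1.0)
tanaka = np.abs(W[:, -1]) - np.sum(sign * dW, axis=1)

# Occupation-density estimate: (1 / (2 eps)) * time spent in (-eps, eps)
occupation = np.sum(np.abs(W_prev) < eps, axis=1) * dt / (2 * eps)

print(tanaka.mean(), occupation.mean(), np.sqrt(2 / np.pi))
```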
14. Main result and its proof
Outline
1 Objective and Motivation
2 The Information process
3 Main result and its proof
4 Further developments and Bibliography
15. Main result and its proof: Statement of the main result
Let H = (H_t)_{t≥0}, H_t := I_{τ≤t}, be the single-jump process occurring at τ.
Theorem
Suppose that the distribution function F of τ admits a continuous density f with respect to the Lebesgue measure. Then τ is an F^β-totally inaccessible stopping time and the process K = (K_t, t ≥ 0) defined by
    K_t := ∫_0^{t∧τ} [ f(s) / ∫_(s,+∞) √(v / (2πs(v − s))) f(v) dv ] dL^β(s, 0)   (6)
is the compensator of the F^β-submartingale H.
(The F^β-compensator of H is its F^β-dual predictable projection, i.e. the unique F^β-predictable increasing càdlàg process K with K_{0−} = 0 and s.t. H − K is an F^β-martingale.)
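The integrand in (6) can be evaluated numerically for a concrete law of τ. The sketch below (my choice τ ~ Exp(1); quadrature parameters arbitrary) removes the 1/√(v − s) singularity with the substitution v = s + w² and prints the resulting density of dK with respect to dL^β(·, 0).

```python
import numpy as np

def compensator_density(s, f, w_max=10.0, n=20000):
    """Integrand of (6): f(s) / Int_(s,inf) sqrt(v / (2 pi s (v - s))) f(v) dv.

    Substituting v = s + w^2 (dv = 2 w dw) turns the denominator into
    Int_0^inf 2 sqrt((s + w^2) / (2 pi s)) f(s + w^2) dw, which is regular.
    """
    w = np.linspace(0.0, w_max, n)
    vals = 2.0 * np.sqrt((s + w**2) / (2.0 * np.pi * s)) * f(s + w**2)
    denom = np.sum((vals[1:] + vals[:-1]) * np.diff(w)) / 2.0  # trapezoid rule
    return f(s) / denom

f_exp = lambda v: np.exp(-v)          # tau ~ Exp(1): f(v) = e^{-v}, v > 0
for s in (0.25, 0.5, 1.0, 2.0):
    print(s, compensator_density(s, f_exp))
```

Because K charges only the set {β_s = 0} = {s-values where default is imminent}, this density plays the role that a hazard rate plays in classical reduced-form models.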
16. Main result and its proof: Key properties of the local time
Well known: there exists a modification of (L^β(t, x), t ≥ 0, x ∈ R) s.t. (t, x) → L^β(t, x) is continuous in t and càdlàg in x. We prove joint continuity in (t, x).
In particular, (L^β(t, 0), t ≥ 0) is a continuous increasing process; hence the compensator K given by (6) is continuous, which is equivalent to saying that τ is a totally inaccessible stopping time with respect to F^β (see, e.g., [K], Cor. 25.18).
The occupation time formula (see, e.g., [RW], Theo. IV.(45.4)) takes, in our framework, the following form:
    ∫_0^{t∧τ} h(s, β_s) ds = ∫_0^t h(s, β_s) d⟨β, β⟩_s = ∫_R ( ∫_0^t h(s, x) dL^β(s, x) ) dx,
for all t ≥ 0 and all non-negative Borel functions h on R+ × R, P-a.s.
The function x → L^β(t, x) is bounded, for all t ∈ R+, P-a.s. (the bound may depend on t and ω).
Outside a negligible set, the sequence L^β(·, x_n) converges weakly to L^β(·, x) as x_n → x ∈ R.
17. Main result and its proof: Laplacian approach
Recall: let C be an F-adapted integrable increasing càd process, and let L be the càd modification of L_t = E[C_∞ | F_t], t ≥ 0.
Potential generated by C: X_t := L_t − C_t, t ≥ 0. Suppose that X belongs to class (D).
Notation, for h > 0:
    p_h X = (p_h X_t, t ≥ 0) is the càd modification of the supermartingale p_h X_t = E[X_{t+h} | F_t], t, h ≥ 0;
    A^h_t := (1/h) ∫_0^t (X_s − p_h X_s) ds, t ≥ 0 (an integrable increasing process).
Theorem (P.-A. Meyer [M66])
There exists a unique integrable F-predictable increasing process A generating the potential X. For every stopping time η it holds that A^h_η → A_η in σ(L¹, L∞) as h ↓ 0.
In our setting: F = F^β, C_t = H_t = I_{τ≤t}, t ≥ 0, C_∞ = H_∞ = 1; the potential generated is X_t := 1 − H_t = I_{τ>t}, t ≥ 0.
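In the degenerate case of the trivial filtration (my simplification for illustration; the slides work with F^β), the potential is deterministic, X_t = P(τ > t), and Meyer's approximation can be computed explicitly: A^h_t = (1/h) ∫_0^t (X_s − X_{s+h}) ds converges to F(t) as h ↓ 0.

```python
import numpy as np

# tau ~ Exp(1): survival X(t) = P(tau > t) = e^{-t}, so F(t) = 1 - e^{-t}
X = lambda t: np.exp(-t)

def A_h(t, h, n=200_000):
    """Meyer's approximation A^h_t = (1/h) * Int_0^t (X_s - X_{s+h}) ds."""
    s = np.linspace(0.0, t, n)
    vals = (X(s) - X(s + h)) / h
    return np.sum((vals[1:] + vals[:-1]) * (t / (n - 1))) / 2.0  # trapezoid

t = 1.0
for h in (0.5, 0.1, 0.01, 0.001):
    print(h, A_h(t, h))   # approaches F(1) = 1 - e^{-1} ~ 0.632
```

In this toy case A^h_t = ((1 − e^{−h})/h) · F(t), so the convergence to the compensator F(t) is visible directly; in the F^β setting the same limit procedure produces the local-time integral (6).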
18. Main result and its proof: Proof (1/4)
For every h > 0 define the process K^h = (K^h_t, t ≥ 0) as
    K^h_t := (1/h) ∫_0^t ( I_{s<τ} − E[I_{s+h<τ} | F^β_s] ) ds = ∫_0^t (1/h) P(s < τ < s + h | F^β_s) ds,   P-a.s.
The proof then consists of two main parts:
1st part: prove that K_t − K_{t0} is the P-a.s. limit of K^h_t − K^h_{t0} as h ↓ 0, for every 0 < t0 < t.
2nd part: prove that K is indistinguishable from the compensator of H.
Compactness criterion of Dunford-Pettis: (K^{h_n}_t − K^{h_n}_{t0})_{n∈N} relatively compact in the weak topology σ(L¹, L∞) ⇒ it is uniformly integrable.
Thus, P-a.s. convergence of (K^{h_n}_t − K^{h_n}_{t0})_{n∈N} ⇒ L¹-convergence ⇒ convergence in σ(L¹, L∞) to K_t − K_{t0}.
The result follows by uniqueness of the limit in the weak topology σ(L¹, L∞).
19. Main result and its proof: Proof (2/4)
Let us focus on the first part of the proof:
    lim_{h↓0} (K^h_t − K^h_{t0})
    = lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_s^{s+h} ϕ_s(r, β_s) f(r) dr / ∫_s^{+∞} ϕ_s(v, β_s) f(v) dv ] ds
    = lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_s^{s+h} ϕ_s(r, β_s) dr / ∫_s^{+∞} ϕ_s(v, β_s) f(v) dv ] f(s) ds   (7)
    + lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_s^{s+h} ϕ_s(r, β_s) (f(r) − f(s)) dr / ∫_s^{+∞} ϕ_s(v, β_s) f(v) dv ] ds.   (8)
With a procedure analogous to that used in the computation of the limit (7), one can prove that the limit (8) equals 0 P-a.s., using the uniform continuity of f on [t0, t + 1].
20. Main result and its proof: Proof (3/4)
It remains to compute:
    lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_s^{s+h} ϕ_s(r, β_s) dr / ∫_s^{+∞} ϕ_s(v, β_s) f(v) dv ] f(s) ds
    = lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_0^h ϕ_s(s + u, β_s) du / ∫_s^{+∞} ϕ_s(v, β_s) f(v) dv ] f(s) ds
    = lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_0^h p(su/(s + u), β_s, 0) du ] g(s, β_s) f(s) ds
    = lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) [ ∫_0^h p(u, β_s, 0) du ] g(s, β_s) f(s) ds,
P-a.s., where g(s, x) := ( ∫_s^{+∞} ϕ_s(v, x) f(v) dv )^{−1} and the last equality is a consequence of the following (rather technical) result:
    lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) ∫_0^h [ p(su/(s + u), β_s, 0) − p(u, β_s, 0) ] du g(s, β_s) f(s) ds = 0,   P-a.s.
21. Main result and its proof: Proof (4/4)
In the last step, by the occupation time formula:
    lim_{h↓0} ∫_{t0∧τ}^{t∧τ} (1/h) ∫_0^h p(u, β_s, 0) du g(s, β_s) f(s) ds = lim_{h↓0} ∫_{t0∧τ}^{t∧τ} q(h, β_s) g(s, β_s) f(s) ds
    = lim_{h↓0} ∫_{−∞}^{+∞} [ ∫_{t0}^{t} g(s, x) f(s) dL^β(s, x) ] q(h, x) dx,   P-a.s.,
where q(h, x) := (1/h) ∫_0^h p(u, x, 0) du. For every h > 0, q(h, ·) is the probability density function of a probability measure Q_h that converges weakly to the Dirac measure δ_0 at 0 as h ↓ 0. Since the integrand is continuous and bounded, one can pass to the limit and, using the definition of g(s, x), obtain:
    lim_{h↓0} ∫_{−∞}^{+∞} [ ∫_{t0}^{t} g(s, x) f(s) dL^β(s, x) ] q(h, x) dx = ∫_{t0}^{t} g(s, 0) f(s) dL^β(s, 0) = K_t − K_{t0},   P-a.s.
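The weak convergence Q_h → δ_0 used in the last step can be seen numerically: smearing a continuous bounded g against q(h, ·) approaches g(0). For g = cos the exact value is (2/h)(1 − e^{−h/2}), since ∫ cos(x) p(u, x, 0) dx = e^{−u/2}; the grid sizes below are arbitrary choices of mine.

```python
import numpy as np

def smear(g, h, x_max=5.0, m=4001, n=2000):
    """Int g(x) q(h, x) dx with q(h, x) = (1/h) Int_0^h p(u, x, 0) du."""
    x = np.linspace(-x_max, x_max, m).reshape(-1, 1)
    u = np.linspace(h / n, h, n).reshape(1, -1)  # skip u = 0 (heat-kernel singularity)
    q = np.mean(np.exp(-x**2 / (2.0 * u)) / np.sqrt(2.0 * np.pi * u), axis=1)
    vals = g(x.ravel()) * q
    dx = 2.0 * x_max / (m - 1)
    return np.sum((vals[1:] + vals[:-1]) * dx) / 2.0  # trapezoid rule

g = np.cos   # continuous and bounded, g(0) = 1
for h in (1.0, 0.1, 0.01):
    print(h, smear(g, h))   # exact value (2/h)(1 - exp(-h/2)) -> 1 as h -> 0
```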
22. Further developments and Bibliography
Outline
1 Objective and Motivation
2 The Information process
3 Main result and its proof
4 Further developments and Bibliography
23. Further developments and Bibliography: Predictable default time and Enlargement of Filtrations
Non-trivial sufficient conditions for the default time to be a predictable stopping time are considered in another paper, [BH].
Other topics related to Brownian bridges on stochastic intervals include:
the progressive enlargement of a reference filtration F by the filtration F^β generated by the information process;
further applications to Mathematical Finance.
24. Bibliography
[BBE] M. L. Bedini, R. Buckdahn, H.-J. Engelbert. Brownian Bridges on
Random Intervals. Teor. Veroyatnost. i Primenen., 61:1, 129–157,
2016.
[BH] M. L. Bedini, M. Hinz. Credit Default Prediction and Parabolic
Potential Theory. Statistics and Probability Letters (accepted),
2017.
[DM] C. Dellacherie, P.-A. Meyer. Probabilities and Potential.
North-Holland, 1978.
[DSS] D. Duffie, M. Schroder, C. Skiadas. Recursive valuation of
defaultable securities and the timing of resolution of uncertainty.
Annals of Applied Probability, 6: 1075-1090, 1996.
[EJY] R.J. Elliott, M. Jeanblanc and M. Yor. On models of default risk.
Mathematical Finance, 10:179-196, 2000.
[G] K. Giesecke. Default and information. Journal of Economic
Dynamics and Control, 30:2281-2303, 2006.
25. Bibliography
[JP] R. Jarrow and P. Protter. Structural versus Reduced Form Models:
A New Information Based Perspective. Journal of Investment
Management, 2004.
[K] O. Kallenberg. Foundations of Modern Probability. Springer-Verlag,
New York, Second edition, 2002.
[KS] I. Karatzas and S. Shreve. Brownian Motion and Stochastic
Calculus. Springer-Verlag, Berlin, Second edition, 1991.
[M74] R. Merton. On the pricing of Corporate Debt: The Risk Structure
of Interest Rates. Journal of Finance, 3:449-470, 1974.
[M66] P.-A. Meyer. Probability and Potentials. Blaisdell Publishing
Company, London, 1966.
[RY] D. Revuz, M. Yor. Continuous Martingales and Brownian Motion.
Springer-Verlag, Berlin, Third edition, 1999.
26. Bibliography
[RW] L. C. G. Rogers, D. Williams. Diffusions, Markov Processes and
Martingales. Vol. 2: Itô Calculus. Cambridge University Press,
Second edition, 2000.