Information Content of Complex Networks (Hector Zenil)
This short talk, given in Stockholm, Sweden, explains how algorithmic complexity measures, notably Kolmogorov complexity approximated both by lossless compression algorithms and by the Block Decomposition Method (BDM), can characterize graphs and networks by some of their group-theoretic and topological properties, notably graph automorphism group size and the clustering coefficients of complex networks. The method distinguishes between network models such as regular, random, small-world and scale-free.
A Numerical Method for the Evaluation of Kolmogorov Complexity, An alternativ... (Hector Zenil)
We present a novel alternative method (other than using compression algorithms) to approximate the algorithmic complexity of a string by calculating its algorithmic probability and applying Chaitin-Levin's coding theorem.
Towards a stable definition of Algorithmic Randomness (Hector Zenil)
Although information content is invariant up to an additive constant, the range of possible additive constants applicable to programming languages is so large that in practice it plays a major role in the actual evaluation of K(s), the Kolmogorov complexity of a string s. We present a summary of the approach we've developed to overcome this problem: calculating the algorithmic probability of a string and evaluating its algorithmic complexity via the coding theorem, thereby providing a stable framework for Kolmogorov complexity even for short strings. We also show that reasonable formalisms produce reasonable complexity classifications.
Fractal dimension versus Computational Complexity (Hector Zenil)
We investigate connections and tradeoffs between two important complexity measures: fractal dimension and computational (time) complexity. We report results on the space-time diagrams of small Turing machines, with precise mathematical relations and formal conjectures connecting the two measures. The preprint of the paper is available at: http://arxiv.org/abs/1309.1779
Fractal Dimension of Space-time Diagrams and the Runtime Complexity of Small ... (Hector Zenil)
Complexity measures are designed to capture complex behaviour and to quantify how complex that particular behaviour is. If a certain phenomenon is genuinely complex, it does not suddenly become simple merely by translating the phenomenon to a different setting or framework with a different complexity value. It is in this sense that we expect different complexity measures, from possibly entirely different fields, to be related to each other. This talk presents our work on a beautiful connection between the fractal dimension of the space-time diagrams of Turing machines and their time complexity. Presented at Machines, Computations and Universality (MCU) 2013, Zurich, Switzerland.
Algorithmic Information Theory and Computational Biology (Hector Zenil)
I present cutting-edge concepts and tools drawn from algorithmic information theory (AIT) for new-generation genetic sequencing, network biology and bioinformatics in general. AIT is the most advanced mathematical theory of information, formally characterising the concepts of, and the differences between, simplicity, randomness and structure. Measures from AIT will empower computational medicine and systems biology to deal with big data, providing sophisticated analytics and a powerful new framework of understanding.
A NEW PARALLEL ALGORITHM FOR COMPUTING MINIMUM SPANNING TREE (ijscmc)
Computing the minimum spanning tree of a graph is one of the fundamental computational problems. In this paper, we present a new parallel algorithm for computing the minimum spanning tree of an undirected weighted graph with n vertices and m edges. The algorithm uses clustering techniques to reduce the number of processors by a fraction 1/f(n) and the parallel work by a fraction O(1/log(f(n))), where f(n) is an arbitrary function. In the case f(n) = 1, the algorithm runs in logarithmic time and uses superlinear work on the EREW PRAM model. In general, the proposed algorithm is the simplest of its kind.
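Since the abstract does not spell out the clustered parallel construction, here is a minimal sequential Borůvka MST sketch in Python for reference; Borůvka rounds are the usual starting point for parallel MST algorithms, but the paper's 1/f(n) processor-reduction scheme is not reproduced here.

# Minimal sequential Boruvka MST. Graph: list of (weight, u, v), vertices 0..n-1.
def boruvka_mst(n, edges):
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        cheapest = [None] * n         # cheapest outgoing edge per component
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                if cheapest[ru] is None or w < cheapest[ru][0]:
                    cheapest[ru] = (w, u, v)
                if cheapest[rv] is None or w < cheapest[rv][0]:
                    cheapest[rv] = (w, u, v)
        progressed = False
        for e in cheapest:
            if e is not None:
                w, u, v = e
                ru, rv = find(u), find(v)
                if ru != rv:          # merge the two components via this edge
                    parent[ru] = rv
                    mst.append(e)
                    components -= 1
                    progressed = True
        if not progressed:            # disconnected graph: stop
            break
    return mst

print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3)]))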
Presentation of my NSERC-USRA funded summer research project given at the Canadian Undergraduate Mathematics Conference (CUMC) 2014.
Please refer to the project site: http://jessebett.com/Radial-Basis-Function-USRA/
Image Noise Level Estimation via Principal Component Analysis (IJMER)
Fuzzy clustering algorithms cannot obtain a good clustering effect when the sample characteristics are not obvious, and they require the number of clusters to be determined in advance. For this reason, this paper proposes an adaptive fuzzy kernel clustering algorithm. The algorithm first uses an adaptive clustering-number function to calculate the optimal number of clusters; the samples of the input space are then mapped to a high-dimensional feature space using a Gaussian kernel and clustered in that feature space. Matlab simulation results confirm that the algorithm's performance is greatly improved over classical clustering algorithms, with faster convergence and more accurate clustering results.
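For orientation, a minimal kernelized fuzzy c-means sketch in Python with a Gaussian kernel, which is the kind of algorithm the abstract describes; the paper's adaptive function for choosing the number of clusters is not reproduced, so the cluster count c is fixed here.

import numpy as np

def kfcm(X, c, m=2.0, sigma=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]   # initial cluster centers
    for _ in range(iters):
        K = np.exp(-np.linalg.norm(X[:, None] - V[None], axis=2) ** 2
                   / (2 * sigma ** 2))            # Gaussian kernel K(x_i, v_k)
        D = 2 * (1 - K) + 1e-12                   # kernel-induced squared distance
        inv = D ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)  # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]    # center update in input space
    return U, V

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, V = kfcm(X, c=2)
print(V.round(2))   # two centers, near (0, 0) and (3, 3)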
We introduce the variational Gaussian process (VGP), a Bayesian nonparametric model which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random non-linear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity.
A novel approach for high speed convolution of finite and infinite length seq... (eSAT Journals)
Abstract
Digital signal processing, digital control systems, telecommunication, and audio and video processing are important applications in VLSI. The design and implementation of DSP systems, with advances in VLSI, demands low power, energy efficiency, portability, reliability and miniaturization. In digital signal processing, linear time-invariant systems are an important sub-class of systems and are the heart and soul of DSP.
In many application areas, linear and circular convolution are fundamental computations, and convolution with very long sequences is often required. Discrete linear convolution of finite-length and infinite-length sequences can be computed using circular convolution via the Overlap-Add and Overlap-Save methods. In real-time signal processing, circular convolution is much more effective than linear convolution: it is simpler to compute and produces fewer output samples, and linear convolution can in turn be computed from circular convolution. In this paper, both linear and circular convolutions are performed using a Vedic multiplier architecture based on the vertical-and-crosswise algorithm of Urdhva-Tiryagbhyam. The implementation uses a hierarchical design approach, which leads to improved computational speed, reduced power, and minimized hardware resources and area. Coding is done in Verilog HDL; simulation and synthesis are performed using a Xilinx FPGA.
Keywords: Linear and Circular convolution, Urdhva-Tiryagbhyam, carry save multiplier, Overlap-Add/Save, Verilog HDL.
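To make the Overlap-Add idea concrete, here is a small floating-point Python sketch of block convolution via the FFT; the Vedic multiplier hardware is a Verilog concern and is not modelled here.

import numpy as np

# Overlap-Add linear convolution of a long input with a finite impulse response.
def overlap_add(x, h, block=64):
    M = len(h)
    nfft = block + M - 1
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        Y = np.fft.rfft(seg, nfft) * H        # circular convolution of one block
        y[start:start + nfft] += np.fft.irfft(Y, nfft)[:len(seg) + M - 1]
    return y

x = np.random.default_rng(0).normal(size=1000)
h = np.array([0.25, 0.5, 0.25])
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # True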
Digital Signal Processing [ECEG-3171] - Ch1_L02 (Rediet Moges)
This Digital Signal Processing lecture material is the property of the author (Rediet M.). It is not for publication, nor is it to be sold or reproduced.
A STATISTICAL COMPARATIVE STUDY OF SOME SORTING ALGORITHMS (ijfcstjournal)
This research paper is a statistical comparative study of a few average-case asymptotically optimal sorting algorithms, namely Quick sort, Heap sort and K-sort. The three sorting algorithms, all with the same average-case complexity, have been compared by obtaining the corresponding statistical bounds while subjecting these procedures to randomly generated data from some standard discrete and continuous probability distributions, such as the Binomial distribution, the discrete and continuous Uniform distributions, and the Poisson distribution. The statistical analysis is well supplemented by a parameterized complexity analysis.
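In the same spirit, a small Python timing harness comparing quicksort and heapsort over data from two of the named distributions; K-sort (a quicksort variant from the paper) and the statistical-bound machinery are not reproduced here.

import heapq, random, statistics, time

def quicksort(a):
    if len(a) <= 1:
        return a
    p = a[len(a) // 2]
    return (quicksort([x for x in a if x < p])
            + [x for x in a if x == p]
            + quicksort([x for x in a if x > p]))

def heapsort(a):
    h = list(a)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

def timed(sort, data, reps=5):
    ts = []
    for _ in range(reps):
        t0 = time.perf_counter()
        sort(list(data))
        ts.append(time.perf_counter() - t0)
    return statistics.mean(ts)

rng = random.Random(0)
dists = {
    "uniform":  [rng.random() for _ in range(20000)],
    "binomial": [sum(rng.random() < 0.5 for _ in range(30)) for _ in range(20000)],
}
for name, data in dists.items():
    print(name, "quick:", timed(quicksort, data), "heap:", timed(heapsort, data))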
SEQUENTIAL CLUSTERING-BASED EVENT DETECTION FOR NONINTRUSIVE LOAD MONITORING (cscpconf)
The problem of change-point detection has been well studied and adopted in many signal processing applications. In such applications, the informative segments of the signal are the stationary ones before and after the change-point. However, for some novel signal processing and machine learning applications such as Non-Intrusive Load Monitoring (NILM), the information contained in the non-stationary transient intervals is of equal or even more importance to the recognition process. In this paper, we introduce a novel clustering-based sequential detection of abrupt changes in an aggregate electricity consumption profile with the accurate decomposition of the input signal into stationary and non-stationary segments. We also introduce various event models in the context of clustering analysis. The proposed algorithm is applied to building-level energy profiles with promising results for the residential BLUED power dataset.
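As a toy stand-in for the paper's clustering-based detector (whose event models and BLUED evaluation are not reproduced here), a minimal sequential event detector over an aggregate power trace: flag a change-point when the mean of a short recent window departs from the preceding window's mean by more than a threshold.

import numpy as np

def detect_events(power, w=20, thresh=50.0):
    events = []
    for t in range(2 * w, len(power)):
        before = power[t - 2 * w:t - w].mean()
        after = power[t - w:t].mean()
        # refractory period of 2*w avoids re-flagging the same transition
        if abs(after - before) > thresh and (not events or t - events[-1] > 2 * w):
            events.append(t)
    return events

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(100, 5, 300),    # baseline load
                         rng.normal(1600, 5, 200),   # appliance switches on
                         rng.normal(100, 5, 300)])   # appliance switches off
print(detect_events(signal))   # two events, near t=300 and t=500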
Bayesian inference for mixed-effects models driven by SDEs and other stochast... (Umberto Picchini)
An important, and well studied, class of stochastic models is given by stochastic differential equations (SDEs). In this talk, we consider Bayesian inference based on measurements from several individuals, to provide inference at the "population level" using mixed-effects modelling. We consider the case where dynamics are expressed via SDEs or other stochastic (Markovian) models. Stochastic differential equation mixed-effects models (SDEMEMs) are flexible hierarchical models that account for (i) the intrinsic random variability in the latent states dynamics, as well as (ii) the variability between individuals, and also (iii) account for measurement error. This flexibility gives rise to methodological and computational difficulties.
Fully Bayesian inference for nonlinear SDEMEMs is complicated by the typical intractability of the observed data likelihood, which motivates the use of sampling-based approaches such as Markov chain Monte Carlo. A Gibbs sampler is proposed to target the marginal posterior of all parameters of interest. The algorithm is made computationally efficient through careful use of blocking strategies, particle filters (sequential Monte Carlo) and correlated pseudo-marginal approaches. The resulting methodology is flexible and general, and is able to deal with a large class of nonlinear SDEMEMs [1]. In more recent work [2], we also explored ways to make inference even more scalable to an increasing number of individuals, while also dealing with state-space models driven by stochastic dynamic models other than SDEs, e.g. Markov jump processes and the nonlinear solvers typically used in systems biology.
[1] S. Wiqvist, A. Golightly, AT McLean, U. Picchini (2020). Efficient inference for stochastic differential mixed-effects models using correlated particle pseudo-marginal algorithms, CSDA, https://doi.org/10.1016/j.csda.2020.107151
[2] S. Persson, N. Welkenhuysen, S. Shashkova, S. Wiqvist, P. Reith, G. W. Schmidt, U. Picchini, M. Cvijovic (2021). PEPSDI: Scalable and flexible inference framework for stochastic dynamic single-cell models, bioRxiv doi:10.1101/2021.07.01.450748.
The Huxley equation is solved numerically using two finite difference methods: an explicit scheme and the Crank-Nicolson scheme. A comparison of the two methods shows that the explicit scheme is easier to implement and converges faster, while the Crank-Nicolson scheme is more accurate. In addition, the stability of the two schemes is investigated using the Fourier (von Neumann) method. The analysis shows that the first scheme is conditionally stable, requiring r ≤ 2 − aβ∆t with ∆t ≤ 2(∆x)², while the second scheme is unconditionally stable.
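A minimal sketch of such an explicit (FTCS) scheme in Python, assuming the common form of the Huxley equation u_t = u_xx + βu(1−u)(u−a); the paper's exact coefficients and stability constants may differ.

import numpy as np

def huxley_explicit(a=0.5, beta=1.0, L=20.0, nx=200, dt=1e-3, steps=5000):
    dx = L / nx
    r = dt / dx**2                         # must be small enough for stability
    x = np.arange(nx) * dx
    u = 1.0 / (1.0 + np.exp(x - L / 4))    # smooth front as initial condition
    for _ in range(steps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)   # periodic boundaries
        u = u + r * lap + dt * beta * u * (1 - u) * (u - a)
    return x, u

x, u = huxley_explicit()
print(u.min(), u.max())   # solution stays bounded in [0, 1] for a stable r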
On the Cryptographic Measures and Chaotic Dynamical Complexity Measures (ijcisjournal)
The relationship between cryptographic measures and chaotic dynamical complexity measures is studied in this paper, including linear complexity versus measure entropy and nonlinear complexity versus source entropy. Moreover, a method is presented that, by virtue of this relationship, guarantees the complexity of chaos-based pseudorandom sequences. This is important to the development of chaos-based cryptography.
Time alignment techniques for experimental sensor data (IJCSES Journal)
Experimental data is subject to data loss, which presents a challenge for representing the data on a proper time scale. Additionally, data from separate measurement systems need to be aligned in order to use the data cooperatively. Given the need for accurate time alignment, various practical techniques are presented, along with an illustrative example detailing each step of the time-alignment procedure for actual experimental data from an Unmanned Aerial Vehicle (UAV). Some example MATLAB code is also provided.
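One common alignment technique is to estimate the constant time offset between two sensors as the lag maximizing their cross-correlation; a Python sketch of that idea follows (the paper's MATLAB walkthrough, UAV data and dropout handling are not reproduced here).

import numpy as np

def estimate_offset(a, b, fs):
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(a, b, mode="full")
    lag = corr.argmax() - (len(b) - 1)     # samples by which a lags b
    return lag / fs                        # offset in seconds

fs = 100.0                                 # 100 Hz sampling
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 0.7 * t)
shifted = np.roll(sig, 37)                 # sensor B delayed by 37 samples
print(estimate_offset(shifted, sig, fs))   # ~0.37 s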
Continuum Modeling and Control of Large Nonuniform Networks (Yang Zhang)
Presented at The 49th Annual Allerton Conference on Communication, Control, and Computing, 2011
Abstract—Recent research has shown that some Markov chains modeling networks converge to continuum limits, which are solutions of partial differential equations (PDEs), as the number of the network nodes approaches infinity. Hence we can approximate such large networks by PDEs. However, the previous results were limited to uniform immobile networks with a fixed transmission rule. In this paper we first extend the analysis to uniform networks with more general transmission rules. Then through location transformations we derive the continuum limits of nonuniform and possibly mobile networks. Finally, by comparing the continuum limits of corresponding nonuniform and uniform networks, we develop a method to control the transmissions in nonuniform and mobile networks so that the continuum limit is invariant under node locations, and hence mobility. This enables nonuniform and mobile networks to maintain stable global characteristics in the presence of varying node locations.
THE FOURIER TRANSFORM FOR SATELLITE IMAGE COMPRESSION (cscpconf)
The need to transmit or store satellite images is growing rapidly with the development of modern communications and new imaging systems. The goal of compression is to facilitate the storage and transmission of large images on the ground with high compression ratios and minimum distortion. In this work, we present a new coding scheme for satellite images. First, the image is loaded and a fast Fourier transform (FFT) is applied. The FFT output then undergoes scalar quantization (SQ), and the quantized results are encoded using entropy coding. This approach has been tested on a satellite image and the Lena picture. After decompression, the images were reconstructed faithfully, and the memory space required for storage was reduced by more than 80%.
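A small Python sketch of that coding chain (2-D FFT, uniform scalar quantization, inverse FFT); the entropy-coding stage is only approximated here by counting the fraction of nonzero quantized coefficients, so the numbers are illustrative rather than the paper's results.

import numpy as np

def fft_compress(img, step=200.0):
    F = np.fft.fft2(img)
    q = np.round(F / step)                  # scalar quantization (SQ)
    kept = np.count_nonzero(q) / q.size     # coarse proxy for coded size
    rec = np.real(np.fft.ifft2(q * step))   # dequantize + inverse FFT
    return rec, kept

rng = np.random.default_rng(0)
img = np.kron(rng.integers(0, 255, (16, 16)).astype(float), np.ones((16, 16)))
rec, kept = fft_compress(img)
rmse = np.sqrt(np.mean((img - rec) ** 2))
print(f"nonzero coeffs: {kept:.1%}, RMSE: {rmse:.1f}")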
BLIND SIGNATURE SCHEME BASED ON CHEBYSHEV POLYNOMIALS (IJNSA Journal)
A blind signature scheme is a cryptographic protocol for obtaining a valid signature for a message from a signer, such that the signer's view of the protocol cannot be linked to the resulting message-signature pair. This paper presents a blind signature scheme using Chebyshev polynomials. The security of the given scheme depends upon the intractability of the integer factorization problem and of discrete logarithms over Chebyshev polynomials.
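The protocol itself is not described in enough detail here to implement, but the algebraic fact Chebyshev-based schemes rest on is the semigroup property T_r(T_s(x)) = T_{rs}(x), which plays the role that modular exponentiation plays in classical schemes. A minimal numerical check in Python:

import numpy as np

def T(n, x):
    # Chebyshev polynomial of the first kind via the trigonometric form,
    # valid for x in [-1, 1]: T_n(x) = cos(n * arccos(x)).
    return np.cos(n * np.arccos(x))

x = 0.3
r, s = 7, 11
print(T(r, T(s, x)))   # T_7(T_11(0.3))
print(T(r * s, x))     # T_77(0.3): same value, up to float error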
This PDF is about schizophrenia.
For more details, visit the SELF-EXPLANATORY channel on YouTube:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been achieved using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism maintains physiological relevance and provides insights into the progression of disease, response to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization and tumor metastasis in exceptional detail. This webinar also gives an overview of IVM as used in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo, and allowing the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io's surface using adaptive optics at visible wavelengths.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
This presentation gives a brief overview of the structural and functional attributes of nucleotides and the structure and function of genetic materials, along with the impact of UV rays and pH upon them.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
What are greenhouse gases and how many gases affect the Earth? (moosaasad1975)
What greenhouse gases are, how they affect the Earth and its environment, what the future of the environment and the Earth is, and how the weather and the climate are affected.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1" circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R_1/2 ∼ 50-200 pc, stellar masses of M⋆ ∼ 10^7-10^8 M⊙, and star-formation rates of SFR ∼ 0.1-1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function, without binning in redshift or luminosity, that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... (Ana Luísa Pinho)
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Graph Spectra through Network Complexity Measures: Information Content of Eigenvalues
1. Graph Spectra through Network Complexity Measures: Information Content of Eigenvalues
Hector Zenil
(joint work with Narsis Kiani and Jesper Tegnér)
Unit of Computational Medicine, Karolinska Institutet
@ Department of Mathematics, Stockholm University
2. Outline:
1 Estimating Kolmogorov complexity
2 n-dimensional complexity
3 Graph Algorithmic Probability and Kolmogorov complexity of networks
4 Applications to complex networks and graph spectra
Material mostly drawn from:
1 joint with Soler et al. Computability (2013). [1]
2 joint with Gauvrit et al. Behavior Research Methods (2013). [3]
3 Zenil et al. Physica A (2014). [4]
4 joint with Soler et al. PLoS ONE (2014). [6]
5 Zenil, Kiani and Tegnér, LNCS 9044 (2015). [2]
6 Zenil and Tegnér, Symmetry (forthcoming).
7 Zenil, Kiani and Tegnér, Seminars in Cell and Developmental Biology (in revision).
3. Main goal
Main goal throughout this talk: to study properties of graphs and networks with measures from information theory and algorithmic complexity.
Table: Numerical calculations of (mostly) uncomputable functions:
Busy Beaver problem: upper semi-computable
Kolmogorov-Chaitin complexity: lower semi-computable
Algorithmic Probability (Solomonoff-Levin): upper semi-computable
Bennett's Logical Depth: uncomputable
Lower semi-computable: can be approximated from above.
Upper semi-computable: can be approximated from below.
4. The basic unit in Theoretical Computer Science
The cell (the smallest unit of life) is to Biology what the Turing machine is to Theoretical Computer Science.
Finite state diagram
[A.M. Turing (1936)]
5. One machine for everything
Computation (Turing-)universality
(a) Turing proves that a machine M with input x can be encoded as an input M(x) for a machine U such that if M(x) = y then U(M(x)) = y, for any Turing machine M.
You do not need a computer for each different task, only one!
There is no distinction between software/hardware or data/program.
Together with Church's thesis:
Church-(Turing)'s thesis
(b) Every effectively computable function is computable by a Turing machine.
Together, (a) and (b) suggest that:
Anything can be programmed/simulated/emulated by a universal Turing machine.
6. The undecidability of the Halting Problem
The existence of the universal Turing machine U brings a fundamental Gödel-type contradiction about the power of U (or of any universal machine):
Let's say we want to know whether a machine M will halt for input x.
Assumption:
We can program U in such a way that if M(x) halts then U(M(x)) = 0, and otherwise U(M(x)) = 1. So U is a (halting) decider.
Contradiction:
Let M(x) = U(x). Then U(U(x)) = 0 if and only if U(x) = 1, and U(U(x)) = 1 if and only if U(x) = 0.
Therefore the assumption that we can know, in general, whether a Turing machine halts is not true.
There is also a non-constructive proof using Cantor's diagonalisation method.
7. Computational irreducibility
(1) Most fundamental irreducibility:
If M halts for input x, you have to run either M(x) or U(M(x)) to know it; but if M does not halt, neither running M(x) nor running U(M(x)) will tell you that it does not halt.
Most uncomputability results are of this type: you can know in one direction but not the other (e.g. whether a string is random, as we will see).
(2) Secondary irreducibility (corollary):
U(M(x)) can only produce a constant-factor slowdown over M(x), not a computational speed-up (connected to time complexity and P = NP-type results), especially for (1). In other words, O(U(M(x))) ∼ O(M(x)), i.e. O(U(M(x))) = c × O(M(x)), with c a constant.
(2) is believed to be more pervasive than what (1) implies.
8. Complexity and information content of strings
Example (3 strings of length 40)
a: 1111111111111111111111111111111111111111
b: 11001010110010010100111000101010100101011
c: 0101010101010101010101010101010101010101
According to Shannon (1948):
(a) has minimum Entropy (only one micro-state).
(b) has maximum Entropy (two micro-states with the same frequency each).
(c) also has maximum Entropy! (two micro-states with the same frequency each).
Shannon Entropy inherits from classical probability
Shannon Entropy suffers from similar limitations: strings (b) and (c) have the same Shannon Entropy (same number of 0s and 1s), but they appear of very different natures to us.
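A quick numerical check of this slide's point, as a small Python sketch; string (b) is taken verbatim from the slide.

from collections import Counter
from math import log2

# Shannon entropy (bits per symbol): (b) and (c) both have balanced 0s and
# 1s, so Entropy cannot tell the random-looking string from the periodic one.
def entropy(s):
    n = len(s)
    return -sum(c / n * log2(c / n) for c in Counter(s).values())

a = "1" * 40
b = "11001010110010010100111000101010100101011"
c = "01" * 20
print(entropy(a), entropy(b), entropy(c))   # 0.0, ~1.0, 1.0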
9. Statistical v algorithmic
Entropy rate can only detect statistical regularities, not algorithmic correlation:
Thue-Morse sequence: 01101001100101101001011001101001
Segment of π in binary: 0010010000111111011010101000100
Definition
Kolmogorov(-Chaitin) complexity (1965, 1966):
K_U(s) = min{|p| : U(p) = s}
Algorithmic Randomness (also Martin-Löf and Schnorr)
A string s is random if K(s) (in bits) ∼ |s|.
Correlation versus causation
Shannon Entropy is to correlation what Kolmogorov complexity is to causation!
10. Example of an evaluation of K
The string 01010101...01 can be produced by the following program:
Program A:
1: n := 0
2: Print n
3: n := n + 1 mod 2
4: Goto 2
The length of A (in bits) is an upper bound on K(010101...01) (plus the halting condition).
Semi-computability of K
Exhibiting a short version of a string is a sufficient test for non-randomness, but the lack of a short description (program) does not amount to a sufficient test for randomness.
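A Python rendition of Program A with an explicit halting condition; in this language, the program's length in bits is likewise an upper bound on K of its output.

def program_a(k):
    n, out = 0, []
    for _ in range(k):        # halting condition: stop after k symbols
        out.append(str(n))
        n = (n + 1) % 2       # n := n + 1 mod 2
    return "".join(out)

s = program_a(40)
print(s)   # 0101...01: 40 symbols from a constant-size program plus ~log2(k) bits for k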
11. The founding theorem of K complexity: Invariance to choice of U
Do we measure K with programming language or universal TM U1 or U2?
|K_U1(s) − K_U2(s)| < c_U1,U2
This is not relevant in the limit: the difference is a constant whose relative contribution vanishes the longer the string.
What matters in practice are the rate of convergence of K and the behaviour of c with respect to |s|:
The Invariance theorem in practice is a negative result
The constant involved can be arbitrarily large, and the theorem says nothing about the convergence. Any method for estimating K is subject to it.
12. Compression is Entropy rate, not K
Actual implementations of lossless compression have two main drawbacks and pitfalls:
Lossless compression as entropy-rate estimation
Actual implementations of lossless compression algorithms (e.g. Lempel-Ziv, BZip2, PNG) look for statistical regularities, i.e. repetitions in a sliding fixed-length window of size w; hence they are entropy-rate estimators up to block (micro-state) length w. Their success rests on only one side of the non-randomness test, i.e. low entropy = low K.
Compressing short strings
The compressor also adds the decompression instructions to the file. Any string shorter than, say, 100 bits is impossible to compress further, and no meaningful ranking can be obtained by compressing such strings (and 100 bits is long for strings in structural molecular biology).
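The short-string pitfall is easy to see in practice with zlib (a Lempel-Ziv based DEFLATE compressor): for a trivially regular short string, the header and decompression instructions dominate, so the "compressed" form is no shorter than the input.

import zlib

# compressed size vs. input size for 0101... strings of growing length
for s in [b"01" * 5, b"01" * 50, b"01" * 5000]:
    print(len(s), "->", len(zlib.compress(s)))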
13. Alternative to lossless compression algorithms
Figure (originally Émile Borel's infinite monkey theorem): a monkey on a computer produces more structure by chance than a monkey on a typewriter.
14. Algorithmic Probability (semi-measure, Levin's Universal Distribution)
Definition
The classical probability of production of a bit string s among all 2^n bit strings of size n (classical monkey theorem):
Pr(s) = 1/2^n    (1)
Definition
Let U be a (prefix-free, by Kraft's inequality) universal Turing machine and p a program that produces s running on U; then
m(s) = Σ_{p : U(p)=s} 1/2^|p| < 1    (2)
15. The algorithmic Coding theorem
Connection to K!
The greatest contributor in the definition of m(s) is the shortest program p, i.e. K(s).
The algorithmic Coding theorem describes the reverse connection between K(s) and m(s):
Theorem
K(s) = −log2(m(s)) + O(1)    (3)
Frequency and complexity are related
If a string s is produced by many programs then there is also a short program that produces s (Cover & Thomas, 1991).
[Solomonoff (1964); Levin (1974); Chaitin (1976)]
16. The Coding Theorem Method (CTM) flow chart
Enumerate and run every Turing machine in the (n, m) space of n-state, m-symbol machines, for increasing n and m, using Busy Beaver values to determine halting times where available and an informed runtime cutoff otherwise (see e.g. Calude & Stay, Most programs stop quickly or never halt, 2006).
[Soler-Toscano, Zenil et al., PLoS ONE (2014)]
17. Changes in computational formalism
[H. Zenil and J-P. Delahaye, On the Algorithmic Nature of the World; 2010]
18. Elementary Cellular Automata
An elementary cellular automaton (ECA) is defined by a local function f : {0, 1}^3 → {0, 1}.
Figure: space-time evolution of a cellular automaton (ECA rule 30).
f maps the state of a cell and its two immediate neighbours (range = 1) to a new cell state: f : (r_-1, r_0, r_+1) → r_0'. Cells are updated synchronously according to f over all cells in a row.
[Wolfram (1994)]
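The ECA update is a few lines of Python; rule 30 is the example in the figure above.

# Each new cell is looked up from its 3-bit neighbourhood via the rule's
# binary expansion, with periodic boundary conditions.
def eca_run(rule, width=63, steps=30):
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1                   # single seed cell
    history = [row]
    for _ in range(steps):
        row = [table[(row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]
        history.append(row)
    return history

for r in eca_run(30, steps=15):
    print("".join(".#"[c] for c in r))    # the familiar rule-30 triangle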
19. Convergence in ECA classification (CTM v Compress)
Scatterplot of ECA classification: CTM (x-axis) versus Compress (y-axis).
[Soler-Toscano et al., Computability; 2013]
20. Part II
GRAPH ENTROPY, GRAPH ALGORITHMIC PROBABILITY AND GRAPH KOLMOGOROV COMPLEXITY
21. Graph Entropy definitions are not robust
Several definitions of Graph Entropy have been proposed (e.g. from molecular biology), e.g.:
A complete graph has highest entropy H if H is defined as ranging over all possible subgraphs up to the graph size, i.e.
H(G) = −Σ_i^|G| P(G_i) log2 P(G_i)
where G_i is a subgraph of increasing size i in G. However, taking the adjacency matrix as a single outcome,
H(Adj(G)) = −P(Adj(G)) log2 P(Adj(G)) = 0 (!)
(and the same holds for the adjacency matrices of all the subgraphs, so the sum would be 0 too!)
Graph Entropy
Complete and disconnected graphs then have maximal and minimal entropy, respectively. Alternative definitions include, for example, the number of bifurcations encountered when traversing the graph starting from a random node, etc.
22. Graph Kolmogorov complexity (Physica A)
Unlike Graph Entropy, Graph Kolmogorov complexity is very robust:
complete graph: K ∼ log(|N|)
E-R random graph: K ∼ |E|
M. Gell-Mann (Nobel Prize 1969) thought that any reasonable measure of the complexity of graphs should assign minimal complexity to both completely disconnected and completely connected graphs (The Quark and the Jaguar, 1994).
Graph Kolmogorov complexity
Complete and disconnected graphs with |N| nodes have low (algorithmic) information content. In a random graph, every edge e ∈ E requires some information to be described. In both cases K(G) ∼ K(Adj(G))!
23. Numerical estimation of K(G)
A labelled graph is uniquely represented by its adjacency matrix. So the question is: what is the Kolmogorov complexity of an adjacency matrix?
Figure: two-dimensional Turing machines, also known as turmites (Langton, Physica D, 1986).
We will provide the definition of Kolmogorov complexity for unlabelled graphs later.
[Zenil et al., Physica A, 2014]
24. An Information-theoretic Divide-and-Conquer Algorithm!
The Block Decomposition Method uses the Coding Theorem method. Formally, we say that an object c has (2D) Kolmogorov complexity:
K_2D^{d×d}(c) = Σ_{(r_u, n_u) ∈ c_{d×d}} K_2D(r_u) + log2(n_u)    (4)
where c_{d×d} represents the set with elements (r_u, n_u), obtained by decomposing the object into (overlapping) blocks of size d × d with boundary conditions. In each (r_u, n_u) pair, r_u is one such square and n_u its multiplicity.
[Zenil et al., Two-Dimensional Kolmogorov Complexity and Validation of the Coding Theorem Method by Compressibility (2012)]
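A Python sketch of Eq. (4): split the adjacency matrix into d × d blocks (a non-overlapping partition here, for simplicity), look each block up in a precomputed table of CTM complexity estimates, and sum CTM(block) + log2(multiplicity). The CTM_TABLE below is a hypothetical stub; real values come from the CTM computation (see http://www.complexitycalculator.com).

import itertools
from collections import Counter
from math import log2
import numpy as np

def bdm2d(matrix, ctm_table, d=2):
    m = np.asarray(matrix, dtype=np.uint8)
    blocks = [m[i:i + d, j:j + d].tobytes()
              for i in range(0, m.shape[0] - d + 1, d)
              for j in range(0, m.shape[1] - d + 1, d)]
    # sum over distinct blocks of CTM value plus log2 of the block's count
    return sum(ctm_table[b] + log2(n) for b, n in Counter(blocks).items())

CTM_TABLE = {bytes(bits): 3.0 + sum(bits)   # hypothetical stub, NOT real CTM values
             for bits in itertools.product([0, 1], repeat=4)}

adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])
print(bdm2d(adj, CTM_TABLE))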
25. Classification of ECA by BDM (= Km) and Compress
Representative ECAs sorted by BDM (top row) and Compress (bottom row).
[H. Zenil, F. Soler-Toscano, J.-P. Delahaye and N. Gauvrit, Two-Dimensional
Kolmogorov Complexity and Validation of the Coding Theorem Method by
Compressibility (2012)]
26. Complementary methods for different object lengths
The methods coexist and complement each other for different string lengths (transitions are also smooth).
method | short strings (< 100 bits) | long strings (> 100 bits) | scalability | domain
Lossless compression | × | ✓ | O(n) | H
Coding Theorem method (CTM) | ✓ | × | O(exp) | K
CTM + Block Decomposition method (BDM) | ✓ | ✓ | O(n) | K → H
Table: H stands for Shannon Entropy and K for Kolmogorov complexity. BDM can therefore be taken as an improvement to (Block) Entropy rate for a fixed block size. For CTM: http://www.complexitycalculator.com
27. Graph algorithmic probability
Works on directed and undirected graphs.
Torus boundary conditions solve the boundary problem (see the sketch below).
Overlapping submatrices avoid the lack of permutation invariance but lead
to overfitting.
The best option is to recursively divide the matrix into square
submatrices for which exact complexity estimates are known.
[Zenil et al. Physica A (2014)]
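A rough sketch of the torus option (an illustration under stated assumptions, not the paper's exact code): split the adjacency matrix into non-overlapping d × d blocks, wrapping around the borders so every block is full-size; the resulting multiset plugs directly into the bdm2d sketch given with slide 24 above.

```python
import numpy as np

def decompose(adj, d=4):
    """Split a square adjacency matrix into non-overlapping d x d
    blocks under torus boundary conditions (wrap-around padding),
    returning {block-bytes: multiplicity} ready for Eq. (4)."""
    adj = np.asarray(adj, dtype=np.uint8)
    pad = (-adj.shape[0]) % d            # wrap-around padding to a multiple of d
    m = np.pad(adj, ((0, pad), (0, pad)), mode="wrap")
    blocks = {}
    for i in range(0, m.shape[0], d):
        for j in range(0, m.shape[1], d):
            key = m[i:i + d, j:j + d].tobytes()
            blocks[key] = blocks.get(key, 0) + 1
    return blocks
```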
28. K and graph automorphism group (Physica A)
Figure: Left: An adjacency matrix is not a graph invariant, yet isomorphic
graphs have similar K. Right: Graphs with large automorphism group size
(group symmetry) have lower K.
This correlation suggests that the complexity of an unlabelled graph is
captured by the complexity of its adjacency matrix (which is a labelled
object). Indeed, in Zenil et al. (LNCS) we show that the complexity of a
labelled graph is a good approximation to its unlabelled graph complexity.
[Zenil et al. Physica A (2014)]
29. Unlabelled Graph Complexity
The proof sketch of the labelled graph complexity ∼ unlabelled graph
complexity uses the fact that there is an algorithm (e.g. brute force) of
finite (small) size that produces any isomorphic graph from any other.
Yet, one can define Graph unlabelled Kolmogorov complexity as follows:
Definition
Unlabelled Graph Kolmogorov Complexity: Let Adj(G) be the adjacency
matrix of G and Aut(G) its automorphism group; then
K(G) = min{K(A) | A ∈ A(Aut(G))}
where A(Aut(G)) is the set of adjacency matrices of all graphs isomorphic
to G (its relabelings).
(The associated decision problem is believed to be in NP but not
NP-complete.)
[Zenil, Kiani and Tegnér (forthcoming)]
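A brute-force sketch of this definition (my illustration, with zlib-compressed length as a crude stand-in for the BDM estimator; exponential in the number of vertices, so viable for tiny graphs only):

```python
import itertools
import zlib
import numpy as np

def unlabelled_K(adj, K=None):
    """Minimize a complexity estimator over the adjacency matrices of
    all vertex relabelings of G, per the definition above."""
    adj = np.asarray(adj, dtype=np.uint8)
    if K is None:
        # crude stand-in; in the talk's setting one would plug in BDM
        K = lambda a: len(zlib.compress(np.packbits(a).tobytes()))
    n = adj.shape[0]
    return min(K(adj[np.ix_(p, p)])
               for p in map(list, itertools.permutations(range(n))))
```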
30. Graph automorphisms and algorithmic complexity by BDM
Classifying (and clustering) ∼ 250 graphs (no Aut(G) correction) with
different topological properties by K (BDM):
[Zenil et al. Physica A (2014)]
31. Graph definitions
Definition
Dual graph: A dual graph of a plane graph G is a graph that has a vertex
corresponding to each face of G, and an edge joining two neighboring
faces for each edge in G.
Definition
Graph spectrum: The multiset of eigenvalues of the adjacency matrix is
called the spectrum of the graph. (The eigenvalues of the Laplacian
matrix of a graph are sometimes also referred to as its spectrum.)
Definition
Cospectral graphs: Two graphs are called isospectral or cospectral if they
have the same spectrum.
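For concreteness, a small numpy sketch of these definitions (eigvalsh returns the sorted eigenvalues of a symmetric matrix; rounding makes the equality test robust to floating-point noise):

```python
import numpy as np

def spectrum(adj, decimals=8):
    """Graph spectrum: the eigenvalues of the adjacency matrix,
    sorted ascending (eigvalsh) and rounded so that the equality
    test below is robust to floating-point noise."""
    return tuple(np.round(np.linalg.eigvalsh(np.asarray(adj, dtype=float)),
                          decimals))

def cospectral(adj1, adj2):
    """Two graphs are isospectral/cospectral iff their spectra coincide."""
    return spectrum(adj1) == spectrum(adj2)
```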
32. Testing compression and BDM on dual graphs
[Zenil et al. Physica A (2014)]
33. H, compression and BDM on cospectral graphs
34. Quantifying Loss of Information in Network-based
Dimensionality Reduction Techniques
Figure: Flowchart of Quantifying Loss of Information in Network-based
Dimensionality Reduction Techniques.
35. Methods of (Algorithmic) Information Theory in network
dimensionality reduction
Figure: Information content of graph spectra and graph motif analysis.
Information content of 16 graphs of different types and the information
content of their graph spectra approximated by Bzip2, Compress and BDM.
36. Methods of (Algorithmic) Information Theory in network
dimensionality reduction
Figure: Information content progression under sparsification. Information
loss after keeping from 20% to 80% of the graph edges (100% corresponds to
the information content of the original graph).
37. Methods of (Algorithmic) Information Theory in network
dimensionality reduction
Figure: Plot comparing all methods as applied to 4 artificial networks.
The information content, measured as normalized complexity with two
different lossless compression algorithms, was used to assess the
sparsification, graph spectra and graph motif methods. The 6 networks from
the Mendes DB are of the same size, and each method displays different
phenomena.
38. Eigenvalue information weight effect on graph spectra
In graph spectra, either only the largest eigenvalue (λ1) is considered,
or all eigenvalues (λ1 . . . λn) are given the same weight. Yet
eigenvalues capture different properties and are sensitive to graph
specificity; e.g. in a complete graph, λ1 alone provides the graph size.
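A quick numerical check of that claim (standard linear algebra, not material from the talk): the adjacency spectrum of the complete graph K_n is n − 1 with multiplicity 1 and −1 with multiplicity n − 1, so λ1 alone recovers the graph size.

```python
import numpy as np

# Adjacency spectrum of the complete graph K_n: n-1 (once) and
# -1 (n-1 times), so the largest eigenvalue alone recovers n.
n = 7
adj = np.ones((n, n)) - np.eye(n)      # adjacency matrix of K_7
eig = np.linalg.eigvalsh(adj)          # sorted ascending
assert np.isclose(eig[-1], n - 1)      # lambda_1 = n - 1
assert np.allclose(eig[:-1], -1.0)     # remaining eigenvalues are -1
print(int(round(eig[-1])) + 1)         # prints 7: the graph size
```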
Figure: Graph spectra can be plotted in an n-dimensional space, where n is
the number of graph nodes (and the number of eigenvalues). When a graph G
evolves into G′, its spectrum changes from Spec1(G) to Spec2(G′) as in the
figure; but if not all eigenvalues are equally important, the distance
d(Spec1(G), Spec2(G′)) lies on a manifold rather than in Euclidean space.
39. Eigenvalues in Graph Spectra are not all the same
Nor is their magnitude alone of special relevance (e.g. taking only the
largest one):
Figure: Statistics (ρ) and p-value plots between graph complexity (BDM)
and the largest, second-largest and smallest eigenvalues of 204 different
graph classes comprising 4913 graphs. Clearly, graph class complexity
correlates in different ways with different eigenvalues.
[Source: Zenil, Kiani and Tegnér LNCS (2015)]
40. Eigenvalues of evolving networks
Most informative eigenvalues to characterize a family of networks and
individuals in such a family:
Figure: The complexity of the graph versus the complexity of the list of
eigenvalues per position (rows) provides information about the amount and
kind of information stored in each eigenvalue; the maximum-entropy row
also identifies the eigenvalue that best characterizes the changes in an
evolving network that otherwise displays very few topological changes.
41. Entropy and complexity of Eigenvalue families
Let n be the number of data points of an evolving graph (or of a family of
graphs under study), H Shannon entropy, K Kolmogorov complexity and KS
Kolmogorov-Sinai entropy (∼ interval Shannon entropy); then we are
interested in:
H(Spec(G_i)), K(Spec(G_i)), KS(Spec(G_i))

where i ∈ {1, . . . , n}, to study the eigenvalue behaviour with respect
to K_BDM(G_i), and

KS(λ_1^1, λ_1^2, . . . , λ_1^n)
KS(λ_2^1, λ_2^2, . . . , λ_2^n)
. . .

where λ_j^i is the j-th eigenvalue of G_i, maximizing the differences
between the G_i and hence characterizing G in time.
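A minimal sketch of this scheme (my illustration; binned Shannon entropy stands in for the KS/interval entropy, and the bin count is an arbitrary assumption): compute the spectrum of each snapshot and return the eigenvalue position whose trajectory across time has maximal entropy.

```python
import math
from collections import Counter
import numpy as np

def most_informative_eigenvalue(adjs, bins=10):
    """Given snapshots of an evolving graph (adjacency matrices of
    equal size), compute each snapshot's spectrum, then the Shannon
    entropy of every eigenvalue position's trajectory after binning.
    The maximal-entropy position is the eigenvalue that best tracks
    the changes in the evolving network."""
    spectra = np.array([np.linalg.eigvalsh(np.asarray(a, dtype=float))
                        for a in adjs])

    def H(traj):
        lo, hi = traj.min(), traj.max()
        if hi == lo:                     # constant eigenvalue: no information
            return 0.0
        idx = np.minimum(((traj - lo) / (hi - lo) * bins).astype(int),
                         bins - 1)
        n = len(traj)
        return -sum(c / n * math.log2(c / n)
                    for c in Counter(idx.tolist()).values())

    return int(max(range(spectra.shape[1]), key=lambda j: H(spectra[:, j])))
```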
42. Part 2 Summary
1 We have a sound, robust and native 2-dimensional complexity
measure applicable to graphs and networks.
2 The method scales to higher dimensions, e.g. 3 dimensions, which I call
CTM3D, the “3D-printing complexity measure”: it only requires the Turing
machine to operate on a 3D grid, effectively giving the probability that a
random computer program prints a 3D object!
3 The defined graph complexity measure captures algebraic and topological
(and, in forthcoming work, even physical) properties of graphs and
networks.
4 There is a potential for applications in network and synthetic biology.
5 The method may prove to be very effective at giving proper weight to
eigenvalues and even shedding light on their meaning and information
content.
43. F. Soler-Toscano, H. Zenil, J.-P. Delahaye and N. Gauvrit, Correspondence
and Independence of Numerical Evaluations of Algorithmic Information
Measures, Computability, vol. 2, no. 2, pp. 125–140, 2013.
H. Zenil, N.A. Kiani, J. Tegnér, Numerical Investigation of Graph Spectra
and Information Interpretability of Eigenvalues, IWBBIO 2015, LNCS 9044,
pp. 395–405, Springer, 2015.
N. Gauvrit, H. Zenil, F. Soler-Toscano and J.-P. Delahaye, Algorithmic
complexity for short binary strings applied to psychology: a primer,
Behavior Research Methods, vol. 46, no. 3, pp. 732–744, 2013.
H. Zenil, F. Soler-Toscano, K. Dingle and A. Louis, Correlation of
Automorphism Group Size and Topological Properties with Program-size
Complexity Evaluations of Graphs and Complex Networks, Physica A:
Statistical Mechanics and its Applications, vol. 404, pp. 341–358, 2014.
J.-P. Delahaye and H. Zenil, Numerical Evaluation of the Complexity of
Short Strings, Applied Mathematics and Computation, 2011.
F. Soler-Toscano, H. Zenil, J.-P. Delahaye and N. Gauvrit, Calculating
Kolmogorov Complexity from the Output Frequency Distributions of Small
Turing Machines, PLoS ONE, 9(5): e96223, 2014.
44. J.-P. Delahaye and H. Zenil, On the Kolmogorov-Chaitin complexity for
short sequences, in Cristian Calude (ed.), Complexity and Randomness: From
Leibniz to Chaitin, World Scientific, 2007.
G.J. Chaitin, A Theory of Program Size Formally Identical to Information
Theory, J. Assoc. Comput. Mach., vol. 22, pp. 329–340, 1975.
R. Cilibrasi and P. Vitányi, Clustering by compression, IEEE Trans. on
Information Theory, 51(4), 2005.
A.N. Kolmogorov, Three approaches to the quantitative definition of
information, Problems of Information Transmission, 1(1):1–7, 1965.
L. Levin, Laws of information conservation (non-growth) and aspects of the
foundation of probability theory, Problems of Information Transmission,
10(3):206–210, 1974.
R.J. Solomonoff. A formal theory of inductive inference: Parts 1 and 2,
Information and Control, 7:1–22 and 224–254, 1964.
S. Wolfram, A New Kind of Science, Wolfram Media, 2002.