This document summarizes Samuel Relton's presentation on Fréchet derivatives of matrix functions and their applications. Some key points:
1) It discusses how to define and compute Fréchet derivatives of matrix functions, which describe how small perturbations to a matrix affect the output of the function.
2) Applications include estimating sensitivity in nuclear activation models, predicting algebraic error in finite element methods, and analyzing condition numbers.
3) It presents algorithms for efficiently identifying the elements of a matrix to which the output of a function is most sensitive, with applications to finite element methods.
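As a concrete illustration (not taken from the slides), SciPy exposes the Fréchet derivative of the matrix exponential directly via `scipy.linalg.expm_frechet`. The sketch below perturbs one entry of a small test matrix and checks that the derivative predicts the first-order change in e^A:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

# A small test matrix and a perturbation of its (0, 1) entry.
A = np.array([[1.0, 2.0],
              [0.5, 0.3]])
E = np.zeros_like(A)
E[0, 1] = 1.0

# expm_frechet returns both e^A and the Frechet derivative L_exp(A, E).
expA, L = expm_frechet(A, E)

# First-order prediction of e^(A + tE) for a small perturbation tE.
t = 1e-5
pred = expA + t * L
actual = expm(A + t * E)
print(np.linalg.norm(actual - pred))  # O(t^2): far smaller than t itself
```

The size of `L` relative to `expA` is exactly the kind of sensitivity information the condition-number and elementwise-sensitivity algorithms in the talk are built on.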
1. Fréchet Derivatives of Matrix Functions and Applications
Samuel Relton
samuel.relton@maths.man.ac.uk @sdrelton
samrelton.com blog.samrelton.com
Joint work with Nicholas J. Higham
higham@maths.man.ac.uk @nhigham
www.maths.man.ac.uk/~higham nickhigham.wordpress.com
University of Manchester, UK
September 4, 2014
Sam Relton (UoM) Derivatives of matrix functions September 4, 2014 1 / 23
2. Outline
Matrix Functions, their Derivatives, and the Condition Number
Elementwise Sensitivity
Physics: Nuclear Activation Sensitivity Problem
Differential Equations: Predicting Algebraic Error in the FEM
3. Matrix Functions
We are interested in functions f : C^{n×n} → C^{n×n}, e.g.

    Matrix exponential   e^A = Σ_{k=0}^∞ A^k / k!
    Matrix cosine        cos(A) = Σ_{k=0}^∞ (−1)^k A^{2k} / (2k)!

Define f(A) by its Taylor series when f is analytic
If A = XDX^{−1} then f(A) = X f(D) X^{−1}
Differential equations: du/dt = Au(t) has solution u(t) = e^{tA} u(0)
Use cos(A) and sin(A) for second order ODEs
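The eigendecomposition identity above is easy to check numerically; a minimal sketch using SciPy's expm as the reference for f = exp (the 2×2 test matrix is illustrative):

```python
import numpy as np
from scipy.linalg import expm

# If A = X D X^{-1} then f(A) = X f(D) X^{-1}: check for f = exp
# on a small diagonalizable matrix with real, distinct eigenvalues.
A = np.array([[2.0, 1.0], [0.5, 1.0]])
d, X = np.linalg.eig(A)
F = X @ np.diag(np.exp(d)) @ np.linalg.inv(X)
assert np.allclose(F, expm(A))
```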
7. Definition (Fréchet derivative)
The Fréchet derivative of f at A is the unique linear function L_f(A, ·) : C^{n×n} → C^{n×n} such that for all E

    f(A + E) − f(A) − L_f(A, E) = o(||E||).

Applications include manifold optimization, Markov models, bladder cancer, image processing, and network analysis
Higher order derivatives recently analyzed (Higham & R., 2014)
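The defining o(||E||) property can be probed numerically: the remainder divided by ||E|| should shrink roughly linearly with ||E||, since exp is twice differentiable. A sketch using SciPy's expm and expm_frechet for f = exp (the random test matrices are illustrative):

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

# Check that f(A + E) - f(A) - L_f(A, E) = o(||E||) for f = exp:
# the ratio remainder / ||E|| should drop as ||E|| shrinks.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
E0 = rng.standard_normal((3, 3))

ratios = []
for eps in (1e-2, 1e-4):
    E = eps * E0
    _, L = expm_frechet(A, E)                      # L_exp(A, E)
    remainder = np.linalg.norm(expm(A + E) - expm(A) - L)
    ratios.append(remainder / np.linalg.norm(E))
```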
10. Sensitivity of Matrix Functions
[Diagram: f maps a neighbourhood S_A of A to f(S_A), and a neighbourhood S_X of X to f(S_X).]
The function f is well conditioned at A and ill conditioned at X
12. The Norm-wise Condition Number
The two condition numbers for a matrix function are:

    cond_abs(f, A) = max_{||E||=1} ||L_f(A, E)||,

    cond_rel(f, A) = max_{||E||=1} ||L_f(A, E)|| ||A|| / ||f(A)||.
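A cheap way to probe cond_rel is to sample unit-norm directions E and keep the largest ||L_f(A, E)||, which gives a lower bound on the maximum. A sketch for f = exp in the Frobenius norm, compared against SciPy's expm_cond (the exact relative condition number of the matrix exponential in the Frobenius norm); the sample count and test matrix are illustrative:

```python
import numpy as np
from scipy.linalg import expm, expm_frechet, expm_cond

# Monte-Carlo lower bound on cond_rel(exp, A): sample unit Frobenius-norm
# directions, keep the largest ||L_f(A, E)||_F, scale by ||A|| / ||f(A)||.
rng = np.random.default_rng(42)
n = 4
A = rng.standard_normal((n, n))

best = 0.0
for _ in range(200):
    E = rng.standard_normal((n, n))
    E /= np.linalg.norm(E)               # ||E||_F = 1
    _, L = expm_frechet(A, E)
    best = max(best, np.linalg.norm(L))

cond_lower = best * np.linalg.norm(A) / np.linalg.norm(expm(A))
```

Sampling can only underestimate the maximum, so cond_lower never exceeds the exact condition number.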
13. Elementwise Sensitivity
If we change just one element A_ij, how is f(A) affected?
Let E_ij = e_i e_j^T; then the difference between f(A) and f(A + E_ij) is

    ||f(A) − f(A + E_ij)|| ≈ ||L_f(A, E_ij)||.

||L_f(A, E_ij)|| gives the sensitivity in the (i, j) component
Sometimes we want the t most sensitive elements, for t = 5:20
14. A simple algorithm
To compute the most sensitive t entries of A:

    for i = 1:n
        for j = 1:n
            if A_ij ≠ 0
                Compute and store ||L_f(A, E_ij)||
            end if
        end for
    end for
    Take the largest t values of ||L_f(A, E_ij)||

Cost: up to O(n^5) flops, since computing L_f(A, E) costs O(n^3) flops
Trivially parallel, but still very expensive when A is large
Speed this up using block norm estimation (work in progress)
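Specialised to f = exp, the loop above can be sketched in a few lines of Python with SciPy's expm_frechet; the function name and the choice of Frobenius norm are illustrative:

```python
import numpy as np
from scipy.linalg import expm_frechet

def most_sensitive_entries(A, t):
    """Rank the nonzero entries (i, j) of A by ||L_f(A, E_ij)||_F for
    f = exp, returning the t largest (a sketch of the O(n^5) loop)."""
    n = A.shape[0]
    scores = []
    for i in range(n):
        for j in range(n):
            if A[i, j] != 0:
                E = np.zeros((n, n))
                E[i, j] = 1.0
                _, L = expm_frechet(A, E)   # one O(n^3) derivative per entry
                scores.append((np.linalg.norm(L), (i, j)))
    scores.sort(reverse=True)
    return scores[:t]
```

For an n × n matrix this performs one O(n^3) derivative per nonzero entry, matching the O(n^5) worst-case cost on the slide.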
16. The Nuclear Activation Sensitivity Problem
Chemical reactions: u′(t) = Au(t)
u(t) = e^{At} u(0) tells us the concentration of each element at time t
q^T u(t) is the dosage at time t
A_ij represents the reaction between elements i and j (so ignore A_ij = 0)
A_ij is subject to measurement error: what happens to q^T u(t) when it changes?
Implications for safety in radiation exposure models etc.
17. Nuclear Activation Solution - 1
If A_ij is perturbed, this introduces a relative error in q^T u(t) of

    |q^T (e^{tA + E_ij} − e^{tA}) u(0)| / |q^T e^{tA} u(0)| ≈ |q^T L_exp(tA, E_ij) u(0)| / |q^T e^{tA} u(0)|

We note that:
The denominator is the same for all perturbations
This requires computing a derivative in all directions A_ij ≠ 0
Can we improve upon this?
19. Nuclear activation solution - 2
Using vec(AXB) = (B^T ⊗ A) vec(X) we see the sensitivity in direction E_ij is

    |q^T L_exp(tA, E_ij) u(0)| = |(u(0)^T ⊗ q^T) K_exp(tA) vec(E_ij)|.

Therefore the sensitivity in ALL n^2 directions is

    |[(u(0)^T ⊗ q^T) K_exp(tA)]^T| = |vec(L_exp(tA, unvec(u(0) ⊗ q)^T)^T)|.

Only 1 derivative needed for all sensitivities
Found 2 bugs in existing commercial software!
Extends to time-dependent coefficients A = A(t)
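For real data the identity above collapses all n^2 sensitivities into a single derivative evaluation: unvec(u(0) ⊗ q)^T = u(0) q^T, so the entrywise sensitivities q^T L_exp(tA, E_ij) u(0) fill the matrix L_exp(tA, u(0) q^T)^T. A sketch cross-checking this against the one-direction-at-a-time formula (all matrices and vectors here are illustrative):

```python
import numpy as np
from scipy.linalg import expm_frechet

rng = np.random.default_rng(0)
n = 4
A = 0.3 * rng.standard_normal((n, n))
q = rng.standard_normal(n)
u0 = rng.standard_normal(n)
t = 1.0

# One Frechet derivative in direction u(0) q^T gives every entrywise
# sensitivity at once: S[i, j] = q^T L_exp(tA, E_ij) u(0).
_, L = expm_frechet(t * A, np.outer(u0, q))
S = L.T

# Cross-check one entry against the one-direction-at-a-time formula
E = np.zeros((n, n))
E[1, 2] = 1.0
_, L_single = expm_frechet(t * A, E)
assert np.isclose(S[1, 2], q @ L_single @ u0)
```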
20. Predicting Algebraic Error in an ODE
Let's solve the model ODE

    −u″ = f(x),  x ∈ (0, 1),  u(0) = u(1) = 0

with the finite element method using piecewise linear basis functions φ_i.
Exact solution u(x) = e^{−5(x−0.5)²} − e^{−5/4} determines f(x)
Generate a grid of n = 19 equally spaced points x_i
Generate the system Ax = b where A_ij = ∫₀¹ φ_i′ φ_j′ and b_i = f(x_i)
A = tridiag(−1, 2, −1) in this case
Solve with CG iteration
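The model problem can be set up in a few lines. A sketch assuming the scaled stiffness matrix tridiag(−1, 2, −1)/h and a simple lumped load b_i = h f(x_i); the exact assembly on the slide may differ, and the hand-rolled CG routine is illustrative:

```python
import numpy as np

def cg_iterations(A, b, k):
    """Plain conjugate gradients, stopped after exactly k iterations."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(k):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# n = 19 interior nodes on (0, 1); scaled stiffness matrix tridiag(-1, 2, -1)/h
n = 19
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Exact solution from the slide and the corresponding f = -u''
u_exact = np.exp(-5.0 * (x - 0.5) ** 2) - np.exp(-5.0 / 4.0)
f = (10.0 - 100.0 * (x - 0.5) ** 2) * np.exp(-5.0 * (x - 0.5) ** 2)
b = h * f                        # lumped load (an assumption of this sketch)

u_h = np.linalg.solve(A, b)      # "best" discrete solution
u_k = cg_iterations(A, b, 8)     # numerical solution after k = 8 CG steps
alg_err = u_h - u_k              # algebraic error at the nodes
```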
23. The finite element space (dimension 19)
Let u_h ∈ V_h be the best solution possible from V_h
Let u^k_est be our numerical solution corresponding to k iterations of CG
The discretization error is u − u_h
The algebraic error is u_h − u^k_est
The total error is u − u^k_est = alg. err. + disc. err.
Sometimes the alg. err. dominates the total err.; how do we detect this?
26. Discretization Error
[Plot: discretization error u − u_h on (0, 1); values of order 10⁻³.]
27. Algebraic Error - 8 CG iterations
[Plot: algebraic error and total error for k = 8, of order 10⁻²; nodes 9–11 highlighted.]
28. Algebraic Error - 9 CG iterations
[Plot: algebraic error and total error for k = 9, of order 10⁻³; nodes 9–11 highlighted.]
29. Elementwise sensitivity analysis
Taking f(A) = A^{−1} we can calculate the sensitivity of each element
L_f(A, E) = −A^{−1} E A^{−1}, so easily computed
Ignore A_ij = 0, since the two basis elements don't overlap
Results plotted on the following heat map
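Because E_ij = e_i e_j^T has rank one, L_f(A, E_ij) = −(A^{−1} e_i)(e_j^T A^{−1}) is also rank one, so its Frobenius norm factors into a column norm times a row norm of A^{−1}. All n^2 sensitivities therefore follow from a single inverse; a sketch (the function name is illustrative):

```python
import numpy as np

def inverse_sensitivity_map(A):
    """S[i, j] = ||L_f(A, E_ij)||_F for f(A) = A^{-1}.  Since
    L_f(A, E_ij) = -(A^{-1} e_i)(e_j^T A^{-1}) is rank one, its Frobenius
    norm is ||A^{-1} e_i||_2 * ||e_j^T A^{-1}||_2, so one inverse gives
    all n^2 sensitivities."""
    Ainv = np.linalg.inv(A)
    col_norms = np.linalg.norm(Ainv, axis=0)   # ||A^{-1} e_i||_2 per column
    row_norms = np.linalg.norm(Ainv, axis=1)   # ||e_j^T A^{-1}||_2 per row
    return np.outer(col_norms, row_norms)
```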
30. Elementwise sensitivity analysis
[Heat map: most sensitive elements of A when computing A⁻¹ in the 1-norm, on a 19×19 grid with values from 0 to about 0.6.]
Rows/Cols 9–11 in the middle
31. 2D Peak Problem
[Surface plot of the peak problem on (0, 1)², with peak height about 0.03.]
32. Algebraic Error Estimation
[Two surface plots on (0, 1)².]
Left: true algebraic error using 7 CG iterations (of order 10⁻⁴).
Right: error in the estimated algebraic error using the 1st Fréchet derivative (of order 10⁻⁷).
33. Higher Order Derivatives to Estimate Alg. Err.
[Plot: componentwise error using kth order derivatives, k = 1, 3, 5, on a log scale from about 10⁻⁶ down to 10⁻¹⁶.]
34. Possible extensions
Can this be used to modify the discretization mesh to obtain better accuracy? (See Papez, Liesen, and Strakos 2014)
Currently too expensive: can we estimate the sensitivities?
Can this be extended to f(A) = e^A (exponential integrators)?
35. Conclusions
Explained elementwise sensitivity of matrix functions
New applications in nuclear physics and FEM analysis
The former is basically solved; the latter needs to be cheaper
Future work:
Estimate sensitivities more efficiently (block norm estimation)
Further comparison of the nuclear physics solution to the commercial alternative
Further analysis of the ODE problem
37. Higher order Fréchet derivatives
Higher order Fréchet derivatives are defined recursively:

    L^(k)_f(A + E_{k+1}, E_1, ..., E_k) − L^(k)_f(A, E_1, ..., E_k) = L^(k+1)_f(A, E_1, ..., E_k, E_{k+1}) + o(||E_{k+1}||)

Also have a simple method to compute them. For example:

    f( [ A  E1  E2  0
         0  A   0   E2
         0  0   A   E1
         0  0   0   A  ] )

      = [ f(A)  L_f(A, E1)  L_f(A, E2)  L^(2)_f(A, E1, E2)
          0     f(A)        0           L_f(A, E2)
          0     0           f(A)        L_f(A, E1)
          0     0           0           f(A)              ]

More info in Higham & Relton, SIMAX 35(4), 2014.
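The k = 1 case of this block construction, f([A E; 0 A]) = [f(A) L_f(A, E); 0 f(A)], is easy to verify for f = exp with SciPy (the random test matrices are illustrative):

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))

# f applied to [[A, E], [0, A]] carries L_f(A, E) in the top-right block
F = expm(np.block([[A, E], [np.zeros((n, n)), A]]))
L_block = F[:n, n:]

_, L_direct = expm_frechet(A, E)   # reference value
assert np.allclose(L_block, L_direct)
```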