This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p³) to O(m³), where m ≪ p is the size of a small subset of the p data points. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions together with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show that Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to perform principal warp analysis.
Approximate Thin Plate Spline Mappings

Gianluca Donato¹ and Serge Belongie²

¹ Digital Persona, Inc., Redwood City, CA 94063
  gianlucad@digitalpersona.com
² U.C. San Diego, La Jolla, CA 92093-0114
  sjb@cs.ucsd.edu
Abstract. The thin plate spline (TPS) is an effective tool for modeling coordinate transformations that has been applied successfully in several computer vision applications. Unfortunately the solution requires the inversion of a p × p matrix, where p is the number of points in the data set, thus making it impractical for large scale applications. As it turns out, a surprisingly good approximate solution is often possible using only a small subset of corresponding points. We begin by discussing the obvious approach of using the subsampled set to estimate a transformation that is then applied to all the points, and we show the drawbacks of this method. We then proceed to borrow a technique from the machine learning community for function approximation using radial basis functions (RBFs) and adapt it to the task at hand. Using this method, we demonstrate a significant improvement over the naive method. One drawback of this method, however, is that it does not allow for principal warp analysis, a technique for studying shape deformations introduced by Bookstein based on the eigenvectors of the p × p bending energy matrix. To address this, we describe a third approximation method based on a classic matrix completion technique that allows for principal warp analysis as a by-product. By means of experiments on real and synthetic data, we demonstrate the pros and cons of these different approximations so as to allow the reader to make an informed decision suited to his or her application.
1 Introduction

The thin plate spline (TPS) is a commonly used basis function for representing coordinate mappings from R² to R². Bookstein [3] and Davis et al. [5], for example, have studied its application to the problem of modeling changes in biological forms. The thin plate spline is the 2D generalization of the cubic spline. In its regularized form the TPS model includes the affine model as a special case.

One drawback of the TPS model is that its solution requires the inversion of a large, dense matrix of size p × p, where p is the number of points in the data set. Our goal in this paper is to present and compare three approximation methods that address this computational problem through the use of a subset of corresponding points. In doing so, we highlight connections to related approaches in the area of Gaussian RBF networks that are relevant to the TPS mapping problem. Finally, we discuss a novel application of the Nyström approximation [1] to the TPS mapping problem.

Our experimental results suggest that the present work should be particularly useful in applications such as shape matching and correspondence recovery (e.g. [2,7,4]) as well as in graphics applications such as morphing.

A. Heyden et al. (Eds.): ECCV 2002, LNCS 2352, pp. 21–31, 2002.
© Springer-Verlag Berlin Heidelberg 2002
2 Review of Thin Plate Splines
Let $v_i$ denote the target function values at locations $(x_i, y_i)$ in the plane, with
$i = 1, 2, \ldots, p$. In particular, we will set $v_i$ equal to the target coordinates $x'_i$
and $y'_i$ in turn to obtain one continuous transformation for each coordinate. We
assume that the locations $(x_i, y_i)$ are all different and are not collinear. The TPS
interpolant f(x, y) minimizes the bending energy
$$I_f = \iint_{\mathbb{R}^2} \left( f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2 \right) dx\,dy$$
and has the form
$$f(x, y) = a_1 + a_x x + a_y y + \sum_{i=1}^{p} w_i\, U\!\left( \left\| (x_i, y_i) - (x, y) \right\| \right)$$
where $U(r) = r^2 \log r$. In order for $f(x, y)$ to have square integrable second
derivatives, we require that
$$\sum_{i=1}^{p} w_i = 0 \quad \text{and} \quad \sum_{i=1}^{p} w_i x_i = \sum_{i=1}^{p} w_i y_i = 0\,.$$
Together with the interpolation conditions, f(xi, yi) = vi, this yields a linear
system for the TPS coefficients:
$$\begin{bmatrix} K & P \\ P^T & O \end{bmatrix} \begin{bmatrix} w \\ a \end{bmatrix} = \begin{bmatrix} v \\ o \end{bmatrix} \tag{1}$$
where $K_{ij} = U\big(\left\|(x_i, y_i) - (x_j, y_j)\right\|\big)$, the $i$th row of $P$ is $(1, x_i, y_i)$, $O$ is a $3 \times 3$
matrix of zeros, $o$ is a $3 \times 1$ column vector of zeros, $w$ and $v$ are column vectors
formed from $w_i$ and $v_i$, respectively, and $a$ is the column vector with elements
$a_1, a_x, a_y$. We will denote the $(p + 3) \times (p + 3)$ matrix of this system by $L$; as
discussed e.g. in [7], $L$ is nonsingular. If we denote the upper left $p \times p$ block of
$L^{-1}$ by $L_p^{-1}$, then it can be shown that
$$I_f \propto v^T L_p^{-1} v = w^T K w\,.$$
When there is noise in the specified values vi, one may wish to relax the exact
interpolation requirement by means of regularization. This is accomplished by
minimizing
$$H[f] = \sum_{i=1}^{p} \big( v_i - f(x_i, y_i) \big)^2 + \lambda I_f\,.$$
The regularization parameter $\lambda$, a positive scalar, controls the amount of
smoothing; the limiting case of $\lambda = 0$ reduces to exact interpolation. As demonstrated
in [9,6], we can solve for the TPS coefficients in the regularized case by replacing
the matrix K by K + λI, where I is the p × p identity matrix.
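As a concrete reference point, the construction above can be sketched in a few lines of numpy (a minimal sketch under our own naming — `tps_fit`, `U`, and so on are not from the paper):

```python
import numpy as np

def U(r):
    # TPS kernel U(r) = r^2 log r, extended continuously by U(0) = 0.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(pts, v, lam=0.0):
    """Solve Equation (1) for one target coordinate.
    pts: (p, 2) locations; v: (p,) targets; lam: regularizer (K -> K + lam*I)."""
    p = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    K = U(r) + lam * np.eye(p)
    P = np.hstack([np.ones((p, 1)), pts])            # i-th row is (1, x_i, y_i)
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])  # (p+3) x (p+3) system matrix
    wa = np.linalg.solve(L, np.concatenate([v, np.zeros(3)]))
    return wa[:p], wa[p:]                            # w, and a = (a1, ax, ay)
```

With $\lambda = 0$ the recovered $f$ interpolates the targets exactly; the `solve` call is the $O(p^3)$ step that the approximations of the next section avoid.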
3 Approximation Techniques
Since inverting $L$ is an $O(p^3)$ operation, solving for the TPS coefficients can be
very expensive when $p$ is large. We will now discuss three different approximation
methods that reduce this computational burden to $O(m^3)$, where $m$ can be as
small as 0.1p. The corresponding savings factors in memory (5x) and processing
time (1000x) thus make TPS methods tractable when p is very large.
In the discussion below we use the following partition of the K matrix:
$$K = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \tag{2}$$
with $A \in \mathbb{R}^{m \times m}$, $B \in \mathbb{R}^{m \times n}$, and $C \in \mathbb{R}^{n \times n}$, where $n = p - m$. Without loss of generality, we
will assume the p points are labeled in random order, so that the first m points
represent a randomly selected subset.
3.1 Method 1: Simple Subsampling
The simplest approximation technique is to solve for the TPS mapping between
a randomly selected subset of the correspondences. This amounts to using A
in place of K in Equation (1). We can then use the recovered coefficients to
extrapolate the TPS mapping to the remaining points. The result of applying
this approximation to some sample shapes is shown in Figure 1. In this case,
certain parts were not sampled at all, and as a result the mapping in those areas
is poor.
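Method 1 amounts to fitting on the subset and extrapolating; a self-contained sketch (the helper names are ours, not the paper's):

```python
import numpy as np

def U(r):
    # TPS kernel U(r) = r^2 log r, with U(0) = 0.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def fit_subset(src, dst, m, rng):
    """Method 1: fit the TPS on m random correspondences, then map all p points.
    src, dst: (p, 2) corresponding point sets; returns the warped src, (p, 2)."""
    p = len(src)
    idx = rng.permutation(p)[:m]                     # the random subset (block A)
    s = src[idx]
    K = U(np.linalg.norm(s[:, None] - s[None, :], axis=2))
    P = np.hstack([np.ones((m, 1)), s])
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.vstack([dst[idx], np.zeros((3, 2))])    # one column per coordinate
    wa = np.linalg.solve(L, rhs)                     # (m+3, 2) coefficients
    Kall = U(np.linalg.norm(src[:, None] - s[None, :], axis=2))  # (p, m)
    Pall = np.hstack([np.ones((p, 1)), src])
    return Kall @ wa[:m] + Pall @ wa[m:]
```

An affine target map is reproduced exactly even from a small subset; the failures visible in Figure 1(c) arise where the nonlinear part of the warp is unsampled.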
3.2 Method 2: Basis Function Subset
An improved approximation can be obtained by using a subset of the basis
functions with all of the target values. Such an approach appears in [10,6] and
Section 3.1 of [8] for the case of Gaussian RBFs. In the TPS case, we need to
account for the affine terms, which leads to a modified set of linear equations.
Starting from the cost function
$$R[\tilde{w}, a] = \frac{1}{2} \left\| v - \tilde{K} \tilde{w} - P a \right\|^2 + \frac{\lambda}{2}\, \tilde{w}^T A \tilde{w}\,,$$
Fig. 1. Thin plate spline (TPS) mapping example. (a,b) Template and target synthetic
fish shapes, each consisting of 98 points. (Correspondences between the two shapes are
known.) (c) TPS mapping of (a) onto (b) using the subset of points indicated by circles
(Method 1). Corresponding points are indicated by connecting line segments. Notice
the quality of the mapping is poor where the samples are sparse. An improved approxi-
mation can be obtained by making use of the full set of target values; this is illustrated
in (d), where we have used Method 2 (discussed in Section 3.2). A similar mapping is
found for the same set of samples using Method 3 (see Section 3.3). In (e-h) we observe
the same behavior for a pair of handwritten digits, where the correspondences (89 in
all) have been found using the method of [2].
we minimize it by setting $\partial R / \partial \tilde{w}$ and $\partial R / \partial a$ to zero, which leads to the following
$(m + 3) \times (m + 3)$ linear system,
$$\begin{bmatrix} \tilde{K}^T \tilde{K} + \lambda A & \tilde{K}^T P \\ P^T \tilde{K} & P^T P \end{bmatrix} \begin{bmatrix} \tilde{w} \\ a \end{bmatrix} = \begin{bmatrix} \tilde{K}^T v \\ P^T v \end{bmatrix} \tag{3}$$
where $\tilde{K}^T = [A \;\, B^T]$ (so that $\tilde{K}$ consists of the first $m$ columns of $K$), $\tilde{w}$ is an
$m \times 1$ vector of TPS coefficients, and the rest of the entries are as before. Thus
we seek weights for the reduced set of basis functions that take into account the
full set of $p$ target values contained in $v$. If we call $\tilde{P}$ the first $m$ rows of $P$ and
$\tilde{I}$ the first $m$ columns of the $p \times p$ identity matrix, then under the assumption
$\tilde{P}^T \tilde{w} = 0$, Equation (3) is equivalent to
$$\begin{bmatrix} \tilde{K} + \lambda \tilde{I} & P \\ \tilde{P}^T & O \end{bmatrix} \begin{bmatrix} \tilde{w} \\ a \end{bmatrix} = \begin{bmatrix} v \\ o \end{bmatrix}$$
which corresponds to the regularized version of Equation (1) when using the
subsampled $\tilde{K}$ and $\tilde{P}^T$ in place of $K$ and $P^T$.
The application of this technique to the fish and digit shapes is shown in
Figure 1(d,h).
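A minimal numpy sketch of Method 2 (`method2` and the other names are ours), forming $\tilde{K}$ from the first $m$ points and solving the system (3):

```python
import numpy as np

def U(r):
    # TPS kernel U(r) = r^2 log r, with U(0) = 0.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def method2(src, v, m, lam=1e-8):
    """Solve the system (3): src (p, 2) locations, v (p,) targets; the first
    m points serve as basis centers (the paper's random labeling assumption)."""
    p = len(src)
    Kt = U(np.linalg.norm(src[:, None] - src[None, :m], axis=2))   # K~ : (p, m)
    A = Kt[:m]                                                     # m x m block
    P = np.hstack([np.ones((p, 1)), src])
    M = np.block([[Kt.T @ Kt + lam * A, Kt.T @ P],
                  [P.T @ Kt,            P.T @ P]])
    rhs = np.concatenate([Kt.T @ v, P.T @ v])
    sol = np.linalg.solve(M, rhs)
    return sol[:m], sol[m:]          # w~ (m,), a (3,)
```

For targets that are exactly affine in the source locations the system returns $\tilde{w} = 0$ and the exact affine coefficients, since the affine block already fits them with zero residual.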
3.3 Method 3: Matrix Approximation
The essence of Method 2 was to use a subset of exact basis functions to approx-
imate a full set of target values. We now consider an approach that uses a full
set of approximate basis functions to approximate the full set of target values.
The approach is based on a technique known as the Nyström method.
The Nyström method provides a means of approximating the eigenvectors
of K without using C. It was originally developed in the late 1920s for the
numerical solution of eigenfunction problems [1] and was recently used in [11]
for fast approximate Gaussian process regression and in [8] (implicitly) to speed
up several machine learning techniques using Gaussian kernels. Implicit in the
Nyström method is the assumption that $C$ can be approximated by $B^T A^{-1} B$,
i.e.
$$\hat{K} = \begin{bmatrix} A & B \\ B^T & B^T A^{-1} B \end{bmatrix} \tag{4}$$
If $\mathrm{rank}(K) = m$ and the $m$ rows of the submatrix $[A \;\, B]$ are linearly independent,
then $\hat{K} = K$. In general, the quality of the approximation can be expressed
as the norm of the difference $C - B^T A^{-1} B$, the Schur complement of $A$ in $K$.
Given the $m \times m$ diagonalization $A = U \Lambda U^T$, we can proceed to find the
approximate eigenvectors of $K$:
$$\hat{K} = \tilde{U} \Lambda \tilde{U}^T, \quad \text{with} \quad \tilde{U} = \begin{bmatrix} U \\ B^T U \Lambda^{-1} \end{bmatrix} \tag{5}$$
Note that in general the columns of $\tilde{U}$ are not orthogonal. To address this, first
define $Z = \tilde{U} \Lambda^{1/2}$ so that $\hat{K} = Z Z^T$. Let $Q \Sigma Q^T$ denote the diagonalization
of $Z^T Z$. Then the matrix $V = Z Q \Sigma^{-1/2}$ contains the leading orthonormalized
eigenvectors of $\hat{K}$, i.e. $\hat{K} = V \Sigma V^T$, with $V^T V = I$.
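The construction of $V$ and $\Sigma$ takes only a few lines (a sketch under our own naming; for real square roots we assume the block $A$, and hence $\Lambda$, is positive definite — for the indefinite TPS kernel one would work with signed square roots):

```python
import numpy as np

def nystrom(K, m):
    """Nystrom approximation of a symmetric K from its first m rows/columns:
    returns V (p x m, orthonormal columns) and sig with K_hat = V diag(sig) V^T.
    Assumes the m x m block A is positive definite (real square roots)."""
    A, B = K[:m, :m], K[:m, m:]
    lam, Um = np.linalg.eigh(A)                 # A = U Lam U^T
    Ut = np.vstack([Um, B.T @ (Um / lam)])      # U~ = [U; B^T U Lam^{-1}]  (Eq. 5)
    Z = Ut * np.sqrt(lam)                       # K_hat = Z Z^T
    sig, Q = np.linalg.eigh(Z.T @ Z)            # Z^T Z = Q Sigma Q^T
    V = Z @ (Q / np.sqrt(sig))                  # V = Z Q Sigma^{-1/2}, V^T V = I
    return V, sig
```

When $\mathrm{rank}(K) = m$ and the leading rows are independent, the reconstruction is exact, matching the remark above.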
From the standard formula for the partitioned inverse of L, we have
$$L^{-1} = \begin{bmatrix} K^{-1} + K^{-1} P S^{-1} P^T K^{-1} & -K^{-1} P S^{-1} \\ -S^{-1} P^T K^{-1} & S^{-1} \end{bmatrix}, \qquad S = -P^T K^{-1} P
$$
and thus
$$\begin{bmatrix} w \\ a \end{bmatrix} = L^{-1} \begin{bmatrix} v \\ o \end{bmatrix} = \begin{bmatrix} (I + K^{-1} P S^{-1} P^T) K^{-1} v \\ -S^{-1} P^T K^{-1} v \end{bmatrix}$$
Using the Nyström approximation to $K$, we have $\hat{K}^{-1} = V \Sigma^{-1} V^T$ and
$$\hat{w} = \left( I + V \Sigma^{-1} V^T P \hat{S}^{-1} P^T \right) V \Sigma^{-1} V^T v\,, \qquad \hat{a} = -\hat{S}^{-1} P^T V \Sigma^{-1} V^T v$$
with $\hat{S} = -P^T V \Sigma^{-1} V^T P$, which is $3 \times 3$. Therefore, by computing matrix-vector
products in the appropriate order, we can obtain estimates to the TPS
Fig. 2. Grids used for experimental testing. (a) Reference point set S1: 12 × 12 points
on the interval [0, 128]×[0, 128]. (b,c) Warped point sets S2 and S3 with bending energy
0.3 and 0.8, respectively. To test the quality of the different approximation methods,
we used varying percentages of points to estimate the TPS mapping from S1 to S2 and
from S1 to S3.
coefficients without ever having to invert or store a large p × p matrix. For the
regularized case, one can proceed in the same manner, using
$$(V \Sigma V^T + \lambda I)^{-1} \approx V (\Sigma + \lambda I)^{-1} V^T\,,$$
which is exact on the range of $V$.
Finally, the approximate bending energy is given by
$$w^T \hat{K} w = (V^T w)^T\, \Sigma\, (V^T w)\,.$$
Note that this bending energy is the average of the energies associated with the
$x$ and $y$ components, as in [3].
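The whole of Method 3 can be sketched as follows (our own naming; the Nyström step is rebuilt inline and, as above, the block $A$ is assumed positive definite):

```python
import numpy as np

def nystrom(K, m):
    # As in Section 3.3: K_hat = V diag(sig) V^T with orthonormal V (p x m).
    A, B = K[:m, :m], K[:m, m:]
    lam, Um = np.linalg.eigh(A)
    Ut = np.vstack([Um, B.T @ (Um / lam)])
    Z = Ut * np.sqrt(lam)
    sig, Q = np.linalg.eigh(Z.T @ Z)
    return Z @ (Q / np.sqrt(sig)), sig

def method3_coeffs(K, P, v, m):
    """w_hat = (I + K^-1 P S^-1 P^T) K^-1 v,  a_hat = -S^-1 P^T K^-1 v,
    with K^-1 ~= V Sigma^-1 V^T applied only through matrix-vector products."""
    V, sig = nystrom(K, m)
    def Kinv(X):                                # X -> V Sigma^-1 V^T X
        Y = V.T @ X
        return V @ (Y / sig if Y.ndim == 1 else Y / sig[:, None])
    S = -P.T @ Kinv(P)                          # 3 x 3
    Kv = Kinv(v)
    t = np.linalg.solve(S, P.T @ Kv)            # S^-1 P^T K^-1 v
    return Kv + Kinv(P @ t), -t                 # (w_hat, a_hat)
```

At $m = p$ the Nyström step is exact and this reproduces the exact TPS solution; for $m < p$ only $V$, $\Sigma$, and the $3 \times 3$ matrix $\hat{S}$ are ever formed, so no $p \times p$ matrix is inverted or stored.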
Let us briefly consider what $\hat{w}$ represents. The first $m$ components roughly
correspond to the entries in $\tilde{w}$ for Method 2; these in turn correspond to the
columns of $\hat{K}$ (i.e. $\tilde{K}$) for which exact information is available. The remaining
entries weight columns of $\hat{K}$ with (implicitly) filled-in values for all but the
first $m$ entries. In our experiments, we have observed that the latter values of
$\hat{w}$ are nonzero, which indicates that these approximate basis functions are not
being disregarded. Qualitatively, the approximation quality of Methods 2 and 3
is very similar, which is not surprising since they make use of the same basic
information. The pros and cons of these two methods are investigated in the
following section.
4 Experiments
4.1 Synthetic Grid Test
In order to compare the above three approximation methods, we ran a set of
experiments based on warped versions of the Cartesian grid shown in Figure 2(a).
The grid consists of 12 × 12 points in a square of dimensions 128 × 128. Call this
set of points S1. Using the technique described in Appendix A, we generated
Fig. 3. Comparison of approximation error. Mean squared error in position between
points in the target grid and corresponding points in the approximately warped reference
grid is plotted vs. percentage of randomly selected samples used. Performance
curves for each of the three methods are shown in (a) for $I_f = 0.3$ and (b) for $I_f = 0.8$.
point sets S2 and S3 by applying random TPS warps with bending energy 0.3
and 0.8, respectively; see Figure 2(b,c). We then studied the quality of each
approximation method by varying the percentage of random samples used to
estimate the (unregularized) mapping of S1 onto S2 and S3, and measuring the
mean squared error (MSE) in the estimated coordinates. The results are plotted
in Figure 3. The error bars indicate one standard deviation over 20 repeated
trials.
4.2 Approximate Principal Warps
In [3] Bookstein develops a multivariate shape analysis framework based on
eigenvectors of the bending energy matrix $L_p^{-1} K L_p^{-1} = L_p^{-1}$, which he refers to as
principal warps. Interestingly, the first 3 principal warps always have eigenvalue
zero, since any warping of three points in general position (a triangle) can be
represented by an affine transform, for which the bending energy is zero. The
shape and associated eigenvalue of the remaining principal warps lend insight
into the bending energy “cost” of a given mapping in terms of that mapping’s
projection onto the principal warps. Through the Nyström approximation in
Method 3, one can produce approximate principal warps using $\hat{L}_p^{-1}$ as follows:
$$\begin{aligned} \hat{L}_p^{-1} &= \hat{K}^{-1} + \hat{K}^{-1} P S^{-1} P^T \hat{K}^{-1} \\ &= V \Sigma^{-1} V^T + V \Sigma^{-1} V^T P S^{-1} P^T V \Sigma^{-1} V^T \\ &= V \left( \Sigma^{-1} + \Sigma^{-1} V^T P S^{-1} P^T V \Sigma^{-1} \right) V^T \\ &\stackrel{\Delta}{=} V \hat{\Lambda} V^T \end{aligned}$$
Fig. 4. Approximate principal warps for the fish shape. From left to right and top
to bottom, the surfaces are ordered by eigenvalue in increasing order. The first three
principal warps, which represent the affine component of the transformation and have
eigenvalue zero, are not shown.
where
$$\hat{\Lambda} \stackrel{\Delta}{=} \Sigma^{-1} + \Sigma^{-1} V^T P S^{-1} P^T V \Sigma^{-1} = W D W^T\,.$$
To obtain orthogonal eigenvectors we proceed as in Section 3.3 to get
$$\hat{\Lambda} = \hat{W} \hat{\Sigma} \hat{W}^T\,,$$
where $\hat{W} \stackrel{\Delta}{=} W D^{1/2} Q \hat{\Sigma}^{-1/2}$ and $Q \hat{\Sigma} Q^T$ is the diagonalization of $D^{1/2} W^T W D^{1/2}$.
Thus we can write
$$\hat{L}_p^{-1} = V \hat{W} \hat{\Sigma} \hat{W}^T V^T\,.$$
An illustration of approximate principal warps for the fish shape is shown
in Figure 4, wherein we have used m = 15 samples. As in [3], the principal
warps are visualized as continuous surfaces, where the surface is obtained by
applying a warp to the coordinates in the plane using a given eigenvector of $\hat{L}_p^{-1}$
as the nonlinear spline coefficients; the affine coordinates are set to zero. The
Fig. 5. Exact principal warps for the fish shape.
corresponding exact principal warps are shown in Figure 5. In both cases, warps
4 through 12 are shown, sorted in ascending order by eigenvalue.
Given a rank-$m$ Nyström approximation, at most $m - 3$ principal warps with
nonzero eigenvalue are available. These correspond to the principal warps at
the “low frequency” end, meaning that very localized warps, e.g. pronounced
stretching between adjacent points in the target shape, will not be captured by
the approximation.
4.3 Discussion
We now discuss the relative merits of the above three methods. From the syn-
thetic grid tests we see that Method 1, as expected, has the highest MSE. Con-
sidering that the spacing between neighboring points in the grid is about 10, it
is noteworthy, however, that all three methods achieve an MSE of less than 2
at 30% subsampling. Thus while Method 1 is not optimal in the sense of MSE,
its performance is likely to be reasonable for some applications, and it has the
advantage of being the least expensive of the three methods.
In terms of MSE, Methods 2 and 3 perform roughly the same, with Method
2 holding a slight edge, more so at 5% for the second warped grid. Method 3
has a built-in disadvantage relative to Method 2, due to the orthogonalization
(a) (b)
Fig. 6. Comparison of Method 2 (a) and 3 (b) for poorly chosen sample locations. (The
performance of Method 1 was terrible and is not shown.) Both methods perform well
considering the location of the samples. Note that the error is slightly lower for Method
3, particularly at points far away from the samples.
step; this leads to an additional loss of significant figures and a slight increase
in MSE. In this regard Method 2 is the preferred choice.
While Method 3 is comparatively expensive and has slightly higher MSE
than Method 2, it has the benefit of providing approximate eigenvectors of the
bending energy matrix. Thus with Method 3 one has the option of studying
shape transformations using principal warp analysis.
As a final note, we have observed that when the samples are chosen badly,
e.g. crowded into a small area, Method 3 performs better than Method 2. This is
illustrated in Figure 6, where all of the samples have been chosen at the back of
the tail fin. Larger displacements between corresponding points are evident near
the front of the fish for Method 2. We have also observed that the bending energy
estimate of Method 2 ($\tilde{w}^T A \tilde{w}$) exhibits higher variance than that of Method 3;
e.g. at a 20% sampling rate on the fish shapes warped using $I_f = 0.3$ over 100
trials, Method 2 estimates $I_f$ to be 0.29 with $\sigma = 0.13$ whereas Method 3 gives
0.25 with $\sigma = 0.06$. We conjecture that this advantage arises from the presence
of the approximate basis functions in the Nyström approximation, though a
rigorous explanation is not known to us.
5 Conclusion
We have discussed three approximate methods for recovering TPS mappings
between 2D point sets that greatly reduce the computational burden. An experimental
comparison of the approximation error suggests that the two methods
that use only a subset of the available correspondences but take into account the
full set of target values perform very well. Finally, we observed that the method
based on the Nyström approximation allows for principal warp analysis and performs
better than the basis-subset method when the subset of correspondences
is chosen poorly.
Acknowledgments. The authors wish to thank Charless Fowlkes, Jitendra
Malik, Andrew Ng, Lorenzo Torresani, Yair Weiss, and Alice Zheng for helpful
discussions. We would also like to thank Haili Chui and Anand Rangarajan for
useful insights and for providing the fish datasets.
Appendix: Generating Random TPS Transformations
To produce a random TPS transformation with bending energy $\nu$, first choose a
set of $p$ reference points (e.g. on a grid) and form $L_p^{-1}$. Now generate a random
vector $u$, set its last three components to zero, and normalize it. Compute the
diagonalization $L_p^{-1} = U \Lambda U^T$, with the eigenvalues sorted in descending order.
Finally, compute $w = \sqrt{\nu}\, U \Lambda^{1/2} u$. Since $I_f$ is unaffected by the affine terms,
their values are arbitrary; we set translation to $(0, 0)$ and scaling to $(1, 0)$ and
$(0, 1)$.
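The procedure can be sketched directly (our own naming; `U_kernel` is the TPS kernel and the reference points are assumed non-collinear). By the identity $L_p^{-1} K L_p^{-1} = L_p^{-1}$ of Section 4.2, the resulting coefficients satisfy $w^T K w = \nu$:

```python
import numpy as np

def U_kernel(r):
    # TPS kernel U(r) = r^2 log r, with U(0) = 0.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def random_tps_warp(pts, nu, rng):
    """Random nonlinear TPS coefficients w with bending energy w^T K w = nu."""
    p = len(pts)
    K = U_kernel(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    P = np.hstack([np.ones((p, 1)), pts])
    L = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    Lp_inv = np.linalg.inv(L)[:p, :p]
    lam, Uv = np.linalg.eigh(Lp_inv)
    lam, Uv = lam[::-1], Uv[:, ::-1]            # descending; zeros are last
    u = rng.normal(size=p)
    u[-3:] = 0.0                                 # avoid the affine null directions
    u /= np.linalg.norm(u)
    return np.sqrt(nu) * Uv @ (np.sqrt(np.clip(lam, 0.0, None)) * u)
```

One such draw per output coordinate, together with the fixed affine part, gives warps like those in Figure 2(b,c).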
References
1. C. T. H. Baker. The numerical treatment of integral equations. Oxford: Clarendon
Press, 1977.
2. S. Belongie, J. Malik, and J. Puzicha. Matching shapes. In Proc. 8th Int’l. Conf.
Computer Vision, volume 1, pages 454–461, July 2001.
3. F. L. Bookstein. Principal warps: thin-plate splines and decomposition of defor-
mations. IEEE Trans. Pattern Analysis and Machine Intelligence, 11(6):567–585,
June 1989.
4. H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. In
Proc. IEEE Conf. Comput. Vision and Pattern Recognition, pages 44–51, June
2000.
5. M.H. Davis, A. Khotanzad, D. Flamig, and S. Harms. A physics-based coordinate
transformation for 3-d image matching. IEEE Trans. Medical Imaging, 16(3):317–
328, June 1997.
6. F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks
architectures. Neural Computation, 7(2):219–269, 1995.
7. M. J. D. Powell. A thin plate spline method for mapping curves into curves in two
dimensions. In Computational Techniques and Applications (CTAC95), Melbourne,
Australia, 1995.
8. A.J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine
learning. In ICML, 2000.
9. G. Wahba. Spline Models for Observational Data. SIAM, 1990.
10. Y. Weiss. Smoothness in layers: Motion segmentation using nonparametric mixture
estimation. In Proc. IEEE Conf. Comput. Vision and Pattern Recognition, pages
520–526, 1997.
11. C. Williams and M. Seeger. Using the Nyström method to speed up kernel ma-
chines. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural
Information Processing Systems 13: Proceedings of the 2000 Conference, pages
682–688, 2001.