This document presents mean value coordinates for quad cages in 3D (QMVC). QMVC generalizes mean value coordinates (MVC) to cages with non-planar quadrilaterals. It characterizes valid non-planar quads and presents a smooth projection operator. QMVC provides linear precision, is positive, smooth, and closed-form. It is compared to existing coordinate methods like MVC, Green coordinates, and spherical barycentric coordinates. QMVC can also be combined with maximum entropy coordinates to handle invalid cages.
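QMVC itself operates on 3D quad cages, but the construction it generalizes is easy to state. Below is a minimal sketch (not the paper's method) of classic 2D mean value coordinates for a point inside a polygon, using the standard tangent-of-half-angle weights; the polygon, point, and variable names are illustrative.

```python
import numpy as np

def mean_value_coords(x, verts):
    """2D mean value coordinates of point x w.r.t. a CCW polygon."""
    n = len(verts)
    d = verts - x                      # vectors from x to each vertex
    r = np.linalg.norm(d, axis=1)      # distances to each vertex
    # signed angle alpha_i at x between (v_i - x) and (v_{i+1} - x)
    alphas = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        alphas[i] = np.arctan2(cross, d[i] @ d[j])
    # w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / |v_i - x|
    w = np.empty(n)
    for i in range(n):
        w[i] = (np.tan(alphas[i - 1] / 2) + np.tan(alphas[i] / 2)) / r[i]
    return w / w.sum()

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
x = np.array([0.3, 0.6])
lam = mean_value_coords(x, square)
print(lam.sum())       # 1.0: partition of unity
print(lam @ square)    # recovers x: linear precision
```

The two printed checks are exactly the properties (partition of unity and linear precision) that QMVC preserves when moving to non-planar quad cages.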
Recent developments in the field of reduced order modeling - and in particular, active subspace construction - have made it possible to efficiently approximate complex models by constructing low-order response surfaces based upon a small subspace of the original high-dimensional parameter space. These methods rely upon the fact that the response tends to vary more prominently in a few dominant directions defined by linear combinations of the original inputs, allowing for a rotation of the coordinate axes and a consequent transformation of the parameters. In this talk, we discuss a gradient-free active subspace algorithm that is feasible for high-dimensional parameter spaces where finite-difference techniques are impractical. We illustrate an initialized gradient-free active subspace algorithm for a neutronics example implemented with SCALE6.1.
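For context, the classical gradient-based construction that the talk's gradient-free algorithm replaces identifies the dominant directions from the eigenvectors of the expected outer product of the gradient. A minimal sketch on a toy response (all names and the test function are illustrative, not from the talk):

```python
import numpy as np

# Toy response f(x) = sin(w1·x) varies only along direction w1,
# so the active subspace should be one-dimensional.
rng = np.random.default_rng(0)
m = 10                                  # full parameter dimension
w1 = np.zeros(m); w1[0], w1[1] = 3.0, 4.0
w1 /= np.linalg.norm(w1)

def grad_f(x):
    # gradient of f(x) = sin(w1·x)
    return np.cos(w1 @ x) * w1

# Monte Carlo estimate of C = E[grad f grad f^T]
X = rng.uniform(-1, 1, size=(500, m))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / len(X)

# Dominant eigenvectors of C span the active subspace
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
print(eigval[:3])                # sharp drop after the first eigenvalue
print(abs(eigvec[:, 0] @ w1))    # leading direction recovers w1
```

The eigenvalue gap is what licenses the rotation of the coordinate axes described above; the gradient-free variant must estimate the same subspace without `grad_f`.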
PR-305: Exploring Simple Siamese Representation Learning (Sungchul Kim)
SimSiam is a self-supervised learning method that uses a Siamese network with stop-gradient to learn representations from unlabeled data. The paper finds that stop-gradient plays an essential role in preventing the model from collapsing to a degenerate solution. Additionally, it is hypothesized that SimSiam implicitly optimizes an Expectation-Maximization-like algorithm that alternates between updating the network parameters and assigning representations to samples in a manner analogous to k-means clustering.
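To make the role of stop-gradient concrete, here is a minimal sketch of SimSiam's symmetric negative cosine loss. It only illustrates the loss value; the stop-gradient is indicated by which argument would be detached from backpropagation, and the embeddings are random stand-ins rather than network outputs.

```python
import numpy as np

def neg_cos(p, z):
    """SimSiam loss term D(p, z) = -cos(p, z); in training, z comes from
    the stop-gradient branch, i.e. it is treated as a constant target."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(p @ z)

# Two augmented views yield projections (z1, z2) and predictions (p1, p2);
# the symmetric loss pairs each prediction with the *detached* projection
# of the other view.
rng = np.random.default_rng(1)
z1, p1, z2, p2 = rng.normal(size=(4, 128))
loss = 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)
print(loss)   # bounded in [-1, 1]; -1 when predictions align with targets
```

Under the EM-like reading in the abstract, treating `z` as constant corresponds to fixing the "assignment" variables while the network parameters are updated.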
We present a deep architecture for dense semantic correspondence, called pyramidal affine regression networks (PARN), that estimates locally-varying affine transformation fields across images. To deal with intra-class appearance and shape variations that commonly exist among different instances within the same object category, we leverage a pyramidal model where affine transformation fields are progressively estimated in a coarse-to-fine manner, so that the smoothness constraint is naturally imposed within the deep network. PARN estimates residual affine transformations at each level and composes them to obtain the final affine transformations. Furthermore, to overcome the limitation of insufficient training data for semantic correspondence, we propose a novel weakly-supervised training scheme that generates progressive supervision by leveraging correspondence consistency across image pairs. Our method is fully learnable in an end-to-end manner and does not require quantizing the infinite, continuous space of affine transformation fields.
1. The document discusses barriers to scaling electronic structure methods to large systems, such as the inability of sparse matrix multiplication kernels to access strong parallel scaling and entrenched data structures that limit innovation.
2. It proposes a fast, generic, and data local N-body solver approach using new mathematics that is not constrained by row-column data structures and allows a single programming model.
3. Key aspects of this approach include exploiting locality in higher dimensional product volumes through techniques like occlusion-culling, resolving identity iteratively to compress matrices by orders of magnitude, and developing optimized sparse matrix multiplication kernels.
1. The document proposes an algorithmic framework for large-scale circuit simulation using exponential integrators. It uses exponential Rosenbrock methods and an inverted Krylov subspace approach to efficiently compute matrix exponential-vector products, solving the circuit equations explicitly without Newton-Raphson iterations.
2. The framework was shown to accurately simulate benchmark circuits while achieving speedups over traditional approaches. It can handle large-scale, strongly coupled circuits that traditional methods have difficulty with.
3. Future work includes exploring parallelization opportunities to further accelerate the method using multicore/many-core systems and developing additional tools based on the proposed derivatives-based approach.
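The kernel at the heart of such exponential integrators is the matrix exponential-vector product. A minimal Arnoldi-projection sketch is shown below (a standard Krylov approximation, not the paper's inverted-Krylov variant); the test matrix and dimensions are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expmv(A, v, h=1.0, m=20):
    """Approximate exp(h*A) @ v via an m-dimensional Krylov projection:
    exp(hA) v ~= beta * V_m @ expm(h*H_m) @ e1, without forming exp(hA)."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(h * H[:m, :m]) @ e1)

rng = np.random.default_rng(2)
A = -np.diag(np.arange(1., 101.)) + 0.1 * rng.normal(size=(100, 100))
v = rng.normal(size=100)
approx = arnoldi_expmv(A, v, h=0.01, m=30)
exact = expm(0.01 * A) @ v
print(np.linalg.norm(approx - exact))   # small Krylov projection error
```

The projection needs only matrix-vector products with `A`, which is why the approach scales to large, sparse circuit matrices where forming `exp(hA)` is infeasible.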
Multi-class Classification on Riemannian Manifolds for Video Surveillance (Diego Tosato)
In video surveillance, classification of visual data can be very hard due to the scarce resolution and the noise characterizing the sensor data. In this paper, we propose a novel feature, the ARray of COvariances (ARCO), and a multi-class classification framework operating on Riemannian manifolds. ARCO is composed of a structure of covariance matrices of image features, able to extract information from data at prohibitively low resolutions. The proposed classification framework instantiates a new multi-class boosting method working on the manifold of symmetric positive definite d×d (covariance) matrices. As practical applications, we consider different surveillance tasks, such as head pose classification and pedestrian detection, achieving new state-of-the-art performance on standard datasets.
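The building block of ARCO is the region covariance descriptor: a small SPD matrix of per-pixel features over an image region. A minimal sketch (the feature choice and image are illustrative, not the paper's exact feature set):

```python
import numpy as np

# Region covariance descriptor: the d x d covariance of per-pixel features,
# here d = 5 with features (x, y, intensity, grad_x, grad_y).
rng = np.random.default_rng(4)
H, W = 16, 16
img = rng.random((H, W))
ys, xs = np.mgrid[0:H, 0:W]
gy, gx = np.gradient(img)
F = np.stack([xs, ys, img, gx, gy], axis=-1).reshape(-1, 5)
C = np.cov(F, rowvar=False)          # 5x5 symmetric positive semidefinite
print(C.shape)
print(np.linalg.eigvalsh(C).min())   # nonnegative spectrum (up to round-off)
```

Because such descriptors live on the manifold of SPD matrices rather than in a vector space, the boosting machinery in the paper must operate with the manifold's own geometry instead of ordinary Euclidean operations.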
The document proposes a framework for recognizing actions across cameras by exploring correlation subspaces. It first learns a joint subspace using Canonical Correlation Analysis (CCA) on unlabeled multi-view data. It then trains a Support Vector Machine (SVM) in this subspace with a novel correlation regularizer that favors dimensions with higher correlation between views, improving generalization to target views. Experiments on the IXMAS dataset show the method outperforms baselines, with the regularizer successfully suppressing weights for less correlated dimensions.
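The CCA step above can be sketched compactly: whiten each view and take the SVD of the cross-covariance; the singular values are the canonical correlations that the regularizer then weights. A minimal numpy version on synthetic two-view data (all names and the data model are illustrative):

```python
import numpy as np

def cca(X, Y, eps=1e-8):
    """Canonical correlations between two views via whitening + SVD."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = len(X)
    Sxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1 / np.sqrt(w)) @ V.T
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)   # correlations, descending

# Two views driven by a shared 2-dimensional latent signal plus noise.
rng = np.random.default_rng(5)
z = rng.normal(size=(500, 2))
X = z @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(500, 6))
Y = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
corr = cca(X, Y)
print(corr)   # two large correlations (shared latent), rest near zero
```

The gap between the shared-signal correlations and the noise correlations is exactly what the paper's correlation regularizer exploits: SVM weights on weakly correlated dimensions are suppressed because those dimensions transfer poorly across views.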
This document provides an introduction to model order reduction techniques. It discusses how model order reduction works by projecting large dynamical systems onto dominant subspaces to obtain reduced order models. It covers motivation for model order reduction due to rising complexity in mathematical models. It also describes moment matching and Krylov subspace methods, which are different approaches for finding projection matrices. Krylov subspace methods allow for implicit moment matching through Arnoldi iteration and have advantages in terms of computational cost and memory requirements over other methods.
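The moment-matching idea can be sketched in a few lines: an orthonormal basis of the Krylov space built from A⁻¹b yields, by Galerkin projection, a reduced model whose transfer function matches the leading moments of the full one at s = 0. The system below is a random illustrative example, not from the document.

```python
import numpy as np

# Reduce H(s) = c^T (s I - A)^{-1} b by projecting onto an orthonormal
# basis V of the Krylov space span{A^{-1} b, A^{-2} b, ..., A^{-m} b}.
rng = np.random.default_rng(3)
n, m = 200, 8
A = -2 * np.eye(n) + 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)  # stable
b, c = rng.normal(size=n), rng.normal(size=n)

K = np.empty((n, m))
x = np.linalg.solve(A, b)
for k in range(m):
    K[:, k] = x
    x = np.linalg.solve(A, x)
V, _ = np.linalg.qr(K)                  # stable orthonormalization

Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c   # reduced (m x m) model

def H(s, A, b, c):
    return c @ np.linalg.solve(s * np.eye(len(b)) - A, b)

print(H(0.0, A, b, c), H(0.0, Ar, br, cr))   # matched moment at s = 0
```

In practice the basis is built implicitly by Arnoldi iteration, as the document notes, which avoids forming the increasingly ill-conditioned columns of `K` explicitly; the QR step here plays that stabilizing role in miniature.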
This document discusses techniques for fitting curves to data, including least squares regression and interpolation. It covers fitting straight lines to data using linear regression as well as fitting polynomials using polynomial regression. Examples are provided to demonstrate finding the linear regression line that best fits a data set and determining its standard error and correlation coefficient. The document also discusses extending linear regression to fit higher order polynomials to data and finding the coefficients by minimizing the residual sum of squares. Linearization techniques are introduced to express nonlinear data in a form compatible with linear regression.
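The straight-line fit, standard error, and correlation coefficient described above can be computed directly from the normal equations. The sketch below uses a small data set of the kind common in numerical-methods textbooks; the numbers are illustrative, not taken from the document.

```python
import numpy as np

# Fit y ~ a0 + a1*x by least squares, then report the standard error of
# the estimate and the correlation coefficient.
x = np.array([1., 2., 3., 4., 5., 6., 7.])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])
n = len(x)

# Normal equations for the slope and intercept
a1 = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x**2).sum() - x.sum()**2)
a0 = y.mean() - a1 * x.mean()

Sr = ((y - a0 - a1 * x) ** 2).sum()   # residual sum of squares
St = ((y - y.mean()) ** 2).sum()      # total sum of squares
s_yx = np.sqrt(Sr / (n - 2))          # standard error of the estimate
r = np.sqrt((St - Sr) / St)           # correlation coefficient

print(a0, a1)     # ~0.0714, ~0.8393
print(s_yx, r)    # ~0.7735, ~0.9318
```

Polynomial regression extends this by adding columns x², x³, ... to the design matrix, and the linearization techniques mentioned above (e.g. taking logarithms of exponential data) reduce nonlinear models to this same linear machinery.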
The document discusses wireframe modeling in CAD/CAM. A wireframe model represents an object with edges but no surfaces. It consists of geometric data defining point positions and connectivity data relating points as edges. Basic entities include points, lines, arcs, and circles. Analytic entities use mathematical equations while synthetic entities combine multiple curve segments. Bezier and B-spline curves allow more flexible shape control compared to analytic curves. The document also covers parametric representations of curves, properties of Bezier curves, and Hermite splines.
Algorithmic Techniques for Parametric Model Recovery (CurvSurf)
A complete description of algorithmic techniques for automatic feature extraction from point clouds. Orthogonal distance fitting, a form of maximum likelihood estimation, plays the main role; differential geometry determines the type of object surface.
This document provides an overview of MATLAB, including the MATLAB desktop, variables, vectors, matrices, matrix operations, array operations, built-in functions, data visualization, flow control using if and for statements, and user-defined functions. It introduces key MATLAB concepts like the command window, workspace, and editor. It also demonstrates how to create and manipulate variables, vectors, matrices, and plots in MATLAB.
Marwan Mattar presented his PhD thesis defense on unsupervised joint alignment, clustering, and feature learning. His research goal was to develop an unsupervised data set-agnostic processing module that includes alignment, clustering, and feature learning. He developed techniques for joint alignment of data using transformations, clustering data in an unsupervised manner, and learning features from the data. His techniques were shown to outperform other methods on tasks involving time series classification, face verification, and clustering of handwritten digits and ECG heart data.
This document outlines the key steps in a typical computational fluid dynamics (CFD) analysis:
1) Define modeling goals and assumptions
2) Identify the domain to be modeled
3) Create a geometric model of the domain
4) Design and create a mesh of the domain
5) Set up the solver with appropriate physical models and boundary conditions
6) Compute the solution by solving the governing equations
7) Examine the results to validate the solution and extract useful data
Master Thesis: Conformal multi-material mesh generation from labelled medical... (Christian Kehl)
An important step in orthopaedic pre-operative planning is the generation of accurate volume meshes from segmented volume images. These meshes are used in patient-specific, bio-mechanical finite element simulations to optimize the positioning and design of implants. The development of accurate, multi-material volume meshing methods for medical applications is an active and interdisciplinary field of research. Several methods proposed in recent years claim to perform the task accurately, each with its advantages and disadvantages, and the approaches are diverse. The questions are: Which approach is the most suitable? How do we evaluate the quality of such methods? What criteria can be applied to measure the quality of a multi-labelled volume mesh? And which criteria have the most impact on the subsequent simulation, so that stress calculations on the implant are realistic and correct?
These are the basic research questions that are discussed in this work.
This document discusses best practices for conducting and reporting on computational fluid dynamics (CFD) analyses to achieve credible and confident results. It emphasizes the importance of verification and validation to demonstrate acceptable levels of error and uncertainty. It provides guidance on quantifying various sources of error in CFD simulations and outlines recommended steps for grid convergence studies, reporting results, and validating simulations against experimental data.
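A standard quantitative tool in the grid convergence studies mentioned above is Richardson extrapolation together with the grid convergence index (GCI). The sketch below uses illustrative solution values on three systematically refined grids; the numbers are not from the document.

```python
import numpy as np

# Three grids with constant refinement ratio r: f1 fine, f2 medium, f3 coarse.
f1, f2, f3 = 0.9713, 0.9702, 0.9658   # illustrative solution values
r, Fs = 2.0, 1.25                     # refinement ratio, safety factor

# Observed order of convergence from the three solutions
p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)

# Richardson extrapolation to a grid-independent estimate
f_exact = f1 + (f1 - f2) / (r**p - 1)

# Grid convergence index: relative uncertainty band on the fine-grid value
gci_fine = Fs * abs((f2 - f1) / f1) / (r**p - 1)

print(p)          # observed order (here 2: the values were chosen for that)
print(f_exact)    # extrapolated solution
print(gci_fine)   # fractional uncertainty on f1
```

Reporting `p`, the extrapolated value, and the GCI alongside the fine-grid result is one concrete way to demonstrate the "acceptable levels of error and uncertainty" the best practices call for.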
Curves play a significant role in CAD modeling, especially for generating wireframe models. There are three main types of computer-aided design models: wireframe, surface, and solid. Wireframe models use only points and curves to represent an object in the simplest form. Curves can be classified as analytical, interpolated, or approximated. Analytical curves have fixed mathematical equations, interpolated curves pass through given data points in a fixed form, and approximated curves provide the most flexibility in complex shape creation. Parametric equations are preferred over non-parametric equations for representing curves in CAD programs. Common analytical curves include lines, circles, ellipses, parabolas, and hyperbolas. Interpolated curves can
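Among the approximated curves mentioned above, Bézier curves are the canonical example, and de Casteljau's algorithm evaluates them by repeated linear interpolation of the control polygon. A minimal sketch with an illustrative cubic control polygon:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeatedly interpolating
    adjacent control points until one point remains."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

ctrl = [(0., 0.), (1., 2.), (3., 3.), (4., 0.)]   # cubic control polygon
print(de_casteljau(ctrl, 0.0))   # (0, 0): curve interpolates the first point
print(de_casteljau(ctrl, 1.0))   # (4, 0): ... and the last point
print(de_casteljau(ctrl, 0.5))   # interior point pulled toward the polygon
```

This illustrates the parametric representation the document favors: the curve is a map t ∈ [0, 1] → (x(t), y(t)), so vertical tangents and multi-valued shapes pose no difficulty, unlike non-parametric forms y = f(x).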
The document discusses using the differential quadrature method to analyze buckling in thin plates. It provides an overview of buckling and introduces the differential quadrature method as an efficient numerical technique. The method transforms differential equations into algebraic equations using sampling points. The document applies the method to analyze buckling in isotropic rectangular plates with different boundary conditions and aspect ratios. Results show the differential quadrature method provides accurate results using fewer grid points compared to other methods like finite element analysis.
This document summarizes a research project on real time pose control of a 6-RSS parallel robot. The project involved obtaining the exact pose of the end-effector through kinematics modeling of the parallel robot, dynamic modeling of the actuators, and designing a controller for real time pose control. Kinematics were modeled using both analytical inverse and forward kinematics methods. Actuator dynamics were modeled linearly and nonlinearly, and parameters were identified using genetic algorithms and multi-objective optimization. Real time pose control was tested in simulation using open-loop and closed-loop path tracking with a PID controller.
Performance Benchmarking of the R Programming Environment on the Stampede 1.5... (James McCombs)
We present performance results obtained with a new single-node performance benchmark of the R programming environment on the many-core Xeon Phi Knights Landing and standard Xeon-based compute nodes of the Stampede supercomputer cluster at the Texas Advanced Computing Center. The benchmark consists of microbenchmarks of linear algebra kernels and machine learning functionality that includes clustering and neural network training from the R distribution. The standard Xeon-based nodes outperformed their Xeon Phi counterparts for matrices of small to medium dimensions, performing approximately twice as fast for most of the linear algebra microbenchmarks. For matrices of medium to large dimensions, the Knights Landing nodes were competitive with or outperformed the standard Xeon-based nodes on most of the linear algebra microbenchmarks, executing as much as five times faster. For the clustering and neural network training microbenchmarks, the standard Xeon-based nodes performed up to four times faster than their Xeon Phi counterparts for many large data sets, indicating that commonly used R packages may need to be reengineered to take advantage of existing optimized, scalable kernels.
Approaches to formal verification of AMS design (Ambuj Mishra)
Master's thesis on approaches to formal verification of analog and mixed-signal designs, presented in June 2016 at the International Institute of Information Technology, Bangalore (IIITB).
This presentation covers the major details of the micronucleus test: its significance and the assays used to conduct it. The test detects micronucleus formation inside the cells of nearly every multicellular organism; micronuclei form during chromosomal separation at metaphase.
When I was asked to give a companion lecture in support of 'The Philosophy of Science' (https://shorturl.at/4pUXz), I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long-standing, ongoing scientific development as an exemplar: the ever-evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... (Travis Hills MN)
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
This presentation gives a brief overview of the structural and functional attributes of nucleotides and of the structure and function of genetic materials, along with the impact of UV rays and pH upon them.
The binding of cosmological structures by massless topological defects (Sérgio Sacani)
Assuming spherical symmetry and a weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or a modified gravity theory is mitigated, at least in part.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters
represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions
among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate
the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars.
The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically,
the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec.
Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within
and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation
were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a
photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution,
with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known
massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
Phenomics assisted breeding in crop improvementIshaGoswami9
The world population is increasing and will reach about 9 billion by 2050; with climate change added, it will be difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate
change, and a growing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
complex characteristics controlled by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
15. WHEN CONSIDERING QUADS
Choice of basis functions:
• Bilinear coordinates
Valid quads:
• Extend « convexity » to 3D
• Our validity predicate: « A quad is valid if convex when projected on any of the 4 planes spanned by its vertices »
17. SPECTRAL ANALYSIS
• Rank = 2: the point lies on a planar quad → return the quad's basis functions (MVC are an interpolant, see paper)
• Rank = 3: the point may lie on the (non-planar) bilinear quad → return the quad's basis functions (MVC are an interpolant, see paper)
• Otherwise: the solution is known up to a component along the kernel
18. ANALYSIS OF THE LAST CASE
Linear precision equation (Eq (1)): A_q w_q = m_q
Solution: w_q = w_q* + λ u_q, where
• w_q* is the least-norm solution to Eq (1)
• u_q is the unique vector in the kernel of A_q
• λ, the component along u_q, is what we seek
Closed-form expressions exist for w_q* and u_q.
19. « APPROPRIATE » VALUES FOR λ
(Eq (1)): A_q w_q = m_q
Desired properties of λ:
• Linear precision: any choice of λ (even random values) satisfies Eq (1), no matter what the input is
• C∞ coordinates: λ must be C∞
• Interpolation: λ is constrained near the cage: lim_{η→ξ} w_q(η) = w_q(ξ)
22. SAMPLING CONSTRAINTS
Desired properties of λ:
• Interpolation: we make the sampling dependent on the evaluation point η
• Smooth coordinates: we make the sampling a smooth function of the evaluation point η
23. SAMPLING THROUGH PROJECTIONS
Sampling:
• We project η on the quad
• We construct a sampling centered on this point in the (u,v) space
• The regularity of the sampling is inherited from the regularity of the projection
24. SAMPLING THROUGH PROJECTIONS
Two-step projection:
• Challenge: project on a non-convex geometry
• Step 1: Project inside the convex hull of the 4 points (tetrahedron), using absolute MVC
• Step 2: Project the constructed point by intersecting the line directed by the quad's mean vector with the bilinear quad
• Our validity predicate ensures that this projection is valid (see paper for details)
25. TO RECAP’: QMVC IN A NUTSHELL
Linear precision equation (Eq (1)): A_q w_q = m_q
Quad’s unnormalized coordinates: w_q = w_q* + λ u_q
• w_q*: minimal-norm solution
• u_q: kernel vector
33. COMPARISON WITH SMVC
[Langer et al. 2006] Spherical barycentric coordinates
Why SMVC?
• MVC-based
• Can use any (convex) planar n-gon: more general than us on that account, but the planarity requirement is sometimes too restrictive for modelling
44. CONCLUSION
Contributions:
• Extension of MVC to cages featuring non-planar quads
• Characterization of valid non-planar quads for cage modelling
• A smooth projection operator on non-planar quads
Perspectives:
• Sampling-free λ construction
• Better priors for positive QMVC using the MEC approach
• Green coordinates for quad cages? (quasi-conformal)
• Non-planar n-gons?
45. THANK YOU!
Source code available at
perso.telecom-paristech.fr/boubek/papers/QMVC/
Contains our implementation of:
- QMVC
- MVC
- SMVC
- GC
- MEC
- Demo viewer for comparison
Different control structures are used to animate 3D shapes, depending on the type of intended deformation.
In this talk we will focus on cages, which are typically used to perform large scale deformations featuring anisotropic stretch and offer precise volume control.
There are a lot of coordinates in the literature, each coming with various properties.
The first one is mandatory and allows one to recover the input shape from the input cage, which means that the encoding is valid.
Positivity results in more natural deformations, as translating parts of the cage in one direction does not result in mesh translation in the opposite direction,
The coordinates are required to be smooth so that the resulting deformation is smooth,
The resulting deformation should be shape-aware,
Ideally the coordinates should have a closed-form expression and be computed efficiently.
Optionally, coordinates can lead to interpolation on the cage surface, for precise control.
What motivates our work is a very simple observation:
If you pay attention to cages designed by artists, they feature a lot of quads; that is because most artists model cages with a box-modeling strategy, and extrusion of edges results in quads.
Quads also allow better conforming to the large features of the shape, and they are also easier to deform than triangles.
And that is also the conclusion reached by recent work in automatic cage design.
While artists design triquad cages for these reasons,
Most coordinates require the cage to be triangulated for their computation,
Resulting in asymmetrical artifacts.
In this work we present coordinates based on mean value coordinates that support non-planar anisotropic quads.
Let’s have a look at mean value coordinates
A cage is provided to the user, with a function living on the cage surface.
MVC allow this function to be extrapolated to any point in 3D space through a simple integral.
Given an evaluation point, we consider the unit sphere centered on it.
A spherical averaging of the function is then performed, while enforcing nearby cage vertices to have a strong influence.
By considering that both the cage geometry and embedded function are linear,
The extrapolated function can be expressed as a linear combination of the values defined at the cage vertices,
The barycentric weights being the mean value coordinates of the point.
This definition implies directly linear precision,
Interpolation on the cage,
And smoothness of the resulting function.
To compute mean value coordinates,
One has to compute the spherical integral
of each vertex basis function on each cage facet
Sum the contributions of all adjacent facets to compute the unnormalized coordinates
And finally normalize them to obtain the MVC.
In the end, we see that, in order to compute MVC for any type of linear geometry, one only has to know how to compute the weighted integral of the basis function on a spherical facet.
On triangle cages,
Ju and colleagues noted that summing up the unnormalized coordinates against the vectors starting from the evaluation point and ending at the triangle corners resulted in a very simple quantity, called the mean vector of the face, which is simply the integral of the unit normal on the spherical triangle in red.
By using this expression, the unnormalized weights at the 3 corners of the triangle can be obtained jointly,
since the resulting matrix expression is full rank (in that case, we have three equations, and three unknowns).
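This construction can be sketched numerically as follows (a NumPy sketch; `face_mean_vector` and `face_weights` are illustrative names, not the authors' code). Each spherical triangle's mean vector is half the sum of the edge arc angles times the edge-plane normals, and the 3×3 solve gives the three unnormalized weights jointly. Summed over the faces of a closed cage, the weights reproduce the evaluation point:

```python
import numpy as np

def face_mean_vector(eta, tri):
    """Integral of the unit normal over the spherical projection of a triangle."""
    u = [(p - eta) / np.linalg.norm(p - eta) for p in tri]
    m = np.zeros(3)
    for i in range(3):
        a, b = u[i], u[(i + 1) % 3]
        c = np.cross(a, b)
        theta = np.arctan2(np.linalg.norm(c), np.dot(a, b))  # edge arc angle
        m += 0.5 * theta * c / np.linalg.norm(c)
    return m

def face_weights(eta, tri):
    """Unnormalized weights of the 3 corners: solve sum_i w_i (p_i - eta) = m."""
    A = np.column_stack([p - eta for p in tri])  # 3 equations, 3 unknowns
    return np.linalg.solve(A, face_mean_vector(eta, tri))

# Sanity check on a closed cage (a tetrahedron): the accumulated weights
# satisfy sum_i w_i (p_i - eta) = 0, i.e. linear precision.
V = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # consistent orientation
eta = np.array([0.2, 0.2, 0.2])
w = np.zeros(4)
for f in faces:
    wf = face_weights(eta, [V[i] for i in f])
    for k, i in enumerate(f):
        w[i] += wf[k]
assert np.allclose(w @ (V - eta), 0, atol=1e-8)
```

The final check works for any closed, consistently oriented cage, since the per-face mean vectors of opposite directed edges cancel in the sum.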
Before going through the derivation for quads, we need to go over a few differences that exist with the simpler triangle case.
The first thing is that there exists an infinity of choices for linear basis functions on a quad, and the choice of basis functions will impact the behavior of the resulting coordinates.
In our work, we consider bilinear coordinates, which are smooth and are commonly used in Graphics.
The second point is that it is not always possible to consider just any quad for cage modelling.
Indeed, for example, consider this planar non-convex quad in red.
If we encode MVC with a cage featuring this quad, and we deform the cage to match the convex quad in blue, we face a contradiction.
As we want to obtain interpolation on the cage facets, this point should actually be deformed to these two different points when deforming the encoding non-convex quad, which is impossible.
In our work, we extend this notion of convexity to 3D, by considering that a quad is valid if convex when projected on any of the 4 planes spanned by its vertices.
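As an illustration, this predicate can be tested numerically as follows (a NumPy sketch with our own helper names, reading the "4 planes" as the planes of the 4 vertex triples): project the quad onto each such plane and check 2D convexity of the projected polygon.

```python
import numpy as np

def _convex_when_projected(quad, normal):
    """Project the 4 points along `normal` and test 2D convexity of the result."""
    n = normal / np.linalg.norm(normal)
    a = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(a) < 1e-8:            # normal parallel to the x-axis
        a = np.cross(n, [0.0, 1.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(n, a)                      # (a, b): 2D basis of the plane
    pts = [(p @ a, p @ b) for p in quad]
    signs = []
    for i in range(4):
        (x0, y0), (x1, y1), (x2, y2) = pts[i], pts[(i+1) % 4], pts[(i+2) % 4]
        signs.append((x1-x0)*(y2-y1) - (y1-y0)*(x2-x1))  # turn direction
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def quad_is_valid(quad):
    """Valid if convex when projected on the plane of each vertex triple."""
    for skip in range(4):
        tri = [quad[j] for j in range(4) if j != skip]
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        if not _convex_when_projected(quad, n):
            return False
    return True

square = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
dented = np.array([[0., 0, 0], [1, 0, 0], [0.25, 0.25, 0], [0, 1, 0]])
assert quad_is_valid(square)        # convex planar quad: valid
assert not quad_is_valid(dented)    # non-convex planar quad: rejected
```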
Now going back to the computation,
We can still compute the mean vector of the quad,
And relate the 4 unnormalized weights through the same matrix expression,
But unfortunately we cannot directly obtain the 4 unnormalized weights, as the matrix has a rank that is at most 3.
We can actually characterize all situations depending on the rank of the matrix A_q
If the rank equals 2, we know that the point lies on a planar quad, and we can simply return the quad’s basis functions.
When the rank is 3,
The point might still be lying on a non-planar bilinear quad, which is rather simple to test.
Otherwise, we know the solution, up to a component along the kernel of A_q.
More precisely,
The solution is given by this expression,
Which involves the least-norm solution to Equation 1,
And the unique vector in the kernel of A_q.
All we are missing is the component along this 4-dimensional vector, component that we call here the lambda coordinate.
Note that we expressed these quantities using the Singular Value Decomposition of the matrix A_q, but that it is not actually needed, and these quantities have simple geometric closed-form expressions.
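In code, the decomposition reads as follows (a NumPy sketch with a made-up non-planar quad and a stand-in right-hand side m; in QMVC both come from the actual mean-vector integral). The pseudoinverse gives the least-norm solution, the last right singular vector spans the kernel, and any λ satisfies the linear precision equation:

```python
import numpy as np

eta = np.array([0.3, 0.2, 0.1])                  # evaluation point
quad = np.array([[0., 0, 0], [1, 0, 0.2], [1, 1, -0.1], [0, 1, 0.3]])
A = (quad - eta).T                               # 3x4: columns are p_i - eta
m = np.array([0.4, -0.1, 0.25])                  # stand-in for the mean vector

w_star = np.linalg.pinv(A) @ m                   # least-norm solution of A w = m
u_ker = np.linalg.svd(A)[2][-1]                  # unique unit kernel vector
assert np.allclose(A @ u_ker, 0, atol=1e-12)     # u_ker is in the kernel of A
assert abs(w_star @ u_ker) < 1e-12               # w* is orthogonal to the kernel

# Any choice of lambda yields a valid solution of the linear precision equation.
for lam in (0.0, -2.5, 7.0):
    assert np.allclose(A @ (w_star + lam * u_ker), m)
```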
Let’s see what important properties should be verified by the lambda coordinate.
First, any choice will lead to linear precision, because it is the component along the kernel of the linear precision equation.
Which means that we can give absolutely any value to this lambda coordinate, and we will obtain valid coordinates.
Second, the resulting coordinates should be smooth, so the lambda coordinate should be smooth as well.
Finally, we wish to obtain interpolation on the cage facets, which constrains the lambda coordinate near the cage.
We make the simple observation that,
If we knew the ground truth weight vector w_q,
We could obtain the ground truth lambda coordinate using a simple projection along the kernel vector.
We are going to compute a smooth approximation of lambda, by first
Computing an approximate weight vector omega_q, using a Riemann summation in place of the continuous integrals,
And we will project this approximate weight vector along the kernel vector.
Note that, we added this correction term,
That corresponds to scaling the resulting approximation so that it matches the component along the least-norm solution vector, for which we know the ground truth.
The underlying assumption is that the errors we make when approximating the weights are distributed similarly along all components of the spectrum of A_q.
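To make this concrete, here is one plausible numerical reading of the norm correction (our sketch, not necessarily the paper's exact formula): rescale the approximate weight vector omega_q so that its component along the least-norm direction matches the known w_q*, then read off the kernel component. With synthetic vectors where the approximation error is exactly a uniform scale, this recovers λ exactly:

```python
import numpy as np

# Synthetic setup: a least-norm solution w_star and a unit kernel vector u,
# orthogonal to each other (as in the QMVC decomposition).
w_star = np.array([0.6, -0.2, 0.3, 0.1])
u = np.array([0.5, 0.5, -0.5, -0.5])
assert abs(u @ w_star) < 1e-12 and abs(u @ u - 1) < 1e-12

lam_true = 1.7
omega = 0.8 * (w_star + lam_true * u)   # approximate weights, off by a scale

# Norm correction: scale omega so its component along w_star matches w_star,
# then project onto the kernel direction.
scale = (w_star @ w_star) / (w_star @ omega)
lam = scale * (u @ omega)
assert abs(lam - lam_true) < 1e-12
```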
This experiment validates our assumption.
We encoded a straight box within a straight box-cage, and we twisted the box-cage to observe the resulting deformations.
On the left, are indicated the sampling size that were used for our approximation.
When no norm correction is used, we obtain coordinates that are still smooth, but far from what we want.
If we increase the sampling, we obtain better results
And eventually it converges to the ground truth solution with a lot of samples
When we compare to the results obtained with the norm correction, as you can see, even with a sampling of 3 times 3 per large quad, we obtain a solution that matches almost exactly the solution obtained with more than 10 thousand samples per quad.
We cannot use just any sampling for our purpose,
And if we look back at the properties that we want to obtain for our coordinates,
We want to obtain interpolation on the cage.
This forces us to make the sampling dependent on the evaluation point eta, so that the sampling results in a Dirac distribution any time eta belongs to the bilinear quad.
As we want to obtain smooth coordinates, we further need to make this sampling a smooth function of the evaluation point eta.
We proceed as follows:
We first project eta on the bilinear quad
And we then construct a sampling centered on this point.
The regularity of the sampling will be directly given by the regularity of the projection operator that we use.
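For reference, the bilinear quad and a sampling centered on a projected parameter point can be sketched as follows (the grid layout, its extent, and the function names are illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def bilinear(quad, u, v):
    """Point on the bilinear quad at parameters (u, v) in [0,1]^2;
    corners p0..p3 are assumed ordered around the quad."""
    p0, p1, p2, p3 = quad
    return (1-u)*(1-v)*p0 + u*(1-v)*p1 + u*v*p2 + (1-u)*v*p3

def centered_samples(u0, v0, n=3, radius=0.25):
    """n x n grid of (u, v) samples centered on (u0, v0), clamped to the domain.
    Since (u0, v0) varies smoothly with the evaluation point, so do the samples."""
    offs = np.linspace(-radius, radius, n)
    return [(np.clip(u0 + du, 0, 1), np.clip(v0 + dv, 0, 1))
            for du in offs for dv in offs]

quad = np.array([[0., 0, 0], [1, 0, 0.2], [1, 1, 0], [0, 1, -0.2]])
assert np.allclose(bilinear(quad, 0, 0), quad[0])   # corners are interpolated
assert np.allclose(bilinear(quad, 1, 1), quad[2])
samples = centered_samples(0.5, 0.5)
assert len(samples) == 9 and samples[4] == (0.5, 0.5)
```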
The main challenge here is therefore to come up with an operator that can project any 3D point on a non-convex object, such as the bilinear quad.
We proceed in two steps,
We project on the convex hull of the 4 points using the absolute value of the MVC with respect to this tetrahedron, this results in a point that is inside the tetrahedron, in blue.
At this point, we are close enough to the bilinear quad to define a proper projection, by computing the quad’s mean vector, and intersecting the resulting line with the bilinear quad.
The validity of this projection is ensured by our validity predicate for the quads!
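Step 1 of this projection can be sketched concretely (a NumPy sketch with our own helper names): compute the MVC weights of η with respect to the tetrahedron formed by the quad's 4 vertices via the per-face mean vectors, take absolute values, and normalize. The positive, normalized coefficients guarantee a point inside the convex hull:

```python
import numpy as np

def face_weights(eta, tri):
    """Unnormalized MVC weights of a triangle's corners, via its mean vector."""
    u = [(p - eta) / np.linalg.norm(p - eta) for p in tri]
    m = np.zeros(3)
    for i in range(3):
        a, b = u[i], u[(i + 1) % 3]
        c = np.cross(a, b)
        theta = np.arctan2(np.linalg.norm(c), np.dot(a, b))
        m += 0.5 * theta * c / np.linalg.norm(c)
    A = np.column_stack([p - eta for p in tri])
    return np.linalg.solve(A, m)

def project_into_hull(eta, tet):
    """Step 1: absolute MVC w.r.t. the tetrahedron of the quad's 4 vertices."""
    w = np.zeros(4)
    for f in [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]:
        wf = face_weights(eta, [tet[i] for i in f])
        for k, i in enumerate(f):
            w[i] += wf[k]
    lam = np.abs(w) / np.abs(w).sum()   # positive, sums to 1
    return lam @ tet, lam

tet = np.array([[0., 0, 0], [1, 0, 0.2], [1, 1, -0.1], [0, 1, 0.3]])
point, lam = project_into_hull(np.array([2.0, 2.0, 2.0]), tet)
assert np.all(lam >= 0) and abs(lam.sum() - 1) < 1e-9   # point is in the hull
```

Step 2 (intersecting the line directed by the quad's mean vector with the bilinear quad) then starts from `point`, which is already close enough to the quad for the intersection to be well defined.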