In this talk, I give a gentle introduction to geometric and topological data analysis and then segue into some natural questions that arise when one combines the topological view with the perhaps better-studied linear algebraic view.
Few More Results on Sum Labeling of Split Graphs (ijcoa)
A sum labeling is a mapping λ from the vertices of G into the positive integers such that, for any two vertices u, v ∈ V(G) with labels λ(u) and λ(v), respectively, uv is an edge if and only if λ(u) + λ(v) is the label of another vertex in V(G). Any graph supporting such a labeling is called a sum graph. To sum label a graph, it is necessary to add (as a disjoint union) a component consisting of isolated vertices, known as isolates; the labeling scheme that requires the fewest isolates is termed optimal. The minimum number of isolates required for a graph to support a sum labeling is known as the sum number of the graph. In this paper, we obtain optimal sum labeling schemes for the path union of split graphs of the star, K_(1,m)⨀Spl(P_n) and K_(1,m)⨀Spl(K_(1,n)).
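The defining condition translates directly into a small mechanical check. A minimal Python sketch, assuming an illustrative labeling of the path u-v-w with one isolate (the graph, labels, and function name are ours, not the paper's):

```python
# Hypothetical checker for the sum-labeling condition: uv is an edge
# iff lambda(u) + lambda(v) is itself the label of some vertex.

def is_sum_labeling(labels, edges):
    """labels: dict vertex -> positive int (distinct); edges: set of frozensets."""
    values = set(labels.values())
    verts = sorted(labels)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            is_edge = frozenset((u, v)) in edges
            if is_edge != (labels[u] + labels[v] in values):
                return False
    return True

# Path u-v-w labeled 1, 2, 3 plus one isolate labeled 5:
# 1+2=3 and 2+3=5 are labels (so uv, vw are edges); no other pair sums to a label.
labels = {"u": 1, "v": 2, "w": 3, "iso": 5}
edges = {frozenset(("u", "v")), frozenset(("v", "w"))}
```

Note that the isolate labeled 5 is what makes the edge vw legal, which is exactly why isolates are needed at all.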
Welcome to my presentation on graph and tree (Dyuti Islam)
Binary Tree
12. Binary Tree A binary tree is a rooted tree in which the root can have at most two children, each of which is again a binary tree. That means any node can have 0, 1, or 2 children.
13. Strict Binary Tree
14. Strict Binary Tree A strict binary tree is a binary tree in which every node has either exactly two children or no children at all. That means any node has 0 or 2 children.
15. Complete Binary Tree
16. Complete Binary Tree A complete binary tree is a strict binary tree in which every leaf node is at the same level. That means the left and right subtrees of every node contain the same number of nodes.
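As a small illustration of the three definitions above, here is a hedged Python sketch (the Node class and helper names are ours, not from the slides):

```python
# Minimal sketch of the binary-tree definitions: a Node has up to two
# children; "strict" means 0 or 2 children everywhere; "complete" (in
# the slides' sense) means strict with all leaves at the same level.

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def is_strict(root):
    """Every node has 0 or 2 children."""
    if root is None:
        return True
    if (root.left is None) != (root.right is None):
        return False
    return is_strict(root.left) and is_strict(root.right)

def leaf_depths(root, d=0):
    """Set of depths at which leaves occur."""
    if root.left is None and root.right is None:
        return {d}
    out = set()
    if root.left:
        out |= leaf_depths(root.left, d + 1)
    if root.right:
        out |= leaf_depths(root.right, d + 1)
    return out

def is_complete(root):
    """Strict, and every leaf at the same level."""
    return is_strict(root) and len(leaf_depths(root)) == 1

perfect = Node(Node(Node(), Node()), Node(Node(), Node()))  # all leaves at depth 2
lopsided = Node(Node(), None)                               # root has one child
```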
Presentation given at DMZ about Data Structure Graphs.
Also known as Applying Social Network Analysis Techniques to Data Modeling and Data Architecture
n-Curving - A Transformation of Curves (guest3f32c32)
A method of transforming plane curves, developed on the basis of Functional Theoretic Algebra.
http://en.wikipedia.org/wiki/Functional-theoretic_algebra
Given a graph G = (V, E) and a set T of non-negative integers containing 0, a T-coloring of G is an integer function f on the vertices of G such that |f(u) - f(v)| ∉ T whenever uv ∈ E. The edge-span of a T-coloring is the maximum value of |f(u) - f(v)| over all edges uv, and the T-edge-span of a graph G is the minimum edge-span among all possible T-colorings of G. This paper discusses the T-edge-span of the folded hypercube network of dimension n for the k-multiple-of-s set T = {0, s, 2s, …, ks} ∪ S, where s, k ≥ 1 and S ⊆ {s+1, s+2, …, ks-1}.
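The T-coloring definitions above can be checked mechanically. A minimal Python sketch on a toy triangle (the set T, the graph, and the coloring are illustrative examples of ours, not from the paper):

```python
# Sketch of a T-coloring check and its edge-span: on every edge uv,
# the difference |f(u) - f(v)| must avoid the forbidden set T.

def is_t_coloring(f, edges, T):
    return all(abs(f[u] - f[v]) not in T for u, v in edges)

def edge_span(f, edges):
    return max(abs(f[u] - f[v]) for u, v in edges)

T = {0, 2, 4}                      # k-multiple-of-s set with s=2, k=2, S empty
edges = [(0, 1), (1, 2), (0, 2)]   # triangle K3
f = {0: 0, 1: 1, 2: 6}             # differences 1, 5, 6; none lies in T
```

The T-edge-span of the graph would be the minimum of `edge_span` over all valid colorings; the sketch only verifies one candidate.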
Mathematics (from Greek μάθημα máthēma, “knowledge, study, learning”) is the study of topics such as quantity (numbers), structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics.
In this talk, I address two new ideas in sampling geometric objects. The first is a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis. We recently proved that such samples can be viewed as uniform samples with respect to an alternative metric on Euclidean space. The second is a generalization of Voronoi refinement sampling. There, one also achieves an adaptive sample while simultaneously "discovering" the underlying sizing function. We show how to construct such samples that are spaced uniformly with respect to the $k$th nearest neighbor distance function.
018 20160902 Machine Learning Framework for Analysis of Transport through Com... (Ha Phuong)
• Propose a data-driven framework to study the relationship between fluid flow at the macro scale and the internal pore structure, across the micro and mesoscales, in porous, granular media.
• Quantify a hypothesized link between high permeability and efficient shortest paths that thread through relatively large pore bodies connected to each other by high-conductance pore throats, embodying connectivity and pore structure.
We introduce the variational Gaussian process (VGP), a Bayesian nonparametric model which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random non-linear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity.
017_20160826 Thermodynamics of Stochastic Turing Machines (Ha Phuong)
• Show how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine).
• Discrete-state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation.
This is the second lecture in the CS 6212 class. Covers asymptotic notation and data structures. Also outlines the coming lectures wherein we will study the various algorithm design techniques.
Gaps between the theory and practice of large-scale matrix-based network comp... (David Gleich)
I discuss some runtimes for the personalized PageRank vector and how it relates to open questions in how we should tackle these network-based measures via matrix computations.
A set of notes prepared for an introductory machine learning course, assuming a very limited linear algebra background: all linear algebra operations are fully written out. These notes go into thorough derivations of the generalized linear regression formulation, demonstrating how to write it out in matrix form.
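In the same spirit of writing every linear algebra operation out in full, here is a hedged sketch of the normal equations for a one-feature linear regression; the data points are an illustrative example of ours, not from the notes:

```python
# Least-squares fit of y ~ w0 + w1*x via the normal equations
# (X^T X) w = X^T y with design matrix rows [1, x_i], written out
# entry by entry instead of with a matrix library.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

det = n * sxx - sx * sx            # determinant of the 2x2 matrix X^T X
w0 = (sxx * sy - sx * sxy) / det   # intercept
w1 = (n * sxy - sx * sy) / det     # slope
```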
The tutorial describes existing approaches to modeling graph databases and different techniques implemented in RDF and database engines, including their main drawbacks when a large volume of interconnected data needs to be traversed.
Characterizing the Distortion of Some Simple Euclidean Embeddings (Don Sheehy)
This talk addresses some upper and lower bound techniques for bounding the distortion of mappings between Euclidean metric spaces, including circles, spheres, pairs of lines, triples of planes, and the union of a hyperplane and a point.
Sensors and Samples: A Homological Approach (Don Sheehy)
In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it is possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered. We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects of this approach. This factoring reveals an interesting new connection between the topological coverage condition and the notion of weak feature size in geometric sampling theory. We then apply this connection to the problem of showing that, for a given scale, if one knows the number of connected components and the distance to the boundary, one can infer the higher Betti numbers or provide strong evidence that more samples are needed. This is in contrast to previous work, which merely assumed a good sample and gives no guarantees if the sampling condition is not met.
The Persistent Homology of Distance Functions under Random Projection (Don Sheehy)
Given n points P in a Euclidean space, the Johnson-Lindenstrauss lemma guarantees that the distances between pairs of points are preserved up to a small constant factor with high probability by random projection into O(log n) dimensions. In this paper, we show that the persistent homology of the distance function to P is also preserved up to a comparable constant factor. One could never hope to preserve the distance function to P pointwise, but we show that it is preserved sufficiently at the critical points of the distance function to guarantee similar persistent homology. We prove these results in the more general setting of weighted k-th nearest neighbor distances, for which setting k=1 and all weights to zero gives the usual distance to P.
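The random-projection step behind the Johnson-Lindenstrauss lemma is easy to demonstrate empirically. A hedged sketch in pure Python; the dimensions, seed, and the loose distortion check are illustrative choices of ours, not constants from the paper:

```python
# Project n random points from R^d to R^m through a random Gaussian
# matrix scaled by 1/sqrt(m), then compare pairwise distances before
# and after. For moderate m the ratios concentrate near 1.
import math
import random

random.seed(0)
d, m, n = 50, 20, 8                      # ambient dim, target dim, #points
P = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
G = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(d)] for _ in range(m)]

def project(p):
    """Map p in R^d to Gp in R^m."""
    return [sum(g * x for g, x in zip(row, p)) for row in G]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

Q = [project(p) for p in P]
ratios = [dist(Q[i], Q[j]) / dist(P[i], P[j])
          for i in range(n) for j in range(i + 1, n)]
```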
Geometric Separators and the Parabolic Lift (Don Sheehy)
We present a simplification of the geometric separator algorithm of Miller and Thurston that uses parabolic lifting rather than stereographic projection. The result entirely eliminates the middle phase of that algorithm, which finds a conformal transformation to arrange the points nicely on the sphere.
A New Approach to Output-Sensitive Voronoi Diagrams and Delaunay Triangulations (Don Sheehy)
We describe a new algorithm for computing the Voronoi diagram of a set of $n$ points in constant-dimensional Euclidean space. The running time of our algorithm is $O(f \log n \log \Delta)$, where $f$ is the output complexity of the Voronoi diagram and $\Delta$ is the spread of the input, the ratio of largest to smallest pairwise distances. Despite the simplicity of the algorithm and its analysis, it improves on the state of the art for all inputs with polynomial spread and near-linear output size. The key idea is to first build the Voronoi diagram of a superset of the input points using ideas from Voronoi refinement mesh generation. Then, the extra points are removed in a straightforward way that allows the total work to be bounded in terms of the output complexity, yielding the output-sensitive bound. The removal only involves local flips and is inspired by kinetic data structures.
The word optimal is used in different ways in mesh generation. It could mean that the output is, in some sense, "the best mesh", or that the algorithm is, by some measure, "the best algorithm". One might hope that the best algorithm also produces the best mesh, but maybe some tradeoffs are necessary. In this talk, I will survey several different notions of optimality in mesh generation and explore the different tradeoffs between them. The bias will be towards Delaunay/Voronoi methods.
Output-Sensitive Voronoi Diagrams and Delaunay Triangulations (Don Sheehy)
Voronoi diagrams and their duals, Delaunay triangulations, are used in many areas of computing and the sciences. Starting in 3 dimensions, there is a substantial (i.e. polynomial) difference between the best-case and the worst-case complexity of these objects when starting with n points. This motivates the search for algorithms that are output-sensitive rather than relying only on worst-case guarantees. In this talk, I will describe a simple, new algorithm for computing Voronoi diagrams in d dimensions that runs in O(f log n log spread) time, where f is the output size and the spread of the input points is the ratio of the diameter to the closest pair distance. For a wide range of inputs, this is the best known algorithm. The algorithm is novel in that it turns the classic algorithm of Delaunay refinement for mesh generation on its head, working backwards from a quality mesh to the Delaunay triangulation of the input. Along the way, we will see instances of several other classic problems for which no higher-dimensional results are known, including kinetic convex hulls and splitting Delaunay triangulations.
SOCG: Linear-Size Approximations to the Vietoris-Rips Filtration (Don Sheehy)
The Vietoris-Rips filtration is a versatile tool in topological data analysis.
Unfortunately, it is often too large to construct in full.
We show how to construct an $O(n)$-size filtered simplicial complex on an $n$-point metric space such that the persistence diagram is a good approximation to that of the Vietoris-Rips filtration.
The filtration can be constructed in $O(n\log n)$ time.
The constants depend only on the doubling dimension of the metric space and the desired tightness of the approximation.
For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets.
Our approach uses a hierarchical net-tree to sparsify the filtration.
We can either sparsify the data by throwing out points at larger scales to give a zigzag filtration,
or sparsify the underlying graph by throwing out edges at larger scales to give a standard filtration.
Both methods yield the same guarantees.
Linear-Size Approximations to the Vietoris-Rips Filtration - Presented at Uni... (Don Sheehy)
Often, high dimensional data lie close to a low-dimensional submanifold, and it is of interest to understand the geometry of these submanifolds.
The homology groups of a manifold are important topological invariants that provide an algebraic summary of the manifold.
These groups contain rich topological information, for instance, about the connected components, holes, tunnels and sometimes the dimension of the manifold.
In this paper, we consider the statistical problem of estimating the homology of a manifold from noisy samples under several different noise models.
We derive upper and lower bounds on the minimax risk for this problem.
Our upper bounds are based on estimators which are constructed from a union of balls of appropriate radius around carefully selected points.
In each case we establish complementary lower bounds using Le Cam's lemma.
A Multicover Nerve for Geometric Inference (Don Sheehy)
We show that filtering the barycentric decomposition of a Cech complex by the cardinality of the vertices captures precisely the topology of k-covered regions among a collection of balls for all values of k.
Moreover, we relate this result to the Vietoris-Rips complex to get an approximation in terms of the persistent homology.
ATMCS: Linear-Size Approximations to the Vietoris-Rips Filtration (Don Sheehy)
New Bounds on the Size of Optimal Meshes (Don Sheehy)
The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain.
The input points define a maximal such sizing function called the feature size.
This paper presents a way to bound the feature size integral in terms of an easy to compute property of a suitable ordering of the point set.
The key idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time.
In previous work, Miller et al. showed that if an ordered point set has pacing $\phi$, then the number of vertices in an optimal mesh will be $O(\phi^d n)$, where $d$ is the input dimension.
We give a new analysis of this integral showing that the output size is only $\Theta(n + n\log \phi)$.
The new analysis tightens bounds from several previous results and provides matching lower bounds.
Moreover, it precisely characterizes inputs that yield outputs of size $O(n)$.
In this talk, we will be looking at a basic primitive in computational geometry, the flip. Also known as bistellar flips, edge-flips, rotations, and Pachner moves, this local change operation has been discovered and rediscovered in a variety of fields (thus the many names) and has proven useful both as an algorithmic tool as well as a proof technology. For algorithm designers working outside of computational geometry, one can consider the flip move as a higher dimensional analog of the tree rotations used in binary trees. I will survey some of the most important results about flips with an emphasis on developing a general geometric intuition that has led to many advances.
Beating the Spread: Time-Optimal Point Meshing (Don Sheehy)
We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality.
Our comparison-based algorithm runs in time $O(n\log n + m)$, where $n$ is the input size and $m$ is the output size, and with constants depending only on the dimension and the desired element quality bounds.
It can terminate early in $O(n\log n)$ time returning a $O(n)$ size Voronoi diagram of a superset of $P$ with a relaxed quality bound, which again matches the known lower bounds.
The previous best results in the comparison model depended on the log of the spread of the input, the ratio of the largest to smallest pairwise distance among input points.
We reduce this dependence to $O(\log n)$ by using a sequence of $\epsilon$-nets to determine input insertion order in an incremental Voronoi diagram.
We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction.
Here's a toy problem: What is the SMALLEST number of unit balls you can fit in a box such that no more will fit?
In this talk, I will show how just thinking about a naive greedy approach to this problem leads to a simple derivation of several of the most important theoretical results in the field of mesh generation.
We'll prove classic upper and lower bounds on both the number of balls and the complexity of their interrelationships.
Then, we'll relate this problem to a similar one called the Fat Voronoi Problem, in which we try to find point sets such that every Voronoi cell is fat
(the ratio of the radii of the largest contained to smallest containing ball is bounded).
This problem has tremendous promise in the future of mesh generation as it can circumvent the classic lower bounds presented in the first half of the talk.
Unfortunately the simple approach no longer works.
In the end we will show that the number of neighbors of any cell in a Fat Voronoi Diagram in the plane is bounded by a constant
(if you think that's obvious, spend a minute to try to prove it).
We'll also talk a little about the higher dimensional version of the problem and its wide range of applications.
What is the difference between a mesh and a net?
What is the difference between a metric space epsilon-net and a range space epsilon-net?
What is the difference between geometric divide-and-conquer and combinatorial divide-and-conquer?
In this talk, I will answer these questions and discuss how these different ideas come together to finally settle the question of how to compute conforming point set meshes in optimal time. The meshing problem is to discretize space into as few pieces as possible and yet still capture the underlying density of the input points. Meshes are fundamental in scientific computing, graphics, and more recently, topological data analysis.
This is joint work with Gary Miller and Todd Phillips.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... (Travis Hills MN)
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
ANOMALOUS SECONDARY GROWTH IN DICOT ROOTS.pptx (RASHMI M G)
This presentation covers abnormal or anomalous secondary growth in plants. It defines secondary growth as an increase in plant girth due to the vascular cambium or cork cambium. Anomalous secondary growth does not follow the normal pattern of a single vascular cambium producing xylem internally and phloem externally.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
ESR spectroscopy in liquid food and beverages.pptx (PRIYANKA PATEL)
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods of treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of food is still required to provide consumers with the necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin trapping technique is useful for the detection of highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin trapping technique.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V... (Wasswaderrick3)
In this book, we use conservation of energy techniques on a fluid element to derive the modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/velocity, and from this we derive the Poiseuille flow equation, the transition flow equation, and the turbulent flow equation. In situations where there are no viscous effects, the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the modified Bernoulli equation to derive equations of flow rate for pipes of different cross-sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes' equation of terminal velocity and the turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
This presentation explores a brief idea about the structural and functional attributes of nucleotides, the structure and function of genetic materials along with the impact of UV rays and pH upon them.
The binding of cosmological structures by massless topological defects (Sérgio Sacani)
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
Nucleophilic Addition of carbonyl compounds.pptx (SSR02)
Nucleophilic addition is the most important reaction of carbonyls. Not just aldehydes and ketones, but also carboxylic acid derivatives in general.
Carbonyls undergo addition reactions with a large range of nucleophiles.
Comparing the relative basicity of the nucleophile and the product is extremely helpful in determining how reversible the addition reaction is. Reactions with Grignards and hydrides are irreversible. Reactions with weak bases like halides and carboxylates generally don’t happen.
Electronic effects (inductive effects, electron donation) have a large impact on reactivity.
Large groups adjacent to the carbonyl will slow the rate of reaction.
Neutral nucleophiles can also add to carbonyls, although their additions are generally slower and more reversible. Acid catalysis is sometimes employed to increase the rate of addition.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... (University of Maribor)
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
4. Geometric and Topological Data Analysis
Geometric Data: shape analysis, geometry processing.
Geometry of Data: regression (space between data), clustering (meaning of data), persistent homology (beyond linear structure, beyond simple connectivity).
5. Homology
Homology turns topological questions into algebraic questions.
Augment the data points with edges, triangles, tetrahedra, etc.
The kth boundary matrix ∂_k maps k-simplices to the (k-1)-simplices in their boundary.
The kth homology group is the quotient ker ∂_k / im ∂_(k+1).
Homology encodes connected components, holes, and voids.
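The rank-nullity bookkeeping behind this slide can be carried out on a tiny example. A hedged Python sketch over GF(2), where the kth Betti number is (#k-simplices) - rank(∂_k) - rank(∂_(k+1)); the complex (an empty triangle: three vertices, three edges, no 2-simplex) is an illustrative choice of ours:

```python
# Betti numbers from boundary-matrix ranks over GF(2):
# beta_k = dim ker(d_k) - rank(d_{k+1})
#        = (#k-simplices) - rank(d_k) - rank(d_{k+1}).

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# d1 for the empty triangle: rows = vertices {a, b, c},
# columns = edges {ab, bc, ca}; each edge hits its two endpoints.
d1 = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 1]]

beta0 = 3 - 0 - rank_gf2(d1)   # rank(d_0) = 0
beta1 = 3 - rank_gf2(d1) - 0   # no 2-simplices, so rank(d_2) = 0
```

The empty triangle has one connected component and one loop, so both Betti numbers come out to 1.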
9. Computing Persistent Homology
Input: boundary matrix D.
Find V, R such that D = RV, where V is upper-triangular and R is "reduced" (i.e. no two columns have their lowest nonzeros in the same row).
It's just Gaussian elimination!
Output is a collection of pairs corresponding to the lowest nonzeros in R.
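The "it's just Gaussian elimination" step is the standard left-to-right column reduction. A minimal sketch, representing each GF(2) column as a set of row indices; the toy filtration (three vertices then the three edges of a triangle) is an illustrative example of ours, not from the slides:

```python
# Column reduction of a boundary matrix: repeatedly add earlier columns
# into later ones so that no two columns share the same lowest nonzero
# row. Surviving lowest nonzeros give the persistence pairs.

def low(col):
    """Row index of the lowest nonzero of a column, or None if empty."""
    return max(col) if col else None

def reduce_boundary(D):
    """D: list of columns (sets of row indices), in filtration order."""
    R = [set(c) for c in D]
    pairs = []
    for j in range(len(R)):
        while R[j]:
            i = next((k for k in range(j)
                      if R[k] and low(R[k]) == low(R[j])), None)
            if i is None:
                break
            R[j] ^= R[i]                  # GF(2) column addition
        if R[j]:
            pairs.append((low(R[j]), j))  # (birth index, death index)
    return R, pairs

# Filtration order: vertices a(0), b(1), c(2); edges ab(3), bc(4), ca(5).
D = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}]
R, pairs = reduce_boundary(D)
```

Here edge ab kills vertex b and edge bc kills vertex c, while vertex a and the loop created by ca remain unpaired (essential classes).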
10. Nested Dissection
A method for solving symmetric positive definite linear systems Ax = b.
If A is n x n, consider the n-vertex graph with an edge (i, j) for each nonzero entry A(i, j) of A.
Find a vertex separator S such that
- |S| = O(n^β)
- each connected piece has at most cn vertices (for some c < 1).
Repeat. Order the pivots going up from the leaves of the recursion.
The Punchline: Inverting A can be done in O(n^(βω)) time.
Also works for computing ranks of singular, nonsymmetric matrices over finite fields.
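The recursive ordering itself is easy to sketch in the simplest possible case, a path graph, where the middle vertex is a perfect separator. A hedged illustration (the path-graph special case and the function name are ours, not from the talk):

```python
# Nested-dissection ordering on a path graph: split on a middle
# separator, recurse on the two halves, and emit separators last,
# so pivots are ordered going up from the leaves of the recursion.

def nested_dissection_order(vertices):
    if len(vertices) <= 1:
        return list(vertices)
    mid = len(vertices) // 2
    left, sep, right = vertices[:mid], [vertices[mid]], vertices[mid + 1:]
    return (nested_dissection_order(left)
            + nested_dissection_order(right)
            + sep)

order = nested_dissection_order(list(range(7)))
```

For general graphs the separator must be found with a separator algorithm (e.g. a geometric separator for mesh graphs) rather than by position, but the recursion and "separators last" ordering are the same.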
11. Reasonable complexes have small separators.
The theory of geometric separators applies to graphs of nice meshes.
Separators on graphs can be "lifted" to separators on complexes.
Improves the asymptotic complexity of static homology.
Persistence?
12. Thanks.
Some open problems:
- How do we reconcile the filtration order and the nested dissection order?
- Is there a quotient version of nested dissection?
- Is there a reasonable separator theory for filtrations?