This document summarizes a presentation on computing polytopes via a vertex oracle. It discusses:
1. Using a vertex oracle that takes a direction vector as input and outputs a vertex of the resultant polytope that is extremal in that direction.
2. An incremental algorithm that starts with an inner approximation of the resultant polytope and iteratively calls the oracle to extend illegal facets until the approximation equals the resultant polytope.
3. The oracle works by lifting the point set to construct a regular subdivision, then refining to a triangulation to extract the vertex of the resultant polytope extremal in the given direction.
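The oracle-driven scheme in items 1 and 2 can be sketched in two dimensions. Here a known point set stands in for the true polytope and the oracle is a simple argmax over it (an illustrative assumption; the real oracle computes a regular subdivision instead):

```python
import numpy as np

def oracle(points, w):
    """Vertex oracle: return a vertex of conv(points) extremal in direction w."""
    return tuple(points[np.argmax(points @ w)])

def incremental_hull(points, eps=1e-9):
    """Grow an inner approximation Q by extending illegal facets (2D toy)."""
    # seed Q with extremal vertices in three spread directions
    Q = {oracle(points, np.array(d, float)) for d in [(1, 0), (-1, 1), (-1, -1)]}
    changed = True
    while changed:
        changed = False
        cx = np.mean([p[0] for p in Q])
        cy = np.mean([p[1] for p in Q])
        V = sorted(Q, key=lambda p: np.arctan2(p[1] - cy, p[0] - cx))  # CCW order
        for a, b in zip(V, V[1:] + V[:1]):
            n = np.array([b[1] - a[1], a[0] - b[0]])   # outward facet normal
            v = oracle(points, n)
            if n @ v > n @ np.array(a) + eps:          # facet is illegal: extend it
                Q.add(v)
                changed = True
    return Q

square = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1], [1, 0.5]], float)
hull = incremental_hull(square)
```

Starting from three extremal seed vertices, any facet whose outward normal yields a strictly better oracle vertex is illegal and gets extended, exactly as in the incremental algorithm.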
Presentation of the paper "An output-sensitive algorithm for computing (projections of) resultant polytopes" at the Annual Symposium on Computational Geometry (SoCG 2012).
The document describes an incremental algorithm for computing the resultant polytope Π, the Newton polytope of the resultant, of a system of n+1 polynomials in n variables. The algorithm takes as input the supports of the polynomials and incrementally constructs an inner approximation Q of Π by calling an oracle to extend illegal facets. At each step Q is refined until all facets are legal, at which point Q = Π. The algorithm outputs the H-representation and V-representation of Π.
The document defines deterministic finite automata (DFAs) and describes their key components: states, symbols, transition function, start state, and accepting states. It provides examples of how to represent DFAs using transition tables and diagrams. The document also discusses extending the transition function to strings, minimizing DFAs, representing DFAs functionally, automatic theorem proving using DFAs, and the product and complement operations on DFAs.
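A minimal sketch of those components in Python; the even-number-of-1s language is a hypothetical example, not one from the document:

```python
# A DFA is (states, alphabet, transition function, start state, accepting states).
# Example: accept binary strings containing an even number of 1s.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd',  ('odd', '1'): 'even'}

def run_dfa(delta, start, accepting, string):
    """Extended transition function: fold delta over the whole input string."""
    state = start
    for symbol in string:
        state = delta[(state, symbol)]
    return state in accepting

print(run_dfa(delta, 'even', {'even'}, '1101'))  # three 1s -> 'odd' -> False
```

The transition dict is exactly a transition table; the extended function on strings is the fold shown in `run_dfa`.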
High-dimensional polytopes defined by oracles: algorithms, computations and a... (Vissarion Fisikopoulos)
The document discusses algorithms for computing volumes of polytopes. It notes that exactly computing volumes is hard, but randomized polynomial-time algorithms can approximate volumes with high probability. It describes two algorithms: Random Directions Hit-and-Run (RDHR), which generates random points within a polytope via random walks; and Multiphase Monte Carlo, which approximates a polytope's volume by sampling points within a sequence of enclosing balls. RDHR mixes in O(d^3) steps and these algorithms can compute volumes of high-dimensional polytopes that exact algorithms cannot handle.
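A sketch of RDHR for an H-polytope {x : Ax <= b}, assuming the polytope is bounded and the start point is strictly interior (both are assumptions of this toy):

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng):
    """Random Directions Hit-and-Run over the polytope {x : A x <= b}.
    Assumes a bounded polytope and a strictly interior start point x0."""
    x, samples = np.array(x0, float), []
    for _ in range(n_samples):
        d = rng.normal(size=len(x))
        d /= np.linalg.norm(d)                   # uniform random direction
        Ad, slack = A @ d, b - A @ x
        # the chord {x + t d : A(x + t d) <= b} is the interval [lo, hi]
        lo = (slack[Ad < 0] / Ad[Ad < 0]).max()
        hi = (slack[Ad > 0] / Ad[Ad > 0]).min()
        x = x + rng.uniform(lo, hi) * d          # uniform point on the chord
        samples.append(x)
    return np.array(samples)

# uniform samples from the unit square {0 <= x1, x2 <= 1}
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
pts = hit_and_run(A, b, [0.5, 0.5], 2000, np.random.default_rng(0))
```

Each step picks a random direction, computes the full chord through the polytope, and moves to a uniform point on it; iterating this walk mixes toward the uniform distribution.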
A New Polynomial-Time Algorithm for Linear Programming (SSA KPI)
This document summarizes a new polynomial-time algorithm for linear programming.
1) The algorithm reduces the general linear programming problem to a canonical form and solves it through repeated application of projective transformations and optimization over spheres.
2) Each projective transformation followed by optimization reduces the objective function value by a constant factor, allowing the optimal solution to be found in polynomial time.
3) The algorithm runs in O(n^3.5 L^2 ln L ln ln L) time, an improvement over the ellipsoid method's O(n^6 L^2 ln L ln ln L) time.
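Karmarkar's projective method is intricate, but its affine-scaling relative conveys the core of steps 1 and 2: rescale so the current iterate is well centred, then move along the projected negative gradient. A hedged sketch (affine scaling is a simplification, not Karmarkar's exact algorithm):

```python
import numpy as np

def affine_scaling(c, A, b, x, iters=200, step=0.9):
    """Affine-scaling interior-point sketch for min c.x s.t. A x = b, x > 0
    (a simplified relative of Karmarkar's projective-transformation method)."""
    assert np.allclose(A @ x, b)                 # start from a feasible interior point
    for _ in range(iters):
        D2 = np.diag(x ** 2)                     # rescaling: current iterate -> center
        y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)
        r = c - A.T @ y                          # reduced costs
        dx = -D2 @ r                             # projected steepest-descent direction
        if (dx >= -1e-12).all():
            break                                # no improving direction remains
        t = min(-xi / di for xi, di in zip(x, dx) if di < -1e-12)
        x = x + step * t * dx                    # go 90% of the way to the boundary
    return x

# toy LP: min -x1 - 2 x2  s.t.  x1 + x2 + s = 1, all vars > 0  (optimal value -2)
c = np.array([-1.0, -2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = affine_scaling(c, A, b, np.array([1 / 3, 1 / 3, 1 / 3]))
```

On the toy LP the iterates approach the optimal value -2 while staying strictly feasible, mirroring the constant-factor objective reduction described in step 2.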
A Unifying Review of Gaussian Linear Models (Roweis 1999), presented by Feynman Liang
Through a linear Gaussian process, we can unify a family of Gaussian linear models including Factor Analysis, PCA, Kalman Filters, Mixture of Gaussians, and Hidden Markov Models.
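A sketch of the generative view behind this unification; the specific matrices here are made-up illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic linear Gaussian model:  x ~ N(0, Q),  y = C x + v,  v ~ N(0, R).
# Factor analysis: R diagonal; (sensible) PCA: R = eps * I; adding dynamics
# x_{t+1} = A x_t + w recovers the Kalman filter model.
k, d, n = 2, 3, 20000                      # latent dim, observed dim, sample count
C = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
R = np.diag([0.1, 0.1, 0.1])

x = rng.normal(size=(n, k))                # latent factors, Q = I
y = x @ C.T + rng.multivariate_normal(np.zeros(d), R, size=n)

# the implied marginal covariance of y is C Q C^T + R
emp = np.cov(y.T)
```

Which special case you get is just a constraint on Q, R, and the (optional) dynamics, which is the point of the unifying review.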
Here are my slides in some basic algorithms in Computational Geometry:
1.- Line Intersection
2.- Sweeping Line
3.- Convex Hull
These are the classics, but there is still a lot to study for anyone wanting to get into computer graphics. I recommend:
Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. 2008. Computational Geometry: Algorithms and Applications (3rd ed.). TELOS, Santa Clara, CA, USA.
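For one of the three topics, a compact convex hull implementation (Andrew's monotone chain, a standard variant of the algorithms covered in such slides):

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D points in O(n log n)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def build(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = build(pts), build(pts[::-1])
    return lower[:-1] + upper[:-1]      # counter-clockwise, endpoints not repeated

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```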
A new approach in specifying the inverse quadratic matrix in modulo-2 for con... (Anax Fotopoulos)
The document discusses a new approach for specifying the inverse quadratic matrix in modulo-2 for information channels. It describes modeling communication systems using state space equations from digital control theory. It discusses concepts like controllability and observability of systems using rank tests on controllability and observability tables. It also covers concepts from information theory like groups, cyclic groups, and rings as they relate to channel encoding.
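The controllability rank test mentioned above, sketched over the reals (the modulo-2 channel setting would perform the same rank computation over GF(2) instead):

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test: (A, B) is controllable iff [B AB ... A^(n-1)B] has rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
print(controllable(A, B))                 # the acceleration input reaches all states
```

The observability test is the dual: stack C, CA, ..., CA^(n-1) and check the rank.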
In the study of probabilistic integrators for deterministic ordinary differential equations, one goal is to establish the convergence (in an appropriate topology) of the random solutions to the true deterministic solution of an initial value problem defined by some operator. The challenge is to identify the right conditions on the additive noise with which one constructs the probabilistic integrator, so that the convergence of the random solutions has the same order as the underlying deterministic integrator. In the context of ordinary differential equations, Conrad et al. (Stat. Comput., 2017) established the mean square convergence of the solutions for globally Lipschitz vector fields, under the assumptions of i.i.d., state-independent, mean-zero Gaussian noise. We extend their analysis by considering vector fields that need not be globally Lipschitz, and by considering non-Gaussian, non-i.i.d. noise that can depend on the state and can have nonzero mean. A key assumption is a uniform moment bound condition on the noise. We obtain convergence in the stronger topology of the uniform norm, and establish results that connect this topology to the regularity of the additive noise. Joint work with A. M. Stuart (Caltech), T. J. Sullivan (Free University of Berlin).
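The construction can be sketched with a randomized Euler integrator whose per-step perturbation has standard deviation O(h^1.5), so the noise matches Euler's local error order (the vector field and constants below are illustrative assumptions, not the abstract's setting):

```python
import numpy as np

def randomized_euler(f, x0, T, h, rng, scale=1.0):
    """Euler steps with additive mean-zero Gaussian noise of std scale * h**1.5,
    so the perturbation has the same order as Euler's local truncation error."""
    x = x0
    for _ in range(int(round(T / h))):
        x = x + h * f(x) + rng.normal(0.0, scale * h ** 1.5)
    return x

# x' = -x, x(0) = 1: the random solutions cluster around exp(-1) at T = 1
rng = np.random.default_rng(0)
ens = [randomized_euler(lambda x: -x, 1.0, 1.0, 0.001, rng) for _ in range(20)]
```

The ensemble spread reflects the integrator's own uncertainty while the mean error stays at the deterministic order.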
This document provides an overview of stacks as an abstract data type (ADT). It defines a stack as a last-in first-out data structure for storing arbitrary objects. The key stack operations of push, pop, top, and size are described. Exceptions that can occur for empty stacks are discussed. An array-based implementation of stacks is presented, including algorithms for the stack operations and analysis of its performance and limitations. Applications of stacks like undo buffers and method call stacks are mentioned. Finally, an example of using a stack to check matching parentheses in an expression is provided.
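A sketch of the array-based stack and the parenthesis-matching application (the names are illustrative, not the document's own code):

```python
class ArrayStack:
    """Array-based stack ADT: last-in first-out."""
    def __init__(self):
        self._data = []                 # Python list doubles as a dynamic array
    def push(self, item):
        self._data.append(item)
    def pop(self):
        if not self._data:
            raise IndexError("pop from empty stack")
        return self._data.pop()
    def top(self):
        return self._data[-1]
    def size(self):
        return len(self._data)

def parens_match(expr, pairs={'(': ')', '[': ']', '{': '}'}):
    """Classic stack application: check matching parentheses in an expression."""
    stack = ArrayStack()
    for ch in expr:
        if ch in pairs:
            stack.push(ch)
        elif ch in pairs.values():
            if stack.size() == 0 or pairs[stack.pop()] != ch:
                return False
    return stack.size() == 0
```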
This document discusses probabilistic inference using Bayesian networks and variable elimination. It introduces the concepts of probabilistic inference, Bayesian networks, and variable elimination as a method for performing efficient inference. Variable elimination involves alternating between joining factors and eliminating variables to compute posterior probabilities without enumerating the entire joint distribution. Approximate inference methods like sampling are also discussed as alternatives to exact inference through variable elimination.
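On a two-variable network the join-then-eliminate step reduces to one line (the numbers are a made-up example):

```python
# Tiny Bayes net A -> B: compute P(B) by joining the factors on A and then
# summing A out, never materializing the full joint table.
p_a = {0: 0.7, 1: 0.3}                        # P(A)
p_b_given_a = {(0, 0): 0.8, (0, 1): 0.2,      # P(B | A), keyed by (a, b)
               (1, 0): 0.1, (1, 1): 0.9}

p_b = {b: sum(p_a[a] * p_b_given_a[(a, b)] for a in p_a) for b in (0, 1)}
print(p_b[1])   # P(B=1) = 0.7*0.2 + 0.3*0.9, which is about 0.41
```

With more variables, variable elimination repeats exactly this join/sum-out step in a chosen order, which is what keeps it cheaper than enumerating the joint.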
This document presents a construction of an explicit polynomial-time hitting set generator for 1-branching programs of width 3. It establishes that almost k-wise independent sets satisfy a richness condition that is both necessary and sufficient for being a hitting set. Specifically:
1) An almost O(log n)-wise independent set is shown to be ε-rich for ε > 5/6, which is sufficient for being a hitting set for width-3 1-branching programs.
2) Any hitting set must satisfy a weaker richness condition, which almost O(log n)-wise independent sets do.
3) Therefore, extending an almost O(log n)-wise independent set with all vectors
Likelihood-free methods provide techniques for approximating Bayesian computations when the likelihood function is unavailable or computationally intractable. Monte Carlo methods like importance sampling and iterated importance sampling generate samples from an approximating distribution to estimate integrals. Population Monte Carlo is an iterative Monte Carlo algorithm that propagates a population of particles over time to explore the target distribution. Approximate Bayesian computation uses simulation-based methods to approximate posterior distributions when the likelihood is unavailable.
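A minimal ABC rejection sketch, assuming a normal simulator, a uniform prior, and the sample mean as summary statistic (all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta = 2.0
obs = rng.normal(true_theta, 1.0, size=50)        # observed data

# ABC rejection: draw theta from the prior, simulate, and accept the draw
# if the summary statistic (the sample mean) is close to the observed one.
accepted = []
for _ in range(5000):
    theta = rng.uniform(0.0, 4.0)                 # prior draw
    sim = rng.normal(theta, 1.0, size=50)         # simulator replaces the likelihood
    if abs(sim.mean() - obs.mean()) < 0.2:
        accepted.append(theta)

posterior_mean = np.mean(accepted)
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is the defining trick shared by all the likelihood-free methods above.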
Representation formula for traffic flow estimation on a network (Guillaume Costeseque)
This document discusses representation formulas for traffic flow estimation on networks using Hamilton-Jacobi equations. It begins by motivating the use of HJ equations, noting advantages like smooth solutions and physically meaningful quantities. It then presents the basic ideas of Lax-Hopf formulas for solving HJ equations on networks, including a simple case study of a junction. The document outlines its topics which include notations from traffic flow modeling, basic recalls on Lax-Hopf formulas, HJ equations on networks, and a new approach.
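For a single road (no junction) with convex Hamiltonian, the classical Hopf-Lax representation that the networked formulas generalize reads (a standard statement, included here for context):

```latex
% Hopf-Lax formula for  u_t + H(u_x) = 0,  u(0, x) = u_0(x),  H convex:
u(t, x) = \min_{y \in \mathbb{R}} \left\{ u_0(y) + t \, L\!\left(\frac{x - y}{t}\right) \right\},
\qquad
L(q) = \sup_{p} \bigl( p\,q - H(p) \bigr)
```

Here L is the Legendre transform of H; the talk's contribution is extending such formulas across junctions of a network.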
This document discusses Approximate Bayesian Computation (ABC), a simulation-based method for conducting Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC produces an approximation of the posterior distribution by simulating data under different parameter values and accepting simulations that match the observed data. The document provides background on how ABC originated from population genetics models and outlines some of the advances in ABC, including how it can be used as an inference machine to estimate parameters from simulated data.
Some properties of M-sequences over finite field Fp (IAEME Publication)
1) The document discusses properties of M-sequences over finite fields Fp when p is an odd prime.
2) Some key properties discussed are: the set of cyclic permutations of a non-zero period is not closed under addition; the matrix of these permutations is symmetric about the second diagonal; and the sum of any two rows with one translated by half the period is the zero sequence.
3) The document also presents theorems about the characteristics and representations of M-sequences over finite fields, including their relation to irreducible polynomials and representation as matrices.
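An M-sequence over F_3 can be generated by a linear recurrence whose feedback polynomial is primitive; x^2 - x - 1 is primitive over F_3, so the Fibonacci recurrence mod 3 gives the maximal period 3^2 - 1 = 8 (a small worked example, not the paper's own):

```python
def m_sequence(taps, seed, p):
    """LFSR over F_p: s[n+k] = sum(taps[i] * s[n+i]) mod p.
    With a primitive feedback polynomial the nonzero states cycle through
    all p**k - 1 values, giving an m-sequence of period p**k - 1."""
    state, out = list(seed), []
    for _ in range(p ** len(seed)):            # a bit more than one full period
        out.append(state[0])
        nxt = sum(t * s for t, s in zip(taps, state)) % p
        state = state[1:] + [nxt]
    return out

# x^2 - x - 1 is primitive over F_3, so s[n+2] = s[n+1] + s[n] (mod 3)
seq = m_sequence([1, 1], [1, 1], 3)            # Fibonacci numbers mod 3
```

Over one period every nonzero state pair appears exactly once, which is the maximality property the paper's theorems build on.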
A summary for Discrete Mathematics (Math 521), intended especially for graduate students of the Institute of Statistical Studies and Research, Cairo University.
The document discusses circuit theorems including the superposition theorem and Thevenin's theorem. It provides examples and proofs of the superposition theorem for calculating voltages in linear circuits by treating multiple sources independently. Thevenin's theorem is introduced as stating that a two-terminal linear circuit can be modeled by an equivalent circuit of a voltage source in series with a resistor, where the voltage source is the open circuit voltage and the resistor is the input resistance with all independent sources removed.
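Thevenin's theorem on a voltage divider, checked numerically (the component values are a made-up example):

```python
# Thevenin equivalent of a voltage divider seen from the R2 terminals:
# source Vs in series with R1, output taken across R2.
Vs, R1, R2 = 10.0, 2000.0, 3000.0

V_th = Vs * R2 / (R1 + R2)        # open-circuit voltage at the terminals
R_th = R1 * R2 / (R1 + R2)        # independent source removed: Vs shorted, so R1 || R2

# sanity check: the loaded output predicted by the Thevenin model matches
# full analysis of the original circuit with a load R_L attached
R_L = 1000.0
v_thevenin = V_th * R_L / (R_th + R_L)
v_full = Vs * (R2 * R_L / (R2 + R_L)) / (R1 + R2 * R_L / (R2 + R_L))
```

For these values V_th = 6 V and R_th = 1.2 kOhm, and both predictions of the loaded output agree exactly.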
The document contains tables listing formulas for inverse Laplace transforms of various functions. It includes:
1) General formulas for inverse Laplace transforms of functions with parameters like shifts or scalings.
2) Formulas for inverse transforms of rational functions involving polynomials.
3) Formulas involving square roots of polynomials.
4) More general formulas for inverse transforms of functions involving arbitrary powers. Notation for special functions like Bessel functions is also explained.
This document provides tables of Laplace transforms for various functions including:
- General formulas for Laplace transforms
- Expressions involving power-law functions
- Expressions involving exponential functions
- Expressions involving hyperbolic functions
- Expressions involving logarithmic functions
For each function, the original function f(x) is provided along with its corresponding Laplace transform f(p). References for further information on Laplace transforms are also included.
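Any table entry can be spot-checked numerically from the definition of the transform as the integral of f(x) e^(-p x) over x >= 0; here the entry L{e^(-a x)} = 1/(p + a):

```python
import numpy as np

def laplace_numeric(f, p, upper=50.0, n=200000):
    """Trapezoid-rule approximation of the Laplace integral of f at p,
    truncated at `upper` (the tail is assumed negligible for decaying f)."""
    xs = np.linspace(0.0, upper, n + 1)
    ys = f(xs) * np.exp(-p * xs)
    dx = upper / n
    return (ys[:-1] + ys[1:]).sum() * dx / 2.0

# check the table entry L{e^(-a x)}(p) = 1 / (p + a) at a = 1, p = 2
a, p = 1.0, 2.0
approx = laplace_numeric(lambda x: np.exp(-a * x), p)
```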
The document describes a control framework called the "stack of tasks" which provides hierarchical task-based control for real-time redundant manipulators. It allows implementation of a data flow graph controlled by Python scripting. Tasks are defined as functions of the robot configuration, time, and other parameters that should converge to zero. The framework computes joint velocities to minimize higher priority tasks while satisfying lower priority tasks when possible. It has been tested on robots including HRP-2, Nao, and Romeo.
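The priority mechanism can be sketched with two linear tasks and a null-space projector, using the standard prioritized-velocity formula (the task Jacobians below are made up, and the real framework handles far more):

```python
import numpy as np

def sot_step(q, tasks, gain=1.0):
    """One stack-of-tasks velocity step: satisfy the top-priority task exactly,
    then spend the remaining null-space freedom on the second task."""
    (J1, e1), (J2, e2) = tasks
    J1p = np.linalg.pinv(J1)
    dq = J1p @ (gain * e1)                       # highest-priority velocity
    N1 = np.eye(len(q)) - J1p @ J1               # null-space projector of task 1
    dq = dq + N1 @ np.linalg.pinv(J2 @ N1) @ (gain * e2 - J2 @ dq)
    return q + dq

# linear toy tasks e_i = t_i - J_i q; with 4 DOF both 2D tasks are solvable
rng = np.random.default_rng(0)
q = np.zeros(4)
J1, t1 = rng.normal(size=(2, 4)), np.array([1.0, -1.0])
J2, t2 = rng.normal(size=(2, 4)), np.array([0.5, 0.5])
q = sot_step(q, [(J1, t1 - J1 @ q), (J2, t2 - J2 @ q)])
```

Because the second task's correction is projected through N1, it cannot disturb the first task, which is the essence of the hierarchy.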
This document discusses approximate Bayesian computation (ABC). ABC allows Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. It introduces ABC, describes how it originated from population genetics models, and outlines some of its limitations and advances, including various related computational methods like ABC with empirical likelihoods. The document also examines how ABC relates to other simulation-based statistical methods and considers perspectives on how Bayesian ABC can be.
The document describes Approximate Bayesian Computation (ABC) methods. It introduces Population Monte Carlo (PMC), which is an ABC algorithm that uses sequential importance sampling to generate particles from successive approximations of the target distribution. PMC proceeds by sampling particles from a proposal distribution, weighting them by the ratio of the target to proposal densities, and resampling according to the weights. The weighted samples across iterations are then used to estimate expectations with respect to the target distribution.
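A compact PMC-style sketch with a Gaussian proposal refit at each iteration (the target, proposal family, and adaptation rule are illustrative assumptions):

```python
import numpy as np

def log_target(x):                                 # unnormalized N(3, 1) target
    return -0.5 * (x - 3.0) ** 2

rng = np.random.default_rng(0)
mu, sigma, n = 0.0, 3.0, 5000                      # initial proposal N(0, 9)
for _ in range(3):                                 # PMC iterations
    x = rng.normal(mu, sigma, size=n)              # propagate the particle population
    log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    log_w = log_target(x) - log_q                  # target / proposal importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                   # normalize the weights
    x = rng.choice(x, size=n, p=w)                 # resample according to the weights
    mu, sigma = x.mean(), x.std() + 0.1            # adapt the proposal

estimate = mu                                      # approximates the target mean
```

Each pass is plain importance sampling against the current proposal; the resample-and-refit step is what makes the population adapt toward the target.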
Computing the volume of a convex body is a fundamental problem in computational geometry and optimization. In this talk we discuss the computational complexity of this problem from a theoretical as well as practical point of view. We show examples of how volume computation appears in applications ranging from combinatorics to algebraic geometry.
Next, we design the first practical algorithm for polytope volume approximation in high dimensions (few hundreds).
The algorithm utilizes uniform sampling from a convex region and efficient boundary polytope oracles.
Interestingly, our software provides a framework for exploring theoretical advances since it is believed, and our experiments provide evidence for this belief, that the current asymptotic bounds are unrealistically high.
This document outlines a presentation on query answering in probabilistic Datalog+/– ontologies under group preferences. It begins with an introduction that motivates the need to model group preferences and uncertainty on the semantic web. It then provides preliminaries on Datalog+/– and the chase procedure. Finally, it outlines the components of the proposed model for handling group preferences and different strategies for answering top-k ranked disjunctive atomic queries under the model.
"An output-sensitive algorithm for computing projections of resultant polytop..." (Vissarion Fisikopoulos)
This document summarizes an incremental algorithm for computing projections of resultant polytopes. The algorithm uses an oracle that, given a direction vector, computes a vertex of the resultant polytope that is extremal in that direction. It starts with an initial inner approximation of the resultant polytope and incrementally extends facets by calling the oracle until all facets are legal, meaning they support facets of the true resultant polytope. This provides an output-sensitive algorithm that computes only as many triangulations as needed to represent the resultant polytope.
This document discusses using group theory and Lie algebras to formulate quantum mechanics from classical mechanics. It begins by reviewing classical phase space methods and their relation to Lie groups. It then develops an analogous formalism for quantum mechanics by replacing classical observables with operators satisfying the same Lie algebra. Unitary representations of this algebra define quantum states. The Heisenberg algebra is introduced for a particle, and its representation leads to a probabilistic interpretation. Dynamics are discussed using Hamiltonians of Newtonian form. As an example, the position-momentum uncertainty principle is derived from the Heisenberg commutation relation.
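The final step can be made precise: applying the Robertson inequality to the Heisenberg commutation relation gives (a standard derivation, stated here for completeness):

```latex
% Robertson inequality for observables A, B in state \psi:
\sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\bigl|\langle [A, B] \rangle\bigr|
% With the Heisenberg commutation relation [x, p] = i\hbar:
\sigma_x \,\sigma_p \;\ge\; \tfrac{1}{2}\bigl|\langle i\hbar \rangle\bigr| = \frac{\hbar}{2}
```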
The document discusses OTTER, a theorem prover that uses resolution style proofs. It provides background on OTTER, describing it as a resolution style theorem prover and discussing its clause representation and main strategies like restriction strategies, direction strategies, and look-ahead strategies. Examples are given of using OTTER to solve problems like the 15 puzzle and analyzing its complexity.
Efficient Edge-Skeleton Computation for Polytopes Defined by Oracles (Vissarion Fisikopoulos)
This document summarizes algorithms for computing the edge skeleton of a polytope defined by oracle functions. It first describes an existing algorithm for vertex enumeration in the oracle model that works by computing an initial simplex and recursively querying the oracle. It then presents a new algorithm for computing the edge skeleton that takes as input the oracle functions and a superset of edge directions, and works by generating candidate edge segments and validating them with the oracle. The runtime of this edge skeleton algorithm is polynomial in parameters of the polytope representation.
This document summarizes Chris Swierczewski's general exam presentation on computational applications of Riemann surfaces and Abelian functions. The presentation covered the geometry and algebra of Riemann surfaces, including bases of cycles, holomorphic differentials, and period matrices. Applications discussed include using Riemann theta functions to find periodic solutions to integrable PDEs like the Kadomtsev–Petviashvili equation. The talk also discussed linear matrix representations of algebraic curves and the constructive Schottky problem of realizing a Riemann matrix as the period matrix of a curve.
The document discusses complex eigenvalues and eigenvectors for systems of linear differential equations. It shows that if the matrix A has complex conjugate eigenvalue pairs r1 and r2, then the corresponding eigenvectors and solutions will also be complex conjugates. This leads to real-valued fundamental solutions that can express the general solution. An example demonstrates these concepts, finding the complex eigenvalues and eigenvectors and expressing the general solution in terms of real-valued functions. Spiral points, centers, eigenvalues, and trajectory behaviors are also summarized.
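The construction the summary refers to can be stated compactly (standard notation, ours): if $A$ has a complex eigenvalue $r_1 = \lambda + i\mu$ ($\mu \neq 0$) with eigenvector $\xi = a + ib$, then $x(t) = e^{r_1 t}\xi$ solves $x' = Ax$, and splitting it into real and imaginary parts gives

```latex
\begin{aligned}
u(t) &= e^{\lambda t}\,\bigl(a\cos\mu t - b\sin\mu t\bigr),\\
v(t) &= e^{\lambda t}\,\bigl(a\sin\mu t + b\cos\mu t\bigr),
\end{aligned}
```

which are real-valued fundamental solutions, so the general real solution is $x(t) = c_1 u(t) + c_2 v(t)$. For $\lambda < 0$ trajectories spiral inward (spiral point); for $\lambda = 0$ they are closed curves (center).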
Polyhedral computations in computational algebraic geometry and optimization (Vissarion Fisikopoulos)
The document summarizes a talk on polyhedral computations in computational algebraic geometry and optimization. It discusses algorithms for enumerating vertices of resultant polytopes and 2-level polytopes. Applications include support computation for implicit equations and computing resultants and discriminants. Open problems include finding the maximum number of faces of 4-dimensional resultant polytopes and explaining symmetries in their maximal f-vectors.
14th Athens Colloquium on Algorithms and Complexity (ACAC19) (Apostolos Chalkis)
This document presents a new method for estimating the volume of convex polytopes: practical volume estimation by a new annealing schedule. It uses a multiphase Monte Carlo approach with a sequence of concentric convex bodies to approximate the volume. A new simulated annealing method constructs a sparser sequence of bodies. Billiard walk sampling is used for V-polytopes and zonotopes. The method scales to dimension 100 within an hour for random V-polytopes and zonotopes, outperforming previous methods, with theoretical complexity O*(d^3).
The document discusses linear classifiers for machine learning and data mining. It introduces linear classifiers as parametric models that use hyperplanes to split data into classes. The decision surface is defined by the equation of the hyperplane. Methods for developing an initial solution like gradient descent and minimizing squared error are presented. Properties of the hyperplane like normal vectors and distances from points to the hyperplane are defined. The document outlines developing linear classifiers and their geometric properties.
The document discusses inference rules for quantifiers in discrete mathematics. It provides examples of using universal instantiation, universal generalization, existential instantiation, and existential generalization. It also discusses the rules of universal specification and universal generalization in more detail with examples. Finally, it presents proofs involving quantifiers over integers to demonstrate techniques like direct proof, proof by contradiction, and proving statements' contrapositives.
b. (10 pts) Implement the rotate left method for AVL trees. c. (10 ….pdf (akanshanawal)
b. (10 pts) Implement the rotate left method for AVL trees.
c. (10 pts) Implement the rotate right method for AVL trees.
d. (10 pts) Implement the rotate left-right method for AVL trees.
e. (10 pts) Implement the rotate right-left method for AVL trees.
f. (15 pts) Implement the rebalance method for AVL trees.
g. (3 pts) Implement pre-order, in-order, and post-order traversals for AVL trees.
All of this should be implemented in the below code, in C++, without changing any of the
parameters or functions as they are laid out.
#include <memory>
#include <algorithm>
#include "AVLTree.h"
#include "AVLNode.h"
using namespace std;
AVLTree::AVLTree() {
root = nullptr;
size = 0;
}
std::shared_ptr<AVLNode> AVLTree::getRoot() {
return root;
}
int AVLTree::getSize() {
return size;
}
std::shared_ptr<AVLNode> AVLTree::search(int val) {
return search(root, val);
}
std::shared_ptr<AVLNode> AVLTree::search(std::shared_ptr<AVLNode> n, int val) {
if (n == nullptr) //if the node is null just return the null value
{
return n;
}
//if target is less than root's value search in left half
if (val < n->value)
{
return search(n->left, val);
}
//if target is greater than root's value search in right half
else if (val > n->value)
{
return search(n->right, val);
}
//else if target is equal to root node just return the root node i.e n.
return n;
}
std::shared_ptr<AVLNode> AVLTree::minimum() {
std::shared_ptr<AVLNode> curr = root;
while (curr && curr->left) {
curr = curr->left;
}
return curr;
}
std::shared_ptr<AVLNode> AVLTree::minimum(std::shared_ptr<AVLNode> n) {
std::shared_ptr<AVLNode> curr = n;
while (curr && curr->left) {
curr = curr->left;
}
return curr;
}
std::shared_ptr<AVLNode> AVLTree::maximum() {
std::shared_ptr<AVLNode> curr = root;
while (curr && curr->right) {
curr = curr->right;
}
return curr;
}
std::shared_ptr<AVLNode> AVLTree::maximum(std::shared_ptr<AVLNode> n) {
std::shared_ptr<AVLNode> curr = n;
while (curr && curr->right) {
curr = curr->right;
}
return curr;
}
int getHeight(std::shared_ptr<AVLNode> n) {
if (n == nullptr) {
return -1;
}
return n->height;
}
int getBalanceFactor(std::shared_ptr<AVLNode> n) {
if (n == nullptr) {
return 0;
}
return getHeight(n->left) - getHeight(n->right);
}
void AVLTree::insertValue(int val) {
root = insertValue(root, val);
}
std::shared_ptr<AVLNode> AVLTree::insertValue(std::shared_ptr<AVLNode> n, int val) {
if (n == nullptr) {
shared_ptr<AVLNode> newNode(new AVLNode(val));
size++;
return newNode;
}
if (val < n->value) {
n->left = insertValue(n->left, val);
}
else if (val > n->value) {
n->right = insertValue(n->right, val);
}
else {
// Duplicate value, no change
return n;
}
// Update the height of the current node
n->height = 1 + max(getHeight(n->left), getHeight(n->right));
// Rebalance the node if necessary
return rebalance(n);
}
void AVLTree::deleteValue(int val) {
root = deleteValue(root, val);
}
std::shared_ptr<AVLNode> AVLTree::deleteValue(std::shared_ptr<AVLNode> n, int val) {
if (n == nullptr) {
return nullptr;
}
if (val < n->value) {
n->left = deleteValue(n->left, val);
}
else if (val > n->value) {
n->right = deleteValue(n->right, val);
}
else {
// Found the value to delete
if (n->left == nullptr && n->right == nullptr) {
// .
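The excerpt ends before the rotation methods requested in parts b through f. A minimal, self-contained sketch of the left rotation (part b) is shown below; since AVLNode.h is not included in the excerpt, a stand-in Node struct is used, and all names here are ours rather than the assignment's:

```cpp
#include <memory>
#include <algorithm>
#include <cassert>

// Stand-in for AVLNode (the real header is not shown in the excerpt).
struct Node {
    int value;
    std::shared_ptr<Node> left, right;
    int height = 0;
    explicit Node(int v) : value(v) {}
};

int heightOf(const std::shared_ptr<Node>& n) { return n ? n->height : -1; }

void updateHeight(const std::shared_ptr<Node>& n) {
    n->height = 1 + std::max(heightOf(n->left), heightOf(n->right));
}

// Left rotation around n: n's right child is promoted and n becomes its
// left child. Returns the new root of the subtree.
std::shared_ptr<Node> rotateLeft(std::shared_ptr<Node> n) {
    auto pivot = n->right;
    n->right = pivot->left;   // pivot's left subtree moves under n
    pivot->left = n;
    updateHeight(n);          // n first: it is now the lower node
    updateHeight(pivot);
    return pivot;
}
```

The right rotation is the mirror image (swap left and right throughout), and the left-right / right-left cases compose the two.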
The document discusses non-negative matrix factorization and algorithms for solving it. It introduces non-negative matrix factorization as factorizing a non-negative matrix A into non-negative matrices W and H such that A = W×H. It then presents a simple algorithm for solving the exact non-negative matrix factorization problem in polynomial time by modeling it as a satisfiability problem over polynomial constraints. It also discusses an approach for simplicial factorization that reduces the number of variables by exploiting the rank of the matrix.
Non-sampling functional approximation of linear and non-linear Bayesian Update (Alexander Litvinenko)
We offer a non-sampling functional approximation of non-linear surrogate to classical Bayesian Update formula. We start with prior Polynomial Chaos Expansion (PCE), express log-likelihood in a PCE basis and obtain a new posterior PCE.
Main IDEA is to update not probability density, but basis coefficients.
This document summarizes three key points:
1) It proves the logical equivalences of three expressions: (a) ¬(p ∨ (q ∨ ¬r)) ∧ q ≡ (¬p ∧ q) ∧ r, (b) (p → r) ∨ (q → r) ≡ (p ∧ q) → r, and (c) ∀x∈U (P(x) → ¬Q(x)) ≡ ¬∃x∈U (P(x) ∧ Q(x)).
2) It analyzes several expressions involving predicates like D(x
Rasterisation of a circle by the Bresenham algorithm (KALAIRANJANI21)
The document summarizes Bresenham's algorithm for drawing circles through rasterization. It describes how the algorithm works by starting at a point on the circle and incrementally determining the next points to plot by comparing the actual circle path with the discrete pixel locations. It provides implementations of the algorithm for drawing a full circle and discusses optimizations like reducing variables and only drawing portions of circles.
Rasterisation of a circle by the Bresenham algorithm (KALAIRANJANI21)
The document summarizes Bresenham's algorithm for drawing circles through rasterization. It describes how the algorithm works by recursively computing the next point on the circle based on an error term. It initializes the error term and progresses around the circle by incrementing coordinates while checking the error to determine when to increment the y-coordinate. The algorithm avoids square roots and trigonometric functions for efficiency. It can be generalized to draw ellipses and parabolas using a similar approach.
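The error-term scheme described above can be sketched as follows. This is a generic midpoint/Bresenham circle sketch assembled for illustration, not the slide deck's own code; the function name and mirroring layout are ours. It rasterises one octant with integer arithmetic only and mirrors each point into all eight octants:

```cpp
#include <vector>
#include <utility>
#include <cstdlib>
#include <cassert>

// Integer-only circle rasterisation (midpoint/Bresenham scheme).
// Returns all plotted pixels for a circle of radius r centred at (cx, cy).
std::vector<std::pair<int,int>> rasterCircle(int cx, int cy, int r) {
    std::vector<std::pair<int,int>> pts;
    int x = 0, y = r;
    int d = 1 - r;                        // decision / error term
    while (x <= y) {
        // mirror the octant point (x, y) into all eight octants
        int xs[8] = { x, -x,  x, -x,  y, -y,  y, -y};
        int ys[8] = { y,  y, -y, -y,  x,  x, -x, -x};
        for (int k = 0; k < 8; ++k)
            pts.push_back({cx + xs[k], cy + ys[k]});
        if (d < 0) d += 2 * x + 3;              // midpoint inside: keep y
        else { d += 2 * (x - y) + 5; --y; }     // midpoint outside: step y down
        ++x;
    }
    return pts;
}
```

No square roots or trigonometric calls appear, which is exactly the efficiency point the summary makes; the sign of the error term alone decides whether y decreases at each step.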
Optimal order a posteriori error bounds in the L∞(L2) norm are derived for semidiscrete semilinear parabolic problems. The standard continuous Galerkin (conforming) finite element method is employed. Our main tools in deriving these error estimates are the elliptic reconstruction technique, first introduced by Makridakis and Nochetto [5], together with Gronwall's lemma and a continuation argument.
Similar to Constructing Polytopes via a Vertex Oracle (20)
The document discusses meshing periodic surfaces in CGAL. It describes adapting the CGAL surface meshing algorithm to work with periodic triangulations by modifying point insertion and refinement criteria. Examples meshing various periodic minimal surfaces like the gyroid and schwarz P surface are shown for different criteria values. Future work includes improving the refinement criteria to handle all cases and proving algorithm correctness and termination.
This document discusses regular triangulations and resultant polytopes. It introduces the concepts of mixed subdivisions, the Cayley trick relating triangulations to mixed subdivisions, and how mixed subdivisions relate to the Newton polytope of the resultant. It defines i-mixed cells and how they correspond to vertices of the resultant polytope. The document outlines an algorithm to enumerate the vertices of the resultant polytope based on enumerating regular triangulations using i-mixed cell configurations and cubical flips between mixed subdivisions. It provides examples and complexity analysis and discusses future work on fully enumerating resultant polytopes.
"Faster Geometric Algorithms via Dynamic Determinant Computation." Vissarion Fisikopoulos
The document proposes faster algorithms for geometric problems by using dynamic determinant computation. Many geometric algorithms involve computing determinants of matrices to evaluate geometric predicates. Computing determinants directly is expensive, especially for high-dimensional problems. The document presents an algorithm for dynamically updating determinants when a column of the matrix is changed in O(d^2) time, faster than recomputing from scratch. This dynamic determinant computation can speed up algorithms that require repeated predicate evaluations, such as the incremental convex hull algorithm, by updating determinants instead of recomputing them.
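The column-replacement update described above rests on Cramer's rule: if A' is A with column j replaced by a vector v, then det(A') = det(A) · (A⁻¹v)_j, which costs O(d) given a maintained inverse (and the inverse itself can be refreshed in O(d²) via Sherman-Morrison, giving the stated O(d²) update). A tiny 2×2 sketch, with a matrix and numbers of our own choosing:

```cpp
#include <cmath>
#include <cassert>

// Determinant of A' = (A with column j replaced by v), from det(A) and A^{-1}.
// By Cramer's rule, det(A') / det(A) equals the j-th entry of A^{-1} v.
double updatedDet2x2(double det, const double Ainv[2][2],
                     const double v[2], int j) {
    double s = Ainv[j][0] * v[0] + Ainv[j][1] * v[1];  // (A^{-1} v)_j
    return det * s;
}
```

For example, with A = [[2,1],[1,3]] (det 5, inverse (1/5)[[3,-1],[-1,2]]), replacing column 0 by (4,2) yields A' = [[4,1],[2,3]], and the formula reproduces det(A') = 10 without recomputing the determinant from scratch.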
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by Oracles (Vissarion Fisikopoulos)
The document discusses efficient algorithms for computing volume and edge skeletons of polytopes defined implicitly by optimization oracles. It presents an algorithm to compute the edge skeleton of a polytope in oracle calls and arithmetic operations. It also describes using geometric random walks and optimization oracles to approximate polytope volume, which is more efficient than exact computation for high dimensions. Experimental results show the approach computes volume within minutes for polytopes up to dimension 12 with less than 2% error.
The document discusses resultant polytopes, which are the Newton polytopes of the resultant polynomial for a system of polynomials. It focuses on analyzing the face vectors of 4-dimensional resultant polytopes. The main results characterize the possible face vectors in three cases: (1) when all supports have size 2 except one of size 5, (2) when all supports have size 2 except two of sizes 3 and 4, and (3) when all supports have size 2 except three of size 3. Lower bounds are proved for the number of faces in the third case. Computation of resultant polytopes uses software that implements algorithms utilizing polytope computations and tropical geometry.
High-dimensional polytopes defined by oracles: algorithms, computations and a... (Vissarion Fisikopoulos)
This document summarizes a PhD thesis defense about algorithms and computations involving high-dimensional polytopes defined by oracles. It introduces polytope representations, oracle definitions, and discusses resultant polytopes arising in algebraic geometry. It outlines an output-sensitive algorithm for computing projections of resultant polytopes using mixed subdivisions. It also describes work on edge-skeleton computations, a volume algorithm, 4D resultant polytope combinatorics, and high-dimensional predicate software.
Recently, there is a growing interest in geospatial trajectory computing. We call trajectories the sequences of time-stamped locations. As the technology for tracking moving objects becomes cheaper and more accurate, massive amounts of spatial trajectories are generated nowadays by smartphones, infrastructure, computer games, natural phenomena, and many other sources.
In this talk we will present the set of tools available in Boost Geometry to work with trajectories highlighting latest as well as older library developments. Starting with more basic operations like length, distance and closest points computations between trajectories we move forward to more advanced operations like compression or simplification as well as the conceptually opposite operation of densify by interpolating or generating random points on a given trajectory. We conclude with the important topic of similarity measurements between trajectories.
All implemented algorithms are parameterized using Boost Geometry's strategy mechanism, which controls the accuracy-efficiency trade-off and works for 3 different coordinate systems (namely cartesian, spherical, and ellipsoidal), each of which comes with its own advantages and limitations.
This document discusses algorithms for computing the volume of high-dimensional convex bodies. It begins by introducing the problem and some challenges in high dimensions. It then describes various randomized algorithms that use sampling techniques like Markov chain Monte Carlo (MCMC) to estimate volumes. Specific algorithms discussed include multiphase Monte Carlo, hit-and-run sampling, and billiard walks. The document reviews the theoretical and practical complexity of these algorithms. It also presents applications of volume computation in fields like engineering, biology, and machine learning.
The document describes an algorithm for enumerating 2-level polytopes in fixed dimensions. A 2-level polytope has vertices that are contained in two parallel hyperplanes. The algorithm takes as input a list of (d-1)-dimensional 2-level polytopes and extends each one to d dimensions, computing the closed sets of vertices to obtain new d-dimensional 2-level polytopes. Experimental results show the numbers of 2-level polytopes enumerated for dimensions up to 6. Open questions ask for a more output-sensitive enumeration algorithm and whether the number of d-dimensional 2-level polytopes is exponential in d.
The Newton polytope of the resultant, or resultant polytope, characterizes the resultant polynomial more precisely than total degree. The combinatorics of resultant polytopes are known in the Sylvester case [Gelfand et al.90] and up to dimension 3 [Sturmfels 94]. We extend this work by studying the combinatorial characterization of 4-dimensional resultant polytopes, which show a greater diversity and involve computational and combinatorial challenges. In particular, our experiments, based on software respol for computing resultant polytopes, establish lower bounds on the maximal number of faces. By studying mixed subdivisions, we obtain tight upper bounds on the maximal number of facets and ridges, thus arriving at the following maximal f-vector: (22,66,66,22), i.e. vector of face cardinalities. Certain general features emerge, such as the symmetry of the maximal f-vector, which are intriguing but still under investigation. We establish a result of independent interest, namely that the f-vector is maximized when the input supports are sufficiently generic, namely full dimensional and without parallel edges. Lastly, we offer a classification result of all possible 4-dimensional resultant polytopes.
This document summarizes two algorithms for computing properties of high-dimensional polytopes given access to certain oracle functions:
1. An algorithm for computing the edge-skeleton of a polytope in oracle polynomial-time using an oracle that returns the vertex maximizing a linear function.
2. A randomized algorithm for approximating the volume of a polytope by generating random points within it using a hit-and-run process, and estimating the volume from these points. The algorithm runs in oracle polynomial-time and provides an approximation with high probability.
Experimental results show the volume algorithm can approximate volumes of polytopes up to 100 dimensions within 1% error in under 2 hours, outperforming exact computation.
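A single hit-and-run step of the kind described above can be sketched as follows, using the unit cube as the simplest H-polytope: pick a uniform random direction, intersect the line through the current point with the polytope's facets to obtain a chord, and sample uniformly along that chord. The chord computation plays the role of the boundary oracle here; all names and the cube example are ours:

```cpp
#include <array>
#include <cmath>
#include <random>
#include <cassert>

constexpr int d = 3;
using Point = std::array<double, d>;

// One Hit-and-Run step inside the box [0,1]^d.
Point hitAndRunStep(const Point& p, std::mt19937& rng) {
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    // uniform random direction on the sphere (normalized Gaussian vector)
    Point dir{};
    double norm = 0.0;
    for (int i = 0; i < d; ++i) { dir[i] = gauss(rng); norm += dir[i] * dir[i]; }
    norm = std::sqrt(norm);
    for (int i = 0; i < d; ++i) dir[i] /= norm;
    // chord of the line p + t*dir inside the box: clip against each facet pair
    double tmin = -1e300, tmax = 1e300;
    for (int i = 0; i < d; ++i) {
        if (std::fabs(dir[i]) < 1e-12) continue;
        double t0 = (0.0 - p[i]) / dir[i], t1 = (1.0 - p[i]) / dir[i];
        tmin = std::max(tmin, std::min(t0, t1));
        tmax = std::min(tmax, std::max(t0, t1));
    }
    // uniform point on the chord
    double t = tmin + (tmax - tmin) * unif(rng);
    Point q;
    for (int i = 0; i < d; ++i) q[i] = p[i] + t * dir[i];
    return q;
}
```

Iterating this step yields points whose distribution converges to uniform over the body; the volume estimators the summary mentions consume such samples across a sequence of nested bodies.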
We experimentally study the fundamental problem of computing the volume of a convex polytope given as an intersection of linear inequalities. We implement and evaluate practical randomized algorithms for accurately approximating the polytope's volume in high dimensions (e.g. one hundred). To carry this out efficiently we experimentally correlate the effect of parameters, such as random walk length and number of sample points, on accuracy and runtime. Moreover, we exploit the problem's geometry by implementing an iterative rounding procedure, computing partial generations of random points and designing fast polytope boundary oracles. Our publicly available code is significantly faster than exact computation and more accurate than existing approximation methods. We provide volume approximations for the Birkhoff polytopes B11, ..., B15, whereas exact methods have only computed that of B10.
A new practical algorithm for volume estimation using annealing of convex bodies (Vissarion Fisikopoulos)
We study the problem of estimating the volume of convex polytopes, focusing on H- and V-polytopes, as well as zonotopes. Although a lot of effort is devoted to practical algorithms for H-polytopes there is no such method for the latter two representations. We propose a new, practical algorithm for all representations, which is faster than existing methods. It relies on Hit-and-Run sampling, and combines a new simulated annealing method with the Multiphase Monte Carlo (MMC) approach. Our method introduces the following key features to make it adaptive: (a) It defines a sequence of convex bodies in MMC by introducing a new annealing schedule, whose length is shorter than in previous methods with high probability, and the need of computing an enclosing and an inscribed ball is removed; (b) It exploits statistical properties in rejection-sampling and proposes a better empirical convergence criterion for specifying each step; (c) For zonotopes, it may use a sequence of convex bodies for MMC different than balls, where the chosen body adapts to the input. We offer an open-source, optimized C++ implementation, and analyze its performance to show that it outperforms state-of-the-art software for H-polytopes by Cousins-Vempala (2016) and Emiris-Fisikopoulos (2018), while it undertakes volume computations that were intractable until now, as it is the first polynomial-time, practical method for V-polytopes and zonotopes that scales to high dimensions (currently 100). We further focus on zonotopes, and characterize them by their order (number of generators over dimension), because this largely determines sampling complexity. We analyze a related application, where we evaluate methods of zonotope approximation in engineering.
The figure of the Earth can be modelled by a cartesian plane, a sphere or an (oblate) ellipsoid, in increasing order of approximation quality. The shortest path between two points on such a surface is called a geodesic. The study of geodesic problems on ellipsoids dates back to Newton. However, the majority of open-source GIS systems today use methods on the cartesian plane. The main advantages of those approaches are simplicity of implementation and performance. On the other hand, they come with a handicap: accuracy.
We experimentally study the accuracy-performance trade-offs of various methods for distance computation (as well as similar geodesic problems such as azimuth and area computation). We test projections paired with cartesian computations, spherical-trigonometric computations and a number of ellipsoidal methods such as [Andoyer'65] and [Thomas'70] formulas, [Vincenty'75] iterative method, great elliptic arc's method, and [Karney'15] series approximation. We also show that some methods from the bibliography (e.g. [Tseng'15]) are neither faster nor more accurate compared to the above list of methods and thus become redundant. For our experiments we use the open source libraries Boost Geometry and GeographicLib.
Our results are of independent interest since we are not aware of a similar experimental study. More interestingly, they can be used as a reference for practitioners that want to use the most efficient method with respect to some given accuracy.
Geodesic computations (such as distance computations) apart from being a fundamental problem in computational geometry and geography/geodesy are also building blocks for many higher level algorithms such as k-nearest neighbour problems, line interpolation, densification of geometries, area and buffer, to name a few.
# References
* Some experimental results can be found here: https://github.com/vissarion/geometry/wiki/Accuracy-and-performance-of-geographic-algorithms
* A related talk (with some graphs on performance and accuracy) can be found here https://fosdem.org/2019/schedule/event/geo_boostgeometry
* The source code of most of the algorithms in the study is in Boost Geometry: https://github.com/boostorg/geometry; our study also includes GeographicLib: https://geographiclib.sourceforge.io
The Earth is not flat; but it's not round either (Geography on Boost.Geometry) (Vissarion Fisikopoulos)
What is a great circle, a loxodrome or a geodesic? What are the differences between them and which one is more suitable for each GIS application? This talk addresses this kind of questions and how geography is implemented in Boost.Geometry. The library that is currently being used to provide GIS support to MySQL.
Following up on the introductory talk on Boost.Geometry, we discuss the algorithmic and implementation perspectives, as well as the user perspective, of the development of geography in Boost.Geometry. We define basic geometric objects such as geodesics, and the modelling of the Earth as a sphere or ellipsoid. We examine the effect of different Earth models on the accuracy and speed of fundamental geometric algorithms (such as length, area, intersection, etc.) through particular examples. Finally, we take a look at the future of geography in Boost.Geometry.
Presentation of the paper "Combinatorics of 4-dimensional Resultant Polytopes" in International Symposium on Symbolic and Algebraic Computation (ISSAC) 2013
Immersive Learning That Works: Research Grounding and Paths Forward (Leonel Morgado)
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
The binding of cosmological structures by massless topological defects (Sérgio Sacani)
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
ESR spectroscopy in liquid food and beverages.pptx (PRIYANKA PATEL)
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for treating food to preserve it, and irradiation treatment is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not cause any harm to human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during its processing. The ESR spin trapping technique is useful for detecting highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin trapping technique.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf (Selcen Ozturkcan)
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
Authoring a personal GPT for your research and practice: How we created the Q... (Leonel Morgado)
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Travis Hills of MN is Making Clean Water Accessible to All Through High Flux ... (Travis Hills MN)
By harnessing the power of High Flux Vacuum Membrane Distillation, Travis Hills from MN envisions a future where clean and safe drinking water is accessible to all, regardless of geographical location or economic status.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... (Leonel Morgado)
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
1. Constructing Polytopes via a Vertex Oracle
Vissarion Fisikopoulos
Joint work with I.Z. Emiris, C. Konaxis (now U. Crete) and L. Peñaranda (now IMPA, Rio)
Department of Informatics, University of Athens
Mittagsseminar, ETH, Zurich, 12.Jul.2012
2. Main actor: resultant polytope
Geometry: Minkowski summands of secondary polytopes, equivalence classes of secondary vertices; they generalize Birkhoff polytopes
Motivation: useful to express the solvability of polynomial systems
Applications: discriminant and resultant computation, implicitization
of parametric hypersurfaces
[Figure: Enneper's minimal surface]
3. Existing work
Theory of resultants, secondary polytopes, Cayley trick [GKZ ’94]
TOPCOM [Rambau '02] computes all vertices of the secondary polytope.
[Michiels & Verschelde DCG'99] coarse equivalence classes of secondary polytope vertices.
[Michiels & Cools DCG'00] decomposition of Σ(A) into Minkowski summands, including N(R).
Tropical geometry [Sturmfels-Yu '08]: algorithms for the resultant polytope (GFan library) [Jensen-Yu '11] and the discriminant polytope (TropLi software) [Rincón '12].
4. What is a resultant polytope?
Given n + 1 point sets A0, A1, . . . , An ⊂ Z^n
[Figure: A0 = {a1, a2} and A1 = {a3, a4}]
5. What is a resultant polytope?
Given n + 1 point sets A0, A1, . . . , An ⊂ Z^n
A = ∪_{i=0}^{n} (Ai × {ei}) ⊂ Z^{2n}, where ei = (0, . . . , 1, . . . , 0) ∈ Z^n
[Figure: the Cayley embedding A: a1, a2 ∈ A0 map to (a1, 0), (a2, 0), and a3, a4 ∈ A1 map to (a3, 1), (a4, 1)]
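The Cayley embedding above can be sketched in a few lines of Python (an illustrative toy; `cayley` is my own helper name, not from the paper's code):

```python
def cayley(point_sets):
    """Cayley embedding: append to each point of A_i the height vector e_i.

    Following the slide's convention, the n + 1 height vectors live in Z^n
    (e_0 = 0, and e_1, ..., e_n the standard basis), so A lands in Z^{2n}.
    """
    A = []
    for i, Ai in enumerate(point_sets):
        for p in Ai:
            height = [1 if j == i else 0 for j in range(1, len(point_sets))]
            A.append(tuple(p) + tuple(height))
    return A

# two small supports in Z^1 (n = 1), embedded in Z^2
A0 = [(0,), (2,)]
A1 = [(0,), (1,), (2,)]
print(cayley([A0, A1]))   # -> [(0, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```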
6. What is a resultant polytope?
Given a triangulation T of conv(A), a cell is a-mixed if it contains 2 vertices from each Aj, j ≠ i, and a unique vertex a ∈ Ai.
[Figure: a triangulation of the Cayley embedding of A0 = {a1, a2} and A1 = {a3, a4}]
7. What is a resultant polytope?
ρT (a) = Σ vol(σ), the sum over all a-mixed cells σ ∈ T with a ∈ σ; ρT (a) ∈ N for each a ∈ A
Example: ρT = (0, 2, 1, 0)
[Figure: a triangulation of the Cayley embedding and the resulting vector ρT]
8. What is a resultant polytope?
Resultant polytope: N(R) = conv( ρT : T a triangulation of conv(A) )
[Figure: A and its resultant polytope N(R)]
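The map T ↦ ρT can be made concrete for n = 1 with a small Python sketch (a toy with my own choice of point sets and triangulation, not the slide's running example): a triangle in the Cayley embedding is a-mixed when it has two vertices from one set and a unique vertex a from the other, and ρT (a) accumulates its normalized volume.

```python
def vol2(p, q, r):
    """Normalized (integer) volume of a triangle in Z^2: |det(q-p, r-p)|."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))

def rho(points, owner, T):
    """rho_T(a) = sum of volumes of the a-mixed cells containing a.

    For n = 1 (two point sets), a triangle is a-mixed when it has two
    vertices from one set and a single vertex a from the other.
    """
    r = [0] * len(points)
    for cell in T:
        sets = [owner[v] for v in cell]
        for i in set(sets):
            if sets.count(i) == 1:          # unique vertex from A_i
                a = cell[sets.index(i)]
                r[a] += vol2(*(points[v] for v in cell))
    return r

# Cayley embedding of A0 = {0, 1} and A1 = {0, 1}: a unit square.
points = [(0, 0), (1, 0), (0, 1), (1, 1)]
owner  = [0, 0, 1, 1]
T = [(0, 1, 2), (1, 2, 3)]                  # one of its two triangulations
print(rho(points, owner, T))                # -> [0, 1, 1, 0]
```

The square's other triangulation gives the other vertex of N(R) for this toy input.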
9. Connection with Algebra
The support of a polynomial is the set of exponents of its monomials with non-zero coefficient.
The resultant R is the polynomial in the coefficients of a system of
polynomials which is zero iff the system has a common solution.
The resultant polytope N(R), is the convex hull of the support of R.
[Figure: A0, A1, and N(R)]
f0(x) = ax^2 + b
f1(x) = cx^2 + dx + e
R(a, b, c, d, e) = ad^2b + c^2b^2 − 2caeb + a^2e^2
10. Connection with Algebra
[Figure: A0, A1, A2 and N(R)]
f0(x, y) = ax + by + c
f1(x, y) = dx + ey + f
f2(x, y) = gx + hy + i
R(a, b, c, d, e, f, g, h, i) = det of the 3×3 coefficient matrix [[a, b, c], [d, e, f], [g, h, i]]
N(R) is the 4-dimensional Birkhoff polytope.
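The Birkhoff connection can be made explicit: R is the determinant of the 3×3 coefficient matrix, its six monomials come from the six permutations, and their exponent vectors are exactly the 3×3 permutation matrices, whose convex hull is the Birkhoff polytope. A short Python sketch (illustrative; variable names as on the slide):

```python
from itertools import permutations

# Coefficient names of the three linear forms, as a 3x3 array.
names = [['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]

support = []
for perm in permutations(range(3)):
    # one monomial of the determinant: pick column perm[row] in each row
    monomial = ''.join(names[row][perm[row]] for row in range(3))
    # its exponent vector, flattened row-major: a 3x3 permutation matrix
    exponents = tuple(1 if perm[r] == c else 0 for r in range(3) for c in range(3))
    support.append((monomial, exponents))

for m, e in support:
    print(m, e)
# -> the 6 monomials aei, afh, bdi, bfg, cdh, ceg; their exponent vectors
#    are the 3x3 permutation matrices (vertices of the Birkhoff polytope)
```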
11. Connection with Algebra
[Figure: A0, A1, A2 and N(R)]
f0(x, y) = a·xy^2 + b·x^4y + c
f1(x, y) = dx + ey
f2(x, y) = gx^2 + hy + i
It is NP-hard to compute the resultant in the general case.
12. The idea of the algorithm
Input: A ⊂ Z^{2n}, defined by A0, A1, . . . , An ⊂ Z^n
Simplistic method:
compute the secondary polytope Σ(A)
many-to-one relation between vertices of Σ(A) and N(R) vertices
Cannot enumerate 1 representative per class by walking on secondary
edges
13. The idea of the algorithm
Input: A ⊂ Z^{2n}, defined by A0, A1, . . . , An ⊂ Z^n
New Algorithm:
Vertex oracle: given a direction vector compute a vertex of N(R)
Output sensitive: computes only one triangulation of A per N(R)
vertex + one per N(R) facet
Computes projections of N(R) or Σ(A)
14. A basic tool for the oracle:
Regular triangulations of A ⊂ R^d are obtained by projecting the lower (or upper) hull of A lifted to R^{d+1} via a generic lifting function w ∈ R^{|A|}.
[Figure: two liftings of A, w = (2, 1, 4) and w = (2, 6, 4), giving two different regular triangulations]
If w is not generic then we construct a regular subdivision.
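The 1-dimensional case of this lifting construction can be sketched in a few lines of Python (a toy illustration, not the paper's implementation): lift each point a of A to (a, w(a)), take the lower hull, and project its edges back down. The two lifting vectors below are the ones from the slide.

```python
def lower_hull(pts):
    """Lower convex hull of 2D points (Andrew's monotone chain)."""
    hull = []
    for p in sorted(pts):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last point while it lies on or above segment hull[-2]->p;
            # the "on" case discards collinear points, so a non-generic w
            # yields the cells of a regular subdivision, not a triangulation
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def subdivision(A, w):
    """Project the lower hull of the lifted points back to the line:
    consecutive hull vertices bound the cells of the regular subdivision."""
    hull = lower_hull(list(zip(A, w)))
    xs = [x for x, _ in hull]
    return [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

A = [0, 1, 2]
print(subdivision(A, (2, 1, 4)))   # -> [(0, 1), (1, 2)]: a triangulation
print(subdivision(A, (2, 6, 4)))   # -> [(0, 2)]: point 1 is lifted above the hull
```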
15. The Vertex (Optimization) Oracle
Input: A ⊂ Z^{2n}, direction w ∈ R^{|A|}
Output: a vertex of N(R), extremal wrt w
1. use w as a lifting to construct regular subdivision S of A
[Figure: the lifting w selects a face of Σ(A); S is the induced regular subdivision]
16. The Vertex (Optimization) Oracle
Input: A ⊂ Z^{2n}, direction w ∈ R^{|A|}
Output: a vertex of N(R), extremal wrt w
1. use w as a lifting to construct regular subdivision S of A
2. refine S into triangulation T of A
[Figure: S refined into a triangulation T, within the face of Σ(A) selected by w]
17. The Vertex (Optimization) Oracle
Input: A ⊂ Z^{2n}, direction w ∈ R^{|A|}
Output: a vertex of N(R), extremal wrt w
1. use w as a lifting to construct regular subdivision S of A
2. refine S into triangulation T of A
3. return ρT ∈ N^{|A|}
[Figure: ρT is the vertex of N(R) extremal in direction w]
18. The Vertex (Optimization) Oracle
Input: A ⊂ Z^{2n}, direction w ∈ R^{|A|}
Output: a vertex of N(R), extremal wrt w
1. use w as a lifting to construct regular subdivision S of A
2. refine S into triangulation T of A
3. return ρT ∈ N^{|A|}
Lemma
The oracle's output is always a vertex of the target polytope, extremal wrt w.
20. Incremental Algorithm
Input: A
Output: H-rep. QH , V-rep. QV of Q = N(R)
1. initialization step
2. all hyperplanes of QH are illegal
[Figure: inner approximation Q of N(R)]
2 kinds of hyperplanes of QH:
legal if it supports a facet of N(R),
illegal otherwise
21. Incremental Algorithm
Input: A
Output: H-rep. QH , V-rep. QV of Q = N(R)
1. initialization step
2. all hyperplanes of QH are illegal
3. while ∃ illegal hyperplane H ⊂ QH with outer normal w do
call oracle for w and compute v, QV ← QV ∪ {v}
[Figure: extending an illegal facet of Q with outer normal w]
22. Incremental Algorithm
Input: A
Output: H-rep. QH , V-rep. QV of Q = N(R)
1. initialization step
2. all hyperplanes of QH are illegal
3. while ∃ illegal hyperplane H ⊂ QH with outer normal w do
call oracle for w and compute v, QV ← QV ∪ {v}
if v /∈ QV ∩ H then QH ← CH(QV ∪ {v}) else H is legal
[Figure: extending an illegal facet]
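The loop of slides 20-22 can be sketched in Python for a 2-dimensional target (purely illustrative: the oracle below is a stand-in that maximizes ⟨w, v⟩ over an explicit vertex list, whereas the paper's oracle is the triangulation-based one of slides 15-18, and ResPol maintains the hull in dimension m):

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """Full convex hull, counterclockwise (Andrew's monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for seq, p_iter in ((lower, pts), (upper, reversed(pts))):
        for p in p_iter:
            while len(seq) >= 2 and cross(seq[-2], seq[-1], p) <= 0:
                seq.pop()
            seq.append(p)
    return lower[:-1] + upper[:-1]

# Stand-in vertex oracle: the target polytope is given by an explicit
# vertex list, and the oracle just maximizes <w, v> over it.
TARGET = [(0, 0), (4, 0), (5, 2), (3, 5), (0, 3)]
def oracle(w):
    return max(TARGET, key=lambda v: w[0]*v[0] + w[1]*v[1])

def incremental():
    # initialization: oracle calls in a few directions give a full-dim. Q
    QV = {oracle(w) for w in [(1, 0), (-1, 0), (0, 1), (0, -1)]}
    while True:
        H = hull(list(QV))
        done = True
        for i in range(len(H)):               # each edge of Q, in CCW order
            a, b = H[i], H[(i + 1) % len(H)]
            w = (b[1]-a[1], a[0]-b[0])        # outer normal of edge a->b
            v = oracle(w)
            if cross(a, b, v) < 0:            # v strictly outside: extend
                QV.add(v)
                done = False
        if done:
            return H                          # every edge is legal: Q = target

print(incremental())
```

Each pass either certifies an edge as legal (the oracle returns a point on it) or extends Q with a new vertex, mirroring the extend/validate steps on the slides.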
23. Incremental Algorithm
Input: A
Output: H-rep. QH , V-rep. QV of Q = N(R)
1. initialization step
2. all hyperplanes of QH are illegal
3. while ∃ illegal hyperplane H ⊂ QH with outer normal w do
call oracle for w and compute v, QV ← QV ∪ {v}
if v /∈ QV ∩ H then QH ← CH(QV ∪ {v}) else H is legal
[Figure: validating a legal facet]
26. Incremental Algorithm
Input: A
Output: H-rep. QH , V-rep. QV of Q = N(R)
1. initialization step
2. all hyperplanes of QH are illegal
3. while ∃ illegal hyperplane H ⊂ QH with outer normal w do
call oracle for w and compute v, QV ← QV ∪ {v}
if v /∈ QV ∩ H then QH ← CH(QV ∪ {v}) else H is legal
[Figure: Q inside N(R)]
At any step, Q is an inner approximation of N(R) . . .
27. Incremental Algorithm
Input: A
Output: H-rep. QH , V-rep. QV of Q = N(R)
1. initialization step
2. all hyperplanes of QH are illegal
3. while ∃ illegal hyperplane H ⊂ QH with outer normal w do
call oracle for w and compute v, QV ← QV ∪ {v}
if v /∈ QV ∩ H then QH ← CH(QV ∪ {v}) else H is legal
[Figure: Q ⊆ N(R) ⊆ Qo]
At any step, Q is an inner approximation of N(R) . . . from which we can compute an outer approximation Qo.
28.-41. Incremental Algorithm
[Animation: the loop of slide 22 repeats, extending or validating facets, until every hyperplane of QH is legal and Q = N(R)]
42. Complexity
Theorem
We compute the Vertex- and Halfspace-representations of N(R), as well
as a triangulation T of N(R), in
O*( m^5 · |vtx(N(R))| · |T|^2 ),
where m = dim N(R) and |T| is the number of full-dimensional faces of T.
Elements of proof
Computation is done in dimension m = |A| − 2n + 1, although N(R) is defined in R^{|A|}.
.
At most |vtx(N(R))| + |fct(N(R))| oracle calls (Lemma 9).
Beneath-and-Beyond algorithm for converting the V-representation to the H-representation [Joswig '02].
43. ResPol package
C++ implementation
high-dimensional triangulations [Boissonnat, Devillers, Hornus]
extreme-point computation [Gärtner] (preprocessing step)
Hashing of determinantal predicates: optimizing sequences of similar determinants
http://sourceforge.net/projects/respol
48. Ongoing and future work
Extension of hashing determinants to CH computations (with L. Peñaranda) (to appear in ESA'12)
Combinatorial characterization of 4-dimensional resultant polytopes (with I.Z. Emiris, A. Dickenstein)
Computation of discriminant polytopes (with I.Z. Emiris, A. Dickenstein)
Membership oracles from vertex (optimization) oracles (with B. Gärtner)
References
The paper: “An output-sensitive algorithm for computing
projections of resultant polytopes.” in SoCG’12
The code: http://respol.sourceforge.net
49. The end. . .
(figure courtesy of M. Joswig)
Facet and vertex graph of the largest 4-dimensional resultant polytope
Thank You !