The document summarizes three theorems about balancing weights on boundaries and skeletons of polygons and polyhedra:
1. For any polygon or bounded region containing the origin, a given set of weights can be placed on the boundary so that their center of mass is at the origin, provided the largest weight does not exceed the sum of the remaining weights.
2. For any 3D polyhedron containing the origin, there exist three points on the boundary that form an equilateral triangle centered at the origin.
3. For any 3D bounded convex polyhedron containing the origin, there exist three points on the skeleton whose center of mass is the origin.
A Calabi-Yau manifold is a smooth space that represents a deformation which smooths out an orbifold singularity. This document discusses superstring theory and fermions in string theories. It introduces the spinning string action and shows that the Neveu-Schwarz model contains a tachyon ground state while the Ramond model contains massless fermions. Combining the two sectors using the Gliozzi-Scherk-Olive projection results in a model with N=1 supersymmetry in ten dimensions.
This document provides an overview of string theory and superstring theory. It discusses the following key points:
1) A Calabi-Yau manifold is a smooth space that is Ricci flat and represents a deformation that smooths out an orbifold singularity from a space-time perspective.
2) In the 1960s, particle physics was dominated by S-matrix theory, which focused on scattering matrix properties rather than fundamental fields. S-matrix theory assumed analyticity, crossing, and unitarity of scattering amplitudes.
3) Early string theory models treated particles as vibrating strings to address limitations of S-matrix theory for high spin particles. This led to the development of bosonic string theory and superstring theory.
This document provides an overview of quantum electrodynamics (QED). It begins by discussing cross sections and the scattering matrix, defining cross section as the effective size of target particles. It then derives an expression for cross section in terms of the transition rate and flux of incident particles. Next, it summarizes the derivation of the differential cross section and decay rate formulas in QED using relativistic quantum field theory and Feynman diagrams. It concludes by briefly reviewing the historical development of QED and the equivalence of the propagator approach and other formulations.
Analysis and Algebra on Differentiable Manifolds (Springer)
This chapter discusses integration on manifolds. It defines orientation of manifolds and orientation-preserving maps between manifolds. It presents Stokes' theorem and Green's theorem, which relate integrals over boundaries to integrals of differential forms. It introduces de Rham cohomology groups, which classify closed forms modulo exact forms. Examples are given of calculating the orientability and cohomology of manifolds like the cylinder, Möbius strip, and real projective plane.
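The central identity of the chapter, Stokes' theorem for a compact oriented $n$-manifold $M$ with boundary and a smooth $(n-1)$-form $\omega$, reads:

```latex
\int_{M} d\omega \;=\; \int_{\partial M} \omega
```

Green's theorem is the planar special case $M \subset \mathbb{R}^{2}$ with $\omega = P\,dx + Q\,dy$, and the closed-modulo-exact quotient this identity suggests is precisely the de Rham cohomology $H^{k}(M) = \ker d / \operatorname{im} d$.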
This document discusses Born reciprocity in string theory and how it relates to the nature of spacetime. It argues that while string theory is formulated in terms of maps into spacetime, this breaks Born reciprocity. The document suggests that quasi-periodicity of strings without assuming a periodic target spacetime better respects Born reciprocity. This leads to a phase space formulation of string theory without assuming locality of spacetime at short distances.
This document provides an outline of string theory. It begins with background on reductionism in physics and the unification of forces. String theory emerged as a way to address difficulties in quantizing gravity. There are five consistent superstring theories in 10 dimensions: type I, with unoriented open and closed strings; types IIA and IIB, closed-string theories each with two independent supersymmetries; and the two heterotic theories, which combine the bosonic and supersymmetric strings. String theory led to the discovery of supersymmetry and relates fundamental forces and particles to vibrational modes of strings.
The document discusses the concepts of rigidity and tensegrity structures. It defines rigidity as being determined by the materials, external forces, topology, and geometry of a structure. Tensegrity structures use a combination of tension and compression elements to achieve rigidity. The document outlines methods for analyzing rigidity, including using equilibrium stresses, stress matrices, and Henneberg transformations to build globally rigid structures.
This document discusses semi-transitive maps in dynamical systems. It begins by defining semi-transitive maps and showing that every transitive map is semi-transitive but not vice versa. It then proves several theorems about properties of semi-transitive maps, including that dense orbit implies semi-transitive for perfect spaces and semi-transitive implies dense orbit for separable second category spaces. It also shows that for infinite spaces, dense orbit and dense periodic points imply sensitive dependence on initial conditions for semi-transitive maps. The document aims to generalize results about topological transitivity to semi-transitive maps.
The document summarizes key aspects of the Standard Model of particle physics. It describes how the Standard Model accounts for fundamental particles like quarks and leptons, which interact via the four fundamental forces (gravitation, electromagnetism, the weak force, and the strong force), of which the Standard Model describes all but gravitation. These interactions are mediated by the exchange of spin-1 gauge bosons. The Standard Model has been very successful in explaining experimental observations, but questions remain, such as incorporating gravity and the origin of particle masses.
- The document discusses the 4/3 problem as it relates to the gravitational field of a uniform massive ball moving at constant velocity.
- It derives expressions for the gravitational field potentials both inside and outside the moving ball using the superposition principle and Lorentz transformations.
- Calculations show that the effective mass of the gravitational field found from the field energy does not equal the effective mass found from the field momentum, with a ratio of approximately 4/3, demonstrating that the 4/3 problem exists for gravitational fields as it does for electromagnetic fields.
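For context, the classical electromagnetic statement of the problem (a standard result, quoted here only for comparison) is that the field momentum of a slowly moving charged sphere is

```latex
\mathbf{p}_{\text{field}} \;=\; \frac{4}{3}\,\frac{E_{\text{field}}}{c^{2}}\,\mathbf{v},
```

so the "momentum mass" exceeds the "energy mass" by the factor 4/3; the calculation summarized above finds the same ratio for the gravitational field of the moving ball.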
This document discusses how nonlinear electrodynamics (NLED) may provide a mechanism for forming astrophysical charged black holes during gravitational core collapse supernovae. NLED allows the timescale for charge neutralization in a proto-neutron star to be longer than the gravitational collapse timescale, preventing neutralization and allowing the core to collapse while positively charged. This could result in a charged black hole. The document outlines the theoretical framework of NLED, how it exhibits an inherent repulsive action, and how this could keep the charge separation state of a proto-neutron star stalled long enough for gravitational collapse to occur, forming an astrophysical charged black hole.
Supersymmetry (SUSY) is a proposed symmetry between bosons and fermions that could help solve issues in the Standard Model such as the hierarchy problem. SUSY introduces new "quantum" dimensions beyond the usual 3 spatial and 1 time dimension. SUSY generators called Q transform fermions into bosons and vice versa. The SUSY algebra involves the generators Q satisfying anticommutation relations in addition to the usual commutation relations of generators like momentum P and angular momentum M. While experimental evidence for SUSY is still lacking, it is an attractive theoretical idea that may be discovered at energy scales below 1 TeV.
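In four-dimensional N=1 notation, the anticommutation relation referred to above takes the standard form

```latex
\{ Q_{\alpha}, \bar{Q}_{\dot{\beta}} \} \;=\; 2\,\sigma^{\mu}_{\alpha\dot{\beta}}\, P_{\mu},
\qquad
\{ Q_{\alpha}, Q_{\beta} \} = \{ \bar{Q}_{\dot{\alpha}}, \bar{Q}_{\dot{\beta}} \} = 0,
```

so two supersymmetry transformations compose to a translation, which is why $Q$ relates particles of different spin within a single multiplet.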
This document discusses uncertainty principles and their application to the double slit experiment. It summarizes Heisenberg's uncertainty principle and its limitations in describing position and momentum spreads. It then applies various uncertainty inequalities to analyze Bohr's argument that an interference pattern requires not knowing which slit particles pass through. Local uncertainty principles assert that low momentum uncertainty implies not only large position uncertainty, but low probability of localization. The document analyzes applying these principles to justify Bohr's response to Einstein's proposed resolution of the double slit ambiguity.
This document discusses the quantization of electromagnetic radiation fields within the framework of quantum electrodynamics (QED). It begins by introducing the classical description of radiation fields in terms of vector potentials that satisfy the transversality condition. Next, it describes quantizing the classical radiation field by treating it as a collection of independent harmonic oscillators, with each oscillator characterized by a wave vector and polarization. Finally, it discusses how the quantization of these radiation oscillators leads to treating their canonical variables as non-commuting operators, in analogy to the quantization of position and momentum in non-relativistic quantum mechanics. This lays the foundation for a quantum description of radiation phenomena using the formalism of QED.
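Concretely, promoting the mode amplitudes to operators gives each oscillator, labeled by wave vector $\mathbf{k}$ and polarization $\lambda$, the canonical commutation relations

```latex
[\, a_{\mathbf{k}\lambda},\, a^{\dagger}_{\mathbf{k}'\lambda'} \,] = \delta_{\mathbf{k}\mathbf{k}'}\,\delta_{\lambda\lambda'},
\qquad
[\, a_{\mathbf{k}\lambda},\, a_{\mathbf{k}'\lambda'} \,] = 0,
```

with the free-field Hamiltonian $H = \sum_{\mathbf{k}\lambda} \hbar\omega_{k}\left( a^{\dagger}_{\mathbf{k}\lambda} a_{\mathbf{k}\lambda} + \tfrac{1}{2} \right)$.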
This article continues the study of concrete algebra-like structures in our polyadic approach, where the arities of all operations are initially taken as arbitrary, but the relations between them, the arity shapes, are to be found from some natural conditions ("arity freedom principle"). In this way, generalized associative algebras, coassociative coalgebras, bialgebras and Hopf algebras are defined and investigated. They have many unusual features in comparison with the binary case. For instance, both the algebra and its underlying field can be zeroless and nonunital, the existence of the unit and counit is not obligatory, and the dimension of the algebra is not arbitrary, but "quantized". The polyadic convolution product and bialgebra can be defined, and when the algebra and coalgebra have unequal arities, the polyadic version of the antipode, the querantipode, has different properties. As a possible application to quantum group theory, we introduce the polyadic version of braidings, almost co-commutativity, quasitriangularity and the equations for the R-matrix (which can be treated as a polyadic analog of the Yang-Baxter equation). Finally, we propose another concept of deformation which is governed not by the twist map, but by the medial map, where only the latter is unique in the polyadic case. We present the corresponding braidings, almost co-mediality and M-matrix, for which the compatibility equations are found.
This document summarizes research on simplifying calculations of scattering amplitudes, especially for tree-level amplitudes. It introduces the spinor-helicity formalism for writing compact expressions for amplitudes. It then discusses color decomposition in SU(N) gauge theory and the Yang-Mills Lagrangian. Specific techniques explored include BCFW recursion relations, an inductive proof of the Parke-Taylor formula, the 4-graviton amplitude and KLT relations, multi-leg shifts, and the MHV vertex expansion. The goal is to develop recursion techniques that vastly simplify calculations compared to traditional Feynman diagrams.
Central forces are forces that point radially towards or away from a source and depend only on the distance from the source. For a central force, the angular momentum of a particle is conserved, and the particle's motion occurs in a single plane. By defining an effective potential energy, the two-dimensional problem can be reduced to an equivalent one-dimensional problem, simplifying the analysis. Solving the equations of motion yields the particle's position r and angle θ as functions of time t, or r as a function of θ, describing the trajectory.
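The reduction works because conservation of angular momentum $L = m r^{2}\dot\theta$ lets the angular kinetic energy be absorbed into an effective potential:

```latex
\tfrac{1}{2}\, m \dot{r}^{2} + V_{\text{eff}}(r) = E,
\qquad
V_{\text{eff}}(r) = V(r) + \frac{L^{2}}{2 m r^{2}},
```

after which $r(t)$ follows from one-dimensional energy conservation and $\theta(t)$ from $\dot\theta = L/(m r^{2})$.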
This document summarizes a proof that the prime numbers contain arbitrarily long arithmetic progressions. The proof has three main ingredients: 1) Szemerédi's theorem, which states that any subset of integers with positive density contains long arithmetic progressions; 2) a transference principle showing that pseudorandom sets also contain long progressions; 3) a result of Goldston and Yıldırım showing that almost all primes lie in a pseudorandom set. The document outlines how these ingredients can be combined to prove the main theorems that prime numbers contain arbitrarily long progressions and any subset of primes with positive density contains long progressions.
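For reference, the first ingredient can be stated as follows: for every $k \ge 3$ and $\delta > 0$ there is an $N_{0}(k,\delta)$ such that

```latex
N \ge N_{0},\quad A \subseteq \{1,\dots,N\},\quad |A| \ge \delta N
\;\;\Longrightarrow\;\;
A \text{ contains a } k\text{-term arithmetic progression.}
```

The transference principle then relaxes the positive-density hypothesis to density relative to a pseudorandom majorant, which is what admits the primes (a set of density tending to zero).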
In this lecture, I will describe how to calculate optical response functions using real-time simulations. In particular, I will discuss time-dependent Hartree (TD-Hartree), time-dependent density functional theory (TD-DFT), and similar approximations.
N. Bilic, "Hamiltonian Method in the Braneworld" (1/3), SEENET-MTP
This document provides an overview of the Hamiltonian method in braneworld cosmology. It begins with introductory lectures on preliminaries like Legendre transformations, thermodynamics, fluid mechanics, and basic cosmology. It then covers braneworld universes, strings and branes, and the Randall-Sundrum model. The document concludes with applications of the Hamiltonian method to topics like quintessence, dark energy/matter unification, and tachyon condensates in braneworlds.
The document provides instructions for a 5-hour theoretical physics competition with 3 questions. It details formatting requirements for working out the questions, including labeling pages with question number, page number, and total pages used. It also provides instructions for arranging the completed pages in proper order at the end. The first theoretical question is about vibrational modes in a linear crystal lattice model and includes parts on deriving the equation of motion, solving for mode frequencies and wave numbers, calculating average phonon energy, determining total crystal energy, and relating heat capacity to temperature. The second question considers a "rail gun" device constructed by a young man to launch himself across a strait to reach his love within 11 seconds. It involves deriving acceleration, calculating
N. Bilic, "Hamiltonian Method in the Braneworld" (2/3), SEENET-MTP
This document discusses braneworld models and the Randall-Sundrum model. It begins by introducing the relativistic particle and string actions used to describe dynamics in higher dimensions. It then summarizes the two Randall-Sundrum models: RS I contains two branes separated in a fifth dimension to address the hierarchy problem, while RS II has the negative tension brane sent to infinity and observers on a single positive tension brane. Finally, it derives the RS II model solution, using Gaussian normal coordinates and imposing junction conditions at the brane.
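The solution referred to is the warped metric (standard form, with the scale $k$ fixed by the bulk cosmological constant):

```latex
ds^{2} \;=\; e^{-2k|y|}\,\eta_{\mu\nu}\,dx^{\mu}dx^{\nu} \;+\; dy^{2},
```

where $y$ is the Gaussian normal coordinate transverse to the brane at $y=0$, and the junction conditions fix the brane tension in terms of $k$.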
This document discusses Bose-Einstein condensation controlled by a combination of trapping potentials, including a harmonic oscillator potential (HOP) along the x-axis and an optical lattice potential (OLP) along the y-axis. It analyzes how parameters in the HOP and OLP, such as the anisotropy parameter and q parameter, affect properties of the trapping potential, initial and final wave functions, and chemical potential. The study uses the Gross-Pitaevskii equation and Crank-Nicolson numerical method to solve for the wave function under different trapping conditions. Results show relationships between the chemical potential and the HOP anisotropy/OLP q parameter, as well as the distribution of
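The governing equation of the study is the time-dependent Gross-Pitaevskii equation (standard form; here $V$ stands for the combined HOP plus OLP trap and $g$ for the interaction strength):

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
\;=\; \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}) + g\,|\psi|^{2} \right] \psi,
```

whose stationary states $\psi = \phi\, e^{-i\mu t/\hbar}$ define the chemical potential $\mu$ tracked in the results.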
This document summarizes algorithms for solving the Range 1 Query (R1Q) problem. The R1Q problem involves preprocessing a d-dimensional binary matrix to answer queries about whether a given range contains a 1. The document presents:
1) A deterministic linear-space data structure that solves orthogonal R1Qs in constant time for any dimension d.
2) Randomized sublinear-space data structures that solve orthogonal R1Qs and provide a tradeoff between query time and error probability.
3) Deterministic near-linear space data structures that solve non-orthogonal R1Qs (e.g. for axis-parallel triangles and spheres) in constant time.
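For the orthogonal case in two dimensions, the flavor of a constant-time query can be illustrated with a prefix-sum sketch. This is the textbook inclusion-exclusion construction, shown only to fix the query model; the summarized data structures achieve linear or sublinear space with different techniques.

```python
# Sketch: 2D orthogonal R1Q ("does this rectangle contain a 1?")
# answered in O(1) time after O(n*m) preprocessing via 2D prefix sums.

def preprocess(matrix):
    n, m = len(matrix), len(matrix[0])
    # P[i][j] = number of 1s in the submatrix matrix[0..i-1][0..j-1]
    P = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            P[i + 1][j + 1] = (matrix[i][j] + P[i][j + 1]
                               + P[i + 1][j] - P[i][j])
    return P

def r1q(P, r1, c1, r2, c2):
    """True iff matrix[r1..r2][c1..c2] (inclusive) contains a 1."""
    ones = (P[r2 + 1][c2 + 1] - P[r1][c2 + 1]
            - P[r2 + 1][c1] + P[r1][c1])
    return ones > 0

M = [[0, 0, 1],
     [0, 0, 0],
     [1, 0, 0]]
P = preprocess(M)
print(r1q(P, 0, 0, 1, 1))  # False: top-left 2x2 block is all zeros
print(r1q(P, 0, 1, 2, 2))  # True: the range contains the 1 at (0, 2)
```

The O(n*m) table is what the space-efficient structures above avoid; the query itself is already constant-time here.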
This document describes a project to give 7- and 8-year-old children a space to exhibit their drawings, crafts, and paintings. The project consists of selecting the 5 best works from each grade to display them on a model board and exhibiting the rest on the walls of a third-year classroom so that teachers, families, and students can appreciate them and discover future artists.
El narcotráfico supone el comercio de sustancias tóxicas,que engloba la fabricación, distribución, venta, control de mercados, consumo y reciclaje de estupefacientes, adictivos o no, potencialmente dañinos para la salud. La mayoría de las legislaciones internacionales prohíben o limitan el narcotráfico, con penas que incluyen la ejecución por diversos medios,unque esto varía en función de la sustancia y de la legislación local.
This document proposes a new algorithmic framework called Cache-Oblivious Wavefront (COW) for parallelizing recursive dynamic programming algorithms. COW aims to improve parallelism without sacrificing cache efficiency. It does so by scheduling tasks for execution as soon as their real dependency constraints are satisfied, while still using the same recursive divide-and-conquer strategy as cache-optimal algorithms to maintain optimal cache performance. The document shows that COW can theoretically reduce the span of several important dynamic programming algorithms like Floyd-Warshall's algorithm and longest common subsequence, while keeping the total work and cache complexity optimal. Experimental results on real machines demonstrate a 3-5x speedup in running time and 10-20x improvement
This short document promotes creating presentations using Haiku Deck, a tool for making slideshows. It encourages the reader to get started making their own Haiku Deck presentation and sharing it on SlideShare. In a single sentence, it pitches the idea of using Haiku Deck to easily design presentations.
Luis Carlos Galán Sarmiento (Bucaramanga, 29 de septiembre de 1943 — Bogotá, 18 de agosto de 1989) fue un abogado, economista, periodista y político colombiano, candidato a la Presidencia de Colombia en 1982 por el Nuevo Liberalismo y en 1989 por el Partido Liberal Colombiano.
This document discusses the importance of adopting a gender approach to water resource management. It notes that women are primarily responsible for domestic water tasks in most societies but are often overlooked in water projects and management. Mainstreaming gender can lead to more efficient, effective, equitable and sustainable water systems. The document provides examples from various countries where integrating women in water management committees and decisions has improved cost recovery, hygiene, and sustainability of water infrastructure and services.
Moses Kisoi Mwangangi is a Kenyan professional with over 10 years of experience in business development, sales, customer relations, and project management. He has worked in roles at MTN Business, Supergroup of Companies, Safaricom, and the United Nations. He holds an MBA in Marketing and is pursuing further education. He is proficient in business strategy, sales, customer service, leadership, and computer skills. He aims to leverage his expertise and drive business growth through relationship-building and goal achievement.
The document describes the Pochoir stencil compiler, which allows programmers to write specifications for stencil computations in a domain-specific language embedded in C++. The Pochoir compiler then translates these specifications into high-performance parallel Cilk code using an efficient cache-oblivious algorithm called TRAP. Benchmark results show that the Pochoir-generated code runs 2-10 times faster than standard parallel loop implementations for a variety of stencil computations.
The researcher focuses on studying fundamental tradeoffs between cache-obliviousness, cache-optimality, and parallelism of algorithms and data structures. Their work combines theory and experiments on topics like stencil computation, dynamic programming, and numerical algorithms. Recent work showed that optimal time and cache complexity can be achieved simultaneously for problems like longest common subsequence via a "cache-oblivious wavefront" scheduling technique. Open questions remain about applying this approach more broadly and understanding tradeoffs between time and cache complexity.
This document summarizes three applications of commutative algebra to string theory. The first two applications involve interpreting certain products in topological field theory as Ext computations for sheaves on a Calabi-Yau manifold or in terms of matrix factorizations, which can be analyzed using computer algebra tools. The third application relates monodromy in string theory to solutions of differential equations, showing how monodromy can be described in terms of a computed ring.
This document discusses whether quantum mechanics is involved in the early evolution of the universe and if a Machian relationship between gravitons and gravitinos can help answer this question. It proposes that:
1) Gravitons and gravitinos carry information and their relationship, described as a Mach's principle, conserves this information from the electroweak era to today. This suggests quantum mechanics may not be essential in early universe formation.
2) A minimum amount of initial information, such as a small value for Planck's constant, is needed to set fundamental cosmological parameters and could be transferred from a prior universe.
3) Early spacetime may have had a pre-quantum state with low entropy and degrees of freedom
physics430_lecture06. center of mass, angular momentumMughuAlain
This document summarizes a physics lecture on center of mass and angular momentum. It defines center of mass as a weighted average of particle positions, and shows that the center of mass of a system moves as if external forces act at that point. It also defines angular momentum for single and multiple particles, and shows that angular momentum is conserved for a system with no external torques. Key equations include definitions of center of mass, angular momentum, and the relationship between angular momentum and torque.
1) The document provides an overview of the contents of Part II of a slideshow on modern physics, which covers topics such as charge and current densities, electromagnetic induction, Maxwell's equations, special relativity, tensors, blackbody radiation, photons, electrons, scattering problems, and waves.
2) It aims to provide a brief yet modern review of foundational concepts in electromagnetism and set the stage for introducing special relativity, quantum mechanics, and matter waves for undergraduate students.
3) The overview highlights that succeeding chapters will develop tensor formulations of electromagnetism and special relativity from first principles before discussing applications like blackbody radiation and early quantum models.
FDM is an older method than FEM that requires less computational power but is also less accurate in some cases where higher-order accuracy is required. FEM permit to get a higher order of accuracy, but requires more computational power and is also more exigent on the quality of the mesh.29-Jun-2017
https://feaforall.com › difference-bet...
What's the difference between FEM and FDM? - FEA for All
Feedback
About featured snippets
The document proposes a new infinity differentiable weight function, Wε(r), for smoothed particle hydrodynamics (SPH) methods. Wε(r) is based on a smeared-out Heaviside function and satisfies properties required of weight functions, including compact support, continuity, and asymptotic unity. The consistency and estimation errors of SPH interpolation using Wε(r) are analyzed, showing it provides C1 consistency and O(h4) order accuracy. The new weight function is developed to address limitations of existing weight functions like cubic and quartic splines in SPH approaches.
This document summarizes John Obuch's paper on modeling crystallization processes in one dimension. It presents four models of crystal growth: (1) the classic Avrami model where crystals grow at a constant rate until impinging on neighbors, (2) a modified model where growth stops when crystals are a distance Δ apart, (3) a model where growth stops at a fixed width, and (4) a combined model of (2) and (3). It derives an expression for the expected crystallized volume V(t) at time t using the Beta distribution of gap sizes between nucleation sites. The document then describes simulating the models by randomly placing nucleation sites and calculating the critical times when crystals impinge.
The document discusses Markov chains and their relationship to random walks on graphs and electrical networks. Some key points:
- A Markov chain is a process that transitions between a finite set of states based on transition probabilities that depend only on the current state.
- For a strongly connected Markov chain, there exists a unique stationary distribution that the long-term probabilities of the chain converge to, regardless of the starting state.
- Random walks on undirected graphs can be modeled as Markov chains, where the transition probabilities are proportional to edge conductances in an analogous electrical network. The stationary distribution of such a random walk is proportional to vertex degrees or conductances.
In this paper, the underlying principles about the theory of relativity are briefly introduced and reviewed. The mathematical prerequisite needed for the understanding of general relativity and of Einstein field equations are discussed. Concepts such as the principle of least action will be included and its explanation using the Lagrange equations will be given. Where possible, the mathematical details and rigorous analysis of the subject has been given in order to ensure a more precise and thorough understanding of the theory of relativity. A brief mathematical analysis of how to derive the Einstein’s field’s equations from the Einstein-Hilbert action and the Schwarzschild solution was also given.
This document summarizes a physics lecture on oscillations. It begins by reviewing Hooke's law and how it relates the force from a spring to displacement. It then shows that Hooke's law applies to small displacements from any equilibrium point using a Taylor series expansion. Simple harmonic motion is introduced as oscillatory motion governed by Hooke's law. The solutions to the differential equation for simple harmonic motion are derived and expressed in terms of sine and cosine functions. Examples are given of a mass on a spring and a bottle floating in water to illustrate simple harmonic oscillations. Energy considerations are also discussed showing how potential and kinetic energy oscillate out of phase during simple harmonic motion.
This document discusses wave equations and their applications. It contains 6 chapters that cover basic results of wave equations, different types of wave equations like scalar and electromagnetic wave equations, and applications of wave equations. The introduction explains that the document contains details about different types of wave equations and their applications in sciences. Chapter 1 discusses basic concepts like relativistic and non-relativistic wave equations. Chapter 2 covers the scalar wave equation. Chapter 3 is about electromagnetic wave equations. Chapter 4 is on applications of wave equations.
This document summarizes the topological structure of ternary residue curve maps, which describe the dynamics of ternary distillation processes. It introduces differential equations that model ternary distillation and place a meaningful structure on ternary phase diagrams. By recognizing this structure is subject to the Poincaré-Hopf index theorem, the authors obtained a topological relationship between azeotropes and pure components in ternary mixtures. This relationship provides useful information about ternary mixture distillation behavior and predicts situations where ternary azeotropes cannot occur.
This document discusses computing fundamental numbers of the variety of nodal cubics in projective 3-space (P3). It constructs several compactifications of this variety, which can be obtained as a sequence of blow-ups of a projective bundle Knod. The numbers computed include intersection numbers involving characteristic conditions like points, lines, and tangency to planes, as well as the condition that the node lies on a plane. The computations are carried out using the Wit symbolic computation system.
An Elementary Proof of Kepler's Law of Ellipses Arpan Saha
This talk, given as a part of the Annual Seminar Weekend 2010, IIT Bombay, was centered around the hodograph
proof of Kepler’s Law of Ellipses independently discovered by Maxwell, Hamilton and Feynman.
This document is a research essay presented by Canlin Zhang to the University of Waterloo in partial fulfillment of the requirements for a Master's degree in Pure Mathematics. The essay introduces some basic concepts in operator theory and their connections to quantum computation and information. It discusses topics such as quantum algorithms, quantum channels, quantum error correction, and noiseless subsystems. The essay is divided into six sections that cover these topics at a high-level introduction.
This course covers advanced quantum mechanics for graduate students. It aims to help students gain a deeper foundation in quantum mechanics through topics like the Schrödinger equation, particle in a box, harmonic oscillator, hydrogen atom, angular momentum, and approximation methods. The key objectives are to understand quantum theory and apply it to important physical systems, and recognize the necessity of quantum methods in atomic and nuclear physics. Some specific topics covered include the wave function, Born's statistical interpretation, probability, normalization, momentum, uncertainty principle, and the time-dependent and time-independent Schrödinger equations.
This document summarizes part of a paper by D.G. Northcott on the notion of a first neighbourhood ring, with an application to the Af+Bφ theorem. It introduces the concept of a first neighbourhood ring and superficial elements. The key points are:
1) It defines a superficial element as one where amv+s = mv+s for large v, generalizing the definition to allow zero divisors.
2) It proves three properties are equivalent for an element a to be superficial: the form ideal no:(φ) is the isolated component no, amv = mv+s for large v, and nv+s:(a) = mv for large v.
3) It shows
1) A new cosmological model is proposed where the universe is spontaneously created from nothing via quantum tunneling into a de Sitter space.
2) After tunneling, the model evolves according to the inflationary scenario, avoiding the big bang singularity and not requiring initial conditions.
3) The model suggests that the universe was created via quantum tunneling from a state of literally nothing into a de Sitter space, which then evolved into the expanding universe we observe according to known physics.
This document summarizes Chapter 2 of a textbook on functional analysis in mechanics. It introduces Sobolev spaces, which are function spaces used to model mechanical problems. Sobolev spaces allow for generalized notions of derivatives of functions. The chapter discusses imbedding theorems for Sobolev spaces, which describe how functions in one Sobolev space can be mapped continuously or compactly to other function spaces. It provides examples of imbedding properties for specific Sobolev spaces over different domains.
Weight Balancing on Boundaries and Skeletons
Luis Barba
Boursier FRIA du FNRS, ULB,
Brussels, Belgium, and School of
Computer Science, Carleton
University, Ottawa, Canada
Otfried Cheong
Department of Computer Science,
KAIST, Daejeon, Korea
Jean-Lou De Carufel
School of Computer Science,
Carleton University, Ottawa,
Canada
Michael Gene Dobbins
GAIA, POSTECH, Pohang, Korea
Rudolf Fleischer
IIPL and SCS, Fudan University,
Shanghai, P. R. China, and CS,
GUtech, Muscat, Oman
Akitoshi Kawamura
Department of Computer Science,
University of Tokyo, Tokyo, Japan
Matias Korman
National Institute of Informatics,
Tokyo, Japan, and JST ERATO
Kawarabayashi Project, Japan
Yoshio Okamoto
Department of Communication
Engineering and Informatics,
University of Electro-
Communications, Tokyo, Japan
János Pach
EPFL, Lausanne, Switzerland and
Rényi Institute, Budapest,
Hungary
Yuan Tang
Software School, Shanghai Key
Laboratory of Intelligent
Information Processing, Fudan
University, Shanghai, P. R. China
Takeshi Tokuyama
Graduate School of Information
Sciences, Tohoku University,
Sendai, Japan
Sander Verdonschot
School of Computer Science,
Carleton University, Ottawa,
Canada
Tianhao Wang
Software School, Shanghai Key
Laboratory of Intelligent
Information Processing, Fudan
University, Shanghai, P. R. China
ABSTRACT
Given a polygonal region containing a target point (which
we assume is the origin), it is not hard to see that there are
two points on the perimeter that are antipodal, i.e., whose
midpoint is the origin. We prove three generalizations of this
fact. (1) For any polygon (or any bounded closed region with
connected boundary) containing the origin, it is possible to
place a given set of weights on the boundary so that their
barycenter (center of mass) coincides with the origin, provided that the largest weight does not exceed the sum of the
other weights. (2) On the boundary of any 3-dimensional
bounded polyhedron containing the origin, there exist three
points that form an equilateral triangle centered at the origin. (3) On the 1-skeleton of any 3-dimensional bounded
convex polyhedron containing the origin, there exist three
points whose center of mass coincides with the origin.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are not
made or distributed for profit or commercial advantage and that copies bear
this notice and the full citation on the first page. Copyrights for components
of this work owned by others than ACM must be honored. Abstracting with
credit is permitted. To copy otherwise, or republish, to post on servers or to
redistribute to lists, requires prior specific permission and/or a fee. Request
permissions from Permissions@acm.org.
SoCG’14, June 8–11, 2014, Kyoto, Japan.
Copyright 2014 ACM 978-1-4503-2594-3/14/06 ...$15.00.
Categories and Subject Descriptors
G.2.0 [Discrete Mathematics]: General; I.3.5
[Computer Graphics]: Computational Geometry
and Object Modeling
General Terms
Algorithms, Theory
Keywords
Antipodal pair, balancing, barycenter, Borsuk-Ulam theorem, Minkowski sum, skeleton of polyhedra
1. INTRODUCTION
We will discuss three generalizations of the following observation (in this paper, a polygon or a polyhedron is always
closed and bounded).
Theorem 0. On the perimeter of any polygon containing the origin, there are two points that are antipodal, i.e.,
whose midpoint is the origin.
In other words, we have
2P ⊆ ∂P ⊕ ∂P
for any polygon P, where ∂P denotes its boundary, A⊕B =
{ x + y | x ∈ A, y ∈ B } is the Minkowski sum of regions
A and B, and αA = { αx | x ∈ A } is the copy of A scaled
(about the origin) by a real number α.
Proof of Theorem 0. Consider −P, the copy of the given polygon P reflected about the origin. Since P and −P have the same area, neither can be properly contained in the other; as both contain the origin, their boundaries intersect at some point q ∈ ∂P ∩ (−∂P). Then q and −q form the desired pair of points.
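For convex P, the intersection point q ∈ ∂P ∩ (−∂P) from this proof can be found numerically by bisection on the radial distance function, since q lies on both boundaries exactly when the boundary is equidistant from the origin in two opposite directions. A minimal Python sketch under that convexity assumption (the names `radial` and `antipodal_pair` are ours, not from the paper):

```python
import math

def radial(poly, theta):
    """Distance from the origin to the boundary of a convex polygon
    (counter-clockwise vertex list containing the origin) along the
    direction theta, by ray/edge intersection."""
    dx, dy = math.cos(theta), math.sin(theta)
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        ex, ey = poly[(i + 1) % n][0] - ax, poly[(i + 1) % n][1] - ay
        den = dx * ey - dy * ex              # cross(d, e)
        if abs(den) < 1e-15:
            continue                         # ray parallel to this edge
        t = (ax * ey - ay * ex) / den        # ray parameter: cross(A, e)/cross(d, e)
        s = (ax * dy - ay * dx) / den        # edge parameter: cross(A, d)/cross(d, e)
        if t > 0 and -1e-12 <= s <= 1 + 1e-12:
            return t
    raise ValueError("origin not inside polygon?")

def antipodal_pair(poly, tol=1e-12):
    """A point q with q and -q both on the boundary: bisect on
    g(theta) = radial(theta) - radial(theta + pi), which changes sign
    between 0 and pi because g(theta + pi) = -g(theta)."""
    g = lambda th: radial(poly, th) - radial(poly, th + math.pi)
    lo, hi = 0.0, math.pi
    glo = g(lo)
    if abs(glo) < tol:
        return (radial(poly, lo), 0.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == (glo > 0):
            lo = mid
        else:
            hi = mid
    th = 0.5 * (lo + hi)
    r = radial(poly, th)
    return (r * math.cos(th), r * math.sin(th))
```

This is exactly the d = 1 Borsuk-Ulam argument mentioned later in the introduction, with f(x) taken to be the distance from the origin to ∂P in direction x.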
Distinct weights
An interpretation of Theorem 0 is that we can put two equal
weights on the perimeter and balance them about the origin.
Generalizing this to different sets of weights, we prove the
following in Section 2 (note that this subsumes Theorem 0).
Theorem 1. Suppose that k weights w1 ≥ w2 ≥ · · · ≥ wk
satisfy w1 ≤ w2 + · · · + wk. Then for any polygon (or any
region enclosed by a Jordan curve) P ⊆ R^2 containing the
origin, the weights can be placed on the boundary ∂P so that
their center of mass is the origin.
In terms of the Minkowski sum, the theorem says that
(w1 + · · · + wk)P ⊆ w1∂P ⊕ w2∂P ⊕ · · · ⊕ wk∂P
if none of the weights is bigger than the sum of the rest.
If P is the unit disk, Theorem 1 is related to a reachability
problem of a chain of links (or a robot arm) of lengths w1,
w2, . . . , wk where one end is placed at the origin, each link
can be rotated around the joints, and the links are allowed
to cross each other. In order to reach every point of the disk
of radius w1 +· · ·+wk centered at the origin, it is well known
(e.g., [4]) that the condition w1 ≤ w2 + · · · + wk is sufficient
(and necessary). Theorem 1 generalizes this to arbitrary P.
We will also give efficient algorithms to find such a location
of points. On the other hand, if we drop the condition w1 ≤
w2 + · · · + wk, then the conclusion does not hold in general
(just let P be a disk centered at the origin), and it is NP-
complete to decide whether it holds for a given polygon P.
Tripodal points
The other two results concern the 3-dimensional (or higher)
setting, where instead of a polygon we are given a polyhedron. Generalizing the notion of antipodal points in Theorem 0, we prove the following in Section 3.
Theorem 2. On the boundary of any 3-dimensional polyhedron containing the origin, there are tripodal points, i.e.,
three points forming an equilateral triangle centered at the
origin.
A classical problem reminiscent of Theorem 2 is the square
peg problem of Toeplitz. Given a closed curve in a plane,
the problem asks for a location of four vertices of a square
on it. It was conjectured by Otto Toeplitz in 1911 that
every Jordan curve contains such four points. Although it
is still open for general Jordan curves, it has been affirmatively solved for curves with some smoothness conditions.
As a variant of this problem, Meyerson [8] and Kronheimer
and Kronheimer [5] proved that for any triangle T and any
Jordan curve C, we can find three points on C forming the
vertices of a triangle similar to T (note the contrast to our Theorem 2, where we require the triangle to be equilateral). See the recent survey by Matschke [7] on these problems.
Weights on the skeleton
By viewing Theorem 0 again as balancing of two equal
weights, we can consider another generalization to polyhedra, asking whether we can put (equal) weights so that their
barycenter is the origin. Note, however, that this is not very
interesting if we are allowed to put them anywhere on the
surface of the polyhedron: we can then cut the polyhedron
by any plane through the origin and apply the 2-dimensional
Theorem 1 to (a connected component of) the section, showing that this is possible for any number of equal weights. The
question becomes interesting if we restrict the weights to lie
on the edges of the polyhedron.
Theorem 3. On the edges of any 3-dimensional bounded
convex polyhedron containing the origin, there exist three
points whose barycenter coincides with the origin.
In other words,
3P ⊆ S1(P) ⊕ S1(P) ⊕ S1(P),
where S1(P) denotes the 1-skeleton of a convex polyhedron P.
In fact, we will prove in Section 4 that the same thing
(with d weights) is true in dimension d when d is the product
of a power of 2 and a power of 3. We conjecture that this
is true for all d and for non-convex polyhedra, but this is
left for future work. It may also be worth trying to combine
this with our first question in this paper, asking whether we
have something similar to Theorem 3 for distinct weights.
Related work
Bringing the center of mass to a desired point by putting
counterweights is a common technique for reduction of vibrations in mechanical engineering [1]. There have been
studies on Minkowski operations considering boundary of
objects (e.g., Ghosh and Haralick [3]), but our paper seems
to be the first to deal with the general question of covering
the body with convex linear combinations of the boundary.
We note that another generalization of Theorem 0 is the
Borsuk-Ulam theorem, which has many applications in discrete and computational geometry [6]. It states that every continuous function f : S^d → R^d on the d-dimensional sphere (i.e., the boundary of the (d + 1)-dimensional ball) centered at the origin has a point x such that f(x) = f(−x). When
d = 1, this implies Theorem 0 for convex P if we define f(x)
as the distance from the origin to ∂P in the direction x.
A variant of the Borsuk-Ulam theorem is also used in our proof of Theorem 3. We hope to discover more relations of
our observations to the Borsuk-Ulam theorem.
2. DISTINCT WEIGHTS
We prove Theorem 1. Let P be a polygon. First, put the
biggest weight w1 at a nearest point p on ∂P from the origin,
and all other weights at one point so that the barycenter of
all weights is at the origin. This means putting the weights
w2, . . . , wk at the point −p · w1/(w2 + . . . + wk), which is
inside P by the assumption w1 ≤ w2 + · · · + wk.
Starting at this configuration, let w1 run along the perimeter ∂P, while moving w2 accordingly so that the barycenter of all weights remains at the origin (all other weights are held fixed). This means moving w2 along a copy of ∂P magnified by −w1/w2. Since this is at least as big as ∂P, and w2 initially lies inside P, it must come out at some point,
giving a configuration where w1 and w2 are both on ∂P. See Figure 1 for illustration.

Figure 1: Second paragraph of the proof of Theorem 1. If the weights w1 and w2 are initially at q1 ∈ ∂P and q2 ∈ P, and w1 moves along ∂P, then to keep the barycenter fixed, w2 must move along a magnified (and reflected) copy of ∂P, so that it hits ∂P at some point q2′.
Now fix w1 at this point and let w2 run along ∂P, while
moving w3 accordingly so that the barycenter of all points
remains at the origin. This means moving w3 along a copy
of ∂P magnified by −w2/w3. Since this is at least as big as
∂P, and w3 initially lies inside P, it must come out at some
point, giving a configuration where w1, w2, w3 are on ∂P.
Repeating this, we can bring all weights to ∂P. This
proves Theorem 1.
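For the special case where P is the unit disk, the incremental argument above can be simulated directly: at each step one weight walks along the circle while the next weight compensates to keep the barycenter at the origin, and bisection locates the moment the compensating weight reaches the boundary. A Python sketch of this, with our own naming (it assumes the weights are non-degenerate, so that each crossing is transversal):

```python
import math

def balance_on_circle(weights, samples=4096, tol=1e-13):
    """Follow the proof of Theorem 1 for P = the unit disk: keep the
    barycenter at the origin while walking one weight along the circle
    until the compensating weight reaches the circle, one step per weight.
    Returns (weight, position) pairs with all positions on the circle."""
    w = sorted(weights, reverse=True)
    k, total = len(w), sum(weights)
    assert w[0] <= total - w[0], "largest weight must not exceed the rest"
    # initial configuration: w[0] on the circle, the rest at one inner point
    pos = [(1.0, 0.0)] + [(-w[0] / (total - w[0]), 0.0)] * (k - 1)

    def compensator(s, p):
        # where w[s+1] must sit so the barycenter stays at the origin,
        # given w[s] at p and every other weight fixed at pos[i]
        fx = sum(w[i] * pos[i][0] for i in range(k) if i not in (s, s + 1))
        fy = sum(w[i] * pos[i][1] for i in range(k) if i not in (s, s + 1))
        return (-(fx + w[s] * p[0]) / w[s + 1], -(fy + w[s] * p[1]) / w[s + 1])

    for s in range(k - 1):
        phi0 = math.atan2(pos[s][1], pos[s][0])
        point = lambda phi: (math.cos(phi), math.sin(phi))
        g = lambda phi: math.hypot(*compensator(s, point(phi))) - 1.0
        # w[s+1] starts inside the disk (g <= 0); scan until it comes out
        step = 2.0 * math.pi / samples
        j = next(j for j in range(1, samples + 1) if g(phi0 + j * step) > 0)
        lo, hi = phi0 + (j - 1) * step, phi0 + j * step
        while hi - lo > tol:                 # bisect with g(lo) <= 0 < g(hi)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) <= 0 else (lo, mid)
        pos[s] = point(lo)
        pos[s + 1] = compensator(s, pos[s])
    return list(zip(w, pos))
```

The scan-then-bisect step is the numerical counterpart of "it must come out at some point": the compensating weight traces a circle at least as large as the unit circle, so the radial function g changes sign somewhere along one revolution.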
In this proof, we moved points “along” the boundary ∂P
(or its copies) and argued that they must “come out” of P.
This may sound as if we used the fact that ∂P is a curve,
but a closer look at the proof shows that the theorem holds
as long as (P is a closed bounded set and) ∂P is connected.
Algorithmic aspects
We consider the computational problem that corresponds
to Theorem 1: Given a region P and a set of k weights, we
want to determine whether we can balance the weights by
putting them on ∂P, and if so, find such a location. We
restrict ourselves to the case where P is a simple polygon
with n vertices, and design algorithms in terms of n and k.
If none of the weights exceeds the sum of the others, the
proof of Theorem 1 implies a polynomial-time algorithm. In
order to replace qs and qs+1 with a pair of boundary points q′s and q′s+1 (Figure 1), we need to find an intersection point
of ∂P and its reflected and scaled copy as shown in the proof.
This can be done in O(n log n) time. The initial location of
the largest weight can be found in O(n) time. Thus, we have
an O(kn log n) time algorithm.
We can design a faster algorithm as follows. We greedily
divide the weights into three groups so that no group weighs
more than the sum of the rest. This is always possible in
O(k) time (as long as no single weight exceeds the sum of
the others). Thus, we have an instance for k = 3, which we
solve in O(n log n) time. This gives an O(k + n log n)-time algorithm, although the output may look a little artificial since all weights will be located at (at most) three points.

Figure 2: A hard instance of the balancing location problem.
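The greedy division into three groups can be sketched as follows; since this version sorts first it runs in O(k log k) rather than the O(k) claimed for pre-sorted input, and the grouping invariant (no group heavier than the sum of the other two) is checked at the end:

```python
def three_groups(weights):
    """Split weights into three groups, none heavier than the sum of the
    other two (possible whenever no single weight exceeds the sum of the
    rest).  Greedy: descending order, always into the lightest group."""
    w = sorted(weights, reverse=True)
    total = sum(w)
    assert w[0] <= total - w[0], "largest weight must not exceed the rest"
    groups, loads = ([], [], []), [0.0, 0.0, 0.0]
    for x in w:
        i = loads.index(min(loads))            # lightest group so far
        groups[i].append(x)
        loads[i] += x
    assert all(2 * l <= total for l in loads)  # each group <= sum of the others
    return groups
```

Why the greedy step is safe: the first three weights go into empty groups and each is at most half the total; for every later weight x, the lightest group holds at most a third of what has been placed, and since at least three earlier weights are each at least x, the new load still cannot exceed half the total.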
If we are given a set of an unknown number of weights, which may contain a weight exceeding the sum of the rest, the problem becomes NP-complete.
Proposition 1. There exists a polygon P ⊆ R^2 containing the origin such that it is NP-complete to determine if
a given set of weights can be placed on the boundary ∂P so
that their barycenter is at the origin.
Proof. The problem is clearly in NP, and we prove its
NP-hardness by reducing the PARTITION problem to it.
The input of the PARTITION problem is a set of N nonnegative integers a1, a2, . . . , aN, and it asks whether there is a subset X ⊂ {1, . . . , N} such that

∑_{i∈X} ai = ∑_{i∈{1,...,N}\X} ai.
Without loss of generality, we assume a1 ≥ a2 ≥ · · · ≥ aN
(this can be done by reordering the weights if necessary).
We transform the problem into a weight balancing problem
as follows: we set k = N + 1, wi = ai−1 for i = 2, 3, . . . , k,
and w1 = 2 ∑_{i=2}^{k} wi.
Let P be the non-convex polygon with vertices (0, 1),
(2, 2), (2, −2), (0, −1), (−2, −2), and (−2, 2) (see the left
picture of Figure 2). Note that P = −P and −2P contains
the convex hull conv(P) of P. Moreover, the two reflex vertices of −2P are the only points of 2P ∩ conv(P), and each
of the points is the midpoint of an edge of conv(P) as shown
in the right picture.
Observe that the only possible location q1 of weight w1
should be one of the two reflex vertices since its reflection
−2q1 should be written as a convex combination of other
points on ∂P, and hence contained in conv(P). That is,
−2q1 lies on the midpoint of an edge of conv(P) (without
loss of generality, we may assume it is the edge e from (−2, 2)
to (2, 2)). In particular, there is a solution if and only if we
can place the remaining points in a way that their barycenter
lies on the midpoint of e.
Since the new target point lies on the edge e of conv(P),
the only possible location for the remaining points is (−2, 2)
or (2, 2). Moreover, the barycenter becomes the midpoint
if and only if the weights are equally divided. Thus, the
PARTITION problem is reduced to the balancing location
problem. Since PARTITION is NP-complete, we conclude
that detecting the existence of a balancing location is also
NP-complete.
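The instance mapping in this reduction is a one-liner; the `has_partition` helper below is a brute-force PARTITION oracle we add only to illustrate the decision correspondence, not part of the proof:

```python
from itertools import combinations

def partition_to_balancing(a):
    """The reduction from the proof of Proposition 1: a PARTITION
    instance a1, ..., aN becomes a balancing instance with k = N + 1
    weights, w1 = 2*(a1 + ... + aN) and w_{i+1} = a_i."""
    a = sorted(a, reverse=True)
    return [2 * sum(a)] + a

def has_partition(a):
    """Brute-force PARTITION check (exponential; for illustration only)."""
    total = sum(a)
    return total % 2 == 0 and any(
        2 * sum(c) == total
        for r in range(len(a) + 1)
        for c in combinations(a, r))
```

The oversized weight w1 forces its location onto a reflex vertex of P, and the remaining weights can balance it exactly when the PARTITION instance is solvable.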
3. TRIPODAL POINTS
In this section, we consider a 3-dimensional (bounded,
closed, not necessarily convex) polyhedron P, and prove
Theorem 2, which states that there are tripodal points on
the boundary ∂P. Note that tripodal points are a natural
analogue of antipodal points: saying that three points are
tripodal is equivalent to requiring that they are at the same
distance from the origin and their barycenter is the origin.
Let p0 and p1 be a nearest and a farthest point on ∂P,
respectively, from the origin o. They exist because ∂P is
compact. Consider a simple piecewise-linear path L from
p0 to p1 on ∂P, parametrized by a one-to-one continuous
function γ : [0, 1] → L such that γ(0) = p0 and γ(1) = p1.
We claim that there exist three points a ∈ L, b ∈ ∂P, and
c ∈ ∂P that are tripodal. For each q ∈ L, let H(q) be the
set of vectors perpendicular to the line through o and q. We
use the following fact.
Lemma 1. There exists a continuous piecewise algebraic function v : L → S^2 such that v(q) ∈ H(q) for all q ∈ L.
Proof. Let S be a set of points on L that contains both
endpoints and all the joints of L (recall that L is piecewise
linear). We first set v(u) to an arbitrary unit vector in H(u) at each point u of S.
Thus, it suffices to define v on the line segment e = uv
spanned by a consecutive pair of points u, v in S along L.
By the choice of S, e is contained in L. We parametrize
e by x : [0, 1] → e such that x(0) = u and x(1) = v. For every t ∈ [0, 1], let h(x(t)) be the projection of (1 − t)v(u) + tv(v) to H(x(t)). If h(x(t)) is not the zero vector, we scale h(x(t)) to unit length and define the result as v(x(t)). Since h(x(t)) becomes zero only if the vector (1 − t)v(u) + tv(v) is perpendicular to H(x(t)), a suitably large choice of S can
ensure that h(x(t)) is not zero for any t ∈ [0, 1]. We can
easily check that v is continuous and piecewise algebraic.
We fix such a function v: L → S^2. For each t ∈ [0, 1] and
each angle θ ∈ [0, 2π), let b(t, θ) and c(t, θ) be the unique pair
of points such that γ(t), b(t, θ), c(t, θ) are tripodal points
and the vector b(t, θ) − c(t, θ) ∈ H(γ(t)) makes an angle of
+θ with v(γ(t)). Define f1(t, θ) ∈ {+, −, 0} by whether the
point b(t, θ) lies inside (the interior of) P, outside P, or on
∂P. Define f2(t, θ) analogously using the point c(t, θ).
If there is (t, θ) such that f1(t, θ) = f2(t, θ) = 0, then γ(t),
b(t, θ), c(t, θ) are tripodal points and we are done. Suppose
otherwise. Define the signature of (t, θ), denoted F(t, θ), as
++ if (f1(t, θ), f2(t, θ)) ∈ {(+, +), (+, 0), (0, +)},
−− if (f1(t, θ), f2(t, θ)) ∈ {(−, −), (−, 0), (0, −)},
+− if (f1(t, θ), f2(t, θ)) = (+, −),
−+ if (f1(t, θ), f2(t, θ)) = (−, +)
(Figure 3). Since p0 and p1 are the nearest and the farthest
points, it holds that F(0, θ) = ++ and F(1, θ) = −−.
For each θ, consider the transition of F(t, θ) as t changes
from 0 to 1. Since v is piecewise algebraic, the number of
transitions is finite, and we obtain a finite walk W(θ) from
++ to −− in the graph C shown in Figure 4.
Consider the edge e between ++ and +−. A walk from
++ to −− is called even if it uses e an even number (possibly
zero) of times, and it is called odd otherwise. For example,
the path ++, +−, −− is odd and ++, −+, −− is even.
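This parity can be made concrete; a small sketch (the edge set follows Figure 4, and the names are ours):

```python
# Vertices of the cycle C (Figure 4) and its four edges.
C_EDGES = {frozenset(p) for p in [("++", "+-"), ("+-", "--"),
                                  ("--", "-+"), ("-+", "++")]}

def walk_parity(walk, e=frozenset(("++", "+-"))):
    """Parity (0 = even, 1 = odd) of a walk in C, counted by how many
    times it traverses the distinguished edge e between ++ and +-."""
    uses = 0
    for a, b in zip(walk, walk[1:]):
        step = frozenset((a, b))
        assert step in C_EDGES, "not a walk in C"
        if step == e:
            uses += 1
    return uses % 2

print(walk_parity(["++", "+-", "--"]))  # odd path → 1
print(walk_parity(["++", "-+", "--"]))  # even path → 0
```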
As we increase θ continuously from one angle to another,
the walk W(θ) may change, but the parity remains invariant,
because all that can happen are a finite number of
• insertions, where an entry a in W(θ) is replaced by a
sequence a, b, a, where b is a neighbor of a in C, and
• deletions, where a sequence a, b, a is replaced by a,
and these events do not change the parity of the walk.
Figure 3: The signature is +− for this (t, θ). [figure showing o, γ(t), b(t, θ), c(t, θ)]
Figure 4: The cycle C. [figure: the 4-cycle on ++, +−, −−, −+]
On the other hand, the walks W(0) and W(π) have different
parities. To see this, let e′ be the edge between ++ and
−+. Since e and e′ form a cut separating ++ and −− in C,
each walk must use them an odd number of times in total.
Since b(t, 0) = c(t, π) and c(t, 0) = b(t, π), the walk W(π) is
obtained from W(0) by exchanging −+ and +−, and hence
has opposite parity.
This is a contradiction, and Theorem 2 follows.
We here note that Theorem 2 extends to the case when
∂P is a smooth manifold or a PL manifold. A proof is given
in the appendix.
Theorem 2 was about tripodal points in 3-dimensional
polyhedra. We may ask similar questions for three points
forming other shapes, or for higher dimensions.
Conjecture 1. For a d-dimensional polyhedron P con-
taining the origin, there exist d points on ∂P forming a reg-
ular (d − 1)-dimensional simplex centered at the origin.
Algorithmic aspects need further investigation. It is easy
to devise an O(n^3)-time algorithm to find a tripodal location
guaranteed by Theorem 2 for a polyhedron with n vertices,
just by going through all the triples of faces. It is not clear
if this can be improved.
4. LOCATING WEIGHTS ON EDGES
A d-dimensional (closed bounded) polyhedron P decomposes
into faces of dimensions i = 0, 1, . . . , d. Let Fi be the set
of i-dimensional faces. The union Sk(P) = ∪_{i=0}^{k} ∪_{f∈Fi} f of
faces of at most k dimensions is called the k-skeleton of P. In
particular, the 1-skeleton S1(P) is the union of edges (including
vertices), and the (d − 1)-skeleton is ∂P. Thus, another
natural higher-dimensional analogue of the 2-dimensional
Theorem 0 (where we put weights on S1(P) = ∂P) is to
try to place weights on the 1-skeleton S1(P):
Conjecture 2. On the 1-skeleton of any d-dimensional
(bounded) polyhedron containing the origin, there exist d
points whose barycenter is at the origin.
In other words,
dP ⊆ S1(P) ⊕ · · · ⊕ S1(P) (d times)
for any d-dimensional polyhedron P ⊆ R^d.
We have been unable to prove the conjecture, even for
d = 3. However, we observe that having an additional point
makes the problem much easier.
Proposition 2. On the 1-skeleton of any 3-dimensional
(bounded convex) polyhedron containing the origin, there ex-
ist four points whose barycenter is at the origin.
Proof. We consider a plane H through the origin, and
find an antipodal pair (q, q′) on ∂P ∩ H by Theorem 0. Let
F and F′ be the faces containing q and q′. Again by Theorem
0, we can find pairs (q1, q2) and (q3, q4) on edges of
F and F′ with barycenters q and q′, respectively.
These four points q1, q2, q3, q4 satisfy our criteria.
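For the planar step (Theorem 0 restricted to a convex polygon with the origin inside, where an antipodal pair is a boundary point q with −q also on the boundary), a pair can be found by bisecting g(θ) = r(θ) − r(θ + π): since g(θ + π) = −g(θ), a sign change exists on [0, π]. A hedged sketch under those assumptions, with names of our choosing:

```python
from math import cos, sin, pi

def ray_hit(poly, theta):
    """Distance from the origin to the boundary of a convex polygon
    (vertex list, origin strictly inside) along direction theta."""
    ux, uy = cos(theta), sin(theta)
    best = float("inf")
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        den = ux * ey - uy * ex
        if abs(den) < 1e-15:
            continue                              # ray parallel to this edge
        t = (x1 * ey - y1 * ex) / den             # ray parameter at the edge's line
        s = (x1 * uy - y1 * ux) / den             # position along the edge
        if t > 0 and -1e-12 <= s <= 1 + 1e-12:
            best = min(best, t)
    return best

def antipodal_pair(poly, tol=1e-10):
    """Find q on the boundary with -q also on the boundary, by bisecting
    g(theta) = r(theta) - r(theta + pi) on [0, pi]."""
    g = lambda th: ray_hit(poly, th) - ray_hit(poly, th + pi)
    lo, hi = 0.0, pi
    if g(lo) == 0:
        hi = lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    r = ray_hit(poly, lo)
    q = (r * cos(lo), r * sin(lo))
    return q, (-q[0], -q[1])
```

The continuity of r, and hence of g, is what the intermediate value argument needs; for a convex polygon with the origin strictly inside this holds.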
In the remainder of this section, we consider the case in
which P is convex. Using an elementary argument, we can
show that Conjecture 2 is true for convex polyhedra when d
is a power of 2. A key tool is the following lemma.
Lemma 2. For any convex polyhedron P ⊂ R^d, we have
2P ⊆ S⌈d/2⌉(P) ⊕ S⌊d/2⌋(P).
Proof. Choose any point of the left-hand side, 2P. We
will show that this point is in the right-hand side. We may
assume that this point is in the interior of 2P, since the
right-hand side is a closed set. Also, without loss of gen-
erality, we may assume that this point is the origin. Thus,
assuming that P contains the origin in its interior, we need
to show that the origin belongs to the right-hand side, or
equivalently, that S⌈d/2⌉(P) ∩ S⌊d/2⌋(−P) is nonempty.
For simplicity of notation, we assume that d is even. The
odd case is shown identically by replacing d/2 by ⌈d/2⌉ and
⌊d/2⌋ accordingly.
Since P contains the origin in its interior, the intersection
P ∩ (−P) is a d-dimensional convex polyhedron. Moreover,
its boundary C is centrally symmetric (i.e., C = −C). It
suffices to show that C has a vertex in Sd/2(P) ∩ Sd/2(−P).
A facet ((d − 1)-dimensional face) of C is a subset of a
facet of either P or −P. We start with the special case in
which C is simple. That is, every vertex of C is contained
in exactly d facets of P or −P. A vertex of C is of type
(j, d − j) if it is contained in j facets of P and d − j facets of
−P. Let v be any vertex of C, and let (k, d − k) be the type
of v. If k = d/2, we are done. Thus, we assume without loss
of generality that k < d/2. Since C is centrally symmetric,
−v ∈ C, and −v is of type (d−k, k). Since the 1-skeleton of
C is connected, there exists a path P in the skeleton from
v to −v. Let (x, y) be an edge of P with x and y of type
(i, d−i) and (j, d−j), respectively. Then j ∈ {i−1, i, i+1}.
Thus, there exists a vertex w on P of type (d/2, d/2).
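The last step is a discrete intermediate value argument: along the path, the first component of the type changes by at most 1 per edge, so it must hit d/2. A sketch (names ours):

```python
def find_middle_type(types, target):
    """Given the first components of vertex types along a path, where
    consecutive entries differ by at most 1, and the path runs from a
    value below the target to one above it (or vice versa), return an
    index whose value equals the target (discrete intermediate value)."""
    for a, b in zip(types, types[1:]):
        assert abs(a - b) <= 1, "consecutive types must differ by at most 1"
    for i, t in enumerate(types):
        if t == target:
            return i
    raise ValueError("no crossing: endpoints do not straddle the target")

# d = 6: a path from a type-(1,5) vertex to a type-(5,1) vertex must
# pass through a type-(3,3) vertex.
print(find_middle_type([1, 2, 2, 3, 4, 5], 3))  # → 3
```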
Now, we consider the general case where C might have a
vertex that is an intersection of more than d facets. We con-
sider an infinitesimal perturbation of hyperplanes defining
facets of P to make C simple. Then, the perturbed version
C̃ of C has a vertex ṽ of type (d/2, d/2), which corresponds
to a vertex v of C. Thus, v must lie at an intersection of
Sd/2(P) and Sd/2(−P).
Thus we can always find an antipodal pair of points from
⌈d/2⌉- and ⌊d/2⌋-dimensional faces. However, this does not
extend to other pairs of dimensions k and d − k.
Proposition 3. There exists a convex polyhedron P ⊆
R^d containing the origin such that for any k < ⌊d/2⌋, it
holds that Sk(P) ∩ Sd−k(−P) = ∅.
Proof. First, we consider the case where d = 2m is even
(thus, k < m). Consider an equilateral triangle T centered
at the origin. Then, we observe that all three vertices of
T lie outside −T. Let T^m = T × · · · × T be the Cartesian
product of m copies of T in R^{2m}. Then, a k-dimensional
face of P = T^m is the Cartesian product of k edges and
m − k vertices of T. Since m − k > 0 and a vertex of T lies
outside −T, the face cannot intersect −P = (−T)^m. If
d = 2m + 1 ≥ 3 is odd, we consider P = I × T^m, where
I = [−1, 2] is an interval. The remaining argument is
analogous.
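The key observation, that every vertex of an equilateral triangle T centered at the origin lies outside −T, is easy to verify numerically; a sketch using a signed-area point-in-triangle test (names ours):

```python
from math import cos, sin, pi

def in_triangle(p, tri, eps=1e-12):
    """Point-in-triangle test via signed areas (boundary counts as inside)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    signs = [cross(tri[i], tri[(i + 1) % 3], p) for i in range(3)]
    return all(s >= -eps for s in signs) or all(s <= eps for s in signs)

# Equilateral triangle T centered at the origin, and its reflection -T.
T = [(cos(2 * pi * k / 3), sin(2 * pi * k / 3)) for k in range(3)]
minus_T = [(-x, -y) for x, y in T]
print([in_triangle(v, minus_T) for v in T])  # → [False, False, False]
```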
We then prove the following proposition, implying that
Conjecture 2 is true for convex polyhedra if d = 2^i for any
i ≥ 0.
Proposition 4. Let k be a positive integer and let d ≤
2^k. Then, on the 1-skeleton of any d-dimensional convex
polyhedron, there are 2^k points whose barycenter is at the
origin.
Proof. We use induction on k. The statement is true
for k = 1 (Theorem 0). It follows from Lemma 2 that there
are antipodal points x ∈ F and −x ∈ F′, where F and F′
are faces from S⌈d/2⌉(P) and S⌊d/2⌋(P), respectively. By
the induction hypothesis applied to F and F′ (translated
by −x and x), we have 2^{k−1} points on the skeleton of F
with barycenter x, and 2^{k−1} points on the skeleton of F′
with barycenter −x. These 2^k points together satisfy our
requirement.
We note that our method is constructive, and such a loca-
tion of points can be computed in polynomial time for any
fixed dimension.
Using a generalization of the Borsuk-Ulam theorem in
terms of Zp-valued Euler class of a vector bundle, we can
extend the proof to other values of d.
Theorem 4. Conjecture 2 is true for convex polyhedra if
d = 2^i 3^j for any i, j ≥ 0.
Note that Theorem 3 in the introduction is a special case
of this. Proving Theorem 4 requires some mathematical
tools, and we give an outline of the proof in the appendix.
Recently, Conjecture 2 has been affirmatively settled for
convex polyhedra, for all d, by Dobbins [2].
Algorithmic aspects of Proposition 4 and Theorem 4 are
also unexplored.
Acknowledgements
This work was initiated at the Fields Workshop on Dis-
crete and Computational Geometry (15th Korean Workshop
on Computational Geometry), Ottawa, Canada, in August
2012, and it was continued at IPDF 2012 Workshop on Algo-
rithmics on Massive Data, held in Sunshine Coast, Australia,
August 23–27, 2012, supported by the University of Sydney
International Program Development Funding, a workshop in
Ikaho, Japan, and the 16th Korean Workshop on Computational
Geometry in Kanazawa, Japan. The authors are grateful to
the organizers of these workshops. They also thank John
Iacono and Pat Morin for pointing out Proposition 1. This
work was supported by Fonds de recherche du Québec —
Nature et technologies (FQRNT), the Secretary for Univer-
sities and Research of the Ministry of Economy and Knowl-
edge of the Government of Catalonia, the European Union,
the ESF EUROCORES programme EuroGIGA — ComPoSe
IP04 — MICINN Project EUI-EURC-2011-4306, Grant-in-
Aid for Scientific Research from Ministry of Education, Sci-
ence and Culture, Japan and Japan Society for the Pro-
motion of Science, the ELC project (Grant-in-Aid for Sci-
entific Research on Innovative Areas, MEXT Japan), NSF
China (no. 60973026), the Shanghai Leading Academic Dis-
cipline Project (no. B114), the Shanghai Committee of Sci-
ence and Technology (nos. 08DZ2271800 and 09DZ2272800),
ERATO Kawarabayashi Large Graph Project, the Research
Council (TRC) of the Sultanate of Oman, NRF grant 2011-
0030044 (SRC-GAIA) funded by the government of Korea,
and the Natural Sciences and Engineering Research Council
of Canada (NSERC).
5. REFERENCES
[1] V. H. Arakelian and M. R. Smith. Shaking force and
shaking moment balancing of mechanisms: A
historical review with new examples. Journal of
Mechanical Design, 127(2):334–339, 2005. Erratum in
the same journal 127(5) 1034–1035 (2005).
[2] M. G. Dobbins. A point in a d-polytope is the
barycenter of n points in its (d/n)-faces. ArXiv
e-prints, Dec. 2013.
[3] P. K. Ghosh and R. M. Haralick. Mathematical
morphological operations of boundary-represented
geometric objects. Journal of Mathematical Imaging
and Vision, 6(2-3):199–222, 1996.
[4] J. E. Hopcroft, D. Joseph, and S. Whitesides. On the
movement of robot arms in 2-dimensional bounded
regions. SIAM Journal on Computing, 14(2):315–333,
1985.
[5] E. H. Kronheimer and P. B. Kronheimer. The tripos
problem. Journal of the London Mathematical Society
(2), 24:182–192, 1981.
[6] J. Matoušek. Using the Borsuk-Ulam Theorem.
Springer-Verlag, 2003.
[7] B. Matschke. A survey on the square peg problem.
Notices of the American Mathematical Society,
61:346–352, 2014.
[8] M. D. Meyerson. Equilateral triangles and continuous
curves. Fundamenta Mathematicae, 110:1–9, 1980.
[9] H. J. Munkholm. Borsuk-Ulam type theorems for
proper Zp-actions on (mod p homology) n-spheres.
Mathematica Scandinavica, 24:167–185, 1969.
[10] C. P. Rourke and B. J. Sanderson. Introduction to
Piecewise-Linear Topology. Springer-Verlag, 1972.
APPENDIX
A. POSTPONED PROOFS
A.1 Generalized tripodal points
An approach similar to Theorem 2 can be used to obtain
a more general statement.
Proposition 5. Let P be a compact region in R^3 containing
the origin. Assume that there exists a number N
such that for any n > N there exists a polyhedron Pn
containing the origin such that ∂P is located in the (1/n)-
neighborhood of ∂Pn (with respect to the Hausdorff metric).
Then, there exist three points on ∂P that are tripodal.
The requirement for P is satisfied when ∂P is a smooth
manifold or a PL manifold [10]. We conjecture that this
result extends to higher dimensions.
Proof of Proposition 5. By Theorem 2, there exist
tripodal points (q1(n), q2(n), q3(n)) on each ∂Pn. Since
{(q1(n), q2(n), q3(n))}n≥N is an infinite sequence in a com-
pact subset of the 9-dimensional space, it has a subsequence
that converges to a point (q1, q2, q3) in ∂P × ∂P × ∂P. This
gives tripodal points on ∂P.
A.2 Proof of Theorem 4
We give an outline of the proof of Theorem 4.
Proposition 6. Let d = 3k be a multiple of 3. For a
d-dimensional convex compact polyhedron P containing the
origin o in its interior, we can find three points q1, q2 and
q3 in the k-skeleton of P such that q1 + q2 + q3 = 0.
Theorem 4 easily follows from Proposition 6: If d = 2^i 3^j,
we can reduce the dimension to 3^j by using Lemma 2, and
then reduce the dimension similarly to 1 by using Proposition
6 analogously to the argument in Proposition 4. The
proof of Proposition 6 uses results from topology that are
not widely known and require considerable machinery to de-
velop. Thus, we first give an informal introduction, then
provide further details assuming familiarity with algebraic
topology and the Euler cohomology class. A proof of the
proposition in a more general form with further conse-
quences for Conjecture 2 will be published in a companion
paper.
We start with an informal introduction to the Euler class.
Recall that a vector bundle is a generalization of the product
space of a vector space (the fiber space) with some manifold
(the base space). Unlike a product space, which comes with
a pair of coordinate projections, a vector bundle only has
a projection to the base space. When a vector bundle is
“twisted,” it is not possible to also define a projection to
the fiber space such that the product of these projections is
a homeomorphism. The Euler class indicates that a vector
bundle is twisted by detecting the intersection of a pair of
generic sections. For example, both the (unbounded) cylinder
and the Möbius strip are rank-1 vector bundles over a circle.
The Z2-Euler class of the cylinder is trivial, but that of the
Möbius strip is not. Thus, the cylinder has disjoint sections,
but the Möbius strip does not. Moreover, the cylinder is a
product space, but the Möbius strip is not.
A vector bundle can sometimes be formed by starting with
a product space and taking the quotient by a group that acts
on both of these spaces. The resulting vector bundle might
then be twisted. If we are given some function φ: X → Y
respecting a group action G that is free on X where Y is a
vector space, we may show that φ vanishes by forming the
vector bundle π : (X × Y )/G → X/G. Then, φ defines a
section of this vector bundle, and if the Euler class of the
bundle is non-trivial, φ must intersect the zero section. For
example, let G ≅ Z2 with generator acting on the circle S by
rotating by the angle π and acting on the line R by scaling by
−1. The cylinder is the product space S × R, and the Möbius
strip is the cylinder twisted by the Z2-action of G. That is,
the Möbius strip is the quotient space (S × R)/G, which
becomes a vector bundle with projection π : (S × R)/G →
S/G. The fact that the Z2-Euler class of the Möbius strip
is non-trivial implies that any function φ: S → R respecting
the action of G must vanish somewhere.
We may think of a homology class as an equivalence class
of weighted subspaces of a certain dimension. When the
coefficients are in a field, homology groups become vector
spaces, and the corresponding cohomology space is the dual
space of linear functionals. Here we will consider coefficients
in the field Z3 of integers modulo 3. The Euler class, as
a cohomology class, is then a certain linear functional on
homology classes that detects the kernel of a section in the
following way.
Proposition 7. For a vector bundle over a closed manifold
with Z3-Euler class e, a section σ, and a homology class
a of the base space, if e(a) ≠ 0, then the support of any
representative of a intersects the kernel of σ.
Proof of Proposition 6. Let
V = { (x1, x2, x3) ∈ R^{d×3} | x1 + x2 + x3 = 0 },
where each column xj = (x1,j, . . . , xd,j) is a vector in R^d,
and let Q = P^3 ∩ V, where P^3 = P × P × P. That is, Q is the
convex compact polyhedron consisting of triples of points in
P with barycenter at the origin. Note that Q has dimension
3d − d = 2d. For now, we assume that V intersects P^3
generically. That is, every face of Q is of the form F =
(F1 × F2 × F3) ∩ V, where each Fi is a face of P and
dim F = dim F1 + dim F2 + dim F3 − d. Later we will use
a perturbation argument to deal with the non-generic case.
We define a map φ = (φ1, φ2, φ3): ∂Q → R^3
as follows. We first define φi on a
vertex v of ∂Q, where v = (v1, v2, v3) and each component
vi = (v1,i, . . . , vd,i) is a point in P. Each point vi is in some
face of dimension di = d − ci (ci being its codimension), and
∑_{i=1}^{3} di = d = 3k. Let φi(v) = di − k. Now extend φi
to the vertices of a barycentric subdivision of ∂Q as follows.
For each face F, extend φi to the barycenter of F by the
mean of φi on the vertices of F, so φi(vF) = (1/h) ∑_{j=1}^{h} φi(vF,j),
where vF,1, . . . , vF,h are the vertices of F and
vF = (1/h) ∑_{j=1}^{h} vF,j is the barycenter of F. Finally,
extend φi to all of ∂Q by linear interpolation on the simplices
of the barycentric subdivision. Observe that φ is a
continuous function on ∂Q.
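As a sanity check on the count dim Q = 3d − d = 2d: V is the kernel of d independent linear constraints on R^{3d}. The following sketch computes that nullity by exact rational elimination; the matrix layout and names are ours:

```python
from fractions import Fraction

def rank(mat):
    """Rank by exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

d = 4
# Constraint matrix of V: for each coordinate i of R^d, the i-th entries
# of the three column vectors must sum to zero (d constraints, 3d vars).
A = [[1 if j % d == i else 0 for j in range(3 * d)] for i in range(d)]
print(3 * d - rank(A))  # dim V = 2d → 8
```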
Our goal is to find a vertex v of Q such that φ(v) = 0.
If we have such v, then di = k for all i = 1, 2, 3, which
implies each vi is in the k-skeleton of P. Thus we have
Proposition 6. We will find such a vertex in two parts. First,
we will see that φ must vanish somewhere on a 2-dimensional
face of Q. Second, a minimal face where φ vanishes cannot
have dimension 2 or 1, so φ must vanish on a vertex. For
the first part we make use of a topological result given by
Munkholm [9] to prove a Borsuk-Ulam type theorem for Zp-
actions, which we now state for p = 3.
Theorem 5 ([9]). For a group G ≅ Z3 acting freely on
the m-sphere S^m and acting on V by cyclically permuting
coordinates in R^3, the vector bundle
π : E = (S^m × V )/G → B = S^m/G, π([θ, x]_G) = [θ]_G
has non-trivial Z3-Euler class.
To see that φ vanishes on some 2-face of Q, form such a
vector bundle,
π : E = (∂Q × V )/G → B = ∂Q/G, π([θ, x]_G) = [θ]_G.
By Theorem 5, the Z3-Euler class e ∈ H^2(B; Z3) is non-trivial.
Thus there is some a ∈ H_2(B; Z3) such that e(a) ≠ 0.
By the canonical isomorphism of singular and cellular
homology, a has some representative with support A in the
2-skeleton of B, which lifts to a subset of the 2-skeleton of
Q. Let Z = ker φ. By Proposition 7, A intersects ker φ/G,
and this lifts to the intersection of Z with certain 2-faces of
Q.
Now that we know Z intersects a 2-face of Q, we will see
that it must contain a vertex. Let F be a minimal face
intersecting Z. F is the intersection of V with some face
F1 × F2 × F3 of P^3, and since Q has dimension 2d, F has
codimension at least 2d − 2, which means the total codimension
of the faces F1, F2, F3 is at least 2d − 2, which makes
the sum of their dimensions at most d + 2. By the pigeonhole
principle, at most (3k + 2)/(k + 1) < 3 faces can have
dimension at least k + 1. We may assume by symmetry that
F3 is a face of dimension k or smaller. That is, φ3(p) ≤ 0
for all p ∈ F. We can conclude from this that φ3(p) = 0 for
all p ∈ F. To see this, suppose there is some p ∈ F such
that φ3(p) < 0. This would give φ3(vF) < 0, which implies
φ3(p) < 0 for all p in the interior of F, but then Z must
intersect F on its boundary, contradicting the minimality of
F.
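The pigeonhole step can be checked by brute force: if the three dimensions sum to at most d + 2 = 3k + 2, then at most two of them can be at least k + 1, since 3(k + 1) = 3k + 3 > 3k + 2. A small exhaustive check (ours):

```python
def max_big_faces(k):
    """Over all dimension triples (d1, d2, d3) with d1 + d2 + d3 <= 3k + 2,
    the largest number of entries that are >= k + 1."""
    best = 0
    s = 3 * k + 2
    for d1 in range(s + 1):
        for d2 in range(s + 1 - d1):
            for d3 in range(s + 1 - d1 - d2):
                best = max(best, sum(x >= k + 1 for x in (d1, d2, d3)))
    return best

print([max_big_faces(k) for k in (1, 2, 3)])  # → [2, 2, 2]
```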
Suppose F has dimension 2. Then, φ is a continuous map
from F to the line {(t, −t, 0) | t ∈ R} that attains 0, so
φ must attain 0 on the boundary of F, contradicting the
minimality of F.
Suppose F has dimension 1. Let v, w be the vertices of
F. Since V intersects P^3 generically, exactly one face of P
defining v differs from the corresponding face defining F,
and this face is one dimension lower; likewise for w. That
is, up to symmetry, we have two cases:
v = G1 × F2 × F3, and w = H1 × F2 × F3 or w = F1 × H2 × F3,
where dim G1 = dim F1 − 1 and dim Hi = dim Fi − 1. This
gives φ(v) − φ(w) = (0, 0, 0) or (−1, 1, 0), respectively. Since
φ is integral on v, w and attains 0 on F, we must have either
φ(v) = 0 or φ(w) = 0, contradicting the minimality of F.
As long as V intersects P^3 generically, the dimension of a
minimal face F intersecting the kernel of φ is at most 2, but
cannot be 2 or 1, so F = v = (v1, v2, v3) must be a vertex
where vi is in the k-skeleton of P.
Now consider the case where V does not intersect P^3
generically. The polyhedron can be defined by a linear
vector inequality P = {x | Ax ≤ b} for some A ∈ R^{n×d},
b ∈ R^n, where P has n facets. For a linear operator given
by ε ∈ R^{n×d}, let Pε = {x | (A + ε)x ≤ b}. For almost every
sufficiently small ε, V does intersect Pε^3 generically, and by
the argument just given, there is some v(ε) ∈ S1(Pε)^3 ∩ V.
Since S1(Pε) is bounded for small ε, the points v(ε) have a
limit point as ε → 0, taken along a convergent subsequence,
and it lies in S1(P)^3 ∩ V.