Kristoffer Arnsfelt Hansen, Rasmus Ibsen-Jensen and Peter Bro Miltersen. The complexity of solving reachability games using value and strategy iteration
Should a football team go for a one or two point conversion? A dynamic progra... (Laura Albert)
The document discusses using dynamic programming to determine when an NFL team should attempt a one-point or two-point conversion after scoring a touchdown. It models the problem as a series of decisions based on the score differential and remaining possessions. The dynamic programming approach considers all possible outcomes at each stage and guarantees an optimal solution, without enumerating all possibilities. It formulates the problem as a longest path problem to maximize the probability of winning.
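To make the recursion concrete, here is a toy sketch in Python under assumed, illustrative numbers: the conversion success probabilities P1 and P2, the scoring probability P_TD, and the alternating-possession model are all my assumptions, not data from the talk.

```python
from functools import lru_cache

P1, P2 = 0.98, 0.48   # assumed success probabilities for 1- and 2-point tries
P_TD = 0.2            # assumed chance a possession produces a touchdown

@lru_cache(maxsize=None)
def V(diff, n):
    """P(we win) leading by `diff` with n alternating possessions left, ours first."""
    if n == 0:
        return 1.0 if diff > 0 else (0.5 if diff == 0 else 0.0)
    # on a touchdown (worth 6), choose the conversion that maximizes P(win)
    one = P1 * opp(diff + 7, n - 1) + (1 - P1) * opp(diff + 6, n - 1)
    two = P2 * opp(diff + 8, n - 1) + (1 - P2) * opp(diff + 6, n - 1)
    return P_TD * max(one, two) + (1 - P_TD) * opp(diff, n - 1)

@lru_cache(maxsize=None)
def opp(diff, n):
    if n == 0:
        return 1.0 if diff > 0 else (0.5 if diff == 0 else 0.0)
    # assume the opponent always kicks the extra point after a touchdown
    return P_TD * V(diff - 7, n - 1) + (1 - P_TD) * V(diff, n - 1)

print(V(-8, 6))   # e.g. down 8 with 6 possessions left
```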
The document discusses creating a complexity theory for randomized search heuristics. It uses the example of the Mastermind problem, where an oracle chooses a secret binary string and an algorithm tries to discover it by querying strings and receiving feedback on matches. This is modeled as a black-box optimization problem. The document proposes analyzing such problems using a ranking-based query complexity model rather than only analyzing specific algorithms on specific problems. It suggests this approach could provide general lower bounds and help develop a complexity theory for randomized search heuristics.
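As a toy illustration of the black-box setting (a deliberately naive one-bit-flip strategy of my own, not an algorithm from the document): the oracle answers each query only with the number of positions that match the secret string.

```python
import random

n = 16
secret = [random.randint(0, 1) for _ in range(n)]
queries = 0

def score(z):
    """Black-box oracle: number of positions of z matching the secret."""
    global queries
    queries += 1
    return sum(a == b for a, b in zip(z, secret))

z = [random.randint(0, 1) for _ in range(n)]
s = score(z)
for i in range(n):        # flip bit i; keep the flip only if the score improves
    z[i] ^= 1
    s2 = score(z)
    if s2 < s:
        z[i] ^= 1         # revert a harmful flip
    else:
        s = s2
print(queries, s == n)    # n + 1 queries suffice for this strategy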
This document summarizes research on the combinatorial properties of Burrows-Wheeler Transforms (BWT). It discusses prior work that characterized words with simple BWT image forms. It also introduces two general decision problems about BWT images and claims to provide efficient solutions to these problems. Specifically, it presents a theorem providing a criterion to check whether a given word is a valid BWT image based on analyzing the number of orbits in the word's stable sorting.
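A sketch of the ingredient the criterion is built on: the "standard" permutation obtained by stably sorting the word, and its orbit count. Which orbit counts certify a valid BWT image is exactly what the theorem settles; the single-orbit case below corresponds to BWT images of primitive words.

```python
def standard_permutation(w):
    """Map each position of w to its rank under a stable sort of the letters."""
    order = sorted(range(len(w)), key=lambda i: w[i])   # Python's sort is stable
    sigma = [0] * len(w)
    for rank, pos in enumerate(order):
        sigma[pos] = rank
    return sigma

def count_orbits(sigma):
    """Number of cycles (orbits) of the permutation sigma."""
    seen, orbits = set(), 0
    for start in range(len(sigma)):
        if start not in seen:
            orbits += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = sigma[i]
    return orbits

print(count_orbits(standard_permutation("baa")))  # 1: "baa" = BWT("aab")
```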
This document discusses the relationships between orbits of linear maps and regular languages. It shows that the chamber hitting problem (CHP) and permutation filter-realizability problem are Turing equivalent. It also shows that the injective filter-realizability problem and surjective filter-realizability problem are decidable, while the track product of the periodic and permutation filter-realizability problem is undecidable. The zero in the upper right corner problem, which is undecidable, can be reduced to the latter regular realizability problem.
The document describes a method for canonizing graphs of bounded treewidth in the complexity class AC^1. It presents the following:
1) Existing results showing that canonization of bounded-treewidth graphs lies in the classes P, TC^1, TC^2, and LogCFL.
2) A new algorithm that canonizes bounded-treewidth graphs in AC^1 by computing a tree decomposition of depth O(log n) and constructing a minimal description circuit of depth O(log n).
3) The algorithm works by computing descriptions for the bags of the tree decomposition in parallel, sorting the descriptions, and recursively combining them while maintaining a circuit depth of O(log n).
This document advocates against violence towards women and encourages women in abusive relationships to seek help. It warns women not to wait for an abusive situation to change as it will likely continue or escalate. Women are urged to not be silent and to scream for help if they are being tortured or made to feel like a slave. The overall message is one of empowering women in abusive situations to stand up for themselves, seek support from others, and remove themselves from relationships where they are being physically or psychologically harmed.
The document summarizes the musical history and collaborations between Eric Clapton and members of Derek and the Dominos. It mentions that Eric Clapton, Bobby Whitlock, Carl Radle, and Jim Gordon formed Derek and the Dominos in 1970. Their album Layla and Other Assorted Love Songs featured the hit song "Layla" and was influenced by Duane Allman of The Allman Brothers Band.
The document discusses power series representations and geometric series. It introduces geometric series, which converge inside the unit circle and diverge outside. It explores expanding functions as power series inside and outside the unit circle. The document also discusses uniform convergence, where the number of terms needed for a given accuracy does not depend on the point in the region of convergence. Term-by-term integration is only valid for uniformly convergent series.
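For concreteness, the two expansions in question are the standard ones (stated here for reference, not quoted from the document):

```latex
\frac{1}{1-z} \;=\; \sum_{n=0}^{\infty} z^{n} \quad (|z|<1),
\qquad
\frac{1}{1-z} \;=\; -\sum_{n=1}^{\infty} z^{-n} \quad (|z|>1).
```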
This document provides an overview of the Lagrangian method for solving constrained nonlinear optimization problems. It begins with a review of the Lagrangian method and how it allows constrained problems to be formulated as unconstrained problems by introducing Lagrange multipliers. It then provides examples of applying the method to find the optimal solution. Specifically, it presents a linear example of selecting the optimal location for a waste processing plant with constraints.
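A one-line worked instance of the method (a generic textbook example, not the plant-location problem itself):

```latex
\min_{x,y}\; x^{2}+y^{2}\ \ \text{s.t.}\ \ x+y=1:\qquad
L(x,y,\lambda) = x^{2}+y^{2}+\lambda(1-x-y),\quad
\nabla L = 0 \;\Rightarrow\; 2x = 2y = \lambda,\ x+y=1 \;\Rightarrow\; x=y=\tfrac12 .
```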
This document summarizes research on extortion strategies in the Iterated Prisoner's Dilemma game. It introduces the Prisoner's Dilemma and describes how it can model situations like trench warfare. It explains how Press and Dyson showed that one player can unilaterally control the other player's payoff through a zero-determinant strategy. Using such a strategy, a player can extort their opponent by enforcing a relationship that gives themselves a higher payoff. The document demonstrates that an extorting player can always receive their maximum payoff, even against an adapting opponent who tries to change their strategy.
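The extortion relation can be checked numerically. The sketch below uses a chi = 3 extortionate strategy p = (11/13, 1/2, 7/26, 0) derived from the zero-determinant construction for the conventional payoffs (T, R, P, S) = (5, 3, 1, 0); the opponent strategy q is an arbitrary illustrative choice.

```python
import numpy as np

T, R, P, S = 5, 3, 1, 0
p = np.array([11/13, 1/2, 7/26, 0])   # X's prob. of cooperating after CC, CD, DC, DD
q = np.array([0.7, 0.2, 0.9, 0.4])    # assumed opponent strategy (same order)

# transition matrix over states (CC, CD, DC, DD); Y sees CD and DC swapped
qy = q[[0, 2, 1, 3]]
M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
              for pi, qi in zip(p, qy)])

# stationary distribution of the repeated-game Markov chain
w, v = np.linalg.eig(M.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

sx = pi @ np.array([R, S, T, P])      # X's long-run payoff
sy = pi @ np.array([R, T, S, P])      # Y's long-run payoff
print(sx - P, 3 * (sy - P))           # the two numbers agree for any q
```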
The document describes an algorithm called Overlay Stitch Meshing (OSM) for producing triangulations with no large angles from planar straight line graphs. The algorithm guarantees that every output angle is at most 170 degrees and produces a mesh whose size is O(log(L/s))-competitive with the optimal triangulation, where L is the total edge length and s is the minimum feature size. It works by overlaying triangles and keeping only those overlay edges that intersect the input graph in a "good" way, at an angle of at least 30 degrees. The algorithm and its analysis introduce new techniques for proving logarithmic competitiveness.
This document presents a novel method called the Eigenfunction Expansion Method (EFEM) for analytically solving transient heat conduction problems with phase change in cylindrical coordinates. The method involves formulating the governing equations and associated boundary conditions, introducing coefficients, solving the eigenvalue problems, and representing the solution as a series expansion of the eigenfunctions. Dimensionless parameters are introduced to simplify the problem. The EFEM is then applied to solve a one-dimensional phase change problem. Results show that increasing the number of terms in the series expansion decreases the truncation error and that the Stefan number affects the melting fraction evolution over time.
Inverse kinematics of robotic manipulators (Shyamal25)
This document discusses inverse kinematics and provides examples of calculating the joint angles required to position a robot end-effector at a desired location and orientation. It begins by defining inverse kinematics and providing a simple example of calculating joint angles for a planar robot arm. It then presents the forward kinematics equations and inverse kinematics problem for a 6 degree-of-freedom Stanford arm. It concludes by discussing issues like solvability, singularities, and the workspace boundaries for different types of planar robot arms.
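A minimal sketch of the simple planar case mentioned above: closed-form inverse kinematics for a two-link arm via the law of cosines (the link lengths and target point are illustrative).

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Joint angles (theta1, theta2) in radians reaching (x, y), or None if unreachable."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # cos(theta2), law of cosines
    if abs(c2) > 1:
        return None                                  # target outside the workspace
    s2 = math.sqrt(1 - c2 * c2) * (1 if elbow_up else -1)
    theta2 = math.atan2(s2, c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

print(two_link_ik(1.0, 1.0, 1.0, 1.0))
```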
1. The document discusses tensor analysis and its use in studying the Einstein field equations. It defines key tensors such as the Riemann-Christoffel curvature tensor and its properties including the antisymmetric and cyclic properties.
2. Bianchi identities are derived using a geodesic coordinate system. Taking the covariant derivative of the curvature tensor leads to the Bianchi identities.
3. Other concepts discussed include the Ricci tensor, gradient and divergence of tensors, and the Einstein tensor obtained by contracting the Bianchi identities. The Einstein tensor is related to the Ricci tensor and metric tensor.
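For reference, the identities in question, in their standard form: the second Bianchi identity, the Einstein tensor, and its vanishing divergence.

```latex
\nabla_{\lambda} R_{\mu\nu\rho\sigma} + \nabla_{\rho} R_{\mu\nu\sigma\lambda} + \nabla_{\sigma} R_{\mu\nu\lambda\rho} = 0,
\qquad
G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R,
\qquad
\nabla^{\mu} G_{\mu\nu} = 0 .
```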
This document provides lecture notes on statistics II with Mathematica. It covers topics related to hypothesis testing procedures including formulating null and alternative hypotheses, choosing appropriate test statistics and their probability distributions, determining critical values and rejection regions based on the level of significance, and making decisions to reject or fail to reject the null hypothesis based on the test statistic value or p-value. Examples of hypothesis tests for single and two population means are presented for normal and t-distributions. Diagrams illustrate the acceptance and rejection regions for one-tailed left, two-tailed, and one-tailed right tests. Tables summarize the decision rules for various test types based on the test statistic and p-value.
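A small illustration of the two-tailed one-sample procedure summarized above, on made-up data; scipy supplies the t distribution used for the critical value and p-value.

```python
import numpy as np
from scipy import stats

x = np.array([4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2])   # sample (illustrative)
mu0, alpha = 5.0, 0.05                               # H0: population mean = 5.0

t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
crit = stats.t.ppf(1 - alpha / 2, df=len(x) - 1)     # rejection region: |t| > crit
pval = 2 * stats.t.sf(abs(t), df=len(x) - 1)

print(f"t = {t:.3f}, critical = ±{crit:.3f}, p-value = {pval:.3f}")
print("reject H0" if abs(t) > crit else "fail to reject H0")
```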
- The document outlines a BSc research project on pricing financial derivatives using the Black-Scholes model.
- The project aims to learn established financial models, compare pricing techniques, and see how newer models relate to existing ones.
- It provides background on the student's motivation and experience, and introduces key concepts like options, the Black-Scholes equation, and its derivation and solution.
- The student will present their work on applying and extending the Black-Scholes model to price derivatives.
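For reference, the equation at the heart of the project, the standard Black-Scholes PDE for a derivative price V(S, t):

```latex
\frac{\partial V}{\partial t} + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}} + r S \frac{\partial V}{\partial S} - r V = 0 .
```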
[This sheet must be completed and attached to the last page of.docx (hanneloremccaffery)
[This sheet must be completed and attached to the last page of your homework]
ISE 421
Operations Research II
Term 161
Homework #1
Student Name ID# Signature
Homework Guidelines
To receive full credit, make sure you follow these guidelines.
Homework Presentation:
• Every main problem should be answered on a different page.
• You should submit the solutions for the first two problems only.
• All pages of your homework should be in chronological order.
• Your name and the homework number should be clearly indicated.
Modeling Questions:
• Clearly define all the variables in one group, then all the parameters in another group, and then display the final model in the standard style (Objective, Constraints, Restriction on Domain). You can use the ABCD and EVER OLD CARD mnemonics if desired.
Problem #1
Suppose that the decision variables of a mathematical programming model are defined as:
x_{i,j,t} := acres of land plot i allocated to crop j in year t
C_t := the funds in SAR donated by the government at the beginning of year t
R_{j,t} := the revenue generated from crop j in $ at the end of year t
where i = 1, . . . , 47; j = 1, . . . , 9; t = 1, . . . , 10.
Use summation (∑) and enumeration (∀) indexed notation to write expressions for each of the following systems of constraints in terms of these decision variables, and determine how many constraints belong to each system. You need to define additional variables to model the following constraints. Assume $1 = 3.75 SAR. In addition, assume appropriate information wherever necessary. (A sketch of this notation for system (a) is given after the list.)
(a) The acres allocated in each plot i cannot exceed the available acreage (call it A_i) in any year.
(b) At least 1000 total acres must be devoted to corn (crop j = 4) in each year.
(c) At least one-third of the total acreage planted over 10 years must be in soybeans (crop j = 2).
(d) Either rice (crop j = 9) or wheat (crop j = 8) should be planted in a given year.
(e) Grapes (crop j = 7) should be planted in a year when the current funds from the government plus the total revenue from the previous year are at least 38000 SAR.
(f) In the odd years (t = 1, 3, . . . , 9) land plot 32 is unusable.
(g) On the same land plot, at least two years must separate corn and rice plantings.
(h) If soybeans are planted in a land plot, then no other crops should be planted on the same land plot.
(i) Every plot must be used for planting in a given year.
(j) In every year, there should be at least 7 different crops.
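As a notation illustration only, one plausible way to write system (a) — a sketch, not the assigned solution:

```latex
\sum_{j=1}^{9} x_{i,j,t} \le A_i \qquad \forall\, i = 1,\dots,47,\quad \forall\, t = 1,\dots,10 .
```

This gives one constraint per (plot, year) pair, i.e. 47 × 10 = 470 constraints in the system.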
Problem #2
Consider the following IP problem.
maximize: 14x1 + 22x2 + 12x3 + 10x4
subject to:
50x1 + 70x2 + 40x3 + 30x4 ≤ 100
10x1 + 60x2 + 50x3 + 60x4 ≤ 80
6x1 + x2 + 3x3 + 7x4 ≤ 9
xi ∈ {0, 1} ∀ i = 1, . . . , 4
(a) Write the LP relaxation of the above model.
(b) Get the optimal objective function value of the LP relaxation from Tabl.
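One way to carry out both parts numerically — a sketch using scipy rather than the tableau the assignment refers to; the model data is copied from the IP above:

```python
# LP relaxation of the 0-1 program: replace x_i in {0,1} by 0 <= x_i <= 1.
import numpy as np
from scipy.optimize import linprog

c = [-14, -22, -12, -10]        # linprog minimizes, so negate the objective
A = [[50, 70, 40, 30],
     [10, 60, 50, 60],
     [ 6,  1,  3,  7]]
b = [100, 80, 9]

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * 4, method="highs")
print(-res.fun, res.x)          # optimal value and the fractional solution
```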
This document discusses concepts related to estimation variance in resources estimation. It defines extension variance as the variance when extending a value from a known sample or block to an unknown area, and estimation variance as the variance when extending values from multiple known samples or blocks. It provides formulas for calculating extension and estimation variance based on variogram models and the geometry of samples and blocks. Nomograms are presented to aid in calculating variances for different sample/block configurations in 1D, 2D and 3D using spherical variogram models. The concept of global estimation variance, which sums the individual variances, is also introduced.
1. The document discusses arithmetic and geometric sequences.
2. Arithmetic sequences are defined by adding a common difference to get the next term, while geometric sequences multiply by a common ratio.
3. Examples are provided of finding terms in arithmetic and geometric sequences using formulas.
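For reference, the closed forms behind those examples (standard identities for the n-th term and partial sum):

```latex
a_n = a_1 + (n-1)d, \qquad S_n = \tfrac{n}{2}\,(a_1 + a_n); \qquad
g_n = g_1\, r^{\,n-1}, \qquad S_n = g_1\,\frac{1-r^{\,n}}{1-r} \ \ (r \neq 1).
```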
This document presents a numerical solution and comparison of linear Black-Scholes models using finite difference and finite element methods. It begins with an introduction to the Black-Scholes partial differential equation and previous analytical and numerical solutions in the literature. The document then transforms the Black-Scholes equation into a heat equation and presents the finite element formulation and discretization. Numerical results are obtained for the European call and put options and compared between finite difference and finite element methods.
A successful maximum likelihood parameter estimation in skewed distributions ... (Hideo Hirose)
A successful maximum likelihood parameter estimation scheme using the continuation method (homotopy method) is introduced. This algorithm is particularly useful for the three-parameter skewed distributions including thresholds. Such three-parameter distributions are, for example, the Weibull, log-normal, gamma and inverse Gaussian distributions. As the proposed algorithm can almost always obtain the local maximum likelihood estimates automatically, it is of considerable practical value. A Monte Carlo simulation study shows the effectiveness of the proposed method.
Intro to Quant Trading Strategies (Lecture 4 of 10) (Adrian Aley)
- The document introduces pairs trading via cointegration, where two assets that are cointegrated, or move together in the long run, can be traded to exploit short-term deviations from their long-term equilibrium.
- Cointegration means finding a linear combination of the two assets such that it is stationary. This stationary combination represents the long-run equilibrium between the assets.
- The document discusses testing for cointegration using augmented Dickey-Fuller tests, and outlines the vector error correction model (VECM) representation used to model cointegrated assets and implement pairs trading strategies.
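A sketch of that test on synthetic data; statsmodels provides the augmented Dickey-Fuller test, and the hedge ratio comes from an OLS regression of one asset on the other.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1000))          # a random walk
y = 0.8 * x + rng.normal(size=1000)           # cointegrated with x by construction

beta = sm.OLS(y, sm.add_constant(x)).fit().params[1]   # estimated hedge ratio
spread = y - beta * x                                   # candidate stationary combination

stat, pvalue, *_ = adfuller(spread)
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.4f}")  # small p => stationary
```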
In this work we discuss how to compute KLE with complexity O(k n log n), how to approximate large covariance matrices (in H-matrix format), how to use the Lanczos method.
We solve elliptic PDEs with uncertain coefficients. We apply the Karhunen-Loeve expansion to separate the stochastic part from the spatial part. The corresponding eigenvalue problem with the covariance function is solved via the Hierarchical Matrix technique. We also demonstrate how low-rank tensor methods can be applied to high-dimensional problems (e.g., to compute higher-order statistical moments). We provide explicit formulas to compute statistical moments of order k with linear complexity.
This document provides simple derivations of the Greek letters (Delta, Theta, Gamma, Vega, Rho) for European call and put options within the Black-Scholes options pricing model framework. The derivations bypass complicated mathematical calculations and are relatively simple to follow. Some examples of calculating the Greek letters are also provided. The key Greek letters - Delta, Theta and Gamma - are shown to satisfy the Black-Scholes partial differential equation.
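The resulting closed forms are easy to code up. A sketch for a European call, using the standard formulas of the kind the document derives:

```python
from math import log, sqrt, exp, pi
from statistics import NormalDist

def call_greeks(S, K, T, r, sigma):
    """Delta, Gamma, Theta, Vega, Rho of a European call under Black-Scholes."""
    N = NormalDist().cdf
    n = lambda x: exp(-x * x / 2) / sqrt(2 * pi)     # standard normal density
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    delta = N(d1)
    gamma = n(d1) / (S * sigma * sqrt(T))
    theta = -S * n(d1) * sigma / (2 * sqrt(T)) - r * K * exp(-r * T) * N(d2)
    vega  = S * n(d1) * sqrt(T)
    rho   = K * T * exp(-r * T) * N(d2)
    return delta, gamma, theta, vega, rho

print(call_greeks(S=100, K=100, T=1.0, r=0.05, sigma=0.2))
```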
The document discusses computational models for algebraic decision trees and algebraic computation trees over a ground field F. It describes how algebraic decision trees use polynomials of degree ≤ d to branch at each node, while algebraic computation trees allow testing polynomials to be calculated from previous polynomials along the path. The document then covers existing lower bounds on the complexity C(S) of the membership problem for a set S in terms of topological invariants of S, such as the number of connected components, Euler characteristic, and sum of Betti numbers.
The document discusses recognizing sparse perfect elimination bipartite graphs. It begins with an example of Gaussian elimination on a matrix that introduces new non-zero values. The key points are that perfect elimination bipartite graphs correspond to matrices that can be eliminated without creating new non-zeros, and this can be achieved by finding a sequence of bisimplicial edges in the corresponding bipartite graph. The document proposes using bisimplicial edges as pivots during elimination to avoid introducing new non-zeros.
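The pivoting condition itself is easy to state in code. A small sketch with adjacency stored as Python sets (`is_bisimplicial` is a name introduced here for illustration):

```python
def is_bisimplicial(adj, u, v):
    """True if edge (u, v) of a bipartite graph is bisimplicial: every
    neighbour of v is adjacent to every neighbour of u, so pivoting on
    (u, v) during Gaussian elimination creates no new non-zeros."""
    return all(y in adj[x] for x in adj[v] for y in adj[u])

# a complete bipartite K_{2,2}: every edge is bisimplicial
adj = {"u1": {"v1", "v2"}, "u2": {"v1", "v2"},
       "v1": {"u1", "u2"}, "v2": {"u1", "u2"}}
print(is_bisimplicial(adj, "u1", "v1"))  # True
```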
The document discusses recognizing sparse perfect elimination bipartite graphs through matrix elimination. It provides an example of Gaussian elimination on a matrix that introduces new non-zero values. The key points are:
- Perfect elimination bipartite graphs correspond to matrices that allow elimination without creating new non-zeros.
- Existing algorithms have time complexity of O(n^5) or O(n^3/log n) but may produce dense matrices from sparse ones.
- A new algorithm is proposed that avoids this issue by working directly with the sparse matrix structure.
The document discusses the method of multiplicities, which is a technique for combinatorics using algebra. It involves finding a polynomial that vanishes on a set with high multiplicity. This is applied to problems in list decoding of Reed-Solomon codes, bounding the size of Kakeya sets, and constructing randomness extractors. Specifically, the method is used to improve bounds on list decoding, show that certain Kakeya sets must be large, and allow extraction of more randomness from weak sources. Propagating multiplicities of derivatives allows tighter analysis of these problems.
The document summarizes research on multiple-conclusion calculi for first-order Gödel logic. It introduces Gödel logic and describes its semantics using both many-valued semantics based on truth values in the interval [0,1] and Kripke-style semantics. It then outlines proof theory for Gödel logic, including early sequent calculi and more recent hypersequent calculi. The hypersequent calculus introduced in 1991 uses standard rules and has been extended to the first-order case. The document provides details on the structural and logical rules of this single-conclusion hypersequent system.
The document summarizes a talk on polynomial identity testing (PIT). PIT is the problem of determining if a polynomial computed by an arithmetic circuit is identical to the zero polynomial. The talk outlines the definition of PIT, its connection to circuit lower bounds, and surveys positive results for restricted circuit classes. It also provides examples of proof techniques for PIT on depth-3 and depth-4 circuits and discusses the relationship between PIT and polynomial factorization.
This document summarizes an algorithm for maximizing throughput in online scheduling of equal length jobs. The algorithm aims to schedule incoming jobs with the goal of maximizing total value of completed jobs by their deadlines. It uses a charging scheme and potential function to prove it is (2+√5)-competitive, an improvement over prior algorithms. The algorithm handles jobs arriving online with weights, processing times, deadlines, and considers models where preemption allows restarting or resuming previously completed work. Open questions remain around settling the exact competitive ratio and developing new algorithmic methods.
The document discusses efficient algorithms for performing approximate matching queries on strings that have been grammar-compressed. It introduces the concept of implicit unit-Monge matrices, which can represent permutation matrices in a space-efficient way using a range tree data structure. This representation allows dominance counting queries, needed for string comparison, to be performed in O(log^2 n) time after an O(n log n) preprocessing step. More advanced data structures can improve these asymptotic time and space bounds further.
This document presents an overview of the consensus problem from an informal and formal perspective. It discusses how consensus requires representativity, where the decision reflects a sufficient number of individual opinions, and stability, where the decision is robust to individual opinion variations. It also presents some key formalizations, including defining consensus as a function from the set of sensor inputs and memory states to decisions. It introduces the concept of a geodesic to measure stability as the maximum number of state transitions needed to return to the starting configuration along a trajectory where each sensor changes at most once.
The document presents a polynomial-time algorithm for finding a minimal conflicting set of rows (MCSR) in a binary matrix that contains a given row. It defines MCSR as a set of rows that does not have the consecutive ones property but where any proper subset does have the property. The algorithm works by representing the binary matrix as a vertex-colored bipartite graph and detecting forbidden substructures called Tucker configurations that characterize when the consecutive ones property does not hold. It finds an MCSR containing the given row by pruning rows from the graph until a Tucker configuration exists using the current set but not with any proper subset.
The document discusses locally decodable codes, which allow recovery of individual data symbols from a coded data set even after erasures. Reed-Muller codes and multiplicity codes were early constructions that provided locality but only up to a rate of 0.5. Matching vector codes were later introduced and can achieve locality r for codes of positive rate and length n=O(r^2). However, the optimal tradeoff between rate, length, and locality remains an open problem.
This document discusses the relationships between orbits of linear maps and regular languages. It shows that the chamber hitting problem (CHP) and permutation filter realizability problem are Turing equivalent. It also shows that the injective filter and surjective filter realizability problems are decidable by reducing them to problems about orbits. However, the regular realizability problem for the track product of the periodic and permutation filters is undecidable, as it can reduce the undecidable zero in the upper right corner problem.
The document summarizes precedence automata and languages. It provides historical background on operator precedence grammars and Floyd languages. It then discusses how precedence parsing works using an example arithmetic expression. Key points include using a precedence table to determine parentheses insertion and defining three types of moves for an automata model based on symbol precedence: push, mark, and flush. The example demonstrates the automata processing a Dyck language expression.
The document discusses the constraint satisfaction problem (CSP) and the dichotomy conjecture regarding the complexity of CSP instances. It provides definitions and examples of CSPs. It explains the role of polymorphisms in determining the complexity, identifying semilattice, majority and affine polymorphisms as "good". It outlines the dichotomy conjecture that CSPs are either solvable in polynomial time or NP-complete depending on the presence of certain types of local structure defined by polymorphisms. The document also discusses algorithms and results for various constraint languages.
This document describes a Synchronized Alternating Pushdown Automaton (SAPDA) that accepts the language of reduplication with a center marker (RCM). The SAPDA utilizes recursive conjunctive transitions to check that the nth letter before the center marker '$' is the same as the nth letter from the end of the string, for all letters n. This allows the SAPDA to accept strings of the form w$w, where w is any string over the alphabet {a,b}. The construction of the SAPDA involves states that check specific letters at specific positions relative to the center marker.
The document discusses the constraint satisfaction problem (CSP) and the dichotomy conjecture in computational complexity theory. It defines CSP and provides examples. It discusses the role of polymorphisms - operations that preserve constraints. The presence or absence of certain polymorphisms like semilattice, majority, and affine operations determines the complexity of CSP for a given constraint language. The document outlines a proposed dichotomy - CSP is either solvable in polynomial time or NP-complete, depending on the polymorphisms. It surveys partial results proving this conjecture and algorithms for certain constraint languages.
The document discusses shared-memory systems and charts. It provides definitions and concepts related to modeling shared-memory concurrency using partial orders of events called pomsets. Specifically, it defines:
- Shared-memory systems as consisting of registers, data, processes, actions, and rules for updating configurations.
- Pomsets as labeled partial orders used to model executions.
- The may-occur-concurrently relation for rules in a shared-memory system.
- Partial-order semantics for runs of pomsets in a shared-memory system.
- Shared-memory charts (SMCs) as pomsets with gates used to model specifications.
The document discusses precedence automata and languages. It provides historical background on operator precedence grammars and related families of languages. As an example, it explains how parsing an arithmetic expression like 4+5×6 works according to an implicit context-free grammar and by respecting the precedence of operators. It introduces the concept of a precedence table to determine the admissible parentheses generators between pairs of symbols in a grammar.
1. The complexity of solving reachability games using value and strategy iteration. Kristoffer Arnsfelt Hansen, Rasmus Ibsen-Jensen, Peter Bro Miltersen. Aarhus University, Denmark. CSR 2011, 14th June.
2.
3. Matrix games (von Neumann 1928). Example payoff matrix (rock-paper-scissors):
    0   1  -1
   -1   0   1
    1  -1   0
4. Matrix games (von Neumann 1928). [Animation step repeating the same payoff matrix.]
5. Concurrent reachability games (Everett 1957 / de Alfaro, Henzinger, Kupferman 1998). Dante* vs. Lucifer*. Each entry can be either 0, 1 or a pointer. (*Naming convention from Hansen, Koucky and Miltersen, 2009.)
6.–7. Concurrent reachability games. [Animation steps repeating the setup: each entry can be either 0, 1 or a pointer.]
8.–11. Concurrent reachability games. [Animated example: the game matrix is filled first with 0 entries, then with 0 and 1 entries, and finally with entries 0, 1 and a pointer S to a successor matrix.]
12. Histories. [The example game matrix with entries 0, 1 and the pointer S.]
20.–31. Value iteration example – G0 and G1. [Animated slides: the value estimates of the four states are initialized to 0 in G0; the G1 update propagates the matrix's 0 and 1 entries, after which the estimates are 0.33333, 0, 0, 0.]
32.–39. Value iteration example – G2 through G9. [Animated estimates for the four states, iteration by iteration:
G2: 0.33333, 0.11111, 0, 0
G3: 0.33333, 0.11111, 0, 0.03704
G4: 0.33333, 0.11111, 0.01235, 0.03704
G5: 0.33748, 0.11533, 0.01754, 0.04147
G6: 0.33925, 0.11855, 0.02172, 0.04493
G7: 0.34068, 0.12064, 0.02519, 0.04772
G8: 0.34187, 0.12388, 0.02815, 0.04991
G9: 0.34378, 0.12517, 0.03070, 0.05129]
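The table above shows how slowly the estimates creep upward, which is the point of the example. Below is a minimal sketch of the procedure in Python, under an assumed toy encoding of a game (each matrix entry is a terminal payoff or the name of a successor state) and with each one-shot matrix game solved by linear programming; it illustrates value iteration, not the talk's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of a zero-sum matrix game (row player maximizes) via the standard LP."""
    m, n = M.shape
    shift = M.min()
    A = M - shift + 1.0                    # make all payoffs >= 1 so the value is positive
    # min sum(x) s.t. A^T x >= 1, x >= 0; the game value of A is 1 / sum(x)
    res = linprog(np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return 1.0 / res.x.sum() + shift - 1.0

def value_iteration(game, states, iters):
    """game[s] is a matrix whose entries are payoffs (0/1) or successor-state names."""
    v = {s: 0.0 for s in states}           # G0: the all-zero valuation
    for _ in range(iters):
        v = {s: matrix_game_value(np.array(
                 [[e if isinstance(e, (int, float)) else v[e] for e in row]
                  for row in game[s]]))
             for s in states}
    return v

# e.g. a single state whose off-diagonal entries restart the game:
print(value_iteration({"s": [[1.0, "s"], ["s", 1.0]]}, ["s"], 20))  # about 1 - 2**-20
```

Note the synchronous update: the new valuation is built entirely from the previous one before replacing it, matching the G0, G1, G2, ... progression on the slides.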