The document presents a new feebly secure cryptographic construction that improves on previous work. It introduces new techniques for proving lower bounds on circuit complexity through gate elimination methods applied to linear Boolean functions. Specifically:
1) A new feebly secure cryptographic protocol is proposed that uses linear functions and block matrices to make inversion without a trapdoor harder than with a trapdoor, while keeping encryption complexity similar to inversion without a trapdoor.
2) New predicates and algorithms are introduced for applying gate elimination to linear functions represented as matrices, allowing estimation of complexity.
3) Analysis shows the new construction has an order of security approaching 5/4, improving on previous work that achieved 25/22.
The document discusses the history of video games and how libraries are using gaming to engage patrons and teach information literacy skills. It provides a brief overview of the progression of gaming from the 1950s to present day and notes that the average age of gamers spans all age groups. Libraries are pursuing gaming initiatives as outreach, to bring in community members, and to provide staff development and instruction opportunities. Examples of library gaming programs and resources for further information are also listed.
This document describes a Synchronized Alternating Pushdown Automaton (SAPDA) that accepts the language of reduplication with a center marker (RCM). The SAPDA uses recursive conjunctive transitions to check that, for every position n, the nth letter before the center marker '$' equals the nth letter from the end of the string. This allows the SAPDA to accept exactly the strings of the form w$w, where w is any string over the alphabet {a,b}. The construction involves states that check specific letters at specific positions relative to the center marker.
“Specification by Example” is a set of process patterns that helps to validate the application for faster feedback and minimal documentation. With Specification by Example, teams write just enough documentation to facilitate change effectively in short iterations or in flow-based development.
The document discusses computational models for algebraic decision trees and algebraic computation trees over a ground field F. It describes how algebraic decision trees use polynomials of degree ≤ d to branch at each node, while algebraic computation trees allow testing polynomials to be calculated from previous polynomials along the path. The document then covers existing lower bounds on the complexity C(S) of the membership problem for a set S in terms of topological invariants of S, such as the number of connected components, Euler characteristic, and sum of Betti numbers.
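A common way these component-counting lower bounds are stated is Ben-Or's form; a hedged sketch of that shape follows (exact constants vary by reference, and analogous statements exist for the Euler characteristic and Betti-number invariants):

```latex
% A depth-h algebraic computation tree over \mathbb{R}^n separates its
% inputs into at most 2^h \cdot 3^{n+h} connected sign-condition cells,
% hence for absolute constants c_1, c_2 > 0:
C(S) \;\ge\; c_1 \,\log_2 \#\mathrm{CC}(S) \;-\; c_2\, n,
% where \#\mathrm{CC}(S) denotes the number of connected components
% of S (or of its complement).
```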
The document discusses recognizing sparse perfect elimination bipartite graphs. It begins with an example of Gaussian elimination on a matrix that introduces new non-zero values. The key points are that perfect elimination bipartite graphs correspond to matrices that can be eliminated without creating new non-zeros, and this can be achieved by finding a sequence of bisimplicial edges in the corresponding bipartite graph. The document proposes using bisimplicial edges as pivots during elimination to avoid introducing new non-zeros.
The document discusses recognizing sparse perfect elimination bipartite graphs through matrix elimination. It provides an example of Gaussian elimination on a matrix that introduces new non-zero values. The key points are:
- Perfect elimination bipartite graphs correspond to matrices that allow elimination without creating new non-zeros.
- Existing algorithms have time complexity of O(n^5) or O(n^3/log n) but may produce dense matrices from sparse ones.
- A new algorithm is proposed that avoids this issue by working directly with the sparse matrix structure.
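The bisimplicial-pivot idea can be made concrete: a nonzero entry (r, c) is a safe pivot exactly when every row meeting column c is already nonzero in every column meeting row r, so elimination on that pivot creates no fill-in. A minimal sketch (the function name and the dense 0/1-matrix representation are illustrative, not taken from the document):

```python
def is_bisimplicial(A, r, c):
    """Check whether the nonzero entry (r, c) of 0/1 matrix A is a
    bisimplicial pivot: every row touching column c must already be
    nonzero in every column touching row r, so Gaussian elimination
    on (r, c) introduces no new nonzeros."""
    assert A[r][c], "pivot must be a nonzero entry"
    rows = [i for i in range(len(A)) if A[i][c]]     # neighbours of column c
    cols = [j for j in range(len(A[0])) if A[r][j]]  # neighbours of row r
    return all(A[i][j] for i in rows for j in cols)
```

For example, pivoting on (0, 0) in [[1, 1], [1, 0]] would create fill-in at position (1, 1), and the check rejects it.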
The document discusses the method of multiplicities, which is a technique for combinatorics using algebra. It involves finding a polynomial that vanishes on a set with high multiplicity. This is applied to problems in list decoding of Reed-Solomon codes, bounding the size of Kakeya sets, and constructing randomness extractors. Specifically, the method is used to improve bounds on list decoding, show that certain Kakeya sets must be large, and allow extraction of more randomness from weak sources. Propagating multiplicities of derivatives allows tighter analysis of these problems.
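The notion of vanishing with high multiplicity used here is standardly phrased via Hasse derivatives; a sketch of that definition (notation is assumed, not taken from this document):

```latex
P \text{ vanishes at } a \in \mathbb{F}^n \text{ with multiplicity } \ge m
\quad\Longleftrightarrow\quad
P^{(\mathbf{i})}(a) = 0 \ \text{ for all } \mathbf{i} \in \mathbb{Z}_{\ge 0}^{n}
\text{ with } |\mathbf{i}| < m,
```

where the Hasse derivative $P^{(\mathbf{i})}$ is the coefficient of $\mathbf{z}^{\mathbf{i}}$ in the expansion of $P(\mathbf{x} + \mathbf{z})$.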
The document summarizes research on multiple-conclusion calculi for first-order Gödel logic. It introduces Gödel logic and describes its semantics using both many-valued semantics based on truth values in the interval [0,1] and Kripke-style semantics. It then outlines proof theory for Gödel logic, including early sequent calculi and more recent hypersequent calculi. The hypersequent calculus introduced in 1991 uses standard rules and has been extended to the first-order case. The document provides details on the structural and logical rules of this single-conclusion hypersequent system.
The document summarizes a talk on polynomial identity testing (PIT). PIT is the problem of determining if a polynomial computed by an arithmetic circuit is identical to the zero polynomial. The talk outlines the definition of PIT, its connection to circuit lower bounds, and surveys positive results for restricted circuit classes. It also provides examples of proof techniques for PIT on depth-3 and depth-4 circuits and discusses the relationship between PIT and polynomial factorization.
This document summarizes an algorithm for maximizing throughput in online scheduling of equal-length jobs. The algorithm schedules incoming jobs so as to maximize the total value of jobs completed by their deadlines. A charging scheme and a potential function are used to prove that it is (2+√5)-competitive, an improvement over prior algorithms. Jobs arrive online with weights, processing times, and deadlines, and the paper considers preemption models in which an interrupted job may later be restarted from scratch or resumed from where it left off. Open questions remain around settling the exact competitive ratio and developing new algorithmic methods.
The document discusses efficient algorithms for performing approximate matching queries on grammar-compressed strings. It introduces implicit unit-Monge matrices, which represent permutation matrices space-efficiently using a range tree data structure. This representation allows the dominance counting queries needed for string comparison to be performed in O(log² n) time after an O(n log n) preprocessing step. More advanced data structures can improve these asymptotic time and space bounds further.
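To fix ideas, a dominance-counting query on a permutation matrix asks how many nonzeros lie weakly above and to the left of a point. A naive linear-scan baseline (illustrative only; the range-tree representation in the document answers the same query in O(log² n)):

```python
def dominance_count(p, i, j):
    """Count nonzeros (k, p[k]) of a permutation matrix with k <= i
    and p[k] <= j, where p[k] is the column of the single 1 in row k.
    Naive O(n) scan per query; a range tree does this in O(log^2 n)."""
    return sum(1 for k in range(i + 1) if p[k] <= j)
```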
This document presents an overview of the consensus problem from an informal and formal perspective. It discusses how consensus requires representativity, where the decision reflects a sufficient number of individual opinions, and stability, where the decision is robust to individual opinion variations. It also presents some key formalizations, including defining consensus as a function from the set of sensor inputs and memory states to decisions. It introduces the concept of a geodesic to measure stability as the maximum number of state transitions needed to return to the starting configuration along a trajectory where each sensor changes at most once.
This document summarizes research on the combinatorial properties of Burrows-Wheeler Transforms (BWT). It discusses prior work that characterized words with simple BWT image forms. It also introduces two general decision problems about BWT images and claims to provide efficient solutions to these problems. Specifically, it presents a theorem providing a criterion to check whether a given word is a valid BWT image based on analyzing the number of orbits in the word's stable sorting.
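The orbit criterion can be sketched directly: stably sorting the letters of w induces a permutation of its positions, and (for the BWT of a single primitive word) w is a valid image exactly when that permutation forms one orbit. A hedged illustration — the function name and the single-cycle reading are assumptions based on this summary, not the paper's exact statement:

```python
def is_bwt_image(w):
    """Heuristic check that w is the BWT of a single primitive word:
    the permutation induced by stably sorting w's letters must form
    exactly one orbit (cycle)."""
    n = len(w)
    order = sorted(range(n), key=lambda i: w[i])  # stable sort of positions
    visited = [False] * n
    cycles = 0
    for start in range(n):
        if not visited[start]:
            cycles += 1
            j = start
            while not visited[j]:
                visited[j] = True
                j = order[j]
    return cycles == 1
```

For example, "baa" (the BWT of "aba") yields a single orbit, while "ab" yields two orbits and is not a BWT image.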
The document presents a polynomial-time algorithm for finding a minimal conflicting set of rows (MCSR) in a binary matrix that contains a given row. It defines MCSR as a set of rows that does not have the consecutive ones property but where any proper subset does have the property. The algorithm works by representing the binary matrix as a vertex-colored bipartite graph and detecting forbidden substructures called Tucker configurations that characterize when the consecutive ones property does not hold. It finds an MCSR containing the given row by pruning rows from the graph until a Tucker configuration exists using the current set but not with any proper subset.
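For intuition, the consecutive ones property itself can be checked by brute force on small matrices (exponential in the number of columns; practical algorithms use PQ-trees, and the names below are illustrative):

```python
from itertools import permutations

def row_is_consecutive(row):
    """True if the 1s in the row occupy one contiguous block."""
    ones = [i for i, v in enumerate(row) if v]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

def has_c1p(matrix):
    """Brute-force consecutive ones property: try every column order."""
    m = len(matrix[0])
    return any(
        all(row_is_consecutive([row[j] for j in perm]) for row in matrix)
        for perm in permutations(range(m))
    )
```

For instance, the 3×3 "cycle" matrix [[1,1,0],[0,1,1],[1,0,1]] admits no valid column order, so it contains a forbidden substructure; removing any one row restores the property.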
The document discusses locally decodable codes, which allow recovery of individual data symbols from a coded data set even after erasures. Reed-Muller codes and multiplicity codes were early constructions that provided locality but only up to a rate of 0.5. Matching vector codes were later introduced and can achieve locality r for codes of positive rate and length n=O(r^2). However, the optimal tradeoff between rate, length, and locality remains an open problem.
This document discusses the relationships between orbits of linear maps and regular languages. It shows that the chamber hitting problem (CHP) and the permutation filter realizability problem are Turing equivalent. It also shows that the injective filter and surjective filter realizability problems are decidable by reducing them to problems about orbits. However, the regular realizability problem for the track product of the periodic and permutation filters is undecidable, since the undecidable zero in the upper right corner problem reduces to it.
The document summarizes precedence automata and languages. It provides historical background on operator precedence grammars and Floyd languages. It then discusses how precedence parsing works using an example arithmetic expression. Key points include using a precedence table to determine parentheses insertion and defining three types of moves for an automata model based on symbol precedence: push, mark, and flush. The example demonstrates the automata processing a Dyck language expression.
The document discusses the constraint satisfaction problem (CSP) and the dichotomy conjecture regarding the complexity of CSP instances. It provides definitions and examples of CSPs. It explains the role of polymorphisms in determining the complexity, identifying semilattice, majority and affine polymorphisms as "good". It outlines the dichotomy conjecture that CSPs are either solvable in polynomial time or NP-complete depending on the presence of certain types of local structure defined by polymorphisms. The document also discusses algorithms and results for various constraint languages.
The document discusses the constraint satisfaction problem (CSP) and the dichotomy conjecture in computational complexity theory. It defines CSP and provides examples. It discusses the role of polymorphisms - operations that preserve constraints. The presence or absence of certain polymorphisms like semilattice, majority, and affine operations determines the complexity of CSP for a given constraint language. The document outlines a proposed dichotomy - CSP is either solvable in polynomial time or NP-complete, depending on the polymorphisms. It surveys partial results proving this conjecture and algorithms for certain constraint languages.
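The polymorphism condition is easy to test on small examples. A sketch (names are illustrative): apply an operation coordinatewise to tuples drawn from a relation and check the result stays inside it. The Boolean majority operation preserves the 2-SAT-style relation x ≤ y but not the 1-in-3 relation, matching the tractable/NP-complete split the conjecture predicts.

```python
from itertools import product

def maj(x, y, z):
    """Boolean majority: returns the value appearing at least twice."""
    return y if y == z else x

def is_polymorphism(op, arity, relation):
    """op is a polymorphism of relation if applying it coordinatewise
    to any choice of tuples from the relation lands back in it."""
    return all(
        tuple(op(*col) for col in zip(*rows)) in relation
        for rows in product(relation, repeat=arity)
    )

LEQ = {(0, 0), (0, 1), (1, 1)}                    # x <= y, 2-SAT definable
ONE_IN_THREE = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}  # exactly one 1, NP-complete
```

Here `is_polymorphism(maj, 3, LEQ)` holds, while `is_polymorphism(maj, 3, ONE_IN_THREE)` fails: the majority of the three unit tuples is (0, 0, 0), which is not in the relation.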
The document discusses shared-memory systems and charts. It provides definitions and concepts related to modeling shared-memory concurrency using partial orders of events called pomsets. Specifically, it defines:
- Shared-memory systems as consisting of registers, data, processes, actions, and rules for updating configurations.
- Pomsets as labeled partial orders used to model executions.
- The may-occur-concurrently relation for rules in a shared-memory system.
- Partial-order semantics for runs of pomsets in a shared-memory system.
- Shared-memory charts (SMCs) as pomsets with gates used to model specifications.
This document discusses the relationships between orbits of linear maps and regular languages. It shows that the chamber hitting problem (CHP) and permutation filter-realizability problem are Turing equivalent. It also shows that the injective filter-realizability problem and surjective filter-realizability problem are decidable, while the track product of the periodic and permutation filter-realizability problem is undecidable. The zero in the upper right corner problem, which is undecidable, can be reduced to the latter regular realizability problem.
The document discusses precedence automata and languages. It provides historical background on operator precedence grammars and related families of languages. As an example, it explains how parsing an arithmetic expression like 4+5×6 works according to an implicit context-free grammar and by respecting the precedence of operators. It introduces the concept of a precedence table to determine the admissible parentheses generators between pairs of symbols in a grammar.
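The precedence idea in the 4+5×6 example can be illustrated with a tiny precedence-climbing evaluator. This is a simplification for intuition, not Floyd's operator-precedence automaton itself, and the names are illustrative:

```python
PREC = {'+': 1, '*': 2}  # '*' binds tighter than '+'

def eval_expr(tokens):
    """Evaluate a flat token list of integers and binary +/* using
    precedence climbing, so 4 + 5 * 6 groups as 4 + (5 * 6)."""
    def climb(pos, min_prec):
        lhs = int(tokens[pos])
        pos += 1
        while pos < len(tokens) and PREC.get(tokens[pos], 0) >= min_prec:
            op = tokens[pos]
            rhs, pos = climb(pos + 1, PREC[op] + 1)  # bind tighter operators first
            lhs = lhs + rhs if op == '+' else lhs * rhs
        return lhs, pos
    return climb(0, 1)[0]
```

On the example from the text, `eval_expr(['4', '+', '5', '*', '6'])` groups the multiplication first and returns 34.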
Locally decodable codes allow recovery of individual data symbols even after data loss by accessing only a small number of codeword symbols. Reed-Muller codes provide locality but only up to a rate of 0.5, while multiplicity codes achieve higher rates but have weaker locality guarantees. Matching vector codes can match the best known locality bounds, constructing, for constant r, codes of length n with locality r, but the optimal tradeoff between rate, length, and locality remains an open problem.
This document summarizes research on the combinatorial properties of Burrows-Wheeler Transforms (BWT). It discusses prior work that characterized words with simple BWT image forms. It also defines two general decision problems regarding whether a word is a valid BWT image or can form a specific BWT image pattern. The authors then present efficient solutions to these two problems, including a theorem providing a criterion for determining if a word is a BWT image based on the number of orbits in its stable sorting.