A “promise problem” is a formulation of a partial decision problem. Complexity issues about promise problems arise from considerations about cracking problems for public-key cryptosystems. Using a notion of Turing reducibility between promise problems, this paper disproves a conjecture made by Even and Yacobi (1980) that would imply the nonexistence of public-key cryptosystems with NP-hard cracking problems. In its place a new conjecture is raised having the same consequence. In addition, the new conjecture implies that NP-complete sets cannot be accepted by Turing machines that have at most one accepting computation for each input word.
This file contains the concepts of Class P, Class NP, NP-completeness, the Travelling Salesman Problem, the Clique Problem, the Vertex Cover Problem, the Hamiltonian Problem, FFT, and DFT.
P, NP, NP-Complete, and NP-Hard
Reductionism in Algorithms
NP-Completeness and Cook's Theorem
NP-Complete and NP-Hard Problems
Travelling Salesman Problem (TSP)
Travelling Salesman Problem (TSP) - Approximation Algorithms
PRIMES is in P - (A hope for NP problems in P)
Millennium Problems
Conclusions
This document discusses branch and bound algorithms and NP-hard and NP-complete problems. It provides examples and proofs related to these topics.
1) Branch and bound is an algorithm that systematically enumerates candidate solutions by discarding subsets that are provably suboptimal. The knapsack problem is used as an example problem.
2) NP-hard and NP-complete problems are those for which no polynomial-time algorithm is known; their best known algorithms run in super-polynomial time. If any one NP-complete problem can be solved in polynomial time, then all NP-complete problems can be. Proving a problem NP-complete involves reducing a known NP-complete problem to the target problem.
3) Trees are connected graphs without cycles. They are used to represent hierarchies and
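The branch-and-bound strategy described in point 1) can be made concrete for the 0/1 knapsack example. The sketch below is a minimal illustration (function and variable names are my own, not from the document): it explores include/exclude decisions for each item and prunes any branch whose optimistic fractional bound cannot beat the best complete solution found so far.

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0/1 knapsack: enumerate include/exclude
    decisions, discarding subtrees that are provably suboptimal."""
    # Sort items by value density so the fractional bound is tight.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    n = len(items)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room greedily,
        # allowing a fraction of the last item (a valid upper bound).
        for v, w in items[i:]:
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == n or bound(i, value, room) <= best:
            return  # prune: this subtree cannot improve on `best`
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # include item i
        branch(i + 1, value, room)              # exclude item i

    branch(0, 0, capacity)
    return best
```

The pruning test is what distinguishes branch and bound from plain exhaustive search: the fractional bound is cheap to compute but never underestimates, so discarded subtrees are safe to skip.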
The document reviews concepts related to NP-completeness, including reductions between problems. It provides examples of reducing the directed Hamiltonian cycle problem to the undirected version. It also reduces 3-SAT to the clique problem by transforming a Boolean formula to a graph, then further reduces clique to vertex cover. Hundreds of problems have been shown to be NP-complete through relatively simple reductions like these that leverage previous results.
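The 3-SAT-to-clique reduction mentioned above can be sketched directly. In this toy version (naming and encoding are my own: clauses are lists of nonzero integers, with a negative integer denoting a negated variable), each (clause, literal) pair becomes a vertex, and vertices from different clauses are joined unless their literals are negations of each other; the formula is satisfiable iff the graph has a clique with one vertex per clause.

```python
from itertools import combinations, product

def sat_to_clique(clauses):
    """Textbook reduction from 3-SAT (or any CNF) to CLIQUE."""
    nodes = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = {
        (u, v)
        for u, v in combinations(nodes, 2)
        if u[0] != v[0] and u[1] != -v[1]  # different clause, compatible literals
    }
    return nodes, edges

def has_clause_clique(clauses):
    """Brute-force check (exponential, illustration only): pick one
    literal per clause and test whether the picks form a clique."""
    nodes, edges = sat_to_clique(clauses)
    for choice in product(*[[(i, lit) for lit in c] for i, c in enumerate(clauses)]):
        if all((u, v) in edges or (v, u) in edges
               for u, v in combinations(choice, 2)):
            return True
    return False
```

The reduction itself runs in polynomial time; only the final clique search is hard, which is exactly the point of the construction.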
A New Attack on RSA with a Composed Decryption Exponent (ijcisjournal)
In this paper, we consider an RSA modulus N = pq, where the prime factors p and q are of the same size. We present an attack on RSA when the decryption exponent d is of the form d = M·d1 + d0, where M is a given positive integer and d1 and d0 are two suitably small unknown integers. In 1999, Boneh and Durfee presented an attack on RSA when d < N^0.292. When d = M·d1 + d0, our attack enables one to overcome Boneh and Durfee's bound and to factor the RSA modulus.
Automatski - NP-Complete - TSP - Travelling Salesman Problem Solved in O(N^4) (Aditya Yadav)
The document discusses solving the NP-complete travelling salesman problem (TSP) in O(N^4) time. It begins by introducing Automatski Solutions and their efforts over 25 years to solve some of the toughest computational problems considered unsolvable, including 7 NP-complete problems. It then provides background on the TSP, describing it as one of the most important theoretical problems involving finding the shortest route to visit each of N cities once. The document proceeds to explain Automatski's deterministic algorithm for solving the TSP in O(N^4) time, significantly improving on the best known solution. It claims this proves P=NP and has implications for cryptography such as cracking RSA 2048.
Skiena Algorithm 2007, Lecture 19: Introduction to NP-Completeness (zukun)
1. The document introduces the concept of NP-completeness and discusses how it can be used to show that many problems that cannot be solved efficiently are essentially the same problem.
2. It describes how reductions can be used to show that if one problem can be transformed or reduced to another problem, then finding an efficient algorithm for one would imply an efficient algorithm for the other.
3. The traveling salesman problem and the satisfiability problem are used as examples to illustrate decision problems, instances, encodings, and reductions.
This document summarizes and proves results about unequal-cost prefix-free codes with a binary alphabet where symbols have costs of 1 and 2 units. It shows that the average word length of such codes obeys a fundamental lower bound, analogous to the bound for equal-cost codes. It also proves that prefix-free codes can always be constructed whose average word length is within 2 units of this fundamental bound. This achieves a generalization of key results about average word length and fundamental bounds from equal-cost to unequal-cost prefix-free codes.
This document summarizes research on strong duality analysis for discrete-time constrained portfolio optimization problems. It begins by introducing the mathematical formulation of a discrete-time portfolio selection model with constraints expressed as convex inequalities. It then discusses a risk neutral computational approach based on embedding the primal constrained problem into a family of unconstrained problems in auxiliary markets. Weak duality is shown to hold, relating the optimal values of the primal and auxiliary problems. The document defines a dual problem, known as Pliska's κ dual, that seeks to minimize the optimal values of the auxiliary problems. Conditions for strong duality are presented, under which the optimal solution to the dual problem also solves the primal constrained problem.
NP-hard and NP-complete problems concern the distinction between problems that can be solved in polynomial time and those for which no polynomial-time algorithm is known. The document discusses key concepts such as P versus NP, the theory of NP-completeness, nondeterministic algorithms, reducibility, Cook's theorem, and examples of NP-hard graph problems like graph coloring. Cook's theorem shows that the satisfiability problem is in NP and is NP-complete; a corollary is that satisfiability is in P if and only if P = NP, so if any NP-complete problem can be solved in polynomial time, then P equals NP.
The document proposes a new model for solving Weighted Constraint Satisfaction Problems (WCSPs) using a Hopfield neural network approach. It formulates WCSPs as a 0-1 quadratic programming problem subject to linear constraints, which can be solved using the Hopfield neural network. The model was tested on benchmark WCSP instances and was able to find optimal solutions, with the same time complexity as other known methods. The approach recognizes optimal solutions for WCSPs by minimizing an original energy function using the Hopfield neural network.
- RSA public-key cryptography allows for secure communication through the use of public and private key pairs. The public key can be shared openly but the private key must remain confidential.
- Prior to RSA, all cryptography relied on symmetric keys, where the same key was used for both encryption and decryption. This required secure distribution of the shared secret key.
- RSA was invented in 1978 and revolutionized cryptography by allowing encryption with a public key that corresponds to a private key for decryption, eliminating the need for secure key exchange. It remains the most widely used public-key cryptosystem today.
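The public/private key mechanics described above can be shown in a few lines. This is a toy sketch with tiny textbook primes (the specific numbers are illustrative, not from the document) and is never secure at this size:

```python
# Toy RSA key pair -- illustration only, insecure at this scale.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)    # anyone can encrypt with the public key (e, n)
plain = pow(cipher, d, n)  # only the private key (d, n) decrypts
assert plain == msg
```

This captures the asymmetry the summary describes: (e, n) can be published openly, while recovering d without knowing the factorization of n is the hard problem RSA rests on.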
A Method to Determine Partial Weight Enumerators for Linear Block Codes (Alexander Decker)
This document presents a method to determine partial weight enumerators (PWE) for linear block codes using the error impulse technique and Monte Carlo method. The PWE can be used to compute an upper bound on the error probability of the maximum likelihood decoder. As an application, the document provides PWEs and analytical performances of shortened BCH codes, including BCH(130,66), BCH(103,47), and BCH(111,55). The full weight distributions of these codes are unknown. The proposed method estimates the PWE by drawing random codewords and computing the recovery rate of known-weight codewords, obtaining the PWE within a confidence interval.
The document discusses mathematical induction and recursion.
It defines mathematical induction as a method of proof that establishes a statement is true for all natural numbers by proving the base case and inductive step. Recursion is defined as a programming strategy that solves large problems by splitting them into smaller subproblems of the same kind. Examples are provided to demonstrate proofs by mathematical induction and defining functions recursively.
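The connection between the two ideas is that a recursive definition mirrors an inductive proof: the base case of the recursion is the base case of the induction, and the recursive call is the inductive step. A minimal sketch (my own example, not from the document):

```python
def triangular(n):
    """Sum of the first n naturals, defined recursively:
    base case T(0) = 0, inductive step T(n) = T(n-1) + n."""
    if n == 0:
        return 0                     # base case of the induction
    return triangular(n - 1) + n     # inductive step

# Induction proves T(n) = n(n+1)/2 for all naturals; spot-check it:
for n in range(10):
    assert triangular(n) == n * (n + 1) // 2
```

Proving the closed form by induction and trusting the recursive function are the same argument: both reduce the claim for n to the claim for n - 1, grounded at n = 0.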
The document presents a method for solving fuzzy assignment problems using triangular and trapezoidal fuzzy numbers. It formulates the fuzzy assignment problem into a crisp linear programming problem that can be solved using the Hungarian method. The paper also uses Robust's ranking method to transform fuzzy costs into crisp values, allowing conventional solution methods to be applied. It aims to provide a more realistic approach to assignment problems by considering costs as fuzzy numbers rather than deterministic values.
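Once the fuzzy costs have been ranked into crisp values, the remaining step is an ordinary assignment problem. The sketch below (function name and cost matrix are my own) solves it by exhaustive search over permutations, which is only feasible for tiny instances; the Hungarian method the paper uses achieves the same optimum in polynomial time.

```python
from itertools import permutations

def solve_assignment(costs):
    """Exhaustive assignment solver. costs[i][j] is the crisp cost of
    assigning worker i to job j, e.g. a fuzzy cost already defuzzified
    by a ranking function. O(n!) -- illustration only."""
    n = len(costs)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(costs[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```

For example, `solve_assignment([[4, 1, 3], [2, 0, 5], [3, 2, 2]])` assigns worker 0 to job 1, worker 1 to job 0, and worker 2 to job 2.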
A Method to Determine Partial Weight Enumerators for Linear Block Codes (Alexander Decker)
This document presents a method to determine partial weight enumerators for linear block codes using error impulse technique and Monte Carlo method. The partial weight enumerator can be used to compute an upper bound on the error probability of maximum likelihood decoding. As an application, the method provides partial weight enumerators and performance analyses of three shortened BCH codes: BCH(130,66), BCH(103,47), and BCH(111,55). The full weight distributions of these codes are unknown.
The document discusses error correcting codes and provides details on linear codes, multiplication codes (cyclic codes), and composition codes. It then poses some open questions regarding composition codes, specifically the cardinality of symmetric differences of sets related to the composition codes. The document proves a theorem showing that for any k ≤ m, the cardinality of the symmetric differences of sets is at most m, partially answering the open questions regarding composition codes.
Do We Need a Logic of Quantum Computation? (Matthew Leifer)
1) The document discusses whether quantum computing needs a formal logic in the same way that classical computing is understood through classical logic. It examines previous proposals for "quantum logics" and focuses on Sequential Quantum Logic (SQL).
2) SQL models sequences of quantum measurements and operations through projection operators and sequential conjunction. The document proposes testing SQL propositions through a quantum algorithm that prepares an encoded "history state" and applies renormalization operations.
3) The proposed algorithm could test SQL propositions with exponentially small probability of success. Several open questions are raised about generalizing and improving SQL as a logic for quantum computing.
This document presents a new immune algorithm called IMMALG for solving the chromatic number problem. IMMALG incorporates a stochastic aging operator and local search procedure to improve performance. It is characterized and parameterized using Kullback entropy. Experimental results show IMMALG is competitive with state-of-the-art evolutionary algorithms for chromatic number problem instances.
This document discusses short packet transmission in machine-type communications (MTC). MTC has unique characteristics including short data packets, low latency requirements, and a massive number of connected devices. Existing wireless technologies are not well-suited for MTC due to assumptions of long packets and latency tolerances. A new theoretical framework called finite blocklength information theory provides tight bounds on achievable rates for short packet transmissions, taking into account both blocklength and error probability. These bounds demonstrate a fundamental difference between short and long packet transmission.
Global Domination Set in Intuitionistic Fuzzy Graphs (ijceronline)
The document defines and discusses global domination sets in intuitionistic fuzzy graphs (IFGs). Some key points:
- A global domination set of an IFG G is a domination set that dominates both G and its complement. The global domination number γg(G) is the minimum cardinality of a global domination set.
- Bounds on γg(G) are established, such as Min{|Vi|+|Vj|} ≤ γg(G) ≤ p, where Vi and Vj are vertices.
- Properties of γg(G) are proved, including γg(G) = γg(Gc) where Gc is the complement of G.
- Special
This document summarizes key concepts from a PhD dissertation on uncertainty in deep learning:
1) There are two types of uncertainties - epistemic uncertainty from lack of knowledge that decreases with more data, and aleatoric uncertainty from inherent noise that cannot be reduced. Deep learning models need to estimate both to provide predictive uncertainty.
2) Variational inference allows approximating intractable Bayesian posteriors by minimizing the KL divergence between an approximating distribution and the true posterior. Dropout can be seen as a Bayesian approximation where weights follow a Bernoulli distribution.
3) With dropout as a variational distribution, predictive uncertainty in regression is estimated from multiple stochastic forward passes, with aleatoric uncertainty coming from the learned noise term and epistemic uncertainty from the variance across passes.
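The "multiple stochastic forward passes" idea in point 3) can be sketched with a toy linear model (the model, names, and parameters here are my own illustration, not the dissertation's): dropout stays active at prediction time, and the spread of the resulting predictions estimates the epistemic uncertainty.

```python
import random
import statistics

def forward(x, weights, keep_prob=0.8):
    """One stochastic forward pass of a toy linear 'network':
    each weight is independently dropped, as in test-time dropout.
    Inverted scaling (w / keep_prob) keeps the expectation unchanged."""
    kept = [w / keep_prob if random.random() < keep_prob else 0.0 for w in weights]
    return sum(w * xi for w, xi in zip(kept, x))

def mc_dropout_predict(x, weights, passes=1000):
    """Monte Carlo dropout: run many stochastic passes. The sample mean
    is the prediction; the sample variance estimates epistemic
    uncertainty (a learned noise term would add the aleatoric part)."""
    samples = [forward(x, weights) for _ in range(passes)]
    return statistics.mean(samples), statistics.variance(samples)
```

A real implementation would do the same with a deep network whose dropout layers are left in training mode at inference; the mean/variance bookkeeping is identical.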
The document discusses hybridizing complete and incomplete methods for solving constraint satisfaction problems (CSPs). Complete methods, such as constraint propagation and backtracking, exhaustively search the solution space but can be slow. Incomplete methods, like local search and genetic algorithms, focus on promising areas but may miss solutions. The document proposes integrating propagation, local search, and genetic algorithms into a unified framework to leverage their respective advantages.
(DL Hacks reading group) How to Train Deep Variational Autoencoders and Probabilistic Lad... (Masahiro Suzuki)
This document discusses techniques for training deep variational autoencoders and probabilistic ladder networks. It proposes three advances: 1) Using an inference model similar to ladder networks with multiple stochastic layers, 2) Adding a warm-up period to keep units active early in training, and 3) Using batch normalization. These advances allow training models with up to five stochastic layers and achieve state-of-the-art log-likelihood results on benchmark datasets. The document explains variational autoencoders, probabilistic ladder networks, and how the proposed techniques parameterize the generative and inference models.
NP-completeness. Classes P and NP are two frequently studied classes of problems in computer science. Class P is the set of all problems that can be solved by a deterministic Turing machine in polynomial time; Class NP is the set of all problems whose solutions can be verified in polynomial time, equivalently those solvable by a nondeterministic Turing machine in polynomial time.
This document summarizes and critiques the public-key cryptosystem proposed by Wagner and Magyarik in 1984. The summary identifies 5 main critiques of their system: 1) It is too vague in its general form, 2) Their concrete example is also vague and insecure, 3) It can generate "spurious keys", 4) It is actually based on the word choice problem rather than the word problem, and 5) It does not constitute a full cryptosystem but rather a starting point for further research. The document then proposes an alternative public-key cryptosystem based on transformation groups, using a group with a known coNP-complete word problem.
I am Charles B. I am a Programming Exam Expert at programmingexamhelp.com. I hold a Ph.D. in Programming Texas University, USA. I have been helping students with their exams for the past 9 years. You can hire me to take your exam in Programming.
Visit programmingexamhelp.com or email support@programmingexamhelp.com. You can also call on +1 678 648 4277 for any assistance with the Programming Exam.
Last time we talked about propositional logic, a logic on simple statements.
This time we will talk about first order logic, a logic on quantified statements.
First order logic is much more expressive than propositional logic.
The topics on first order logic are:
1-Quantifiers
2-Negation
3-Multiple quantifiers
4-Arguments of quantified statements
Average Polynomial Time Complexity Of Some NP-Complete ProblemsAudrey Britton
This document discusses the average polynomial time complexity of some NP-complete problems. Specifically:
- It proves that when N(n) = [n^e] for 0 < e < 2, the CLIQUEN(n) problem (finding a clique of size k in a graph with n vertices and at most N(n) edges) is NP-complete, yet can be solved in average and almost everywhere polynomial time.
- It considers the CLIQUE problem under different non-uniform distributions on the input graphs, showing the problem can be solved in average polynomial time for certain values of the distribution parameter p.
- The main result shows that a family of subproblems of CLIQUE that remain
A Stochastic Limit Approach To The SAT ProblemValerie Felton
This document proposes using quantum adaptive stochastic systems to solve NP-complete problems like SAT in polynomial time. It summarizes the SAT problem, discusses existing quantum algorithms for it, and introduces the concept of using channels instead of just unitary operators to model more realistic quantum computations. It argues that combining a quantum SAT algorithm with a stochastic limit approach using channels could provide a method to distinguish computation results that existing algorithms cannot, potentially solving NP-complete problems efficiently.
This document discusses NP-hard and NP-complete problems. It begins by defining the classes P, NP, NP-hard, and NP-complete. It then provides examples of NP-hard problems like the traveling salesperson problem, satisfiability problem, and chromatic number problem. It explains that to show a problem is NP-hard, one shows it is at least as hard as another known NP-hard problem. The document concludes by discussing how restricting NP-hard problems can result in problems that are solvable in polynomial time.
I am Gabriel C. I love exploring new topics. Academic writing seemed an interesting option for me. After working for many years with programmingexamhelp.com, I have assisted many students with their exams. I can proudly say, each student I have served is happy with the quality of the solution that I have provided. I have acquired Business analyst of Information Technology, Montreal College of Information Technology, Canada.
This document provides an introduction to the theory of computation, including definitions of key concepts like automata theory, symbols, alphabets, strings, languages, and sets. It discusses how automata theory deals with formal models of computation and is used in areas like text processing and programming languages. Mathematical terminology is introduced, such as symbols, alphabets, strings, languages, sets, and the power and Cartesian product of alphabets. Examples are given to illustrate concepts like strings, languages, and valid versus invalid computations based on whether a string is contained within a language.
Algorithms and Complexity: Cryptography TheoryAlex Prut
This document discusses algorithms and complexity in cryptography. It begins by defining function problems in computational complexity theory as computational problems where the expected output is more complex than a simple yes or no answer. It then discusses one-way functions, which are easy to compute but believed to be hard to invert. The document provides examples of one-way functions based on integer multiplication, discrete logarithms, and the RSA cryptosystem. It argues that the existence of one-way functions separates the complexity classes P and UP, and that cryptography relies on the assumption that stronger versions of one-way functions exist.
This document discusses P, NP and NP-complete problems. It begins by introducing tractable and intractable problems, and defines problems that can be solved in polynomial time as tractable, while problems that cannot are intractable. It then discusses the classes P and NP, with P containing problems that can be solved deterministically in polynomial time, and NP containing problems that can be solved non-deterministically in polynomial time. The document concludes by defining NP-complete problems as those in NP that are as hard as any other problem in the class, in that any NP problem can be reduced to an NP-complete problem in polynomial time.
I am Britney P. I love exploring new topics. Academic writing seemed an interesting option for me. After working for many years with progamminghomeworkhelp.com, I have assisted many students with their Design and Analysis of Algorithms Assignments. I can proudly say, each student I have served is happy with the quality of the solution that I have provided. I have acquired my bachelor's from Sunway University, Malaysia.
The document discusses approximation algorithms for NP-hard optimization problems. It provides examples of approximation algorithms for problems like set cover, vertex cover, traveling salesman problem (TSP), and knapsack. For set cover, it shows that a greedy algorithm provides a (1+ln n)-approximation. For vertex cover and TSP, it describes 2-approximation algorithms. It also presents a fully polynomial-time approximation scheme (FPTAS) for knapsack that provides a solution within (1-eps) of optimal.
Presentation of daa on approximation algorithm and vertex cover problem sumit gyawali
This document presents an approximation algorithm for the vertex cover problem. It first defines approximation algorithms as techniques for dealing with NP-complete optimization problems that provide near-optimal solutions in polynomial time. It then defines the vertex cover problem as finding a subset of vertices that covers all graph edges, and provides an example 2-approximation algorithm that iteratively adds vertices covering arbitrary remaining edges to the cover set.
A Branch And Bound Algorithm For The Generalized Assignment ProblemSteven Wallach
This paper describes a branch and bound algorithm for solving the generalized assignment problem. The algorithm works by first solving a relaxation of the problem to obtain a lower bound. It then checks constraints for violations and solves binary knapsack problems to further refine the lower bound. The algorithm exploits the structure of the problem by using solutions to binary knapsack problems to determine bounds rather than linear programming. Computational results are provided for problems with up to 4,000 variables.
I am Bing Jr. I am a Computer Network Assignment Expert at computernetworkassignmenthelp.com. I hold a Master's in Computer Science from, Glasgow University, UK. I have been helping students with their assignments for the past 10 years. I solve assignments related to the Computer Network.
Visit computernetworkassignmenthelp.com or email support@computernetworkassignmenthelp.com.
You can also call on +1 678 648 4277 for any assistance with the Computer Network Assignment.
This document discusses the limits of computation. It distinguishes between intractable problems that take an impractical amount of time to solve versus truly unsolvable problems. It describes different complexity classes based on how fast the number of operations grows with input size. Hard problems like the traveling salesman problem are inherently difficult even with faster computers. Reductions show relationships between problem difficulties. The halting problem and incompleteness theorems prove certain logical and mathematical questions cannot be answered algorithmically.
This document discusses the limits of computation. It distinguishes between intractable problems that take an impractical amount of time to solve versus truly unsolvable problems. It describes different complexity classes based on how fast the number of operations grows with input size. Hard problems like the traveling salesman problem are in NP and are inherently difficult even with faster computers or quantum computing. Reductions show relationships between problem difficulties. The halting problem and Godel's incompleteness theorems establish fundamental limits of computation and logical systems.
Similar to The complexity of promise problems with applications to public-key cryptography (20)
Também conhecido como o “Time Lock Puzzle”, o LCS35 é um desafio em forma de criptografia projetado em 1999 pelo pesquisador Ron Rivest, do Instituto de Tecnologia de Massachusetts (MIT). Quando este problema matemático for resolvido, uma cápsula do tempo de chumbo será aberta no MIT.
O puzzle envolve a divisão de um número incrivelmente enorme, por um número que é apenas um pouco menor que o da conta (mas ainda com mais de 600 dígitos).
Ninguém sabe o que está dentro da capsula e, segundo os dados de Rivest, estima-se que levaria cerca de 35 anos para que o enigma seja resolvido. Os interessados ainda terão de esperar para descobrir o que de fato tem na capsula.
Wow! Signal Decoded as Foundations of Theory of EverythingXequeMateShannon
The document decodes the "Wow! Signal" as containing information about the foundations of a Theory of Everything. It presents three ways the signal can be divided and analyzed:
1) The first two numbers in the signal (6 and 14) indicate its true length and that the second part verifies the first.
2) Dividing the main part into two groups of three elements yields equal sums, justifying this division.
3) The numbers correspond to degrees of freedom of stable and meta-stable manifolds described in Scale Symmetric Theory, which underlie different levels of nature.
Em Matemática, um número normal é um número real cujos algarismos são distribuídos de maneira aleatória no seu desenvolvimento decimal, isto é, os algarismos aparecem todos com a mesma freqüência. Os "algarismos" se referem aos algarismos antes da vírgula e a seqüência infinita de algarismos após a vírgula.
A Teoria de Cordas e a Unificação das Forças da NaturezaXequeMateShannon
O documento descreve o método científico, incluindo observação de fenômenos, criação de hipóteses, teste de hipóteses e estabelecimento de leis científicas. Também discute a situação atual das quatro forças fundamentais da natureza e a necessidade de uma teoria unificada que explique todas elas.
O documento introduz os princípios e aplicações dos algoritmos genéticos, que são algoritmos inspirados na evolução biológica e na genética. Descreve os componentes-chave dos algoritmos genéticos, incluindo a representação cromossômica das soluções, a avaliação de aptidão, a seleção baseada na aptidão e os operadores genéticos como cruzamento e mutação. Aplicações dos algoritmos genéticos incluem otimização de funções matemáticas, problemas de otimização combinatória
Hamiltonian design in readout from room-temperature Raman atomic memory XequeMateShannon
We present an experimental demonstration of the Hamiltonian manipulation in light-atom interface in Raman-type warm rubidium-87 vapor atomic memory. By adjusting the detuning of the driving beam we varied the relative contributions of the Stokes and anti-Stokes scattering to the process of four-wave mixing which reads out a spatially multimode state of atomic memory. We measured the temporal evolution of the readout fields and the spatial intensity correlations between write-in and readout as a function of detuning with the use of an intensified camera. The correlation maps enabled us to resolve between the anti-Stokes and the Stokes scattering and to quantify their contributions. Our experimental results agree quantitatively with a simple, plane-wave theoretical model we provide. They allow for a simple interpretation of the coaction of the anti-Stokes and the Stokes scattering at the readout stage. The Stokes contribution yields additional, adjustable gain at the readout stage, albeit with inevitable extra noise. Here we provide a simple and useful framework to trace it and the results can be utilized in the existing atomic memories setups. Furthermore, the shown Hamiltonian manipulation offers a broad range of atom-light interfaces readily applicable in current and future quantum protocols with atomic ensembles.
An efficient algorithm for the computation of Bernoulli numbersXequeMateShannon
This document describes an efficient algorithm for computing large Bernoulli numbers. The algorithm is based on the asymptotic formula for Bernoulli numbers and uses the Euler product formula with primes. The algorithm allows individual Bernoulli numbers to be computed much faster than recurrence formulas. Using this algorithm, the authors computed B(750,000) in a few hours and set a record by computing B(5,000,000), which has over 27 million decimal digits. The algorithm involves computing the principal term, a few terms of the Euler product, and the fractional part using the Von Staudt-Clausen formula.
Good cryptography requires good random numbers. This paper evaluates the hardwarebased Intel Random Number Generator (RNG) for use in cryptographic applications.
Almost all cryptographic protocols require the generation and use of secret values that must be unknown to attackers. For example, random number generators are required to generate public/private keypairs for asymmetric (public key) algorithms including RSA, DSA, and Diffie-Hellman. Keys for symmetric and hybrid cryptosystems are also generated randomly. RNGs are also used to create challenges, nonces (salts), padding bytes, and blinding values. The one time pad – the only provably-secure encryption system – uses as much key material as ciphertext and requires that the keystream be generated from a truly random process.
A palavra aleatoriedade exprime quebra de ordem, propósito, causa, ou imprevisibilidade em uma terminologia não científica. Um processo aleatório é o processo repetitivo cujo resultado não descreve um padrão determinístico, mas segue uma distribuição de probabilidade.
Quantum cryptography can, in principle, provide unconditional security guaranteed by the law of physics only. Here, we survey the theory and practice of the subject and highlight some recent developments.
The Security of Practical Quantum Key DistributionXequeMateShannon
Quantum key distribution (QKD) is the first quantum information task to reach the level of mature technology, already fit for commercialization. It aims at the creation of a secret key between authorized partners connected by a quantum channel and a classical authenticated channel. The security of the key can in principle be guaranteed without putting any restriction on the eavesdropper's power.
The first two sections provide a concise up-to-date review of QKD, biased toward the practical side. The rest of the paper presents the essential theoretical tools that have been developed to assess the security of the main experimental platforms (discrete variables, continuous variables and distributed-phase-reference protocols).
Experimental realisation of Shor's quantum factoring algorithm using qubit r...XequeMateShannon
Quantum computational algorithms exploit quantum mechanics to solve problems exponentially faster than the best classical algorithms. Shor's quantum algorithm for fast number factoring is a key example and the prime motivator in the international effort to realise a quantum computer. However, due to the substantial resource requirement, to date, there have been only four small-scale demonstrations. Here we address this resource demand and demonstrate a scalable version of Shor's algorithm in which the n qubit control register is replaced by a single qubit that is recycled n times: the total number of qubits is one third of that required in the standard protocol. Encoding the work register in higher-dimensional states, we implement a two-photon compiled algorithm to factor N=21. The algorithmic output is distinguishable from noise, in contrast to previous demonstrations. These results point to larger-scale implementations of Shor's algorithm by harnessing scalable resource reductions applicable to all physical architectures.
The usual theory of inflation breaks down in eternal inflation. We derive a dual description of eternal inflation in terms of a deformed Euclidean CFT located at the threshold of eternal inflation. The partition function gives the amplitude of different geometries of the threshold surface in the no-boundary state. Its local and global behavior in dual toy models shows that the amplitude is low for surfaces which are not nearly conformal to the round three-sphere and essentially zero for surfaces with negative curvature. Based on this we conjecture that the exit from eternal inflation does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth.
The different faces of mass action in virus assemblyXequeMateShannon
The spontaneous encapsulation of genomic and non-genomic polyanions by coat proteins of simple icosahedral viruses is driven, in the first instance, by electrostatic interactions with polycationic RNA binding domains on these proteins. The efficiency with which the polyanions can be encapsulated in vitro, and presumably also in vivo, must in addition be governed by the loss of translational and mixing entropy associated with co-assembly, at least if this co-assembly constitutes a reversible process. These forms of entropy counteract the impact of attractive interactions between the constituents and hence they counteract complexation. By invoking mass action-type arguments and a simple model describing electrostatic interactions, we show how these forms of entropy might settle the competition between negatively charged polymers of different molecular weights for co-assembly with the coat proteins. In direct competition, mass action turns out to strongly work against the encapsulation of RNAs that are significantly shorter, which is typically the case for non-viral (host) RNAs. We also find that coat proteins favor forming virus particles over nonspecific binding to other proteins in the cytosol even if these are present in vast excess. Our results rationalize a number of recent in vitro co-assembly experiments showing that short polyanions are less effective at attracting virus coat proteins to form virus-like particles than long ones do, even if both are present at equal weight concentrations in the assembly mixture.
A Digital Signature Based on a Conventional Encryption FunctionXequeMateShannon
A new digital signature based only on a conventional encryption function (such as DES) is described which is as secure as the underlying encryption function -- the security does not depend on the difficulty of factoring and the high computational costs of modular arithmetic are avoided. The signature system can sign an unlimited number of messages, and the signature size increases logarithmically as a function of the number of messages signed. Signature size in a ‘typical’ system might range from a few hundred bytes to a few kilobytes, and generation of a signature might require a few hundred to a few thousand computations of the underlying conventional encryption function.
Quantum algorithm for solving linear systems of equationsXequeMateShannon
Solving linear systems of equations is a common problem that arises both on its own and as a subroutine in more complex problems: given a matrix A and a vector b, find a vector x such that Ax=b. We consider the case where one doesn't need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x'Mx for some matrix M. In this case, when A is sparse, N by N and has condition number kappa, classical algorithms can find x and estimate x'Mx in O(N sqrt(kappa)) time. Here, we exhibit a quantum algorithm for this task that runs in poly(log N, kappa) time, an exponential improvement over the best classical algorithm.
Shor's discrete logarithm quantum algorithm for elliptic curvesXequeMateShannon
This document summarizes John Proos and Christof Zalka's research on implementing Shor's quantum algorithm for solving the discrete logarithm problem over elliptic curves. It shows that for elliptic curve cryptography, a quantum computer could solve problems beyond the capabilities of classical computers using fewer qubits than for integer factorization. Specifically, a 160-bit elliptic curve key could be broken using around 1000 qubits, compared to 2000 qubits needed to factor a 1024-bit RSA modulus. The main technical challenge is implementing the extended Euclidean algorithm to compute multiplicative inverses modulo a prime p.
Countermeasures Against High-Order Fault-Injection Attacks on CRT-RSAXequeMateShannon
This document presents a classification and analysis of existing countermeasures against fault injection attacks on CRT-RSA cryptosystems. It shows that many countermeasures share common features of protecting integrity through redundant computations but optimize them differently. It also demonstrates that faults on code can be modeled as faults on data and that there is no conceptual difference between test-based and infective countermeasures. This analysis is then used to improve countermeasures by fixing known issues, optimizing implementations, and presenting a method to design provable high-order countermeasures to resist multiple faults.
The cost of acquiring information by natural selectionCarl Bergstrom
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
Mechanisms and Applications of Antiviral Neutralizing Antibodies - Creative B...Creative-Biolabs
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
PPT on Sustainable Land Management presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
TOPIC OF DISCUSSION: CENTRIFUGATION SLIDESHARE.pptxshubhijain836
Centrifugation is a powerful technique used in laboratories to separate components of a heterogeneous mixture based on their density. This process utilizes centrifugal force to rapidly spin samples, causing denser particles to migrate outward more quickly than lighter ones. As a result, distinct layers form within the sample tube, allowing for easy isolation and purification of target substances.
BIRDS DIVERSITY OF SOOTEA BISWANATH ASSAM.ppt.pptxgoluk9330
Ahota Beel, nestled in Sootea Biswanath Assam , is celebrated for its extraordinary diversity of bird species. This wetland sanctuary supports a myriad of avian residents and migrants alike. Visitors can admire the elegant flights of migratory species such as the Northern Pintail and Eurasian Wigeon, alongside resident birds including the Asian Openbill and Pheasant-tailed Jacana. With its tranquil scenery and varied habitats, Ahota Beel offers a perfect haven for birdwatchers to appreciate and study the vibrant birdlife that thrives in this natural refuge.
Signatures of wave erosion in Titan’s coastsSérgio Sacani
The shorelines of Titan’s hydrocarbon seas trace flooded erosional landforms such as river valleys; however, it isunclear whether coastal erosion has subsequently altered these shorelines. Spacecraft observations and theo-retical models suggest that wind may cause waves to form on Titan’s seas, potentially driving coastal erosion,but the observational evidence of waves is indirect, and the processes affecting shoreline evolution on Titanremain unknown. No widely accepted framework exists for using shoreline morphology to quantitatively dis-cern coastal erosion mechanisms, even on Earth, where the dominant mechanisms are known. We combinelandscape evolution models with measurements of shoreline shape on Earth to characterize how differentcoastal erosion mechanisms affect shoreline morphology. Applying this framework to Titan, we find that theshorelines of Titan’s seas are most consistent with flooded landscapes that subsequently have been eroded bywaves, rather than a uniform erosional process or no coastal erosion, particularly if wave growth saturates atfetch lengths of tens of kilometers.
PPT on Alternate Wetting and Drying presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...Scintica Instrumentation
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
Anti-Universe And Emergent Gravity and the Dark UniverseSérgio Sacani
Recent theoretical progress indicates that spacetime and gravity emerge together from the entanglement structure of an underlying microscopic theory. These ideas are best understood in Anti-de Sitter space, where they rely on the area law for entanglement entropy. The extension to de Sitter space requires taking into account the entropy and temperature associated with the cosmological horizon. Using insights from string theory, black hole physics and quantum information theory we argue that the positive dark energy leads to a thermal volume law contribution to the entropy that overtakes the area law precisely at the cosmological horizon. Due to the competition between area and volume law entanglement the microscopic de Sitter states do not thermalise at sub-Hubble scales: they exhibit memory effects in the form of an entropy displacement caused by matter. The emergent laws of gravity contain an additional ‘dark’ gravitational force describing the ‘elastic’ response due to the entropy displacement. We derive an estimate of the strength of this extra force in terms of the baryonic mass, Newton’s constant and the Hubble acceleration scale a0 = cH0, and provide evidence for the fact that this additional ‘dark gravity force’ explains the observed phenomena in galaxies and clusters currently attributed to dark matter.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfSelcen Ozturkcan
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
160 EVEN, SELMAN, AND YACOBI
to determine whether there exists an algorithm 𝒜 that solves the problem,
i.e., such that 𝒜(x) converges for all input instances x and such that
∀x[𝒜(x) = "yes" ↔ P(x)].
In practice, one often encounters problems for which only a subclass of
the domain of all instances is of concern. Such problems are here called
promise problems. Informally, a promise problem has the structure
input x,
promise Q(x),
property R (x),
where Q and R are predicates. Formally, a promise problem is a pair of
predicates (Q,R). The predicate Q is called the promise. A deterministic
Turing machine M solves the promise problem (Q, R) if
∀x[Q(x) → [M(x)↓ ∧ (M(x) = "yes" ↔ R(x))]].
(The notation M(x)↓ means that M eventually halts on input x.) A promise
problem (Q, R) is solvable if there exists a Turing machine M that solves it.
If a Turing machine M solves (Q, R), then the language L(M) accepted by M
is a solution to (Q, R).
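The definitions above can be sketched in code. The following Python sketch (the names `solves`, `Q`, `R`, and `machine` are illustrative, not from the paper) models a promise problem as a pair of predicates and checks the solving condition on a finite sample of inputs; a real Turing machine would of course have to satisfy it on all inputs:

```python
# A promise problem is a pair of predicates (Q, R); a machine M solves it if,
# for every x satisfying the promise Q, M halts on x and answers "yes" exactly
# when R(x) holds.  On inputs violating Q, M's behavior is unconstrained.

def solves(machine, Q, R, sample):
    """Check the solving condition, restricted to a finite sample of inputs."""
    return all(machine(x) == R(x) for x in sample if Q(x))

# Illustrative instance: promise "x is even", property "x is divisible by 4".
Q = lambda x: x % 2 == 0
R = lambda x: x % 4 == 0

# This machine only needs to be correct on even inputs;
# whatever it answers on odd inputs is irrelevant.
machine = lambda x: x % 4 == 0

print(solves(machine, Q, R, range(100)))  # True
```

Any language L(M) accepted by such a machine agrees with R on the promise set, which is exactly what the paper calls a solution to (Q, R).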
The study of problems with this format is certainly not new. For partial
recursive functions one wants a program that computes correctly on its
domain. And, techniques for establishing correctness of a program are
typically distinct from halting issues. Problems of this kind have also arisen
in context-free language theory (Ullian, 1967).
Complexity issues about promise problems arise from Even and Yacobi's
(1980) work in public-key cryptography. Since their work is the primary
motivation for the complexity results that follow, and since we wish to draw
conclusions about public-key cryptosystems, the model of a public-key cryp-
tosystem which they use is now given.
A public-key cryptosystem consists of three fixed and publicly known
deterministic algorithms E, D, and G that operate in polynomial time. The
diagram gives the basic layout; E is the encryption algorithm, D is the
decryption algorithm, and G is the key generator; M, C, K1, K2, and X will
be binary words, called the message, cryptogram, encryption key, decryption
key, and trap-door, respectively. Prior to transmission of messages M of
length n, the receiver generates X, say randomly, where |X| is polynomial
in n, and then uses X to compute the pair (K1, K2) = G(X). K1 is made
publicly known, but X and K2 remain private. The encryption-decryption
key pair may be used for encoding and decoding purposes for a relatively
long time.
[Figure: layout of the public-key cryptosystem. The receiver generates the trap-door X, computes (K1, K2) = G(X), and publishes K1; the transmitter sends C = E(K1, M) over an open channel, and the receiver recovers M = D(K2, C).]
When a transmitter wants to send a message M of length n to a receiver,
he computes C = E(K1, M) and sends C on an open channel. The receiver,
knowing K2, reconstructs M by M = D(K2, C). It is assumed that, for every
X, if (K1, K2) = G(X), then M = D(K2, E(K1, M)). To the extent that n is a
parameter, this system is similar to Brassard's transient key cryptosystem
(Brassard, 1983).
The inverse condition guarantees that the function λM.E(K1, M) is one-one for each K1 that is the public key generated by G for some X. λM.E(K1, M) may very well not be onto.
A basic issue is whether there exist public-key cryptosystems with hard cracking problems. The cracking problem is the problem of computing M such that E(K1, M) = C, if such an M exists. Therefore, the cracking problem for a fixed public-key cryptosystem is describable as a promise problem in the following way.
input n, K1, C, and M' (where |M'| = n),
promise there exists X such that K1 is the public key generated by G on input X and there exists a message M, |M| = n, such that E(K1, M) = C,
property M' ≥ M, where M is the message which satisfies E(K1, M) = C.
Since λM.E(K1, M) is one-one, there is at most one M which satisfies E(K1, M) = C. Thus, if the promise is true, the question of whether the numerical value of M' is greater than or equal to the numerical value of M is always meaningful, and has a positive or negative answer. Furthermore, the computational version of the cracking problem (find M) is polynomial-time equivalent to the version given here. Given M such that E(K1, M) = C, simply check whether M' ≥ M. Conversely, if the promise holds for n, K1, and C, an algorithm A that solves the above formulation of the cracking problem can be used in a binary search to find M, and this computation runs in polynomial time relative to A.
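The binary search just described can be sketched as follows; `ge_oracle` is a hypothetical stand-in for an algorithm solving the promise-problem formulation, answering (under the promise) whether a candidate M' satisfies M' ≥ M:

```python
def recover_message(ge_oracle, n):
    """Recover the hidden n-bit message M with O(n) queries to an
    oracle that, under the promise, answers whether M' >= M."""
    lo, hi = 0, 2 ** n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if ge_oracle(mid):      # mid >= M, so M lies in [lo, mid]
            hi = mid
        else:                   # mid < M, so M lies in [mid + 1, hi]
            lo = mid + 1
    return lo

# Toy stand-in: a hidden 5-bit message and its comparison oracle.
hidden_M = 22
oracle = lambda m_prime: m_prime >= hidden_M
```

Here `recover_message(oracle, 5)` returns 22 after at most 5 queries; each query corresponds to one run of the assumed promise-problem solver on (n, K1, C, M').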
A public-key cryptosystem can be deemed secure only if every algorithm
EVEN, SELMAN, AND YACOBI
that solves the promise problem formulation of its cracking problem is inef-
ficient. Attempts to formulate the cracking problem as an ordinary decision
problem would lead to faulty considerations. To see this, let Q and R denote the promise and property predicates, respectively, of the cracking promise problem and, tentatively, define the conjunction Q ∧ R to be the cracking decision problem. Now, one might have a public-key cryptosystem, say S, such that the cracking decision problem Q ∧ R is difficult to solve. But if there is an efficient algorithm that solves the cracking promise problem, i.e., that efficiently finds M when the promise predicate Q is true and that gives "garbage" output when Q is false, then S is not a usable system. On the other hand, if the cracking promise problem (Q, R) has no efficient solution, then the cracking decision problem Q ∧ R is difficult to solve also, because any algorithm that solves Q ∧ R also solves (Q, R).
This paper is organized so that Sections 2 and 3 provide basic complexity
notions and results about promise problems. Section 4 shows that a conjecture
raised by Even and Yacobi (1980) is false. That conjecture would, if it were
true, imply that public-key cryptosystems with NP-hard cracking problems
do not exist. In Section 5 a new conjecture is raised that has the same conse-
quence. In addition, the new conjecture implies that NP-complete sets cannot
be accepted by Turing machines that have at most one accepting
computation for each input word.
NP-hardness is a worst case notion, and, of course, one needs to know
that the cracking problem of a given public-key cryptosystem is hard for
almost all cases. If a cryptosystem for which the cracking problem is hard
does not exist when the worst case approach is taken, then certainly it does
not exist when an average or most-cases approach is used.
2. COMPLEXITY CONCEPTS FOR PROMISE PROBLEMS
Recall that if M is a deterministic Turing machine that solves a promise
problem (Q, R), then the language L(M) accepted by M is called a solution
to (Q, R). By means of this technical device every promise problem specifies
a class of languages and solutions to promise problems can be described and
manipulated set theoretically. We require notation for promise problems that
have solutions in NP. One possibility is to extend the definition of NP to
include promise problems. However, in order to keep NP sacrosanct for
decision problems (encoded as languages), the following notation is
introduced instead.
DEFINITION 1. NPP is the class of all promise problems (Q, R) such that (Q, R) has a solution in NP. Co-NPP is the class of all promise problems (Q, R) such that (Q, ~R) is in NPP.
A promise problem (Q, R) belongs to co-NPP if and only if (Q, R) has a
solution in co-NP. Also note that a recursive language L is a solution to
(Q, R) if and only if ~L is a solution to (Q, ~R). Thus, L is a solution in
NP to (Q, R) if and only if ~L is a solution in co-NP to (Q, ~R).
Every set S in NP may be considered to be a promise problem (Σ*, S) with the trivial promise Σ*, where S is a language over the finite alphabet Σ. (Occasionally we use sets instead of predicates when we name a promise problem.) If S is in NP (co-NP, NP ∩ co-NP), then (Σ*, S) is in NPP (co-NPP, NPP ∩ co-NPP, respectively). In this way, NPP is a proper extension of NP, co-NPP is a proper extension of co-NP, and NPP ∩ co-NPP is a proper extension of NP ∩ co-NP.
If a promise problem (Q, R) is in NPP ∩ co-NPP, then by definition there is a solution in NP and there is a solution in co-NP. However, there is no reason to believe that (Q, R) has a solution in NP ∩ co-NP. This simple observation suggests significant structural differences between the classes NPP ∩ co-NPP and NP ∩ co-NP, and later results will bear this out.
Let ≤Pm and ≤PT denote polynomial-time many-one and Turing reducibilities, respectively. (This notation was established in Ladner, Lynch, and Selman (1975).) In accordance with generally accepted usage, recall that a language L is NP-complete if L is in NP and every set in NP is ≤Pm-reducible to L, and recall that a language L is NP-hard if every set in NP is ≤PT-reducible to L. (Garey and Johnson (1979) contains a useful discussion and terminological history which explains why NP-complete and NP-hard are defined by different reducibilities.)
DEFINITION 2. A promise problem (Q,R) is NP-hard if it is solvable
and every solution L of (Q, R) is NP-hard.
It follows from the definition that if an NP-hard promise problem has a tractable solution (i.e., in P), then P = NP. In particular, if a public-key cryptosystem has an NP-hard cracking problem, then the system can be cracked (in the worst case) in polynomial time only if P = NP.
For an oracle Turing machine M with oracle set A, let L(M, A) denote the
language accepted by M with oracle A. According to Definition 2, (Q, R) is
NP-hard if and only if, for every set S in NP and for every solution A of
(Q, R), there is an oracle Turing machine M that operates in polynomial
time so that S = L(M, A).
DEFINITION 3. A promise problem (Q, R) is uniformly NP-hard if it is
solvable and for every set S in NP, there is an oracle Turing machine M that
operates in polynomial time such that, for all solutions A of (Q,R),
S = L(M, A).
Uniformly NP-hard obviously implies NP-hard. The converse can be
expected to be false but no proof is yet known. In any case, the main results
of the next section will hold for both notions.
The concepts thus far defined can now be used for definitions of
reductions and uniform reductions between promise problems.
DEFINITION 4. A promise problem (Q, R) is Turing reducible in polynomial time to a promise problem (S, T), in symbols (Q, R) ≤PT (S, T), if, for every solution A of (S, T), there is an oracle Turing machine M that operates in polynomial time such that M with oracle A solves (Q, R).
DEFINITION 5. A promise problem (Q, R) is uniformly Turing reducible in polynomial time to a promise problem (S, T), in symbols (Q, R) ≤PuT (S, T), if there is an oracle Turing machine M that operates in polynomial time such that, for every solution A of (S, T), M with oracle A solves (Q, R).
LEMMA 1. (i) ≤PT and ≤PuT are transitive relations.
(ii) (Q, R) ≤PuT (S, T) implies (Q, R) ≤PT (S, T).
(iii) A solvable promise problem (Q, R) is NP-hard if and only if, for every set S in NP, (Σ*, S) ≤PT (Q, R).
(iv) A solvable promise problem (Q, R) is uniformly NP-hard if and only if, for every set S in NP, (Σ*, S) ≤PuT (Q, R).
Whether ≤PT implies ≤PuT is an open question.
3. ELEMENTARY RESULTS
Since we are focusing on polynomial time complexity issues, let us assume
henceforth that, for all promise problems (Q, R) mentioned, both Q and R
are recursive predicates. Furthermore, we assume that if M is a Turing
machine that solves (Q, R) then M halts on every input. Therefore, every
solution to a promise problem is a recursive set. The solution criterion
becomes
Q(x) ⇒ (M(x) = "yes" ⇔ R(x)).
The solutions to a promise problem (Q, R) can be completely characterized set theoretically: a recursive set A is a solution to (Q, R) if and only if A = (Q ∩ R) ∪ B, where Q ∩ B = ∅ and B is recursive. In particular, every promise problem is solvable; Q ∩ R, R, and R ∪ ~Q are solutions; Q ∩ R is the smallest solution, and R ∪ ~Q is the largest solution. If (Q, R) is an NP-hard promise problem, then Q ∩ R, R, and R ∪ ~Q are NP-hard sets.
Promise problems with tractable promises can be analyzed rather completely. First of all, that R is NP-hard and Q is in P does not imply that (Q, R) is NP-hard. (Take Q to be empty or finite and observe that Q ∩ R is either empty or finite and so, assuming P ≠ NP, (Q, R) is not NP-hard.) To obtain a nontrivial example, use Ladner's result (Ladner, 1975) to obtain an NP-complete set R and a set Q in P such that Q ∩ R is not NP-hard (although Q ∩ R is in NP − P, assuming P ≠ NP). Thus, (Q, R) is not NP-hard.
THEOREM 1. If R is NP-hard, Q is in P, and (Q, R) has a solution in P, then (~Q, R) is NP-hard.
Proof. Let R be NP-hard, Q in P, and let A in P be a solution to (Q, R). Let B be an arbitrary solution to (~Q, R). To show that (~Q, R) is NP-hard, it suffices to show R ≤PT B. This is accomplished by the following algorithm with oracle set B.

input x;
if Q(x)
    then if x ∈ A    {Q(x) ⇒ (x ∈ A ⇔ R(x))}
        then accept
        else reject
    else if x ∈ B    {~Q(x) ⇒ (x ∈ B ⇔ R(x))}
        then accept
        else reject.
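A minimal Python sketch of this case split, with hypothetical toy predicates standing in for Q, the polynomial-time solution A, and the oracle B (only their on-promise and off-promise behavior matters):

```python
def decide_R(x, Q, A, B):
    """One-query reduction of R to B from the proof of Theorem 1."""
    if Q(x):
        return A(x)   # Q(x) implies (x in A  iff  R(x))
    return B(x)       # ~Q(x) implies (x in B  iff  R(x))

# Toy instance: R = "even length", Q = "starts with 'a'".
R = lambda x: len(x) % 2 == 0
Q = lambda x: x.startswith("a")
# A solves (Q, R): correct on the promise, arbitrary ("garbage") off it.
A = lambda x: R(x) if Q(x) else False
# B solves (~Q, R): correct off the promise, arbitrary on it.
B = lambda x: R(x) if not Q(x) else True
```

For every string x, `decide_R(x, Q, A, B)` agrees with R(x), even though neither A nor B alone does.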
LEMMA 2. If Q is in P and A is a solution to (Q, R), then Q ∩ R ≤PT A.
Proof. Let Q belong to P and let A be any solution to (Q, R). The reduction follows from the observation that Q ∩ R = Q ∩ A. ∎
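The observation can be checked concretely on small finite sets, using the characterization of solutions from Section 3 (the particular sets below are arbitrary illustrative choices):

```python
# A solution to (Q, R) has the form A = (Q ∩ R) ∪ B with Q ∩ B = ∅.
Q = {1, 2, 3}
R = {2, 3, 7}
B = {5, 9}              # disjoint from Q, as the characterization requires
A = (Q & R) | B         # one solution to (Q, R)

# Lemma 2's observation: Q ∩ R = Q ∩ A.
assert Q & R == Q & A
```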
As an immediate consequence we have
THEOREM 2. If Q is in P, then (Q, R) is an NP-hard promise problem if and only if Q ∩ R is an NP-hard set.
Theorem 2 can be used to generate many interesting examples of NP-hard promise problems in NPP. The technique is this: let R be any known NP-complete problem and let Q ∩ R be a refinement that is still NP-complete, where Q belongs to P. Then, (Q, R) is NP-hard. For example, let SAT be an encoding of the satisfiable formulas of propositional logic, and let 3 be an encoding of all formulas with three literals per clause. 3 ∩ SAT is the well-known NP-complete set 3SAT. Since 3 is in P, Theorem 2 applies. Hence, (3, SAT) is an NP-hard promise problem in NPP.
THEOREM 3. If Q belongs to P and (Q, R) is an NP-hard problem, then
(Q, R) is uniformly NP-hard.
Proof. Since (Q, R) is NP-hard, Q ∩ R is an NP-hard solution. Let M be an oracle Turing machine that operates in polynomial time and that, by use of Lemma 2, uniformly reduces Q ∩ R to solutions of (Q, R). Let S be any set in NP. Then, the machine which ≤PT-reduces S to Q ∩ R, followed by M, uniformly reduces S to each solution of (Q, R). ∎
For promises in NP, we have
LEMMA 3. If Q is in NP, then (Q, R) is in NPP if and only if Q ∩ R is in NP.
Proof. The proof from right to left follows directly from the definition of NPP. For the proof in the other direction, let A be a solution in NP to (Q, R). Thus, Q ∩ A is in NP, but Q ∩ A = Q ∩ R. ∎
4. ON THE CLASS NPP ∩ co-NPP
Let us consider the cracking problem for public-key cryptosystems once again, and let us note that cracking problems are in NPP ∩ co-NPP. To see that the cracking problem has a solution in NP, the procedure on input n, K1, C, and M' is to guess X and M, test whether X and M satisfy the promise and, if so, then accept if M' ≥ M. Similarly, it is easy to see that the cracking problem is in co-NPP. (In this case, accept if M' < M.)
It is well known that NP ∩ co-NP contains an NP-hard set if and only if NP is closed under complements (cf. Brassard, 1979, or Selman, 1974). Hence, the former property is unlikely to be true. It is hypothesized in Even and Yacobi (1980) that there exist no NP-hard promise problems in NPP ∩ co-NPP. By the remarks in the preceding paragraph, a consequence of this hypothesis is nonexistence of public-key cryptosystems with NP-hard cracking problems. By the following lemma it is certainly reasonable to conjecture that NPP ≠ co-NPP.
LEMMA 4. NPP = co-NPP if and only if NP = co-NP.
Proof. Since NPP and co-NPP are extensions of NP and co-NP, respectively, the proof from left to right is trivial. The converse implication follows from the observation that if A is a solution to (Q, R), then ~A is a solution to (Q, ~R). ∎
However, we now provide an example of an NP-hard promise problem in NPP ∩ co-NPP. Let ⊕ denote the logical operator "exclusive or." Let SAT denote the NP-complete satisfiability problem. We will take the liberty also of writing SAT as a predicate, so that SAT(x) asserts that x is satisfiable. Let EX denote the predicate defined by EX(x, y) ⇔ SAT(x) ⊕ SAT(y). Define SAT1 = λxλy.SAT(x), so that SAT1(x, y) ⇔ SAT(x).
THEOREM 4. (i) (EX, SAT1) ∈ NPP ∩ co-NPP.
(ii) (EX, SAT1) is NP-hard.
Proof. (i) Let x and y be input words and suppose the promise EX(x, y) is true. Then, ~SAT(x) is equivalent to the predicate SAT(y). Thus, it is evident that both (EX, SAT1) and (EX, ~SAT1) belong to NPP.
(ii) Let A be any solution to (EX, SAT1). (Technically, A is a language consisting of encoded ordered pairs; we will write (x, y) ∈ A to denote membership in A.) If SAT(x) ⊕ SAT(y), then (x, y) ∈ A ⇔ SAT(x). To show that A is NP-hard, it suffices to show that SAT ≤PT A, by the following iterative algorithm with oracle A.
Let ψ be a program variable that ranges over propositional formulas. Let φ(σ1, ..., σn) be an input formula with Boolean variables σ1, ..., σn.

ψ := φ;
for i := 1 to n do
    {ψ has free variables σi, ..., σn}
    if (ψ(0, σi+1, ..., σn), ψ(1, σi+1, ..., σn)) ∈ A
        then ψ := ψ(0, σi+1, ..., σn)
        else ψ := ψ(1, σi+1, ..., σn);
{ψ is a variable-free Boolean expression}
if ψ has value 1
    then accept {φ is satisfiable}
    else reject {φ is not satisfiable}.
The algorithm clearly operates in polynomial time. If the accept state is reached, then a satisfying assignment for φ has been found. Conversely, suppose φ is satisfiable. We claim that the loop preserves satisfiability of ψ. At each execution of the loop body, if ψ(σi, ..., σn) is satisfiable, then ψ(0, σi+1, ..., σn) is satisfiable or ψ(1, σi+1, ..., σn) is satisfiable. If exactly one of these is satisfiable, then the promise is true and the oracle query provides correct information. If both of these are satisfiable, then ψ remains satisfiable independent of the value of the oracle query. Hence, the algorithm correctly reduces SAT to the solution A of (EX, SAT1) in polynomial time. ∎
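A runnable sketch of this self-reduction, encoding CNF formulas as lists of integer-literal clauses; the brute-force `oracle_A` below is a stand-in for an arbitrary solution A of (EX, SAT1), answering SAT(x) on queried pairs:

```python
from itertools import product

def sat(clauses, n):
    """Brute-force satisfiability of a CNF over variables 1..n
    (a clause is a list of nonzero ints; negative = negated literal)."""
    return any(
        all(any(a[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
        for a in product([False, True], repeat=n)
    )

def substitute(clauses, var, value):
    """Fix variable `var` to `value`, simplifying the CNF."""
    out = []
    for cl in clauses:
        if any(abs(l) == var and (l > 0) == value for l in cl):
            continue                         # clause satisfied: drop it
        out.append([l for l in cl if abs(l) != var])
    return out

def oracle_A(pair, n):
    """Stand-in for a solution A of (EX, SAT1): on pairs satisfying the
    promise it must answer SAT(x); here we simply take A = SAT1."""
    x, _y = pair
    return sat(x, n)

def crack_sat(clauses, n):
    """Reduce SAT to the oracle, as in the proof of Theorem 4(ii)."""
    psi = clauses
    for i in range(1, n + 1):
        psi0 = substitute(psi, i, False)
        psi1 = substitute(psi, i, True)
        # Query the pair; keep the branch the oracle endorses.
        psi = psi0 if oracle_A((psi0, psi1), n) else psi1
    # psi is now variable-free: true iff no (empty) clause remains.
    return psi == []
```

As in the proof, the final variable-free check catches the case where the input formula is unsatisfiable and the oracle's off-promise answers were garbage.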
Since the algorithm just given does not depend on choice of solution, we
have the following corollary.
COROLLARY 1. (EX, SAT1) is uniformly NP-hard.
COROLLARY 2. For each promise problem (Q, R) in NPP, (Q, R) ≤PT (EX, SAT1).
Proof. Let (Q, R) ∈ NPP. Let L be a solution in NP to (Q, R). Let A be an arbitrary solution to (EX, SAT1). Let M be an oracle Turing machine that operates in polynomial time such that L = L(M, A). By definition, M with oracle A solves (Q, R). ∎
Whereas NP ∩ co-NP probably does not contain a set that is complete for NP ∩ co-NP (Sipser, 1982), we see here that NPP ∩ co-NPP does contain the complete promise problem (EX, SAT1). The results of this section indicate that complexity classes of promise problems have different structural properties than do complexity classes of ordinary decision problems.
In anticipation of the issues to be raised in the next section, it is worth noting the complexity of the promise predicate EX. Following Papadimitriou and Yannakakis (1982), define DP = {L1 ∩ L2 | L1 ∈ NP and L2 ∈ co-NP}. Trivially, NP ⊆ DP, co-NP ⊆ DP, and DP ⊆ Δ2P (where Δ2P is defined to be P^NP). It is expected that each of these inclusions is proper.
PROPOSITION 1. EX is complete for DP.
Proof. Let x and y be propositional formulas and assume without loss of generality that x and y have no propositional variables in common. Then EX(x, y) ⇔ SAT(x ∨ y) and ~SAT(x ∧ y), and so EX belongs to DP. Let SAT-UNSAT be the decision problem defined by

(x, y) ∈ SAT-UNSAT ⇔ SAT(x) and ~SAT(y).

It is shown in Papadimitriou and Yannakakis (1982) that SAT-UNSAT is complete for DP. Finally, a straightforward reduction of SAT-UNSAT to EX is given by

(x, y) ∈ SAT-UNSAT ⇔ EX(x, 0) and EX(1, y). ∎
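The final reduction can be checked by brute force on small CNF formulas (clauses as lists of integer literals; the roles of the constants 0 and 1 are played by an unsatisfiable and a trivially satisfiable formula):

```python
from itertools import product

def sat(clauses, n):
    """Brute-force SAT for a CNF over variables 1..n (empty clause = false)."""
    return any(
        all(any(a[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
        for a in product([False, True], repeat=n)
    )

def ex(x, y, n):
    """The promise predicate EX(x, y) = SAT(x) xor SAT(y)."""
    return sat(x, n) != sat(y, n)

FALSE = [[]]   # the empty clause: plays the unsatisfiable constant 0
TRUE = []      # the empty CNF: plays the satisfiable constant 1

def sat_unsat(x, y, n):
    """Membership in SAT-UNSAT via the reduction of Proposition 1:
    (x, y) in SAT-UNSAT  iff  EX(x, 0) and EX(1, y)."""
    return ex(x, FALSE, n) and ex(TRUE, y, n)
```

Since SAT(0) is false, EX(x, 0) ⇔ SAT(x), and since SAT(1) is true, EX(1, y) ⇔ ~SAT(y), which is exactly the SAT-UNSAT condition.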
5. A NEW CONJECTURE
We have just seen that there does exist an NP-hard promise problem (Q, R) in NPP ∩ co-NPP, but with Q ∈ DP and probably not in NP. We conjecture that promise problems with the first two properties cannot have Q in NP. More precisely,
Conjecture. There exists no promise problem (Q, R) such that
(i) (Q, R) ∈ NPP ∩ co-NPP,
(ii) (Q, R) is NP-hard, and
(iii) Q is in NP.
PROPOSITION 2. The conjecture implies that public-key cryptosystems
with NP-hard cracking problems do not exist.
To see the correctness of this proposition, simply observe (cf. the
Introduction) that the promise predicate for cracking problems is in NP.
This conjecture has even larger interest. We consider the set U of problems in NP that have unique solutions (Valiant, 1976). That is, U is the set of all languages L for which there is a nondeterministic Turing machine M that witnesses L ∈ NP such that, for every input x to M, M has at most one accepting computation. Whether U = NP is a well-known open problem. (Aspects of this problem are discussed in Book, Long and Selman (1982), Geske and Grollmann (1983), and Rackoff (1982).)
THEOREM 5. The conjecture implies NP ≠ U.
Proof. We will prove the contrapositive. Assume NP = U and let L be any NP-complete set contained in U. A promise problem (Q, R) is to be constructed that satisfies the conditions of the conjecture. Let M be a nondeterministic Turing machine that is a witness to L ∈ U and that operates in polynomial time p. Assume without loss of generality that M can make at most two distinct next moves in any configuration. Then, every computation of M on an input word x corresponds in a natural way to a binary choice sequence y of length at most p(|x|). Let ≤ be any standard polynomial-time computable ordering of the binary strings. Define the predicates Q and R by

Q(x, z) ⇔ x ∈ L

and

R(x, z) ⇔ x ∈ L and z ≤ y,

where y is the unique binary choice sequence that causes M to accept x.
By definition, Q ∈ NP. It is easy to see that (Q, R) ∈ NPP. Namely, a solution to (Q, R) that belongs to NP is given by the following nondeterministic procedure: given x and z, guess a choice sequence y whose length is within the bound p(|x|) and check to determine whether y causes M to accept x. If so, and if z ≤ y, then accept. Otherwise, do not accept.
Similarly, it is easy to see that (Q, ~R) belongs to NPP. Therefore, (Q, R) belongs to NPP ∩ co-NPP.
We now use the premise that L is NP-complete in order to show that (Q, R) is NP-hard. Let A be any solution of (Q, R). To show that A is NP-hard, it suffices to show that L ≤PT A, and this is accomplished by the following algorithm with oracle A:

(1) input x;
(2) apply a binary search to the set of strings of length ≤ p(|x|) to try to find the largest string z such that (x, z) ∈ A;
(3) if step (2) does not output a string z
(4)     then reject
(5)     else if z is an accepting choice sequence of M for x
(6)         then accept
(7)         else reject.
Since step (2) can be performed in time O(log(2^p(|x|))) = O(p(|x|)), the algorithm runs in polynomial time. We argue that the algorithm correctly reduces L to A. Suppose x ∈ L. Then, the promise Q(x, z) is satisfied. Hence, (x, z) ∈ A ⇔ R(x, z). The predicate R enjoys the property

R(x, z1) ∧ (z2 ≤ z1) → R(x, z2).

Therefore, step (2) does find a largest string z such that (x, z) ∈ A. By definition of R, this string z must be the unique accepting choice sequence of M on input x. Hence, the algorithm accepts x.
Conversely, suppose the algorithm accepts input x. Then step (5) is reached. Hence, x ∈ L(M). ∎
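Step (2) can be sketched as an ordinary binary search over the ordering ≤, using the downward-closure property of R just noted; the oracle below is a hypothetical stand-in for membership queries to A, with strings identified with their indices in the ordering:

```python
def largest_accepted(oracle, bound):
    """Find the largest z in [0, bound] with oracle(z) true, assuming
    downward closure: oracle(z1) and z2 <= z1 imply oracle(z2).
    Returns None if even z = 0 is rejected."""
    if not oracle(0):
        return None
    lo, hi = 0, bound
    while lo < hi:
        mid = (lo + hi + 1) // 2     # bias upward so the search terminates
        if oracle(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

When x ∉ L, the promise fails and the oracle's answers may be arbitrary, which is why steps (3)-(7) of the algorithm re-check the string that the search returns.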
It may be worth noting that U is closed under ≤Pm-reductions. Therefore, NP = U if and only if U contains an NP-complete set, and so we have
COROLLARY 3. If the conjecture is true, then no NP-complete set can be
accepted by any Turing machine that has at most one accepting computation
for each input word.
A variant of the conjecture is indeed true, assuming NP ≠ co-NP. This is
shown in the next and final theorem. Unfortunately, the variant does not
seem to have the same nice consequences for public-key cryptosystems.
THEOREM 6. NP = co-NP if and only if there exists a promise problem (Q, R) such that
(i) (Q, R) ∈ NPP ∩ co-NPP,
(ii) (Q, R) is uniformly NP-hard, and
(iii) Q is in co-NP.
The proof will require some facts about oracle Turing machines. First, an
oracle Turing machine is a multitape Turing machine with a distinguished
work tape, the query tape, and three distinguished states QUERY, YES, and
NO. At some step of a computation on an input string x, M may transfer
into the state QUERY. In state QUERY, M transfers into the state YES if
the string currently appearing on the query tape is in some oracle set A;
otherwise, M transfers into the state NO; in either case the tape is instantly
erased.
13. COMPLEXITY OF PROMISE PROBLEMS 171
Every oracle Turing machine M1 is equivalent to an oracle Turing machine M2 such that for every input string x and every query string w, M2 on input x never enters state QUERY with w written on its query tape more than once. Moreover, if M1 operates in polynomial time, then so does M2.
Now we will describe M2. M2 will simulate M1, but as it does so it will build two finite "tables" TY and TN. At all times, TY ∩ TN = ∅. TY will contain each query string that receives a YES answer from its oracle and TN will contain each query string that receives a NO answer from its oracle. One of the work tapes of M2 is used to store (an appropriate encoding of) TY, another of the work tapes of M2 is used to store (an appropriate encoding of) TN, and both these tapes are initially empty. On input x, M2 begins a simulation of M1. Whenever this simulated computation is to enter a query configuration with a word w written on the query tape, M2 first determines whether the string w belongs to TY ∪ TN already. If w ∈ TY ∪ TN, then M2 erases its query tape and continues its simulation of M1 in state YES if w ∈ TY and in state NO if w ∈ TN. (In particular, M2 does not enter its query state.) If w ∉ TY ∪ TN, then M2 enters state QUERY and writes the string w onto TY if the transfer state is YES, and onto TN otherwise. Clearly, M2 is equivalent to M1 and M2 operates in polynomial time if M1 does. Given an input word x and oracle set A, M2 on input x never queries A more than once about any string w, and TY and TN are maintained to ensure consistency of oracle responses.
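The construction of M2 is, in effect, a memoizing wrapper around the oracle; a Python sketch of the same idea:

```python
def memoized_oracle(oracle):
    """Wrap an oracle so each query string is asked at most once,
    mirroring the tables TY and TN maintained by M2."""
    t_yes, t_no = set(), set()     # TY and TN, initially empty

    def query(w):
        if w in t_yes:             # previously answered YES: reuse it
            return True
        if w in t_no:              # previously answered NO: reuse it
            return False
        answer = oracle(w)         # the single genuine query for w
        (t_yes if answer else t_no).add(w)
        return answer

    return query
```

Repeated queries about the same w are served from the tables, so the underlying oracle sees each string at most once, and the recorded answers stay mutually consistent.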
Now the proof of Theorem 6 is given.
Proof. Suppose NP = co-NP and let L be any NP-complete set. Then, (Σ*, L) satisfies all the conditions of the theorem.
To prove the converse, let L belong to NP and let M be a deterministic oracle Turing machine that uniformly reduces L to solutions of (Q, R). Then, M with its accepting and rejecting states reversed (call this machine M') reduces ~L to solutions of (Q, R). In accordance with the argument just given, M' on an input x builds tables TY and TN and so does not query its oracle about any word w more than once. Let Mi, i = 1, 2, be NP-acceptors that solve (Q, R) and (Q, ~R), respectively, and let M3 be an NP-acceptor for ~Q. We now describe an NP-acceptor N for ~L. On input x, N begins a simulation of M' on x but replaces each query w to the oracle by simulations of w on the NP-acceptors Mi, i = 1, 2, 3, according to the following nondeterministic algorithm. Machine N builds tables TY and TN also, and initially TY = TN = ∅.
if w ∈ TY
    then continue simulation of M' in the YES state;
if w ∈ TN
    then continue simulation of M' in the NO state;
if w ∉ TY ∪ TN
    then    if M1 accepts w →
                begin
                TY := TY ∪ {w};
                continue simulation of M' in the YES state
                end
            ☐ (M2 accepts w) or (M3 accepts w) →
                begin
                TN := TN ∪ {w};
                continue simulation of M' in the NO state
                end.
If the algorithm is executed on a word w and the guard that is chosen does not evaluate to true, then the simulation by N of M' on input x is to terminate without accepting. For every input word x to N, the simulation by N of M' on x can be completed, i.e., there is a computation of N on x that reaches one of the accepting or rejecting states of M'. To see this, observe that w ∈ Q implies that M1 accepts w if and only if w ∈ R and that M2 accepts w if and only if w ∈ ~R, and w ∈ ~Q implies that M3 accepts w. Thus, for each word w that is not already in TY ∪ TN, at least one of the guards can evaluate to true.
It is obvious that the language accepted by N belongs to NP. We need to see that ~L is this language. First we will show that if x is accepted by N, then x ∈ ~L. Consider a fixed accepting computation of N on x and consider the final values of the tables TY and TN that N constructs. Since N simulates M', M' must accept x with any oracle A such that TY ⊆ A and TN ⊆ ~A. Since M' uniformly reduces ~L to solutions of (Q, R), if a solution A of (Q, R) can be found such that TY ⊆ A and TN ⊆ ~A, then x ∈ ~L follows from the previous statement. We claim that the set A = L(M1) − TN is a solution of (Q, R), that TY ⊆ A, and that TN ⊆ ~A. If w ∈ Q, then w ∈ A if and only if w ∈ R, because L(M1) is a solution of (Q, R), and so A is a solution of (Q, R) also. Obviously, TN ⊆ ~A. TY ⊆ A because the algorithm places a word w into TY only if M1 accepts w and because TY ∩ TN = ∅.
This completes the proof in one direction.
Now let x ∈ ~L and let us see that N has an accepting computation on input x. Consider the final values of TY and TN that are constructed by M' when M' is executed on input x with L(M1) as the oracle. Recall that this computation accepts x, since L(M1) is a solution of (Q, R). We claim there is a computation of N on input x for which the final values of its tables are also TY and TN. It follows immediately that this computation accepts x. To establish our claim, first note that TY ⊆ L(M1) and TN ⊆ ~L(M1). If w ∈ TY, it follows that the guard "M1 accepts w" is true and therefore this computation of the algorithm places w into its table of YES responses. If w ∈ TN, then w ∉ L(M1) and so either w ∈ Q ∩ ~R or w ∈ ~Q. Thus, the guard "(M2 accepts w) or (M3 accepts w)" is true. This computation of the algorithm places w into its table of NO responses. This completes the proof of the claim. Hence, if x ∈ ~L, then N accepts x.
We proved that ~L is the language accepted by N; therefore, ~L ∈ NP. Since L is an arbitrary language in NP, NP = co-NP follows. ∎
RECEIVED: June 13, 1983; ACCEPTED: January 26, 1984
REFERENCES
BOOK, R., LONG, T., AND SELMAN, A. (1982), Qualitative controlled relativizations of complexity classes, manuscript.
BRASSARD, G. (1979), A note on the complexity of cryptography, IEEE Trans. Inform. Theory IT-25, No. 2, 232-233.
BRASSARD, G. (1983), Relativized cryptography, IEEE Trans. Inform. Theory IT-29, No. 6, 877-894.
EVEN, S., AND YACOBI, Y. (1980), Cryptography and NP-completeness, in "Proc. 7th Colloq. Automata, Lang. Programming," Lecture Notes in Computer Science Vol. 85, pp. 195-207, Springer-Verlag, Berlin/New York.
GAREY, M., AND JOHNSON, D. (1979), "Computers and Intractability: A Guide to the Theory of NP-Completeness," Freeman, San Francisco.
GESKE, J., AND GROLLMANN, J. (1983), Relativizations of unambiguous and random polynomial time classes, manuscript.
LADNER, R. (1975), On the structure of polynomial time reducibility, J. Assoc. Comput. Mach. 22, 155-171.
LADNER, R., LYNCH, N., AND SELMAN, A. (1975), A comparison of polynomial time reducibilities, Theoret. Comput. Sci. 1, 103-123.
PAPADIMITRIOU, C., AND YANNAKAKIS, M. (1982), The complexity of facets (and some facets of complexity), in "Proc. 14th Ann. ACM Sympos. on Theory of Computing," pp. 255-260.
RACKOFF, C. (1982), Relativized questions involving probabilistic algorithms, J. Assoc. Comput. Mach. 29, 261-268.
SELMAN, A. (1974), On the structure of NP, Notices Amer. Math. Soc. 21, No. 6, 310.
SIPSER, M. (1982), On relativization and the existence of complete sets, in "Proc. 9th Colloq. Automata, Lang. Programming," Lecture Notes in Computer Science Vol. 140, pp. 523-531, Springer-Verlag.
ULLIAN, J. (1967), Partial algorithm problems for context-free languages, Inform. Contr. 11, 80-101.
VALIANT, L. (1976), Relative complexity of checking and evaluating, Inform. Process. Lett. 5, 20-23.