There are three main limitations of algorithms:
1) Some problems cannot be solved by any algorithm.
2) Some problems can be solved algorithmically but not in polynomial time.
3) Some problems can be solved in polynomial time but existing algorithms are not efficient.
Lower bounds help establish the minimum efficiency of algorithms. There are four main methods to determine lower bounds: trivial counting arguments, information theory, adversary arguments, and problem reduction. Decision trees can also help analyze comparison-based algorithms and establish lower bounds.
DAA Chapter 6
Limitations of Algorithms
2.2.1. Limitations of Algorithms
There are many algorithms for solving a variety of different problems. They are very powerful instruments, especially when executed by modern computers. The power of algorithms is nevertheless limited, for the following reasons:
- Some problems cannot be solved by any algorithm.
- Some problems can be solved algorithmically, but not in polynomial time.
- Some problems can be solved in polynomial time by some algorithms, but there are lower bounds on their efficiency.
The limits of algorithms are identified by the following:
- Lower-Bound Arguments
- Decision Trees
- P, NP, and NP-Complete Problems
2.2.2. Lower-Bound Arguments
We can look at the efficiency of an algorithm in two ways. We can establish its asymptotic efficiency class (say, for the worst case) and see where this class stands with respect to the hierarchy of efficiency classes. For example, selection sort, whose efficiency is quadratic, is a reasonably fast algorithm, whereas the algorithm for the Tower of Hanoi problem is very slow because its efficiency is exponential.
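The gap between these two classes can be made concrete. A minimal Python sketch (our own illustration, not part of the original slides) generates the moves for the Tower of Hanoi and confirms the exponential 2^n - 1 move count:

```python
def hanoi_moves(n, src="A", dst="C", aux="B"):
    """Return the list of moves needed to transfer n disks from src to dst."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then stack them back on top.
    return (hanoi_moves(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, dst, src))

# The number of moves grows exponentially: exactly 2^n - 1.
for n in range(1, 6):
    assert len(hanoi_moves(n)) == 2**n - 1
```

Selection sort, by contrast, performs on the order of n^2 comparisons, which a modern computer handles easily for any realistic n.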
A lower bound is an estimate of the minimum amount of work needed to solve a problem. We present several methods for establishing lower bounds and illustrate them with specific examples:
- Trivial Lower Bounds
- Information-Theoretic Arguments
- Adversary Arguments
- Problem Reduction
In analyzing the efficiency of specific algorithms, as in the preceding, we should distinguish between a lower-bound class and the minimum number of times a particular operation needs to be executed.
Trivial Lower Bounds
The simplest method of obtaining a lower-bound class is based on counting the number of items in the problem's input that must be processed and the number of output items that need to be produced. Since any algorithm must at least "read" all the items it needs to process and "write" all its outputs, such a count yields a trivial lower bound.
For example, any algorithm for generating all permutations of n distinct items must be in Ω(n!) because the size of the output is n!. This bound is tight because good algorithms for generating permutations spend constant time on each permutation except the initial one.
Consider the problem of evaluating a polynomial of degree n at a given point x, given its coefficients a_n, a_{n-1}, ..., a_0: p(x) = a_n x^n + a_{n-1} x^(n-1) + ... + a_0. All the coefficients have to be processed by any polynomial-evaluation algorithm, i.e., the trivial lower bound is Ω(n). This lower bound is tight.
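The Ω(n) bound is met by Horner's rule, which touches each coefficient exactly once; a short sketch (our own illustration, the slides do not name Horner's rule):

```python
def horner(coeffs, x):
    """Evaluate p(x) given coefficients [a_n, a_{n-1}, ..., a_0], highest degree first."""
    result = 0
    for a in coeffs:          # each of the n+1 coefficients is processed exactly once
        result = result * x + a
    return result

# p(x) = 2x^2 + 3x + 1 at x = 4: 2*16 + 3*4 + 1 = 45
print(horner([2, 3, 1], 4))   # 45
```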
Similarly, a trivial lower bound for computing the product of two n × n matrices is Ω(n^2), because any such algorithm has to process 2n^2 elements in the input matrices and generate n^2 elements of the product. It is still unknown, however, whether this bound is tight.
The trivial bound for the traveling salesman problem is Ω(n^2), because its input consists of n(n-1)/2 intercity distances and its output is a list of n + 1 cities making up an optimal tour. But this bound is useless because no known algorithm for the problem has polynomial running time.
Determining the lower bound lies in deciding which part of an input must be processed by any algorithm solving the problem. For example, searching for an element of a given value in a sorted array does not require processing all its elements.
Information-Theoretic Arguments
The information-theoretic approach seeks to establish a lower bound based on the amount of information an algorithm has to produce.
Consider the well-known game of guessing a number: deducing a positive integer between 1 and n selected by somebody by asking that person questions with yes/no answers. The amount of uncertainty that any algorithm solving this problem has to resolve can be measured by ⌈log2 n⌉, the number of bits needed to specify a particular number among the n possibilities. Each answer to a question gives information about one bit.
Is the first bit zero? → No → the first bit is 1
Is the second bit zero? → Yes → the second bit is 0
Is the third bit zero? → Yes → the third bit is 0
Is the fourth bit zero? → Yes → the fourth bit is 0
The number in binary is 1000, i.e., 8 in decimal.
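The bit-by-bit questioning above can be sketched in code; `guess_number` and `oracle` are illustrative names of our own:

```python
def guess_number(answer_is_yes, num_bits):
    """Reconstruct a number by asking 'is bit i zero?' once per bit."""
    value = 0
    for i in reversed(range(num_bits)):        # most significant bit first
        if not answer_is_yes(i):               # answer "no" means the bit is 1
            value |= 1 << i
    return value

secret = 8                                     # 1000 in binary
oracle = lambda i: (secret >> i) & 1 == 0      # truthful yes/no answerer
print(guess_number(oracle, 4))                 # 8, found with exactly 4 questions
```

For any n ≤ 2^k, k = ⌈log2 n⌉ questions suffice, matching the information-theoretic bound.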
This approach is called the information-theoretic argument because of its connection to information theory. It is useful for finding information-theoretic lower bounds for many problems involving comparisons, including sorting and searching. Its underlying idea can be realized through the mechanism of decision trees.
Adversary Arguments
The adversary argument is a method of proving a lower bound by playing the role of an adversary (opponent) that adjusts the input consistently so that any algorithm is forced to do more work.
Consider the game of guessing a number between the positive integers 1 and n by asking a person (the adversary) questions with yes/no answers. After each question, the adversary keeps at least one half of the remaining numbers in play. If an algorithm stops before the size of the set is reduced to 1, the adversary can exhibit a number that is consistent with all the answers given yet different from the algorithm's guess.
Any algorithm needs ⌈log2 n⌉ iterations to shrink an n-element set to a one-element set by halving and rounding up the size of the remaining set. Hence, at least ⌈log2 n⌉ questions need to be asked by any algorithm in the worst case. This example illustrates the adversary method for establishing lower bounds.
Consider the problem of merging two sorted lists of size n, a1 < a2 < ... < an and b1 < b2 < ... < bn, into a single sorted list of size 2n. For simplicity, we assume that all the a's and b's are distinct, which gives the problem a unique solution. Merging is done by repeatedly comparing the first elements of the remaining lists and outputting the smaller of them. The lower bound on the number of key comparisons in the worst case for this merging algorithm is 2n - 1.
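The merging procedure can be sketched with an explicit comparison counter (our own code); the interleaved input below realizes the 2n - 1 worst case:

```python
def merge_count(a, b):
    """Merge two sorted lists; return (merged_list, number_of_key_comparisons)."""
    merged, comparisons = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        comparisons += 1                  # one comparison per output element,
        if a[i] <= b[j]:                  # until one list is exhausted
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged += a[i:] + b[j:]               # the leftover tail needs no comparisons
    return merged, comparisons

# Worst case: the lists interleave perfectly, giving 2n - 1 = 5 comparisons for n = 3.
print(merge_count([1, 3, 5], [2, 4, 6]))  # ([1, 2, 3, 4, 5, 6], 5)
```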
Problem Reduction
Problem reduction is a method in which a problem P that we do not know how to solve is reduced to another problem B that can be solved by a known algorithm.
A similar reduction idea can be used for finding a lower bound. To show that problem P is at least as hard as another problem Q with a known lower bound, we need to reduce Q to P (not P to Q!). In other words, we should show that an arbitrary instance of problem Q can be transformed into an instance of problem P, so any algorithm solving P would solve Q as well. Then a lower bound for Q will be a lower bound for P. The table below lists several important problems that are often used for this purpose.
Problem                              Lower bound    Tightness
sorting                              Ω(n log n)     yes
searching in a sorted array          Ω(log n)       yes
element uniqueness problem           Ω(n log n)     yes
multiplication of n-digit integers   Ω(n)           unknown
multiplication of n × n matrices     Ω(n^2)         unknown
Consider the Euclidean minimum spanning tree problem as an example of establishing a lower bound by reduction: given n points in the Cartesian plane, construct a tree of minimum total length whose vertices are the given points. As a problem with a known lower bound, we use the element uniqueness problem.
We can transform any set x1, x2, ..., xn of n real numbers into a set of n points in the Cartesian plane by simply adding 0 as the points' y coordinate: (x1, 0), (x2, 0), ..., (xn, 0). Let T be a minimum spanning tree found for this set of points. Since T must contain a shortest edge, checking whether T contains a zero-length edge will answer the question about the uniqueness of the given numbers. This reduction implies that Ω(n log n) is a lower bound for the Euclidean minimum spanning tree problem.
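The reduction can be made concrete in code. The sketch below (our own; it uses a simple O(n^2) Prim's algorithm, which the slides do not specify) embeds the numbers on the x-axis, builds a Euclidean MST, and decides uniqueness by checking for a zero-length edge:

```python
import math

def mst_edge_lengths(points):
    """Prim's algorithm, O(n^2): edge lengths of a Euclidean minimum spanning tree."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    in_tree = [False] * n
    best = [math.inf] * n          # cheapest known connection of each vertex to the tree
    best[0] = 0.0
    lengths = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if u != 0:
            lengths.append(best[u])            # the edge that attached u to the tree
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return lengths

def all_distinct(xs):
    """Element uniqueness via the reduction: the MST has a zero-length edge
    iff two of the input numbers coincide."""
    points = [(x, 0.0) for x in xs]            # embed the numbers on the x-axis
    return all(length > 0 for length in mst_edge_lengths(points))

print(all_distinct([3.0, 1.0, 4.0, 1.5]))      # True
print(all_distinct([3.0, 1.0, 3.0]))           # False
```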
Note: The limitations of algorithms can be studied by obtaining lower bounds on efficiency.
2.2.3. Decision Trees
Important algorithms like sorting and searching are based on comparing items of their inputs. A model for studying the performance of such algorithms is the decision tree. As an example, the figure below presents a decision tree of an algorithm for finding the minimum of three numbers. Each internal node of a binary decision tree represents a key comparison indicated in the node.
Fig: Decision tree for finding the minimum of three numbers.
Consider a binary decision tree with height h and n leaves; then h ≥ ⌈log2 n⌉. A binary tree of height h has at most 2^h leaves; in other words, 2^h ≥ n, which puts a lower bound on the heights of binary decision trees. Hence the worst-case number of comparisons made by any comparison-based algorithm for the problem is bounded below by this quantity, called the information-theoretic lower bound.
Fig: Decision tree for the three-element selection sort.
A triple above a node indicates the state of the array being sorted. Note the two redundant comparisons b < a, each with a single possible outcome because of the results of previously made comparisons.
Fig: Decision tree for the three-element insertion sort.
For the three-element insertion sort, whose decision tree is given in the figure above, the average path length is (2 + 3 + 3 + 2 + 3 + 3)/6 ≈ 2.66. Under the standard assumption that all n! outcomes of sorting are equally likely, the following lower bound on the average number of comparisons Cavg made by any comparison-based algorithm in sorting an n-element list has been proved:
Cavg(n) ≥ log2 n!
A decision tree is a convenient model of algorithms involving comparisons, in which:
- internal nodes represent comparisons
- leaves represent outcomes (or input cases)
Decision Trees and Sorting Algorithms
- Any comparison-based sorting algorithm can be represented by a decision tree (for each fixed n).
- Number of leaves (outcomes) ≥ n!
- Height of a binary tree with n! leaves ≥ ⌈log2 n!⌉
- The minimum number of comparisons in the worst case is ≥ ⌈log2 n!⌉ for any comparison-based sorting algorithm, since the longest path represents the worst case and its length is the height.
- ⌈log2 n!⌉ ≈ n log2 n (by Stirling's approximation)
- This lower bound is tight (mergesort, heapsort)
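The bound and its tightness can be checked numerically (our own experiment): compare ⌈log2 n!⌉ with mergesort's worst-case comparison count, which satisfies the recurrence C(n) = C(⌈n/2⌉) + C(⌊n/2⌋) + n - 1:

```python
import math

def log2_factorial_ceil(n):
    """Information-theoretic lower bound: ceil(log2 n!)."""
    return math.ceil(sum(math.log2(k) for k in range(2, n + 1)))

def mergesort_worst(n):
    """Worst-case comparisons of mergesort: C(n) = C(ceil(n/2)) + C(floor(n/2)) + n - 1."""
    if n <= 1:
        return 0
    return mergesort_worst((n + 1) // 2) + mergesort_worst(n // 2) + n - 1

# The two counts stay close, e.g. n = 8: bound 16, mergesort 17.
for n in (4, 8, 16):
    print(n, log2_factorial_ceil(n), mergesort_worst(n))
```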
Decision Trees for Searching a Sorted Array
Decision trees can be used for establishing lower bounds on the number of key comparisons in searching a sorted array of n keys: A[0] < A[1] < ... < A[n-1]. The principal algorithm for this problem is binary search. The number of three-way comparisons made by binary search in the worst case, Cworst(n), is given by the formula
Cworst(n) = ⌊log2 n⌋ + 1 = ⌈log2(n + 1)⌉
Fig: Ternary decision tree for binary search in a four-element array.
Fig: Binary decision tree for binary search in a four-element array.
As a comparison of the two decision trees above illustrates, the binary decision tree is simply the ternary decision tree with all the middle subtrees eliminated. Applying the height inequality to such binary decision trees immediately yields
Cworst(n) ≥ ⌈log2(n + 1)⌉
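Binary search attains this bound. A sketch with an explicit counter for the three-way comparisons (our own code):

```python
def binary_search(a, key):
    """Return (index_or_None, number_of_three_way_comparisons) for a sorted list a."""
    lo, hi = 0, len(a) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1                 # one three-way comparison with a[mid]
        if key == a[mid]:
            return mid, comparisons
        elif key < a[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return None, comparisons

# Searching a 4-element sorted array: at most floor(log2 4) + 1 = 3 comparisons.
print(binary_search([10, 20, 30, 40], 35))   # (None, 3)
```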
2.2.4. P-Class
The class P consists of those problems that are solvable in polynomial time, that is, in time O(n^k) for some constant k, where n is the input size.
- The worst-case time complexity is bounded by a polynomial function of the input size.
- P is the class of problems whose solutions can be discovered "quickly", i.e., in time polynomial in the size of the input.
Example
- Sorting: T(n) = O(n^2), T(n) = O(n log n)
- Searching: T(n) = O(n), T(n) = O(log n)
- Matrix multiplication: T(n) = O(n^3), T(n) = O(n^2.81) (Strassen)
- P problems are the set of all decision problems that can be solved by algorithms with polynomial time requirements.
- More specifically, they are problems that can be solved in time O(n^k) for some constant k, where n is the size of the input to the problem.
- The key is that n is the size of the input.
Intractable and tractable
- A problem is said to be intractable if it cannot be solved by a polynomial-time algorithm.
- Conversely, a problem is said to be tractable if a polynomial-time algorithm can solve it.
2.2.5. NP-class
The class NP consists of those problems that are verifiable in polynomial time: given a certificate of a solution, we can verify that the certificate is correct in time polynomial in the size of the input to the problem.
- NP stands for Nondeterministic Polynomial time.
- NP is the class of problems whose solutions can be verified "quickly", i.e., in time polynomial in the size of the input.
Example
- TSP
- integer knapsack problem
- graph coloring
- Hamiltonian circuit
- partition problem
- integer linear programming
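Verification in polynomial time is the defining feature. As an illustration (our own, not from the slides), a verifier for a Hamiltonian-circuit certificate:

```python
def verify_hamiltonian_circuit(n, edges, certificate):
    """Check in polynomial time that `certificate` is a Hamiltonian circuit
    of the graph with vertices 0..n-1 and the given undirected edges."""
    edge_set = {frozenset(e) for e in edges}
    if sorted(certificate) != list(range(n)):      # visits every vertex exactly once
        return False
    cycle = certificate + [certificate[0]]         # close the tour
    return all(frozenset((u, v)) in edge_set for u, v in zip(cycle, cycle[1:]))

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_hamiltonian_circuit(4, square, [0, 1, 2, 3]))   # True
print(verify_hamiltonian_circuit(4, square, [0, 2, 1, 3]))   # False
```

Checking the certificate takes O(n + |E|) time, even though no polynomial-time algorithm is known for finding such a circuit.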
In discussing the theory of NP, we limit ourselves to decision problems. A decision problem is a problem whose solution is only a "yes" or "no" answer.
Example:
- Given an integer x, determine whether x is in the table.
- Given an integer x, determine whether x is a prime number.
Fig: An example tour graph (vertices include School, Movies, Planter's Farm) with edge weights a, b, c, d, e.
TSOP: find a tour with a + b + c + d + e = minimum weight.
TSDP: decide whether a tour exists with total weight a + b + c + d + e ≤ x.
The Traveling Salesman Optimization Problem (TSOP) is a matter of determining a tour with minimum total weight. (This is the problem commonly known as TSP.) The Traveling Salesman Decision Problem (TSDP) is a matter of determining whether there is a tour with the total weight of the edges not exceeding a value x.
Example: if the minimum tour in the Traveling Salesman Optimization Problem (TSOP) is 20, then the answer to the Traveling Salesman Decision Problem (TSDP) is "yes" if x >= 20, and "no" if x < 20.
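The two versions can be illustrated by brute force on a small weight matrix (our own exponential-time sketch, for illustration only):

```python
from itertools import permutations

def tsp_optimum(w):
    """TSOP by brute force: minimum total weight over all tours starting at city 0."""
    n = len(w)
    return min(
        sum(w[tour[i]][tour[(i + 1) % n]] for i in range(n))
        for tour in ((0,) + p for p in permutations(range(1, n)))
    )

def tsp_decision(w, x):
    """TSDP: is there a tour with total weight not exceeding x?"""
    return tsp_optimum(w) <= x

w = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
opt = tsp_optimum(w)             # 23 for this matrix
print(tsp_decision(w, opt))      # True
print(tsp_decision(w, opt - 1))  # False
```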
2.2.6. NP-Complete Problems
NP-hard problems
- The class of decision problems which are at least as hard as the hardest problems in NP.
- Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable.
Is P = NP?
Any problem in P is also in NP: P ⊆ NP.
The big (and open) question is whether NP ⊆ P, i.e., whether P = NP: if it is always easy to check a solution, should it also be easy to find a solution?
Most computer scientists believe that this is false, but we do not have a proof.
NP-Completeness (informally)
NP-complete problems are defined as the hardest problems in NP. Most practical problems turn out to be either in P or NP-complete, which is why NP-complete problems are worth studying.
Reductions
Reduction is a way of saying that one problem is "easier" than another. We say that problem A is easier than problem B (written A ≤ B) if we can solve A using the algorithm that solves B.
Idea: transform the inputs of A into inputs of B.
Fig: An input of problem A is transformed by f into an input of problem B; B's yes/no answer is returned as the answer for A.
Polynomial Reductions
- Given two problems A and B, we say that A is polynomially reducible to B (A ≤p B) if:
  - there exists a function f that converts inputs of A to inputs of B in polynomial time, and
  - A(i) = YES ⟺ B(f(i)) = YES.
NP-Completeness (formally)
A problem B is NP-complete if:
(1) B ∈ NP
(2) A ≤p B for all A ∈ NP
- If B satisfies only property (2), we say that B is NP-hard.
- No polynomial-time algorithm has been discovered for any NP-complete problem.
- No one has ever proven that no polynomial-time algorithm can exist for any NP-complete problem.
Fig: Deciding problem A by transforming its input with f and running a decider for problem B; B's yes/no answer is the answer for A.
If A ≤p B and B ∈ P, then A ∈ P.
If A ≤p B and A ∉ P, then B ∉ P.
Proving Polynomial Time
Fig: A polynomial-time reduction f followed by a polynomial-time algorithm to decide B yields a polynomial-time algorithm to decide A.
1. Use a polynomial-time reduction algorithm to transform an instance of A into an instance of B.
2. Run a known polynomial-time algorithm for B.
3. Use the answer for B as the answer for A.
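These three steps can be shown on a classic concrete reduction (our own choice of example): deciding INDEPENDENT-SET (problem A) via VERTEX-COVER (problem B), using the fact that a set S is independent iff V \ S is a vertex cover:

```python
from itertools import combinations

def has_vertex_cover(n, edges, k):
    """Decider for B: does the graph have a vertex cover of size <= k?
    (Brute force; it stands in for 'a known algorithm for B'.)"""
    k = min(k, n)
    return any(
        all(u in cover or v in cover for u, v in edges)
        for cover in combinations(range(n), k)
    )

def has_independent_set(n, edges, k):
    """Decider for A via reduction: step 1, transform the instance (k -> n - k);
    step 2, run the decider for B; step 3, return its answer unchanged."""
    return has_vertex_cover(n, edges, n - k)

triangle = [(0, 1), (1, 2), (0, 2)]
print(has_independent_set(3, triangle, 1))   # True: any single vertex is independent
print(has_independent_set(3, triangle, 2))   # False: every pair is adjacent
```

The transformation itself takes constant time here, so if the decider for B ran in polynomial time, so would the derived decider for A.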