Lower Bounds on Kernelization
Venkatesh Raman
Institute of Mathematical Sciences, Chennai

March 6, 2014

Some known kernelization results

Linear: MaxSat – $2k$ clauses, $k$ variables.
Quadratic: k-Vertex Cover – $2k$ vertices but $O(k^2)$ edges.
Cubic: k-Dominating Set in graphs without $C_4$ – $O(k^3)$ vertices.
Exponential: k-Path – $2^{O(k)}$.
No kernel: k-Dominating Set is W-hard, so it is not expected to have a kernel of any size.

In this lecture, we will see some techniques to rule out polynomial kernels.

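Not on the slides: as a concrete warm-up, here is a minimal sketch of Buss's classical kernelization for k-Vertex Cover. It yields the weaker bound of $O(k^2)$ vertices and $O(k^2)$ edges, not the $2k$-vertex kernel quoted above (which needs the Nemhauser-Trotter LP argument); all names are ours.

```python
def buss_kernel(edges, k):
    """A sketch of Buss's kernelization for k-Vertex Cover.

    Rule 1: a vertex of degree > k must be in every cover of size <= k,
            so take it into the cover and decrement k.
    Rule 2: afterwards, a yes-instance has at most k*k edges, since each
            of the <= k cover vertices covers <= k edges.
    Returns (reduced edge set, reduced k), or None for a no-instance.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:  # Rule 1: v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None          # more than k forced vertices
    if len(edges) > k * k:
        return None              # Rule 2: too many edges remain
    return edges, k              # kernel with O(k^2) edges and vertices

# toy usage: a star with 5 leaves plus a disjoint edge, budget k = 2
print(buss_kernel([(0, i) for i in range(1, 6)] + [(7, 8)], 2))
```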
OR of a language

Definition
Let $L \subseteq \{0,1\}^*$ be a language. Then define
$\mathrm{Or}(L) = \{(x_1, \dots, x_p) \mid \exists i \text{ such that } x_i \in L\}$.

Definition
Let $t : \mathbb{N} \to \mathbb{N} \setminus \{0\}$ be a function. Then define
$\mathrm{Or}_t(L) = \{(x_1, \dots, x_{t(|x_1|)}) \mid \forall j\ |x_j| = |x_1|, \text{ and } \exists i \text{ such that } x_i \in L\}$.

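To make the definitions concrete, here is a small sketch (ours, not from the slides) that decides membership in $\mathrm{Or}_t(L)$ given a membership oracle for $L$; all names are illustrative.

```python
def in_or_t(instances, in_L, t):
    """Decide (x1, ..., x_{t(|x1|)}) ∈ Or_t(L), following the definition.

    instances: a list of strings; in_L: a membership oracle for L;
    t: the function t(n) from the definition.
    """
    if not instances:
        return False
    n = len(instances[0])
    if len(instances) != t(n):               # exactly t(|x1|) strings
        return False
    if any(len(x) != n for x in instances):  # all of length |x1|
        return False
    return any(in_L(x) for x in instances)   # ∃ i such that x_i ∈ L

# toy usage: L = strings with an even number of 1s, t(n) = n + 1
print(in_or_t(["01", "11", "10"], lambda x: x.count("1") % 2 == 0,
              lambda n: n + 1))              # True, because of "11"
```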
Distillation

Let $L, L' \subseteq \{0,1\}^*$ be a pair of languages and let $t : \mathbb{N} \to \mathbb{N} \setminus \{0\}$ be a function. We say that $L$ has a $t$-bounded distillation algorithm if there exists
a polynomial-time computable function $f : \{0,1\}^* \to \{0,1\}^*$ such that
$f((x_1, \dots, x_{t(|x_1|)})) \in L'$ if and only if $(x_1, \dots, x_{t(|x_1|)}) \in \mathrm{Or}_t(L)$, and
$|f((x_1, \dots, x_{t(|x_1|)}))| \le O(t(|x_1|) \log t(|x_1|))$.

Fortnow-Santhanam

Theorem (FS 09)
Suppose for a pair of languages $L, L' \subseteq \{0,1\}^*$ there exists a polynomially bounded function $t : \mathbb{N} \to \mathbb{N} \setminus \{0\}$ such that $L$ has a $t$-bounded distillation algorithm. Then $\overline{L} \in \mathrm{NP/poly}$. In particular, if $L$ is NP-hard, then $\mathrm{coNP} \subseteq \mathrm{NP/poly}$.

Outline of proof of the Fortnow-Santhanam theorem

Take an NP-complete problem $L$ with $A$, a $t$-bounded distillation algorithm.
Use $A$ to design an NDTM that, with a "polynomial advice", can decide $\overline{L}$ in polynomial time.
$\overline{L} \in \mathrm{NP/poly} \Rightarrow \mathrm{coNP} \subseteq \mathrm{NP/poly}$, and we get the theorem!

Filling in the details

For the proof, we define the notions needed and the requirements.

Let $|x_i| = n$ for all $i \in [t(n)]$.
Let $\alpha(n) = O(t(n) \log t(n))$.
Let $\overline{L}_n = \{x \notin L : |x| \le n\}$, and let $\overline{L'}^{\le \alpha(n)}$ denote the set of strings of length at most $\alpha(n)$ that are not in $L'$.
Given any $(x_1, x_2, \dots, x_{t(n)}) \notin \mathrm{Or}(L)$ (i.e., $x_i \in \overline{L}_n$ for all $i \in [t(n)]$), $A$ maps it to some $y \in \overline{L'}^{\le \alpha(n)}$.
We want to obtain an $S_n \subseteq \overline{L'}^{\le \alpha(n)}$ with $|S_n|$ polynomially bounded in $n$ such that:
If $x \in \overline{L}_n$: there exist strings $x_1, \dots, x_{t(n)} \in \Sigma^n$ with $x_i = x$ for some $i$ such that $A(x_1, \dots, x_{t(n)}) \in S_n$.
If $x \notin \overline{L}_n$: for all strings $x_1, \dots, x_{t(n)} \in \Sigma^n$ with $x_i = x$ for some $i$, $A(x_1, \dots, x_{t(n)}) \notin S_n$.
(The second condition is automatic once $S_n \subseteq \overline{L'}^{\le \alpha(n)}$: a tuple of length-$n$ strings containing a yes-instance lies in $\mathrm{Or}_t(L)$, so $A$ maps it into $L'$.)

How will the nondeterministic algorithm work?

Having $S_n$ as advice gives the desired NDTM which, when given $x$ such that $|x| = n$, checks whether $x \in \overline{L}$ in the following way:
Guesses $t(n)$ strings $x_1, \dots, x_{t(n)} \in \Sigma^n$.
Checks whether one of them is $x$.
Computes $A(x_1, \dots, x_{t(n)})$ and accepts iff the output is in $S_n$.

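The following sketch (ours) replaces the nondeterministic guess by brute-force enumeration of all guesses, just to make the accept/reject condition explicit; it is hopelessly slow, but it accepts $x$ exactly when the machine above has an accepting branch. The distillation algorithm A, the advice test in_Sn, and t are assumed to be given.

```python
from itertools import product

def complement_by_advice(x, A, in_Sn, t, alphabet="01"):
    """Brute-force stand-in for the NDTM with advice S_n.

    Accepts x (certifying x ∈ complement of L) iff some tuple of t(n)
    length-n strings containing x is mapped by A into S_n.
    """
    n = len(x)
    all_strings = ["".join(s) for s in product(alphabet, repeat=n)]
    for tup in product(all_strings, repeat=t(n)):  # "guess" every tuple
        if x in tup and in_Sn(A(list(tup))):       # check x occurs, run A
            return True                            # some branch accepts
    return False                                   # every branch rejects

# toy check: if L = ∅, A can map every tuple to a fixed string "0" ∈ S_n,
# and then every x is accepted as a member of the complement of L.
print(complement_by_advice("10", lambda tup: "0",
                           lambda y: y == "0", lambda n: 2))  # True
```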
How to get $S_n$

$A : (\overline{L}_n)^{t(n)} \to \overline{L'}^{\le \alpha(n)}$.
A string $y \in \overline{L'}^{\le \alpha(n)}$ covers a string $x \in \overline{L}_n$ if there exist $x_1, \dots, x_{t(n)} \in \overline{L}_n$ with $x_i = x$ for some $i$ and $A(x_1, \dots, x_{t(n)}) = y$.
We construct $S_n$ by iteratively picking the string in $\overline{L'}^{\le \alpha(n)}$ which covers the largest number of uncovered instances in $\overline{L}_n$, until there are no strings left to cover.
Let us consider one step of the process. Let $F$ be the set of uncovered instances in $\overline{L}_n$ at the start of the step.
By the pigeonhole principle, there exists a string $y \in \overline{L'}^{\le \alpha(n)}$ such that $A$ maps at least
$$\frac{|F|^{t(n)}}{|\overline{L'}^{\le \alpha(n)}|}$$
tuples in $F^{t(n)}$ to $y$.
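
Here is a sketch (ours) of the greedy construction, treating A as a black box over tuples of no-instances. Enumerating all tuples is exponential, but that is fine: the argument only needs $S_n$ to exist and be small; it is handed to the machine as advice, not computed by it.

```python
from itertools import product
from collections import defaultdict

def build_Sn(no_instances, A, t_n):
    """Greedily build the advice set S_n (a sketch of the argument).

    no_instances: the length-n no-instances of L (the set to cover);
    A: the distillation algorithm, applied to t_n-tuples of no-instances.
    Repeatedly pick the output string y that covers the most uncovered x.
    """
    uncovered, Sn = set(no_instances), []
    while uncovered:
        covers = defaultdict(set)   # y -> uncovered strings covered by y
        for tup in product(no_instances, repeat=t_n):
            if any(x in uncovered for x in tup):
                covers[A(list(tup))].update(set(tup) & uncovered)
        best = max(covers, key=lambda y: len(covers[y]))  # pigeonhole step
        Sn.append(best)
        uncovered -= covers[best]
    return Sn

# toy usage: a fake A that "distills" a tuple to its lexicographic minimum;
# "00" then covers everything in one step, so S_n = ["00"].
print(build_Sn({"00", "01", "10"}, lambda tup: min(tup), 2))
```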
How to get $S_n$ (Cont.)

At least $\left(\frac{|F|^{t(n)}}{|\overline{L'}^{\le \alpha(n)}|}\right)^{1/t(n)} = \frac{|F|}{|\overline{L'}^{\le \alpha(n)}|^{1/t(n)}}$ strings in $F$ are covered by $y$ in each step.
We can restate this as: at least a $\varphi(n)$ fraction of the remaining set is covered in each iteration, where
$$\varphi(n) = \frac{1}{|\overline{L'}^{\le \alpha(n)}|^{1/t(n)}} \ge \frac{1}{2^{(\alpha(n)+1)/t(n)}}.$$
There were at most $2^n$ strings to cover at the start. So the number of strings left to cover after $p$ steps is at most
$$(1 - \varphi(n))^p \, 2^n \le \frac{2^n}{e^{\varphi(n) \cdot p}},$$
which is less than one for $p = O(n/\varphi(n))$.
So the process ends after $O(n/\varphi(n)) \le O(n \cdot 2^{(\alpha(n)+1)/t(n)})$ steps, which is polynomial in $n$ since $\alpha(n) = O(t(n) \log t(n))$.
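
Spelling out the final step as a worked bound (our arithmetic, using only the definitions above): since $\alpha(n) = O(t(n) \log t(n))$,

$$\frac{1}{\varphi(n)} \le 2^{(\alpha(n)+1)/t(n)} = 2^{O(\log t(n))} = t(n)^{O(1)},$$

so the greedy process stops after $|S_n| = O(n/\varphi(n)) = n \cdot t(n)^{O(1)}$ steps, which is polynomial in $n$ because $t$ is polynomially bounded.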
Take away

A few comments about the theorem:
$\mathrm{coNP} \subseteq \mathrm{NP/poly}$ implies $\mathrm{PH} = \Sigma_3^p$.
The theorem gives us the collapse even if the distillation algorithm is allowed to be co-nondeterministic.
The main message: if you have $t(n)$ instances of size $n$, you cannot, in polynomial time, produce an instance of size $O(t(n) \log t(n))$ that is equivalent to the Or of them.

How to use the theorem to prove kernel lower bounds

We know that NP-complete problems cannot have a distillation algorithm unless $\mathrm{coNP} \subseteq \mathrm{NP/poly}$.
We want to define an analogue of distillation that produces an instance $(x, k)$ of a parameterized problem $L'$, starting from many instances of an NP-complete language $L$.
We call such an algorithm a composition algorithm. We will define it formally on the next slide.
The goal: a composition of an NP-complete language $L$ into $L'$, combined with a kernel of a certain size for $L'$, gives us a distillation of $L$.
So, if we can show that a composition algorithm with the desired properties exists from $L$ to $L'$, then $L'$ cannot have a kernel of that size.

Weak d-Composition

Definition (Weak $d$-composition). Let $\tilde{L} \subseteq \Sigma^*$ be a set and let $Q \subseteq \Sigma^* \times \mathbb{N}$ be a parameterized problem. We say that $\tilde{L}$ weakly $d$-composes into $Q$ if there is an algorithm $C$ which, given $t$ strings $x_1, x_2, \dots, x_t$, takes time polynomial in $\sum_{i=1}^{t} |x_i|$ and outputs an instance $(y, k) \in \Sigma^* \times \mathbb{N}$ such that the following hold:
$k \le t^{1/d} \, (\max_{i=1}^{t} |x_i|)^{O(1)}$;
the output is a YES-instance of $Q$ if and only if at least one instance $x_i$ is a YES-instance of $\tilde{L}$.

Theorem
Let $\tilde{L} \subseteq \Sigma^*$ be a set which is NP-hard. If $\tilde{L}$ weakly $d$-composes into the parameterized problem $Q$, then $Q$ has no kernel of size $O(k^{d-\epsilon})$ for any $\epsilon > 0$ unless $\mathrm{NP} \subseteq \mathrm{coNP/poly}$.

Proof of the theorem

Recall the theorem: if the NP-hard set $\tilde{L}$ weakly $d$-composes into the parameterized problem $Q$, then $Q$ has no kernel of size $O(k^{d-\epsilon})$ for any $\epsilon > 0$ unless $\mathrm{NP} \subseteq \mathrm{coNP/poly}$.

Proof. Let $|x_i| = n$ for all $i \in [t(n)]$ for the input of the composition. After applying the kernelization to the composed instance, the size of the instance we get is
$$O\left(\left(t(n)^{1/d} \, n^{c}\right)^{d-\epsilon}\right) = O\left(t(n)^{1-\epsilon/d} \, n^{c(d-\epsilon)}\right) = O(t(n)) \quad \text{(for $t(n)$ sufficiently large)} = O(t(n) \log t(n)).$$

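The "sufficiently large" step deserves one line of arithmetic (ours): to make $n^{c(d-\epsilon)} \le t(n)^{\epsilon/d}$, it suffices to run the composition on

$$t(n) = n^{cd(d-\epsilon)/\epsilon}$$

instances, since then $t(n)^{\epsilon/d} = n^{c(d-\epsilon)}$. The composition followed by the kernel is thus a $t$-bounded distillation of the NP-hard language $\tilde{L}$, and the Fortnow-Santhanam theorem gives the collapse.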
Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \, n^{O(1)}$. That ruled out kernels of size $k^{d-\epsilon}$.
What if we can output an instance with $k = t^{o(1)} \, n^{O(1)}$? Then we can rule out kernels of size $k^{d-\epsilon}$ for ALL $d$!
We call such an algorithm just a "composition".
Since the theorem of Fortnow and Santhanam allows co-nondeterminism, coNP compositions can also be used for proving lower bounds.
Sometimes getting a composition from arbitrary instances of a language can be difficult.
Some structure on the input instances helps to get a composition (next slide).

Polynomial Equivalence Relation

Definition (Polynomial Equivalence Relation). An equivalence relation $R$ on $\Sigma^*$ is called a polynomial equivalence relation if the following two conditions hold:
1. There is an algorithm that, given two strings $x, y \in \Sigma^*$, decides whether $x$ and $y$ belong to the same equivalence class in $(|x| + |y|)^{O(1)}$ time.
2. For any finite set $S \subseteq \Sigma^*$, the equivalence relation $R$ partitions the elements of $S$ into at most $(\max_{x \in S} |x|)^{O(1)}$ classes.

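For instance (our example, not from the slides), for graph problems a natural polynomial equivalence relation puts two instances in the same class iff they have the same number of vertices and the same parameter; equivalence is checkable in polynomial time, and any finite set of instances falls into polynomially many classes, since both quantities are bounded by the maximum instance size.

```python
def same_class(inst1, inst2):
    """A sketch of a polynomial equivalence relation on (graph, k) instances.

    An instance is (n_vertices, edges, k). Two instances are equivalent
    iff they agree on (number of vertices, parameter): a poly-time check,
    and a finite set S has at most (max instance size)^2 classes, because
    both coordinates are bounded by the longest encoding in S.
    """
    (n1, _, k1), (n2, _, k2) = inst1, inst2
    return (n1, k1) == (n2, k2)

# toy usage
print(same_class((5, [(0, 1)], 2), (5, [(2, 3), (3, 4)], 2)))  # True
print(same_class((5, [(0, 1)], 2), (6, [(0, 1)], 2)))          # False
```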
What to do with Polynomial Equivalence Relation

The equivalence relation can partition the input on the basis of different parameters. These equivalence classes can be used to give the input of the composition a nice structure.
Helpful choices are often partitions into instances with the same number of vertices, or the same requested solution size, etc.
Then all we need to do is come up with a composition algorithm for instances belonging to the same equivalence class.
Since there are only polynomially many equivalence classes, in the end we can just output an instance of $\mathrm{Or}(L')$; a sketch of this grouping step is given below.
The next slide is a nice illustration of this method by Michał Pilipczuk.

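Before that illustration, here is a minimal sketch (ours) of the grouping step just described; class_key and compose_one_class are assumed to be supplied per problem, and turning the returned list into a single instance of $\mathrm{Or}(L')$ is a problem-specific final step.

```python
from collections import defaultdict

def compose_with_classes(instances, class_key, compose_one_class):
    """Sketch of composition via a polynomial equivalence relation.

    class_key: maps an instance to its equivalence class, e.g. (n, k);
    compose_one_class: a composition that works on equivalent instances.
    Returns one composed instance per class; there are only polynomially
    many classes, so an OR over the returned list stays small.
    """
    classes = defaultdict(list)
    for x in instances:
        classes[class_key(x)].append(x)  # polynomially many groups
    return [compose_one_class(group) for group in classes.values()]
```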
Proof

[Figure: a sequence of diagrams by Michał Pilipczuk, from his "no-poly-kernels" tutorial (slide 11/31), illustrating the method: the instances of an NP-hard problem, obtained from an OR-SAT instance, are partitioned into equivalence classes labelled 1, 2, ..., k; each class is composed ("cmp") into one instance of $\tilde{L}$ with parameter poly(k); each composed instance is kernelized ("kern") to size poly(k); and the polynomially many results are combined into an instance of OR-$\tilde{L}$.]

Take away

We use compositions to rule out polynomial kernels.
A composition from an NP-hard problem $L$ to a parameterized problem $L'$ gives kernelization hardness for $L'$.
$k = t^{o(1)} \, n^c$ ⇒ no polynomial kernel.
$k = t^{1/d} \, n^c$ ⇒ no kernel of size $k^{d-\epsilon}$.
We can make use of equivalence classes to give structure to the input of the composition.
Examples on the board!

Thank You!

Venkatesh Raman

Lower Bounds on Kernelization

More Related Content

What's hot

Lossy Kernelization
Lossy KernelizationLossy Kernelization
Lossy Kernelizationmsramanujan
 
Node Unique Label Cover
Node Unique Label CoverNode Unique Label Cover
Node Unique Label Covermsramanujan
 
Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10Traian Rebedea
 
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra Sahil Kumar
 
Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9Traian Rebedea
 
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet SeriesOn Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet SeriesIOSR Journals
 
Heuristics for counterexamples to the Agrawal Conjecture
Heuristics for counterexamples to the Agrawal ConjectureHeuristics for counterexamples to the Agrawal Conjecture
Heuristics for counterexamples to the Agrawal ConjectureAmshuman Hegde
 
P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2S.Shayan Daneshvar
 
minimum spanning trees Algorithm
minimum spanning trees Algorithm minimum spanning trees Algorithm
minimum spanning trees Algorithm sachin varun
 
Nies cuny describing_finite_groups
Nies cuny describing_finite_groupsNies cuny describing_finite_groups
Nies cuny describing_finite_groupsAndre Nies
 
Algorithm Design and Complexity - Course 7
Algorithm Design and Complexity - Course 7Algorithm Design and Complexity - Course 7
Algorithm Design and Complexity - Course 7Traian Rebedea
 
Minimum spanning tree algorithms by ibrahim_alfayoumi
Minimum spanning tree algorithms by ibrahim_alfayoumiMinimum spanning tree algorithms by ibrahim_alfayoumi
Minimum spanning tree algorithms by ibrahim_alfayoumiIbrahim Alfayoumi
 
Introduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysisIntroduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysis宗翰 謝
 

What's hot (20)

Lossy Kernelization
Lossy KernelizationLossy Kernelization
Lossy Kernelization
 
Node Unique Label Cover
Node Unique Label CoverNode Unique Label Cover
Node Unique Label Cover
 
19 Minimum Spanning Trees
19 Minimum Spanning Trees19 Minimum Spanning Trees
19 Minimum Spanning Trees
 
Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10Algorithm Design and Complexity - Course 10
Algorithm Design and Complexity - Course 10
 
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
 
20 Single Source Shorthest Path
20 Single Source Shorthest Path20 Single Source Shorthest Path
20 Single Source Shorthest Path
 
Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9Algorithm Design and Complexity - Course 9
Algorithm Design and Complexity - Course 9
 
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet SeriesOn Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
 
Heuristics for counterexamples to the Agrawal Conjecture
Heuristics for counterexamples to the Agrawal ConjectureHeuristics for counterexamples to the Agrawal Conjecture
Heuristics for counterexamples to the Agrawal Conjecture
 
P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2P, NP and NP-Complete, Theory of NP-Completeness V2
P, NP and NP-Complete, Theory of NP-Completeness V2
 
Scribed lec8
Scribed lec8Scribed lec8
Scribed lec8
 
minimum spanning trees Algorithm
minimum spanning trees Algorithm minimum spanning trees Algorithm
minimum spanning trees Algorithm
 
Nies cuny describing_finite_groups
Nies cuny describing_finite_groupsNies cuny describing_finite_groups
Nies cuny describing_finite_groups
 
Algorithm Design and Complexity - Course 7
Algorithm Design and Complexity - Course 7Algorithm Design and Complexity - Course 7
Algorithm Design and Complexity - Course 7
 
Minimum spanning tree algorithms by ibrahim_alfayoumi
Minimum spanning tree algorithms by ibrahim_alfayoumiMinimum spanning tree algorithms by ibrahim_alfayoumi
Minimum spanning tree algorithms by ibrahim_alfayoumi
 
Topological sorting
Topological sortingTopological sorting
Topological sorting
 
Signals Processing Assignment Help
Signals Processing Assignment HelpSignals Processing Assignment Help
Signals Processing Assignment Help
 
Prim algorithm
Prim algorithmPrim algorithm
Prim algorithm
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
 
Introduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysisIntroduction to Fourier transform and signal analysis
Introduction to Fourier transform and signal analysis
 

Viewers also liked

Cut and Count
Cut and CountCut and Count
Cut and CountASPAK2014
 
Treewidth and Applications
Treewidth and ApplicationsTreewidth and Applications
Treewidth and ApplicationsASPAK2014
 
Color Coding
Color CodingColor Coding
Color CodingASPAK2014
 
Matroid Basics
Matroid BasicsMatroid Basics
Matroid BasicsASPAK2014
 
Representative Sets
Representative SetsRepresentative Sets
Representative SetsASPAK2014
 
Important Cuts
Important CutsImportant Cuts
Important CutsASPAK2014
 
Efficient Simplification: The (im)possibilities
Efficient Simplification: The (im)possibilitiesEfficient Simplification: The (im)possibilities
Efficient Simplification: The (im)possibilitiesNeeldhara Misra
 
Dynamic Programming Over Graphs of Bounded Treewidth
Dynamic Programming Over Graphs of Bounded TreewidthDynamic Programming Over Graphs of Bounded Treewidth
Dynamic Programming Over Graphs of Bounded TreewidthASPAK2014
 
Steiner Tree Parameterized by Treewidth
Steiner Tree Parameterized by TreewidthSteiner Tree Parameterized by Treewidth
Steiner Tree Parameterized by TreewidthASPAK2014
 
A Kernel for Planar F-deletion: The Connected Case
A Kernel for Planar F-deletion: The Connected CaseA Kernel for Planar F-deletion: The Connected Case
A Kernel for Planar F-deletion: The Connected CaseNeeldhara Misra
 
Kernels for Planar F-Deletion (Restricted Variants)
Kernels for Planar F-Deletion (Restricted Variants)Kernels for Planar F-Deletion (Restricted Variants)
Kernels for Planar F-Deletion (Restricted Variants)Neeldhara Misra
 
Separators with Non-Hereditary Properties
Separators with Non-Hereditary PropertiesSeparators with Non-Hereditary Properties
Separators with Non-Hereditary PropertiesNeeldhara Misra
 

Viewers also liked (14)

Cut and Count
Cut and CountCut and Count
Cut and Count
 
Treewidth and Applications
Treewidth and ApplicationsTreewidth and Applications
Treewidth and Applications
 
Color Coding
Color CodingColor Coding
Color Coding
 
Matroid Basics
Matroid BasicsMatroid Basics
Matroid Basics
 
Representative Sets
Representative SetsRepresentative Sets
Representative Sets
 
Important Cuts
Important CutsImportant Cuts
Important Cuts
 
Efficient Simplification: The (im)possibilities
Efficient Simplification: The (im)possibilitiesEfficient Simplification: The (im)possibilities
Efficient Simplification: The (im)possibilities
 
Dynamic Programming Over Graphs of Bounded Treewidth
Dynamic Programming Over Graphs of Bounded TreewidthDynamic Programming Over Graphs of Bounded Treewidth
Dynamic Programming Over Graphs of Bounded Treewidth
 
Steiner Tree Parameterized by Treewidth
Steiner Tree Parameterized by TreewidthSteiner Tree Parameterized by Treewidth
Steiner Tree Parameterized by Treewidth
 
EKR for Matchings
EKR for MatchingsEKR for Matchings
EKR for Matchings
 
A Kernel for Planar F-deletion: The Connected Case
A Kernel for Planar F-deletion: The Connected CaseA Kernel for Planar F-deletion: The Connected Case
A Kernel for Planar F-deletion: The Connected Case
 
Kernels for Planar F-Deletion (Restricted Variants)
Kernels for Planar F-Deletion (Restricted Variants)Kernels for Planar F-Deletion (Restricted Variants)
Kernels for Planar F-Deletion (Restricted Variants)
 
From FVS to F-Deletion
From FVS to F-DeletionFrom FVS to F-Deletion
From FVS to F-Deletion
 
Separators with Non-Hereditary Properties
Separators with Non-Hereditary PropertiesSeparators with Non-Hereditary Properties
Separators with Non-Hereditary Properties
 

Similar to Kernel Lower Bounds

The Euclidean Spaces (elementary topology and sequences)
The Euclidean Spaces (elementary topology and sequences)The Euclidean Spaces (elementary topology and sequences)
The Euclidean Spaces (elementary topology and sequences)JelaiAujero
 
Computational Complexity: Complexity Classes
Computational Complexity: Complexity ClassesComputational Complexity: Complexity Classes
Computational Complexity: Complexity ClassesAntonis Antonopoulos
 
Introduction to the theory of optimization
Introduction to the theory of optimizationIntroduction to the theory of optimization
Introduction to the theory of optimizationDelta Pi Systems
 
Interpolation techniques - Background and implementation
Interpolation techniques - Background and implementationInterpolation techniques - Background and implementation
Interpolation techniques - Background and implementationQuasar Chunawala
 
Existance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential EquartionExistance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential Equartioninventionjournals
 
Bachelor_Defense
Bachelor_DefenseBachelor_Defense
Bachelor_DefenseTeja Turk
 
Complete l fuzzy metric spaces and common fixed point theorems
Complete l fuzzy metric spaces and  common fixed point theoremsComplete l fuzzy metric spaces and  common fixed point theorems
Complete l fuzzy metric spaces and common fixed point theoremsAlexander Decker
 
Fuzzy random variables and Kolomogrov’s important results
Fuzzy random variables and Kolomogrov’s important resultsFuzzy random variables and Kolomogrov’s important results
Fuzzy random variables and Kolomogrov’s important resultsinventionjournals
 
Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...Alexander Decker
 
6-Nfa & equivalence with RE.pdf
6-Nfa & equivalence with RE.pdf6-Nfa & equivalence with RE.pdf
6-Nfa & equivalence with RE.pdfshruti533256
 
Some Thoughts on Sampling
Some Thoughts on SamplingSome Thoughts on Sampling
Some Thoughts on SamplingDon Sheehy
 
A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...OctavianPostavaru
 
Radial Basis Function Interpolation
Radial Basis Function InterpolationRadial Basis Function Interpolation
Radial Basis Function InterpolationJesse Bettencourt
 
MASSS_Presentation_20160209
MASSS_Presentation_20160209MASSS_Presentation_20160209
MASSS_Presentation_20160209Yimin Wu
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
slides_low_rank_matrix_optim_farhad
slides_low_rank_matrix_optim_farhadslides_low_rank_matrix_optim_farhad
slides_low_rank_matrix_optim_farhadFarhad Gholami
 

Similar to Kernel Lower Bounds (20)

The Euclidean Spaces (elementary topology and sequences)
The Euclidean Spaces (elementary topology and sequences)The Euclidean Spaces (elementary topology and sequences)
The Euclidean Spaces (elementary topology and sequences)
 
Dft
DftDft
Dft
 
Computational Complexity: Complexity Classes
Computational Complexity: Complexity ClassesComputational Complexity: Complexity Classes
Computational Complexity: Complexity Classes
 
QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...
QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...
QMC: Operator Splitting Workshop, Using Sequences of Iterates in Inertial Met...
 
Introduction to the theory of optimization
Introduction to the theory of optimizationIntroduction to the theory of optimization
Introduction to the theory of optimization
 
Interpolation techniques - Background and implementation
Interpolation techniques - Background and implementationInterpolation techniques - Background and implementation
Interpolation techniques - Background and implementation
 
Existance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential EquartionExistance Theory for First Order Nonlinear Random Dfferential Equartion
Existance Theory for First Order Nonlinear Random Dfferential Equartion
 
Bachelor_Defense
Bachelor_DefenseBachelor_Defense
Bachelor_Defense
 
Disjoint sets
Disjoint setsDisjoint sets
Disjoint sets
 
Complete l fuzzy metric spaces and common fixed point theorems
Complete l fuzzy metric spaces and  common fixed point theoremsComplete l fuzzy metric spaces and  common fixed point theorems
Complete l fuzzy metric spaces and common fixed point theorems
 
Fuzzy random variables and Kolomogrov’s important results
Fuzzy random variables and Kolomogrov’s important resultsFuzzy random variables and Kolomogrov’s important results
Fuzzy random variables and Kolomogrov’s important results
 
Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...Unique fixed point theorems for generalized weakly contractive condition in o...
Unique fixed point theorems for generalized weakly contractive condition in o...
 
6-Nfa & equivalence with RE.pdf
6-Nfa & equivalence with RE.pdf6-Nfa & equivalence with RE.pdf
6-Nfa & equivalence with RE.pdf
 
Some Thoughts on Sampling
Some Thoughts on SamplingSome Thoughts on Sampling
Some Thoughts on Sampling
 
25 String Matching
25 String Matching25 String Matching
25 String Matching
 
A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...
 
Radial Basis Function Interpolation
Radial Basis Function InterpolationRadial Basis Function Interpolation
Radial Basis Function Interpolation
 
MASSS_Presentation_20160209
MASSS_Presentation_20160209MASSS_Presentation_20160209
MASSS_Presentation_20160209
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
slides_low_rank_matrix_optim_farhad
slides_low_rank_matrix_optim_farhadslides_low_rank_matrix_optim_farhad
slides_low_rank_matrix_optim_farhad
 

Recently uploaded

Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatYousafMalik24
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTiammrhaywood
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitolTechU
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,Virag Sontakke
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 

Recently uploaded (20)

OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Earth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice greatEarth Day Presentation wow hello nice great
Earth Day Presentation wow hello nice great
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPTECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
ECONOMIC CONTEXT - LONG FORM TV DRAMA - PPT
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
Capitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptxCapitol Tech U Doctoral Presentation - April 2024.pptx
Capitol Tech U Doctoral Presentation - April 2024.pptx
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,भारत-रोम व्यापार.pptx, Indo-Roman Trade,
भारत-रोम व्यापार.pptx, Indo-Roman Trade,
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 

Kernel Lower Bounds

  • 1. Lower Bounds on Kernelization Venkatesh Raman Institiue of Mathematical Sciences, Chennai March 6, 2014 Venkatesh Raman Lower Bounds on Kernelization
  • 2. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Venkatesh Raman Lower Bounds on Kernelization
  • 3. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Venkatesh Raman Lower Bounds on Kernelization
  • 4. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Venkatesh Raman Lower Bounds on Kernelization
  • 5. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Exponential: k-Path – 2O(k) Venkatesh Raman Lower Bounds on Kernelization
  • 6. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Exponential: k-Path – 2O(k) No Kernel: k-Dominating Set is W-hard. So is not expected to have kernels of any size. Venkatesh Raman Lower Bounds on Kernelization
  • 7. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Exponential: k-Path – 2O(k) No Kernel: k-Dominating Set is W-hard. So is not expected to have kernels of any size. In this lecture, we will see some techniques to rule out polynomial kernels. Venkatesh Raman Lower Bounds on Kernelization
  • 8. OR of a language Definition Let L ⊆ {0, 1}∗ be a language. Then define Or(L) = {(x1 , . . . , xp ) | ∃i such that xi ∈ L} Definition Let t : N → N {0} be a function. Then define Ort (L) = {(x1 , . . . , xt(|x1 |) ) | ∀j |xj | = |x1 |, and ∃i such that xi ∈ L} Venkatesh Raman Lower Bounds on Kernelization
  • 9. Distillation Let L, L ⊆ {0, 1}∗ be a pair of languages and let t : N → N {0} be a function. We say that L has t-bounded distillation algorithm if there exists a polynomial time computable function f : {0, 1}∗ → {0, 1}∗ such that f ((x1 , . . . , xt(|x1 |) )) ∈ L if and only if (x1 , . . . , xt(|x1 |) ) ∈ Ort (L), and |f ((x1 , . . . , xt(|x1 |) )| ≤ O(t(|x1 |) log t(|x1 |)). Venkatesh Raman Lower Bounds on Kernelization
  • 10. Fortnow-Santhanam Theorem (FS 09) Suppose for a pair of languages L, L ⊆ {0, 1}∗ , there exists a polynomially bounded function t : N → N {0} such that L has a t-bounded distillation algorithm. Then L ∈ NP/poly. In particular, if L is NP-hard, then coNP ⊆ NP/poly. Venkatesh Raman Lower Bounds on Kernelization
  • 11. Outline of proof of Fortnow Santhanam theorem NP-complete problem L with A, a t-bounded distillation algorithm. Venkatesh Raman Lower Bounds on Kernelization
  • 12. Outline of proof of Fortnow Santhanam theorem NP-complete problem L with A, a t-bounded distillation algorithm. Use A to design NDTM that, with a “polynomial advice”, can decide L in P-time. Venkatesh Raman Lower Bounds on Kernelization
  • 13. Outline of proof of Fortnow Santhanam theorem NP-complete problem L with A, a t-bounded distillation algorithm. Use A to design NDTM that, with a “polynomial advice”, can decide L in P-time. L ∈ NP/poly ⇒ coNP ⊆ NP/poly and we get the theorem! Venkatesh Raman Lower Bounds on Kernelization
  • 14. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Venkatesh Raman Lower Bounds on Kernelization
  • 15. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Let α(n) = O(t(n) log(t(n))). Venkatesh Raman Lower Bounds on Kernelization
  • 16. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Let α(n) = O(t(n) log(t(n))). Ln = {x ∈ L : |x| ≤ n}. Venkatesh Raman Lower Bounds on Kernelization
  • 17. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Let α(n) = O(t(n) log(t(n))). Ln = {x ∈ L : |x| ≤ n}. given any (x1 , x2 , · · · , xt(n) ) ∈ Or(L) (ie, xi ∈ Ln ∀i ∈ [t(n)]) / A maps it to y ∈ L ≤α(n) Venkatesh Raman Lower Bounds on Kernelization
  • 18. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Let α(n) = O(t(n) log(t(n))). Ln = {x ∈ L : |x| ≤ n}. given any (x1 , x2 , · · · , xt(n) ) ∈ Or(L) (ie, xi ∈ Ln ∀i ∈ [t(n)]) / A maps it to y ∈ L ≤α(n) we want to obtain a Sn ⊆ L α(n) with |Sn | polynomially bounded in n such that Venkatesh Raman Lower Bounds on Kernelization
  • 19. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Let α(n) = O(t(n) log(t(n))). Ln = {x ∈ L : |x| ≤ n}. given any (x1 , x2 , · · · , xt(n) ) ∈ Or(L) (ie, xi ∈ Ln ∀i ∈ [t(n)]) / A maps it to y ∈ L ≤α(n) we want to obtain a Sn ⊆ L α(n) with |Sn | polynomially bounded in n such that If x ∈ Ln - ∃ strings x1 , · · · , xt(n) ∈ Σ n with xi = x for some i such that A(x1 , · · · , xt(n) ) ∈ Sn Venkatesh Raman Lower Bounds on Kernelization
  • 20. Filling in the details For the proof, we define the notions needed and the requirements. Let |xi | = n ∀i ∈ [t(n)]. Let α(n) = O(t(n) log(t(n))). Ln = {x ∈ L : |x| ≤ n}. given any (x1 , x2 , · · · , xt(n) ) ∈ Or(L) (ie, xi ∈ Ln ∀i ∈ [t(n)]) / A maps it to y ∈ L ≤α(n) we want to obtain a Sn ⊆ L α(n) with |Sn | polynomially bounded in n such that If x ∈ Ln - ∃ strings x1 , · · · , xt(n) ∈ Σ n with xi = x for some i such that A(x1 , · · · , xt(n) ) ∈ Sn If x ∈ Ln - ∀ strings x1 , · · · , xt(n) ∈ Σ n with xi = x for some i, / A(x1 , · · · , xt(n) ) ∈ Sn / Venkatesh Raman Lower Bounds on Kernelization
  • 21. How will the nondeterministic algorithm work? Having Sn as advice gives the desired NDTM which when given x such that |x| = n, checks whether x ∈ L in the following way. Guesses t(n) strings, x1 , · · · , xt(n) ∈ Σ n Venkatesh Raman Lower Bounds on Kernelization
  • 22. How will the nondeterministic algorithm work? Having Sn as advice gives the desired NDTM which when given x such that |x| = n, checks whether x ∈ L in the following way. Guesses t(n) strings, x1 , · · · , xt(n) ∈ Σ n Checks whether one of them is x Venkatesh Raman Lower Bounds on Kernelization
  • 23. How will the nondeterministic algorithm work? Having Sn as advice gives the desired NDTM which when given x such that |x| = n, checks whether x ∈ L in the following way. Guesses t(n) strings, x1 , · · · , xt(n) ∈ Σ n Checks whether one of them is x Computes A(x1 , · · · , xt(n) ) and accepts iff output is in Sn . Venkatesh Raman Lower Bounds on Kernelization
  • 24. How to get Sn A : (Ln )t → L ≤α(n) Venkatesh Raman Lower Bounds on Kernelization
  • 25. How to get Sn A : (Ln )t → L ≤α(n) y ∈ L ≤α(n) covers a string x ∈ Ln — ∃x1 , · · · , xt ∈ Σ n with xi = x for some i and A(x1 , · · · , xt(n) ) = y Venkatesh Raman Lower Bounds on Kernelization
  • 26. How to get Sn A : (Ln )t → L ≤α(n) y ∈ L ≤α(n) covers a string x ∈ Ln — ∃x1 , · · · , xt ∈ Σ n with xi = x for some i and A(x1 , · · · , xt(n) ) = y We construct Sn by iteratively picking the string in L ≤α(n) which covers the most number of instances in Ln till there are no strings left to cover. Venkatesh Raman Lower Bounds on Kernelization
  • 27. How to get Sn A : (Ln )t → L ≤α(n) y ∈ L ≤α(n) covers a string x ∈ Ln — ∃x1 , · · · , xt ∈ Σ n with xi = x for some i and A(x1 , · · · , xt(n) ) = y We construct Sn by iteratively picking the string in L ≤α(n) which covers the most number of instances in Ln till there are no strings left to cover. Let us consider one step of the process. Let F be the set of uncovered instances in Ln at the start of step. Venkatesh Raman Lower Bounds on Kernelization
  • 28. How to get Sn A : (Ln )t → L ≤α(n) y ∈ L ≤α(n) covers a string x ∈ Ln — ∃x1 , · · · , xt ∈ Σ n with xi = x for some i and A(x1 , · · · , xt(n) ) = y We construct Sn by iteratively picking the string in L ≤α(n) which covers the most number of instances in Ln till there are no strings left to cover. Let us consider one step of the process. Let F be the set of uncovered instances in Ln at the start of step. By PHP there exists a string y ∈ L ≤α(n) such that A maps at least |F |t(n) |L ≤α(n) | tuples in F t(n) to y . Venkatesh Raman Lower Bounds on Kernelization
  • 29. How to get Sn (Cont.) At least |F |t(n) |L ≤α(n) | 1/t(n) = |F | |L ≤α(n) | 1/t(n) strings in F are covered by y in each step. Venkatesh Raman Lower Bounds on Kernelization
  • 30. How to get Sn (Cont.) At least |F |t(n) |L ≤α(n) | 1/t(n) = |F | |L ≤α(n) | 1/t(n) strings in F are covered by y in each step. We can restate the above statement, saying that at least ϕ(s) fraction of the remaining set is covered in each iteration, where 1 1 = (α(n)+1)/t(n) ϕ(n) = 1/t(n) 2 |L ≤α(n) | Venkatesh Raman Lower Bounds on Kernelization
  • 31. How to get Sn (Cont.) At least |F |t(n) |L ≤α(n) | 1/t(n) = |F | |L ≤α(n) | 1/t(n) strings in F are covered by y in each step. We can restate the above statement, saying that at least ϕ(s) fraction of the remaining set is covered in each iteration, where 1 1 = (α(n)+1)/t(n) ϕ(n) = 1/t(n) 2 |L ≤α(n) | There were 2n strings to cover at the starting. So, the number of strings left to cover after p steps is at most (1 − ϕ(n))p 2n ≤ 2n e ϕ(n)·p which is less than one for p = O(n/ϕ(n)). Venkatesh Raman Lower Bounds on Kernelization
  • 32. How to get Sn (Cont.) At least |F |t(n) |L ≤α(n) | 1/t(n) = |F | |L ≤α(n) | 1/t(n) strings in F are covered by y in each step. We can restate the above statement, saying that at least ϕ(s) fraction of the remaining set is covered in each iteration, where 1 1 = (α(n)+1)/t(n) ϕ(n) = 1/t(n) 2 |L ≤α(n) | There were 2n strings to cover at the starting. So, the number of strings left to cover after p steps is at most (1 − ϕ(n))p 2n ≤ 2n e ϕ(n)·p which is less than one for p = O(n/ϕ(n)). So, the process ends after O(n/ϕ(n)) ≤ n · 2(α(n)+1)/t(n) steps, which is polynomial in n since α(n) = O(t(n) log(t(n))). Venkatesh Raman Lower Bounds on Kernelization
  • 33. Take away A few comments about the theorem coNP ⊆ NP/poly implies PH = Σ3 . p The theorem gives us the collapse even if the distillation algorithm is allowed to be in co-nondeterministic. Main message is, that if you have t(n) instances of size n, you can not get an instance equivalent to the Or of them in polynomial time of size O(t(n) log t(n)) Venkatesh Raman Lower Bounds on Kernelization
  • 34. How to use the theorem to prove kernel lower bounds We know that NP-complete problems can not have a distillation algorithm unless coNP ⊆ NP/poly. Venkatesh Raman Lower Bounds on Kernelization
  • 35. How to use the theorem to prove kernel lower bounds We know that NP-complete problems can not have a distillation algorithm unless coNP ⊆ NP/poly. We want to define some analogue of distillation to produce an instance (x, k) of a parameterized problem L , starting from many instances of an NP-complete language L. Venkatesh Raman Lower Bounds on Kernelization
  • 36. How to use the theorem to prove kernel lower bounds We know that NP-complete problems can not have a distillation algorithm unless coNP ⊆ NP/poly. We want to define some analogue of distillation to produce an instance (x, k) of a parameterized problem L , starting from many instances of an NP-complete language L. We call such an algorithm a composition algorithm. We will define it formally in the next slide. Venkatesh Raman Lower Bounds on Kernelization
• 37. How to use the theorem to prove kernel lower bounds

We know that NP-complete problems cannot have a distillation algorithm unless coNP ⊆ NP/poly.

We want to define some analogue of distillation that produces an instance $(x, k)$ of a parameterized problem $L'$, starting from many instances of an NP-complete language $L$.

We call such an algorithm a composition algorithm. We will define it formally on the next slide.

The goal is that a composition of an NP-complete language $L$ into $L'$, combined with a kernel of a certain size for $L'$, gives us a distillation algorithm for $L$.
• 38. How to use the theorem to prove kernel lower bounds

We know that NP-complete problems cannot have a distillation algorithm unless coNP ⊆ NP/poly.

We want to define some analogue of distillation that produces an instance $(x, k)$ of a parameterized problem $L'$, starting from many instances of an NP-complete language $L$.

We call such an algorithm a composition algorithm. We will define it formally on the next slide.

The goal is that a composition of an NP-complete language $L$ into $L'$, combined with a kernel of a certain size for $L'$, gives us a distillation algorithm for $L$.

So, if we can show that a composition algorithm from $L$ to $L'$ with the desired properties exists, then $L'$ cannot have a kernel of a certain size.
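Spelled out as a pipeline (a paraphrase of the plan above; the size bookkeeping is done on the proof slide below): if $C$ is the composition and $K$ a kernelization for $L'$, then

$$f(x_1, \dots, x_{t(n)}) \;=\; K\big(C(x_1, \dots, x_{t(n)})\big)$$

runs in polynomial time, preserves the OR of the $x_i$, and, if $K$ produces small enough kernels, has output small enough to be a $t$-bounded distillation for $L$, contradicting the Fortnow–Santhanam theorem.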
• 39. Weak d-Composition

Definition (Weak d-composition). Let $\tilde{L} \subseteq \Sigma^*$ be a set and let $Q \subseteq \Sigma^* \times \mathbb{N}$ be a parameterized problem. We say that $\tilde{L}$ weak-$d$-composes into $Q$ if there is an algorithm $C$ which, given $t$ strings $x_1, x_2, \dots, x_t$, takes time polynomial in $\sum_{i=1}^{t} |x_i|$ and outputs an instance $(y, k) \in \Sigma^* \times \mathbb{N}$ such that the following hold:

1. $k \le t^{1/d} \cdot \big(\max_{i=1}^{t} |x_i|\big)^{O(1)}$
2. The output is a YES instance of $Q$ if and only if at least one instance $x_i$ is a YES instance of $\tilde{L}$.

Theorem. Let $\tilde{L} \subseteq \Sigma^*$ be a set which is NP-hard. If $\tilde{L}$ weak-$d$-composes into the parameterized problem $Q$, then $Q$ has no kernel of size $O(k^{d-\varepsilon})$ for any $\varepsilon > 0$ unless NP ⊆ coNP/poly.
• 40. Proof of the theorem

Theorem. Let $\tilde{L} \subseteq \Sigma^*$ be a set which is NP-hard. If $\tilde{L}$ weak-$d$-composes into the parameterized problem $Q$, then $Q$ has no kernel of size $O(k^{d-\varepsilon})$ for any $\varepsilon > 0$ unless NP ⊆ coNP/poly.

Proof. Let $|x_i| = n$ for all $i \in [t(n)]$ in the input of the composition. After applying the kernelization to the composed instance, the size of the instance we get is
$$O\big( (t(n)^{1/d} \, n^c)^{d-\varepsilon} \big) \;=\; O\big( t(n)^{1 - \varepsilon/d} \, n^{c(d-\varepsilon)} \big) \;=\; O(t(n)) \ \text{(for $t(n)$ sufficiently large)} \;\le\; O(t(n) \log t(n)).$$
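The "sufficiently large" step can be made explicit (this choice of exponent is my own bookkeeping, not from the slide; $c$ is the constant from the parameter bound $k \le t^{1/d} n^c$): pick $t(n) = n^s$ with $s \ge 2cd(d-\varepsilon)/\varepsilon$, so that $n^{c(d-\varepsilon)} \le t(n)^{\varepsilon/(2d)}$. Then

$$t(n)^{1 - \varepsilon/d} \, n^{c(d-\varepsilon)} \;\le\; t(n)^{1 - \varepsilon/d + \varepsilon/(2d)} \;=\; t(n)^{1 - \varepsilon/(2d)} \;=\; O(t(n)),$$

and $t$ is still polynomially bounded, so the composed-then-kernelized map is a $t$-bounded distillation of the NP-hard language $\tilde{L}$, and the Fortnow–Santhanam theorem applies.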
• 41. Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \cdot n^{O(1)}$. That ruled out kernels of size $k^{d-\varepsilon}$.
• 42. Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \cdot n^{O(1)}$. That ruled out kernels of size $k^{d-\varepsilon}$.

What if we can output an instance with $k = t^{o(1)} \cdot n^{O(1)}$? Then we can rule out kernels of size $k^{d-\varepsilon}$ for ALL $d$!
• 43. Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \cdot n^{O(1)}$. That ruled out kernels of size $k^{d-\varepsilon}$.

What if we can output an instance with $k = t^{o(1)} \cdot n^{O(1)}$? Then we can rule out kernels of size $k^{d-\varepsilon}$ for ALL $d$!

We call such an algorithm just a "composition".
• 44. Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \cdot n^{O(1)}$. That ruled out kernels of size $k^{d-\varepsilon}$.

What if we can output an instance with $k = t^{o(1)} \cdot n^{O(1)}$? Then we can rule out kernels of size $k^{d-\varepsilon}$ for ALL $d$!

We call such an algorithm just a "composition".

Since the Fortnow–Santhanam theorem allows co-nondeterminism, coNP compositions can also be used to prove lower bounds.
• 45. Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \cdot n^{O(1)}$. That ruled out kernels of size $k^{d-\varepsilon}$.

What if we can output an instance with $k = t^{o(1)} \cdot n^{O(1)}$? Then we can rule out kernels of size $k^{d-\varepsilon}$ for ALL $d$!

We call such an algorithm just a "composition".

Since the Fortnow–Santhanam theorem allows co-nondeterminism, coNP compositions can also be used to prove lower bounds.

Sometimes getting a composition from arbitrary instances of a language can be difficult.
• 46. Some comments about composition

In the composition, we asked for the parameter $k$ to be at most $t^{1/d} \cdot n^{O(1)}$. That ruled out kernels of size $k^{d-\varepsilon}$.

What if we can output an instance with $k = t^{o(1)} \cdot n^{O(1)}$? Then we can rule out kernels of size $k^{d-\varepsilon}$ for ALL $d$!

We call such an algorithm just a "composition".

Since the Fortnow–Santhanam theorem allows co-nondeterminism, coNP compositions can also be used to prove lower bounds.

Sometimes getting a composition from arbitrary instances of a language can be difficult.

Some structure on the input instances helps in getting a composition (next slide).
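To see why $k = t^{o(1)} \cdot n^{O(1)}$ rules out all polynomial kernels (the same bookkeeping as in the proof above, done here for an arbitrary fixed exponent $d$): a kernel of size $O(k^d)$ applied to the composed instance has size

$$O\big( (t^{o(1)} \, n^{O(1)})^{d} \big) \;=\; O\big( t^{o(1)} \, n^{O(1)} \big) \;=\; O(t)$$

for $t(n)$ a sufficiently large polynomial in $n$ (e.g. large enough that $n^{O(1) \cdot d} \le t^{1/2}$, so the size is $t^{1/2 + o(1)}$). Since this works for every fixed $d$, no polynomial kernel exists unless NP ⊆ coNP/poly.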
• 47. Polynomial Equivalence Relation

Definition (Polynomial Equivalence Relation). An equivalence relation $R$ on $\Sigma^*$ is called a polynomial equivalence relation if the following two conditions hold:

1. There is an algorithm that, given two strings $x, y \in \Sigma^*$, decides whether $x$ and $y$ belong to the same equivalence class in $(|x| + |y|)^{O(1)}$ time.
2. For any finite set $S \subseteq \Sigma^*$, the equivalence relation $R$ partitions the elements of $S$ into at most $(\max_{x \in S} |x|)^{O(1)}$ classes.
• 48. What to do with a Polynomial Equivalence Relation

The equivalence relation can partition the input on the basis of different parameters. These equivalence classes give the input to the composition a nice structure. Helpful choices are often partitions into instances with the same number of vertices, or the same requested solution size, etc.

Then all we need to do is come up with a composition algorithm for instances belonging to the same equivalence class. Since there are only polynomially many equivalence classes, at the end we can just output an instance of $\mathrm{Or}(L')$.

The next slides are a nice illustration of this method by Michal Pilipczuk.
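A schematic of how the pieces fit together, as a minimal Python sketch. Everything here is a hypothetical stand-in: `same_class_key` plays the role of the polynomial equivalence relation $R$ (e.g. keying on (number of vertices, solution size)), `compose_within_class` is the composition that only has to work on structured, same-class inputs, and `or_of` bundles the polynomially many per-class outputs into one Or instance.

```python
from collections import defaultdict

def structured_composition(instances, same_class_key, compose_within_class, or_of):
    """Compose arbitrary instances by first bucketing them with a
    polynomial equivalence relation (illustrative sketch, not a real API)."""
    # 1. Partition the inputs into equivalence classes of R.
    #    R guarantees that only polynomially many classes arise.
    classes = defaultdict(list)
    for x in instances:
        classes[same_class_key(x)].append(x)

    # 2. Run the (easier) same-class composition inside each class.
    composed = [compose_within_class(xs) for xs in classes.values()]

    # 3. Output a single instance equivalent to the OR of the
    #    polynomially many composed instances.
    return or_of(composed)
```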
• 57. Take away

We use compositions to rule out polynomial kernels.
• 58. Take away

We use compositions to rule out polynomial kernels.

A composition from an NP-hard problem $L$ to a parameterized problem $L'$ gives kernelization hardness for $L'$.
• 59. Take away

We use compositions to rule out polynomial kernels.

A composition from an NP-hard problem $L$ to a parameterized problem $L'$ gives kernelization hardness for $L'$.

$k = t^{o(1)} n^c$ ⇒ no polynomial kernel.
• 60. Take away

We use compositions to rule out polynomial kernels.

A composition from an NP-hard problem $L$ to a parameterized problem $L'$ gives kernelization hardness for $L'$.

$k = t^{o(1)} n^c$ ⇒ no polynomial kernel.

$k = t^{1/d} n^c$ ⇒ no kernel of size $k^{d-\varepsilon}$.
• 61. Take away

We use compositions to rule out polynomial kernels.

A composition from an NP-hard problem $L$ to a parameterized problem $L'$ gives kernelization hardness for $L'$.

$k = t^{o(1)} n^c$ ⇒ no polynomial kernel.

$k = t^{1/d} n^c$ ⇒ no kernel of size $k^{d-\varepsilon}$.

We can make use of equivalence classes to give structure to the input of the composition.
• 62. Take away

We use compositions to rule out polynomial kernels.

A composition from an NP-hard problem $L$ to a parameterized problem $L'$ gives kernelization hardness for $L'$.

$k = t^{o(1)} n^c$ ⇒ no polynomial kernel.

$k = t^{1/d} n^c$ ⇒ no kernel of size $k^{d-\varepsilon}$.

We can make use of equivalence classes to give structure to the input of the composition.

Examples on the board!
• 63. Thank You!