This document presents slides on treewidth and algorithms for graphs of bounded treewidth. It has three parts:
1) Algorithms for bounded-treewidth graphs, showing that Weighted Max Independent Set and c-Coloring can be solved in FPT time parameterized by treewidth, using dynamic programming on tree decompositions.
2) Graph-theoretic properties of treewidth, including the definitions of treewidth and tree decompositions.
3) Applications of these algorithms to problems on general graphs, such as Hamiltonian Cycle.
Dynamic Programming Over Graphs of Bounded Treewidth
1. Treewidth – Dynamic Programming
Saket Saurabh
Institute of Mathematical Sciences, India
(Based on the slides of Dániel Marx)
ASPAK 2014: March 3-8
Treewidth – Dynamic Programming – p.1/24
2. Treewidth
Introduction and definition – DONE
Part I: Algorithms for bounded treewidth graphs.
Part II: Graph-theoretic properties of treewidth.
Part III: Applications for general graphs.
4. Treewidth
Treewidth: a measure of how "tree-like" the graph is. (Introduced by Robertson and Seymour.)
Tree decomposition: the vertices are arranged in a tree structure of bags satisfying the following properties:
1. If u and v are neighbors, then there is a bag containing both of them.
2. For every vertex v, the bags containing v form a connected subtree.
Width of the decomposition: largest bag size − 1.
Treewidth: width of the best decomposition.
Fact: treewidth = 1 ⟺ the graph is a forest.
[Figure: a graph on vertices a–h together with a tree decomposition whose bags are {a,b,c}, {b,c,f}, {c,d,f}, {d,f,g}, {b,e,f}, {g,h}.]
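The two conditions above are easy to check mechanically. Below is a small Python sketch that verifies them (plus vertex coverage) for the decomposition shown on the slide. The edge set of the example graph is not fully legible in the extracted slides, so the edges used here are an assumed set chosen to be consistent with the bags.

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the tree-decomposition conditions from the slide.
    bags: list of sets; tree_edges: pairs of bag indices forming a tree."""
    # Every vertex and every edge of the graph must lie inside some bag.
    if any(not any(v in b for b in bags) for v in vertices):
        return False
    if any(not any(u in b and v in b for b in bags) for u, v in edges):
        return False
    # For every vertex, the bags containing it must form a connected subtree.
    for v in vertices:
        nodes = {i for i, b in enumerate(bags) if v in b}
        seen, stack = set(), [min(nodes)]
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            for a, b in tree_edges:
                if a == i and b in nodes:
                    stack.append(b)
                elif b == i and a in nodes:
                    stack.append(a)
        if seen != nodes:
            return False
    return True

def width(bags):
    """Width of a decomposition: largest bag size minus one."""
    return max(len(b) for b in bags) - 1

# The decomposition from the slide; the graph edges are assumed.
bags = [{'a','b','c'}, {'b','c','f'}, {'c','d','f'},
        {'d','f','g'}, {'g','h'}, {'b','e','f'}]
tree_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5)]
vertices = set('abcdefgh')
edges = [('a','b'), ('b','c'), ('c','f'), ('c','d'), ('d','f'),
         ('f','g'), ('d','g'), ('g','h'), ('b','e'), ('e','f')]
```

For this data `is_tree_decomposition(vertices, edges, bags, tree_edges)` holds and `width(bags)` is 2; dropping the tree edge (1, 5) disconnects the bags containing b, violating condition 2.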
10. Finding tree decompositions
Fact: It is NP-hard to determine the treewidth of a graph (given a graph G and an integer w, decide if the treewidth of G is at most w), but there is a polynomial-time algorithm for every fixed w.
Fact: [Bodlaender's Theorem] For every fixed w, there is a linear-time algorithm that finds a tree decomposition of width w (if one exists).
⇒ Deciding if the treewidth is at most w is fixed-parameter tractable.
⇒ If we want an FPT algorithm parameterized by the treewidth w of the input graph, then we can assume that a tree decomposition of width w is available.
The running time is 2^O(w^3) · n. Sometimes it is better to use the following results instead:
Fact: There is an O(3^(3w) · w · n^2) time algorithm that finds a tree decomposition of width 4w + 1, if the treewidth of the graph is at most w.
Fact: There is a polynomial-time algorithm that finds a tree decomposition of width O(w·√(log w)), if the treewidth of the graph is at most w.
13. Weighted Max Independent Set and tree decompositions
Fact: Given a tree decomposition of width w, Weighted Max Independent Set can be solved in time O(2^w · n).
Bx: vertices appearing in node x.
Vx: vertices appearing in the subtree rooted at x.
Generalizing our solution for trees: instead of computing 2 values A[v], B[v] for each vertex of the graph, we compute 2^|Bx| ≤ 2^(w+1) values for each bag Bx.
M[x, S]: the maximum weight of an independent set I ⊆ Vx with I ∩ Bx = S.
[Figure: for the bag {b, c, f}, one value M[x, S] is computed for each of the 8 subsets S ⊆ {b, c, f}.]
How can we determine M[x, S] if all the values are known for the children of x?
14. Nice tree decompositions
Definition: A rooted tree decomposition is nice if every node x is one of the following 4 types:
Leaf: no children, |Bx| = 1
Introduce: 1 child y, Bx = By ∪ {v} for some vertex v
Forget: 1 child y, Bx = By \ {v} for some vertex v
Join: 2 children y1, y2 with Bx = By1 = By2
[Figure: examples of the four node types — a leaf {v}; an introduce node {u, v, w} with child {u, w}; a forget node {u, w} with child {u, v, w}; a join node {u, v, w} with two children {u, v, w}.]
Fact: A tree decomposition of width w and n nodes can be turned into a nice tree decomposition of width w and O(w·n) nodes in time O(w^2 · n).
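A minimal way to represent nice decompositions in code is one small constructor per node type, each enforcing its condition from the definition. The dict-based format below is an illustrative assumption, not something fixed by the slides.

```python
def leaf(v):
    """Leaf node: the bag is a single vertex."""
    return {'type': 'leaf', 'bag': frozenset({v}), 'children': []}

def introduce(child, v):
    """Introduce node: Bx = By ∪ {v}, where v is not already in By."""
    assert v not in child['bag']
    return {'type': 'introduce', 'bag': child['bag'] | {v},
            'vertex': v, 'children': [child]}

def forget(child, v):
    """Forget node: Bx = By minus {v}, where v is in By."""
    assert v in child['bag']
    return {'type': 'forget', 'bag': child['bag'] - {v},
            'vertex': v, 'children': [child]}

def join(c1, c2):
    """Join node: two children with identical bags."""
    assert c1['bag'] == c2['bag']
    return {'type': 'join', 'bag': c1['bag'], 'children': [c1, c2]}
```

For the path a–b–c, `introduce(forget(introduce(leaf('a'), 'b'), 'a'), 'c')` builds a nice decomposition whose root bag is {b, c}.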
15. Weighted Max Independent Set and nice tree decompositions
Leaf: no children, |Bx| = 1. Trivial!
Introduce: 1 child y, Bx = By ∪ {v} for some vertex v:
m[x, S] = m[y, S] if v ∉ S,
m[x, S] = m[y, S \ {v}] + w(v) if v ∈ S but v has no neighbor in S,
m[x, S] = −∞ if S contains both v and a neighbor of v.
Forget: 1 child y, Bx = By \ {v} for some vertex v:
m[x, S] = max{ m[y, S], m[y, S ∪ {v}] }
Join: 2 children y1, y2 with Bx = By1 = By2:
m[x, S] = m[y1, S] + m[y2, S] − w(S)
There are at most 2^(w+1) · n subproblems m[x, S] and each subproblem can be solved in constant time (assuming the children are already solved).
⇒ The running time is O(2^w · n).
⇒ Weighted Max Independent Set is FPT parameterized by treewidth.
⇒ Weighted Min Vertex Cover is FPT parameterized by treewidth.
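The four recurrences translate almost line for line into code. The sketch below assumes a nice decomposition given as plain dicts (keys 'type', 'bag', 'children', and 'vertex' for introduce/forget nodes), an adjacency map `adj`, and a weight map `weight`; these names are illustrative, not from the slides.

```python
from itertools import combinations

def solve_wmis(root, adj, weight):
    """Weighted Max Independent Set by DP over a nice tree decomposition.
    Returns the maximum weight of an independent set in the whole graph."""
    def subsets(bag):
        for r in range(len(bag) + 1):
            for c in combinations(sorted(bag), r):
                yield frozenset(c)

    def m(x):  # dict: S -> max weight of I ⊆ Vx with I ∩ Bx = S
        t, bag = x['type'], x['bag']
        if t == 'leaf':
            (v,) = bag
            return {frozenset(): 0, frozenset({v}): weight[v]}
        if t == 'introduce':
            v = x['vertex']
            child = m(x['children'][0])
            table = {}
            for S in subsets(bag):
                if v not in S:
                    table[S] = child[S]
                elif adj[v] & S:        # v together with a neighbor: invalid
                    table[S] = float('-inf')
                else:
                    table[S] = child[S - {v}] + weight[v]
            return table
        if t == 'forget':
            v = x['vertex']
            child = m(x['children'][0])
            return {S: max(child[S], child[S | {v}]) for S in subsets(bag)}
        if t == 'join':
            c1, c2 = (m(c) for c in x['children'])
            return {S: c1[S] + c2[S] - sum(weight[v] for v in S)
                    for S in subsets(bag)}

    return max(m(root).values())
```

On the path a–b–c with weights 2, 3, 2 (and a nice decomposition built as leaf {a}, introduce b, forget a, introduce c), this returns 4, i.e. the set {a, c}.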
18. 3-Coloring and tree decompositions
Fact: Given a tree decomposition of width w, 3-Coloring can be solved in time O(3^w · n).
Bx: vertices appearing in node x.
Vx: vertices appearing in the subtree rooted at x.
For every node x and coloring c: Bx → {1, 2, 3}, we compute the Boolean value E[x, c], which is true if and only if c can be extended to a proper 3-coloring of Vx.
[Figure: for the bag {b, c, f}, one value E[x, c] ∈ {T, F} is computed for each of the 27 colorings c of {b, c, f}.]
How can we determine E[x, c] if all the values are known for the children of x?
19. 3-Coloring and nice tree decompositions
Leaf: no children, |Bx| = 1. Trivial!
Introduce: 1 child y, Bx = By ∪ {v} for some vertex v:
If c(v) ≠ c(u) for every neighbor u of v in Bx, then E[x, c] = E[y, c′], where c′ is c restricted to By; otherwise E[x, c] is false.
Forget: 1 child y, Bx = By \ {v} for some vertex v:
E[x, c] is true if E[y, c′] is true for one of the 3 extensions c′ of c to By.
Join: 2 children y1, y2 with Bx = By1 = By2:
E[x, c] = E[y1, c] ∧ E[y2, c]
There are at most 3^(w+1) · n subproblems E[x, c] and each subproblem can be solved in constant time (assuming the children are already solved).
⇒ The running time is O(3^w · n).
⇒ 3-Coloring is FPT parameterized by treewidth.
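These rules can be coded in the same style as the independent-set DP. The sketch below uses the same hypothetical dict-based node format (keys 'type', 'bag', 'children', 'vertex') and an adjacency map `adj`; it decides 3-colorability of the whole graph.

```python
from itertools import product

def three_colorable(root, adj):
    """Decide 3-colorability by DP over a nice tree decomposition."""
    COLORS = (1, 2, 3)

    def colorings(bag):
        bag = sorted(bag)
        for vals in product(COLORS, repeat=len(bag)):
            yield dict(zip(bag, vals))

    def key(c):  # hashable form of a coloring
        return frozenset(c.items())

    def E(x):  # dict: coloring of Bx -> extendable to proper coloring of Vx?
        t, bag = x['type'], x['bag']
        if t == 'leaf':
            return {key(c): True for c in colorings(bag)}
        if t == 'introduce':
            v = x['vertex']
            child = E(x['children'][0])
            table = {}
            for c in colorings(bag):
                ok = all(c[v] != c[u] for u in adj[v] if u in bag)
                sub = {u: col for u, col in c.items() if u != v}
                table[key(c)] = ok and child[key(sub)]
            return table
        if t == 'forget':
            v = x['vertex']
            child = E(x['children'][0])
            return {key(c): any(child[key({**c, v: col})] for col in COLORS)
                    for c in colorings(bag)}
        if t == 'join':
            c1, c2 = (E(c) for c in x['children'])
            return {k: c1[k] and c2[k] for k in c1}

    return any(E(root).values())
```

For a triangle (nice decomposition: leaf {a}, introduce b, introduce c) the function returns True; adding a fourth vertex adjacent to all three (introduce d) gives K4, and it returns False.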
21. Vertex coloring
More generally:
Fact: Given a tree decomposition of width w, c-Coloring can be solved in time O*(c^w).
Exercise: Every graph of treewidth at most w can be colored with w + 1 colors.
Fact: Given a tree decomposition of width w, Vertex Coloring can be solved in time O*(w^w).
⇒ Vertex Coloring is FPT parameterized by treewidth.
22. Hamiltonian cycle and tree decompositions
Fact: Given a tree decomposition of width w, Hamiltonian Cycle can be solved in time w^O(w) · n.
Bx: vertices appearing in node x.
Vx: vertices appearing in the subtree rooted at x.
If H is a Hamiltonian cycle, then the subgraph H[Vx] is a set of paths with endpoints in Bx.
What are the important properties of H[Vx] "seen from the outside world"?
The subsets Bx^0, Bx^1, Bx^2 of Bx having degree 0, 1, and 2 in H[Vx].
The matching M on Bx^1 (recording which pairs of degree-1 vertices are endpoints of the same path).
Number of subproblems (Bx^0, Bx^1, Bx^2, M) for each node x: at most 3^w · w^w.
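To get a feel for the size of this state space, the sketch below enumerates all signatures (Bx^0, Bx^1, Bx^2, M) for a given bag: every assignment of degrees 0/1/2 to the bag's vertices with an even number of degree-1 vertices, combined with every perfect matching of the degree-1 vertices. It is an illustrative enumeration only, not the full Hamiltonian cycle DP.

```python
from itertools import product

def matchings(items):
    """All perfect matchings of an even-sized collection, as sets of pairs."""
    items = list(items)
    if not items:
        yield frozenset()
        return
    first, rest = items[0], items[1:]
    for i, other in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset({first, other})}

def hc_states(bag):
    """Enumerate Hamiltonian-cycle DP signatures (B0, B1, B2, M) for a bag."""
    bag = list(bag)
    for degs in product((0, 1, 2), repeat=len(bag)):
        B = {d: frozenset(v for v, dv in zip(bag, degs) if dv == d)
             for d in (0, 1, 2)}
        if len(B[1]) % 2:  # degree-1 vertices must pair up as path endpoints
            continue
        for M in matchings(B[1]):
            yield B[0], B[1], B[2], M
```

A bag of size 1 has 2 signatures, a bag of size 2 has 5, and a bag of size 4 has 43, consistent with the 3^w · w^w bound on the slide.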
23. Hamiltonian cycle and nice tree decompositions
For each subproblem (Bx^0, Bx^1, Bx^2, M), we have to determine if there is a set of paths with this pattern.
How to do this for the different types of nodes? (Assuming that all the subproblems are solved for the children.)
Leaf: no children, |Bx| = 1. Trivial!
24. Hamiltonian cycle and nice tree decompositions
Solving subproblem (Bx^0, Bx^1, Bx^2, M) of node x.
Forget: 1 child y, Bx = By \ {v} for some vertex v.
In a solution H of (Bx^0, Bx^1, Bx^2, M), vertex v has degree 2. Thus subproblem (Bx^0, Bx^1, Bx^2, M) of x is equivalent to subproblem (Bx^0, Bx^1, Bx^2 ∪ {v}, M) of y.
27. Hamiltonian cycle and nice tree decompositions
Solving subproblem (Bx^0, Bx^1, Bx^2, M) of node x.
Introduce: 1 child y, Bx = By ∪ {v} for some vertex v.
Case 1: v ∈ Bx^0. The subproblem is equivalent to subproblem (Bx^0 \ {v}, Bx^1, Bx^2, M) of node y.
31. Hamiltonian cycle and nice tree decompositions
Solving subproblem (Bx^0, Bx^1, Bx^2, M) of node x.
Introduce: 1 child y, Bx = By ∪ {v} for some vertex v.
Case 2: v ∈ Bx^1. Every neighbor of v in Vx is in Bx, so v has to be adjacent to one other vertex of Bx in the solution.
Is there a subproblem (By^0, By^1, By^2, M′) of node y such that making a vertex of By adjacent to v turns it into a solution for subproblem (Bx^0, Bx^1, Bx^2, M) of node x?
36. Hamiltonian cycle and nice tree decompositions
Solving subproblem (Bx^0, Bx^1, Bx^2, M) of node x.
Introduce: 1 child y, Bx = By ∪ {v} for some vertex v.
Case 3: v ∈ Bx^2. Similar to Case 2, but 2 vertices of By are adjacent to v.
[Figure: the bag of node x, partitioned into Bx^0, Bx^1, Bx^2, with the introduced vertex v; below it, the bag of the child y.]
41. Hamiltonian cycle and nice tree decompositions
Solving subproblem (Bx^0, Bx^1, Bx^2, M) of node x.
Join: 2 children y1, y2 with Bx = By1 = By2.
A solution H is the union of a subgraph H1 ⊆ G[Vy1] and a subgraph H2 ⊆ G[Vy2].
If H1 is a solution for (By1^0, By1^1, By1^2, M1) of node y1 and H2 is a solution for (By2^0, By2^1, By2^2, M2) of node y2, then we can check if H1 ∪ H2 is a solution for (Bx^0, Bx^1, Bx^2, M) of node x.
For any two subproblems of y1 and y2, we check if they have solutions and if their union is a solution for (Bx^0, Bx^1, Bx^2, M) of node x.
Treewidth – Dynamic Programming – p.17/24
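Each subproblem also stores M, the pairing of bag vertices that are endpoints of the same path of the partial solution. At a Join node the two path systems are glued, and the union must not close a cycle prematurely. A minimal sketch of this check, assuming M1 and M2 are given as lists of endpoint pairs (the representation and the function name are assumptions, not from the slides):

```python
def merge_paths(M1, M2):
    """Glue two path systems given as lists of (a, b) endpoint pairs
    over the bag. Returns the endpoint pairs of the merged paths, or
    None if some vertex would get degree > 2 or the union closes a
    cycle (only allowed at the very end, for the full Hamiltonian cycle)."""
    adj = {}
    for a, b in list(M1) + list(M2):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    if any(len(ns) > 2 for ns in adj.values()):
        return None                      # degree 3 is impossible in a cycle
    ends = [v for v, ns in adj.items() if len(ns) == 1]
    pairs, seen = [], set()
    for start in ends:
        if start in seen:
            continue                     # already reached as the far end
        seen.add(start)
        prev, cur = None, start
        while True:
            ns = list(adj[cur])
            if prev is not None:
                ns.remove(prev)          # consume the edge we arrived on
            if not ns:
                pairs.append((start, cur))
                break
            prev, cur = cur, ns[0]
            seen.add(cur)
    if len(seen) < len(adj):
        return None                      # leftover degree-2 vertices form a cycle
    return pairs
```

For example, gluing the path with endpoints (1, 2) to the path with endpoints (2, 3) yields one path with endpoints (1, 3), while gluing (1, 2) with (1, 2) closes a cycle and is rejected.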
43. Monadic Second Order Logic
Extended Monadic Second Order Logic (EMSO)
A logical language on graphs consisting of the following:
Logical connectives ∧, ∨, →, ¬, =, ≠
quantifiers ∀, ∃ over vertex/edge variables
predicate adj(u, v): vertices u and v are adjacent
predicate inc(e, v): edge e is incident to vertex v
quantifiers ∀, ∃ over vertex/edge set variables
∈, ⊆ for vertex/edge sets
Example: The formula ∃C ⊆ V ∀v ∈ C ∃u1, u2 ∈ C (u1 ≠ u2 ∧ adj(u1, v) ∧ adj(u2, v)) is true if graph G(V, E) has a cycle.
Treewidth – Dynamic Programming – p.18/24
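The formula's meaning can be checked by brute force on small graphs. Note that C must be taken nonempty (the empty set satisfies the body vacuously). A sketch, assuming adjacency is given as a dict of neighbor sets; the function name is an assumption:

```python
from itertools import combinations

def cycle_formula_holds(adj):
    """Evaluate the EMSO example: does some nonempty C ⊆ V exist such
    that every v ∈ C has two distinct neighbors u1, u2 ∈ C?  Such a C
    must contain a cycle (keep following distinct neighbors inside C
    until a vertex repeats)."""
    V = sorted(adj)
    for r in range(1, len(V) + 1):
        for C in combinations(V, r):
            Cset = set(C)
            if all(len(adj[v] & Cset) >= 2 for v in Cset):
                return True
    return False
```

On a triangle this finds C = V; on a path no such C exists, matching the absence of a cycle.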
45. Courcelle’s Theorem
Courcelle’s Theorem: If a graph property can be expressed in EMSO, then for every fixed w ≥ 1, there is a linear-time algorithm for testing this property on graphs having treewidth at most w.
Note: The constant depending on w can be very large (doubly or triply exponential, etc.), so a direct dynamic programming algorithm can be more efficient.
If we can express a property in EMSO, then we immediately get that testing this property is FPT parameterized by the treewidth w of the input graph.
Can we express 3-COLORING and HAMILTONIAN CYCLE in EMSO?
Treewidth – Dynamic Programming – p.19/24
47. Using Courcelle’s Theorem
3-COLORING
∃C1, C2, C3 ⊆ V ∀v ∈ V (v ∈ C1 ∨ v ∈ C2 ∨ v ∈ C3) ∧ ∀u, v ∈ V (adj(u, v) → (¬(u ∈ C1 ∧ v ∈ C1) ∧ ¬(u ∈ C2 ∧ v ∈ C2) ∧ ¬(u ∈ C3 ∧ v ∈ C3)))
HAMILTONIAN CYCLE
∃H ⊆ E spanning(H) ∧ (∀v ∈ V degree2(H, v))
degree0(H, v) := ¬∃e ∈ H inc(e, v)
degree1(H, v) := ¬degree0(H, v) ∧ ¬∃e1, e2 ∈ H (e1 ≠ e2 ∧ inc(e1, v) ∧ inc(e2, v))
degree2(H, v) := ¬degree0(H, v) ∧ ¬degree1(H, v) ∧ ¬∃e1, e2, e3 ∈ H (e1 ≠ e2 ∧ e2 ≠ e3 ∧ e1 ≠ e3 ∧ inc(e1, v) ∧ inc(e2, v) ∧ inc(e3, v))
spanning(H) := ∀u, v ∈ V ∃P ⊆ H ∀x ∈ V (((x = u ∨ x = v) ∧ degree1(P, x)) ∨ (x ≠ u ∧ x ≠ v ∧ (degree0(P, x) ∨ degree2(P, x))))
Treewidth – Dynamic Programming – p.20/24
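The 3-COLORING formula's truth can be checked directly on small graphs: choosing C1, C2, C3 amounts to assigning each vertex one of three classes (a vertex lying in several Ci may keep any one of them). A brute-force sketch, with the representation and function name as assumptions:

```python
from itertools import product

def three_colorable(V, E):
    """True iff V can be covered by classes C1, C2, C3 with no edge
    inside a class -- i.e. the graph is 3-colorable."""
    V = list(V)
    for assignment in product(range(3), repeat=len(V)):
        colour = dict(zip(V, assignment))
        if all(colour[u] != colour[v] for u, v in E):
            return True
    return False
```

A triangle is 3-colorable, while K4 is not (four mutually adjacent vertices need four colors).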
49. Using Courcelle’s Theorem
Two ways of using Courcelle’s Theorem:
1. The problem can be described by a single formula (e.g., 3-COLORING, HAMILTONIAN CYCLE).
⇒ Problem can be solved in time f(w) · n for graphs of treewidth at most w.
⇒ Problem is FPT parameterized by the treewidth w of the input graph.
2. The problem can be described by a formula for each value of the parameter k.
Example: For each k, having a cycle of length exactly k can be expressed as ∃v1, ..., vk ∈ V (adj(v1, v2) ∧ adj(v2, v3) ∧ · · · ∧ adj(vk−1, vk) ∧ adj(vk, v1)).
⇒ Problem can be solved in time f(k, w) · n for graphs of treewidth w.
⇒ Problem is FPT parameterized by the combined parameter k and treewidth w.
Treewidth – Dynamic Programming – p.21/24
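For a fixed k this formula can be evaluated by brute force; note the slide elides the pairwise-distinctness conjuncts vi ≠ vj, which the sketch below enforces by iterating over permutations. This naive evaluation runs in roughly n^k time; the point of Courcelle's Theorem is to get f(k, w) · n instead on bounded-treewidth graphs. Names and representation are assumptions:

```python
from itertools import permutations

def has_cycle_of_length(adj, k):
    """True iff the graph has a cycle on exactly k distinct vertices
    (assumes k >= 3; adj is a dict of neighbor sets)."""
    for tour in permutations(sorted(adj), k):
        # check consecutive adjacency around the tour, wrapping at the end
        if all(tour[(i + 1) % k] in adj[tour[i]] for i in range(k)):
            return True
    return False
```

The 4-cycle C4 has a cycle of length 4 but no triangle.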
52. SUBGRAPH ISOMORPHISM
SUBGRAPH ISOMORPHISM: given graphs H and G, find a copy of H in G as a subgraph.
Parameter k := |V(H)| (size of the small graph).
For each H, we can construct a formula φH that expresses “G has a subgraph isomorphic to H” (similarly to the k-cycle on the previous slide).
⇒ By Courcelle’s Theorem, SUBGRAPH ISOMORPHISM can be solved in time f(H, w) · n if G has treewidth at most w.
⇒ Since there is only a finite number of simple graphs on k vertices, SUBGRAPH ISOMORPHISM can be solved in time f(k, w) · n if H has k vertices and G has treewidth at most w.
⇒ SUBGRAPH ISOMORPHISM is FPT parameterized by the combined parameter k := |V(H)| and the treewidth w of G.
Treewidth – Dynamic Programming – p.22/24
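Such a φH is a conjunction of adj-atoms, one per edge of H, over |V(H)| pairwise-distinct vertex variables. Its naive evaluation is the brute-force algorithm below, which Courcelle's Theorem improves to f(k, w) · n on bounded-treewidth G. A sketch under assumed representations (H as vertex/edge lists, G as a dict of neighbor sets):

```python
from itertools import permutations

def subgraph_isomorphic(H_vertices, H_edges, G_adj):
    """True iff G contains a (not necessarily induced) copy of H:
    an injective map f with f(u)f(v) an edge of G for every edge uv of H."""
    for image in permutations(sorted(G_adj), len(H_vertices)):
        f = dict(zip(H_vertices, image))
        if all(f[v] in G_adj[f[u]] for u, v in H_edges):
            return True
    return False
```

For instance, K4 contains a triangle as a subgraph, while a 3-vertex path does not.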
55. The Robber and Cops game
Game: k cops try to capture a robber in the graph.
In each step, the cops can move from vertex to vertex arbitrarily with helicopters.
The robber moves infinitely fast on the edges, and sees where the cops will land.
Fact: k + 1 cops can win the game ⇐⇒ the treewidth of the graph is at most k.
The winner of the game can be determined in time n^O(k) using standard techniques (there are at most n^k positions for the cops).
⇓
For every fixed k, it can be checked in polynomial time whether the treewidth is at most k.
Exercise 1: Show that the treewidth of the k × k grid is at least k − 1.
Exercise 2: Show that the treewidth of the k × k grid is at least k.
Treewidth – Dynamic Programming – p.23/24
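The n^O(k) computation of the winner can be sketched as a least-fixed-point search over positions (C, R), where C is the cop set and R the robber's component of G − C. This is an illustration for tiny graphs under the slides' rules (the robber sees where the cops will land and may run through any vertex not occupied during the whole move, i.e. within G − (C ∩ C2)); the function names and the adjacency-dict representation are assumptions, and this is in no way an efficient treewidth algorithm.

```python
from itertools import combinations

def components(adj, forbidden):
    """Connected components after removing the `forbidden` vertices."""
    left = set(adj) - set(forbidden)
    comps = []
    while left:
        stack = [left.pop()]
        comp = set(stack)
        while stack:
            v = stack.pop()
            for u in adj[v] - comp - set(forbidden):
                comp.add(u)
                left.discard(u)
                stack.append(u)
        comps.append(frozenset(comp))
    return comps

def cops_win(adj, k):
    """Can k cops always capture the robber?  Least fixed point over
    positions (C, R): cops on C, robber free in component R of G - C.
    A position is winning if some new cop set C2 leaves the robber no
    escape component that is itself non-winning."""
    V = sorted(adj)
    cop_sets = [frozenset(c) for r in range(k + 1)
                for c in combinations(V, r)]
    win, changed = set(), True
    while changed:
        changed = False
        for C in cop_sets:
            for R in components(adj, C):
                if (C, R) in win:
                    continue
                for C2 in cop_sets:
                    # region the robber can roam while the cops relocate
                    regions = components(adj, C & C2)
                    reach = next(g for g in regions if R <= g)
                    escapes = [R2 for R2 in components(adj, C2) if R2 <= reach]
                    if all((C2, R2) in win for R2 in escapes):
                        win.add((C, R))   # robber caught or cornered
                        changed = True
                        break
    return all((frozenset(), R) in win for R in components(adj, set()))
```

On the path 0–1–2 (treewidth 1), 2 cops win but 1 cop does not, matching the fact k + 1 cops win ⇐⇒ treewidth ≤ k; likewise the triangle (treewidth 2) needs 3 cops.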
56. The Robber and Cops game (cont.)
Example: 2 cops have a winning strategy in a tree.
Treewidth – Dynamic Programming – p.24/24