This document surveys approaches for dealing with NP-complete problems, including intelligent exhaustive search techniques such as backtracking and branch-and-bound, as well as approximation algorithms. It shows how backtracking can prune portions of the search space when solving Boolean satisfiability problems. The key decisions in backtracking are which subproblem to expand next and which variable to branch on; the backtracking test then classifies each subproblem as a failure, a success, or still uncertain.
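The pruning idea described above can be made concrete with a minimal backtracking SAT sketch (illustrative only, not the document's own code; the clause encoding is an assumption):

```python
# A formula is a list of clauses; a clause is a list of ints, where
# -3 means "NOT x3". An empty clause signals failure; an empty formula, success.

def simplify(clauses, lit):
    """Assign lit = True: drop satisfied clauses, remove the opposite literal."""
    out = []
    for c in clauses:
        if lit in c:
            continue              # clause satisfied -> pruned from the search
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None           # empty clause: this branch fails
        out.append(reduced)
    return out

def solve(clauses, assignment=()):
    if clauses is None:
        return None               # failure propagated from simplify
    if not clauses:
        return assignment         # success: every clause satisfied
    var = abs(clauses[0][0])      # branch on a variable from the first clause
    for lit in (var, -var):       # try x = True, then x = False
        result = solve(simplify(clauses, lit), assignment + (lit,))
        if result is not None:
            return result
    return None                   # both branches failed: backtrack

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve([[1, 2], [-1, 3], [-2, -3]]))   # -> (1, 3, -2)
```

Note how `simplify` performs the backtracking test: a satisfied clause is discarded (success on that clause), an emptied clause aborts the branch (failure), and anything else stays uncertain.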
Combining Models - In these slides, we look at ways to combine the answers from various weak classifiers to build a robust classifier. The slides cover the following subjects:
1.- Model Combination vs. Bayesian Model Averaging
2.- Bootstrap Data Sets
And, as the cherry on top, AdaBoost.
Here is a review of the combination of machine learning models, from Bayesian averaging and committees to boosting. In particular, a statistical analysis of boosting is given.
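Since AdaBoost is the centerpiece, here is a tiny self-contained sketch of the algorithm on one-dimensional data with threshold stumps (illustrative only; the data and helper names are made up, not taken from the slides):

```python
import math

def stump_predict(x, thresh, sign):
    return sign if x > thresh else -sign

def train_adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n                      # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        # pick the stump minimizing the weighted error
        best = None
        for thresh in xs:
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thresh, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thresh, sign)
        err, thresh, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # this weak learner's vote
        ensemble.append((alpha, thresh, sign))
        # re-weight: boost the misclassified points
        w = [wi * math.exp(-alpha * y * stump_predict(x, thresh, sign))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, s) for a, t, s in ensemble)
    return 1 if score > 0 else -1

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])   # matches ys
```

The exponential re-weighting step is exactly what the statistical analysis of boosting studies: it is a stagewise minimization of an exponential loss.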
A gentle introduction to 2 classification techniques, as presented by Kriti Puniyani to the NYC Predictive Analytics group (April 14, 2011). To download the file please go here: http://www.meetup.com/NYC-Predictive-Analytics/files/
Universal Approximation Theorem
Here, we prove that the multilayer perceptron can approximate any continuous function on the hypercube [0,1]^n, following Cybenko's proof. I tried to include the basics of topology and mathematical analysis to make the slides more understandable; however, they still need some work. In addition, I am a little rusty in my mathematical analysis, so I am still not fully convinced by the linear functional I defined for the proof. Back to Rudin and Apostol! So expect changes in the future.
Machine learning in science and industry — day 1 (arogozhnikov)
A course on machine learning in science and industry.
- notions and applications
- nearest neighbours: search and machine learning algorithms
- ROC curve
- optimal classification and regression
- density estimation
- Gaussian mixtures and the EM algorithm
- clustering, with an example from the OPERA experiment
Machine learning in science and industry — day 2 (arogozhnikov)
- decision trees
- random forest
- boosting: AdaBoost
- reweighting with boosting
- gradient boosting
- learning to rank with gradient boosting
- multiclass classification
- trigger in LHCb
- boosting to uniformity and flatness loss
- particle identification
Introduction to machine learning terminology.
Applications within High Energy Physics and outside HEP.
* Basic problems: classification and regression.
* Nearest neighbours approach and spatial indices
* Overfitting (intro)
* Curse of dimensionality
* ROC curve, ROC AUC
* Bayes optimal classifier
* Density estimation: KDE and histograms
* Parametric density estimation
* Mixtures for density estimation and EM algorithm
* Generative approach vs discriminative approach
* Linear decision rule, intro to logistic regression
* Linear regression
Here are my slides for my preparation class for prospective students of the Master in Electrical Engineering and Computer Science (Specialization in Computer Science), for the entrance examination here at Cinvestav GDL.
The introduction to my class on machine learning. The subjects covered in this class range over:
1.- Linear Classifiers
2.- Non-Linear Classifiers
3.- Graphical Models
4.- Clustering
5.- Etc.
I am planning to upload the rest once I feel they are at the right level.
Here are my slides on some basic algorithms in Computational Geometry:
1.- Line Intersection
2.- Sweeping Line
3.- Convex Hull
They are the classic ones, but there is still a lot to study for anybody wanting to get into computer graphics. I recommend:
Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. 2008. Computational Geometry: Algorithms and Applications (3rd ed.). TELOS, Santa Clara, CA, USA.
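As a taste of these classics, here is a short sketch of the cross-product orientation test and Andrew's monotone-chain convex hull (a standard textbook construction, not the slides' own code):

```python
def cross(o, a, b):
    """> 0 if o->a->b turns counter-clockwise, < 0 if clockwise, 0 if collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()           # drop points that make a non-left turn
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]   # endpoints are shared, drop duplicates

square = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(convex_hull(square))   # the interior point (1, 1) is discarded
```

The same orientation test is the workhorse behind segment intersection and sweeping-line algorithms as well.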
In most of the algorithms analyzed until now, we have been studying problems solvable in polynomial time. The class P consists of problems that, on inputs of size n, can be solved in a worst-case running time of O(n^k) for some constant k. NP (nondeterministic polynomial time), in contrast, is the class of problems whose solutions can be verified in polynomial time; for the NP-complete problems among them, no O(n^k) algorithm is known for any constant k.
Here is the basic introduction to probability used in my Analysis of Algorithms course at Cinvestav Guadalajara. The slides go from the basic axioms to expected value and variance.
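A quick numeric companion to that axioms-to-variance material: the expected value and variance of a fair six-sided die, computed straight from the definitions (a worked example, not taken from the slides):

```python
from fractions import Fraction

outcomes = range(1, 7)
p = Fraction(1, 6)                          # uniform probability

E = sum(p * x for x in outcomes)            # E[X]   = sum of x * P(X = x)
E2 = sum(p * x * x for x in outcomes)       # E[X^2]
Var = E2 - E * E                            # Var[X] = E[X^2] - E[X]^2

print(E, Var)   # 7/2 35/12
```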
Here, we look at the problem of going from a source s to possibly multiple destinations. Each of the lemmas, theorems, and corollaries used to prove the properties of
1. Bellman-Ford
2. Dijkstra
is examined in detail.
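For reference, the first of those algorithms fits in a few lines. This is a generic Bellman-Ford sketch on a hypothetical edge-list graph, not the slides' own code:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths; edges are (u, v, weight) triples."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                  # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w       # relax edge (u, v)
    for u, v, w in edges:                   # one extra pass detects neg. cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from source")
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(4, edges, 0))   # [0, 3, 1, 4]
```

The correctness lemmas the slides examine are precisely about why |V| - 1 rounds of relaxation suffice.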
Supervised Hidden Markov Chains.
Here, we use Rabiner's paper as the base for the presentation. Thus, we have the following three problems:
1.- How to efficiently compute the probability of an observation given a model.
2.- Given an observation, how to decide to which class it belongs.
3.- How to find the model parameters given training data.
The first two follow Rabiner's explanation, but for the third I used Lagrange multiplier optimization, because Rabiner lacks a clear explanation of how to solve the issue.
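The first of Rabiner's problems is solved by the forward algorithm, sketched here with toy numbers (the model values A, B, pi below are made up, not the slides' example):

```python
def forward(A, B, pi, obs):
    """P(obs | model) for an HMM with transition A, emission B, initial pi."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]   # initialization
    for t in range(1, len(obs)):
        alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]                    # induction step
    return sum(alpha)                                  # termination

A = [[0.7, 0.3], [0.4, 0.6]]      # state transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]      # emission probabilities
pi = [0.5, 0.5]                   # initial state distribution
print(forward(A, B, pi, [0, 1, 0]))
```

Its cost is O(n^2 T) rather than the O(n^T) of naive enumeration, which is the whole point of problem 1.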
Comparative study of different algorithms (ijfcstjournal)
This paper briefly describes the Genetic Algorithm (GA), the Simulated Annealing (SA) algorithm, the Backtracking (BT) algorithm, and the Brute Force (BF) search algorithm; explains how the proposed GA, the proposed SA using GA, the classical BT algorithm, and the classical BF search algorithm can be employed to find the best solution of the N-Queens problem; and compares the four algorithms. It is entirely a review-based work; the four algorithms were written and implemented. From the results, the proposed GA performed better, and provided better fitness values (solutions), than the proposed SA using GA, the BT algorithm, and the BF search algorithm, for different values of N. It was also noticed that the proposed GA took more time to produce a result than the proposed SA using GA.
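Of the four approaches compared, backtracking is the easiest to show in full. This is a generic N-Queens backtracking sketch, not the paper's implementation:

```python
def solve_n_queens(n):
    solutions = []
    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # prune: skip any column or diagonal already attacked
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(cols)
                cols.pop()        # undo and try the next column
    place([])
    return solutions

print(len(solve_n_queens(8)))    # 92 solutions on the classic 8x8 board
```

Brute force would enumerate all n^n placements; the pruning test above is what makes backtracking competitive for moderate N.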
Research internship on optimal stochastic theory with financial application u... (Asma Ben Slimene)
This is a presentation of my second-year internship on optimal stochastic control theory, how it can be applied to some financial applications, and how such problems can be solved using finite-difference methods!
Enjoy it!
Presentation on stochastic control problem with financial applications (Merto... (Asma Ben Slimene)
This is an introduction to optimal stochastic control theory with two applications in finance: the Merton portfolio problem and an investment/consumption problem, with numerical results obtained using a finite-difference approach.
John Maxwell, Data Scientist, Nordstrom at MLconf Seattle 2017 (MLconf)
John Maxwell, a data scientist at Nordstrom, did his graduate work in international development economics, focusing on field experiments. He has since led research projects in Indonesia and Ethiopia related to microenterprise, developed large mathematical simulation models used for investment decisions by WSDOT, built dynamic pricing algorithms at Thriftbooks.com, and led the development of Nordstrom’s open source a/b testing service: Elwin. He currently focuses on contextual multi-armed bandit problems and machine learning infrastructure at Nordstrom.
Abstract summary
Solving the Contextual Multi-Armed Bandit Problem at Nordstrom:
The contextual multi-armed bandit problem, also known as associative reinforcement learning or bandits with side information, is a useful formulation of the multi-armed bandit problem that takes into account information about arms and users when deciding which arm to pull. The barrier to entry for both understanding and implementing contextual multi-armed bandits in production is high. The literature in this field pulls from disparate sources including (but not limited to) classical statistics, reinforcement learning, and information theory. Because of this, finding material that fills the gap between very basic explanations and academic journal articles is challenging. The goal of this talk is to provide those intermediate materials, as well as an example implementation. Specifically, I will explain key findings from some of the most cited papers in the contextual bandit literature, discuss the minimum requirements for implementation, and give an overview of a production system for solving contextual multi-armed bandit problems.
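The talk's setting in miniature: an epsilon-greedy contextual bandit that keeps a running mean reward per (context, arm) pair. This is the simplest baseline in the literature, sketched here with invented contexts and rewards; it is emphatically not the production system described above:

```python
import random

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms, self.epsilon = arms, epsilon
        self.counts, self.means = {}, {}      # keyed by (context, arm)

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.arms)   # explore
        # exploit: best estimated arm for this context (unseen arms score 0)
        return max(self.arms,
                   key=lambda a: self.means.get((context, a), 0.0))

    def update(self, context, arm, reward):
        key = (context, arm)
        n = self.counts.get(key, 0) + 1
        mean = self.means.get(key, 0.0)
        self.counts[key] = n
        self.means[key] = mean + (reward - mean) / n   # incremental average

random.seed(0)
bandit = EpsilonGreedyBandit(arms=["a", "b"], epsilon=0.1)
for _ in range(2000):
    ctx = random.choice(["mobile", "desktop"])
    arm = bandit.choose(ctx)
    # simulated environment: arm "a" pays off on mobile, "b" on desktop
    best = "a" if ctx == "mobile" else "b"
    bandit.update(ctx, arm, 1.0 if arm == best else 0.0)

print(bandit.choose("mobile"), bandit.choose("desktop"))
```

The papers the talk surveys (e.g. LinUCB-style approaches) replace the per-context lookup table with a learned model of reward given context features, which is what makes the problem "contextual" at scale.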
As with many things in the history of the analysis of algorithms, the all-pairs shortest path problem has a long history (from the point of view of Computer Science). The initial results appear in the book “Studies in the Economics of Transportation” by Beckmann, McGuire, and Winsten (1956), where the matrix-multiplication-like notation that we still use was first introduced.
In these slides, we go through several solutions to the all-pairs shortest path problem, from a slow version to Johnson's algorithm.
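The best-known member of that family of dynamic programs can be sketched as follows: a Floyd-Warshall sketch for all-pairs shortest paths (a generic illustration, not the slides' code):

```python
def floyd_warshall(dist):
    """dist[i][j]: edge weight, 0 on the diagonal, inf if no edge. In place."""
    n = len(dist)
    for k in range(n):                # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
d = [[0, 3, INF], [INF, 0, 1], [2, INF, 0]]
print(floyd_warshall(d))   # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```

The triple loop runs in O(V^3), already far better than the "slow" O(V^4) matrix-multiplication-style recurrence the slides start from.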
This case study of the divide-and-conquer approach contains information about merge sort and quicksort, the closest pair of points, binary search, la russe multiplication, min-max problems, and also Strassen multiplication.
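Divide and conquer in its textbook form, for one of the algorithms listed above; a merge sort sketch (illustrative, not the case study's own code):

```python
def merge_sort(xs):
    if len(xs) <= 1:
        return xs                     # base case: already sorted
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])       # divide: sort each half recursively
    right = merge_sort(xs[mid:])
    merged, i, j = [], 0, 0           # conquer: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 7]
```

The recurrence T(n) = 2T(n/2) + O(n) gives the O(n log n) bound, the same split-recurse-combine pattern used by closest pair and Strassen multiplication.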
Simulation-based Optimization of a Real-world Travelling Salesman Problem Usi... (CSCJournals)
This paper presents a real-world case study of optimizing waste collection in Sweden. The problem, involving approximately 17,000 garbage bins served by three bin lorries, is approached as a travelling salesman problem and solved using simulation-based optimization and an evolutionary algorithm. To improve the performance of the evolutionary algorithm, it is enhanced with a repair function that adjusts its genome values so that shorter routes are found more quickly. The algorithm is tested using two crossover operators, i.e., the order crossover and the heuristic crossover, combined with different mutation rates. The results indicate that the order crossover is superior to the heuristic crossover, but that the driving force of the search process is the mutation operator combined with the repair function.
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation, or graph must be found from a countable set.
A problem with continuous variables is known as a continuous optimization problem, in which an optimal value of a continuous function must be found. These can include constrained problems and multimodal problems.
Senior data scientist and founder of the company Intelligentia Data I+D SA de CV. We offer consultancy services and develop projects and products in Machine Learning, Big Data, Data Science, and Artificial Intelligence.
My first set of slides (for the NN and DL class I am preparing for the fall). I included the vanishing gradient problem and the need for ReLU (mentioning, by the way, the saturation problem inherited from Hebbian learning).
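The vanishing-gradient point can be shown in a few lines: the sigmoid's derivative is at most 0.25, so backpropagated gradients shrink geometrically with depth, while ReLU's derivative is 1 on the active side (a numeric illustration, not the slides' material):

```python
import math

def sigmoid_grad(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)          # peaks at exactly 0.25 when x = 0

depth = 10
# best case for sigmoid: every unit sits at its steepest point
print(sigmoid_grad(0.0) ** depth)   # ~9.5e-07: the signal all but disappears
relu_grad = 1.0                     # ReLU passes the gradient through unchanged
print(relu_grad ** depth)           # 1.0
```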
It has been almost 62 years since the term Artificial Intelligence was coined by McCarthy, Minsky, Rochester, and Shannon for the 1956 Dartmouth workshop (“Dartmouth Summer Research Project on Artificial Intelligence”), where this new area of Computer Science was founded. However, the history of Artificial Intelligence goes back millennia, to when the Greeks in their myths spoke of the golden robots of Hephaestus and Pygmalion's Galatea. These were the first automatons known at the dawn of history, and although these first attempts were only myths, automatons were invented and built by many civilizations through history. Nevertheless, these automatons, representing animals and humans, resembled their final objectives only in a quite limited way. In spite of that, the greatest illusion of an automaton, the Turk of Wolfgang von Kempelen, inspired many people through its exhibitions, among them Alexander Graham Bell and Charles Babbage, to develop inventions that would change human history forever. Hence the importance of the concept of “Artificial Intelligence” as a driver of our technological dreams. And although Artificial Intelligence has never been defined in a precise, practical way, the amount of research and the number of methods developed to tackle some of its basic tasks are enormous. Hence the importance of an introduction to the concepts of Artificial Intelligence, so that the dream can continue.
A review of one of the most popular methods of clustering, part of what is known as unsupervised learning: K-Means. Here, we go from the basic heuristic used to attack this NP-hard problem to an approximation algorithm, K-Centers. Additionally, we look at variations coming from fuzzy-set ideas. In the future, we will add more about on-line algorithms along the lines of stochastic gradient ideas.
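The basic heuristic mentioned above (Lloyd's algorithm) fits in a few lines: assign each point to its nearest center, then move each center to its cluster's mean. This is a 1-D toy sketch with made-up data, not the slides' code:

```python
def lloyd_step(points, centers):
    clusters = [[] for _ in centers]
    for p in points:                           # assignment step
        nearest = min(range(len(centers)), key=lambda k: (p - centers[k]) ** 2)
        clusters[nearest].append(p)
    return [sum(c) / len(c) if c else centers[k]   # update step
            for k, c in enumerate(clusters)]

def kmeans(points, centers, iters=20):
    for _ in range(iters):
        centers = lloyd_step(points, centers)
    return centers

points = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
print(kmeans(points, centers=[0.0, 5.0]))      # converges to [2.0, 11.0]
```

Each step can only decrease the within-cluster sum of squares, which is why the heuristic converges; it just offers no guarantee of finding the global optimum of the NP-hard objective.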
Cosmetic shop management system project report (Kamal Acharya)
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is tough to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. The system includes various function programs to perform the tasks mentioned above.
Data file handling has been used effectively in the program.
The automated cosmetic shop management system deals with the automation of the general workflow and administration process of the shop. The main processes of the system focus on customer requests, where the system is able to search for the most appropriate products and deliver them to the customers. It helps the employees to quickly identify the cosmetic products that have reached their minimum quantity, keeps track of the expiry date of each cosmetic product, and helps the employees find the rack number in which a product is placed. It is also a faster and more efficient way of working.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible with the IDM8000 CCR. Backplane-mounted serial and TCP/Ethernet communication module for CCR remote access. IDM8000 CCR remote control over serial and TCP protocols.
• Remote control: parallel or serial interface.
• Compatible with the MAFI CCR system.
• Compatible with the IDM8000 CCR.
• Compatible with backplane-mounted serial communication.
• Compatible with commercial and defence aviation CCR systems.
• Remote control system for accessing the CCR and allied systems over serial or TCP.
• Indigenized local support/presence in India.
• Easy configuration using DIP switches.
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS... (ssuser7dcef0)
Power plants release a large amount of water vapor into the atmosphere through the stack. The flue gas can be a potential source for obtaining much-needed cooling water for a power plant. If a power plant could recover and reuse a portion of this moisture, it could reduce its total cooling water intake requirement. One of the most practical ways to recover water from flue gas is to use a condensing heat exchanger. The power plant could also recover latent heat due to condensation, as well as sensible heat due to lowering the flue gas exit temperature. Additionally, harmful acids released from the stack can be reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated phenomenon, since heat and mass transfer of water vapor and various acids occur simultaneously in the presence of noncondensable gases such as nitrogen and oxygen. The design of a condenser depends on knowledge and understanding of the heat and mass transfer processes. A computer program for numerical simulations of water (H2O) and sulfuric acid (H2SO4) condensation in a flue gas condensing heat exchanger was developed using MATLAB. Governing equations based on mass and energy balances for the system were derived to predict variables such as flue gas exit temperature, cooling water outlet temperature, mole fractions, and condensation rates of water and sulfuric acid vapors. The equations were solved using an iterative solution technique with calculations of heat and mass transfer coefficients and physical properties.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS (veerababupersonal22)
This covers CW radar, FMCW radar, range measurement, the IF amplifier, and the FMCW altimeter. The CW radar operates using continuous-wave transmission, while the FMCW radar employs frequency-modulated continuous-wave technology. Range measurement is a crucial aspect of radar systems, providing information about the distance to a target. The IF amplifier plays a key role in signal processing, amplifying intermediate-frequency signals for further analysis. The FMCW altimeter utilizes frequency-modulated continuous-wave technology to accurately measure altitude above a reference point.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA in new pavement can minimize the carbon footprint, conserve natural resources, reduce harmful emissions, and lower life-cycle costs. Compared to natural aggregate (NA), RCA pavement has received fewer comprehensive studies and sustainability assessments.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL) (MdTanvirMahtab2)
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL), a government-owned company of Bangladesh Chemical Industries Corporation under the Ministry of Industries.
Hybrid optimization of pumped hydro system and solar - Engr. Abdul-Azeez.pdf (fxintegritypublishin)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
28 Dealing with NP Problems: Exponential Search and Approximation Algorithms
1. Analysis of Algorithms
Dealing with NP Problems:
Intelligent Exponential Search and Approximation Algorithms
Andres Mendez-Vazquez
December 2, 2015
1 / 78
2. Outline
1 Introduction
The Dilemma
Branches Attacking the Problem
2 Intelligent Exhaustive Search
Backtracking
Example
Backtracking Algorithm
Branch-and-Bound
Algorithm Branch-and-Bound
Example
3 Approximation Algorithms
Introduction
Examples
Vertex Cover Problem
The Traveling-Salesman Problem
2 / 78
4. The Dilemma
We face the following
NP problems need solutions in real-life!!!
We only know exponential algorithms for them!!!
What do we do?
4 / 78
7. Accuracy
Accuracy issues
NP problems are often optimization problems.
It’s hard to find the EXACT answer.
Maybe we just want to know if our answer is close to the exact answer.
5 / 78
10. Near optimal solutions
Ways of dealing with NP problems
First, if the actual input is small, exponential running time may be perfectly satisfactory.
Second, it may be possible to isolate special cases solvable in polynomial time.
Third, it may be possible to use some form of intelligent search to avoid the worst case most of the time.
Fourth, it may still be possible to find near-optimal solutions in polynomial time (either in the worst case or on average).
6 / 78
15. Three main branches attacking the NP Problems
Intelligent Exponential Search
Procedures that are exponential time in the worst case, but that with the
right design can be very efficient on typical instances.
Approximation algorithms
Algorithms guaranteed to find a "near-optimal" solution, within a
certain bound, in polynomial time.
Heuristics
Useful algorithmic procedures that provide approximate solutions in
polynomial time.
They are a trade-off between optimality, completeness, accuracy,
precision and running time.
8 / 78
19. We have something like this
As sets
[Figure: NP problems drawn as nested sets — exact solutions require
intelligent exponential searches (worst-case exponential), while
approximation algorithms and heuristics run in polynomial time.]
9 / 78
21. Backtracking
Something Notable
Backtracking is based on the observation that it is often possible to reject
a solution by looking at just a small portion of it.
Example
If an instance of SAT contains the clause Ci = (x1 ∨ x2), then all
assignments with x1 = x2 = 0 can be instantly eliminated.
11 / 78
24. Example
Pruning Example
Given the possible values that you can assign to two literals:
x1 x2
 1  1
 1  0
 0  1
 0  0
It is possible to prune a quarter of the entire search space... Can
this be systematically exploited?
13 / 78
25. An example of exploiting this idea in SAT solvers
Consider the following Boolean formula φ (w, x, y, z)
(w ∨ x ∨ y ∨ z) ∧ (w ∨ ¬x) ∧ (x ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ ¬w) ∧ (¬w ∨ ¬z)
We start by branching on one variable; we can choose w
Note: This choice does not by itself violate any of the clauses of
φ (w, x, y, z)
14 / 78
29. In addition
What if w = 0, x = 0?
Then, the following literals become true
1 ¬w = 1
2 ¬x = 1
Thus, we have the following left
1 Before
1 (w ∨ x ∨ y ∨ z)∧(w ∨ ¬x)∧(x ∨ ¬y)∧(y ∨ ¬z)∧(z ∨ ¬w)∧(¬w ∨ ¬z)
2 After
1 (0 ∨ 0 ∨ y ∨ z) ∧ (0 ∨ 1) ∧ (0 ∨ ¬y) ∧ (y ∨ ¬z) ∧ (z ∨ 1) ∧ (1 ∨ ¬z)
17 / 78
31. Finally
We have the following reduced set of clauses
(y ∨ z) , (1) , (¬y) , (y ∨ ¬z) , (1) , (1) ⇔ (y ∨ z) , (¬y) , (y ∨ ¬z)
What if w = 0, x = 1?
1 Before
1 (w ∨ x ∨ y ∨ z)∧(w ∨ ¬x)∧(x ∨ ¬y)∧(y ∨ ¬z)∧(z ∨ ¬w)∧(¬w ∨ ¬z)
2 After
1 (1) ∧ (0) ∧ (1) ∧ (y ∨ ¬z) ∧ (1) ∧ (1)
18 / 78
33. Thus
We have something unsatisfiable
(1) ∧ (0) ∧ (1) ∧ (y ∨ ¬z) ∧ (1) ∧ (1) ⇔ () , (y ∨ ¬z)
Clearly
We prune that part of the search tree.
Note: we use "() ≡ (0)" to denote an "empty clause", which rules out
satisfiability.
19 / 78
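The clause-reduction steps above can be sketched in code. This is a minimal illustration, not the slides' own code, with an assumed clause encoding: clauses are lists of integers, +i for variable i and −i for its negation.

```python
# A minimal sketch (not from the slides) of simplifying a CNF formula under a
# partial assignment, as in the w = 0, x = 0 and w = 0, x = 1 steps above.
# Clauses are lists of integers: +i means variable i, -i means its negation.

def simplify(clauses, assignment):
    """Return the reduced clause list, or None if an empty clause appears."""
    reduced = []
    for clause in clauses:
        new_clause = []
        satisfied = False
        for lit in clause:
            var = abs(lit)
            if var in assignment:
                value = assignment[var] if lit > 0 else not assignment[var]
                if value:          # literal is true: the whole clause holds
                    satisfied = True
                    break
                # literal is false: drop it from the clause
            else:
                new_clause.append(lit)
        if satisfied:
            continue
        if not new_clause:         # empty clause "()": this branch fails
            return None
        reduced.append(new_clause)
    return reduced

# phi = (w∨x∨y∨z)(w∨¬x)(x∨¬y)(y∨¬z)(z∨¬w)(¬w∨¬z), with w,x,y,z = 1,2,3,4
phi = [[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]
print(simplify(phi, {1: False, 2: False}))  # [[3, 4], [-3], [3, -4]]
print(simplify(phi, {1: False, 2: True}))   # None: an empty clause appears
```

The two calls reproduce the slides' reductions: w = 0, x = 0 leaves (y ∨ z), (¬y), (y ∨ ¬z), while w = 0, x = 1 produces an empty clause and the branch is pruned.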
35. The decisions we need to make in backtracking
First
Which subproblem to expand next.
Second
Which branching variable to use.
Remark
The benefit of backtracking lies in its ability to eliminate portions of the
search space.
20 / 78
38. Choosing
Something Notable
A classic strategy:
You choose the subproblem that contains the smallest clause.
Then, you branch on a variable in that clause.
Then
If the clause is a singleton then at least one of the resulting branches will
be terminated.
21 / 78
42. The Backtracking Test
The test needs to look at the subproblem to declare quickly if
1 Failure: the subproblem has no solution.
2 Success: a solution to the subproblem is found.
3 Uncertainty.
What about SAT?
The test declares failure if there is an empty clause.
The test declares success if there are no clauses.
Otherwise, it declares uncertainty.
22 / 78
50. Pseudo-code for Backtracking
We have
BACKTRACKING(P0)
1 Start with some problem P0
2 Let S = {P0}, the set of active subproblems
3 While S ≠ ∅
4 choose a subproblem P ∈ S and remove it from S
5 expand it into smaller subproblems P1, P2, ..., Pk
6 For each Pi
7 if test(Pi) succeeds: halt and return the branch solution
8 if test(Pi) fails: discard Pi
9 Otherwise: add Pi to S
10 return there is no solution
25 / 78
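The pseudo-code can be instantiated for SAT in a few lines. This is a hedged sketch, not the slides' code: the clause encoding (integers, +i / −i, as assumed earlier) and the recursive formulation are illustrative choices.

```python
# A minimal Python sketch of BACKTRACKING specialized to SAT. The test()
# step corresponds to evaluating clauses under the partial assignment:
# failure on an empty clause, success when no clauses remain.

def solve_sat(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None."""
    assignment = dict(assignment or {})
    reduced = []
    for clause in clauses:
        # keep only literals whose variable is still unassigned
        new_clause = [l for l in clause if abs(l) not in assignment]
        if any((assignment[abs(l)] if l > 0 else not assignment[abs(l)])
               for l in clause if abs(l) in assignment):
            continue                    # clause already satisfied
        if not new_clause:
            return None                 # empty clause: failure, prune branch
        reduced.append(new_clause)
    if not reduced:
        return assignment               # no clauses left: success
    # branch on a variable from the smallest clause (the strategy above)
    var = abs(min(reduced, key=len)[0])
    for value in (True, False):
        result = solve_sat(reduced, {**assignment, var: value})
        if result is not None:
            return result
    return None

# phi from the running example: (w∨x∨y∨z)(w∨¬x)(x∨¬y)(y∨¬z)(z∨¬w)(¬w∨¬z)
phi = [[1, 2, 3, 4], [1, -2], [2, -3], [3, -4], [4, -1], [-1, -4]]
print(solve_sat(phi))  # None: every branch ends in an empty clause
```

On the running example φ the solver returns None: every branch of the search tree terminates in an empty clause, so the formula is unsatisfiable.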
55. Another Exponential Search
Branch-and-Bound
This is a search better suited to optimization problems.
How do we reject subproblems?
First, imagine a minimization problem.
To reject a subproblem, its cost must exceed the best cost found so
far.
PROBLEM!!! This is unknown!!!
Thus, we use a quickly computable lower bound on this cost.
27 / 78
60. Lower Bound
We use the info in the problem
To design an efficiently computable lower bound on the cost of completing
a partial solution.
28 / 78
62. Pseudo-code for Branch-and-Bound
We have
BRANCH-AND-BOUND(P0)
1 Start with some problem P0
2 Let S = {P0}, the set of active subproblems
3 bestsofar = ∞
4 While S ≠ ∅
5 choose a subproblem (partial solution) P ∈ S and remove it from S
6 expand it into smaller subproblems P1, P2, ..., Pk
7 For each Pi
8 if Pi is a complete solution:
9 update bestsofar
10 else
11 if lowerbound(Pi) < bestsofar: add Pi to S
12 return bestsofar
30 / 78
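A generic sketch of this pseudo-code, under an assumed interface: the caller supplies problem-specific `expand`, `is_complete`, `cost` and `lowerbound` callbacks, and a heap keyed on the lower bound realizes the "choose a subproblem" step by expanding the most promising partial solution first.

```python
import heapq

# A direct sketch of BRANCH-AND-BOUND (minimization). The callbacks are
# assumptions of this illustration, not names from the slides.

def branch_and_bound(p0, expand, is_complete, cost, lowerbound):
    best_so_far = float("inf")
    best_solution = None
    active = [(lowerbound(p0), 0, p0)]       # S, the set of active subproblems
    counter = 1                              # unique tie-breaker for the heap
    while active:
        bound, _, p = heapq.heappop(active)
        if bound >= best_so_far:             # bound is now stale: prune
            continue
        for child in expand(p):
            if is_complete(child):
                if cost(child) < best_so_far:    # update bestsofar
                    best_so_far = cost(child)
                    best_solution = child
            elif lowerbound(child) < best_so_far:
                heapq.heappush(active, (lowerbound(child), counter, child))
                counter += 1
    return best_so_far, best_solution

# Toy usage: pick one number per list to minimize the sum; the lower bound
# is the sum so far plus the minimum of each remaining list.
lists = [[3, 1], [4, 2], [5, 9]]
expand = lambda p: [p + (x,) for x in lists[len(p)]]
complete = lambda p: len(p) == len(lists)
lb = lambda p: sum(p) + sum(min(l) for l in lists[len(p):])
print(branch_and_bound((), expand, complete, sum, lb))  # (8, (1, 2, 5))
```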
68. Example: The Traveling Salesman Problem
A partial solution to the TSP
It is a simple path a ⇝ b passing through some vertices S ⊆ V, with
a, b ∈ S.
Denote this as [a, S, b]
The subproblem
It is necessary to find the best completion of the tour, i.e. the cheapest
complementary path b ⇝ a with intermediate vertices in V − S.
32 / 78
70. Thus, Given a ⇝ b
We extend a partial solution [a, S, b]
By a single edge (b, x) with x ∈ V − S
Leading to |V − S| subproblems
Of the form [a, S ∪ {x} , x]
How can we lower-bound the cost of completing a partial tour
[a, S, b]?
We use a simple strategy: the lower bound is the sum of the following
quantities:
1 The lightest edge from a to V − S.
2 The lightest edge from b to V − S.
3 The cost of the minimum spanning tree of V − S.
33 / 78
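The three quantities can be computed directly. The sketch below is illustrative, not the slides' code: the dict-of-dicts weight matrix `w` and the use of Prim's algorithm for the MST are assumptions of this example.

```python
# Lower bound for completing a partial tour [a, S, b]: lightest edge from a
# into V−S, plus lightest edge from b into V−S, plus an MST of V−S (Prim).

def mst_weight(vertices, w):
    """Weight of a minimum spanning tree of `vertices` under weights w[u][v]."""
    vertices = set(vertices)
    if len(vertices) <= 1:
        return 0
    in_tree = {next(iter(vertices))}
    total = 0
    while in_tree != vertices:
        # cheapest edge crossing from the tree to the rest (Prim's step)
        u, v = min(((u, v) for u in in_tree for v in vertices - in_tree),
                   key=lambda e: w[e[0]][e[1]])
        total += w[u][v]
        in_tree.add(v)
    return total

def tsp_lower_bound(a, S, b, V, w):
    rest = set(V) - set(S)
    if not rest:                           # only the closing edge b -> a remains
        return w[b][a]
    return (min(w[a][x] for x in rest)     # lightest edge from a into V − S
            + min(w[b][x] for x in rest)   # lightest edge from b into V − S
            + mst_weight(rest, w))         # MST of the remaining vertices

# Toy complete graph on {A, B, C, D}; partial tour [A, {A, B}, B]
w = {"A": {"B": 1, "C": 2, "D": 1}, "B": {"A": 1, "C": 1, "D": 2},
     "C": {"A": 2, "B": 1, "D": 1}, "D": {"A": 1, "B": 2, "C": 1}}
print(tsp_lower_bound("A", {"A", "B"}, "B", set(w), w))  # 3
```

In this toy instance the bound of 3 is in fact achieved by the completion B → C → D → A, which costs 1 + 1 + 1.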
77. Example
A graph on eight vertices (A–H) where a brute-force approach would
examine 7! = 5,040 tours.
[Figure: weighted graph on vertices A–H with edge weights between 1 and 5.]
34 / 78
79. Example
Branch and Bound Solution
[Figure: branch-and-bound search tree rooted at A, with lower bounds 10,
11, 14 and 15 on its branches; the costlier branches are pruned.]
36 / 78
83. Definition of approximation algorithms
Framework
Suppose that we are working on an optimization problem in which each
potential solution has a positive cost.
We wish to find a near-optimal solution.
Thus
A near-maximum cost for a maximization problem.
A near-minimum cost for a minimization problem.
39 / 78
88. Then
Definition
We say that an algorithm for a problem has an approximation ratio of
ρ (n) if:
For any input of size n, the cost C of the solution produced by the
algorithm is within a factor of ρ (n) of the cost C∗ of an optimal
solution:
max (C/C∗, C∗/C) ≤ ρ (n)
40 / 78
91. Definition of approximation algorithms
Definition
If an algorithm achieves an approximation ratio of ρ (n), we call it a
ρ (n)-approximation algorithm.
Further
The definitions of the approximation ratio and of a ρ (n)-approximation
algorithm apply to both minimization and maximization problems.
41 / 78
93. The two possibilities
For a maximization problem
We have that 0 < C ≤ C∗; then the ratio C∗/C gives the factor by which
the cost of an optimal solution is larger than the cost of the approximate
solution.
For a minimization problem
We have that 0 < C∗ ≤ C; then the ratio C/C∗ gives the factor by which
the cost of the approximate solution is larger than the cost of an optimal
solution.
Properties
The approximation ratio of an approximation algorithm is never less than 1.
42 / 78
96. Some characteristics of approximation algorithms
Approximation algorithms have the following characteristics
They run in polynomial time.
They are not guaranteed to obtain optimal solutions.
However, they guarantee a good solution within some factor of the
optimum.
Further
They often use algorithms for related problems as subroutines.
43 / 78
101. We will look at...
Examples
Vertex Cover problem.
The Traveling Salesman Problem.
This barely scratches the surface
Of the theory of approximation algorithms.
44 / 78
106. Vertex Cover Problem
Definition of a vertex cover
A vertex cover of an undirected graph G = (V, E) is a subset V′ ⊆ V
such that if (u, v) is an edge of G, then either u ∈ V′ or v ∈ V′ (or
both).
Thus
The size of a vertex cover is the number of vertices in it.
47 / 78
111. Approximation vertex cover algorithm
Now, we have
APPROX-VERTEX-COVER(G)
1 C = ∅
2 E′ = G.E
3 while E′ ≠ ∅
4 let (u, v) be an arbitrary edge of E′
5 C = C ∪ {u, v}
6 remove from E′ every edge incident on either u or v
7 return C
Complexity: O(V + E)
51 / 78
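A direct sketch of this procedure in Python, assuming (for this illustration) that the graph is given simply as a list of edges:

```python
# APPROX-VERTEX-COVER: repeatedly pick an arbitrary remaining edge, add both
# endpoints to the cover, and delete every edge touching either endpoint.

def approx_vertex_cover(edges):
    cover = set()
    remaining = set(edges)
    while remaining:
        u, v = next(iter(remaining))          # an arbitrary edge (u, v)
        cover |= {u, v}                       # add both endpoints to C
        remaining = {e for e in remaining     # remove every edge incident
                     if u not in e and v not in e}   # on either u or v
    return cover

# A path a-b-c-d: an optimal cover is {b, c} (size 2); the algorithm returns
# a cover of size 2 or 4 depending on the edge picked, within the factor of 2.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
cover = approx_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
print(len(cover) <= 2 * 2)  # True
```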
113. Theorem
Theorem 35.1
APPROX-VERTEX-COVER is a polynomial-time 2-approximation
algorithm.
Proof
We know that the algorithm returns a vertex cover when it
finishes, and it runs in polynomial time.
Now, we only need to prove that APPROX-VERTEX-COVER
returns a vertex cover of at most twice the size of an optimal
cover.
53 / 78
116. Proof
First
Let C be the set of vertices that is returned by the approximation
algorithm, and let A be the set of edges picked in line 4.
Now, given an optimal cover C∗:
C∗ must include at least one endpoint of each edge in A.
No two edges in A share an endpoint, thus no two edges of A are
covered by the same vertex from C∗:
|C∗| ≥ |A|
54 / 78
121. Proof
Second
Now, each iteration of line 4 picks an edge for which neither endpoint is
already in C; each such edge adds exactly two new vertices to C, thus:
|C| = 2|A| ≤ 2|C∗|
55 / 78
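The greedy procedure the proof analyzes can be sketched as follows (the edge-list representation and the function name are illustrative assumptions, not taken from the slides):

```python
def approx_vertex_cover(edges):
    """2-approximation: repeatedly pick an arbitrary remaining edge
    (u, v), add both endpoints to the cover, then discard every edge
    incident to u or v."""
    remaining = set(edges)
    C = set()   # the cover being built
    A = []      # the edges actually picked (a maximal matching)
    while remaining:
        u, v = next(iter(remaining))
        A.append((u, v))
        C.update([u, v])
        remaining = {(x, y) for (x, y) in remaining
                     if x not in (u, v) and y not in (u, v)}
    return C, A

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
C, A = approx_vertex_cover(edges)
# Every picked edge contributes exactly two new vertices, so |C| = 2|A|.
assert len(C) == 2 * len(A)
assert all(u in C or v in C for (u, v) in edges)
```

Note how the code makes the proof's invariant visible: C grows only by whole edges of A, which is why |C| = 2|A| holds at the end.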
123. Remarks
Did you notice the trick?
We do not know the size of C∗, but we obtain a lower bound for it
Actually
The set A is a maximal matching in the graph G.
57 / 78
125. What is a Maximal Matching?
Definition
Given a graph G = (V , E), a matching M in G is a set of pairwise
non-adjacent edges; that is, no two edges share a common vertex.
58 / 78
126. Maximal Matching
Definition
A Maximal Matching is a matching M of a graph G with the property
that if any edge not in M is added to M, it is no longer a matching.
59 / 78
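The two definitions can be checked mechanically; here is a small sketch (the helper names and edge-list format are assumptions for illustration):

```python
def is_matching(M):
    """No two edges in M share an endpoint."""
    seen = set()
    for u, v in M:
        if u in seen or v in seen:
            return False
        seen.update([u, v])
    return True

def is_maximal_matching(M, edges):
    """M is a matching, and adding any edge not in M breaks it."""
    if not is_matching(M):
        return False
    return all(not is_matching(M + [e]) for e in edges if e not in M)

edges = [("A", 1), ("A", 2), ("B", 2), ("B", 3)]
assert is_matching([("A", 1)])                     # a matching...
assert not is_maximal_matching([("A", 1)], edges)  # ...but ("B", 2) could still be added
assert is_maximal_matching([("A", 1), ("B", 2)], edges)
```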
127. Example
Here, we can see an example:
[Figure: a graph on vertices A–F and 1–6, shown twice: on the left a
matching, on the right a maximal matching]
60 / 78
128. Outline
1 Introduction
The Dilemma
Branches Attacking the Problem
2 Intelligent Exhaustive Search
Backtracking
Example
Backtracking Algorithm
Branch-and-Bound
Algorithm Branch-and-Bound
Example
3 Approximation Algorithms
Introduction
Examples
Vertex Cover Problem
The Traveling-Salesman Problem
61 / 78
129. The Traveling-Salesman Problem
Given a complete undirected graph G = (V , E)
With a non-negative integer cost function c(u, v) associated with each
edge (u, v) ∈ E
We need to find a Hamiltonian cycle of G with minimum cost.
62 / 78
131. Extra notation
As an extension of our notation
Let c(A) denote the total cost of the edges in a subset A ⊆ E
c(A) = Σ_{(u,v)∈A} c(u, v)
63 / 78
132. The extra assumption: The triangle inequality
Assumption
We will assume that the cost function c satisfies the triangle inequality:
for all vertices u, v, w ∈ V,
c(u, w) ≤ c(u, v) + c(v, w)
64 / 78
133. The 2-approximation algorithm
Pseudocode
APPROX-TSP-TOUR(G, c, r)
1 select a vertex r ∈ G.V to be a “root” vertex.
2 compute a minimum spanning tree T for G from root r using the
MST-PRIM(G, c, r).
3 Let H be a list of vertices, ordered according to when they are first
visited in a preorder tree walk of T.
4 return the Hamiltonian cycle H
65 / 78
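The pseudocode above can be sketched on a distance matrix; the representation, the simple O(n²) Prim step standing in for MST-PRIM, and the function name are assumptions for illustration:

```python
def approx_tsp_tour(dist, r=0):
    """APPROX-TSP-TOUR sketch: MST from root r, then a preorder walk."""
    n = len(dist)
    # Line 2: minimum spanning tree T via a simple O(n^2) Prim.
    in_tree = {r}
    parent = {}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n)
                    if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        parent[v] = u
        in_tree.add(v)
    children = {u: [] for u in range(n)}
    for v, u in parent.items():
        children[u].append(v)
    # Line 3: list vertices in preorder of T.
    H, stack = [], [r]
    while stack:
        u = stack.pop()
        H.append(u)
        stack.extend(reversed(children[u]))
    return H  # visiting H in order and returning to r is the tour

# A small metric instance (satisfies the triangle inequality).
dist = [[0, 1, 2, 2],
        [1, 0, 2, 3],
        [2, 2, 0, 1],
        [2, 3, 1, 0]]
tour = approx_tsp_tour(dist)
assert sorted(tour) == [0, 1, 2, 3]  # each vertex appears exactly once
```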
138. Theorem that supports the claim
Theorem 35.2
APPROX-TSP-TOUR is a polynomial-time 2-approximation algorithm for
the traveling salesman problem with the triangle inequality.
67 / 78
139. Proof
First
Let H∗ denote an optimal tour for the given set of vertices; H∗ is a cycle.
If we erase an edge from H∗, it becomes a spanning tree.
Let T be the minimum spanning tree computed at line 2; since T has
minimum cost among all spanning trees:
c(T) ≤ c(H∗)
68 / 78
144. Full walk
A full walk W of T lists the vertices in the order they are visited,
recording a vertex each time the walk passes through it (on first
discovery and on each return from a subtree)
69 / 78
145. Full walk
A full walk W = a, b, c, b, h, b, a, d, e, f , e, g, e, d, a
70 / 78
146. Then, we have
Since the full walk traverses every edge of T exactly twice
c(W) = 2c(T)
However
W is in general not a tour, since it visits some vertices more than
once... What can we do?
71 / 78
148. Use triangle inequality
Something Notable
By the triangle inequality, however, we can delete a visit to any vertex
from W .
IMPORTANT
Using the triangle inequality the cost does not increase.
Deletion of vertices
Delete a vertex v from W between visits to u and w.
The resulting ordering specifies going directly from u to w.
72 / 78
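The deletion step above amounts to keeping only the first visit to each vertex of the full walk; a minimal sketch (the helper name is an assumption):

```python
def shortcut(walk):
    """Keep only the first visit to each vertex of the walk.
    Under the triangle inequality, each skipped detour u -> v -> w is
    replaced by the direct edge u -> w at no greater cost."""
    seen, tour = set(), []
    for v in walk:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    return tour

# The full walk from the earlier slide: a,b,c,b,h,b,a,d,e,f,e,g,e,d,a
W = list("abcbhbadefegeda")
assert shortcut(W) == list("abchdefg")
```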
154. Notice the following
We have
The previous visiting order is equal to a preorder walk on the previous
tree T.
Now
Let H be the cycle obtained by this preorder walk.
It is a Hamiltonian cycle, since every vertex is visited exactly once.
Thus
c(H) ≤ c(W) ≤ 2c(T) ≤ 2c(H∗)
75 / 78
159. However
Given the General Traveling-Salesman Problem
If we drop the assumption that the cost function c satisfies the triangle
inequality.
Theorem 35.3
If P ≠ NP, then for any constant ρ ≥ 1, there is no polynomial-time
approximation algorithm with approximation ratio ρ for the general
traveling-salesman problem.
77 / 78