The document discusses image segmentation using minimum cut (st-mincut) algorithms. It describes how to formulate image segmentation as an energy minimization problem and construct a graph such that the minimum cut of the graph corresponds to the minimum of the energy function. Maximum flow algorithms, such as Ford-Fulkerson and Dinic's algorithm, can then be used to find the minimum cut and optimal segmentation. Reparameterization of the energy function does not change the minimum cut.
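As a toy illustration of the construction this summary refers to (unary costs become terminal edges, a Potts pairwise term becomes edges between pixel nodes), the following sketch checks by enumeration that every s-t cut cost equals the corresponding labeling energy; the energies and weights are made-up numbers, not from the source.

```python
from itertools import product

# Tiny binary segmentation energy over 3 "pixels":
#   E(x) = sum_p theta[p][x_p] + sum_{(p,q)} w * [x_p != x_q]
theta = [(1.0, 4.0), (3.0, 2.0), (2.0, 2.5)]   # theta[p] = (cost of label 0, cost of label 1)
pairs = [(0, 1, 1.5), (1, 2, 0.5)]             # (p, q, Potts weight w)

def energy(x):
    e = sum(theta[p][x[p]] for p in range(len(x)))
    e += sum(w for p, q, w in pairs if x[p] != x[q])
    return e

# Graph construction: edge s->p with capacity theta[p][1], edge p->t with
# capacity theta[p][0], and edges p<->q with capacity w.  Label 0 = source
# side, label 1 = sink side; the cut then pays exactly the energy terms.
def cut_cost(x):
    c = 0.0
    for p in range(len(x)):
        c += theta[p][1] if x[p] == 1 else theta[p][0]  # cut terminal edge
    for p, q, w in pairs:
        if x[p] != x[q]:
            c += w                                      # cut neighbour edge
    return c

labelings = list(product([0, 1], repeat=3))
assert all(abs(energy(x) - cut_cost(x)) < 1e-12 for x in labelings)
best = min(labelings, key=energy)   # the min cut selects this labeling
```

Because cut cost and energy agree on every labeling, the minimum cut of the constructed graph yields the minimum-energy segmentation, which a max-flow algorithm can then find in polynomial time.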
The document describes the T2-ETA Congestion Pricing Model. The model takes as inputs a road network topology, travel time functions for each link, travel demand between origin-destination pairs, and value of time distributions. The model outputs a user equilibrium traffic assignment and tolls that implement that assignment. The objective is to minimize total perceived travel costs. The model defines decision variables, state variables, constraints, and optimality conditions for determining a system optimal traffic assignment and tolls.
A Passenger Knock-On Delay Model for Timetable Optimisation (beamer), Peter Sels

The document presents a model for optimizing passenger travel time in train timetabling by accounting for knock-on delays. It develops a stochastic goal function to minimize expected passenger transfer time considering primary delays and knock-on effects. Graph-based approaches are used to derive knock-on time and linearize it for optimization. Results show the optimized schedule reduces expected passenger time by 2.44% compared to the original planned schedule.
This document provides an introduction to information channels. It defines an information channel as having an input alphabet, output alphabet, and conditional probabilities relating input and output symbols. It discusses how to represent channels in matrix form and calculates various probabilities. It also covers zero-memory channels, extensions of channels to multiple inputs/outputs, entropy, mutual information, and uses the binary symmetric channel as an example.
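The binary symmetric channel example mentioned above can be checked numerically: with a uniform input and crossover probability eps, the mutual information should equal 1 - H(eps), where H is the binary entropy. A minimal sketch (the value eps = 0.1 is an illustrative choice):

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) computed from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p      # input marginal
        py[y] = py.get(y, 0.0) + p      # output marginal
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Binary symmetric channel: crossover probability eps, uniform input.
eps = 0.1
joint = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
         (1, 0): 0.5 * eps,       (1, 1): 0.5 * (1 - eps)}

h2 = -eps * log2(eps) - (1 - eps) * log2(1 - eps)   # binary entropy H(eps)
i_xy = mutual_information(joint)                     # should equal 1 - H(eps)
```

For eps = 0.1 this gives I(X;Y) of about 0.531 bits per symbol, matching the closed-form capacity of the BSC at its capacity-achieving (uniform) input.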
The document discusses network flows and algorithms for finding maximum flows in networks. It begins by defining a flow network as a directed graph with a source, sink, and edge capacities. The maximum flow problem is to find the maximum amount of flow that can be sent from the source to the sink respecting capacity constraints. The Ford-Fulkerson algorithm uses augmenting paths to iteratively increase the flow value. It runs in O(mC) time where m is edges and C is total capacity. The maximum flow value equals the minimum cut capacity, proven using residual graphs. Later sections discuss improvements like capacity scaling and preflow-push algorithms. Bipartite matching is also shown to reduce to a maximum flow problem.
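A minimal sketch of the augmenting-path idea described above, using BFS to pick shortest augmenting paths (the Edmonds-Karp refinement of Ford-Fulkerson); the example graph and its capacities are made up for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    """Ford-Fulkerson with BFS-shortest augmenting paths (Edmonds-Karp).
    cap is a dict-of-dicts of residual capacities, modified in place."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                      # no augmenting path: flow is maximum
        # bottleneck capacity along the path
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        # push flow, updating forward and reverse residual capacities
        for u, v in path:
            cap[u][v] -= aug
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += aug
        flow += aug

# Small example: the cut {s} has capacity 4 + 2 = 6, and indeed max flow = 6.
graph = {'s': {'a': 4, 'b': 2}, 'a': {'b': 1, 't': 3}, 'b': {'t': 5}}
value = max_flow(graph, 's', 't')
```

The returned value equals the capacity of the minimum cut, consistent with the max-flow min-cut theorem the summary cites.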
This document describes a damped oscillation graph of a spring-mass system experiencing dry frictional force. The system performs damped harmonic oscillation as the mass oscillates back and forth along the horizontal surface. The document provides the equation of motion, solution, and a Maple code to plot the decreasing amplitude oscillation graph over multiple periods as a function of time.
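The summary's Maple plot shows the characteristic feature of dry (Coulomb) friction: unlike viscous damping, the amplitude decays linearly, dropping by 2F/k every half period until the mass stops inside the friction dead zone |x| <= F/k. A small sketch of that turning-point sequence (a0 and F/k are illustrative values, not from the source):

```python
def turning_amplitudes(a0, delta):
    """Successive turning-point amplitudes of a spring-mass oscillator with
    dry friction.  delta = F/k is the half-width of the dead zone: each half
    period the amplitude shrinks by 2*delta, and the motion stops once the
    spring force at a turning point cannot overcome static friction."""
    amps = [a0]
    a = a0
    while a > delta + 1e-12:    # static friction cannot yet hold the mass
        a -= 2 * delta
        amps.append(a)
    return amps

amps = turning_amplitudes(a0=1.0, delta=0.05)   # e.g. F/k = 0.05 m
```

The constant decrement between successive turning points is what makes the envelope of the plotted oscillation a straight line rather than an exponential.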
This document contains a sample GATE paper with questions from various subjects like mathematics, physics, chemistry and general aptitude. The questions include multiple choice, numerical answer type and explanation type questions. Some questions test concepts like differential equations, complex numbers, Laplace transforms, electric circuits etc. The document also contains information about an online portal for GATE preparation that has trained over 1 lakh students across India.
Distributed solution of stochastic optimal control problem on GPUs, Pantelis Sopasakis
Stochastic optimal control problems arise in many applications and are, in principle, large-scale, involving up to millions of decision variables. Their applicability in control applications is often limited by the availability of algorithms that can solve them efficiently and within the sampling time of the controlled system. In this paper we propose a dual accelerated proximal gradient algorithm which is amenable to parallelization and demonstrate that its GPU implementation affords high speed-up values (with respect to a CPU implementation) and greatly outperforms well-established commercial optimizers such as Gurobi.
We present a novel modeling methodology to derive a nonlinear dynamical model which adequately describes the effect of fuel sloshing on the attitude dynamics of a spacecraft. We model the impulsive thrusters using mixed logic and dynamics, leading to a hybrid formulation. We design a hybrid model predictive control scheme for the attitude control of a launcher during its long coasting period, aiming at minimising the actuation count of the thrusters.
This document describes a hybrid model predictive control approach for attitude control of spacecraft using impulsive thrusters. The approach models the spacecraft dynamics and minimum impulse effects of the thrusters. It formulates the control problem as minimizing a cost function over future inputs while satisfying the hybrid dynamics and constraints, such as a limit on total thruster actuations. Simulations show the hybrid MPC achieves higher pointing accuracy, lower thruster usage, and better disturbance rejection compared to traditional PD and LQR controllers.
This document discusses processing large datasets with Python and Hadoop. It begins with an example of finding the highest temperature from a climate dataset using a map-reduce approach. Next, it provides code examples for implementing map-reduce in pure Python, with Hadoop Streaming, and with the Dumbo library. The document then discusses using Amazon Elastic MapReduce for running Hadoop jobs on AWS. It poses a question about how to implement breadth-first search as a map-reduce algorithm and ends with an example of using MongoDB's map-reduce functionality.
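The "highest temperature" example the summary opens with can be sketched as a pure-Python map-reduce pipeline: a mapper emits (year, temperature) pairs, a shuffle step sorts and groups them by key, and a reducer takes the maximum per key. The records here are made-up sample data.

```python
from itertools import groupby
from operator import itemgetter

# Climate records as "year temperature" lines (illustrative sample data)
records = ["1950 0", "1950 22", "1950 -11", "1949 111", "1949 78"]

def mapper(line):
    year, temp = line.split()
    return (year, int(temp))

def reducer(year, temps):
    return (year, max(temps))

# map -> shuffle (sort + group by key) -> reduce
mapped = sorted(map(mapper, records), key=itemgetter(0))
result = dict(reducer(year, [t for _, t in group])
              for year, group in groupby(mapped, key=itemgetter(0)))
```

The same mapper and reducer, reading from stdin and writing tab-separated pairs to stdout, is essentially what Hadoop Streaming runs at scale.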
A very wide spectrum of optimization problems can be efficiently solved with proximal gradient methods, which hinge on the celebrated forward-backward splitting (FBS) scheme. But such first-order methods are only effective when low or medium accuracy is required, and are known to be rather slow or even impractical for badly conditioned problems. Moreover, the straightforward introduction of second-order (Hessian) information is beset with shortcomings, as, typically, at every iteration we need to solve a non-separable optimisation problem. In this talk we will follow a different route to the solution of such optimisation problems. We will recast non-smooth optimisation problems as the minimisation of a real-valued, continuously differentiable function known as the forward-backward envelope, and then employ a semismooth Newton method to solve the equivalent optimisation problem instead of the original one. We will then apply the proposed semismooth Newton method to L1-regularised least squares (LASSO) problems, motivated by an interesting application: recursive compressed sensing.
Compressed sensing is a signal processing methodology for the reconstruction of sparsely sampled signals. It offers a new paradigm for sampling signals based on their innovation, that is, the minimum number of coefficients sufficient to accurately represent a signal in an appropriately selected basis. Compressed sensing leads to a lower sampling rate compared to theories using some fixed basis, and has many applications in image processing, medical imaging and MRI, photography, holography, facial recognition, radio astronomy, radar technology and more. The traditional compressed sensing approach is naturally offline, in that it amounts to sparsely sampling and reconstructing a given dataset. Recently, an online algorithm for performing compressed sensing on streaming data was proposed; the scheme uses recursive sampling of the input stream and recursive decompression to accurately estimate stream entries from the acquired noisy measurements. We will see how we can tailor the forward-backward Newton method to solve recursive compressed sensing problems at one tenth of the time required by other algorithms such as ISTA, FISTA, ADMM and interior-point methods (L1LS).
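For context, the baseline the talk compares against, ISTA, is just the forward-backward splitting iteration applied to LASSO: a gradient step on the smooth term followed by soft-thresholding. A minimal pure-Python sketch (not the semismooth Newton method of the talk; the matrix and data are illustrative):

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def ista(A, b, lam, step, iters=500):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)   # prox step
    return x

# With A = I the LASSO minimizer is the soft-threshold of b: here (2.0, 0.0).
A = [[1.0, 0.0], [0.0, 1.0]]
x = ista(A, [3.0, 0.5], lam=1.0, step=1.0)
```

The second coordinate is driven exactly to zero, which is the sparsity-promoting behaviour that makes this formulation useful for compressed sensing recovery.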
This document contains a 30 question multiple choice test on electronics topics. The questions cover areas like signals and systems, communication systems, analog and digital electronics, and CMOS circuits. Some sample questions include determining the output signal frequency of a cascade of T flip flops, simplifying a Boolean function expressed as a sum of minterms, and calculating the load current in an N output current mirror circuit. The test is part of the recruitment process for scientists and engineers at the Indian Space Research Organisation.
Fast parallelizable scenario-based stochastic optimization, Pantelis Sopasakis
Fast parallelizable scenario-based stochastic optimization: a forward-backward LBFGS method for stochastic optimal control problems with global convergence rate guarantees. (Talk at EUCCO 2016, Leuven, Belgium).
Performing Manual and Automated Iterations in Engineering Equation Solver (EES) - Examples from Heat Transfer.
All the EES codes shown in the examples are available at: https://goo.gl/KExGFi
Computer graphics lab report with code in cpp, Alamgir Hossain
This is the lab report for the computer graphics course, with code in C++. The course is intended for computer science and engineering students.
Problem list:
1. Program for the generation of Bresenham Line Drawing.
2. Program for the generation of Digital Differential Analyzer (DDA) Line Drawing.
3. Program for the generation of Midpoint Circle Drawing.
4. Program for the generation of Midpoint Ellipse Drawing.
5. Program for the generation of Translating an object.
6. Program for the generation of Rotating an Object.
7. Program for the generation of scaling an object.
All programs are coded in C++.
This document discusses using digital twins and machine learning to achieve adaptation in uncertain systems. It provides an overview of enterprise system project failures and introduces digital twins as a new approach for design and control. It then describes a conceptual model and approach for developing digital twins for uncertain systems using goals, domain modeling, and agent-based modeling. Finally, it discusses research challenges in using this approach, including validation, verification, modeling expertise, efficiency, explainability and unknown unknowns.
This document describes various network flow models used in linear programming, including transportation, assignment, maximal flow, shortest path, and minimal spanning tree models. It provides characteristics of network models such as nodes and arcs. It also describes the transportation model in particular, including an example problem to minimize shipping costs from origins to destinations. The transportation problem is formulated as a linear program with decision variables, objective function to minimize costs, supply and demand constraints, and non-negativity constraints.
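As a sketch of the transportation model described above: with two origins, two destinations and balanced totals, fixing one shipment quantity determines all the others through the supply and demand constraints, so the tiny LP below can be solved by scanning that one variable (all numbers are made up for illustration).

```python
# Supplies, demands and unit shipping costs (illustrative data)
supply = [10, 20]          # origins O1, O2
demand = [15, 15]          # destinations D1, D2
cost = [[4, 6],            # cost[i][j]: cost to ship one unit from Oi to Dj
        [5, 3]]

best = None
for x11 in range(supply[0] + 1):          # units O1 -> D1
    x12 = supply[0] - x11                 # O1 -> D2 (row sum = supply of O1)
    x21 = demand[0] - x11                 # O2 -> D1 (column sum = demand of D1)
    x22 = supply[1] - x21                 # O2 -> D2
    if min(x12, x21, x22) < 0:
        continue                          # violates non-negativity
    total = (cost[0][0] * x11 + cost[0][1] * x12 +
             cost[1][0] * x21 + cost[1][1] * x22)
    plan = ((x11, x12), (x21, x22))
    if best is None or total < best[0]:
        best = (total, plan)

total_cost, plan = best
```

Real instances have many origins and destinations and are solved with LP or specialised network-simplex codes; the enumeration here only works because a single degree of freedom remains after the constraints.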
The document describes a discrete-time Kalman filter implemented in MATLAB to estimate the position of an underwater vehicle using sensor measurements. It presents the state space modeling equations used in the filter, including modifying the state vector to address non-linearities in the direction measurement. Simulation results using a carefully designed trajectory show the filter provides estimates with errors generally within a few meters for position, a few centimeters for velocity bias, and a few meters for range over 1000 iterations.
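The filter in the document is multi-dimensional; the predict/update cycle it relies on can be seen in a minimal scalar sketch, estimating a constant position from noisy measurements (all numbers here are assumptions for illustration, not from the source).

```python
import random

random.seed(0)
truth = 10.0                      # true (constant) position
R, Q = 1.0, 1e-6                  # measurement and process noise variances

x, P = 0.0, 100.0                 # initial estimate and its variance
for _ in range(200):
    z = truth + random.gauss(0.0, R ** 0.5)   # noisy position measurement
    P = P + Q                                  # predict (state is constant)
    K = P / (P + R)                            # Kalman gain
    x = x + K * (z - x)                        # correct estimate with innovation
    P = (1.0 - K) * P                          # shrink estimate variance
```

After a couple of hundred updates the estimate variance P has fallen to roughly R/n, so the filter's error is far smaller than that of any single measurement, which is the behaviour the simulation results in the document describe.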
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentation for upcoming exams.
We connect Students who have an understanding of course material with Students who need help.
Benefits:-
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer developed notes that break down lecture and study material in a way that they can understand
# Students can earn better grades, save time and study effectively
Our Vision & Mission – Simplifying Students Life
Our Belief – “The great breakthrough in your life comes when you realize it, that you can learn anything you need to learn; to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
Crystal Ball Event Prediction and Log Analysis with Hadoop MapReduce and Spark, Jivan Nepali
This document summarizes a student's Big Data project using MapReduce (Hadoop) and Spark that analyzes log data. It describes implementations of three approaches (pair, stripe, hybrid) to predict event co-occurrence relationships. It also describes using Spark and Scala to analyze web server log files to find top products, categories, and client IPs. Pseudocode and results are shown for each technique.
This document summarizes two algorithms for computing properties of high-dimensional polytopes given access to certain oracle functions:
1. An algorithm for computing the edge-skeleton of a polytope in oracle polynomial-time using an oracle that returns the vertex maximizing a linear function.
2. A randomized algorithm for approximating the volume of a polytope by generating random points within it using a hit-and-run process, and estimating the volume from these points. The algorithm runs in oracle polynomial-time and provides an approximation with high probability.
Experimental results show the volume algorithm can approximate volumes of polytopes up to 100 dimensions within 1% error in under 2 hours, outperforming exact computation methods.
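The core primitive of the randomized volume algorithm is the hit-and-run walk: from the current point, pick a random direction, intersect that line with the polytope to get a chord, and jump to a uniform point on the chord. A 2D sketch of the sampler alone (the multi-phase volume estimation built on top of it is omitted; the polytope is an illustrative choice):

```python
import random

# Polytope {x : A x <= b}: the unit square cut by x + y <= 1.5
A = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1)]
b = [1.0, 0.0, 1.0, 0.0, 1.5]

def hit_and_run(x, steps, rng):
    """Return points of a hit-and-run walk inside {x : A x <= b}."""
    pts = []
    for _ in range(steps):
        # random direction, uniform on the circle
        d = (rng.gauss(0, 1), rng.gauss(0, 1))
        norm = (d[0] ** 2 + d[1] ** 2) ** 0.5
        d = (d[0] / norm, d[1] / norm)
        # chord: the set of t with x + t*d feasible for every half-plane
        lo, hi = float('-inf'), float('inf')
        for a, bi in zip(A, b):
            ad = a[0] * d[0] + a[1] * d[1]     # a . d
            ax = a[0] * x[0] + a[1] * x[1]     # a . x
            if abs(ad) > 1e-12:
                t = (bi - ax) / ad
                if ad > 0:
                    hi = min(hi, t)
                else:
                    lo = max(lo, t)
        t = rng.uniform(lo, hi)                # uniform point on the chord
        x = (x[0] + t * d[0], x[1] + t * d[1])
        pts.append(x)
    return pts

rng = random.Random(1)
samples = hit_and_run((0.5, 0.5), 1000, rng)
```

Every step stays inside the polytope by construction, and in the limit the walk samples it uniformly, which is what the volume estimator needs.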
This document discusses control theory and its application to adaptive optics systems. It begins with definitions of open-loop and closed-loop control systems, then discusses how block diagrams can represent control systems. The Laplace transform is introduced as a tool for analyzing systems in both the time and frequency domains. An integrator is presented as one choice for the transfer function C(s) in closed-loop control systems. Its properties allow high gain at low frequencies, improving disturbance rejection. The document explores how an integrator in the feedback loop can stabilize control and optimize performance for adaptive optics.
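The disturbance-rejection property attributed to the integrator above can be demonstrated on a toy discrete-time loop: a first-order plant with a constant disturbance, closed around an integral controller. All gains and values are illustrative assumptions.

```python
# First-order plant y[k+1] = 0.5*y[k] + 0.5*u[k] + d with a constant
# disturbance d, closed around a discrete-time integrator controller:
#   s[k+1] = s[k] + (r - y[k]),   u[k] = Ki * s[k]
r, d, Ki = 1.0, 0.2, 0.2
y, s = 0.0, 0.0
for _ in range(200):
    u = Ki * s
    # simultaneous update: both right-hand sides use the old y
    y, s = 0.5 * y + 0.5 * u + d, s + (r - y)
```

Because the integrator keeps accumulating any residual error, the loop settles with y equal to the reference despite the constant disturbance; without integral action a proportional controller would leave a steady-state offset.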
Sampled-Data Piecewise Affine Slab Systems: A Time-Delay Approach, Behzad Samadi
This document proposes a stability analysis method for sampled-data piecewise affine (PWA) systems using convex optimization and a time-delay approach. It formulates the stability analysis of sampled-data PWA slab systems as a convex optimization problem. The analysis uses a Lyapunov-Krasovskii functional and proves that all trajectories converge to an invariant set if certain constraints are satisfied. Future work includes formulating controller synthesis for sampled-data PWA slab systems also as a convex optimization problem.
The document discusses maximum flow algorithms. It begins by introducing concepts such as functions on arcs, excess functions, s-t flows, s-t cuts, and the relationship between flows and cuts. Specifically, it notes that the value of a flow will always be less than or equal to the capacity of a cut, with equality holding when the flow saturates the outgoing arcs of the cut and sends no flow along the incoming arcs. The document then outlines an algorithmic approach of finding s-t paths in the residual graph and pushing flow along these paths until no more exist. However, it notes this will not necessarily find the true maximum flow.
This document summarizes the maximum flow problem and the Ford-Fulkerson method for finding the maximum flow in a network. It defines flows, residual networks, augmenting paths, and describes how the shortest-augmenting path algorithm works by finding augmenting paths to iteratively increase the flow. It also discusses the max-flow min-cut theorem relating the maximum flow to minimum cuts in a network.
This document provides an overview of a course on network optimization. It introduces the instructor and textbook. It summarizes the Koenigsberg bridge problem, which helped establish the field of graph theory. It discusses the mathematical definitions and terminology used in networks, such as nodes, arcs, paths, and cycles. It outlines three fundamental network flow problems: the shortest path problem, maximum flow problem, and minimum cost flow problem. It describes where network optimization is applied, such as transportation and manufacturing systems. It introduces the topic of computational complexity and how algorithms are analyzed.
This document discusses PageRank, an algorithm used by Google Search to rank websites in their search results. It describes how PageRank works by modeling the web as a directed graph and calculating an importance score for each page based on the page's inlinks. It discusses how PageRank can be formulated as the principal eigenvector of the stochastic link matrix or as the stationary distribution of a random walk on the web graph. It also covers techniques like random teleportation to address issues like spider traps and dead ends.
Introduction to Neural Networks and Deep Learning from ScratchAhmed BESBES
If you're willing to understand how neural networks work behind the scene and debug the back-propagation algorithm step by step by yourself, this presentation should be a good starting point.
We'll cover elements on:
- the popularity of neural networks and their applications
- the artificial neuron and the analogy with the biological one
- the perceptron
- the architecture of multi-layer perceptrons
- loss functions
- activation functions
- the gradient descent algorithm
At the end, there will be an implementation FROM SCRATCH of a fully functioning neural net.
code: https://github.com/ahmedbesbes/Neural-Network-from-scratch
Extended network and algorithm finding maximal flows IJECEIAES
Graph is a powerful mathematical tool applied in many fields as transportation, communication, informatics, economy, in ordinary graph the weights of edges and vertexes are considered independently where the length of a path is the sum of weights of the edges and the vertexes on this path. However, in many practical problems, weights at a vertex are not the same for all paths passing this vertex, but depend on coming and leaving edges. The paper develops a model of extended network that can be applied to modelling many practical problems more exactly and effectively. The main contribution of this paper is algorithm finding maximal flows on extended networks.
TMPA-2017: The Quest for Average Response TimeIosif Itkin
TMPA-2017: Tools and Methods of Program Analysis
3-4 March, 2017, Hotel Holiday Inn Moscow Vinogradovo, Moscow
The Quest for Average Response Time
Thomas A. Henzinger (President, IST, Austria Institute of Science and Technology)
For video follow the link: https://youtu.be/bCMj2toH1b4
Would like to know more?
Visit our website:
www.tmpaconf.org
www.exactprosystems.com/events/tmpa
Follow us:
https://www.linkedin.com/company/exactpro-systems-llc?trk=biz-companies-cym
https://twitter.com/exactpro
Efficient Volume and Edge-Skeleton Computation for Polytopes Given by OraclesVissarion Fisikopoulos
The document discusses efficient algorithms for computing volume and edge skeletons of polytopes defined implicitly by optimization oracles. It presents an algorithm to compute the edge skeleton of a polytope in oracle calls and arithmetic operations. It also describes using geometric random walks and optimization oracles to approximate polytope volume, which is more efficient than exact computation for high dimensions. Experimental results show the approach computes volume within minutes for polytopes up to dimension 12 with less than 2% error.
LAST VERSION available here:
https://speakerdeck.com/opimedia/an-efficient-and-parallel-abstract-interpreter-in-scala-first-algorithm
Presentation for this master thesis.
https://bitbucket.org/OPiMedia/efficient-parallel-abstract-interpreter-in-scala
This document discusses dynamic programming and algorithms for solving all-pair shortest path problems. It begins by defining dynamic programming as avoiding recalculating solutions by storing results in a table. It then describes Floyd's algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm iterates through nodes, calculating shortest paths that pass through each intermediate node. It takes O(n3) time for a graph with n nodes. Finally, it discusses the multistage graph problem and provides forward and backward algorithms to find the minimum cost path from source to destination in a multistage graph in O(V+E) time, where V and E are the numbers of vertices and edges.
The lecture outline discusses network models and network flow problems. It introduces key concepts like the maximum flow problem and minimum cost flow problem. It provides examples of solving the maximum flow problem using the Ford-Fulkerson method and concepts like residual networks and augmenting paths. The document also provides a sample problem solving the maximum flow problem on a network transporting water.
This document summarizes the CoCoA algorithm for distributed optimization. CoCoA uses a primal-dual framework to solve machine learning problems efficiently when data is distributed across multiple machines. It allows local machines to immediately apply updates to their local dual variables, while averaging the local primal updates over a small number of machines. CoCoA guarantees convergence, requires low communication, and can be implemented in just a few lines of code in systems like Spark. It improves upon mini-batch approaches by handling methods beyond stochastic gradient descent and avoiding issues with stale updates.
Gate 2013 complete solutions of ec electronics and communication engineeringmanish katara
The document is a sample paper for GATE 2013 that contains 25 multiple choice questions related to engineering topics like logic gates, vector fields, impulse response of systems, diodes, IC technology, and more. Each question is followed by a brief explanation of the answer. The questions cover a range of fundamental concepts in areas like signals and systems, electronics, semiconductor devices, and mathematics.
The document is a sample paper for GATE 2013 that contains 25 multiple choice questions related to engineering topics like logic gates, vector fields, impulse response of systems, diodes, IC technology, and more. Each question is followed by a brief explanation of the answer. The questions cover a range of fundamental concepts in areas like signals and systems, electronics, semiconductor devices, and mathematics.
This document presents an outer approximation solution algorithm for solving reliable shortest path problems on transportation networks. The algorithm formulates the problem as a mixed integer conic quadratic program to minimize the mean plus standard deviation of path costs. It then uses an outer approximation approach to decompose the problem and solve it efficiently through alternating steps of solving a master problem and subproblem. Computational results on several test networks show the algorithm converges quickly and outperforms directly solving the large-scale mixed integer conic quadratic program. The approach can also be applied to other reliability metrics and joint inventory location problems.
Relaxation methods for the matrix exponential on large networksDavid Gleich
My talk from the Stanford ICME seminar series on doing network analysis and link prediction using the a fast algorithm for the matrix exponential on graph problems.
Lec10: Medical Image Segmentation as an Energy Minimization ProblemUlaş Bağcı
Enhancement, Noise Reduction, and Signal Processing • MedicalImageRegistration • MedicalImageSegmentation • MedicalImageVisualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images Deep Learning in Radiology Fuzzy Connectivity (FC) – Affinity functions • Absolute FC • Relative FC (and Iterative Relative FC) • Successful example applications of FC in medical imaging • Segmentation of Airway and Airway Walls using RFC based method
Energyfunctional
– Data and Smoothness terms
• GraphCut – Min cut
– Max Flow
• ApplicationsinRadiologyImages
I am Felix T. I am an Electrical Engineering Assignment Expert at eduassignmenthelp.com. I hold a Master’s. in Electrical Engineering, University of Greenwich, UK. I have been helping students with their Assignments for the past 7 years. I solve assignments related to Electrical Engineering.
Visit eduassignmenthelp.com or email info@eduassignmenthelp.com . You can also call on +1 678 648 4277 for any assistance with Electrical Engineering Assignments.
Sampling-Based Planning Algorithms for Multi-Objective MissionsMd Mahbubur Rahman
multiobjective path planning has Increasing demand in military missions, rescue operations, construction job-sites.
There is Lack of robotic path planning algorithm that compromises multiple
objectives. Commonly no solution that optimizes all the objective functions. Here we modify RRT, RRT* sampling based algorithm.
Similar to ECCV2008: MAP Estimation Algorithms in Computer Vision - Part 2 (20)
Mylyn helps address information overload and context loss when multi-tasking. It integrates tasks into the IDE workflow and uses a degree-of-interest model to monitor user interaction and provide a task-focused UI with features like view filtering, element decoration, automatic folding and content assist ranking. This creates a single view of all tasks that are centrally managed within the IDE.
This document provides an overview of OpenCV, an open source computer vision and machine learning software library. It discusses OpenCV's core functionality for representing images as matrices and directly accessing pixel data. It also covers topics like camera calibration, feature point extraction and matching, and estimating camera pose through techniques like structure from motion and planar homography. Hints are provided for Android developers on required permissions and for planar homography estimation using additional constraints rather than OpenCV's general homography function.
This document provides information about the Computer Vision Laboratory 2012 course at the Institute of Visual Computing. The course focuses on computer vision on mobile devices and will involve 180 hours of project work per person. Students will work in groups of 1-2 people on topics like 3D reconstruction from silhouettes or stereo images on mobile devices. Key dates are provided for submitting a work plan, mid-term presentation, and final report. Contact information is given for the lecturers and teaching assistant.
This document summarizes a presentation on natural image statistics given by Siwei Lyu at the 2009 CIFAR NCAP Summer School. The presentation covered several key topics:
1) It discussed the motivation for studying natural image statistics, which is to understand representations in the visual system and develop computer vision applications like denoising.
2) It reviewed common statistical properties found in natural images like 1/f power spectra and non-Gaussian distributions.
3) Maximum entropy and Bayesian models were presented as approaches to model these statistics, with Gaussian and independent component analysis discussed as specific examples.
4) Efficient coding principles from information theory were introduced as a framework for understanding neural representations that aim to decorrelate and
Camera calibration involves determining the internal camera parameters like focal length, image center, distortion, and scaling factors that affect the imaging process. These parameters are important for applications like 3D reconstruction and robotics that require understanding the relationship between 3D world points and their 2D projections in an image. The document describes estimating internal parameters by taking images of a calibration target with known geometry and solving the equations that relate the 3D target points to their 2D image locations. Homogeneous coordinates and projection matrices are used to represent the calibration transformations mathematically.
Brunelli 2008: template matching techniques in computer visionzukun
The document discusses template matching techniques in computer vision. It begins with an overview that defines template matching and discusses some common computer vision tasks it can be used for, like object detection. It then covers topics like detection as hypothesis testing, training and testing techniques, and provides a bibliography.
The HARVEST Programme evaluates feature detectors and descriptors through indirect and direct benchmarks. Indirect benchmarks measure repeatability and matching scores on the affine covariant testbed to evaluate how features persist across transformations. Direct benchmarks evaluate features on image retrieval tasks using the Oxford 5k dataset to measure real-world performance. VLBenchmarks provides software for easily running these benchmarks and reproducing published results. It allows comparing features and selecting the best for a given application.
This document summarizes VLFeat, an open source computer vision library. It provides concise summaries of VLFeat's features, including SIFT, MSER, and other covariant detectors. It also compares VLFeat's performance to other libraries like OpenCV. The document highlights how VLFeat achieves state-of-the-art results in tasks like feature detection, description and matching while maintaining a simple MATLAB interface.
This document summarizes and compares local image descriptors. It begins with an introduction to modern descriptors like SIFT, SURF and DAISY. It then discusses efficient descriptors such as binary descriptors like BRIEF, ORB and BRISK which use comparisons of intensity value pairs. The document concludes with an overview section.
This document discusses various feature detectors used in computer vision. It begins by describing classic detectors such as the Harris detector and Hessian detector that search scale space to find distinguished locations. It then discusses detecting features at multiple scales using the Laplacian of Gaussian and determinant of Hessian. The document also covers affine covariant detectors such as maximally stable extremal regions and affine shape adaptation. It discusses approaches for speeding up detection using approximations like those in SURF and learning to emulate detectors. Finally, it outlines new developments in feature detection.
The document discusses modern feature detection techniques. It provides an introduction and agenda for a talk on advances in feature detectors and descriptors, including improvements since a 2005 paper. It also discusses software suites and benchmarks for feature detection. Several application domains are described, such as wide baseline matching, panoramic image stitching, 3D reconstruction, image search, location recognition, and object tracking.
System 1 and System 2 were basic early systems for image matching that used color and texture matching. Descriptor-based approaches like SIFT provided more invariance but not perfect invariance. Patch descriptors like SIFT were improved by making them more invariant to lighting changes like color and illumination shifts. The best performance came from combining descriptors with color invariance. Representing images as histograms of visual word occurrences captured patterns in local image patches and allowed measuring similarity between images. Large vocabularies of visual words provided more discriminative power but were costly to compute and store.
This document summarizes a research paper on internet video search. It discusses several key challenges: [1] the large variation in how the same thing can appear in images/videos due to lighting, viewpoint etc., [2] defining what defines different objects, and [3] the huge number of different things that exist. It also notes gaps in narrative understanding, shared concepts between humans and machines, and addressing diverse query contexts. The document advocates developing powerful yet simple visual features that capture uniqueness with invariance to irrelevant changes.
The document discusses computer vision techniques for object detection and localization. It describes methods like selective search that group image regions hierarchically to propose object locations. Large datasets like ImageNet and LabelMe that provide training examples are also discussed. Performance on object detection benchmarks like PASCAL VOC is shown to improve significantly over time. Evaluation standards for concept detection like those used in TRECVID are presented. The document concludes that results are impressively improving each year but that the number of detectable concepts remains limited. It also discusses making feature extraction more efficient using techniques like SURF that take advantage of integral images.
This document provides an outline and overview of Yoshua Bengio's 2012 tutorial on representation learning. The key points covered include:
1) The tutorial will cover motivations for representation learning, algorithms such as probabilistic models and auto-encoders, and analysis and practical issues.
2) Representation learning aims to automatically learn good representations of data rather than relying on handcrafted features. Learning representations can help address challenges like exploiting unlabeled data and the curse of dimensionality.
3) Deep learning algorithms attempt to learn multiple levels of increasingly complex representations, with the goal of developing more abstract, disentangled representations that generalize beyond local patterns in the data.
Advances in discrete energy minimisation for computer visionzukun
This document discusses string algorithms and data structures. It introduces the Knuth-Morris-Pratt algorithm for finding patterns in strings in O(n+m) time where n is the length of the text and m is the length of the pattern. It also discusses common string data structures like tries, suffix trees, and suffix arrays. Suffix trees and suffix arrays store all suffixes of a string and support efficient pattern matching and other string operations in linear time or O(m+logn) time where m is the pattern length and n is the text length.
This document provides a tutorial on how to use Gephi software to analyze and visualize network graphs. It outlines the basic steps of importing a sample graph file, applying layout algorithms to organize the nodes, calculating metrics, detecting communities, filtering the graph, and exporting/saving the results. The tutorial demonstrates features of Gephi including node ranking, partitioning, and interactive visualization of the graph.
EM algorithm and its application in probabilistic latent semantic analysiszukun
The document discusses the EM algorithm and its application in Probabilistic Latent Semantic Analysis (pLSA). It begins by introducing the parameter estimation problem and comparing frequentist and Bayesian approaches. It then describes the EM algorithm, which iteratively computes lower bounds to the log-likelihood function. Finally, it applies the EM algorithm to pLSA by modeling documents and words as arising from a mixture of latent topics.
This document describes an efficient framework for part-based object recognition using pictorial structures. The framework represents objects as graphs of parts with spatial relationships. It finds the optimal configuration of parts through global minimization using distance transforms, allowing fast computation despite modeling complex spatial relationships between parts. This enables soft detection to handle partial occlusion without early decisions about part locations.
Iccv2011 learning spatiotemporal graphs of human activities zukun
The document presents a new approach for learning spatiotemporal graphs of human activities from weakly supervised video data. The approach uses 2D+t tubes as mid-level features to represent activities as segmentation graphs, with nodes describing tubes and edges describing various relations. A probabilistic graph mixture model is used to model activities, and learning estimates the model parameters and permutation matrices using a structural EM algorithm. The learned models allow recognizing and segmenting activities in new videos through robust least squares inference. Evaluation on benchmark datasets demonstrates the ability to learn characteristic parts of activities and recognize them under weak supervision.
Iccv2011 learning spatiotemporal graphs of human activities
ECCV2008: MAP Estimation Algorithms in Computer Vision - Part 2
1. MAP Estimation Algorithms in Computer Vision - Part II. M. Pawan Kumar, University of Oxford; Pushmeet Kohli, Microsoft Research
2. Example: Image Segmentation. E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j), E: {0,1}^n -> R, 0 -> fg, 1 -> bg, n = number of pixels. [Image D]
3. Example: Image Segmentation. E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j), E: {0,1}^n -> R, 0 -> fg, 1 -> bg, n = number of pixels. Unary cost (c_i): dark pixels get negative cost, bright pixels positive cost.
4. Example: Image Segmentation. E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j), E: {0,1}^n -> R, 0 -> fg, 1 -> bg, n = number of pixels. Discontinuity cost (c_ij).
5. Example: Image Segmentation. E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j), E: {0,1}^n -> R, 0 -> fg, 1 -> bg, n = number of pixels. Global minimum: x* = arg min_x E(x). How to minimize E(x)?
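For a toy-sized energy of this form, the global minimum can be found by brute-force enumeration. This is a sanity check only: the unary and pairwise costs below are made up, and real images need the st-mincut machinery developed in the rest of the tutorial.

```python
import itertools

def energy(x, unary, pairwise):
    """E(x) = sum_i c_i x_i + sum_{i,j} c_ij x_i (1 - x_j), the form on the slide."""
    e = sum(c * xi for c, xi in zip(unary, x))
    e += sum(c * x[i] * (1 - x[j]) for (i, j), c in pairwise.items())
    return e

def brute_force_min(n, unary, pairwise):
    """Enumerate all 2^n labelings; only feasible for tiny n."""
    return min((energy(x, unary, pairwise), x)
               for x in itertools.product((0, 1), repeat=n))

# Hypothetical 2x2 image: unary costs c_i (negative prefers label 1),
# smoothness c_ij = 1 on each directed grid edge.
unary = [-2, -1, 3, 4]
pairwise = {(0, 1): 1, (1, 0): 1, (2, 3): 1, (3, 2): 1,
            (0, 2): 1, (2, 0): 1, (1, 3): 1, (3, 1): 1}
print(brute_force_min(4, unary, pairwise))  # (-1, (1, 1, 0, 0))
```

The smoothness terms pull the first two pixels to the same label despite their different unary costs.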
6. Outline of the Tutorial. The st-mincut problem; What problems can we solve using st-mincut?; st-mincut based Move algorithms; Connection between st-mincut and energy minimization?; Recent Advances and Open Problems
10. The st-Mincut Problem. [Graph: Source, Sink, nodes v_1, v_2; edge capacities 2, 5, 9, 4, 2, 1.] What is an st-cut? An st-cut (S, T) divides the nodes between the source and the sink. What is the cost of an st-cut? The sum of the costs of all edges going from S to T; for the cut shown: 5 + 2 + 9 = 16.
11. The st-Mincut Problem. [Same graph.] What is the st-mincut? The st-cut with the minimum cost; here: 2 + 1 + 4 = 7.
12. How to compute the st-mincut? [Same graph.] Solve the dual maximum flow problem: compute the maximum flow between Source and Sink, subject to the constraints that on every edge Flow ≤ Capacity and at every node Flow in = Flow out. Min-cut/Max-flow Theorem: in every network, the maximum flow equals the cost of the st-mincut.
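As a concrete illustration, a minimal Edmonds-Karp max-flow sketch. The edge capacities below are an assumption chosen to be consistent with the cut costs 16 and 7 quoted on the slides, not read off the original figure.

```python
from collections import deque

def max_flow(edges, s, t):
    """Edmonds-Karp max-flow; edges maps (u, v) -> capacity."""
    res = {}                                   # residual capacities
    for (u, v), c in edges.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, []).append(v)
    total = 0
    while True:
        parent, q = {s: None}, deque([s])      # BFS: shortest augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                    # no augmenting path left
            return total, res
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[e] for e in path)       # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        total += push

# Assumed capacities matching the slides' cut costs 16 and 7:
edges = {('s', 'v1'): 2, ('s', 'v2'): 9, ('v1', 'v2'): 2,
         ('v2', 'v1'): 1, ('v1', 't'): 5, ('v2', 't'): 4}
value, residual = max_flow(edges, 's', 't')
print(value)  # 7, the cost of the st-mincut
```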
25. History of Maxflow Algorithms [Slide credit: Andrew Goldberg]. Augmenting Path and Push-Relabel algorithms. n: # nodes, m: # edges, U: maximum edge weight. Algorithms assume non-negative edge weights.
27. Augmenting Path based Algorithms. [Graph: Source, Sink, nodes a_1, a_2; the four terminal edges have capacity 1000, the middle edge a_1 -> a_2 capacity 1; flow 0.] Ford-Fulkerson: choose any augmenting path.
28. Augmenting Path based Algorithms. [Same graph.] Ford-Fulkerson: choose any augmenting path. Bad augmenting paths: those through the middle edge.
30. Augmenting Path based Algorithms. [Graph after one augmentation through the middle edge: residual capacities 999 on the two used terminal edges, 0 on a_1 -> a_2, 1000 on the rest; flow 1.] Ford-Fulkerson: choose any augmenting path.
31. Augmenting Path based Algorithms. [Same residual graph.] Ford-Fulkerson: choose any augmenting path. n: # nodes, m: # edges. We will have to perform 2000 augmentations! Worst-case complexity: O(m × Total_Flow) (a pseudo-polynomial bound: it depends on the flow value).
32. Augmenting Path based Algorithms. [Same graph: terminal capacities 1000, middle edge 1; flow 0.] Dinic: choose the shortest augmenting path. n: # nodes, m: # edges. Worst-case complexity: O(m n^2).
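The gap between the two strategies on this graph can be simulated directly. A small sketch, doing only residual-capacity bookkeeping along hard-coded paths (the 1000/1 capacities are the slide's; the path-preference mechanism is an assumption to force the worst case):

```python
def augment(res, path):
    """Push the bottleneck residual capacity along an explicit node path."""
    pairs = list(zip(path, path[1:]))
    push = min(res.get(p, 0) for p in pairs)
    if push > 0:
        for (u, v) in pairs:
            res[(u, v)] -= push
            res[(v, u)] = res.get((v, u), 0) + push
    return push

def run(paths, caps):
    """Repeatedly augment along the first usable path in preference order."""
    res = dict(caps)
    augmentations = flow = 0
    while True:
        for path in paths:
            push = augment(res, path)
            if push:
                augmentations += 1
                flow += push
                break
        else:
            return augmentations, flow

caps = {('s', 'a1'): 1000, ('s', 'a2'): 1000, ('a1', 'a2'): 1,
        ('a1', 't'): 1000, ('a2', 't'): 1000}
# Worst case: always route through the middle edge (forward, then undone backward)
bad = run([['s', 'a1', 'a2', 't'], ['s', 'a2', 'a1', 't']], caps)
# Shortest augmenting paths never touch the middle edge
good = run([['s', 'a1', 't'], ['s', 'a2', 't']], caps)
print(bad, good)  # (2000, 2000) (2, 2000)
```

Both strategies reach the same maximum flow of 2000, but the bad path choice needs 2000 augmentations of one unit each, while shortest-path augmentation needs only 2.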
35. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
36. St-mincut and Energy Minimization. Minimizing a Quadratic Pseudoboolean function E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j), c_ij ≥ 0, E: {0,1}^n -> R. Pseudoboolean? Functions of boolean variables. Polynomial-time st-mincut algorithms require non-negative edge weights. [Graph: S, T, st-mincut.]
45. Graph Construction. E(a_1, a_2) = 2 a_1 + 5 ā_1 + 9 a_2 + 4 ā_2 + 2 a_1 ā_2 + ā_1 a_2. [Graph: Source (0), Sink (1), nodes a_1, a_2; edge capacities 2, 5, 9, 4, 2, 1.] For a_1 = 1, a_2 = 1: E(1,1) = 11 = cost of the corresponding cut.
46. Graph Construction. E(a_1, a_2) = 2 a_1 + 5 ā_1 + 9 a_2 + 4 ā_2 + 2 a_1 ā_2 + ā_1 a_2. [Same graph.] For a_1 = 1, a_2 = 0: E(1,0) = 8 = st-mincut cost.
47. Energy Function Reparameterization. Two functions E_1 and E_2 are reparameterizations if E_1(x) = E_2(x) for all x. For instance, E_1(a_1) = 1 + 2 a_1 + 3 ā_1 and E_2(a_1) = 3 + ā_1:
a_1 = 0 (ā_1 = 1): E_1 = 4, E_2 = 4
a_1 = 1 (ā_1 = 0): E_1 = 3, E_2 = 3
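The equality is easy to confirm by evaluating both forms at the two possible assignments:

```python
def E1(a1):
    return 1 + 2 * a1 + 3 * (1 - a1)   # 1 + 2 a1 + 3 ā1

def E2(a1):
    return 3 + (1 - a1)                # 3 + ā1

print([(E1(a), E2(a)) for a in (0, 1)])  # [(4, 4), (3, 3)]
```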
57. Flow and Reparametrization. [Residual graph: Source (1), Sink (0), nodes a_1, a_2; residual capacities 0, 1, 3, 0, 0, 3.] E(a_1, a_2) = 8 + ā_1 + 3 a_2 + 3 ā_1 a_2. No more augmenting paths are possible.
58. Flow and Reparametrization. [Same residual graph.] E(a_1, a_2) = 8 + ā_1 + 3 a_2 + 3 ā_1 a_2. The total flow is a bound on the optimal solution; since the residual graph has only positive coefficients, inference of the optimal solution becomes trivial because the bound is tight.
59. Flow and Reparametrization. [Same residual graph.] E(a_1, a_2) = 8 + ā_1 + 3 a_2 + 3 ā_1 a_2. For a_1 = 1, a_2 = 0: E(1,0) = 8 = st-mincut cost, so the bound is attained.
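Enumerating the reparameterized energy confirms that the constant term (the total flow, 8) is attained, so it is a tight lower bound:

```python
import itertools

def E(a1, a2):
    """Reparameterized energy from the slide: 8 + ā1 + 3 a2 + 3 ā1 a2."""
    return 8 + (1 - a1) + 3 * a2 + 3 * (1 - a1) * a2

values = {(a1, a2): E(a1, a2) for a1, a2 in itertools.product((0, 1), repeat=2)}
print(values)                       # {(0, 0): 9, (0, 1): 15, (1, 0): 8, (1, 1): 11}
print(min(values, key=values.get))  # (1, 0): the constant term 8 is attained
```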
60. Example: Image Segmentation. E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j), E: {0,1}^n -> R, 0 -> fg, 1 -> bg. Global minimum: x* = arg min_x E(x). How to minimize E(x)?
61. How does the code look like? Sink (1) Source (0) Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1)
62. How does the code look like? Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0) fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 )
63. How does the code look like? Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost(p,q)); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0) fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 ) cost(p,q)
64. How does the code look like? Graph *g; For all pixels p /* Add a node to the graph */ nodeID(p) = g->add_node(); /* Set cost of terminal edges */ set_weights(nodeID(p), fgCost(p), bgCost(p)); end for all adjacent pixels p,q add_weights(nodeID(p), nodeID(q), cost(p,q)); end g->compute_maxflow(); label_p = g->is_connected_to_source(nodeID(p)); // is the label of pixel p (0 or 1) a 1 a 2 fgCost( a 1 ) Sink (1) Source (0) fgCost( a 2 ) bgCost( a 1 ) bgCost( a 2 ) cost(p,q) a 1 = bg a 2 = fg
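The C++-style pseudocode above can be sketched end to end in Python. This is an assumption-laden toy: a plain Edmonds-Karp max-flow stands in for the specialized solver the slides' API suggests, the "image" is 1-D, and the fgCost/bgCost/smoothness values are made up. It does follow the same construction: one node per pixel, terminal edges weighted by fgCost/bgCost, pairwise edges between neighbours, then a max-flow and a source-connectivity check to read off labels.

```python
from collections import deque

def max_flow(caps, s, t):
    """Edmonds-Karp max-flow; caps maps (u, v) -> capacity. Returns (value, residual)."""
    res = {}
    for (u, v), c in caps.items():
        res[(u, v)] = res.get((u, v), 0) + c
        res.setdefault((v, u), 0)
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, []).append(v)
    total = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total, res
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push
        total += push

def segment(intensities, lam):
    """Slide-style construction: a node per pixel, terminal edges from fgCost/bgCost,
    pairwise edges of weight lam between neighbouring pixels of a 1-D 'image'."""
    fg_cost = list(intensities)                 # assumed: dark pixels cheap as fg
    bg_cost = [255 - i for i in intensities]
    caps = {}
    for p in range(len(intensities)):
        caps[('s', p)] = bg_cost[p]             # cutting s->p puts p on the sink side
        caps[(p, 't')] = fg_cost[p]             # cutting p->t pays the fg cost
        if p + 1 < len(intensities):
            caps[(p, p + 1)] = lam
            caps[(p + 1, p)] = lam
    _, res = max_flow(caps, 's', 't')
    seen, q = {'s'}, deque(['s'])               # is_connected_to_source(...)
    while q:
        u = q.popleft()
        for (a, b), r in res.items():
            if a == u and r > 0 and b not in seen:
                seen.add(b)
                q.append(b)
    return ['fg' if p in seen else 'bg' for p in range(len(intensities))]

print(segment([10, 20, 240, 250], lam=30))  # ['fg', 'fg', 'bg', 'bg']
```

Pixels that stay connected to the source in the residual graph take the fg label, matching `is_connected_to_source` in the pseudocode.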
65. Image Segmentation in Video. [Figure: image and flow, with the global optimum x* of E(x); graph with terminals s (label 0) and t (label 1), n-links between pixel nodes, and the st-cut.]
67. Dynamic Energy Minimization. Minimizing energy E_A to obtain solution S_A is a computationally expensive operation. If we must then minimize a related energy E_B to obtain S_B, can we do better by recycling solutions? [Boykov & Jolly ICCV'01; Kohli & Torr ICCV'05, PAMI'07]
68. Dynamic Energy Minimization. Reuse flow: when A and B are similar, reparametrization yields a simpler energy E_B* encoding the differences between A and B, so minimizing E_B becomes a cheaper operation, giving a 3 to 100000 times speedup! [Boykov & Jolly ICCV'01; Kohli & Torr ICCV'05, PAMI'07]
70. Outline of the Tutorial The st-mincut problem What problems can we solve using st-mincut? st-mincut based Move algorithms Connection between st-mincut and energy minimization? Recent Advances and Open Problems
72. Submodular Set Functions. Let E = {a_1, a_2, ..., a_n} be a set; 2^|E| is the number of subsets of E. A set function is a map f: 2^E -> ℝ.
73. Submodular Set Functions. A set function f: 2^E -> ℝ is submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ E. Important property: the sum of two submodular functions is submodular.
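For small ground sets, the defining inequality can be checked exhaustively. As a sketch, the classic example of an undirected cut function (the edge weights here are assumptions):

```python
from itertools import chain, combinations

def subsets(ground):
    return chain.from_iterable(combinations(ground, r)
                               for r in range(len(ground) + 1))

def is_submodular(f, ground):
    """Check f(A) + f(B) >= f(A | B) + f(A & B) for every pair of subsets."""
    subs = [frozenset(s) for s in subsets(ground)]
    return all(f(A) + f(B) >= f(A | B) + f(A & B) for A in subs for B in subs)

# Cut functions are the classic example: f(A) = weight of edges leaving A.
weights = {('a', 'b'): 2, ('b', 'c'): 3, ('a', 'c'): 1}

def cut(A):
    return sum(w for (u, v), w in weights.items() if (u in A) != (v in A))

print(is_submodular(cut, {'a', 'b', 'c'}))  # True
```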
76. Quadratic Submodular Pseudoboolean Functions. E(x) = ∑_i θ_i(x_i) + ∑_{i,j} θ_ij(x_i, x_j) is submodular if θ_ij(0,1) + θ_ij(1,0) ≥ θ_ij(0,0) + θ_ij(1,1) for all i, j.
77. Quadratic Submodular Pseudoboolean Functions. If θ_ij(0,1) + θ_ij(1,0) ≥ θ_ij(0,0) + θ_ij(1,1) for all i, j, then E(x) = ∑_i θ_i(x_i) + ∑_{i,j} θ_ij(x_i, x_j) is equivalent (transformable) to E(x) = ∑_i c_i x_i + ∑_{i,j} c_ij x_i (1 - x_j) with c_ij ≥ 0, i.e. all submodular QPBFs are st-mincut solvable.
78. How are they equivalent? Write A = θ_ij(0,0), B = θ_ij(0,1), C = θ_ij(1,0), D = θ_ij(1,1). Then
θ_ij(x_i, x_j) = A + (C - A) x_i + (D - C) x_j + (B + C - A - D) (1 - x_i) x_j,
i.e. the cost table [A B; C D] decomposes as the constant A, plus [0 0; C-A C-A] (if x_i = 1, add C - A), plus [0 D-C; 0 D-C] (if x_j = 1, add D - C), plus the remainder [0 B+C-A-D; 0 0]. B + C - A - D ≥ 0 is true from the submodularity of θ_ij.
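The decomposition can be verified by enumerating the four assignments; the table values below are an arbitrary submodular example:

```python
def theta(A, B, C, D):
    """Pairwise cost table: theta(0,0)=A, theta(0,1)=B, theta(1,0)=C, theta(1,1)=D."""
    return lambda xi, xj: [[A, B], [C, D]][xi][xj]

def decomposed(A, B, C, D):
    """A + (C - A) xi + (D - C) xj + (B + C - A - D)(1 - xi) xj."""
    return lambda xi, xj: (A + (C - A) * xi + (D - C) * xj
                           + (B + C - A - D) * (1 - xi) * xj)

A, B, C, D = 0, 6, 5, 2          # submodular: B + C >= A + D
f, g = theta(A, B, C, D), decomposed(A, B, C, D)
assert all(f(xi, xj) == g(xi, xj) for xi in (0, 1) for xj in (0, 1))
print(B + C - A - D)  # 9: non-negative, so it is a valid pairwise edge weight
```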
83. Quadratic Submodular Pseudoboolean Functions. θ_ij(0,1) + θ_ij(1,0) ≥ θ_ij(0,0) + θ_ij(1,1) for all i, j. E(x) = ∑_i θ_i(x_i) + ∑_{i,j} θ_ij(x_i, x_j), x ∈ {0,1}^n, is equivalent (transformable) to an st-mincut problem. [Graph: S, T, st-mincut.]
92. Transforming problems into QBFs. Multi-label functions -> pseudoboolean functions; higher-order pseudoboolean functions -> quadratic pseudoboolean functions.
95. Higher order to Quadratic min f( x ) min C 1 a + C 1 ∑ ā x i x = x,a ϵ {0,1} Higher Order Submodular Function Quadratic Submodular Function ∑ x i 1 2 3 C 1 C 1 ∑ x i
96. Higher order to Quadratic min f( x ) min C 1 a + C 1 ∑ ā x i x = x,a ϵ {0,1} Higher Order Submodular Function Quadratic Submodular Function ∑ x i 1 2 3 C 1 C 1 ∑ x i a=1 a=0 Lower envelop of concave functions is concave
98. Higher order to Quadratic: in general min_x f(x) = min_{x, a ∈ {0,1}} [f1(x)·a + f2(x)·ā], turning a higher order submodular function into a quadratic submodular one; f1 (selected by a = 1) and f2 (selected by a = 0) are the pieces whose lower envelope is f, and the lower envelope of concave functions is concave.
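The auxiliary-variable construction can likewise be checked by enumeration. This sketch assumes the concrete higher-order term f(x) = min(C1, C1·∑xi) suggested by the slide's plot; the value of C1 is hypothetical:

```python
# Sketch: the higher-order term f(x) = min(C1, C1 * sum(x)), concave in
# sum(x), is replaced by a quadratic function of x and one auxiliary
# boolean a; minimising out a recovers the lower envelope exactly.
from itertools import product

C1 = 2.0                               # hypothetical plateau value

def f(x):                              # higher-order submodular term
    return min(C1, C1 * sum(x))

def quadratic(x, a):                   # f1(x)*a + f2(x)*(1-a)
    return C1 * a + C1 * (1 - a) * sum(x)

for x in product((0, 1), repeat=3):
    assert f(x) == min(quadratic(x, a) for a in (0, 1))
```

Because the quadratic form is submodular in (x, a), the auxiliary variable can be minimized jointly with x by the same st-mincut machinery.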
105. Outline of the Tutorial: The st-mincut problem; What problems can we solve using st-mincut?; st-mincut based Move algorithms; Connection between st-mincut and energy minimization?; Recent Advances and Open Problems.
108. Move Making Algorithms: [figure: energy plotted over the solution space, marking the current solution, its search neighbourhood, and the optimal move within that neighbourhood]
112. General Binary Moves: x = t·x1 + (1−t)·x2 combines the current solution x1 and a second solution x2 into a new solution x; minimizing the move energy Em(t) = E(t·x1 + (1−t)·x2) over the move variables t gives the optimal move. The move energy is a submodular QPBF, so exact minimization is possible (Boykov, Veksler and Zabih, PAMI 2001).
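A minimal sketch of such a binary move, with brute-force enumeration of the move variables t standing in for the st-mincut solver; the chain energy, the two labellings, and the Potts cost are all hypothetical:

```python
# Sketch: per-pixel move variables t fuse two candidate labellings x1, x2:
# x_i = x1_i if t_i == 1 else x2_i. The move energy E_m(t) is a
# pseudoboolean function of t; when submodular it is minimised exactly by
# st-mincut, but here we simply enumerate t on a tiny 1-D example.
from itertools import product

def potts(a, b, C=1.0):
    return 0.0 if a == b else C

def E(x):                                  # chain of Potts pairwise terms
    return sum(potts(x[i], x[i + 1]) for i in range(len(x) - 1))

def fuse(x1, x2, t):
    return [x1[i] if ti else x2[i] for i, ti in enumerate(t)]

x1, x2 = [5, 5, 6, 6], [6, 6, 7, 7]        # current / second solution
best_t = min(product((0, 1), repeat=4), key=lambda t: E(fuse(x1, x2, t)))
assert E(fuse(x1, x2, best_t)) <= min(E(x1), E(x2))
```

On this example the optimal move mixes the two labellings into the constant labelling, strictly improving on both inputs.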
119. General Binary Moves: x = t·x1 + (1−t)·x2 (first solution x1, second solution x2); minimize over the move variables t. Move functions can be non-submodular!

Move Type | First Solution | Second Solution | Guarantee
Expansion | Old solution   | All alpha       | Metric
Fusion    | Any solution   | Any solution    |
120. Solving Continuous Problems using Fusion Move: x = t·x1 + (1−t)·x2, where the proposals x1 and x2 can be continuous; e.g. in optical flow, the solutions from two different methods are fused into a final solution (Lempitsky et al. CVPR 2008; Woodford et al. CVPR 2008).
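A toy version of the fusion move with continuous proposals; the 1-D observations, the two proposals, and the data-plus-smoothness energy are invented for illustration, and enumeration again stands in for the QPBF solver:

```python
# Sketch: fusing two continuous proposals (stand-ins for two optical-flow
# fields). The fused solution takes each pixel's value from proposal 1 or
# proposal 2; brute force replaces the st-mincut step of the fusion move.
from itertools import product

obs = [0.1, 0.9, 1.1, 0.2]                 # hypothetical observations
p1  = [0.0, 1.0, 1.0, 1.0]                 # proposal from method 1
p2  = [1.0, 0.0, 1.0, 0.0]                 # proposal from method 2

def E(x, lam=0.5):                         # quadratic data + TV smoothness
    data = sum((xi - oi) ** 2 for xi, oi in zip(x, obs))
    smooth = sum(abs(x[i] - x[i + 1]) for i in range(len(x) - 1))
    return data + lam * smooth

fused = min((tuple(p1[i] if t else p2[i] for i, t in enumerate(ts))
             for ts in product((0, 1), repeat=4)),
            key=E)
assert E(fused) <= min(E(p1), E(p2))
```

The guarantee illustrated here is the one from the table above: the fused solution is never worse than either input proposal.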
123. Outline of the Tutorial: The st-mincut problem; What problems can we solve using st-mincut?; st-mincut based Move algorithms; Connection between st-mincut and energy minimization?; Recent Advances and Open Problems.
124. Solving Mixed Programming Problems: E(x, ω) = C(ω) + ∑i θi(ω, xi) + ∑ij θij(ω, xi, xj), where x is a binary image segmentation (xi ∈ {0,1}) and ω is a non-local parameter living in some large set Ω. C(ω) is a constant, the unary potentials θi(ω, xi) encode a shape prior (e.g. a rough stickman model with pose ω), and the pairwise potentials satisfy θij ≥ 0.
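The outer/inner structure of this mixed problem can be sketched as follows; the pose set Omega, the prior cost C(ω), and the unary potentials are all hypothetical, and exhaustive enumeration of x stands in for the st-mincut inner solve:

```python
# Sketch: minimise E(x, w) = C(w) + sum_i theta_i(w, x_i) by searching a
# small discrete parameter set Omega and solving the inner binary problem
# for each w. In practice the inner minimisation over x is an st-mincut,
# not an enumeration; all quantities below are illustrative.
from itertools import product

Omega = [0, 1, 2]                          # hypothetical pose parameters
obs = [0.2, 0.8, 0.7]                      # hypothetical image evidence

def C(w):                                  # prior cost of the pose
    return 0.3 * w

def theta_i(w, i, xi):                     # unary: shape prior shifts with w
    prior = 1.0 if i >= w else 0.0         # pixels >= w are "inside"
    return (xi - 0.5 * (prior + obs[i])) ** 2

def E(x, w):
    return C(w) + sum(theta_i(w, i, xi) for i, xi in enumerate(x))

best = min(((x, w) for w in Omega for x in product((0, 1), repeat=3)),
           key=lambda xw: E(*xw))
```

The key point is that for each fixed ω the energy in x is an ordinary submodular QPBF, so the global minimum over the pair (x, ω) needs only one mincut per candidate ω.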
127. Summary: a labelling problem is reduced, by an exact transformation (global optimum), a relaxed transformation (partially optimal), or as a sub-problem inside a move making algorithm, to a submodular quadratic pseudoboolean function, which is minimized by st-mincut.
131. Use of Higher order Potentials (Stereo, Woodford et al. CVPR 2008): pixels P1, P2, P3 take disparity labels in {5, 6, 7, 8}, with E(x1, x2, x3) = θ12(x1, x2) + θ23(x2, x3) and θij(xi, xj) = 0 if xi = xj, C otherwise. Then E(6,6,6) = 0 + 0 = 0, E(6,7,7) = 1 + 0 = 1, and E(6,7,8) = 1 + 1 = 2 (with C = 1): the pairwise potentials penalize slanted planar surfaces, which motivates higher order terms.
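The three energy evaluations on the slide follow directly from the Potts pairwise cost with C = 1:

```python
# Sketch of the slide's stereo example: a Potts pairwise cost over a
# 3-pixel chain of disparity labels. The slanted surface (6,7,8) pays as
# much per step as a genuine disparity discontinuity would.
def potts(a, b, C=1):
    return 0 if a == b else C

def E(x1, x2, x3):
    return potts(x1, x2) + potts(x2, x3)

assert E(6, 6, 6) == 0   # fronto-parallel plane
assert E(6, 7, 7) == 1   # one disparity break
assert E(6, 7, 8) == 2   # slanted plane, penalised twice
```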
134. Computing the Optimal Move: the transformation function T maps the current solution xc and the move variables t to a new solution, T(xc, t) = xn = xc + t. The move energy is Em(t) = E(T(xc, t)), and minimizing it over t yields the optimal move t* within the search neighbourhood.
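A minimal sketch of this move computation; the truncated-linear chain energy, the current solution, and the move space t ∈ {−1, 0, +1}^n are hypothetical, with enumeration in place of an exact move solver:

```python
# Sketch: the transformation function T(x_c, t) = x_c + t defines a move
# space around the current solution; the move energy E_m(t) = E(T(x_c, t))
# is minimised over that neighbourhood (here by enumeration).
from itertools import product

def E(x):                                  # truncated-linear chain energy
    return sum(min(abs(x[i] - x[i + 1]), 2) for i in range(len(x) - 1))

x_c = [4, 6, 5]                            # hypothetical current solution

def T(x_c, t):
    return [xi + ti for xi, ti in zip(x_c, t)]

t_star = min(product((-1, 0, 1), repeat=3), key=lambda t: E(T(x_c, t)))
x_n = T(x_c, t_star)
assert E(x_n) <= E(x_c)                    # a move never increases energy
```

Since t = (0, 0, 0) is always in the neighbourhood, the optimal move can never make the solution worse, which is the monotonicity property move making algorithms rely on.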