Game theory is used to model strategic decision-making between competitors. It originated in the 20th century and applies concepts like players, strategies, and payoffs. Players select strategies and receive payoffs based on the strategies of all players. The optimal strategy maximizes a player's payoff. Techniques like minimax, maximin, and solving dominance-reduced payoff matrices can help determine optimal strategies and the value of a game.
This document provides an overview of game theory concepts. It defines game theory as analyzing situations of conflict and competition involving decision making by two or more participants. Some key points:
- Game theory was developed in the 20th century, with a seminal 1944 book discussing its application to business strategy.
- Basic concepts include players, pure and mixed strategies, zero-sum vs. non-zero-sum games, and payoff matrices to represent outcomes.
- Solutions include finding equilibrium points using minimax and maximin principles for pure strategies or solving systems of equations for mixed strategies when no equilibrium exists.
- Dominance rules can reduce game matrices, and graphical or algebraic methods solve for mixed strategies when no saddle point exists.
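The algebraic solution mentioned above has a closed form for 2x2 games. A minimal sketch, assuming the payoff matrix [[a, b], [c, d]] lists payoffs to the row player:

```python
def solve_2x2(a, b, c, d):
    """Solve a 2x2 zero-sum game with payoff matrix [[a, b], [c, d]]
    (payoffs to the row player). Returns (p, q, v): row player's
    probability of row 1, column player's probability of column 1,
    and the value of the game; p and q are None at a saddle point."""
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:          # saddle point: pure strategies suffice
        return None, None, maximin
    denom = a - b - c + d
    p = (d - c) / denom             # row player's mixed strategy
    q = (d - b) / denom             # column player's mixed strategy
    v = (a * d - b * c) / denom     # value of the game
    return p, q, v

p, q, v = solve_2x2(4, 1, 2, 3)     # no saddle point: maximin=2, minimax=3
print(p, q, v)                      # 0.25 0.5 2.5
```

Against either column the row player's expected payoff is then the same value v, which is what makes the mixed strategy optimal.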
The document provides a summary of a presentation on solving linear programming problems (LPP) using the graphical method. It defines LPP and the graphical method. It then walks through the steps to solve an example LPP problem graphically, including formulating the problem, framing the graph, plotting the constraints, finding the optimal solution point, and determining the maximum value. The summary concludes that the optimal solution for the example problem is 5 male workers and 6 female workers, with a maximum total return of Rs. 1,01,000.
Game theory is a mathematical approach that analyzes strategic interactions between parties. It is used to understand situations where decision-makers are impacted by others' choices. A game has players, strategies, payoffs, and information. The Nash equilibrium predicts outcomes as the strategies where no player benefits by changing alone given others' choices. For example, in the Prisoner's Dilemma game about two suspects, confessing dominates remaining silent no matter what the other does, leading both to confess for a worse joint outcome than remaining silent.
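The dominance argument in the Prisoner's Dilemma can be checked by brute force. A small sketch, using standard (hypothetical) sentence lengths as negative utilities:

```python
from itertools import product

ACTIONS = ["silent", "confess"]
# Years in prison as negative utilities (hypothetical but conventional numbers).
PAYOFF = {
    ("silent", "silent"):   (-1, -1),
    ("silent", "confess"):  (-10, 0),
    ("confess", "silent"):  (0, -10),
    ("confess", "confess"): (-5, -5),
}

def is_nash(a1, a2):
    """Nash equilibrium: neither player gains by deviating unilaterally."""
    u1, u2 = PAYOFF[(a1, a2)]
    no_dev_1 = all(PAYOFF[(d, a2)][0] <= u1 for d in ACTIONS)
    no_dev_2 = all(PAYOFF[(a1, d)][1] <= u2 for d in ACTIONS)
    return no_dev_1 and no_dev_2

equilibria = [cell for cell in product(ACTIONS, ACTIONS) if is_nash(*cell)]
print(equilibria)   # [('confess', 'confess')]
```

The only equilibrium is mutual confession, even though mutual silence gives both players a better payoff, exactly the tension the summary describes.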
The document summarizes a lesson on game theory and linear programming. It discusses using linear programming to find optimal strategies in zero-sum games represented by payoff matrices. It provides examples of solving for optimal strategies in Rock-Paper-Scissors and another sample game. The key steps of formulating the column player's problem as a linear program to minimize the maximum payoff for the row player are outlined.
This document provides an overview of game theory and two-person zero-sum games. It defines key concepts such as players, strategies, payoffs, and classifications of games. It also describes the assumptions and solutions for pure strategy and mixed strategy games. Pure strategy games have a saddle point solution found using minimax and maximin rules. Mixed strategy games do not have a saddle point and require determining the optimal probabilities that players select each strategy.
The document contains biographical and contact information for Dr. Atif Shahzad, along with slides from one of his lectures on optimization models and the simplex method. The slides cover converting linear programs to standard form, basic and non-basic variables, basic feasible solutions, optimality and feasibility conditions, and working through an example using the simplex method in 4 steps - choosing an entering variable, leaving variable, updating the pivot row and other rows, and iterating until optimality is reached.
The document discusses the assignment problem and various methods to solve it. The assignment problem involves assigning jobs to workers or other resources in an optimal way according to certain criteria like minimizing time or cost. The Hungarian assignment method is described as a multi-step algorithm to find the optimal assignment between jobs and workers/resources. It involves creating a cost matrix and performing row and column reductions to arrive at a matrix with zeros that indicates the optimal assignment. The document also briefly discusses handling unbalanced and constrained assignment problems.
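The Hungarian method itself is a multi-step matrix-reduction algorithm, but for small instances its answer can be verified by enumeration. A sketch (brute force, not the Hungarian algorithm):

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Minimum-cost assignment of n jobs to n workers by enumerating
    all n! permutations. Only practical for small n, but useful for
    checking a Hungarian-method solution."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[w][perm[w]] for w in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
print(brute_force_assignment(cost))   # ((1, 0, 2), 9): costs 2 + 6 + 1
```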
Game theory is the study of strategic decision making. It involves analyzing interactions between players where the outcome for each player depends on the actions of all players. Key concepts in game theory include Nash equilibrium, where each player's strategy is the best response to the other players' strategies, and Prisoner's Dilemma, where the non-cooperative equilibrium results in a worse outcome for both players than if they had cooperated. Game theory is applied in economics, political science, biology, and many other fields to model strategic interactions.
This document provides an overview of game theory concepts taught in a university course. It defines game theory as the mathematics of human interactions and decision making. Key concepts discussed include Nash equilibrium, where each player adopts the optimal strategy given other players' strategies. Examples of applications are given in fields like economics, politics and biology. Different types of games and solutions concepts like mixed strategies are also introduced.
Solving Degeneracy in Transportation Problem - mkmanik
- The document discusses solving degeneracy in transportation problems using the example of a transportation problem with 4 sources and 5 destinations.
- An initial basic feasible solution is found using the least cost method, but it results in a degenerate solution since the number of allocated cells is less than m + n - 1.
- To resolve the degeneracy, an unallocated cell is selected and allocated a value to satisfy the condition. Here, the unallocated cell with cost 5 is selected and assigned the value ε.
- The solution is then optimized using the U-V method by calculating Uj + Vi = Cij for allocated cells and penalties Pij for unallocated cells until all penalties are less than or equal to zero.
Application of Mathematics in Business: F 107 - Group K - jafar_sadik
The document discusses using linear equations and differential calculus to analyze the business operations of BestWay CNG Filling Station. Linear equations are used to determine the cost, revenue, and profit functions, and calculate the break-even point of 60,000 units. Differential calculus is applied to minimize the area needed for the station given space requirements, determining the optimal dimensions are 245.4 feet by 163 feet for an area of 88,792.2 square feet.
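The break-even calculation follows from setting revenue equal to total cost. A sketch with hypothetical figures (the summary does not give the station's actual cost and price data); the numbers below are chosen only so the break-even point lands at 60,000 units:

```python
# Hypothetical cost/revenue figures, chosen so break-even is 60,000 units.
FIXED_COST = 120_000      # Rs, hypothetical
UNIT_PRICE = 5            # Rs per unit, hypothetical
UNIT_COST = 3             # Rs per unit, hypothetical

def profit(q):
    """Profit = revenue - (fixed + variable) cost at quantity q."""
    return UNIT_PRICE * q - (FIXED_COST + UNIT_COST * q)

# Break-even: revenue equals total cost, so q = fixed cost / unit margin.
break_even = FIXED_COST / (UNIT_PRICE - UNIT_COST)
print(break_even, profit(break_even))   # 60000.0 0.0
```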
This document covers basic mathematics concepts including mixed numbers, addition and subtraction of mixed numbers, multiplication and division of mixed numbers, and order of operations. It provides step-by-step instructions on how to perform each operation with mixed numbers through examples and practice problems. It also discusses estimating with fractions and mixed numbers and solving applied problems involving various operations with mixed numbers.
This document presents information about the shortest path problem in graphs. It defines key graph terms like vertices, edges, and discusses weighted, directed, and undirected graphs. It provides an example of finding the shortest path between two vertices in a graph using Dijkstra's algorithm and walks through the steps of running the algorithm on a sample graph to find the shortest path between vertices 1 and 9.
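Dijkstra's algorithm can be sketched with a priority queue. A minimal version on a small hypothetical graph (not the slides' 9-vertex example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a weighted graph given as
    {vertex: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Small undirected example with hypothetical weights.
edges = [(1, 2, 4), (1, 3, 1), (3, 2, 2), (2, 4, 5), (3, 4, 8)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))
print(dijkstra(graph, 1)[4])   # 8, via the path 1 -> 3 -> 2 -> 4
```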
Game theory deals with decision making situations where two opponents have conflicting objectives. A game is represented by a payoff matrix showing the payoff to one player for each combination of strategies. The optimal solution, known as a saddle point, is the strategies where neither player can increase their payoff by changing only their own strategy. Mixed strategies, where players randomize between pure strategies, may be required if a pure strategy saddle point does not exist. Graphical and linear programming methods can be used to solve games with mixed strategies.
This document provides an overview of game theory. It defines game theory as the study of how people interact and make decisions strategically, taking into account that each person's actions impact others. It discusses the history and key concepts of game theory, including players, strategies, payoffs, assumptions of rationality and perfect information. It provides examples of zero-sum and non-zero-sum games like the Prisoner's Dilemma. The document is intended to introduce game theory and its basic elements.
This document provides an overview of game theory, which was developed in 1928 to analyze competitive situations. It describes various types of games, such as zero-sum, non-zero-sum, pure-strategy, and mixed-strategy games. Methods for solving different types of games are presented, including the saddle point method for 2x2 games, dominance method, graphical method, and algebraic method. Limitations of game theory in assuming perfect information and rational behavior are also noted.
The document provides an overview of linear programming, including its applications, assumptions, and mathematical formulation. Some key points:
- Linear programming is a tool for maximizing or minimizing quantities like profit or cost, subject to constraints. 50-90% of business decisions and computations involve linear programming.
- Applications in business include production, personnel, inventory, marketing, financial, and blending problems. The objective is to optimize variables like costs, profits, or resources while meeting constraints.
- Assumptions of linear programming include certainty, linearity/proportionality, additivity, divisibility, non-negativity, finiteness, and optimality at corner points.
- A linear programming problem is modeled mathematically with decision variables, a linear objective function, and linear constraints.
Game theory is the study of how optimal strategies are formulated in conflict situations involving two or more rational opponents with competing interests. It considers how the strategies of one player will impact the outcomes for others. Game theory models classify games based on the number of players, whether the total payoff is zero-sum, and the types of strategies used. The minimax-maximin principle provides a way to determine optimal strategies without knowing the opponent's strategy by having each player maximize their minimum payoff or minimize their maximum loss. A saddle point exists when the maximin and minimax values are equal, indicating optimal strategies for both players.
Game theory is the study of strategic decision making between two or more players under conditions of conflict or competition. A game involves players following a set of rules and receiving payoffs depending on the strategies chosen. Strategies include pure strategies that always select a particular action and mixed strategies that randomly select among pure strategies. The optimal strategies are those that maximize the minimum payoff for one player and minimize the maximum payoff for the other player. When the maximin and minimax values are equal, there is a saddle point representing the optimal strategies for both players.
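The maximin/minimax test described above is straightforward to code. A sketch that reports a saddle point when the two values coincide:

```python
def saddle_point(matrix):
    """Return (row, col, value) of a saddle point of a payoff matrix
    (payoffs to the row player), or None when maximin != minimax,
    in which case mixed strategies are needed."""
    row_mins = [min(row) for row in matrix]
    col_maxs = [max(col) for col in zip(*matrix)]
    maximin, minimax = max(row_mins), min(col_maxs)
    if maximin != minimax:
        return None
    # When the two values agree, the cell at these indices is a saddle point.
    return row_mins.index(maximin), col_maxs.index(minimax), maximin

game = [[3, 5, 4],
        [1, 6, 2],
        [0, 7, 3]]
print(saddle_point(game))   # (0, 0, 3): row 1 / column 1, value 3
```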
This document provides an overview of linear programming problems and methods for solving them. It defines a linear programming problem and describes how to write it in standard form with decision variables and constraints. It then explains the simplex method, including how to form an initial tableau and iterate to reach an optimal solution. Finally, it introduces the Big-M method for handling problems with inequality constraints by adding artificial variables with large penalty coefficients. An example demonstrates both simplex and Big-M methods.
The document provides an introduction to operations research. It discusses that operations research is a systematic approach to decision-making and problem-solving that uses techniques like statistics, mathematics, and modeling to arrive at optimal solutions. It also briefly outlines some primary tools used in operations research like statistics, game theory, and probability theory. The document then gives a short history of operations research, noting that it originated in the UK during World War II to analyze problems like radar systems. It concludes with discussing the scope and applications of operations research in fields like management, regulation, and economics.
The document discusses rules of inference in logic. It begins by defining an argument as having premises and a conclusion. Several common rules of inference are then outlined, including modus ponens, modus tollens, and disjunctive syllogism. The remainder of the document works through examples of arguments and tests their validity using the rules of inference. It symbolically represents the arguments and shows the step-by-step workings to determine if the conclusions follow logically from the premises.
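Validity of such arguments can be checked mechanically with a truth table: an argument is valid iff no assignment makes all premises true and the conclusion false. A sketch:

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """True iff the conclusion holds in every row of the truth table
    where all premises hold (premises/conclusion are boolean functions)."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: from (p -> q) and p, infer q.  Note (p -> q) == (not p or q).
print(valid([lambda p, q: (not p) or q, lambda p, q: p],
            lambda p, q: q, 2))                      # True

# Affirming the consequent: from (p -> q) and q, infer p -- invalid.
print(valid([lambda p, q: (not p) or q, lambda p, q: q],
            lambda p, q: p, 2))                      # False
```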
This document provides an overview of game theory concepts including its development, assumptions, classification of games, elements, significance, limitations, and methods for solving different types of games. Some key points:
- Game theory was developed by John von Neumann, beginning with his 1928 minimax paper, and extended with Oskar Morgenstern to analyze decision-making involving two or more rational opponents.
- Games can be classified as two-person, n-person, zero-sum, non-zero-sum, pure-strategy, or mixed-strategy.
- Elements include the payoff matrix, dominance rules, optimal strategies, and the value of the game.
- Methods for solving games include using pure strategies if a saddle point exists, or mixed strategies when no saddle point exists.
The document discusses transportation problems (TPs), which involve determining the optimal way to route products from multiple supply locations to multiple demand destinations to minimize total transportation costs. It provides the mathematical formulation of a TP as a linear programming problem (LPP) with decision variables representing the quantity transported between each origin-destination pair. Methods for solving TPs include the simplex method by formulating it as an LPP or specialized transportation methods like the northwest corner rule to find an initial feasible solution and stepping stone/modified distribution methods to check for optimality. An example TP is presented to illustrate these concepts.
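The northwest corner rule mentioned above is simple to sketch. A minimal version for a balanced problem (total supply equals total demand):

```python
def northwest_corner(supply, demand):
    """Initial basic feasible solution for a balanced transportation
    problem: repeatedly allocate as much as possible to the top-left
    (north-west) cell, moving right or down as supply/demand is exhausted."""
    supply, demand = supply[:], demand[:]          # don't mutate callers' lists
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1                                 # this origin is exhausted
        else:
            j += 1                                 # this destination is satisfied
    return alloc

print(northwest_corner([20, 30, 25], [10, 35, 30]))
# [[10, 10, 0], [0, 25, 5], [0, 0, 25]]
```

This gives only an initial feasible solution; the stepping-stone or modified distribution method is then needed to test and improve it toward optimality.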
Some Properties of Determinant of Trapezoidal Fuzzy Number Matrices - IJMERJOURNAL
ABSTRACT: Fuzzy set theory has been applied in many fields such as management, engineering, and matrices. In this paper, some elementary operations on proposed trapezoidal fuzzy numbers (TrFNs) are defined. We also define some operations on trapezoidal fuzzy matrices (TrFMs). The notion of the determinant of trapezoidal fuzzy matrices is introduced and discussed, and some of its relevant properties are verified.
The document provides instruction on solving applied problems using linear equations. It outlines six steps for solving applied problems: 1) read the problem, 2) assign a variable, 3) write an equation, 4) solve the equation, 5) state the answer, and 6) check the answer. Several examples are then worked through step-by-step to demonstrate solving problems involving unknown numbers, sums of quantities, and consecutive integers. The examples illustrate applying the six step process to arrive at reasonable solutions.
1. The document discusses universal quantification and quantifiers. Universal quantification refers to statements asserting that a property holds for all values of a variable, while quantifiers are words like "some" or "all" that indicate quantity.
2. It explains that a universally quantified statement is of the form "For all x, P(x) is true" and is defined to be true if P(x) is true for every x, and false if P(x) is false for at least one x.
3. When the universe of discourse can be listed as x1, x2, etc., a universal statement is equivalent to the conjunction P(x1) and P(x2) and so on, because the statement is true exactly when P holds for every listed element.
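Over a finite universe this equivalence is directly checkable: Python's all() computes exactly the conjunction described. A small sketch:

```python
# Over a finite universe x1..xn, "for all x, P(x)" is the conjunction
# P(x1) and P(x2) and ... and P(xn); all() computes exactly that.
universe = [2, 4, 6, 8]
P = lambda x: x % 2 == 0          # predicate: "x is even"

forall = all(P(x) for x in universe)
conjunction = P(2) and P(4) and P(6) and P(8)
print(forall, conjunction)        # True True

# A single counterexample makes the universal statement false.
print(all(P(x) for x in universe + [3]))   # False
```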
This document discusses big-O, Ω, and Θ notation for analyzing algorithms and describes how to determine the time complexity of various algorithms. It provides examples of algorithms with different complexities, such as O(n), O(n^2), and O(n^3). It explains that both big-O and big-Ω describe the worst case time, and how to prove the lower and upper bounds for different algorithms.
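The growth rates can be illustrated by counting operations rather than timing. A sketch comparing an O(n) loop with an O(n^2) nested loop:

```python
def count_ops_linear(n):
    """One unit of work per element: O(n)."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_ops_quadratic(n):
    """One unit of work per ordered pair: O(n^2)."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

for n in (10, 100):
    print(n, count_ops_linear(n), count_ops_quadratic(n))
# 10 10 100
# 100 100 10000
```

Multiplying n by 10 multiplies the quadratic count by 100, which is the behavior the asymptotic notation captures.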
This document discusses algorithms and their analysis. It begins by defining an algorithm and its key characteristics like being finite, definite, and terminating after a finite number of steps. It then discusses designing algorithms to minimize cost and analyzing algorithms to predict their performance. Various algorithm design techniques are covered like divide and conquer, binary search, and its recursive implementation. Asymptotic notations like Big-O, Omega, and Theta are introduced to analyze time and space complexity. Specific algorithms like merge sort, quicksort, and their recursive implementations are explained in detail.
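Merge sort, one of the divide-and-conquer algorithms covered, can be sketched compactly. A minimal recursive version:

```python
def merge_sort(xs):
    """Divide and conquer: split in half, sort each half recursively,
    then merge the two sorted halves. O(n log n) comparisons."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```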
Game theory is the study of strategic decision making. It involves analyzing interactions between players where the outcome for each player depends on the actions of all players. Key concepts in game theory include Nash equilibrium, where each player's strategy is the best response to the other players' strategies, and Prisoner's Dilemma, where the non-cooperative equilibrium results in a worse outcome for both players than if they had cooperated. Game theory is applied in economics, political science, biology, and many other fields to model strategic interactions.
This document provides an overview of game theory concepts taught in a university course. It defines game theory as the mathematics of human interactions and decision making. Key concepts discussed include Nash equilibrium, where each player adopts the optimal strategy given other players' strategies. Examples of applications are given in fields like economics, politics and biology. Different types of games and solutions concepts like mixed strategies are also introduced.
Solving Degenaracy in Transportation Problemmkmanik
- The document discusses solving degeneracy in transportation problems using the example of a transportation problem with 4 sources and 5 destinations.
- An initial basic feasible solution is found using the least cost method, but it results in a degenerate solution since the number of allocated cells is less than m + n - 1.
- To solve the degeneracy, an unallocated cell is selected and allocated a value to satisfy the condition. Here, an unallocated cell value of 5 is selected and assigned the value ε.
- The solution is then optimized using the U-V method by calculating Uj + Vi = Cij for allocated cells and penalties Pij for unallocated cells until all penalties are less than
Application of Mathematics in Business : F 107 - Group Kjafar_sadik
The document discusses using linear equations and differential calculus to analyze the business operations of BestWay CNG Filling Station. Linear equations are used to determine the cost, revenue, and profit functions, and calculate the break-even point of 60,000 units. Differential calculus is applied to minimize the area needed for the station given space requirements, determining the optimal dimensions are 245.4 feet by 163 feet for an area of 88,792.2 square feet.
This document covers basic mathematics concepts including mixed numbers, addition and subtraction of mixed numbers, multiplication and division of mixed numbers, and order of operations. It provides step-by-step instructions on how to perform each operation with mixed numbers through examples and practice problems. It also discusses estimating with fractions and mixed numbers and solving applied problems involving various operations with mixed numbers.
This document presents information about the shortest path problem in graphs. It defines key graph terms like vertices, edges, and discusses weighted, directed, and undirected graphs. It provides an example of finding the shortest path between two vertices in a graph using Dijkstra's algorithm and walks through the steps of running the algorithm on a sample graph to find the shortest path between vertices 1 and 9.
Game theory deals with decision making situations where two opponents have conflicting objectives. A game is represented by a payoff matrix showing the payoff to one player for each combination of strategies. The optimal solution, known as a saddle point, is the strategies where neither player can increase their payoff by changing only their own strategy. Mixed strategies, where players randomize between pure strategies, may be required if a pure strategy saddle point does not exist. Graphical and linear programming methods can be used to solve games with mixed strategies.
This document provides an overview of game theory. It defines game theory as the study of how people interact and make decisions strategically, taking into account that each person's actions impact others. It discusses the history and key concepts of game theory, including players, strategies, payoffs, assumptions of rationality and perfect information. It provides examples of zero-sum and non-zero-sum games like the Prisoner's Dilemma. The document is intended to introduce game theory and its basic elements.
This document provides an overview of game theory, which was developed in 1928 to analyze competitive situations. It describes various types of games, such as zero-sum, non-zero-sum, pure-strategy, and mixed-strategy games. Methods for solving different types of games are presented, including the saddle point method for 2x2 games, dominance method, graphical method, and algebraic method. Limitations of game theory in assuming perfect information and rational behavior are also noted.
The document provides an overview of linear programming, including its applications, assumptions, and mathematical formulation. Some key points:
- Linear programming is a tool for maximizing or minimizing quantities like profit or cost, subject to constraints. 50-90% of business decisions and computations involve linear programming.
- Applications in business include production, personnel, inventory, marketing, financial, and blending problems. The objective is to optimize variables like costs, profits, or resources while meeting constraints.
- Assumptions of linear programming include certainty, linearity/proportionality, additivity, divisibility, non-negativity, finiteness, and optimality at corner points.
- A linear programming problem is modeled mathemat
Game theory is the study of how optimal strategies are formulated in conflict situations involving two or more rational opponents with competing interests. It considers how the strategies of one player will impact the outcomes for others. Game theory models classify games based on the number of players, whether the total payoff is zero-sum, and the types of strategies used. The minimax-maximin principle provides a way to determine optimal strategies without knowing the opponent's strategy by having each player maximize their minimum payoff or minimize their maximum loss. A saddle point exists when the maximin and minimax values are equal, indicating optimal strategies for both players.
Game theory is the study of strategic decision making between two or more players under conditions of conflict or competition. A game involves players following a set of rules and receiving payoffs depending on the strategies chosen. Strategies include pure strategies that always select a particular action and mixed strategies that randomly select among pure strategies. The optimal strategies are those that maximize the minimum payoff for one player and minimize the maximum payoff for the other player. When the maximin and minimax values are equal, there is a saddle point representing the optimal strategies for both players.
This document provides an overview of linear programming problems and methods for solving them. It defines a linear programming problem and describes how to write it in standard form with decision variables and constraints. It then explains the simplex method, including how to form an initial tableau and iterate to reach an optimal solution. Finally, it introduces the Big-M method for handling problems with inequality constraints by adding artificial variables with large penalty coefficients. An example demonstrates both simplex and Big-M methods.
The document provides an introduction to operations research. It discusses that operations research is a systematic approach to decision-making and problem-solving that uses techniques like statistics, mathematics, and modeling to arrive at optimal solutions. It also briefly outlines some primary tools used in operations research like statistics, game theory, and probability theory. The document then gives a short history of operations research, noting that it originated in the UK during World War II to analyze problems like radar systems. It concludes with discussing the scope and applications of operations research in fields like management, regulation, and economics.
The document discusses rules of inference in logic. It begins by defining an argument as having premises and a conclusion. Several common rules of inference are then outlined, including modus ponens, modus tollens, and disjunctive syllogism. The remainder of the document works through examples of arguments and tests their validity using the rules of inference. It symbolically represents the arguments and shows the step-by-step workings to determine if the conclusions follow logically from the premises.
This document provides an overview of game theory concepts including its development, assumptions, classification of games, elements, significance, limitations, and methods for solving different types of games. Some key points:
- Game theory was developed in 1928 by John Von Neumann and Oscar Morgenstern to analyze decision-making involving two or more rational opponents.
- Games can be classified as two-person, n-person, zero-sum, non-zero-sum, pure-strategy, or mixed-strategy.
- Elements include the payoff matrix, dominance rules, optimal strategies, and the value of the game.
- Methods for solving games include using pure strategies if a saddle point exists, or mixed
The document discusses transportation problems (TPs), which involve determining the optimal way to route products from multiple supply locations to multiple demand destinations to minimize total transportation costs. It provides the mathematical formulation of a TP as a linear programming problem (LPP) with decision variables representing the quantity transported between each origin-destination pair. Methods for solving TPs include the simplex method by formulating it as an LPP or specialized transportation methods like the northwest corner rule to find an initial feasible solution and stepping stone/modified distribution methods to check for optimality. An example TP is presented to illustrate these concepts.
Some Properties of Determinant of Trapezoidal Fuzzy Number MatricesIJMERJOURNAL
ABSTRACT: The fuzzy set theory has been applied in many fields such as management, engineering, matrices and so on. In this paper, some elementary operations on proposed trapezoidal fuzzy numbers (TrFNs) are defined. We also defined some operations on trapezoidal fuzzy matrices (TrFMs). The notion of Determinant of trapezoidal fuzzy matrices are introduced and discussed. Some of their relevant properties have also been verified.
The document provides instruction on solving applied problems using linear equations. It outlines six steps for solving applied problems: 1) read the problem, 2) assign a variable, 3) write an equation, 4) solve the equation, 5) state the answer, and 6) check the answer. Several examples are then worked through step-by-step to demonstrate solving problems involving unknown numbers, sums of quantities, and consecutive integers. The examples illustrate applying the six step process to arrive at reasonable solutions.
1. The document discusses universal quantification and quantifiers. Universal quantification refers to statements that are true for all variables, while quantifiers are words like "some" or "all" that refer to quantities.
2. It explains that a universally quantified statement is of the form "For all x, P(x) is true" and is defined to be true if P(x) is true for every x, and false if P(x) is false for at least one x.
3. When the universe of discourse can be listed as x1, x2, etc., a universal statement is equivalent to the conjunction P(x1) and P(x2) and so on, because this conjunction is true precisely when P(xi) holds for every xi.
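Point 3 can be checked directly in code: over a finite universe, the universal statement and the explicit conjunction agree (the predicate and universe below are illustrative):

```python
# "For all x, P(x)" over a finite universe x1..xn is the conjunction
# P(x1) and P(x2) and ... and P(xn).

def P(x):                # illustrative predicate: "x is even"
    return x % 2 == 0

universe = [2, 4, 6, 8]  # illustrative finite universe of discourse

universally_true = all(P(x) for x in universe)   # "for all x, P(x)"
conjunction = P(2) and P(4) and P(6) and P(8)    # the explicit conjunction
assert universally_true == conjunction == True

# A single counterexample falsifies the universal statement:
assert all(P(x) for x in universe + [3]) is False
```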
This document discusses big-O, Ω, and Θ notation for analyzing algorithms and describes how to determine the time complexity of various algorithms. It provides examples of algorithms with different complexities, such as O(n), O(n^2), and O(n^3). It explains that both big-O and big-Ω describe the worst case time, and how to prove the lower and upper bounds for different algorithms.
This document discusses algorithms and their analysis. It begins by defining an algorithm and its key characteristics like being finite, definite, and terminating after a finite number of steps. It then discusses designing algorithms to minimize cost and analyzing algorithms to predict their performance. Various algorithm design techniques are covered like divide and conquer, binary search, and its recursive implementation. Asymptotic notations like Big-O, Omega, and Theta are introduced to analyze time and space complexity. Specific algorithms like merge sort, quicksort, and their recursive implementations are explained in detail.
A compact zero knowledge proof to restrict message space in homomorphic encry... (MITSUNARI Shigeo)
1) The document proposes a generic method to restrict the message space in homomorphic encryption using zero-knowledge proofs. It converts conditions on multiple ciphertexts into constant-size non-interactive zero-knowledge proofs.
2) Specifically, it shows that multiple ciphertexts satisfying simultaneous polynomial equations can be proven with a four element proof.
3) It then applies this to the concrete case of a two-level homomorphic encryption scheme, proposing a non-interactive zero-knowledge proof with four group elements to prove a ciphertext encrypts a value of 0.
Quick Sort is a sorting algorithm that partitions an array around a pivot element, recursively sorting the subarrays. It has a best case time complexity of O(n log n) when partitions are evenly divided, and worst case of O(n^2) when partitions are highly imbalanced. While fast, it is unstable and dependent on pivot selection. It is widely used due to its efficiency, simplicity, and ability to be parallelized.
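The partition-and-recurse scheme described above can be sketched as follows (an out-of-place variant for clarity; production implementations usually partition in place):

```python
# Minimal quicksort sketch: partition around a pivot, recurse on subarrays.
# (Out-of-place for readability; illustrative, not an in-place implementation.)

def quicksort(items):
    if len(items) <= 1:
        return items                     # base case: already sorted
    pivot = items[len(items) // 2]       # pivot choice drives performance
    left   = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right  = [x for x in items if x > pivot]
    return quicksort(left) + middle + quicksort(right)

print(quicksort([3, 6, 1, 8, 2, 9, 4]))   # [1, 2, 3, 4, 6, 8, 9]
```

The worst case O(n^2) arises exactly when the chosen pivot repeatedly lands near the minimum or maximum, making one side of the partition empty.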
Unit-1 Basic Concept of Algorithm.pptx (ssuser01e301)
The document discusses various topics related to algorithms including algorithm design, real-life applications, analysis, and implementation. It specifically covers four algorithms - the taxi algorithm, rent-a-car algorithm, call-me algorithm, and bus algorithm - for getting from an airport to a house. It also provides examples of simple multiplication methods like the American, English, and Russian approaches as well as the divide and conquer method.
Paper Study: Melding the data decision pipeline (ChenYiHuang5)
Melding the data decision pipeline: Decision-Focused Learning for Combinatorial Optimization from AAAI2019.
The author derives the math equations independently and matches the results of the two cited CMU papers [Donti et al. 2017; Amos et al. 2017] by applying the same derivation procedure.
Optimum Engineering Design - Day 4 - Classical methods of optimization (SantiagoGarridoBulln)
The document provides information about optimization methods and integer programming problems. It discusses various optimization problem formulations including linear integer programming, binary integer programming, and mixed integer programming. It also describes methods for solving discrete optimization problems like the enumeration method, branch and bound method, and cutting plane method. Examples are provided to illustrate linear programming problems with integral coefficients and how to solve binary integer programming problems using implicit enumeration.
Searching algorithms can be categorized as internal or external depending on whether the list resides entirely in main memory or secondary storage. Linear or sequential search is a simple search algorithm that checks each element of a list sequentially until a match is found or the whole list has been searched. Binary search is a faster search algorithm that can only be used on sorted lists. It divides the search space in half at each step to locate the target element.
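Both search strategies can be sketched side by side (illustrative; `linear_search` and `binary_search` are my names, not from the document):

```python
# Linear vs. binary search, as described above (illustrative sketch).

def linear_search(items, target):
    """O(n): check each element in turn; works on unsorted lists."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halve the search space each step; requires a sorted list."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 5, 8, 12, 16, 23, 38]
assert linear_search(data, 23) == binary_search(data, 23) == 5
```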
Sorting algorithms arrange items in a list in a specific order. Common sorting algorithms include selection sort, insertion sort, bubble sort, merge sort, quicksort, and radix sort. Sorting algorithms are analyzed based on their time and space complexity, with most simple algorithms having quadratic time complexity.
Data Structure & Algorithms - Mathematical (babuk110)
This document discusses various mathematical notations and asymptotic analysis used for analyzing algorithms. It covers floor and ceiling functions, remainder function, summation symbol, factorial function, permutations, exponents, logarithms, Big-O, Big-Omega and Theta notations. It provides examples of calculating time complexity of insertion sort and bubble sort using asymptotic notations. It also discusses space complexity analysis and how to calculate the space required by an algorithm.
1) The document describes algorithms for solving the maximum flow and electrical flow problems on graphs.
2) It introduces the multiplicative weight update method, which can be used to find an approximate maximum flow in O_ε(m^(3/2)) time by reducing the problem to approximating electrical flows.
3) The algorithm works by having the "follower" maintain a distribution over edges using MWU based on "money" or congestion values revealed by approximate electrical flow computations.
This document discusses methods for data fitting, including interpolation and least squares fitting. It explains that interpolation is used to estimate values within existing data points, while fitting finds the general behavior of data. Linear and quadratic interpolation are introduced, along with the Lagrange and Aitken methods. Least squares fitting finds the curve that best fits a set of data by minimizing the sum of squared residuals, with the example of fitting a straight line using normal equations.
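The straight-line least-squares fit via the normal equations, as described above, can be sketched as follows (the data points are illustrative; for y = a + b*x the normal equations are n*a + (Σx)*b = Σy and (Σx)*a + (Σx²)*b = Σxy):

```python
# Least-squares straight line y = a + b*x via the normal equations
# (illustrative sketch; data points are made up).

def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

# Points lying exactly on y = 2x + 1, so the fit should recover it:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
assert abs(a - 1.0) < 1e-9 and abs(b - 2.0) < 1e-9
```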
This chapter draws on the previously published Ch. 1 and Ch. 3 and is intended for undergraduate students in a physics department. It can also be used for mathematics and statistics courses and for experimental courses on data fitting.
The document discusses algorithms, including their definition, properties, analysis of time and space complexity, and examples of recursion and iteration. It defines an algorithm as a finite set of instructions to accomplish a task. Properties include inputs, outputs, finiteness, definiteness, and effectiveness. Time complexity is analyzed using big-O notation, while space complexity considers static and variable parts. Recursion uses function calls to solve sub-problems, while iteration uses loops. Examples include factorial calculation, GCD, and Towers of Hanoi solved recursively.
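The three recursive examples mentioned above (factorial, GCD, Towers of Hanoi) as minimal sketches:

```python
# Minimal recursive sketches of the examples named in the summary.

def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

def gcd(a, b):
    return a if b == 0 else gcd(b, a % b)      # Euclid's algorithm

def hanoi(n, src, dst, aux, moves):
    """Append the move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)         # clear the way
    moves.append((src, dst))                   # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)         # re-stack on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)
assert factorial(5) == 120 and gcd(48, 18) == 6 and len(moves) == 2**3 - 1
```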
This document summarizes a new method for projective splitting algorithms called projective splitting with forward steps. The method allows using forward steps instead of proximal steps when the operator is Lipschitz continuous. This can improve efficiency compared to only using proximal steps. Preliminary computational tests on LASSO problems show the method with greedy block selection and asynchronous delays can speed up convergence compared to non-greedy, synchronous versions. However, more work is still needed to fully understand adaptive step sizes and how to minimize the separation function at each iteration.
2. Asymptotic Notations and Complexity Analysis.pptx (Rams715121)
This document provides an overview of algorithms and asymptotic analysis. It defines key terms like algorithms, complexity analysis, and input/output specifications. It discusses analyzing time and space complexity, and introduces asymptotic notations like Big-O, Big-Omega, and Big-Theta to classify algorithms based on how their running time grows relative to the input size. Common algorithm classes like logarithmic, linear, quadratic, and exponential functions are presented. Examples of linear and binary search algorithms are provided to illustrate algorithm specifications and complexity analyses.
Linear search examines each element of a list sequentially, one by one, and checks if it is the target value. It has a time complexity of O(n) as it requires searching through each element in the worst case. While simple to implement, linear search is inefficient for large lists as other algorithms like binary search require fewer comparisons.
Brief History of Visual Representation Learning (Sangwoo Mo)
The document summarizes the history of visual representation learning in 3 eras: (1) 2012-2015 saw the evolution of deep learning architectures like AlexNet and ResNet; (2) 2016-2019 brought diverse learning paradigms for tasks like few-shot learning and self-supervised learning; (3) 2020-present focuses on scaling laws and foundation models through larger models, data and compute as well as self-supervised methods like MAE and multimodal models like CLIP. The field is now exploring how to scale up vision transformers to match natural language models and better combine self-supervision and generative models.
Learning Visual Representations from Uncurated Data (Sangwoo Mo)
Slide about the defense of my Ph.D. dissertation: "Learning Visual Representations from Uncurated Data"
It includes four papers about
- Learning from multi-object images for contrastive learning [1] and Vision Transformer (ViT) [2]
- Learning with limited labels (semi-sup) for image classification [3] and vision-language [4] models
[1] Mo*, Kang* et al. Object-aware Contrastive Learning for Debiased Scene Representation. NeurIPS’21.
[2] Kang*, Mo* et al. OAMixer: Object-aware Mixing Layer for Vision Transformers. CVPRW’22.
[3] Mo et al. RoPAWS: Robust Semi-supervised Representation Learning from Uncurated Data. ICLR’23.
[4] Mo et al. S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions. Under Review.
This document proposes using hyperbolic space to embed hierarchical tree structures, like those that can represent sequences of events in reinforcement learning problems. Specifically, it suggests a method called S-RYM that applies spectral normalization to regularize gradients when training deep reinforcement learning agents with hyperbolic embeddings. This stabilization technique allows naive hyperbolic embeddings to outperform standard Euclidean embeddings. It works by reducing gradient norm explosions during training, allowing the entropy loss to converge properly. The document provides technical details on spectral normalization, hyperbolic space representations, and how S-RYM trains deep reinforcement learning agents with stabilized hyperbolic embeddings.
A Unified Framework for Computer Vision Tasks: (Conditional) Generative Model... (Sangwoo Mo)
Lab seminar introduces Ting Chen's recent 3 works:
- Pix2seq: A Language Modeling Framework for Object Detection (ICLR’22)
- A Unified Sequence Interface for Vision Tasks (NeurIPS’22)
- A Generalist Framework for Panoptic Segmentation of Images and Videos (submitted to ICLR’23)
This document is a slide presentation on recent advances in deep learning. It discusses self-supervised learning, which involves using unlabeled data to learn representations by predicting structural information within the data. The presentation covers pretext tasks, invariance-based approaches, and generation-based approaches for self-supervised learning in computer vision and natural language processing. It provides examples of specific self-supervised methods like predicting image rotations, clustering representations to generate pseudo-labels, and masked language modeling.
Deep Learning Theory Seminar (Chap 3, part 2) (Sangwoo Mo)
This document summarizes key points from a lecture on deep learning theory:
1) It discusses the Maurey sampling technique, which shows that a finite sample approximation X̂ of a random variable X converges to X as the number of samples k goes to infinity.
2) It proposes extending this technique to sample finite-width neural networks by converting the weight distribution of an infinite network to a probability measure through normalization.
3) The approximation error between outputs of the infinite and finite networks is bounded using Maurey sampling, with the bound converging to zero as the number of samples increases.
Deep Learning Theory Seminar (Chap 1-2, part 1) (Sangwoo Mo)
1. The document discusses the approximation capabilities of deep neural networks. It outlines topics that will be covered, including approximation, optimization, and generalization.
2. For approximation, it shows that a neural network can approximate any smooth function over a compact domain to any desired accuracy by bounding the function norm. Specifically, it presents constructive proofs that a univariate function can be approximated by a 2-layer network and a multivariate function by a 3-layer network.
3. The chapter will prove approximation capabilities of finite-width neural networks, including constructive proofs for specific activations and universal approximation for general activations. It will discuss approximating indicators with ReLU activations.
The document provides an introduction to diffusion models. It discusses that diffusion models have achieved state-of-the-art performance in image generation, density estimation, and image editing. Specifically, it covers the Denoising Diffusion Probabilistic Model (DDPM) which reparametrizes the reverse distributions of diffusion models to be more efficient. It also discusses the Denoising Diffusion Implicit Model (DDIM) which generates rough sketches of images and then refines them, significantly reducing the number of sampling steps needed compared to DDPM. In summary, diffusion models have emerged as a highly effective approach for generative modeling tasks.
1) The document discusses object-region video transformers (ORViT) for video recognition. ORViT applies attention at both the patch and object levels.
2) ORViT considers three aspects of objects: the objects themselves, interactions between objects, and object dynamics over time.
3) Experimental results show ORViT outperforms baseline models on action recognition, compositional action recognition, and spatio-temporal action detection tasks. ORViT better captures object-level information and dynamics compared to patch-level attention alone.
Deep Implicit Layers: Learning Structured Problems with Neural Networks (Sangwoo Mo)
Deep implicit layers allow neural networks to solve structured problems by following algorithmic rules. They include layers for convex optimization, discrete optimization, differential equations, and more. The forward pass runs an algorithm, while the backward pass computes gradients using algorithmic properties like KKT conditions. This enables problems like structured prediction, meta-learning, and time series modeling to be solved reliably with neural networks by respecting their underlying structure.
Learning Theory 101 ...and Towards Learning the Flat Minima (Sangwoo Mo)
The document discusses recent theories on why deep neural networks generalize well despite being highly overparameterized. Classic learning theory, which assumes restricting the hypothesis space is necessary for generalization, fails to explain modern neural networks. Recent studies suggest neural networks generalize because 1) their complexity is underestimated and 2) SGD regularization finds flat minima. Sharpness-aware minimization (SAM) directly optimizes for flat minima and consistently improves generalization, especially for vision transformers which have sharper loss landscapes than ResNets. SAM produces more interpretable attention maps and significantly boosts performance of vision transformers and MLP-Mixers on in-domain and out-of-domain tasks.
Lab seminar on
- Sharpness-Aware Minimization for Efficiently Improving Generalization (ICLR 2021)
- When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations (under review)
This document summarizes recent advances in deep generative models with explicit density estimation. It discusses variational autoencoders (VAEs), including techniques to improve VAEs such as importance weighting, semi-amortized inference, and mitigating posterior collapse. It also covers energy-based models, autoregressive models, flow-based models, vector-quantized VAEs, hierarchical VAEs, and diffusion probabilistic models. The document provides an overview of these generative models with a focus on density estimation and generation quality.
This document summarizes research on reducing the computational complexity of self-attention in Transformer models from O(L^2) to O(L log L) or O(L). It describes the Reformer model, which uses locality-sensitive hashing to achieve O(L log L) complexity; the Linformer model, which uses low-rank approximations and random projections to achieve O(L) complexity; and the Synthesizer model, which replaces self-attention with dense or random attention. It also briefly discusses the expressive power of sparse Transformer models.
This document summarizes two meta-learning papers:
1) "Meta-Learning with Implicit Gradients" which introduces Implicit Model-Agnostic Meta-Learning (iMAML), an efficient alternative to MAML that computes meta-gradients without differentiating through the inner loop.
2) "Modular Meta-Learning with Shrinkage" which proposes learning a separate set of parameters for each module with different levels of shrinkage, optimized in an alternating manner to avoid collapse.
Deep Learning for Natural Language Processing (Sangwoo Mo)
This document summarizes a lecture on recent advances in deep learning for natural language processing. It discusses improvements to network architectures like attention mechanisms and self-attention, which help models learn long-term dependencies and attend to relevant parts of the input. It also discusses improved training methods to reduce exposure bias and the loss-evaluation mismatch. Newer models presented include the Transformer, which uses only self-attention, and BERT, which introduces a pretrained bidirectional transformer encoder that achieves state-of-the-art results on many NLP tasks.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip, presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Dandelion Hashtable: beyond billion requests per second on a commodity server (Antonios Katsarakis)
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Digital Marketing Trends in 2024 | Guide for Staying Ahead (Wask)
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
5th LF Energy Power Grid Model Meet-up Slides (DanBrown980551)
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Trusted Execution Environment for Decentralized Process Mining (LucaBarbaro3)
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
HCL Notes and Domino license cost reduction in the world of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to stay on top of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low going forward.
Topics covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can apply immediately
Monitoring and Managing Anomaly Detection on OpenShift.pdf (Tosin Akinosho)
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Skybuffer SAM4U tool for SAP license adoption (Tatiana Kojar)
Manage and optimize your license adoption and consumption with SAM4U, a complimentary SAP software asset management tool for customers.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin... (Tatiana Kojar)
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Fueling AI with Great Data with Airbyte Webinar (Zilliz)
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe (Precisely)
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
5. LP Duality (Example)
• Let's consider a simple example
minimize 7x_1 + x_2 + 5x_3
subject to x_1 − x_2 + 3x_3 ≥ 10
5x_1 + 2x_2 − x_3 ≥ 6
x_1, x_2, x_3 ≥ 0
• Let f be the objective, and f_1 and f_2 be the L.H.S. of the two constraints respectively
• The goal is finding a lower bound on the objective f
• Since the f_i are bounded below and the x_j are nonnegative, if we can represent f as a positive combination of the f_i and x_j, we can bound f below
• ex) 7x_1 + x_2 + 5x_3 ≥ (x_1 − x_2 + 3x_3) + (5x_1 + 2x_2 − x_3) ≥ 10 + 6 = 16
5/34
6. LP Duality (Example)
• Let f be the objective, and f_1 and f_2 be the L.H.S. of the two constraints respectively
• The goal is finding a lower bound on the objective f
• Let y_i be a nonnegative weight for f_i such that f ≥ Σ_i y_i f_i
• For this, each coefficient of Σ_i y_i f_i should be upper-bounded by the corresponding coefficient of f
• This leads to the dual program in the y_i, which is also an LP
• For the previous example, the dual program is
maximize 10y_1 + 6y_2
subject to y_1 + 5y_2 ≤ 7
−y_1 + 2y_2 ≤ 1
3y_1 − y_2 ≤ 5
y_1, y_2 ≥ 0
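The primal/dual pair above can be checked numerically; a minimal sketch, assuming SciPy is available (its linprog minimizes, so the dual maximization is passed with a negated objective):

```python
from scipy.optimize import linprog

# Primal: minimize 7x_1 + x_2 + 5x_3
#         s.t. x_1 - x_2 + 3x_3 >= 10,  5x_1 + 2x_2 - x_3 >= 6,  x >= 0
# linprog uses A_ub @ x <= b_ub, so each ">=" row is negated.
c = [7, 1, 5]
A_ub = [[-1, 1, -3], [-5, -2, 1]]
b_ub = [-10, -6]
primal = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)

# Dual: maximize 10y_1 + 6y_2
#       s.t. y_1 + 5y_2 <= 7,  -y_1 + 2y_2 <= 1,  3y_1 - y_2 <= 5,  y >= 0
d = [-10, -6]                      # negated: maximize -> minimize
D_ub = [[1, 5], [-1, 2], [3, -1]]
e_ub = [7, 1, 5]
dual = linprog(d, A_ub=D_ub, b_ub=e_ub, bounds=[(0, None)] * 2)

print(primal.fun)    # primal optimum, approx. 26
print(-dual.fun)     # dual optimum, the same value
```

Both optima come out to 26, illustrating the duality theorem on the next slide: the best lower bound obtainable from the constraints equals the primal optimum.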
7. Duality Theorem
• LP duality theorem: Let x*, y* be optimal solutions of the primal and dual programs respectively. Then Σ_j c_j x_j* = Σ_i b_i y_i*, i.e. p* = d*
• Weak duality theorem: Let x, y be feasible solutions of the primal and dual programs respectively. Then Σ_j c_j x_j ≥ Σ_i b_i y_i
• Proof) Σ_j c_j x_j ≥ Σ_j (Σ_i a_ij y_i) x_j = Σ_i (Σ_j a_ij x_j) y_i ≥ Σ_i b_i y_i
• Complementary slackness: From the equality condition, we obtain
• Primal C.S.: For each j, either x_j = 0 or Σ_i a_ij y_i = c_j
• Dual C.S.: For each i, either y_i = 0 or Σ_j a_ij x_j = b_i
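The complementary slackness conditions can be verified exactly on the example LP from the previous slides, whose optima (obtainable with any LP solver) are x* = (7/4, 0, 11/4) and y* = (2, 1); a small sketch using exact rational arithmetic:

```python
# Check complementary slackness for the example LP at its optima.
from fractions import Fraction as F

A = [[F(1), F(-1), F(3)],       # constraint rows a_i
     [F(5), F(2), F(-1)]]
b = [F(10), F(6)]
c = [F(7), F(1), F(5)]
x = [F(7, 4), F(0), F(11, 4)]   # primal optimum x*
y = [F(2), F(1)]                # dual optimum y*

# Primal C.S.: for each j, x_j = 0 or sum_i a_ij y_i = c_j
for j in range(3):
    col = sum(A[i][j] * y[i] for i in range(2))
    assert x[j] == 0 or col == c[j]

# Dual C.S.: for each i, y_i = 0 or sum_j a_ij x_j = b_i
for i in range(2):
    row = sum(A[i][j] * x[j] for j in range(3))
    assert y[i] == 0 or row == b[i]

# Strong duality: both objectives agree
assert sum(c[j] * x[j] for j in range(3)) == sum(b[i] * y[i] for i in range(2)) == 26
print("complementary slackness holds; optimum =", 26)
```

Note that the second dual constraint is slack (−y_1 + 2y_2 = 0 < 1), and correspondingly x_2 = 0, exactly as primal C.S. requires.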
9. Facility Location
• (Metric Uncapacitated) Facility Location:
• Let G be a bipartite graph with bipartition (F, C)
• Let f_i be the cost of opening facility i
• Let c_ij be the cost of connecting city j to facility i
• Find the optimal way to connect cities to open facilities
Source: http://www.or.uni-bonn.de/~vygen/files/tokyo.pdf
10. Facility Location (Formal Ver.)
• (Metric Uncapacitated) Facility Location:
• Let G be a bipartite graph with bipartition (F, C)
• Let f_i be the cost of opening facility i
• Let c_ij be the cost of connecting city j to facility i
• Then
minimize Σ_{i∈F, j∈C} c_ij x_ij + Σ_{i∈F} f_i y_i
subject to Σ_{i∈F} x_ij ≥ 1 for all j ∈ C
y_i − x_ij ≥ 0 for all i ∈ F, j ∈ C
x_ij ∈ {0,1} for all i ∈ F, j ∈ C
y_i ∈ {0,1} for all i ∈ F
• Let i ∈ I iff y_i = 1, and φ(j) = i iff x_ij = 1
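Relaxing the integrality constraints to 0 ≤ x_ij, y_i ≤ 1 gives an LP whose optimum lower-bounds the integer program; a sketch on a tiny made-up instance (costs f and c below are arbitrary), assuming SciPy, with a brute-forced integral optimum for comparison:

```python
# Variables are ordered [x_11, x_12, x_13, x_21, x_22, x_23, y_1, y_2].
from itertools import product
from scipy.optimize import linprog

f = [3, 4]                         # hypothetical facility opening costs
c = [[1, 3, 5], [4, 2, 1]]         # c[i][j]: connect city j to facility i
nF, nC = 2, 3

obj = [c[i][j] for i in range(nF) for j in range(nC)] + f

A_ub, b_ub = [], []
for j in range(nC):                # coverage: -sum_i x_ij <= -1
    row = [0.0] * (nF * nC + nF)
    for i in range(nF):
        row[i * nC + j] = -1.0
    A_ub.append(row); b_ub.append(-1.0)
for i in range(nF):                # linking: x_ij - y_i <= 0
    for j in range(nC):
        row = [0.0] * (nF * nC + nF)
        row[i * nC + j] = 1.0
        row[nF * nC + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)

lp = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(obj))

# Brute-force the integer program: try every nonempty facility subset
# and connect each city to its cheapest open facility.
best = min(sum(f[i] for i in range(nF) if opened[i])
           + sum(min(c[i][j] for i in range(nF) if opened[i]) for j in range(nC))
           for opened in product([0, 1], repeat=nF) if any(opened))
print(lp.fun, best)   # LP optimum is a lower bound on the IP optimum
```

On this particular instance the relaxation happens to be integral (both values are 11), but in general the LP optimum can be strictly smaller; the algorithm on the following slides rounds the duals of this relaxation.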
15. Algorithm
• Phase 1: Collect all candidate facilities
• Call (i, j) tight if α_j ≥ c_ij, and call (i, j) special if β_ij > 0
• Initialize every city as not connected, and α_j = β_ij = 0
• While not every city is connected:
• Increase α_j of every unconnected city by 1
• If (i, j) is tight, increase β_ij by 1 to maintain α_j = c_ij + β_ij
• If f_i = Σ_j β_ij, let facility i be temporarily open
• Every city j tight to a temporarily open facility i is now connected to i
16. Algorithm
• Phase 2: Prune unnecessary facilities
• Call i and i′ conflicting if there is a city j such that β_ij, β_i′j > 0
• Consider the graph F̃ whose nodes are the temporarily open facilities, with an edge whenever two nodes are conflicting
• Choose a maximal independent set I ⊆ F̃ and let every i ∈ I be open
• If φ(j) ∈ I, keep the connection and say j is directly connected to φ(j)
• Else, let i be any neighbor of φ(j) in I (one exists since I is maximal), connect j to i, and say j is indirectly connected to i
• Note that β_ij = 0 if j is indirectly connected to i
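The two phases above can be sketched as a simplified discrete-time simulation (assuming integer costs, unit time steps, and index-order tie-breaking; the toy instance at the bottom is made up for illustration):

```python
def facility_location(f, c):
    """Primal-dual sketch: f[i] = cost of opening facility i,
    c[i][j] = cost of connecting city j to facility i (integers)."""
    nF, nC = len(f), len(c[0])
    alpha = [0] * nC                        # dual variable per city
    beta = [[0] * nC for _ in range(nF)]    # dual variable per edge
    tmp_open = [False] * nF
    phi = [None] * nC                       # phi[j]: facility j is connected to

    # Phase 1: raise duals in unit steps until every city is connected.
    while any(p is None for p in phi):
        for j in range(nC):
            if phi[j] is None:
                alpha[j] += 1               # grow unconnected cities' duals
        for i in range(nF):
            if tmp_open[i]:
                continue
            for j in range(nC):
                if phi[j] is None and alpha[j] > c[i][j]:
                    beta[i][j] = alpha[j] - c[i][j]   # keep alpha_j = c_ij + beta_ij
            if sum(beta[i]) >= f[i]:
                tmp_open[i] = True          # opening cost fully paid for
        for i in range(nF):
            if tmp_open[i]:
                for j in range(nC):
                    if phi[j] is None and alpha[j] >= c[i][j]:
                        phi[j] = i          # connect tight city

    # Phase 2: keep a maximal independent set of the conflict graph.
    def conflicting(i, k):
        return any(beta[i][j] > 0 and beta[k][j] > 0 for j in range(nC))

    I = []
    for i in range(nF):
        if tmp_open[i] and not any(conflicting(i, k) for k in I):
            I.append(i)
    for j in range(nC):
        if phi[j] not in I:                 # reconnect j indirectly to a
            phi[j] = next(k for k in I if conflicting(phi[j], k))  # neighbor in I
    return I, phi, alpha

# Hypothetical toy instance: two facilities, two cities.
f = [1, 1]
c = [[1, 3], [3, 1]]
I, phi, alpha = facility_location(f, c)
total = sum(f[i] for i in I) + sum(c[phi[j]][j] for j in range(len(phi)))
print(I, phi, total)   # [0, 1] [0, 1] 4
```

On this instance no conflicts arise, so both facilities stay open and every city is directly connected; the bound proved on slide 31 (connection cost + 3 × opening cost ≤ 3 Σ_j α_j) can be checked directly on the returned duals.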
31. Theorem
• Theorem: The solution of the algorithm satisfies
Σ_{i∈F, j∈C} c_ij x_ij + 3 Σ_{i∈F} f_i y_i ≤ 3 Σ_{j∈C} α_j
• Corollary: The algorithm is a 3-approximation, since the dual solution gives Σ_{j∈C} α_j ≤ OPT
• Proof) Split α_j = α_j^f + α_j^e, the contributions to the facility and edge costs respectively
• If j is directly connected, let α_j^f = β_ij and α_j^e = c_ij where i = φ(j)
• If j is indirectly connected, let α_j^f = 0 and α_j^e = α_j
• Goal: prove that
① Σ_{i∈F} f_i y_i = Σ_{j∈C} α_j^f
② Σ_{i∈F, j∈C} c_ij x_ij ≤ 3 Σ_{j∈C} α_j^e
32. Theorem
• Lemma 1: Σ_{i∈F} f_i y_i = Σ_{j∈C} α_j^f
• Proof)
Σ_{i∈F} f_i y_i = Σ_{i∈I} f_i = Σ_{i∈I} Σ_{j: φ(j)=i} α_j^f = Σ_{j∈C} α_j^f
• Lemma 2: Σ_{i∈F, j∈C} c_ij x_ij ≤ 3 Σ_{j∈C} α_j^e
• Proof) For a directly connected city, c_φ(j)j = α_j^e ≤ 3α_j^e
• Goal: prove that c_φ(j)j ≤ 3α_j^e for indirectly connected cities too
• Then
Σ_{i∈F, j∈C} c_ij x_ij = Σ_{j∈C} c_φ(j)j ≤ 3 Σ_{j∈C} α_j^e
33. Theorem
• Lemma 2: Σ_{i∈F, j∈C} c_ij x_ij ≤ 3 Σ_{j∈C} α_j^e
• Goal: prove that c_φ(j)j ≤ 3α_j^e for an indirectly connected city j
• Let j be temporarily connected to i′, and indirectly connected to i
• Then there is a city j′ that is specially connected to both i and i′
• Let t_1 and t_2 be the times at which i and i′ are opened
• Since both (i, j′) and (i′, j′) are special, they must be tight before either i or i′ is opened, i.e. α_j′ ≤ min(t_1, t_2)
• Since (i′, j) is tight, α_j ≥ t_2 (j stays unconnected, and its dual keeps growing, until i′ opens), which leads to α_j ≥ α_j′
• By tightness, α_j ≥ c_i′j, α_j′ ≥ c_ij′, and α_j′ ≥ c_i′j′
• Thus, by the triangle inequality, c_ij ≤ c_i′j + c_ij′ + c_i′j′ ≤ 3α_j = 3α_j^e