Gradient descent method 
2013.11.10 
Sanghyuk Chun 
Many contents are from 
Large Scale Optimization Lecture 4 & 5 by Caramanis & Sanghavi 
Convex Optimization Lecture 10 by Boyd & Vandenberghe 
Convex Optimization textbook Chapter 9 by Boyd & Vandenberghe 1
Contents 
•Introduction 
•Example code & Usage 
•Convergence Conditions 
•Methods & Examples 
•Summary 
2
Introduction 
Unconstrained minimization problems, Description, Pros and Cons 
3
Unconstrained minimization problems 
•Recall: Constrained minimization problems 
•From Lecture 1, the formulation of a general constrained convex optimization problem is as follows 
•min f(x) s.t. x ∈ χ 
•where f: χ → R is convex and smooth 
•From Lecture 1, the formulation of an unconstrained optimization problem is as follows 
•min f(x) 
•where f: R^n → R is convex and smooth 
•In this problem, the necessary and sufficient condition for an optimal solution x_0 is 
•∇f(x) = 0 at x = x_0 
4
Unconstrained minimization problems 
•Minimize f(x) 
•When f is differentiable and convex, a necessary and sufficient condition for a point x* to be optimal is ∇f(x*) = 0 
•Minimizing f(x) is therefore the same as finding a solution of ∇f(x*) = 0 
•Min f(x): analytically solving the optimality equation 
•∇f(x*) = 0: usually solved by an iterative algorithm 
5
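As a concrete illustration (not part of the original slides), the optimality equation can be solved directly for a convex quadratic; the matrix A and vector b below are made-up example values.

```python
import numpy as np

# Made-up example: f(x) = 0.5 x^T A x - b^T x with A positive definite,
# so f is convex and its gradient is A x - b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

# Analytically solve the optimality equation: A x - b = 0.
x_star = np.linalg.solve(A, b)

# The gradient vanishes at the solution (up to floating-point error).
print(x_star, np.linalg.norm(A @ x_star - b))
```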
Description of Gradient Descent Method 
•The idea relies on the fact that −∇f(x^(k)) is a descent direction 
•x^(k+1) = x^(k) − η_k ∇f(x^(k)) with f(x^(k+1)) < f(x^(k)) 
•Δx^(k) is the step, or search direction 
•η_k is the step size, or step length 
•Too small an η_k will cause slow convergence 
•Too large an η_k can overshoot the minimum and diverge (both effects are illustrated in the sketch below) 
6
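A minimal sketch of this trade-off (not from the slides) on the toy function f(x) = x², whose gradient is 2x; the three step sizes are arbitrary choices for illustration only.

```python
def run_fixed_step(eta, x0=5.0, iters=20):
    """Iterate x := x - eta * f'(x) for f(x) = x**2, i.e., f'(x) = 2x."""
    x = x0
    for _ in range(iters):
        x = x - eta * 2.0 * x
    return x

print(run_fixed_step(eta=0.01))   # too small: still far from 0 after 20 steps
print(run_fixed_step(eta=0.5))    # well chosen: reaches the minimum immediately
print(run_fixed_step(eta=1.1))    # too large: |x| grows and the iteration diverges
```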
Description of Gradient Descent Method 
•Algorithm (Gradient Descent Method) 
•given a starting point x ∈ dom f 
•repeat 
1. Δx := −∇f(x) 
2. Line search: choose step size η via exact or backtracking line search 
3. Update x := x + ηΔx 
•until stopping criterion is satisfied 
•Stopping criterion usually ‖∇f(x)‖₂ ≤ ε 
•Very simple, but often very slow; rarely used in practice 
7
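A minimal sketch of this loop (assuming a fixed step size in place of step 2; the function name gradient_descent and the toy gradient below are illustrative, not from the slides):

```python
import numpy as np

def gradient_descent(grad, x0, eta=0.1, eps=1e-6, max_iters=10000):
    """Repeat x := x + eta * dx with dx = -grad(x) until ||grad(x)||_2 <= eps.
    A fixed step eta stands in for the exact/backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # stopping criterion from the slide
            break
        x = x - eta * g                # move along the descent direction
    return x

# Toy convex objective f(x) = 0.5 ||x||^2 + x_1, so grad f(x) = x + e_1.
grad = lambda x: x + np.array([1.0, 0.0])
print(gradient_descent(grad, x0=[3.0, -2.0]))   # converges to (-1, 0)
```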
Pros and Cons 
•Pros 
•Can be applied in any dimension or space (even infinite-dimensional spaces) 
•Easy to implement 
•Cons 
•Local optima problem 
•Relatively slow close to the minimum 
•For non-differentiable functions, gradient methods are ill-defined 
8
Example Code & Usage 
Example Code, Usage, Questions 
9
Gradient Descent Example Code 
•http://mirlab.org/jang/matlab/toolbox/machineLearning/ 
10
Usage of Gradient Descent Method 
•Linear Regression 
•Find the minimum of the loss function to choose the best hypothesis 
Example of loss function: 
(data_predicted − data_observed)² 
Find the hypothesis (function) which minimizes the loss function (a linear-regression sketch follows below) 
11 
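A hedged sketch of this usage (not from the slides): gradient descent on the squared-error loss of a one-variable linear hypothesis y ≈ w·x + b. The synthetic data, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Synthetic "observed" data generated from y = 2x + 0.5 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 0.5 + 0.05 * rng.standard_normal(100)

w, b, eta = 0.0, 0.0, 0.1
for _ in range(2000):
    err = (w * x + b) - y             # predicted - observed
    grad_w = 2.0 * np.mean(err * x)   # d(mean squared loss)/dw
    grad_b = 2.0 * np.mean(err)       # d(mean squared loss)/db
    w, b = w - eta * grad_w, b - eta * grad_b

print(w, b)   # close to the generating values 2.0 and 0.5
```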
Usage of Gradient Descent Method 
•Neural Network 
•Back propagation 
•SVM (Support Vector Machine) 
•Graphical models 
•Least Mean Squares (LMS) filter 
…and many other applications! 
12
Questions 
•Does the Gradient Descent Method always converge? 
•If not, what is the condition for convergence? 
•How can we make the Gradient Descent Method faster? 
•What is a proper value for the step size η_k? 
13
Convergence Conditions 
L-Lipschitz function, Strong Convexity, Condition number 
14
L-Lipschitz function 
•Definition 
•A function f: R^n → R is called L-Lipschitz (i.e., its gradient is L-Lipschitz continuous) if and only if ‖∇f(x) − ∇f(y)‖₂ ≤ L‖x − y‖₂, ∀x, y ∈ R^n 
•We denote this condition by f ∈ C_L, where C_L is the class of L-Lipschitz functions 
15
L-Lipschitz function 
•Lemma 4.1 
•If f ∈ C_L, then f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ (L/2)‖y − x‖₂² 
•Theorem 4.2 
•If f ∈ C_L and f* = min_x f(x) > −∞, then the gradient descent algorithm with fixed step size satisfying η < 2/L will converge to a stationary point (the step-size threshold is illustrated in the sketch below) 
16
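An illustrative check of the η < 2/L threshold (not from the slides): for the quadratic f(x) = 0.5 xᵀAx, the gradient Ax is L-Lipschitz with L equal to the largest eigenvalue of A. The matrix and step sizes below are made-up values.

```python
import numpy as np

A = np.diag([1.0, 10.0])               # made-up positive definite matrix
L = np.max(np.linalg.eigvalsh(A))      # Lipschitz constant of the gradient A x

def iterate(eta, steps=200):
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        x = x - eta * (A @ x)          # fixed-step gradient descent
    return np.linalg.norm(x)

print(iterate(0.9 * (2.0 / L)))   # eta < 2/L: converges toward the stationary point 0
print(iterate(1.1 * (2.0 / L)))   # eta > 2/L: the iterates blow up
```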
Strong Convexity and implications 
•Definition 
•If there exists a constant m > 0 such that ∇²f(x) ⪰ mI for ∀x ∈ S, then the function f(x) is strongly convex on S 
17
Strong Convexity and implications 
•Lemma 4.3 
•If f is strongly convex on S, we have the following inequality: 
•f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (m/2)‖y − x‖₂² for ∀x, y ∈ S 
•Proof 
18 
Minimizing the right-hand side over y gives p* ≥ f(x) − (1/(2m))‖∇f(x)‖₂², which is useful as a stopping criterion (if you know m)
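A sketch of that stopping rule (an illustration, not from the slides), assuming the strong convexity constant m is known; the quadratic objective below is a made-up example where m is the smallest eigenvalue of A.

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x; strong convexity constant m = smallest eigenvalue of A.
A = np.array([[4.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
m = np.min(np.linalg.eigvalsh(A))

x, eta, tol = np.zeros(2), 0.1, 1e-8
while True:
    g = A @ x - b
    if g @ g / (2.0 * m) <= tol:      # certified bound on f(x) - p* from Lemma 4.3
        break
    x = x - eta * g

p_star = -0.5 * b @ np.linalg.solve(A, b)   # true optimal value of this quadratic
print(0.5 * x @ A @ x - b @ x - p_star)     # suboptimality, no larger than tol
```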
Strong Convexity and implications 
19 
Proof
Upper Bound of ∇²f(x) 
•Lemma 4.3 implies that the sublevel sets contained in S are bounded, so in particular, S is bounded. Therefore the maximum eigenvalue of ∇²f(x) is bounded above on S 
•There exists a constant M such that ∇²f(x) ⪯ MI for ∀x ∈ S 
•Lemma 4.4 
•For any x, y ∈ S, if ∇²f(x) ⪯ MI for all x ∈ S then f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (M/2)‖y − x‖₂² 
20
Condition Number 
•From Lemma 4.3 and 4.4 we have 
mI ⪯ ∇²f(x) ⪯ MI for ∀x ∈ S, m > 0, M > 0 
•The ratio k = M/m is thus an upper bound on the condition number of the matrix ∇²f(x) 
•When the ratio is close to 1, we say the problem is well-conditioned 
•When the ratio is much larger than 1, we say it is ill-conditioned 
•When the ratio is exactly 1, we have the best case: a single step leads to the optimal solution, since there is no wrong direction (both cases are contrasted in the sketch below) 
21
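A small illustration of this effect (not from the slides): fixed-step gradient descent with η = 1/M on the quadratic f(x) = 0.5 xᵀAx, once with condition number 1 and once with condition number 100. The matrices and tolerance are arbitrary choices.

```python
import numpy as np

def steps_to_converge(kappa, tol=1e-8, max_iters=100000):
    A = np.diag([1.0, float(kappa)])   # m = 1, M = kappa, condition number = kappa
    eta = 1.0 / float(kappa)           # step size 1/M
    x = np.array([1.0, 1.0])
    for k in range(max_iters):
        g = A @ x
        if np.linalg.norm(g) <= tol:
            return k
        x = x - eta * g
    return max_iters

print(steps_to_converge(1))      # ratio 1: a single step reaches the optimum
print(steps_to_converge(100))    # ratio 100: around 1,800 iterations are needed
```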
Condition Number 
•Theorem 4.5 
•Gradient descent for a strongly convex function f with step size η = 1/M will converge as 
•f(x^(k)) − f* ≤ c^k (f(x^(0)) − f*), where c ≤ 1 − m/M 
•This rate of convergence c is known as linear convergence 
•Since we usually do not know the value of M, we do a line search instead 
•For exact line search, c = 1 − m/M 
•For backtracking line search, c = 1 − min{2mα, 2βαm/M} < 1 
22
Methods & Examples 
Exact Line Search, Backtracking Line Search, Coordinate Descent Method, Steepest Descent Method 
23
Exact Line Search 
• The optimal line search method, in which η is chosen to minimize f along the ray {x − η∇f(x) : η ≥ 0}, as shown below 
• Exact line search is used when the cost of the one-variable minimization problem is low compared to the cost of computing the search direction itself 
• It is not very practical (a closed-form special case is sketched below) 
24
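One case where exact line search is cheap (an illustration under assumed values, not from the slides): for the quadratic f(x) = 0.5 xᵀAx, minimizing f along the ray has the closed-form step η* = (gᵀg)/(gᵀAg), where g = ∇f(x).

```python
import numpy as np

A = np.diag([1.0, 10.0])             # made-up positive definite matrix
x = np.array([1.0, 1.0])

for _ in range(100):
    g = A @ x                        # gradient of f(x) = 0.5 x^T A x
    eta = (g @ g) / (g @ A @ g)      # exact minimizer of f(x - eta * g) over eta
    x = x - eta * g

print(x, 0.5 * x @ A @ x)            # close to the optimum x* = 0, f* = 0
```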
Exact Line Search 
•Convergence Analysis 
•f(x^(k)) − f* decreases by at least a constant factor in every iteration 
•It converges to 0 geometrically fast (linear convergence) 
25 
Backtracking Line Search 
•It depends on two constants α, β with 0 < α < 0.5, 0 < β < 1 
•It starts with unit step size and then reduces it by the factor β until the stopping condition holds: 
f(x − η∇f(x)) ≤ f(x) − αη‖∇f(x)‖₂² 
•Since −∇f(x) is a descent direction and −‖∇f(x)‖₂² < 0, for small enough step size η we have 
f(x − η∇f(x)) ≈ f(x) − η‖∇f(x)‖₂² < f(x) − αη‖∇f(x)‖₂² 
•This shows that the backtracking line search eventually terminates (a sketch of the rule follows below) 
•α is typically chosen between 0.01 and 0.3 
•β is often chosen to be between 0.1 and 0.8 
26
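A minimal sketch of the backtracking rule (the function name backtracking_step and the test quadratic are assumptions for illustration, not from the slides):

```python
import numpy as np

def backtracking_step(f, g, x, alpha=0.3, beta=0.8):
    """Start from eta = 1 and multiply by beta until
    f(x - eta*g) <= f(x) - alpha * eta * ||g||_2^2 holds."""
    eta, g2, fx = 1.0, g @ g, f(x)
    while f(x - eta * g) > fx - alpha * eta * g2:
        eta *= beta
    return eta

# Made-up test function: f(x) = 0.5 x^T A x with A = diag(1, 50).
A = np.diag([1.0, 50.0])
f = lambda x: 0.5 * x @ A @ x
x = np.array([1.0, 1.0])
eta = backtracking_step(f, A @ x, x)
print(eta, f(x - eta * (A @ x)) < f(x))   # accepted step size; it decreases f
```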
Backtracking Line Search 
27
Backtracking Line Search 
•Convergence Analysis 
•Claim: η ≤ 1/M always satisfies the stopping condition 
•Proof 
28
Backtracking Line Search 
•Proof (cont) 
29
Line search types 
•Slide from Optimization Lecture 10 by Boyd 30
Line search example 
•Slide from Optimization Lecture 10 by Boyd 31
Coordinate Descent Method 
• Coordinate descent belongs to the class of non-derivative (derivative-free) methods used for minimizing differentiable functions 
• Here, the cost is minimized in one coordinate direction in each iteration 
32
Coordinate Descent Method 
•Pros 
•It is well suited for parallel computation 
•Cons 
•May not reach the minimum even for a convex function 
33
Convergence of Coordinate Descent 
•Lemma 5.4 
34
Coordinate Descent Method 
•Methods of selecting the coordinate for the next iteration (a cyclic variant is sketched below) 
•Cyclic Coordinate Descent 
•Greedy Coordinate Descent 
•(Uniform) Random Coordinate Descent 
35
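A minimal sketch of the cyclic variant (an illustration on a made-up quadratic, not from the slides); for a quadratic the one-coordinate minimization in each step is exact.

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x.  Minimizing over x_i alone gives
# x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x = np.zeros(2)

for sweep in range(50):                # cyclic: visit coordinates 0, 1, 0, 1, ...
    for i in range(len(x)):
        residual = b[i] - A[i] @ x + A[i, i] * x[i]
        x[i] = residual / A[i, i]      # exact minimization along coordinate i

print(x, np.linalg.solve(A, b))        # coordinate descent matches the direct solve
```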
Steepest Descent Method 
•The gradient descent method takes many iterations 
•Steepest Descent Method aims at choosing the best direction at each iteration 
•Normalized steepest descent direction 
•Δx_nsd = argmin{∇f(x)ᵀv : ‖v‖ = 1} 
•Interpretation: for small v, f(x + v) ≈ f(x) + ∇f(x)ᵀv; the direction Δx_nsd is the unit-norm step with the most negative directional derivative 
•Iteratively, the algorithm follows the following steps 
•Calculate direction of descent Δ푥푛푠푑 
•Calculate step size, t 
•x⁺ = x + tΔx_nsd 
36
Steepest Descent for various norms 
•The choice of norm used for the steepest descent direction can have a dramatic effect on the convergence rate (the three directions are computed in the sketch below) 
•l2 norm 
•The steepest descent direction is as follows 
•Δx_nsd = −∇f(x)/‖∇f(x)‖₂ 
•l1 norm 
•For ‖x‖₁ = Σᵢ |xᵢ|, a descent direction is as follows 
•Δx_nsd = −sign(∂f(x)/∂x_i*) e_i* 
•i* = argmaxᵢ |∂f(x)/∂xᵢ| 
•l∞ norm 
•For ‖x‖∞ = maxᵢ |xᵢ|, a descent direction is as follows 
•Δx_nsd = −sign(∇f(x)) 
37
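A small sketch comparing the three directions (the helper names and the example gradient are made up for illustration):

```python
import numpy as np

def nsd_l2(g):
    """l2 norm: the normalized negative gradient."""
    return -g / np.linalg.norm(g)

def nsd_l1(g):
    """l1 norm: move along the single coordinate with the largest |partial derivative|."""
    i = np.argmax(np.abs(g))
    e = np.zeros_like(g)
    e[i] = 1.0
    return -np.sign(g[i]) * e

def nsd_linf(g):
    """l-infinity norm: flip the sign of every gradient component."""
    return -np.sign(g)

g = np.array([3.0, -1.0, 0.5])    # example gradient
print(nsd_l2(g), nsd_l1(g), nsd_linf(g))
```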
Steepest Descent for various norms 
38 
Quadratic Norm 
푙1-Norm
Steepest Descent for various norms 
•Example 
39
Steepest Descent Convergence Rate 
•Fact: any norm ‖·‖ can be bounded in terms of ‖·‖₂, i.e., ∃ γ, γ̃ ∈ (0,1] such that ‖x‖ ≥ γ‖x‖₂ and ‖x‖∗ ≥ γ̃‖x‖₂ 
•Theorem 5.5 
•If f is strongly convex with constants m and M, and ‖·‖ has γ, γ̃ as above, then steepest descent with backtracking line search has linear convergence with rate 
•c = 1 − 2mαγ² min{1, βγ̃/M} 
•Proof: will be given in Lecture 6 
40
Summary 
41
Summary 
•Unconstrained Convex Optimization Problem 
•Gradient Descent Method 
•Step size: trade-off between safety and speed 
•Convergence Conditions 
•L-Lipschitz Function 
•Strong Convexity 
•Condition Number 
42
Summary 
•Exact Line Search 
•Backtracking Line Search 
•Coordinate Descent Method 
•Good for parallel computation but does not always converge 
•Steepest Descent Method 
•The choice of norm is important 
43
44 
END OF DOCUMENT
