Neural Networks: Performance Optimization
    Presentation Transcript

    • Performance Optimization
    • Basic Optimization Algorithm: $x_{k+1} = x_k + \alpha_k p_k$, or equivalently $\Delta x_k = x_{k+1} - x_k = \alpha_k p_k$, where $p_k$ is the search direction and $\alpha_k$ is the learning rate (step size).
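As a rough illustration of this generic update rule, here is a minimal Python sketch; the helper name direction_fn and the fixed step alpha are assumptions for the example, since the slides have not yet specified how either is chosen:

```python
import numpy as np

def optimize(x0, direction_fn, alpha=0.05, n_steps=100):
    """Generic iterative update x_{k+1} = x_k + alpha * p_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        p = direction_fn(x)   # search direction p_k
        x = x + alpha * p     # basic update rule with a fixed learning rate
    return x
```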
    • Steepest Descent: Choose the next step so that the function decreases: $F(x_{k+1}) < F(x_k)$. For small changes in $x$ we can approximate $F(x)$ with a first-order expansion: $F(x_{k+1}) = F(x_k + \Delta x_k) \approx F(x_k) + g_k^T \Delta x_k$, where $g_k \equiv \nabla F(x)\big|_{x=x_k}$. If we want the function to decrease, we need $g_k^T \Delta x_k = \alpha_k g_k^T p_k < 0$. We can maximize the decrease by choosing $p_k = -g_k$, giving the update $x_{k+1} = x_k - \alpha_k g_k$.
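A minimal steepest-descent sketch; the quadratic test function and the step size are assumptions chosen for illustration:

```python
import numpy as np

# Assumed example: F(x) = 0.5 x^T A x, minimized at the origin
A = np.array([[2.0, 0.0], [0.0, 50.0]])

def grad_F(x):
    return A @ x  # gradient g = A x

x = np.array([1.0, 1.0])
alpha = 0.01                    # fixed learning rate (see stability bound below)
for k in range(500):
    x = x - alpha * grad_F(x)   # steepest descent: p_k = -g_k
print(x)                        # close to the minimum at [0, 0]
```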
    • Example
    • Plot
    • Stable Learning Rates (Quadratic): For $F(x) = \frac{1}{2} x^T A x + d^T x + c$, steepest descent with a fixed learning rate gives $x_{k+1} = x_k - \alpha (A x_k + d) = [I - \alpha A] x_k - \alpha d$. Stability is determined by the eigenvalues of this matrix. The eigenvalues of $[I - \alpha A]$ are $(1 - \alpha \lambda_i)$, where $\lambda_i$ are the eigenvalues of $A$. Stability requirement: $|1 - \alpha \lambda_i| < 1$, i.e. $\alpha < 2/\lambda_{\max}$.
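A quick numerical check of this bound for the assumed matrix A from the previous sketch:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 50.0]])
lam = np.linalg.eigvalsh(A)        # eigenvalues of the Hessian A
print(2.0 / lam.max())             # stability bound: alpha < 2/lambda_max = 0.04

# Eigenvalues of [I - alpha*A] must lie strictly inside (-1, 1)
for alpha in (0.01, 0.05):
    print(alpha, np.linalg.eigvals(np.eye(2) - alpha * A))
# alpha = 0.01 -> 0.98, 0.50 (stable); alpha = 0.05 -> 0.9, -1.5 (divergent)
```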
    • Example
    • Minimizing Along a Line: Choose $\alpha_k$ to minimize $F(x_k + \alpha_k p_k)$. Setting the derivative with respect to $\alpha_k$ to zero gives $\alpha_k = -\dfrac{g_k^T p_k}{p_k^T A_k p_k}$, where $A_k \equiv \nabla^2 F(x)\big|_{x=x_k}$.
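A sketch of this exact line search, reusing the assumed quadratic example (for a quadratic, $A_k = A$ at every iterate); the final print also illustrates the orthogonality of successive steps noted on the plot slide below:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 50.0]])
grad_F = lambda x: A @ x

x = np.array([1.0, 1.0])
for k in range(10):
    g = grad_F(x)
    p = -g                              # steepest-descent direction
    alpha = -(g @ p) / (p @ A @ p)      # exact minimizer along the line
    x = x + alpha * p
# After an exact line search the new gradient is orthogonal to the
# previous search direction: g_{k+1}^T p_k = 0
print(grad_F(x) @ p)                    # ~0 up to round-off
```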
    • Example
    • Plot: Successive steps are orthogonal: $\nabla F(x)^T\big|_{x=x_{k+1}} \, p_k = g_{k+1}^T p_k = 0$.
    • Newton's Method: Use the second-order approximation $F(x_k + \Delta x_k) \approx F(x_k) + g_k^T \Delta x_k + \frac{1}{2} \Delta x_k^T A_k \Delta x_k$. Take the gradient of this second-order approximation and set it equal to zero to find the stationary point: $g_k + A_k \Delta x_k = 0$, so $\Delta x_k = -A_k^{-1} g_k$ and $x_{k+1} = x_k - A_k^{-1} g_k$.
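A minimal Newton-step sketch on the assumed quadratic, where the Hessian is constant and a single step lands exactly on the stationary point:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 50.0]])   # Hessian A_k of the quadratic
grad_F = lambda x: A @ x

x = np.array([1.0, 1.0])
# Newton step: x_{k+1} = x_k - A_k^{-1} g_k
x = x - np.linalg.solve(A, grad_F(x))     # solve, rather than invert A explicitly
print(x)                                  # exact minimum [0, 0] in one step
```

On a non-quadratic function the step must be iterated, and it can converge to whichever stationary point (minimum, maximum, or saddle) the local quadratic approximation leads to from the chosen starting point, which is the subject of the next two slides.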
    • Example
    • Plot
    • Non-Quadratic Example: Stationary points of $F(x)$ versus those of the second-order approximation $F_2(x)$. [Contour plots of $F(x)$ and $F_2(x)$.]
    • Different Initial Conditions: [Contour plots of $F(x)$ and $F_2(x)$ showing Newton trajectories from different starting points.]
    • Conjugate Vectors: A set of vectors $\{p_k\}$ is mutually conjugate with respect to a positive definite Hessian matrix $A$ if $p_k^T A p_j = 0$ for $k \ne j$. One set of conjugate vectors consists of the eigenvectors of $A$. (The eigenvectors of symmetric matrices are orthogonal.)
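A quick numerical illustration of this property; the example matrix is an assumption:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
_, V = np.linalg.eigh(A)                  # columns of V are eigenvectors of A
p0, p1 = V[:, 0], V[:, 1]
print(p0 @ A @ p1)   # ~0: the eigenvectors are conjugate with respect to A
print(p0 @ p1)       # ~0: and orthogonal, since A is symmetric
```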
    • For Quadratic Functions: The change in the gradient at iteration $k$ is $\Delta g_k = g_{k+1} - g_k = (A x_{k+1} + d) - (A x_k + d) = A \, \Delta x_k$, where $\Delta x_k = x_{k+1} - x_k = \alpha_k p_k$. The conjugacy conditions can therefore be rewritten $\alpha_k p_k^T A p_j = \Delta x_k^T A p_j = \Delta g_k^T p_j = 0$ for $k \ne j$. This does not require knowledge of the Hessian matrix.
    • Forming Conjugate Directions: Choose the initial search direction as the negative of the gradient, $p_0 = -g_0$. Choose subsequent search directions to be conjugate: $p_k = -g_k + \beta_k p_{k-1}$, where $\beta_k = \dfrac{\Delta g_{k-1}^T g_k}{\Delta g_{k-1}^T p_{k-1}}$ or $\beta_k = \dfrac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$ or $\beta_k = \dfrac{\Delta g_{k-1}^T g_k}{g_{k-1}^T g_{k-1}}$ (the Hestenes-Stiefel, Fletcher-Reeves, and Polak-Ribière choices, respectively).
    • Conjugate Gradient Algorithm (a runnable sketch follows this list)
      • The first search direction is the negative of the gradient: $p_0 = -g_0$.
      • Select the learning rate to minimize along the line: $\alpha_k = -\dfrac{g_k^T p_k}{p_k^T A_k p_k}$.
      • Select the next search direction using $p_k = -g_k + \beta_k p_{k-1}$.
      • If the algorithm has not converged, return to the second step.
      • A quadratic function will be minimized in at most $n$ steps (for quadratic functions, where $n$ is the dimension).
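Putting the pieces together: a minimal conjugate-gradient sketch for a quadratic $F(x) = \frac{1}{2} x^T A x + d^T x$; the matrix, offset, starting point, and the Fletcher-Reeves choice of $\beta_k$ are assumptions for the example:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # positive definite Hessian
d = np.array([-1.0, -2.0])
grad_F = lambda x: A @ x + d

x = np.array([0.0, 0.0])
g = grad_F(x)
p = -g                                   # step 1: p_0 = -g_0
for k in range(len(x)):                  # n steps suffice on a quadratic
    alpha = -(g @ p) / (p @ A @ p)       # step 2: exact line minimization
    x = x + alpha * p
    g_new = grad_F(x)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta_k
    p = -g_new + beta * p                # step 3: next (conjugate) direction
    g = g_new
print(x, np.linalg.norm(grad_F(x)))      # gradient ~0 after n = 2 steps
```

Note that the exact line-search formula uses $A$ only because the example is quadratic; for general functions $\alpha_k$ would be found by a numerical line search and $\beta_k$ needs only gradients, so, as the earlier slide states, the Hessian is never formed.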
    • Example
    • Example
    • Plots: Trajectory comparison, Conjugate Gradient versus Steepest Descent. [Contour plots.]