
Intro To Gradient Descent in Javascript

Gradient descent is the algorithm at the heart of many machine learning problems. In this talk, I’ll introduce the algorithm and code it up from scratch, applying it to a toy linear regression problem: the relationship between video game Metacritic scores and sales.


  1. gradient descent
  2. const makeModel = (data, learningRate, iterations) => {
       let m = 0;
       let b = 0;
       for (let i = 0; i < iterations; i++) {
         // Average gradient of the cost with respect to m.
         const mGrad = data.reduce(
           (acc, { criticScore, globalSales }) =>
             acc + (m * criticScore + b - globalSales) * criticScore,
           0
         ) / data.length;
         // Average gradient of the cost with respect to b.
         const bGrad = data.reduce(
           (acc, { criticScore, globalSales }) =>
             acc + (m * criticScore + b - globalSales),
           0
         ) / data.length;
         // Step both parameters downhill.
         m -= mGrad * learningRate;
         b -= bGrad * learningRate;
       }
       return { m, b, predict: (criticScore) => m * criticScore + b };
     };
  3. 6.5 million
  4. Create a function that predicts global sales given Metacritic score.
  5. Code…
  6. y = m · x + b
  7. Sales = m · CriticScore + b
  8. m = +0.0253 b = -1.1406
  9. Code…
  10. m = +0.0253 b = -1.1406
  11. m = ??????? b = ???????
  12. m = 0 b = 0
  13. mean( )
  14. mean( )
  15. Cost = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  16. Cost = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  17. Cost(m, b) = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  18. Code… (a cost-function sketch appears after the transcript)
  19. Cost(m, b) = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  20. Cost(m, b) = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²;  Cost(m) = (1/n) · ∑ᵢ₌₁ⁿ (m·xᵢ − yᵢ)²
  21. m = 0, m′ > 0 (hint: why might this algo be called “gradient descent?”)
  22. m = m − m′
  23. m = -1, m′ < 0, m = m − (−m′) (see the derivative-sign sketch after the transcript)
  24. Cost = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  25. J = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  26. J = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²;  ∂J/∂m ≈ (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ) · xᵢ
  27. Questions?
  28. Code… (see the gradient sketch after the transcript)
  29. m = 0, m′ > 0
  30. b = 0, b′ > 0
  31. J = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²
  32. J = (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)²;  ∂J/∂b ≈ (1/n) · ∑ᵢ₌₁ⁿ ((m·xᵢ + b) − yᵢ)
  33. Code…
  34. 1000000
  35. Code…
  36. “Learning Rate” = 1
  37. “Learning Rate” = .1
  38. .0003
  39. Code… (see the usage sketch after the transcript)
  40. const makeModel = (data, learningRate, iterations) => {
        let m = 0;
        let b = 0;
        for (let i = 0; i < iterations; i++) {
          // Average gradient of the cost with respect to m.
          const mGrad = data.reduce(
            (acc, { criticScore, globalSales }) =>
              acc + (m * criticScore + b - globalSales) * criticScore,
            0
          ) / data.length;
          // Average gradient of the cost with respect to b.
          const bGrad = data.reduce(
            (acc, { criticScore, globalSales }) =>
              acc + (m * criticScore + b - globalSales),
            0
          ) / data.length;
          // Step both parameters downhill.
          m -= mGrad * learningRate;
          b -= bGrad * learningRate;
        }
        return { m, b, predict: (criticScore) => m * criticScore + b };
      };
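
The “Code…” slides stand in for live coding, so the sketches below reconstruct what those steps plausibly looked like. First, the cost function from slides 15-18: a minimal mean-squared-error helper over the same { criticScore, globalSales } records that makeModel consumes. The name cost is mine, not from the talk.

  // Mean squared error: Cost(m, b) = (1/n) · ∑((m·xᵢ + b) − yᵢ)²
  const cost = (data, m, b) =>
    data.reduce(
      (acc, { criticScore, globalSales }) =>
        acc + (m * criticScore + b - globalSales) ** 2,
      0
    ) / data.length;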
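
Slides 21-23 argue by sign: when m′ > 0 the cost rises to the right of m, so m = m − m′ steps left; when m′ < 0 the subtraction steps right. A central-difference estimate makes that checkable; the step size h and the name derivativeAtM are illustrative choices of mine, and the snippet reuses the cost helper above.

  // Numerical slope of the cost in m (b held at 0 for the 1-D picture).
  // A positive estimate means cost increases to the right, so m -= slope
  // moves m left, toward lower cost; a negative estimate moves it right.
  const h = 1e-6;
  const derivativeAtM = (data, m) =>
    (cost(data, m + h, 0) - cost(data, m - h, 0)) / (2 * h);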
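
Slides 26 and 32 give the two partial derivatives the training loop uses; the constant factor 2 from differentiating the square is dropped (hence the ≈), since it only rescales the learning rate. Transcribed into the same reduce style as makeModel, with mGradient and bGradient as my names:

  // ∂J/∂m ≈ (1/n) · ∑((m·xᵢ + b) − yᵢ) · xᵢ
  const mGradient = (data, m, b) =>
    data.reduce(
      (acc, { criticScore, globalSales }) =>
        acc + (m * criticScore + b - globalSales) * criticScore,
      0
    ) / data.length;

  // ∂J/∂b ≈ (1/n) · ∑((m·xᵢ + b) − yᵢ)
  const bGradient = (data, m, b) =>
    data.reduce(
      (acc, { criticScore, globalSales }) =>
        acc + (m * criticScore + b - globalSales),
      0
    ) / data.length;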
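
Finally, a hypothetical way to drive makeModel from slide 40. The four records below are made-up placeholders, not the talk’s real video game data. With critic scores on a 0-100 scale, the squared-score term makes the m gradient large, so a learning rate of 1 overshoots and diverges; that is why slides 36-38 walk it down from 1 to .1 to .0003.

  // Made-up records in the shape makeModel expects.
  const sample = [
    { criticScore: 76, globalSales: 0.82 },
    { criticScore: 90, globalSales: 1.15 },
    { criticScore: 61, globalSales: 0.33 },
    { criticScore: 84, globalSales: 0.95 },
  ];

  // Small learning rate, many iterations, as on slides 34-38.
  const model = makeModel(sample, 0.0003, 1000000);
  console.log(model.m, model.b);
  console.log(model.predict(85)); // predicted global sales for a score of 85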
