Lightning introduction to deep learning, convolutional neural networks, and recurrent neural networks using the translated Stanford CS-230 cheatsheets.
The original presentation was conducted in English, using the Japanese cheatsheets.
1. AI Open Education: Stanford Deep Learning Cheat Sheets in Japanese
A brief overview of Deep Learning
Kamuela Lau (@kamu_lau)
Software Engineer, Consultant at Rondhuit Co., Ltd.
October 30, 2019 Code Chrysalis x MLT MiniConf #6
2. Self Introduction
● RONDHUIT Co., Ltd.
○ Software Engineer/Consultant
● Graduate student at Georgia Institute of Technology
○ Computer Science w/ specialization in machine learning
● Machine Learning Tokyo Contributor
● Open source activities
● Article writing
○ https://codezine.jp/author/1834
Kamuela Lau (twitter: @kamu_lau)
The cheatsheets are a good reference for reviewing ML/DL or guiding your studies; if a concept is unknown to you, you can look it up.
Mentioned that he worked on the translation of the Tips and Tricks cheatsheet.
Will start with and focus on Tips and Tricks as it applies to deep learning in general.
Will briefly talk about RNNs and CNNs.
Mini-batch Gradient Descent:
First briefly mention gradient descent and the loss/objective function.
Loss function: measures the difference between the model's output and the expected value.
Benefits/drawbacks of computing the gradient on all the data (batch) vs. a single point (stochastic); mini-batches are a compromise between the two.
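A minimal sketch of mini-batch gradient descent, illustrating the compromise above. It uses linear regression with a mean-squared-error loss purely for illustration; the function name and hyperparameters are assumptions, not anything specified in the talk.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, batch_size=32, epochs=100, seed=0):
    """Illustrative mini-batch gradient descent for linear regression
    with MSE loss (names and defaults are assumptions)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        # Shuffle each epoch, then step through mini-batches
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            # Gradient of 0.5 * mean((Xb @ w - yb)**2) w.r.t. w
            grad = Xb.T @ (Xb @ w - yb) / len(batch)
            w -= lr * grad
    return w
```

With `batch_size=n` this degenerates to full-batch gradient descent (stable but expensive per step); with `batch_size=1` it becomes stochastic gradient descent (cheap but noisy).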
Cross Entropy
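A short sketch of the cross-entropy loss mentioned above, for a one-hot true label and a predicted probability distribution (a minimal illustration; the function name and the clipping epsilon are assumptions).

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between a one-hot label y_true and a predicted
    probability distribution y_pred: -sum(y_true * log(y_pred))."""
    # Clip to avoid log(0) when a predicted probability is exactly zero
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.sum(y_true * np.log(y_pred))
```

For a one-hot label this reduces to the negative log-probability the model assigns to the correct class, so a confident correct prediction gives a loss near zero.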