AI in 45 Minutes
  1. Learn AI in 45 Minutes. Presented by Lin Gu and Shijie Nie.
  2. CONTENTS: Why AI? (AI is taking human jobs NOW); ABC of AI (basic knowledge about AI); A Real Case (write a program in 30 minutes to replace your colleague's work); Conclusion (take-away message).
  3. Why AI?
  4. People used to believe AI could never beat humans in Go. 1997: amateur 5-dan level. October 2015: beat Fan Hui (2-dan professional). March 2016: beat Lee Sedol 4:1. April 2017: beat Ke Jie (top human Go player) 3:0.
  5. AI Is Replacing Humans Now. 50%: within the next decade or two, 50% of current jobs could be replaced (Oxford). 90%: in 15 years' time, 90% of news will be written by machines. 85%: between 2000 and 2010, 85% of manufacturing job losses were due to automation.
  6. Top Jobs to Be Replaced: data entry keyers, photo processors, insurance underwriters, office clerks, accountants. First: Microsoft is proposing research to let AI write code. Second: AI is beginning to replace doctors.
  7. AI in Breast Cancer Screening. Mammogram: breast cancer is the most common invasive cancer in women worldwide, so annual mammogram screening is recommended for women over 40. Two doctors before: previously, each mammogram was checked by two radiologists. One doctor now: now it is read by one computer and reviewed by a single radiologist. As good as doctors: AI shows performance (AUC 0.82) comparable to three of the radiologists (0.77-0.87).
  8. Trading Floor in UBS's Office. First: the Swiss bank's trading floor in Connecticut was as big as 20 basketball courts. Second: Goldman Sachs's New York headquarters once employed 600 equity traders (at about 500K USD per year); today just two are left. Third: across Goldman Sachs, over 30% of staff are now computer engineers.
  9. ABC of A.I.
  10. Image Classification: assign an input image one label. For example, an image classification model takes a single image and assigns probabilities to 4 labels, {cat, dog, hat, mug}. Keep in mind that to a computer an image is represented as one large 3-dimensional array of numbers. In this example, the cat image is 248 pixels wide, 400 pixels tall, and has three color channels: Red, Green, Blue (or RGB for short). Therefore, the image consists of 248 x 400 x 3 numbers, or a total of 297,600 numbers. Each number is an integer that ranges from 0 (black) to 255 (white). Our task is to turn this quarter of a million numbers into a single label, such as "cat".
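The "image as one large array of numbers" idea can be made concrete with a minimal numpy sketch (the sizes are the ones from the cat example above; the random pixel values are only a stand-in for a real photo):

```python
import numpy as np

# A 248 (wide) x 400 (tall) RGB image: to the computer it is just a
# 400 x 248 x 3 array of integers in the range 0 (black) to 255 (white).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(400, 248, 3), dtype=np.uint8)

print(image.size)   # 297600 numbers in total, exactly 248 * 400 * 3
print(image[0, 0])  # one pixel: its R, G, B values
```

The classifier's whole job is to map these 297,600 integers to a single label.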
  11. AI Pipeline. Data input: our input consists of a set of N images, each labeled with one of K different classes. We refer to this data as the training set. Learning: our task is to use the training set to learn what every one of the classes looks like. We refer to this step as training a classifier, or learning a model. Evaluation: in the end, we evaluate the quality of the classifier by asking it to predict labels for a new set of images that it has never seen before. We then compare the true labels of these images to the ones predicted by the classifier. Intuitively, we hope that many of the predictions match the true answers (which we call the ground truth). For scale, ImageNet contains 1.2 million images with 1000 categories.
  12. Nearest Neighbor Classifier
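The train/predict/evaluate pipeline above can be sketched with the simplest possible model, a 1-nearest-neighbor classifier on flattened pixels. This is a toy sketch with made-up 4-pixel "images" and L1 distance, not the full experiment:

```python
import numpy as np

def nearest_neighbor_predict(train_X, train_y, test_X):
    """For each test image, return the label of the closest training image (L1 distance)."""
    preds = []
    for x in test_X:
        distances = np.abs(train_X - x).sum(axis=1)  # L1 distance to every training image
        preds.append(train_y[np.argmin(distances)])
    return np.array(preds)

# Toy "images": 4-pixel vectors, two classes (0 = dark, 1 = bright).
train_X = np.array([[10, 12, 9, 11], [200, 210, 205, 198],
                    [15, 14, 13, 12], [190, 195, 200, 205]])
train_y = np.array([0, 1, 0, 1])
test_X  = np.array([[12, 11, 10, 13], [201, 199, 204, 200]])

print(nearest_neighbor_predict(train_X, train_y, test_X))  # → [0 1]
```

Note that "training" here is just memorizing the training set; all the work happens at prediction time, which is exactly backwards from the deep networks discussed later.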
  13. Which Is Nearest?
  14. Challenges
  15. Why Deep Learning. Deep learning (also known as deep structured learning or hierarchical learning) is the application to learning tasks of artificial neural networks (ANNs) that contain more than one hidden layer.
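To make "more than one hidden layer" concrete, here is a toy numpy forward pass through a network with two hidden layers (random weights and made-up sizes, ReLU activations; a sketch of the structure only, with no training):

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0, x)

# 4 input features -> hidden layer of 8 -> hidden layer of 8 -> 3 class scores.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 8)), np.zeros(8)
W3, b3 = rng.standard_normal((8, 3)), np.zeros(3)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer: this is what makes it "deep"
    return h2 @ W3 + b3      # raw class scores

scores = forward(rng.standard_normal(4))
print(scores.shape)  # (3,)
```

Training would adjust W1, W2, W3 by backpropagation; the point here is only the layered structure.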
  16. Pros and Cons. Cons: requires a GPU; prone to over-fitting. Pros: various applications (automatic speech recognition, image recognition, natural language processing, drug discovery and toxicology, customer relationship management, recommendation systems); fast in deployment (training takes a long time, but testing is fast).
  17. What is Deep Learning. The basic computational unit of the brain is a neuron. The field of neural networks was originally inspired primarily by the goal of modelling biological neural systems, but has since diverged and become a matter of engineering and achieving good results in machine learning tasks.
  18. What is Deep Learning. ConvNets transform the original image layer by layer from the original pixel values to the final class scores.
  19. What is Deep Learning
  20. A Real Example
  21. Category Correction. Many products are assigned to the wrong category (noise!): detect the misplaced products and reassign them to the right genres.
  22. Cat or Dog? We will present a few simple yet effective methods you can use to build a powerful image classifier using only very few training examples. First: only 2,000 training examples (1,000 per class). Second: training a small network from scratch (as a baseline). Third: fine-tuning the top layers of a pre-trained network to improve.
  23. Deep Learning for Small Data: augment the training images. First: rotation_range is a value in degrees (0-180), a range within which to randomly rotate pictures. Second: zoom_range is for randomly zooming inside pictures. Third: fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
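In Keras these options are parameters of the ImageDataGenerator class; the underlying idea can be sketched in plain numpy. This toy augmenter does a random horizontal flip plus a horizontal shift, and fills the newly created pixels from the nearest column (the same role fill_mode plays above); the function and its parameters are illustrative, not the library's API:

```python
import numpy as np

def augment(image, shift=2, rng=None):
    """Toy augmenter: random horizontal flip, then a horizontal shift,
    filling newly created columns with the nearest remaining column."""
    if rng is None:
        rng = np.random.default_rng()
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                  # random horizontal flip
    out = np.roll(out, shift, axis=1)       # horizontal shift
    out[:, :shift] = out[:, shift:shift+1]  # fill the gap (cf. fill_mode='nearest')
    return out

img = np.arange(16.0).reshape(4, 4)
print(augment(img, shift=1).shape)  # same shape, new pixel content each call
```

Each call yields a slightly different version of the same image, which is how a 2,000-image dataset is stretched to look much larger during training.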
  24. Training a Small Convnet from Scratch. First: 3 convolution layers, each with a ReLU activation and followed by a max-pooling layer. Second: on top of it we stick two fully-connected layers. Third: about 80% accuracy in 40 lines of code.
     model = Sequential()
     model.add(Conv2D(32, (3, 3), input_shape=(3, 150, 150)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))
     model.add(Conv2D(32, (3, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))
     model.add(Conv2D(64, (3, 3)))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))
     # the two fully-connected layers mentioned above
     model.add(Flatten())
     model.add(Dense(64))
     model.add(Activation('relu'))
     model.add(Dropout(0.5))
     model.add(Dense(1))
     model.add(Activation('sigmoid'))
  25. Using the bottleneck features of a pre-trained network: VGG16, pre-trained on the ImageNet dataset. We only instantiate the convolutional part of the model, everything up to the fully-connected layers. We then run this model on our training and validation data once, recording the output (the "bottleneck features" from the VGG16 model: the last activation maps before the fully-connected layers) in two numpy arrays. Then we train a small fully-connected model on top of the stored features. Because the ImageNet dataset contains several "cat" classes (persian cat, siamese cat...) and many "dog" classes among its total of 1000 classes, the pre-trained model has already learned features relevant to our problem.
  26. Using the bottleneck features of a pre-trained network: VGG16 (continued).
  27. Fine-tuning the Network. First: instantiate the convolutional base of VGG16 and load its weights. Second: add our previously defined fully-connected model on top, and load its weights. Third: freeze the layers of the VGG16 model up to the last convolutional block.
  28. Fine-tuning the top layers of a pre-trained network: VGG16. Instantiate the convolutional base of VGG16 and load its weights.
  29. Fine-tuning the top layers of a pre-trained network: VGG16 (continued).
  30. Accuracy Improvement. First round (small convnet from scratch): 79%. Bottleneck features of VGG16: 90%. Fine-tuning: 94%. With the full 25,000 training images: 98%.
  31. Off-the-shelf Deep Learning Techniques: ResNet, Pre-Activation ResNet, Inception V3, Xception.
  32. Q&A
  33. Lin Gu (lingu.edu@gmail.com) and Shijie Nie (nieshijie2011@gmail.com)
