
Sippin: A Mobile Application Case Study presented at Techfest Louisville

"Sippin: A Mobile Application Case Study" was presented at Techfest Louisville 2017, hosted by the Technology Association of Louisville Kentucky on Aug. 16th-17th.


  1. Mobile Apps and Artificial Intelligence
     Ray Tri | CTO & Co-Founder
     Pai Charasika | CMO & Co-Founder
  2. Who are these guys?
     Pai Charasika
     • Wake Forest
     • Business and Communications
     • Financial Advisor
     • Local Entrepreneur
     • Salesman Extraordinaire
     Ray Tri
     • University of Louisville
     • Organizational Leadership and Development
     • Business Consultant
     • Serial Entrepreneur
     • Full Stack Developer
  3. What in the world is Sippin?
  4. Let’s talk AI: Intro to Artificial Intelligence
  5. Terminology: AI, Machine Learning, and Neural Networks
     • AI – The broadest term, used to cover all of these approaches.
     • Machine Learning – Using mathematical algorithms to learn from data and predict.
     • Neural Network – Attempts to replicate the function and structure of the human brain to learn and predict.
     • Deep Learning – A combination of machine learning and neural networks that draws even more significant insights.
     Key Takeaway: All of these approaches are DATA-DRIVEN.
  6. Early AI: Machine Learning vs. Rules-Based Programming (IBM’s Deep Blue vs. Google’s AlphaGo)
     • Chess: 1.40 × 10^7 legal positions
     • Go: 3.72 × 10^79 legal positions
     Deep Blue:
     • Calculates 2 × 10^7 positions per second
     • Loops through all possible plays to pick the best one
     • Based on rules written by programmers
     AlphaGo:
     • Impossible to loop through all possible positions and pick the best one
     • Instead uses a predictive model to determine its next move
     • Built its predictive model by analyzing games of Go played by experts
  7. Early AI: Machine Learning vs. Rules-Based Programming
     Key Takeaway:
     • Deep Blue – Programmers wrote the rules it used to make decisions.
     • AlphaGo – It designed its own predictive model to make decisions.
  8. Types of Learning: Supervised vs. Unsupervised
     • Supervised – A human selects the specific data the model learns from; the trained model maps input data to an output.
     • Unsupervised – No pre-selected data is provided; the analysis groups the raw data into classes on its own.
  9. Applications of Supervised Learning: what problems can it solve?
     Classification
     • Classification problems involve real-world situations in which there is a need to determine an outcome from a set of discrete (non-continuous) outcomes, based on a set of factors.
     Example: Should I golf today? Pre-selected “training” data builds a predictive classification model; feeding it “live” input data produces the outcome: Yes.
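The classification workflow on this slide can be sketched in a few lines. This is a hypothetical illustration, not code from the talk: the weather features, the training rows, and the choice of a 1-nearest-neighbor rule as the "predictive classification model" are all invented for the example.

```python
# Supervised classification sketch: "Should I golf today?"
# Model: 1-nearest-neighbor over invented (temperature_F, humidity_pct) data.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, live_input):
    """Return the label of the training example closest to the live input."""
    features, label = min(training_data, key=lambda row: distance(row[0], live_input))
    return label

# Pre-selected "training" data: (temperature, humidity) -> should I golf?
training = [
    ((85, 90), "No"),
    ((72, 40), "Yes"),
    ((65, 35), "Yes"),
    ((50, 80), "No"),
]

# "Live" input data
print(predict(training, (70, 45)))  # -> Yes
```

The key property the slide describes is visible here: the human supplies labeled examples up front, and the model only interpolates from them.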
  10. Applications of Supervised Learning: what problems can it solve?
      Regression
      • Regression problems involve real-world situations in which there is a need to determine an outcome from a continuous range of outcomes, based on a set of factors.
      Example: The probability that a patient has a second medical incident. Pre-selected “training” data builds a predictive regression model; feeding it “live” input data produces the outcome: Second Attack: 38%.
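The regression case can be sketched the same way. This is a minimal, invented example (the risk scores and observed probabilities are made up, and ordinary least squares on one feature stands in for whatever model the speakers actually used):

```python
# Supervised regression sketch: fit a line to training data with the
# closed-form least-squares slope/intercept, then predict a "live" input.

def fit(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Invented training data: risk score -> observed probability of a second incident
xs = [1, 2, 3, 4, 5]
ys = [0.10, 0.18, 0.31, 0.42, 0.49]

model = fit(xs, ys)
print(round(predict(model, 3.5), 2))  # -> 0.35
```

Unlike the classification example, the output is a point on a continuous scale rather than one of a fixed set of labels, which is exactly the distinction the two slides draw.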
  11. Applications of Unsupervised Learning: what problems can it solve?
      Clustering / Anomaly Detection
      • Clustering and anomaly-detection problems involve finding and identifying data or patterns that are significant (or abnormal, deviating from the norm).
      Example: What do people buy? Machine-learning analysis of all available data produces the outcome: if people buy diapers, they probably also buy beer.
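The diapers-and-beer finding is classic market-basket analysis, and its core can be sketched with a plain co-occurrence count. The transactions below are invented for illustration; "confidence" here means the fraction of baskets containing item A that also contain item B.

```python
# Unsupervised sketch: mine an association ("diapers -> beer") from raw,
# unlabeled transactions by counting co-occurrence. No labels are given;
# the pattern emerges from the data itself.

def confidence(transactions, a, b):
    """Fraction of transactions containing `a` that also contain `b`."""
    with_a = [t for t in transactions if a in t]
    if not with_a:
        return 0.0
    return sum(1 for t in with_a if b in t) / len(with_a)

transactions = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"diapers", "bread"},
    {"beer", "chips"},
    {"milk", "bread"},
]

print(confidence(transactions, "diapers", "beer"))  # 2 of 3 diaper baskets also have beer
```

Real association-rule miners (e.g. Apriori) also filter by support and scale to many items, but the unsupervised character is the same: nobody told the algorithm which pairs to look for.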
  12. Neural Networks: what is a Neural Network?
      • A Neural Network attempts to replicate the function and structure of the human brain in software to learn and predict.
      In a single artificial neuron: inputs arrive, the inputs are weighted and summed, the “decision” (activation) happens, and an output is sent to the next neuron.
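The four steps on this slide map directly onto a few lines of code. The weight values below are arbitrary illustration values, and a sigmoid is assumed as the activation (the slides use it on slide 14):

```python
import math

# A single artificial neuron: weighted inputs, a sum, and a sigmoid
# "decision" that squashes the result into (0, 1).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights):
    """Weight each input, sum them, and apply the activation function."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(z)

output = neuron([1.0, 0.5], [0.8, -0.4])  # z = 0.8 - 0.2 = 0.6
print(round(output, 3))
```

"Learning" in a neural network means adjusting those weight values; the structure of the neuron itself never changes.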
  13. Neural Networks: scale it up!
      • A Neural Network passes inputs to many neurons, which pass their outputs to hidden layers for further calculation. Finally, an output layer gives us our end result.
      • “Back propagation” is the means by which a Neural Network modifies its own weights (learns) based on whether its output was correct. This is a very complex subject that we will not get into today.
  14. Neural Networks: let’s look at one!
      I want to know how well I will do on a test (y), based on two factors: how many hours I slept the night before (x1) and how many hours I spent studying for the test (x2).
      • Hidden layer (three neurons): z_j = x1·w1_1j + x2·w1_2j, then a_j = σ(z_j), where σ(x) = 1 / (1 + e^(−x)) is the sigmoid activation.
      • Output layer: y = σ(a1·w2_11 + a2·w2_12 + a3·w2_13).
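The slide's 2-input, 3-hidden-neuron, 1-output network can be run end to end as a forward pass. The weight values and the hours-slept/hours-studied input below are invented for illustration; the layer equations are the ones from the slide.

```python
import math

# Forward pass of a 2-3-1 network: each layer is a weighted sum
# followed by the sigmoid 1 / (1 + e^-x).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2, w1, w2):
    """w1[i][j]: weight from input i to hidden neuron j; w2[j]: hidden j to output."""
    # Hidden layer: a_j = sigmoid(x1*w1[0][j] + x2*w1[1][j])
    a = [sigmoid(x1 * w1[0][j] + x2 * w1[1][j]) for j in range(3)]
    # Output layer: y = sigmoid(a1*w2[0] + a2*w2[1] + a3*w2[2])
    return sigmoid(sum(a_j * w_j for a_j, w_j in zip(a, w2)))

w1 = [[0.2, -0.1, 0.4],   # weights from x1 (hours slept)
      [0.5, 0.3, -0.2]]   # weights from x2 (hours studied)
w2 = [0.6, -0.3, 0.9]

score = forward(8, 3, w1, w2)   # slept 8 hours, studied 3
print(round(score, 3))
```

Training (the back propagation the previous slide sets aside) would nudge w1 and w2 until the predicted score matches observed test results; the forward pass itself stays exactly this simple.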
  15. Deep Learning Neural Networks: pump your neural network up!
      • A Deep Learning Neural Network has many, many more hidden layers used for computation. This gives it the added advantage of being able to perform more and more complex tasks as more layers are added.
  16. Conclusion: let the questions begin.