
Introduction to Artificial Intelligence Syllabus





Introduction to Artificial Intelligence
Andres Mendez-Vazquez
December 15, 2018
Email: amendez@gdl.cinvestav.mx
Room 365

Contents
1 Overview
2 Prerequisites
3 Bibliography
4 Course Requirements
  4.1 Exams
  4.2 Homework
  4.3 Project
  4.4 English
5 Subjects
  1.1 What is Artificial Intelligence? [5, 27, 28]
    1.1.1 Introduction
    1.1.2 Defining Artificial Intelligence
    1.1.3 Arguments about AI
  1.2 The Mathematics for Artificial Intelligence
    1.2.1 Probability [1, 4]
    1.2.2 Linear Algebra [17, 30]
    1.2.3 Optimization Basics [23, 2]
  1.3 Searching in AI [11]
    1.3.1 Classic Techniques
    1.3.2 Games as Searches
  1.4 Constraint Satisfaction Problems [13]
  1.5 Probabilistic Reasoning [25, 10]
  1.6 Bayesian Networks [22, 26]
  1.7 Graphical Models [19]
  1.8 Neural Networks [15]
  1.9 Machine Learning [3, 32, 14]
  1.10 Planning [21]
  1.11 Knowledge Representation [33, 9]
  1.12 Reinforcement Learning [31, 6, 24, 34]
  1.13 Logic in Artificial Intelligence [8, 7, 16, 33]
  1.14 Genetic Algorithms [12]
  1.15 Relational Learning [18]
1 Overview

It has been almost 62 years since the term Artificial Intelligence was coined by McCarthy, Minsky, et al. [27] at the 1956 workshop at Dartmouth College ("Dartmouth Summer Research Project on Artificial Intelligence"), where this new area of Computer Science was founded. However, the history of Artificial Intelligence goes back millennia [20], to when the Greeks in their myths spoke about the golden robots of Hephaestus and the Galatea of Pygmalion. These were the first automatons known at the dawn of history, and although these first attempts were only myths, automatons were invented and built by multiple civilizations throughout history. Nevertheless, these automatons, representing animals and humans, resembled their final objectives in only a limited way. In spite of that, the greatest illusion of an automaton, the Turk by Wolfgang von Kempelen [29], inspired many people through its exhibitions, among them Alexander Graham Bell and Charles Babbage, to develop inventions that would change human history forever. Hence the importance of the concept "Artificial Intelligence" as a driver of our technological dreams. And although Artificial Intelligence has never been defined in a precise, practical way, the amount of research and the number of methods developed to tackle some of its basic tasks are enormous. Hence the importance of an introduction to the concepts of Artificial Intelligence, so that the dream can continue.

2 Prerequisites

Analysis of Algorithms, Probability, Linear Algebra and Basic Optimization.

3 Bibliography

The bibliography is at the end of this syllabus.

4 Course Requirements

The requirements of the course are:

Requirement      % of Grade
1. Midterm #1    15%
2. Midterm #2    15%
3. Midterm #3    15%
4. Final         15%
5. Homeworks     15%
6. Project       25%

The final grade will be curved.
4.1 Exams

We will have three exams in this course, scheduled for September 27th, November 8th and December 15th, with two hours per test. Blank pages will be provided. Finally, there will be no makeup tests except for medical issues documented with a doctor's letter.

Cinvestav GDL
4.2 Homework

We will assign homework sets with four problems each. You must use the IEEE-style LaTeX class in single-column format for the homeworks; no other format will be accepted. Each homework must have the following format:

1. Name and date must be on the first page.
2. Each problem must be stated before its solution. It will not be accepted any other way.

Homeworks must be printed on paper and submitted on the due day during class; homeworks not submitted at that time will be considered late. Programming assignments need to work in order to be graded. Finally, late homeworks will automatically receive a 0.

4.3 Project

The project will be based on the environment developed by the previous generation using Unity. The class will be divided into two groups:

1. Predators (at least two types)
2. Prey (at least two types)

The main objective of each group is to provide each of the elements in the simulation with enough "intelligence" to survive in the simulation. Here, surviving involves:

1. Reproducing and maintaining a functional population
2. Avoiding destroying the source of food, which would cause a die-off
3. Constructing a complex environment for the prey/predator elements

We will discuss more as the project develops.

4.4 English

The class will be taught in English.
5 Subjects

1.1 What is Artificial Intelligence? [5, 27, 28]

1.1.1 Introduction
1. The Turing test.
2. The Horrible Problem - Noam Chomsky

1.1.2 Defining Artificial Intelligence
1. Define and discuss Strong AI vs. Weak AI.
2. Searle's Chinese Room.

1.1.3 Arguments about AI
1. Arguments in favor of Strong AI
2. Arguments against Strong AI
3. A little bit of history

1.2 The Mathematics for Artificial Intelligence

1.2.1 Probability [1, 4]
1. Introduction
   (a) Probability Definition
   (b) The Sample Space
   (c) Basic Set Operations
   (d) Counting
       i. How to produce probabilities?
2. Conditional Probability
   (a) Definition and intuition
   (b) Bayes' Rule
   (c) Conditional Probabilities
   (d) Independence of events
3. Random Variables
   (a) The basic intuition and definition
   (b) Distributions
   (c) Functions of Random Variables
4. Expectation
   (a) Expectation as a weighted average
   (b) Linearity of expectation
   (c) Variance
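The conditional-probability unit above centers on Bayes' Rule, which a short sketch can illustrate. The disease/test numbers below are made up purely for illustration.

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), with the evidence P(B)
# expanded by the law of total probability over A and not-A.

def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(disease | positive test) for a binary test."""
    evidence = likelihood * prior + false_positive_rate * (1.0 - prior)
    return likelihood * prior / evidence

# A hypothetical test with 99% sensitivity and a 5% false-positive rate,
# applied to a disease with 1% prevalence: the posterior is only about 17%,
# the classic base-rate surprise.
posterior = bayes_posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(posterior, 4))
```

The point of the example is that a highly accurate test can still yield a low posterior when the prior is small, which motivates treating probabilities as degrees of belief updated by evidence.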
1.2.2 Linear Algebra [17, 30]
1. Linear Equations, Matrices and Gaussian Elimination
   (a) Linear Equations
   (b) The Geometry of Linear Equations
   (c) Matrix Notation
   (d) Inverses and Transposes
   (e) Solving the Regression Problem
2. Vector Spaces
   (a) Spaces of Vectors and Subspaces
   (b) Linear Independence, Basis and Dimension
   (c) The Four Fundamental Subspaces
3. Orthogonality
   (a) Orthonormal Bases
   (b) Least-Squares Regression
4. Eigenvalues and Eigenvectors
   (a) The Concept of Eigenvalues and Eigenvectors

1.2.3 Optimization Basics [23, 2]
1. Introduction
   (a) Formulation
   (b) Example: Least Squared Error
2. The Basics
   (a) What is a solution?
   (b) How to recognize a minimum?
   (c) Line Search
3. Convex Functions
4. Gradient Descent
5. Stochastic Gradient Descent
   (a) Stochastic Approximation
   (b) Iterative Method
   (c) The Least-Mean-Squares Adaptive Algorithm
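The optimization and regression topics above meet in a standard exercise: minimizing the least-squares objective f(w) = ||Xw - y||^2 / (2n) by gradient descent, using the gradient X^T(Xw - y)/n. The sketch below is a pure-Python illustration with hypothetical data; the learning rate and step count are arbitrary choices, not recommendations.

```python
# Gradient descent on the least-squares objective; no libraries assumed.

def gradient_descent(X, y, lr=0.1, steps=500):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        # residuals r = Xw - y
        r = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        # gradient g = X^T r / n
        g = [sum(X[i][j] * r[i] for i in range(n)) / n for j in range(d)]
        # step against the gradient
        w = [w[j] - lr * g[j] for j in range(d)]
    return w

# Data generated exactly from y = 2*x1 + 3*x2, so w should approach [2, 3].
X = [[1, 0], [0, 1], [1, 1], [2, 1]]
y = [2, 3, 5, 7]
print(gradient_descent(X, y))
```

The same loop becomes stochastic gradient descent if each step uses the gradient of a single randomly chosen row instead of the full average, which is the connection to the LMS adaptive algorithm listed above.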
1.3 Searching in AI [11]

1.3.1 Classic Techniques
1. Uninformed Search Strategies
2. Partial Knowledge Techniques
3. Heuristic Functions
   (a) Best-First Search
   (b) A* Search
   (c) etc.
4. Local Search Techniques
   (a) Hill climbing
   (b) Simulated annealing

1.3.2 Games as Searches
1. Introduction
2. The Minimax Algorithm
   (a) Alpha-Beta Pruning
3. Stochastic Games

1.4 Constraint Satisfaction Problems [13]
1. Introduction to Constraint Satisfaction Problems
   (a) Basic Definitions
   (b) Optimization Problems
   (c) Consistency Techniques
   (d) Arc Consistency
2. Algorithms
   (a) Systematic Search
   (b) Tree Search with Consistency Techniques
   (c) Incomplete Search: Backtracking Search
   (d) Branch and Bound
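The games-as-searches unit above culminates in minimax with alpha-beta pruning, which can be sketched over an explicit game tree. The tree below is a small hypothetical example (nested lists for internal nodes, numbers for leaf payoffs to the maximizing player).

```python
# Minimax with alpha-beta pruning: alpha is the best value the max player
# can already guarantee, beta the best for the min player; once alpha >= beta,
# the remaining siblings cannot affect the result and are skipped.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):      # leaf: return its payoff
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune remaining siblings
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A depth-2 tree: the max player picks the branch whose minimum is largest,
# so the guaranteed value here is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))
```

Note that pruning never changes the returned value relative to plain minimax; it only avoids exploring branches that cannot matter.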
1.5 Probabilistic Reasoning [25, 10]
1. Reasoning under Uncertainty
2. The Drawbacks of Pure Logic in Reasoning
3. Logic and Probability Find Each Other
4. Bayesian Inference
5. Hierarchical Modeling
6. From Numerical to Graphical Representations

1.6 Bayesian Networks [22, 26]
1. Basic Concepts
   (a) Common effect
   (b) Chain effects
   (c) Common cause
2. The Markov Condition
3. Inference in Bayesian Networks
4. Algorithms
   (a) Pearl's propagation algorithm
   (b) Junction trees

1.7 Graphical Models [19]
1. Undirected Graph Models
   (a) Markov Models
2. Local Probabilistic Models
3. Inference
   (a) Variable Elimination
   (b) Inference as Optimization
   (c) Particle-Based Approximate Inference
   (d) Maximum A Posteriori Inference
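The common-effect structure listed above can be made concrete with exact inference by enumeration in a tiny network, Rain -> WetGrass <- Sprinkler. All probability values below are invented for illustration; they are not from any of the cited texts.

```python
from itertools import product

# Inference by enumeration: the joint factorizes along the network as
# P(S, R, W) = P(S) * P(R) * P(W | S, R), and any query is a ratio of
# sums of joint entries.

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Sprinkler, Rain), hypothetical values
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(s, r, w):
    pw = P_wet[(s, r)]
    return P_sprinkler[s] * P_rain[r] * (pw if w else 1.0 - pw)

# Query P(Rain = True | WetGrass = True) by summing out Sprinkler:
num = sum(joint(s, True, True) for s in (True, False))
den = sum(joint(s, r, True) for s, r in product((True, False), repeat=2))
print(round(num / den, 4))
```

Enumeration is exponential in the number of hidden variables, which is exactly what variable elimination and junction trees, listed above, are designed to avoid.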
1.8 Neural Networks [15]
1. Introduction
2. The Hopfield Model
3. The Perceptron
4. The Multilayer Perceptron
5. The Universal Approximation Theorem
6. Introduction to Deep Learning
   (a) Convolutional Neural Networks

1.9 Machine Learning [3, 32, 14]
1. Introduction to Linear Regression
2. Mean-Square-Error Linear Estimation
   (a) Canonical form
   (b) Gradient Descent
3. Regularization
4. Logistic Regression
5. Bayesian Learning
   (a) The Naive Bayes Model
   (b) Maximum Likelihood
6. Classic Clustering
   (a) K-Means
   (b) K-Centers

1.10 Planning [21]
1. Definition of Planning
2. Action and Plan Representations
3. Graph Planning
   (a) Forward and Backward Planning
4. Plan Representation
   (a) Using Probabilities
5. Advanced Planning
   (a) Using Probabilities for Planning
   (b) Planning as Constraint Satisfaction
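Of the classic clustering methods above, K-Means (Lloyd's algorithm) is compact enough to sketch: alternate between assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points. This minimal version uses a naive deterministic initialization (the first k points); k-means++ would be the usual improvement in practice.

```python
# A minimal K-Means sketch on 2-D points (Lloyd's algorithm).

def kmeans(points, k, iters=20):
    centroids = list(points[:k])  # naive init: first k points
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k), key=lambda c: (x - centroids[c][0]) ** 2
                                            + (y - centroids[c][1]) ** 2)
            clusters[i].append((x, y))
        # update step: move each centroid to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster goes empty
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

# Two well-separated blobs; the centroids should land near the blob means.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(kmeans(data, 2)))
```

The alternation is a coordinate-descent view of minimizing within-cluster squared distance, which connects this unit back to the optimization basics earlier in the syllabus.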
1.11 Knowledge Representation [33, 9]
1. Conceptual Graphs
2. Using Probabilistic Graphs for Knowledge Representation
3. Probabilistic Reasoning
   (a) Joining the Logical and Probabilistic Areas
4. Probabilistic Logic Programming
   (a) Using Approximation
5. Markov Logic Networks

1.12 Reinforcement Learning [31, 6, 24, 34]
1. Introduction
   (a) Markov Decision Processes
   (b) Value Functions
   (c) Dynamic Programming
   (d) Reinforcement Learning
2. Least Squares for Policy Iteration
3. Learning Using Models
   (a) Monte Carlo
   (b) Factorization
   (c) Exploration
4. Transfer in Reinforcement Learning

1.13 Logic in Artificial Intelligence [8, 7, 16, 33]
1. Resolution
2. Inference in Propositional Logic
3. Inference in First-Order Logic

1.14 Genetic Algorithms [12]
1. Introduction
2. Genetic Algorithms
3. Genetic Programming
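The dynamic-programming side of the reinforcement learning unit can be sketched with value iteration on a tiny hypothetical MDP: repeatedly apply the Bellman optimality backup V(s) <- max_a sum_s' P(s'|s,a) (R + gamma V(s')) until the values stop changing. The two-state MDP below is invented for illustration.

```python
# mdp[state][action] = list of (probability, next_state, reward) outcomes.
mdp = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(mdp, gamma=0.9, eps=1e-8):
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s in mdp:
            # Bellman optimality backup: best expected return over actions
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                       for outcomes in mdp[s].values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:     # values have converged
            return V

V = value_iteration(mdp)
print({s: round(v, 3) for s, v in V.items()})
```

Reinforcement learning proper then replaces the known transition model with sampled experience (e.g. Monte Carlo or temporal-difference updates), but the fixed point being approximated is the same one this backup computes.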
1.15 Relational Learning [18]
1. The Entity-Relationship Model
2. Using Graphical Models for Representation
3. Statistical Relational Learners
4. Classic Problems
   (a) Collective classification
   (b) Link prediction
   (c) Social network modeling
   (d) Object identification
   (e) Link-based clustering
References

[1] R.B. Ash. Basic Probability Theory. Dover Books on Mathematics Series. Dover Publications, Incorporated, 2012.
[2] Mokhtar S. Bazaraa. Nonlinear Programming: Theory and Algorithms. Wiley Publishing, 3rd edition, 2013.
[3] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[4] J.K. Blitzstein and J. Hwang. Introduction to Probability. Chapman & Hall/CRC Texts in Statistical Science. CRC Press, 2014.
[5] Bruce G. Buchanan. A (very) brief history of artificial intelligence. AI Magazine, 26(4):53–60, 2005.
[6] Lucian Busoniu, Robert Babuska, Bart De Schutter, and Damien Ernst. Reinforcement Learning and Dynamic Programming Using Function Approximators. CRC Press, Inc., Boca Raton, FL, USA, 1st edition, 2010.
[7] Martin Davis, George Logemann, and Donald Loveland. A machine program for theorem-proving. Commun. ACM, 5(7):394–397, July 1962.
[8] Martin Davis and Hilary Putnam. A computing procedure for quantification theory. J. ACM, 7(3):201–215, July 1960.
[9] Luc De Raedt and Angelika Kimmig. Probabilistic (logic) programming concepts. Machine Learning, 100(1):5–47, 2015.
[10] Rina Dechter. Reasoning with probabilistic and deterministic graphical models: Exact algorithms. Synthesis Lectures on Artificial Intelligence and Machine Learning, 7(3):1–191, 2013.
[11] Stefan Edelkamp and Stefan Schrodl. Heuristic Search: Theory and Applications. Academic Press, 2012.
[12] Agoston E. Eiben and J. E. Smith. Introduction to Evolutionary Computing. Springer-Verlag, 2003.
[13] K. Ghedira. Constraint Satisfaction Problems: CSP Formalisms and Techniques. FOCUS Series. Wiley, 2013.
[14] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer New York, 2nd edition, 2009.
[15] Simon Haykin. Neural Networks and Learning Machines. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2008.
[16] Shawn Hedman. A First Course in Logic: An Introduction to Model Theory, Proof Theory, Computability, and Complexity (Oxford Texts in Logic). Oxford University Press, Inc., New York, NY, USA, 2004.
[17] K. Hoffman and R.A. Kunze. Linear Algebra. Prentice-Hall Mathematics Series. Prentice-Hall, 1971.
[18] Hassan Khosravi and Bahareh Bina. A survey on statistical relational learning. In Canadian Conference on Artificial Intelligence, pages 256–268. Springer, 2010.
[19] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[20] Pamela McCorduck. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. A K Peters Ltd, 2004.
[21] Dana Nau, Malik Ghallab, and Paolo Traverso. Automated Planning: Theory & Practice. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2004.
[22] Richard E. Neapolitan. Learning Bayesian Networks. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2003.
[23] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2nd edition, 2006.
[24] A. Nowé, P. Vrancx, and Y-M. De Hauwere. Game theory and multi-agent reinforcement learning. In Reinforcement Learning: State-of-the-Art, pages 441–470. Springer, 2012.
[25] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[26] Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, New York, NY, USA, 2nd edition, 2009.
[27] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edition, 2009.
[28] John R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences, 3:417–424, 1980.
[29] T. Standage. The Turk: The Life and Times of the Famous 19th Century Chess-Playing Machine. Walker, 2002.
[30] Gilbert Strang. Introduction to Linear Algebra. Wellesley-Cambridge Press, Wellesley, MA, 4th edition, 2009.
[31] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[32] Sergios Theodoridis. Machine Learning: A Bayesian and Optimization Perspective. Academic Press, 1st edition, 2015.
[33] Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter. Handbook of Knowledge Representation. Elsevier Science, San Diego, USA, 2007.
[34] Martijn van Otterlo and Marco Wiering. Reinforcement learning and Markov decision processes. In Reinforcement Learning, pages 3–42. Springer, 2012.
