- This document contains lecture slides on generative models and naive Bayes classifiers from CIS 419/519 (Applied Machine Learning).
- It covers course administration (projects, a poster presentation and a short video, and a cumulative final exam), then the technical material: a recap of error-driven learning contrasting discriminative (direct) and generative (Bayesian) approaches to classification.
- It also includes a probability refresher, Bayes' rule, maximum likelihood estimation, and the Bayes optimal and naive Bayes classifiers.
1. CIS 419/519 Fall’19
Generative Models; Naive Bayes
Dan Roth
danroth@seas.upenn.edu | http://www.cis.upenn.edu/~danroth/ | 461C, 3401 Walnut
Slides were created by Dan Roth (for CIS519/419 at Penn or CS446 at UIUC),
Eric Eaton for CIS519/419 at Penn, or from other authors who have made their
ML slides available.
2. CIS 419/519 Fall’19 2
Administration
• Projects:
– Come to my office hours at least once to discuss the project.
– Posters for the projects will be presented on the last meeting of
the class, Monday, December 9; we’ll start earlier: 9am – noon.
– We will also ask you to prepare a 3-minute video
• Final reports will only be due after the Final exam, on
December 19
– Specific instructions are on the web page and will be sent also
on Piazza.
3. CIS 419/519 Fall’19 3
Administration (2)
• Exam:
– The exam will take place on the originally assigned date, 12/19.
• Location: TBD
• Structured similarly to the midterm.
• 120 minutes; closed books.
– What is covered:
• Cumulative!
• Slightly more focus on the material covered after the previous mid-term.
• However, notice that the ideas in this class are cumulative!!
• Everything that we present in class and in the homework assignments
• Material that is in the slides but is not discussed in class is not part of the material
required for the exam.
– Example 1: We talked about Boosting. But not about boosting the confidence.
– Example 2: We talked about multiclass classification: OvA, AvA, but not Error
Correcting codes, and not about constraint classification (in the slides).
• We will give practice exams. HW5 will also serve as preparation.
Applied Machine Learning (side note):
• The exams are mostly about understanding machine learning.
• HW is about applying machine learning.
4. CIS 419/519 Fall’19
Recap: Error Driven Learning
• Consider a distribution 𝐷 over the space 𝑿 × 𝑌
• 𝑿 – the instance space; 𝑌 – the set of labels (e.g., ±1)
• Can think about the data generation process as governed by 𝐷(𝒙), and the labeling process as governed by 𝐷(𝑦|𝒙), such that
𝐷(𝒙, 𝑦) = 𝐷(𝒙) 𝐷(𝑦|𝒙)
• This can be used to model both the case where labels are generated by a function 𝑦 = 𝑓(𝒙), as well as noisy cases and probabilistic generation of the label.
• If the distribution 𝐷 is known, there is no learning. We can simply predict 𝑦 = argmax_𝑦 𝐷(𝑦|𝒙)
• If we are looking for a hypothesis, we can simply find the one that minimizes the probability of mislabeling:
ℎ = argmin_ℎ E_{(𝒙,𝑦)~𝐷} [[ℎ(𝒙) ≠ 𝑦]]
5. CIS 419/519 Fall’19 5
Recap: Error Driven Learning (2)
• Inductive learning comes into play when the distribution is not
known.
• Then, there are two basic approaches to take.
– Discriminative (Direct) Learning
• and
– Bayesian Learning (Generative)
• Running example:
– Text Correction:
• “I saw the girl it the park” → “I saw the girl in the park”
6. CIS 419/519 Fall’19 6
1: Direct Learning
• Model the problem of text correction as a problem of learning from
examples.
• Goal: learn directly how to make predictions.
PARADIGM
• Look at many (positive/negative) examples.
• Discover some regularities in the data.
• Use these to construct a prediction policy.
• A policy (a function, a predictor) needs to be specific.
• Example – an [it/in] rule: if “the” occurs right after the target word, predict in.
• Assumptions comes in the form of a hypothesis class.
Bottom line: approximating ℎ ∶ 𝑿 → 𝑌 is estimating 𝑃(𝑌|𝑿).
7. CIS 419/519 Fall’19
Direct Learning (2)
• Consider a distribution 𝐷 over the space 𝑿 × 𝑌
• 𝑿 – the instance space; 𝑌 – the set of labels (e.g., ±1)
• Given a sample {(𝒙ᵢ, 𝑦ᵢ)}, 𝑖 = 1, … , 𝑚, and a loss function 𝐿(ℎ(𝒙), 𝑦)
• Find ℎ ∈ 𝐻 that minimizes
Σᵢ₌₁..ₘ 𝐷(𝒙ᵢ, 𝑦ᵢ) 𝐿(ℎ(𝒙ᵢ), 𝑦ᵢ) + Reg
• 𝐿 can be:
– 𝐿(ℎ(𝒙), 𝑦) = 1 if ℎ(𝒙) ≠ 𝑦, otherwise 𝐿(ℎ(𝒙), 𝑦) = 0 (0-1 loss)
– 𝐿(ℎ(𝒙), 𝑦) = (ℎ(𝒙) − 𝑦)² (L2 loss)
– 𝐿(ℎ(𝒙), 𝑦) = max{0, 1 − 𝑦 ℎ(𝒙)} (hinge loss)
– 𝐿(ℎ(𝒙), 𝑦) = exp{−𝑦 ℎ(𝒙)} (exponential loss)
• Guarantees: If we find an algorithm that minimizes loss on the observed data, then learning theory guarantees good future behavior (as a function of |𝐻|).
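To make the four losses concrete, here is a minimal Python sketch; the function names, and the convention of a real-valued score ℎ(𝒙) with labels 𝑦 ∈ {−1, +1}, are illustrative assumptions rather than anything fixed by the slides:

```python
import numpy as np

def zero_one_loss(hx: float, y: int) -> float:
    # L(h(x), y) = 1 if the prediction disagrees with y, else 0
    return float(np.sign(hx) != y)

def l2_loss(hx: float, y: int) -> float:
    # L(h(x), y) = (h(x) - y)^2
    return (hx - y) ** 2

def hinge_loss(hx: float, y: int) -> float:
    # L(h(x), y) = max{0, 1 - y h(x)}
    return max(0.0, 1.0 - y * hx)

def exp_loss(hx: float, y: int) -> float:
    # L(h(x), y) = exp(-y h(x))
    return float(np.exp(-y * hx))

# A confident correct score (h(x)=2, y=+1) is free under 0-1 and hinge loss,
# but is still charged a little by the L2 and exponential losses.
for loss in (zero_one_loss, l2_loss, hinge_loss, exp_loss):
    print(loss.__name__, loss(2.0, +1), loss(-0.5, +1))
```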
8. CIS 419/519 Fall’19
2: Generative Model
(The model is called “generative” since it makes an assumption on how the data 𝑿 is generated given 𝑦.)
• Model the problem of text correction as that of generating correct
sentences.
• Goal: learn a model of the language; use it to predict.
PARADIGM
• Learn a probability distribution over all sentences
– In practice: make assumptions on the distribution’s type
• Use it to estimate which sentence is more likely.
– Compare Pr(𝐼 𝑠𝑎𝑤 𝑡ℎ𝑒 𝑔𝑖𝑟𝑙 𝑖𝑡 𝑡ℎ𝑒 𝑝𝑎𝑟𝑘) vs. Pr(𝐼 𝑠𝑎𝑤 𝑡ℎ𝑒 𝑔𝑖𝑟𝑙 𝑖𝑛 𝑡ℎ𝑒 𝑝𝑎𝑟𝑘)
– In practice: a decision policy depends on the assumptions
• Guarantees: We need to assume the “right” probability distribution
Bottom line: the generating paradigm approximates
𝑃(𝑿, 𝑌) = 𝑃(𝑿|𝑌) 𝑃(𝑌)
9. CIS 419/519 Fall’19
• There are actually two different notions.
• Learning probabilistic concepts
– The learned concept is a function 𝑐: 𝑿 → [0,1]
– 𝑐(𝒙) may be interpreted as the probability that the label 1 is assigned to
𝒙
– The learning theory that we have studied before is applicable (with
some extensions).
• Bayesian Learning: Use of a probabilistic criterion in selecting a
hypothesis
– The hypothesis can be deterministic, a Boolean function.
• It’s not the hypothesis – it’s the process.
• In practice, as we’ll see, we will use the same principles as before,
often similar justifications, and similar algorithms.
Probabilistic Learning
Remind yourself of basic probability
10. CIS 419/519 Fall’19
Probabilities
• 30 years of AI research danced around the fact that the world
was inherently uncertain
• Bayesian Inference:
– Use probability theory and information about independence
– Reason diagnostically (from evidence (effects) to conclusions
(causes))...
– ...or causally (from causes to effects)
• Probabilistic reasoning only gives probabilistic results
– i.e., it summarizes uncertainty from various sources
• We will only use it as a tool in Machine Learning.
11. CIS 419/519 Fall’19 11
Concepts
• Probability, Probability Space and Events
• Joint Events
• Conditional Probabilities
• Independence
– Next week’s recitation will provide a refresher on probability
– Use the material we provided on-line
• Next I will give a very quick refresher
12. CIS 419/519 Fall’19 12
(1) Discrete Random Variables
• Let 𝑋 denote a random variable
– 𝑋 is a mapping from a space of outcomes to 𝑅.
• [𝑋 = 𝑥] represents an event (a possible outcome) and,
• Each event has an associated probability
• Examples of binary random variables:
– 𝐴 = I have a headache
– 𝐴 = Sally will be the US president in 2020
• 𝑃(𝐴 = 𝑇𝑟𝑢𝑒) is “the fraction of possible worlds in which 𝐴 is
true”
– We could spend hours on the philosophy of this, but we won’t
Adapted from slide by Andrew Moore
13. CIS 419/519 Fall’19 13
Visualizing A
• Universe 𝑈 is the event space of all possible worlds
– Its area is 1
– 𝑃(𝑈) = 1
• 𝑃(𝐴) = area of red oval
• Therefore:
𝑃(𝐴) + 𝑃(¬𝐴) = 1
𝑃(¬𝐴) = 1 − 𝑃(𝐴)
[Venn diagram: the universe 𝑈 contains worlds in which 𝐴 is true (the red oval) and worlds in which 𝐴 is false.]
14. CIS 419/519 Fall’19 14
Axioms of Probability
• Kolmogorov showed that three simple axioms lead to the rules of
probability theory
– de Finetti, Cox, and Carnap have also provided compelling arguments for
these axioms
1. All probabilities are between 0 and 1:
0 ≤ 𝑃(𝐴) ≤ 1
2. Valid propositions (tautologies) have probability 1, and unsatisfiable
propositions have probability 0:
𝑃(𝑡𝑟𝑢𝑒) = 1 ; 𝑃(𝑓𝑎𝑙𝑠𝑒) = 0
3. The probability of a disjunction is given by:
𝑃(𝐴 ∨ 𝐵) = 𝑃(𝐴) + 𝑃(𝐵) − 𝑃(𝐴 ∧ 𝐵)
16. CIS 419/519 Fall’19 16
Multi-valued Random Variables
• 𝐴 is a random variable with arity 𝑘 if it can take on
exactly one value out of {𝑣1, 𝑣2, … , 𝑣𝑘 }
• Think about tossing a die
• Thus…
– 𝑃(𝐴 = 𝑣ᵢ ∧ 𝐴 = 𝑣ⱼ) = 0 if 𝑖 ≠ 𝑗
– 𝑃(𝐴 = 𝑣₁ ∨ 𝐴 = 𝑣₂ ∨ … ∨ 𝐴 = 𝑣ₖ) = 1
– 1 = Σᵢ₌₁..ₖ 𝑃(𝐴 = 𝑣ᵢ)
Based on slide by Andrew Moore
17. CIS 419/519 Fall’19 17
Multi-valued Random Variables
• We can also show that:
𝑃(𝐵) = 𝑃(𝐵 ∧ (𝐴 = 𝑣₁ ∨ 𝐴 = 𝑣₂ ∨ … ∨ 𝐴 = 𝑣ₖ))
• This is called marginalization over 𝐴:
𝑃(𝐵) = Σᵢ₌₁..ₖ 𝑃(𝐵 ∧ 𝐴 = 𝑣ᵢ)
18. CIS 419/519 Fall’19
(2) Joint Probabilities
• Joint probability: matrix of combined probabilities of a set
of variables
Russell & Norvig’s Alarm Domain: (boolean RVs)
• A world has a specific instantiation of variables:
(𝑎𝑙𝑎𝑟𝑚 ∧ 𝑏𝑢𝑟𝑔𝑙𝑎𝑟𝑦 ∧ ¬𝑒𝑎𝑟𝑡ℎ𝑞𝑢𝑎𝑘𝑒)
• The joint probability 𝑃(𝑎𝑙𝑎𝑟𝑚, 𝑏𝑢𝑟𝑔𝑙𝑎𝑟𝑦) is given by:
             𝑎𝑙𝑎𝑟𝑚   ¬𝑎𝑙𝑎𝑟𝑚
𝑏𝑢𝑟𝑔𝑙𝑎𝑟𝑦      0.09    0.01
¬𝑏𝑢𝑟𝑔𝑙𝑎𝑟𝑦     0.1     0.8
• Probability of burglary: 𝑃(𝑏𝑢𝑟𝑔𝑙𝑎𝑟𝑦) = 0.09 + 0.01 = 0.1, by marginalization over 𝑎𝑙𝑎𝑟𝑚
24. CIS 419/519 Fall’19 24
(3) Conditional Probability
• 𝑃(𝐴 | 𝐵) = Fraction of worlds in which 𝐵 is true that also
have A true
[Venn diagram: universe 𝑈 with overlapping events 𝐴 and 𝐵.]
What if we already know that 𝐵 is true? That knowledge changes the probability of 𝐴:
• Because we know we’re in a world where 𝐵 is true
𝑃(𝐴|𝐵) = 𝑃(𝐴 ∧ 𝐵) / 𝑃(𝐵)
𝑃(𝐴 ∧ 𝐵) = 𝑃(𝐴|𝐵) × 𝑃(𝐵)
26. CIS 419/519 Fall’19 26
(4) Independence
• When two events do not affect each other’s probabilities, we call them independent
• Formal definition:
𝐴 ⫫ 𝐵 ↔ 𝑃(𝐴 ∧ 𝐵) = 𝑃(𝐴) × 𝑃(𝐵) ↔ 𝑃(𝐴|𝐵) = 𝑃(𝐴)
27. CIS 419/519 Fall’19
• Is smart independent of study?
• Is prepared independent of study?
Exercise: Independence
P(smart ∧ study ∧ prep):
                smart           ¬smart
           study   ¬study   study   ¬study
prepared   0.432   0.16     0.084   0.008
¬prepared  0.048   0.16     0.036   0.072
28. CIS 419/519 Fall’19 28
Exercise: Independence
Is smart independent of study?
𝑃 𝑠𝑡𝑢𝑑𝑦 ∧ 𝑠𝑚𝑎𝑟𝑡 = 0.432 + 0.048 = 0.48
𝑃 𝑠𝑡𝑢𝑑𝑦 = 0.432 + 0.048 + 0.084 + 0.036 = 0.6
𝑃 𝑠𝑚𝑎𝑟𝑡 = 0.432 + 0.048 + 0.16 + 0.16 = 0.8
𝑃(𝑠𝑡𝑢𝑑𝑦) × 𝑃(𝑠𝑚𝑎𝑟𝑡) = 0.6 × 0.8 = 0.48
Is prepared independent of study?
P(smart ∧ study ∧ prep):
                smart           ¬smart
           study   ¬study   study   ¬study
prepared   0.432   0.16     0.084   0.008
¬prepared  0.048   0.16     0.036   0.072
So yes!
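Both questions can also be checked mechanically: encode the joint table, marginalize, and compare 𝑃(𝐴 ∧ 𝐵) against 𝑃(𝐴) × 𝑃(𝐵). A minimal Python sketch (the helper name `marginal` is my own):

```python
# Joint distribution P(smart, study, prepared) from the table above.
P = {
    (1, 1, 1): 0.432, (1, 0, 1): 0.16, (0, 1, 1): 0.084, (0, 0, 1): 0.008,
    (1, 1, 0): 0.048, (1, 0, 0): 0.16, (0, 1, 0): 0.036, (0, 0, 0): 0.072,
}
NAMES = ("smart", "study", "prepared")

def marginal(**fixed):
    # Sum the joint over all worlds consistent with the fixed variables.
    return sum(p for world, p in P.items()
               if all(world[NAMES.index(k)] == v for k, v in fixed.items()))

# Is smart independent of study?  0.48 == 0.8 * 0.6, so yes.
print(marginal(smart=1, study=1), marginal(smart=1) * marginal(study=1))

# Is prepared independent of study?  0.516 != 0.684 * 0.6 = 0.4104, so no.
print(marginal(prepared=1, study=1), marginal(prepared=1) * marginal(study=1))
```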
29. CIS 419/519 Fall’19
• Absolute independence of A and B:
𝐴 ⫫ 𝐵 ↔ 𝑃 𝐴 ∧ 𝐵 = 𝑃 𝐴 × 𝑃 𝐵
↔ 𝑃 𝐴 𝐵 = 𝑃(𝐴)
• Conditional independence of 𝐴 and 𝐵 given 𝐶:
𝐴 ⫫ 𝐵 | 𝐶 ↔ 𝑃(𝐴 ∧ 𝐵 | 𝐶) = 𝑃(𝐴|𝐶) × 𝑃(𝐵|𝐶)
• This lets us decompose the joint distribution:
– Conditional independence is different than absolute independence, but still useful in decomposing the full joint
– 𝑃(𝐴, 𝐵, 𝐶) = [always] 𝑃(𝐴, 𝐵|𝐶) 𝑃(𝐶) = [conditional independence] 𝑃(𝐴|𝐶) 𝑃(𝐵|𝐶) 𝑃(𝐶)
Conditional Independence
30. CIS 419/519 Fall’19 30
Exercise: Independence
• Is smart independent of study?
• Is prepared independent of study?
P(smart ∧ study ∧ prep):
                smart           ¬smart
           study   ¬study   study   ¬study
prepared   0.432   0.16     0.084   0.008
¬prepared  0.048   0.16     0.036   0.072
31. CIS 419/519 Fall’19 31
Take-Home Exercise: Conditional Independence
• Is smart conditionally independent of prepared, given
study?
• Is study conditionally independent of prepared, given
smart?
P(smart ∧ study ∧ prep):
                smart           ¬smart
           study   ¬study   study   ¬study
prepared   0.432   0.16     0.084   0.008
¬prepared  0.048   0.16     0.036   0.072
33. CIS 419/519 Fall’19 33
Summary: The Monty Hall problem
• Suppose you're on a game show, and
you're given the choice of three doors.
– Behind one door is a car; behind the
others, goats.
– You pick a door, say No. 1, and the host,
who knows what's behind the doors, opens
another door, say No. 3, which has a goat.
– He then says to you, "Do you want to
switch to door No. 2?"
• Is it to your advantage to switch your
choice?
• Try to develop an argument using the
concepts we discussed.
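One way to develop the argument is to simulate the game. A small Monte Carlo sketch, assuming the standard rules stated above (the host always opens a goat door you did not pick), suggests that switching wins about 2/3 of the time:

```python
import random

def trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    car, pick = random.choice(doors), random.choice(doors)
    # The host, who knows where the car is, opens a goat door you didn't pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
print("stay:  ", sum(trial(False) for _ in range(n)) / n)   # ~ 1/3
print("switch:", sum(trial(True) for _ in range(n)) / n)    # ~ 2/3
```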
34. CIS 419/519 Fall’19
• Exactly the process we just used
• The most important formula in probabilistic machine learning
Bayes’ Rule
𝑃(𝐴|𝐵) = 𝑃(𝐵|𝐴) × 𝑃(𝐴) / 𝑃(𝐵)
(Super Easy) Derivation:
𝑃(𝐴 ∧ 𝐵) = 𝑃(𝐴|𝐵) × 𝑃(𝐵)
𝑃(𝐵 ∧ 𝐴) = 𝑃(𝐵|𝐴) × 𝑃(𝐴)
Just set equal, 𝑃(𝐴|𝐵) × 𝑃(𝐵) = 𝑃(𝐵|𝐴) × 𝑃(𝐴), and solve.
Bayes, Thomas (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418
35. CIS 419/519 Fall’19 38
Bayes’ Rule for Machine Learning
• Allows us to reason from evidence to hypotheses
• Another way of thinking about Bayes’ rule:
36. CIS 419/519 Fall’19 39
Basics of Bayesian Learning
• Goal: find the best hypothesis from some space 𝐻 of hypotheses, given
the observed data (evidence) 𝐷.
• Define best to be: most probable hypothesis in 𝐻
• In order to do that, we need to assume a probability distribution over the
class 𝐻.
• In addition, we need to know something about the relation between the
data observed and the hypotheses (E.g., a coin problem.)
– As we will see, we will be Bayesian about other things, e.g., the parameters of the
model
37. CIS 419/519 Fall’19 40
Basics of Bayesian Learning
• 𝑃(ℎ) - the prior probability of a hypothesis ℎ
Reflects background knowledge; before data is observed. If
no information - uniform distribution.
• 𝑃(𝐷) - The probability that this sample of the Data is
observed. (No knowledge of the hypothesis)
• 𝑃(𝐷|ℎ): The probability of observing the sample 𝐷, given that
hypothesis ℎ is the target
• 𝑃(ℎ|𝐷): The posterior probability of ℎ. The probability that ℎ is
the target, given that 𝐷 has been observed.
38. CIS 419/519 Fall’19 41
Bayes Theorem
𝑃(ℎ|𝐷) = 𝑃(𝐷|ℎ) 𝑃(ℎ) / 𝑃(𝐷)
• 𝑃(ℎ|𝐷) increases with 𝑃(ℎ) and with 𝑃(𝐷|ℎ)
• 𝑃(ℎ|𝐷) decreases with 𝑃(𝐷)
39. CIS 419/519 Fall’19 42
Learning Scenario
𝑃(ℎ|𝐷) = 𝑃(𝐷|ℎ) 𝑃(ℎ)/𝑃(𝐷)
• The learner considers a set of candidate hypotheses 𝐻
(models), and attempts to find the most probable one ℎ ∈ 𝐻,
given the observed data.
• Such a maximally probable hypothesis is called the maximum a posteriori hypothesis (MAP); Bayes theorem is used to compute it:
ℎ𝑀𝐴𝑃 = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(ℎ|𝐷) = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(𝐷|ℎ) 𝑃(ℎ)/𝑃(𝐷)
= 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(𝐷|ℎ) 𝑃(ℎ)
40. CIS 419/519 Fall’19
ℎ𝑀𝐴𝑃 = argmax_{ℎ∈𝐻} 𝑃(ℎ|𝐷) = argmax_{ℎ∈𝐻} 𝑃(𝐷|ℎ) 𝑃(ℎ)
• We may assume that a priori, hypotheses are equally probable:
𝑃(ℎᵢ) = 𝑃(ℎⱼ) ∀ ℎᵢ, ℎⱼ ∈ 𝐻
• We get the Maximum Likelihood hypothesis:
ℎ𝑀𝐿 = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(𝐷|ℎ)
• Here we just look for the hypothesis that best explains the data
Learning Scenario (2)
41. CIS 419/519 Fall’19 44
Examples
• ℎ𝑀𝐴𝑃 = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(ℎ|𝐷) = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻𝑃(𝐷|ℎ) 𝑃(ℎ)
• A given coin is either fair or has a 60% bias in favor of Head.
• Decide what is the bias of the coin [This is a learning problem!]
• Two hypotheses: ℎ1: 𝑃(𝐻) = 0.5; ℎ2: 𝑃(𝐻) = 0.6
Prior: 𝑃(ℎ): 𝑃(ℎ1) = 0.75 𝑃(ℎ2 ) = 0.25
Now we need Data. 1st Experiment: coin toss is 𝐻.
– 𝑃(𝐷|ℎ):
𝑃(𝐷|ℎ1) = 0.5 ; 𝑃(𝐷|ℎ2) = 0.6
– 𝑃(𝐷):
𝑃(𝐷) = 𝑃(𝐷|ℎ1)𝑃(ℎ1) + 𝑃(𝐷|ℎ2)𝑃(ℎ2 )
= 0.5 × 0.75 + 0.6 × 0.25 = 0.525
– 𝑃(ℎ|𝐷):
𝑃(ℎ1|𝐷) = 𝑃(𝐷|ℎ1)𝑃(ℎ1)/𝑃(𝐷) = 0.5 × 0.75/0.525 = 0.714
𝑃(ℎ2|𝐷) = 𝑃(𝐷|ℎ2)𝑃(ℎ2)/𝑃(𝐷) = 0.6 × 0.25/0.525 = 0.286
42. CIS 419/519 Fall’19 45
Examples(2)
• ℎ𝑀𝐴𝑃 = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(ℎ|𝐷) = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻𝑃(𝐷|ℎ) 𝑃(ℎ)
• A given coin is either fair or has a 60% bias in favor of Head.
• Decide what is the bias of the coin [This is a learning problem!]
• Two hypotheses: ℎ1: 𝑃(𝐻) = 0.5; ℎ2: 𝑃(𝐻) = 0.6
– Prior: 𝑃(ℎ): 𝑃(ℎ1) = 0.75 𝑃(ℎ2) = 0.25
• After 1st coin toss is 𝐻 we still think that the coin is more likely to be fair
• If we were to use Maximum Likelihood approach (i.e., assume equal priors) we would think
otherwise. The data supports the biased coin better.
• Try: 100 coin tosses; 70 heads.
• You will believe that the coin is biased.
43. CIS 419/519 Fall’19 46
Examples(2)
ℎ𝑀𝐴𝑃 = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻 𝑃(ℎ|𝐷) = 𝑎𝑟𝑔𝑚𝑎𝑥ℎ∈𝐻𝑃(𝐷|ℎ) 𝑃(ℎ)
• A given coin is either fair or has a 60% bias in favor of Head.
• Decide what is the bias of the coin [This is a learning problem!]
• Two hypotheses: ℎ1: 𝑃(𝐻) = 0.5; ℎ2: 𝑃(𝐻) = 0.6
– Prior: 𝑃(ℎ): 𝑃(ℎ1) = 0.75 𝑃(ℎ2 ) = 0.25
• Case of 100 coin tosses; 70 heads.
𝑃(𝐷) = 𝑃(𝐷|ℎ1) 𝑃(ℎ1) + 𝑃(𝐷|ℎ2) 𝑃(ℎ2)
= 0.5¹⁰⁰ × 0.75 + 0.6⁷⁰ × 0.4³⁰ × 0.25
= 7.9 × 10⁻³¹ × 0.75 + 3.4 × 10⁻²⁸ × 0.25
𝑃(ℎ1|𝐷) = 𝑃(𝐷|ℎ1) 𝑃(ℎ1)/𝑃(𝐷) = 0.0057 << 𝑃(ℎ2|𝐷) = 𝑃(𝐷|ℎ2) 𝑃(ℎ2)/𝑃(𝐷) = 0.9943
In this example we had to choose one hypothesis out of two possibilities; realistically, we could
have infinitely many to choose from. We will still follow the maximum likelihood principle.
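Both coin computations (one toss, and 70 heads in 100 tosses) can be reproduced with a short Python sketch; working in log space avoids underflow, since the likelihoods here are on the order of 10⁻³¹ (the names h1/h2 mirror the hypotheses above):

```python
import math

priors = {"h1": 0.75, "h2": 0.25}   # P(h): fair vs. 60%-biased coin
p_head = {"h1": 0.5, "h2": 0.6}     # P(Head | h)

def posterior(heads: int, tosses: int) -> dict:
    # log P(D|h) + log P(h), computed in log space to avoid underflow
    log_joint = {h: math.log(priors[h])
                    + heads * math.log(p_head[h])
                    + (tosses - heads) * math.log(1.0 - p_head[h])
                 for h in priors}
    # Normalize by P(D) = sum_h P(D|h) P(h)
    m = max(log_joint.values())
    z = sum(math.exp(v - m) for v in log_joint.values())
    return {h: math.exp(v - m) / z for h, v in log_joint.items()}

print(posterior(1, 1))      # h1 ~ 0.714, h2 ~ 0.286 (first toss is H)
print(posterior(70, 100))   # h2 dominates, matching the slide
```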
44. CIS 419/519 Fall’19 48
Maximum Likelihood Estimate
• Assume that you toss a (𝑝, 1 − 𝑝) coin 𝑚 times and get 𝑘
Heads, 𝑚 − 𝑘 Tails. What is 𝑝?
• If 𝑝 is the probability of Head, the probability of the data
observed is:
𝑃(𝐷|𝑝) = 𝑝ᵏ (1 − 𝑝)^(𝑚−𝑘)
• The log likelihood:
𝐿(𝑝) = log 𝑃(𝐷|𝑝) = 𝑘 log(𝑝) + (𝑚 − 𝑘) log(1 − 𝑝)
• To maximize, set the derivative w.r.t. 𝑝 equal to 0:
𝑑𝐿(𝑝)/𝑑𝑝 = 𝑘/𝑝 − (𝑚 − 𝑘)/(1 − 𝑝) = 0
• Solving this for 𝑝 gives: 𝑝 = 𝑘/𝑚
1. The model we assumed is binomial. You could assume a different model! Next we will consider other models and see how to learn their parameters.
2. In practice, smoothing is advisable – deriving the right smoothing can be done by assuming a prior.
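A minimal sketch of the resulting estimator, plus the smoothed variant hinted at in note 2 (the add-𝛼 form shown corresponds to assuming a symmetric Beta prior; 𝛼 = 1 is just an illustrative default):

```python
def mle_p(k: int, m: int) -> float:
    # The closed-form solution derived above: p = k / m
    return k / m

def smoothed_p(k: int, m: int, alpha: float = 1.0) -> float:
    # Add-alpha ("Laplace") smoothing; corresponds to assuming a
    # symmetric Beta prior on p rather than using the raw MLE.
    return (k + alpha) / (m + 2.0 * alpha)

print(mle_p(7, 10))         # 0.7
print(mle_p(0, 10))         # 0.0 -- the raw MLE can assign probability zero
print(smoothed_p(0, 10))    # ~0.083 -- smoothing keeps it strictly positive
```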
45. CIS 419/519 Fall’19 49
Probability Distributions
• Bernoulli Distribution:
– Random Variable 𝑋 takes values {0, 1} s.t. 𝑃(𝑋 = 1) = 𝑝 = 1 − 𝑃(𝑋 = 0)
– (Think of tossing a coin)
• Binomial Distribution:
– Random Variable 𝑋 takes values in {0, 1, … , 𝑛}, representing the number of successes in 𝑛 Bernoulli trials.
– 𝑃(𝑋 = 𝑘) = 𝑓(𝑛, 𝑝, 𝑘) = C(𝑛, 𝑘) 𝑝ᵏ (1 − 𝑝)^(𝑛−𝑘)
– Note that if 𝑋 ~ Binom(𝑛, 𝑝) and 𝑌ᵢ ~ Bernoulli(𝑝), then 𝑋 = Σᵢ₌₁..ₙ 𝑌ᵢ
– (Think of multiple coin tosses)
46. CIS 419/519 Fall’19 50
Probability Distributions(2)
• Categorical Distribution:
– Random Variable 𝑋 takes on values in {1, 2, … , 𝑘} s.t. 𝑃(𝑋 = 𝑖) = 𝑝ᵢ and Σᵢ₌₁..ₖ 𝑝ᵢ = 1
– (Think of a die)
• Multinomial Distribution:
– Let the random variable 𝑋ᵢ (𝑖 = 1, 2, … , 𝑘) indicate the number of times outcome 𝑖 was observed over the 𝑛 trials.
– The vector 𝑋 = (𝑋₁, … , 𝑋ₖ) follows a multinomial distribution (𝑛, 𝑝) where 𝑝 = (𝑝₁, … , 𝑝ₖ) and Σᵢ₌₁..ₖ 𝑝ᵢ = 1
– 𝑓(𝑥₁, 𝑥₂, … , 𝑥ₖ; 𝑛, 𝑝) = 𝑃(𝑋₁ = 𝑥₁, … , 𝑋ₖ = 𝑥ₖ) = 𝑛!/(𝑥₁! … 𝑥ₖ!) 𝑝₁^{𝑥₁} … 𝑝ₖ^{𝑥ₖ} when Σᵢ₌₁..ₖ 𝑥ᵢ = 𝑛
– (Think of 𝑛 tosses of a 𝑘-sided die)
47. CIS 419/519 Fall’19
Our eventual goal will be: given a document, predict whether it’s “good” or “bad”.
A Multinomial Bag of Words
• We are given a collection of documents written in a three-word language {𝑎, 𝑏, 𝑐}. All the documents have exactly 𝑛 words (each word can be either 𝑎, 𝑏 or 𝑐).
• We are given a labeled document collection {𝐷₁, 𝐷₂, … , 𝐷ₘ}. The label 𝑦ᵢ of document 𝐷ᵢ is 1 or 0, indicating whether 𝐷ᵢ is “good” or “bad”.
• How do we model it? Our generative model uses the multinomial distribution. It first decides whether to generate a good or a bad document (with 𝑃(𝑦ᵢ = 1) = 𝜂). Then, it places words in the document; let 𝑎ᵢ (𝑏ᵢ, 𝑐ᵢ, resp.) be the number of times word 𝑎 (𝑏, 𝑐, resp.) appears in document 𝐷ᵢ. That is, we have 𝑎ᵢ + 𝑏ᵢ + 𝑐ᵢ = |𝐷ᵢ| = 𝑛.
• In this generative model, we have:
𝑃(𝐷ᵢ|𝑦 = 1) = 𝑛!/(𝑎ᵢ! 𝑏ᵢ! 𝑐ᵢ!) 𝛼₁^{𝑎ᵢ} 𝛽₁^{𝑏ᵢ} 𝛾₁^{𝑐ᵢ}
where 𝛼₁ (𝛽₁, 𝛾₁, resp.) is the probability that 𝑎 (𝑏, 𝑐) appears in a “good” document.
• Similarly, 𝑃(𝐷ᵢ|𝑦 = 0) = 𝑛!/(𝑎ᵢ! 𝑏ᵢ! 𝑐ᵢ!) 𝛼₀^{𝑎ᵢ} 𝛽₀^{𝑏ᵢ} 𝛾₀^{𝑐ᵢ}
• Note that: 𝛼₀ + 𝛽₀ + 𝛾₀ = 𝛼₁ + 𝛽₁ + 𝛾₁ = 1
What is the learning problem? Unlike the discriminative case, the “game” here is different: we make an assumption on how the data is being generated (multinomial, with 𝜂, 𝛼ᵢ, 𝛽ᵢ, 𝛾ᵢ). We observe documents and estimate these parameters (that’s the learning problem). Once we have the parameters, we can predict the corresponding label.
48. CIS 419/519 Fall’19
A Multinomial Bag of Words (2)
• We are given a collection of documents written in a three word language {𝑎, 𝑏, 𝑐}. All the documents
have exactly 𝑛 words (each word can be either 𝑎, 𝑏 𝑜𝑟 𝑐).
• We are given a labeled document collection {𝐷1, 𝐷2 … , 𝐷𝑚}. The label 𝑦𝑖 of document 𝐷𝑖 is 1 or 0,
indicating whether 𝐷𝑖 is “good” or “bad”.
• The classification problem: given a document 𝐷, determine if it is good or bad; that is, determine
𝑃(𝑦|𝐷).
• This can be determined via Bayes rule: 𝑃(𝑦|𝐷) = 𝑃(𝐷|𝑦) 𝑃(𝑦)/𝑃(𝐷)
• But, we need to know the parameters of the model to compute that.
49. CIS 419/519 Fall’19
A Multinomial Bag of Words (3)
• How do we estimate the parameters?
• We derive the most likely values of the parameters defined above by maximizing the log likelihood of the observed data.
• 𝑃(𝐷) = Πᵢ 𝑃(𝑦ᵢ, 𝐷ᵢ) = Πᵢ 𝑃(𝐷ᵢ|𝑦ᵢ) 𝑃(𝑦ᵢ) (labeled data, assuming that the examples are independent)
– We denote by 𝑃(𝑦ᵢ = 1) = 𝜂 the probability that an example is “good” (𝑦ᵢ = 1; otherwise 𝑦ᵢ = 0). Then:
Πᵢ 𝑃(𝑦ᵢ, 𝐷ᵢ) = Πᵢ [𝜂 · 𝑛!/(𝑎ᵢ! 𝑏ᵢ! 𝑐ᵢ!) 𝛼₁^{𝑎ᵢ} 𝛽₁^{𝑏ᵢ} 𝛾₁^{𝑐ᵢ}]^{𝑦ᵢ} · [(1 − 𝜂) · 𝑛!/(𝑎ᵢ! 𝑏ᵢ! 𝑐ᵢ!) 𝛼₀^{𝑎ᵢ} 𝛽₀^{𝑏ᵢ} 𝛾₀^{𝑐ᵢ}]^{1−𝑦ᵢ}
(Notice that this is an important trick: it writes down the joint probability without knowing what the outcome of the experiment is. The 𝑖-th expression evaluates to 𝑝(𝐷ᵢ, 𝑦ᵢ). It could be written as a sum with multiplicative 𝑦ᵢ, but that is less convenient. Makes sense?)
• We want to maximize it with respect to each of the parameters. We first compute log(𝑃(𝐷)) and then differentiate:
log(𝑃(𝐷)) = Σᵢ 𝑦ᵢ [log(𝜂) + 𝐶 + 𝑎ᵢ log(𝛼₁) + 𝑏ᵢ log(𝛽₁) + 𝑐ᵢ log(𝛾₁)] + (1 − 𝑦ᵢ) [log(1 − 𝜂) + 𝐶′ + 𝑎ᵢ log(𝛼₀) + 𝑏ᵢ log(𝛽₀) + 𝑐ᵢ log(𝛾₀)]
𝑑 log 𝑃(𝐷) / 𝑑𝜂 = Σᵢ [𝑦ᵢ/𝜂 − (1 − 𝑦ᵢ)/(1 − 𝜂)] = 0 → Σᵢ (𝑦ᵢ − 𝜂) = 0 → 𝜂 = Σᵢ 𝑦ᵢ / 𝑚
• The same can be done for the other six parameters. However, notice that they are not independent: 𝛼₀ + 𝛽₀ + 𝛾₀ = 𝛼₁ + 𝛽₁ + 𝛾₁ = 1 and also 𝑎ᵢ + 𝑏ᵢ + 𝑐ᵢ = |𝐷ᵢ| = 𝑛.
Makes sense?
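Putting the derived estimators into code: 𝜂 = Σᵢ 𝑦ᵢ/𝑚 as computed above, and, for the word parameters, the within-class relative frequencies that the analogous constrained maximization yields. The toy counts below are hypothetical:

```python
import numpy as np

# Hypothetical toy corpus: per-document counts of words a, b, c (n = 5 each),
# with labels y_i = 1 ("good") or 0 ("bad").
counts = np.array([[3, 1, 1],
                   [4, 1, 0],
                   [1, 2, 2],
                   [0, 2, 3]])
y = np.array([1, 1, 0, 0])

# eta = sum_i y_i / m, exactly as derived above.
eta = y.mean()

def word_probs(label: int) -> np.ndarray:
    # ML estimates of (alpha, beta, gamma) for one class: the relative
    # frequencies of a, b, c within that class (they sum to 1).
    c = counts[y == label].sum(axis=0)
    return c / c.sum()

print("eta =", eta)
print("good:", word_probs(1))   # (alpha1, beta1, gamma1)
print("bad: ", word_probs(0))   # (alpha0, beta0, gamma0)
```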
50. CIS 419/519 Fall’19 54
Other Examples (HMMs)
• Consider data over 5 characters, 𝑥 = 𝑎, 𝑏, 𝑐, 𝑑, 𝑒, and 2 states 𝑠 = 𝐵, 𝐼
– Think about chunking a sentence to phrases; 𝐵 is the Beginning of each phrase, I is Inside a phrase.
– (Or: think about being in one of two rooms, observing 𝑥 in it, moving to the other)
• We generate characters according to:
• Initial state prob: 𝑝(𝐵) = 1; 𝑝(𝐼) = 0
• State transition prob:
– 𝑝(𝐵→𝐵) = 0.8, 𝑝(𝐵→𝐼) = 0.2
– 𝑝(𝐼→𝐵) = 0.5, 𝑝(𝐼→𝐼) = 0.5
• Output prob:
– 𝑝(𝑎|𝐵) = 0.25, 𝑝(𝑏|𝐵) = 0.10, 𝑝(𝑐|𝐵) = 0.10, …
– 𝑝(𝑎|𝐼) = 0.25, 𝑝(𝑏|𝐼) = 0, …
• Can follow the generation process to get the observed sequence.
[Diagram: two-state HMM with states B and I; initial probabilities 𝑃(𝐵) = 1, 𝑃(𝐼) = 0; transition probabilities 0.8 (B→B), 0.2 (B→I), 0.5 (I→B), 0.5 (I→I); emission distributions 𝑃(𝑥|𝐵) and 𝑃(𝑥|𝐼); and an example generated sequence of states and characters.]
We can do the same exercise we did before.
Data: {(𝑥₁, 𝑥₂, … , 𝑥ₘ, 𝑠₁, 𝑠₂, … , 𝑠ₘ)}₁ⁿ
Find the most likely parameters of the model:
𝑃(𝑥ᵢ|𝑠ᵢ), 𝑃(𝑠ᵢ₊₁|𝑠ᵢ), 𝑝(𝑠₁)
Given an unlabeled example (observation) 𝑥 = (𝑥₁, 𝑥₂, … , 𝑥ₘ), use Bayes rule to predict the label 𝑙 = (𝑠₁, 𝑠₂, … , 𝑠ₘ):
𝑙* = argmax_𝑙 𝑃(𝑙|𝑥) = argmax_𝑙 𝑃(𝑥|𝑙) 𝑃(𝑙)/𝑃(𝑥)
The only issue is computational: there are 2ᵐ possible values of 𝑙 (labels).
This is an HMM model (but nothing was hidden; the 𝑠ᵢ were given in training). It’s also possible to solve when the 𝑠₁, 𝑠₂, … , 𝑠ₘ are hidden, via EM.
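A sketch of the generation process and of the joint probability it defines; the emission probabilities the slide elides are illustrative fill-ins chosen so each row sums to 1:

```python
import random

# Model from the slide; unlisted emission probabilities are illustrative.
init = {"B": 1.0, "I": 0.0}
trans = {"B": {"B": 0.8, "I": 0.2}, "I": {"B": 0.5, "I": 0.5}}
emit = {"B": {"a": 0.25, "b": 0.10, "c": 0.10, "d": 0.30, "e": 0.25},
        "I": {"a": 0.25, "b": 0.00, "c": 0.25, "d": 0.25, "e": 0.25}}

def sample(dist: dict) -> str:
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(m: int):
    # Follow the generation process: pick s1 ~ init, then alternate
    # emitting x_t ~ P(x|s_t) and moving s_{t+1} ~ P(s'|s_t).
    s, seq = sample(init), []
    for _ in range(m):
        seq.append((s, sample(emit[s])))
        s = sample(trans[s])
    return seq

def joint_prob(seq) -> float:
    # P(s1) * prod_t P(x_t|s_t) * P(s_{t+1}|s_t)
    p = init[seq[0][0]]
    for t, (s, x) in enumerate(seq):
        p *= emit[s][x]
        if t + 1 < len(seq):
            p *= trans[s][seq[t + 1][0]]
    return p

seq = generate(5)
print(seq, joint_prob(seq))
```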
51. CIS 419/519 Fall’19 55
Bayes Optimal Classifier
• How should we use the general formalism?
• What should 𝐻 be?
• 𝐻 can be a collection of functions. Given the training
data, choose an optimal function. Then, given new data,
evaluate the selected function on it.
• 𝐻 can be a collection of possible predictions. Given the
data, try to directly choose the optimal prediction.
• Could be different!
52. CIS 419/519 Fall’19 56
Bayes Optimal Classifier
• The first formalism suggests to learn a good hypothesis
and use it.
• (Language modeling, grammar learning, etc. are here)
h_MAP = argmax_{h∈H} P(h|D) = argmax_{h∈H} P(D|h) P(h)
• The second one suggests to directly choose a decision:
• This is the issue of “thresholding” vs. entertaining all
options until the last minute. (Computational Issues)
53. CIS 419/519 Fall’19 57
Bayes Optimal Classifier: Example
• Assume a space of 3 hypotheses:
– P(h₁|D) = 0.4; P(h₂|D) = 0.3; P(h₃|D) = 0.3 → h_MAP = h₁
• Given a new instance x, assume that
– h₁(x) = 1, h₂(x) = 0, h₃(x) = 0
• In this case,
– P(f(x) = 1) = 0.4; P(f(x) = 0) = 0.6, but h_MAP(x) = 1
• We want to determine the most probable classification by combining
the prediction of all hypotheses, weighted by their posterior
probabilities
• Think about it as deferring your commitment – don’t commit to the
most likely hypothesis until you have to make a decision (this has
computational consequences)
54. CIS 419/519 Fall’19 58
Bayes Optimal Classifier: Example(2)
• Let V be a set of possible classifications:
P(v_j|D) = Σ_{h_i∈H} P(v_j|h_i, D) P(h_i|D) = Σ_{h_i∈H} P(v_j|h_i) P(h_i|D)
• Bayes Optimal Classification:
v = argmax_{v_j∈V} P(v_j|D) = argmax_{v_j∈V} Σ_{h_i∈H} P(v_j|h_i) P(h_i|D)
• In the example:
P(1|D) = Σ_{h_i∈H} P(1|h_i) P(h_i|D) = 1·0.4 + 0·0.3 + 0·0.3 = 0.4
P(0|D) = Σ_{h_i∈H} P(0|h_i) P(h_i|D) = 0·0.4 + 1·0.3 + 1·0.3 = 0.6
• and the optimal prediction is indeed 0.
• The key example of using a “Bayes optimal Classifier” is that of the Naïve Bayes algorithm.
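The weighted vote above is a three-line computation; a sketch in Python:

    posteriors = [0.4, 0.3, 0.3]   # P(h1|D), P(h2|D), P(h3|D)
    predictions = [1, 0, 0]        # h1(x), h2(x), h3(x)

    p1 = sum(p for p, h in zip(posteriors, predictions) if h == 1)  # P(1|D) = 0.4
    p0 = sum(p for p, h in zip(posteriors, predictions) if h == 0)  # P(0|D) = 0.6
    print(1 if p1 > p0 else 0)     # prints 0, even though h_MAP(x) = 1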
55. CIS 419/519 Fall’19
Bayesian Classifier
• f: X → V, where V is a finite set of values
• Instances x ∈ X can be described as a collection of features:
x = (x_1, x_2, … x_n), x_i ∈ {0, 1}
• Given an example, assign it the most probable value in V
• Bayes Rule:
v_MAP = argmax_{v_j∈V} P(v_j|x) = argmax_{v_j∈V} P(v_j|x_1, x_2, … x_n)
v_MAP = argmax_{v_j∈V} P(x_1, x_2, …, x_n|v_j) P(v_j) / P(x_1, x_2, …, x_n) = argmax_{v_j∈V} P(x_1, x_2, …, x_n|v_j) P(v_j)
• Notational convention: 𝑃(𝑦) means 𝑃(𝑌 = 𝑦)
56. CIS 419/519 Fall’19
Bayesian Classifier
v_MAP = argmax_v P(x_1, x_2, …, x_n|v) P(v)
• Given training data we can estimate the two terms.
• Estimating 𝑃(𝑣) is easy. E.g., under the binomial distribution assumption, count the
number of times 𝑣 appears in the training data.
• However, it is not feasible to estimate P(x_1, x_2, …, x_n|v) directly:
• we would have to estimate, for each target value, the probability of every instance (most of which will never occur in training).
• In order to use a Bayesian classifier in practice, we need to make assumptions that will allow us to estimate these quantities.
58. CIS 419/519 Fall’19
Naive Bayes (2)
v_MAP = argmax_v P(x_1, x_2, …, x_n|v) P(v)
• Assumption: feature values are independent given the target value
P(x_1 = b_1, x_2 = b_2, …, x_n = b_n | v = v_j) = ∏_{i=1}^n P(x_i = b_i | v = v_j)
• Generative model:
• First choose a value v_j ∈ V according to P(v)
• Given v_j, choose x_1, x_2, …, x_n according to P(x_k|v_j)
59. CIS 419/519 Fall’19
Naive Bayes (3)
v_MAP = argmax_v P(x_1, x_2, …, x_n|v) P(v)
• Assumption: feature values are independent given the target value
P(x_1 = b_1, x_2 = b_2, …, x_n = b_n | v = v_j) = ∏_{i=1}^n P(x_i = b_i | v = v_j)
• Learning method: Estimate 𝑛|𝑉| + |𝑉| parameters and use them to make a prediction.
(How to estimate?)
• Notice that this is learning without search. Given a collection of training examples, you just
compute the best hypothesis (given the assumptions).
• This is learning without trying to achieve consistency or even approximate consistency.
• Why does it work?
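A minimal sketch of this "learning without search", assuming binary features and a small toy dataset (the n|V| + |V| parameters are just counts):

    def train_nb(X, y):
        priors, cond = {}, {}
        m = len(y)
        for v in set(y):
            rows = [X[i] for i in range(m) if y[i] == v]
            priors[v] = len(rows) / m                        # P(v)
            cond[v] = [sum(r[k] for r in rows) / len(rows)   # P(x_k = 1 | v)
                       for k in range(len(X[0]))]
        return priors, cond

    def predict(x, priors, cond):
        def score(v):
            s = priors[v]
            for k, b in enumerate(x):
                s *= cond[v][k] if b else 1 - cond[v][k]
            return s
        return max(priors, key=score)

    X = [[1, 0], [1, 1], [0, 1], [0, 0]]
    y = [1, 1, 0, 0]
    priors, cond = train_nb(X, y)
    print(predict([1, 0], priors, cond))   # -> 1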
60. CIS 419/519 Fall’19
Conditional Independence
• Notice that the feature values are conditionally independent given the target value, and are not required to be independent.
• Example: The Boolean features are x and y. We define the label to be l = f(x, y) = x ∧ y over the product distribution:
p(x = 0) = p(x = 1) = 1/2 and p(y = 0) = p(y = 1) = 1/2
The distribution is defined so that x and y are independent: p(x, y) = p(x) p(y)
That is:
        X = 0          X = 1
Y = 0   1/4 (l = 0)    1/4 (l = 0)
Y = 1   1/4 (l = 0)    1/4 (l = 1)
• But, given that l = 0:
p(x = 1|l = 0) = p(y = 1|l = 0) = 1/3, while p(x = 1, y = 1|l = 0) = 0,
so x and y are not conditionally independent.
61. CIS 419/519 Fall’19
Conditional Independence
• The other direction also does not hold: x and y can be conditionally independent but not independent.
• Example: We define a distribution s.t.:
l = 0: p(x = 1|l = 0) = 1, p(y = 1|l = 0) = 0
l = 1: p(x = 1|l = 1) = 0, p(y = 1|l = 1) = 1
and assume that p(l = 0) = p(l = 1) = 1/2. The joint distribution is then:
        X = 0          X = 1
Y = 0   0 (l = 0)      1/2 (l = 0)
Y = 1   1/2 (l = 1)    0 (l = 1)
• Given the value of l, 𝑥 and 𝑦 are independent (check)
• What about unconditional independence ?
𝑝(𝑥 = 1) = 𝑝(𝑥 = 1| 𝑙 = 0)𝑝(𝑙 = 0) + 𝑝(𝑥 = 1| 𝑙 = 1)𝑝(𝑙 = 1) = 0.5 + 0 = 0.5
𝑝(𝑦 = 1) = 𝑝(𝑦 = 1| 𝑙 = 0)𝑝(𝑙 = 0) + 𝑝(𝑦 = 1| 𝑙 = 1)𝑝(𝑙 = 1) = 0 + 0.5 = 0.5
But,
𝑝(𝑥 = 1, 𝑦 = 1) = 𝑝(𝑥 = 1, 𝑦 = 1| 𝑙 = 0)𝑝(𝑙 = 0) + 𝑝(𝑥 = 1, 𝑦 = 1| 𝑙 = 1)𝑝(𝑙 = 1) = 0
so 𝑥 and 𝑦 are not independent.
62. CIS 419/519 Fall’19
Day Outlook Temperature Humidity Wind PlayTennis
1 Sunny Hot High Weak No
2 Sunny Hot High Strong No
3 Overcast Hot High Weak Yes
4 Rain Mild High Weak Yes
5 Rain Cool Normal Weak Yes
6 Rain Cool Normal Strong No
7 Overcast Cool Normal Strong Yes
8 Sunny Mild High Weak No
9 Sunny Cool Normal Weak Yes
10 Rain Mild Normal Weak Yes
11 Sunny Mild Normal Strong Yes
12 Overcast Mild High Strong Yes
13 Overcast Hot Normal Weak Yes
14 Rain Mild High Strong No
Naïve Bayes Example
v_NB = argmax_{v_j∈V} P(v_j) ∏_i P(x_i|v_j)
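As a sketch, the table above can be queried directly in Python; the query instance (Sunny, Cool, High, Strong) is the usual textbook choice and an assumption here:

    data = [
        ("Sunny","Hot","High","Weak","No"), ("Sunny","Hot","High","Strong","No"),
        ("Overcast","Hot","High","Weak","Yes"), ("Rain","Mild","High","Weak","Yes"),
        ("Rain","Cool","Normal","Weak","Yes"), ("Rain","Cool","Normal","Strong","No"),
        ("Overcast","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
        ("Sunny","Cool","Normal","Weak","Yes"), ("Rain","Mild","Normal","Weak","Yes"),
        ("Sunny","Mild","Normal","Strong","Yes"), ("Overcast","Mild","High","Strong","Yes"),
        ("Overcast","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No"),
    ]
    query = ("Sunny", "Cool", "High", "Strong")

    def score(v):
        rows = [r for r in data if r[-1] == v]
        p = len(rows) / len(data)                           # P(v)
        for i, x in enumerate(query):
            p *= sum(r[i] == x for r in rows) / len(rows)   # P(x_i|v)
        return p

    print({v: score(v) for v in ("Yes", "No")})  # No wins: ~0.0206 vs ~0.0053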
63. CIS 419/519 Fall’19 67
Estimating Probabilities
• v_NB = argmax_{v∈{yes,no}} P(v) ∏_i P(x_i = observation|v)
• How do we estimate 𝑃 𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑡𝑖𝑜𝑛 𝑣 ?
68. CIS 419/519 Fall’19 72
Estimating Probabilities
v_NB = argmax_v ∏_i P(x_i|v) · P(v)
• How do we estimate P(x_i|v)?
• As we suggested before, we made a binomial assumption; then:
P(x_i = 1|v) = #(x_i = 1 in training examples labeled v) / #(v-labeled examples) = n_i / n
• Sparsity of data is a problem
– if 𝑛 is small, the estimate is not accurate
– if 𝑛𝑖 is 0, it will dominate the estimate: we will never predict 𝑣 if a word that
never appeared in training (with 𝑣) appears in the test data
69. CIS 419/519 Fall’19 73
Robust Estimation of Probabilities
• This process is called smoothing.
• There are many ways to do it, some better justified than others;
• An empirical issue.
Here:
• 𝑛𝑘 is # of occurrences of the 𝑘-th feature given the label 𝑣
• 𝑛 is # of occurrences of the label 𝑣
• 𝑝 is a prior estimate of 𝑣 (e.g., uniform)
• 𝑚 is equivalent sample size (# of labels)
– Is this a reasonable definition?
P(x_k|v) = (n_k + m·p) / (n + m)
v_NB = argmax_{v∈{like,dislike}} P(v) ∏_i P(x_i = word_i|v)
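The m-estimate itself is one line; a minimal sketch (with p = 1/2 and m = 2 it reduces to Laplace "add-one" smoothing for a binary feature):

    def m_estimate(n_k, n, p=0.5, m=2):
        # P(x_k|v) = (n_k + m*p) / (n + m)
        return (n_k + m * p) / (n + m)

    print(m_estimate(0, 10))   # ~0.083 instead of a catastrophic 0
    print(m_estimate(3, 10))   # ~0.333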
71. CIS 419/519 Fall’19
• In practice, we will typically work in log space:
v_NB = argmax_{v_j} log[ P(v_j) ∏_i P(x_i|v_j) ] = argmax_{v_j} [ log P(v_j) + Σ_i log P(x_i|v_j) ]
Another comment on estimation
v_NB = argmax_{v_j∈V} P(v_j) ∏_i P(x_i|v_j)
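Why log space: the product of many small probabilities underflows floating point, while the sum of their logs is harmless. A quick illustration:

    import math

    probs = [1e-5] * 100
    product = 1.0
    for p in probs:
        product *= p
    print(product)                               # 0.0 -- underflow

    log_score = sum(math.log(p) for p in probs)
    print(log_score)                             # about -1151.3, perfectly usable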
72. CIS 419/519 Fall’19 77
Naïve Bayes: Two Classes
• Notice that the naïve Bayes method gives a method for predicting rather than an explicit classifier.
• In the case of two classes, v ∈ {0, 1}, we predict that v = 1 iff:
[ P(v_j = 1) · ∏_{i=1}^n P(x_i|v_j = 1) ] / [ P(v_j = 0) · ∏_{i=1}^n P(x_i|v_j = 0) ] > 1
Why do things work?
v_NB = argmax_{v_j∈V} P(v_j) ∏_i P(x_i|v_j)
73. CIS 419/519 Fall’19 78
Naïve Bayes: Two Classes
• Notice that the naïve Bayes method gives a method for predicting rather than an explicit classifier.
• In the case of two classes, v ∈ {0, 1}, we predict that v = 1 iff:
[ P(v_j = 1) · ∏_{i=1}^n P(x_i|v_j = 1) ] / [ P(v_j = 0) · ∏_{i=1}^n P(x_i|v_j = 0) ] > 1
Denote p_i = P(x_i = 1|v = 1), q_i = P(x_i = 1|v = 0). Then:
[ P(v_j = 1) · ∏_{i=1}^n p_i^{x_i} (1 − p_i)^{1−x_i} ] / [ P(v_j = 0) · ∏_{i=1}^n q_i^{x_i} (1 − q_i)^{1−x_i} ] > 1
v_NB = argmax_{v_j∈V} P(v_j) ∏_i P(x_i|v_j)
76. CIS 419/519 Fall’19 81
Naïve Bayes: Two Classes
• In the case of two classes, v ∈ {0, 1}, we predict that v = 1 iff:
[ P(v_j = 1) · ∏_{i=1}^n p_i^{x_i} (1 − p_i)^{1−x_i} ] / [ P(v_j = 0) · ∏_{i=1}^n q_i^{x_i} (1 − q_i)^{1−x_i} ]
= [ P(v_j = 1) · ∏_{i=1}^n (1 − p_i) · (p_i/(1 − p_i))^{x_i} ] / [ P(v_j = 0) · ∏_{i=1}^n (1 − q_i) · (q_i/(1 − q_i))^{x_i} ] > 1
• Take the logarithm; we predict v = 1 iff:
log[ P(v_j = 1)/P(v_j = 0) ] + Σ_i log[(1 − p_i)/(1 − q_i)] + Σ_i ( log[p_i/(1 − p_i)] − log[q_i/(1 − q_i)] ) x_i > 0
• We get that naive Bayes is a linear separator, with weights
w_i = log[p_i/(1 − p_i)] − log[q_i/(1 − q_i)] = log[ p_i(1 − q_i) / (q_i(1 − p_i)) ]
• If p_i = q_i then w_i = 0 and the feature is irrelevant.
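A sketch checking this algebra numerically, with toy values for the priors and for p_i, q_i: the direct likelihood-ratio decision coincides with sign(w·x − b) for the weights derived above.

    import math

    prior1, p = 0.5, [0.8, 0.3, 0.6]   # P(v=1), p_i = P(x_i=1|v=1)  (toy values)
    prior0, q = 0.5, [0.2, 0.5, 0.4]   # P(v=0), q_i = P(x_i=1|v=0)  (toy values)

    w = [math.log(pi * (1 - qi) / (qi * (1 - pi))) for pi, qi in zip(p, q)]
    b = -(math.log(prior1 / prior0)
          + sum(math.log((1 - pi) / (1 - qi)) for pi, qi in zip(p, q)))

    def nb(x):      # direct likelihood-ratio test
        r1 = prior1 * math.prod(pi if xi else 1 - pi for pi, xi in zip(p, x))
        r0 = prior0 * math.prod(qi if xi else 1 - qi for qi, xi in zip(q, x))
        return int(r1 > r0)

    def linear(x):  # predict v = 1 iff w.x - b > 0
        return int(sum(wi * xi for wi, xi in zip(w, x)) - b > 0)

    for x in [(0, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 0)]:
        assert nb(x) == linear(x)
    print("NB decisions match the linear separator")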
77. CIS 419/519 Fall’19 82
Naïve Bayes: Two Classes
• In the case of two classes we have that:
log[ P(v_j = 1|x) / P(v_j = 0|x) ] = Σ_i w_i x_i − b
• but since
P(v_j = 1|x) = 1 − P(v_j = 0|x)
• We get:
P(v_j = 1|x) = 1 / (1 + exp(−Σ_i w_i x_i + b))
• Which is simply the logistic function.
• The linearity of NB provides a better explanation for why it works.
We have: A = 1 − B and log(B/A) = −C. Then:
exp(−C) = B/A = (1 − A)/A = 1/A − 1,
so 1 + exp(−C) = 1/A, and therefore
A = 1/(1 + exp(−C)).
78. CIS 419/519 Fall’19
Why Does it Work?
• Learning Theory
– Probabilistic predictions are done via Linear Models
– The low expressivity explains Generalization + Robustness
• Expressivity (1: Methodology)
– Is there a reason to believe that these hypotheses minimize the empirical error?
• In General, No. (Unless some probabilistic assumptions happen to hold).
– But: if the hypothesis does not fit the training data,
• Experimenters will augment set of features (in fact, add dependencies)
– But now, this actually follows the Learning Theory Protocol:
• Try to learn a hypothesis that is consistent with the data
• Generalization will be a function of the low expressivity
• Expressivity (2: Theory)
– (a) Product distributions are “dense” in the space of all distributions.
• Consequently, the resulting predictor’s error is actually close to that of the optimal classifier.
– (b) NB classifiers cannot express all linear functions; but they converge faster to what they can
express.
Err_S(h) = |{x ∈ S : h(x) ≠ f(x)}| / |S|
79. CIS 419/519 Fall’19
What’s Next?
(1) If probabilistic hypotheses are actually like other linear
functions, can we interpret the outcome of other linear
learning algorithms probabilistically?
– Yes
(2) If probabilistic hypotheses are actually like other linear
functions, can you train them similarly (that is,
discriminatively)?
– Yes.
– Classification: Logistic regression / Max Entropy
– HMM: can be learned as a linear model, e.g., with a version of
Perceptron (Structured Models class)
80. CIS 419/519 Fall’19 85
Recall: Naïve Bayes, Two Classes
• In the case of two classes we have:
log[ P(v_j = 1|x) / P(v_j = 0|x) ] = Σ_i w_i x_i − b   (1)
• but since
P(v_j = 1|x) = 1 − P(v_j = 0|x)   (2)
• We get (plug (2) into (1); some algebra):
P(v_j = 1|x) = 1 / (1 + exp(−Σ_i w_i x_i + b))
• Which is simply the logistic (sigmoid) function used in the neural network representation.
81. CIS 419/519 Fall’19 86
Conditional Probabilities
• (1) If probabilistic hypotheses are actually like other linear
functions, can we interpret the outcome of other linear
learning algorithms probabilistically?
– Yes
• General recipe
– Train a classifier f using your favorite algorithm (Perceptron, SVM,
Winnow, etc). Then:
– Use the sigmoid 1/(1 + exp(−(A wᵀx + B))) to get an estimate for P(y|x)
– A, B can be tuned using a held-out set that was not used for training.
– Done in LBJava, for example
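A hedged sketch of this recipe (essentially Platt scaling): fit A and B by minimizing the logistic loss on held-out scores. The scores, labels, learning rate, and epoch count below are all made up for illustration.

    import math

    def fit_platt(scores, labels, lr=0.1, epochs=2000):
        # scores: held-out w.x values from any linear classifier; labels: 0/1
        A, B = 1.0, 0.0
        for _ in range(epochs):
            for s, y in zip(scores, labels):
                p = 1.0 / (1.0 + math.exp(-(A * s + B)))  # current P(y=1|x) estimate
                A -= lr * (p - y) * s                     # gradient of log loss wrt A
                B -= lr * (p - y)                         # gradient of log loss wrt B
        return A, B

    held_out_scores = [-2.0, -0.5, 0.3, 1.5, 2.2]         # hypothetical w.x values
    held_out_labels = [0, 0, 1, 1, 1]
    A, B = fit_platt(held_out_scores, held_out_labels)
    prob = 1.0 / (1.0 + math.exp(-(A * 0.8 + B)))         # P(y=1|x) for a new score
    print(A, B, prob)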
82. CIS 419/519 Fall’19
Logistic Regression (1)
• (2) If probabilistic hypotheses are actually like other linear functions,
can you actually train them similarly (that is, discriminatively)?
• The logistic regression model assumes the following model:
P(y = +1|x, w) = [1 + exp(−y(wᵀx + b))]⁻¹
• This is the same model we derived for naïve Bayes, only that now we will not make any independence assumptions. We will directly find the best w.
• Therefore training will be more difficult. However, the weight vector
derived will be more expressive.
– It can be shown that the naïve Bayes algorithm cannot represent all linear
threshold functions.
– On the other hand, NB converges to its performance faster.
How?
83. CIS 419/519 Fall’19
Logistic Regression (2)
• Given the model:
P(y = +1|x, w) = [1 + exp(−y(wᵀx + b))]⁻¹
• The goal is to find the (w, b) that maximizes the log likelihood of the data {(x_1, y_1), …, (x_m, y_m)}.
• Equivalently, we are looking for the (w, b) that minimizes the negative log-likelihood:
min_{w,b} −(1/m) Σ_i log P(y_i|x_i, w) = min_{w,b} (1/m) Σ_i log[1 + exp(−y_i(wᵀx_i + b))]
• This optimization problem is called Logistic Regression.
• Logistic Regression is sometimes called the Maximum Entropy model in the NLP
community (since the resulting distribution is the one that has the largest entropy among all
those that activate the same features).
84. CIS 419/519 Fall’19
Logistic Regression (3)
• Using the standard mapping to linear separators through the origin, we would like to
minimize:
min_{w,b} (1/m) Σ_i log[1 + exp(−y_i(wᵀx_i + b))]
• To get good generalization, it is common to add a regularization term; the regularized logistic regression then becomes:
min_w f(w) = ½ wᵀw + C·(1/m) Σ_i log[1 + exp(−y_i wᵀx_i)]
where C is a user-selected parameter that balances the two terms (the first term is the regularization term; the second is the empirical loss).
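A minimal gradient-descent sketch of this regularized objective, assuming NumPy and labels y_i ∈ {−1, +1}; the learning rate, epoch count, and toy data are arbitrary choices:

    import numpy as np

    def train_logreg(X, y, C=1.0, lr=0.1, epochs=500):
        # minimize f(w) = 1/2 w.w + C * (1/m) * sum_i log(1 + exp(-y_i w.x_i))
        m, n = X.shape
        w = np.zeros(n)
        for _ in range(epochs):
            margins = y * (X @ w)                     # y_i * w.x_i
            s = 1.0 / (1.0 + np.exp(margins))         # exp(-m_i) / (1 + exp(-m_i))
            grad = w - C * (1.0 / m) * X.T @ (s * y)  # gradient of f at w
            w -= lr * grad
        return w

    X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w = train_logreg(X, y)
    print(w, np.sign(X @ w))                          # all four points recovered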
85. CIS 419/519 Fall’19
Comments on Discriminative Learning
• min_w f(w) = ½ wᵀw + C·(1/m) Σ_i log[1 + exp(−y_i wᵀx_i)],
where C is a user-selected parameter that balances the two terms.
• The second term is the empirical loss and the first is a regularization term; therefore, regularized logistic regression can be related to other learning methods, e.g., SVMs.
• L1 SVM solves the following optimization problem:
min_w f₁(w) = ½ wᵀw + C·(1/m) Σ_i max(0, 1 − y_i wᵀx_i)
• L2 SVM solves the following optimization problem:
min_w f₂(w) = ½ wᵀw + C·(1/m) Σ_i max(0, 1 − y_i wᵀx_i)²
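A small sketch comparing the three surrogate losses as a function of the margin m_i = y_i · wᵀx_i; all three penalize small or negative margins and vanish (or nearly vanish) as the margin grows:

    import math

    def logistic_loss(m): return math.log(1 + math.exp(-m))
    def l1_hinge(m):      return max(0.0, 1 - m)
    def l2_hinge(m):      return max(0.0, 1 - m) ** 2

    for m in (-2, -1, 0, 1, 2):
        print(m, round(logistic_loss(m), 3), l1_hinge(m), l2_hinge(m))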
87. CIS 419/519 Fall’19 92
Example: Learning to Classify Text
• Instance space 𝑋: Text documents
• Instances are labeled according to 𝑓(𝑥) = 𝑙𝑖𝑘𝑒/𝑑𝑖𝑠𝑙𝑖𝑘𝑒
• Goal: learn this function such that, given a new document, you can use it to decide if you like it or not.
• How to represent the document ?
• How to estimate the probabilities ?
• How to classify?
v_NB = argmax_{v∈V} P(v) ∏_i P(x_i|v)
88. CIS 419/519 Fall’19 93
Document Representation
• Instance space 𝑋: Text documents
• Instances are labeled according to 𝑦 = 𝑓(𝑥) = 𝑙𝑖𝑘𝑒/𝑑𝑖𝑠𝑙𝑖𝑘𝑒
• How to represent the document ?
• A document will be represented as a list of its words
• The representation question can be viewed as the generation
question
• We have a dictionary of 𝑛 words (therefore 2𝑛 parameters)
• We have documents of size 𝑁: can account for word position & count
• Having a parameter for each word & position may be too much:
– # of parameters: 2 × N × n (2 × 100 × 50,000 ≈ 10⁷)
• Simplifying Assumption:
– The probability of observing a word in a document is independent of its location
– This still allows us to think about two ways of generating the document
89. CIS 419/519 Fall’19
• We want to compute
• argmax_y P(y|D) = argmax_y P(D|y) P(y)/P(D) = argmax_y P(D|y) P(y)
• Our assumptions will go into estimating 𝑃(𝐷|𝑦):
1. Multivariate Bernoulli
I. To generate a document, first decide if it’s good (𝑦 = 1) or bad (𝑦 = 0).
II. Given that, consider your dictionary of words and choose 𝑤 into your document with
probability 𝑝(𝑤 |𝑦), irrespective of anything else.
III. If the size of the dictionary is |𝑉| = 𝑛, we can then write
• P(d|y) = ∏_{i=1}^n P(w_i = 1|y)^{b_i} P(w_i = 0|y)^{1−b_i}
• Where:
• p(w = 1/0|y): the probability that w appears / does not appear in a y-labeled document.
• b_i ∈ {0, 1} indicates whether word w_i occurs in document d.
• 2n + 2 parameters.
• Estimating 𝑃(𝑤𝑖 = 1|𝑦) and 𝑃(𝑦) is done in the ML way as before (counting).
Classification via Bayes Rule (B)
Parameters:
1. Priors: 𝑃(𝑦 = 0/1)
2. ∀ 𝑤𝑖 ∈ 𝐷𝑖𝑐𝑡𝑖𝑜𝑛𝑎𝑟𝑦
𝑝(𝑤𝑖 = 0/1 |𝑦 = 0/1)
90. CIS 419/519 Fall’19
• We want to compute
• argmax_y P(y|D) = argmax_y P(D|y) P(y)/P(D) = argmax_y P(D|y) P(y)
• Our assumptions will go into estimating 𝑃(𝐷|𝑦):
2. Multinomial
I. To generate a document, first decide if it’s good (𝑦 = 1) or bad (𝑦 = 0).
II. Given that, place N words into d, such that w_i is placed with probability P(w_i|y), where Σ_i P(w_i|y) = 1 over the dictionary.
III. The probability of a document is:
P(d|y) = N!/(n_1! … n_k!) · P(w_1|y)^{n_1} … P(w_k|y)^{n_k}
• Where 𝑛𝑖 is the # of times 𝑤𝑖 appears in the document.
• Same # of parameters: 2𝑛 + 2, where 𝑛 = |Dictionary|, but the estimation is done a bit differently. (HW).
A Multinomial Model
Parameters:
1. Priors: P(y = 0/1)
2. ∀ w_i ∈ Dictionary: p(w_i|y = 0/1)
𝑁 dictionary items are
chosen into 𝐷
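A sketch contrasting the two likelihoods on a toy three-word dictionary; all parameter values here are assumptions (note that the Bernoulli appearance probabilities need not sum to 1, while the multinomial distribution does):

    from math import factorial, prod

    vocab = ["ball", "game", "team"]                  # toy dictionary
    doc = ["ball", "ball", "game"]                    # a document with N = 3 words

    # Multivariate Bernoulli: per-word appearance probabilities P(w = 1|y)
    p_appear = {"ball": 0.7, "game": 0.4, "team": 0.1}
    bern = prod(p_appear[w] if w in doc else 1 - p_appear[w] for w in vocab)

    # Multinomial: one distribution P(w|y) over the dictionary, plus the
    # multinomial coefficient N!/(n_1! ... n_k!)
    p_word = {"ball": 0.5, "game": 0.3, "team": 0.2}
    counts = {w: doc.count(w) for w in vocab}
    coeff = factorial(len(doc))
    for c in counts.values():
        coeff //= factorial(c)
    multi = coeff * prod(p_word[w] ** counts[w] for w in vocab)

    print(bern, multi)   # two different values of P(d|y) for the same document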
91. CIS 419/519 Fall’19
• The generative model in these two cases is different
[Plate diagram (Bernoulli): prior µ → label; plates over documents d and dictionary words w; binary variable “Appear” at their intersection, governed by the label and the Bernoulli parameter β for word w.]
Bernoulli: A binary variable corresponds to a document d and a dictionary word w, and it takes the value 1 if w appears in d. The document’s topic/label is governed by a prior µ, and the variable in the intersection of the plates is governed by the label and the Bernoulli parameter β for the dictionary word w.
[Plate diagram (Multinomial): prior µ → label; plates over documents d and positions p; variable W(d, p) at their intersection, drawn from the multinomial β.]
Multinomial: Variables do not correspond to dictionary words but to positions (occurrences) in the document d. The internal variable is then W(D, P). These variables are generated from the same multinomial distribution β, and depend on the topic/label.
Model Representation
92. CIS 419/519 Fall’19 97
General NB Scenario
• We assume a mixture probability model, parameterized by µ.
• Different components {c_1, c_2, … c_k} of the model are parameterized by disjoint subsets of µ.
• The generative story: A document 𝑑 is created by
(1) selecting a component according to the priors, 𝑃(𝑐𝑗 |µ), then
(2) having the mixture component generate a document according to
its own parameters, with distribution 𝑃(𝑑|𝑐𝑗, µ)
• So we have:
P(d|µ) = Σ_{j=1}^k P(c_j|µ) P(d|c_j, µ)
• In the case of document classification, we assume a one to one
correspondence between components and labels.
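The two-step generative story is easy to simulate; a sketch with assumed priors and per-component multinomials (one component per label, as in the document-classification case):

    import random

    priors = {"good": 0.6, "bad": 0.4}                    # P(c_j|mu), assumed
    component = {"good": {"a": 0.5, "b": 0.4, "c": 0.1},  # P(word|c_j, mu), assumed
                 "bad":  {"a": 0.1, "b": 0.2, "c": 0.7}}

    def generate_document(n_words):
        # (1) select a component according to the priors
        c = random.choices(list(priors), weights=list(priors.values()))[0]
        # (2) let that component generate the document from its own parameters
        words = random.choices(list(component[c]),
                               weights=list(component[c].values()), k=n_words)
        return c, "".join(words)

    print(generate_document(5))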
93. CIS 419/519 Fall’19
• 𝑋𝑖 can be continuous
• We can still use
P(X_1, …, X_n|Y) = ∏_i P(X_i|Y)
• And
P(Y = y|X_1, …, X_n) = P(Y = y) ∏_i P(X_i|Y = y) / Σ_j P(Y = y_j) ∏_i P(X_i|Y = y_j)
Naïve Bayes: Continuous Features
94. CIS 419/519 Fall’19
• 𝑋𝑖 can be continuous
• We can still use
P(X_1, …, X_n|Y) = ∏_i P(X_i|Y)
• And
P(Y = y|X_1, …, X_n) = P(Y = y) ∏_i P(X_i|Y = y) / Σ_j P(Y = y_j) ∏_i P(X_i|Y = y_j)
Naïve Bayes classifier:
Y = argmax_y P(Y = y) ∏_i P(X_i|Y = y)
Naïve Bayes: Continuous Features
95. CIS 419/519 Fall’19
• 𝑋𝑖 can be continuous
• We can still use
P(X_1, …, X_n|Y) = ∏_i P(X_i|Y)
• And
P(Y = y|X_1, …, X_n) = P(Y = y) ∏_i P(X_i|Y = y) / Σ_j P(Y = y_j) ∏_i P(X_i|Y = y_j)
Naïve Bayes classifier:
Y = argmax_y P(Y = y) ∏_i P(X_i|Y = y)
Assumption: P(X_i|Y) has a Gaussian distribution
Naïve Bayes: Continuous Features
96. CIS 419/519 Fall’19 101
The Gaussian Probability Distribution
• Gaussian probability distribution also called normal
distribution.
• It is a continuous distribution with pdf:
• µ = mean of distribution
• 𝜎2 = variance of distribution
• 𝑥 is a continuous variable (−∞ ≤ 𝑥 ≤ ∞)
• Probability of 𝑥 being in the range [𝑎, 𝑏] cannot be evaluated
analytically (has to be looked up in a table)
• p(x) = (1/(σ√(2π))) · e^{−(x−µ)²/(2σ²)}
[Plot: the Gaussian pdf p(x) as a function of x.]
97. CIS 419/519 Fall’19
• 𝑃(𝑋𝑖|𝑌) is Gaussian
• Training: estimate mean and standard deviation
– µ_i = E[X_i|Y = y]
– σ_i² = E[(X_i − µ_i)²|Y = y]
Naïve Bayes: Continuous Features
Note that the following slides abuse notation significantly. Since P(x) = 0 for continuous distributions, we think of P(X = x|Y = y) not as a classic probability distribution, but just as a function f(x) = N(x; µ, σ²).
f(x) behaves as a probability distribution in the sense that ∀x, f(x) ≥ 0 and the values integrate to 1. Also, note that f(x) satisfies Bayes Rule, that is:
f_Y(y|X = x) = f_X(x|Y = y) f_Y(y) / f_X(x)
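Putting the pieces together, a minimal Gaussian naive Bayes sketch (NumPy assumed; the toy data is made up): per class, estimate each feature's mean and variance, then score in log space with the Gaussian density standing in for P(x_i|y).

    import numpy as np

    def fit(X, y):
        stats = {}
        for v in np.unique(y):
            Xv = X[y == v]
            # P(v), per-feature mu_i and sigma_i^2, estimated as above
            stats[v] = (len(Xv) / len(X), Xv.mean(axis=0), Xv.var(axis=0))
        return stats

    def gauss(x, mu, var):
        return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    def predict(x, stats):
        def score(v):
            prior, mu, var = stats[v]
            return np.log(prior) + np.log(gauss(x, mu, var)).sum()  # log space
        return max(stats, key=score)

    X = np.array([[1.0, 5.0], [1.2, 4.8], [3.0, 1.0], [3.2, 0.8]])
    y = np.array([0, 0, 1, 1])
    print(predict(np.array([1.1, 5.1]), fit(X, y)))   # -> 0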
100. CIS 419/519 Fall’19 105
Recall: Naïve Bayes, Two Classes
• In the case of two classes we have that:
log[ P(v = 1|x) / P(v = 0|x) ] = Σ_i w_i x_i − b
• but since
P(v = 1|x) = 1 − P(v = 0|x)
• We get:
P(v = 1|x) = 1 / (1 + exp(−Σ_i w_i x_i + b))
• Which is simply the logistic function (also used in the neural network
representation)
• The same formula can be written for continuous features