Chapter ML:I
I. Introduction
K Examples of Learning Tasks
K Specification of Learning Problems
ML:I-9 Introduction © STEIN/LETTMANN 2005-2017
Examples of Learning Tasks
Car Shopping Guide
What criteria form the basis of a decision?
Examples of Learning Tasks
Risk Analysis for Credit Approval

Customer 1:
  house owner       yes
  income (p.a.)     51 000 EUR
  repayment (p.m.)  1 000 EUR
  credit period     7 years
  SCHUFA entry      no
  age               37
  married           yes
  ...

Customer n:
  house owner       no
  income (p.a.)     55 000 EUR
  repayment (p.m.)  1 200 EUR
  credit period     8 years
  SCHUFA entry      no
  age               ?
  married           yes
  ...

Learned rules:

IF   (income > 40 000 AND credit_period < 3) OR house_owner = yes
THEN credit_approval = yes

IF   SCHUFA_entry = yes OR (income < 20 000 AND repayment > 800)
THEN credit_approval = no
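The learned rules above can be transcribed as a small predicate over the customer attributes. The dict-based customer encoding and the rule ordering (the rejection rule is checked first) are illustrative choices not fixed by the slides:

```python
def credit_approval(customer):
    """Apply the two learned rules to a customer record (a plain dict).

    Returns True (approve), False (reject), or None if neither rule fires.
    The rejection rule is checked first -- one possible conflict-resolution
    choice, since the slides do not specify an order.
    """
    if customer["SCHUFA_entry"] == "yes" or (
        customer["income"] < 20_000 and customer["repayment"] > 800
    ):
        return False
    if (customer["income"] > 40_000 and customer["credit_period"] < 3) or (
        customer["house_owner"] == "yes"
    ):
        return True
    return None

customer_1 = {
    "house_owner": "yes", "income": 51_000, "repayment": 1_000,
    "credit_period": 7, "SCHUFA_entry": "no",
}
print(credit_approval(customer_1))  # Customer 1 owns a house -> True
```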
Examples of Learning Tasks
Image Analysis [Mitchell 1997]

[Figure: neural network for image-based steering — a 30x32 sensor input retina feeding 4 hidden units and 30 output units, which encode steering directions from sharp left through straight ahead to sharp right.]
Specification of Learning Problems
Definition 1 (Machine Learning [Mitchell 1997])
A computer program is said to learn
K from experience
K with respect to some class of tasks and
K a performance measure,
if its performance at the tasks improves with the experience.
Remarks:
K Example: chess
– task = playing chess
– performance measure = number of games won during a world championship
– experience = possibility to play against itself
K Example: optical character recognition
– task = isolation and classification of handwritten words in bitmaps
– performance measure = percentage of correctly classified words
– experience = collection of correctly classified, handwritten words
K A corpus with labeled examples forms a kind of “compiled experience”.
K Consider the different corpora that are exploited for different learning tasks in the webis
group. [www.uni-weimar.de/medien/webis/corpora]
Specification of Learning Problems
Learning Paradigms

1. Supervised learning
Learn a function from a set of input-output pairs. An important branch of supervised learning is automated classification. Example: optical character recognition.

2. Unsupervised learning
Identify structures in data. Important subareas of unsupervised learning include automated categorization (e.g. via cluster analysis), parameter optimization (e.g. via expectation maximization), and feature extraction (e.g. via factor analysis).

3. Reinforcement learning
Learn, adapt, or optimize a behavior strategy in order to maximize its own benefit by interpreting feedback that is provided by the environment. Example: development of behavior strategies for agents in a hostile environment.
Specification of Learning Problems
Example Chess: Kind of Experience [Mitchell 1997]

1. Feedback
– direct: for each board configuration the best move is given.
– indirect: only the final result is given after a series of moves.

2. Sequence and distribution of examples
– A teacher presents important example problems along with a solution.
– The learner chooses from the examples; e.g., pick a board for which the best move is unknown.
– The selection of examples to learn from should follow the (expected) distribution of future problems.

3. Relevance under a performance measure
– How far can we get with experience?
– Can we master situations in the wild? (Playing against itself will not be enough to become world class.)
Specification of Learning Problems
Example Chess: Ideal Target Function γ [Mitchell 1997]

(a) γ : Boards → Moves
(b) γ : Boards → R

A recursive definition of γ, following a kind of means-ends analysis. Let o ∈ Boards.

1. γ(o) = 100, if o represents a final board state that is won.
2. γ(o) = −100, if o represents a final board state that is lost.
3. γ(o) = 0, if o represents a final board state that is drawn.
4. γ(o) = γ(o*) otherwise, where o* denotes the best final state that can be reached if both sides play optimally.

Related: minimax strategy, α-β pruning. [Course on Search Algorithms, Stein 1998-2017]
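The recursive definition of γ is essentially a minimax evaluation: cases 1–3 assign payoffs to final board states, and case 4 propagates the value of the best reachable final state under optimal play by both sides. A sketch on a tiny hypothetical game tree (not chess; the tree and payoffs are invented for illustration):

```python
def gamma(board, tree, maximizing=True):
    """Value of `board` under optimal play: terminal boards carry their
    won/lost/drawn payoff (cases 1-3); interior boards take the value of
    the best reachable final state via minimax recursion (case 4)."""
    children = tree.get(board)
    if not children:                    # final board state
        return terminal_value[board]
    values = [gamma(c, tree, not maximizing) for c in children]
    return max(values) if maximizing else min(values)

# A made-up game tree: each key maps a board to its successor boards.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
terminal_value = {"a1": 100, "a2": -100, "b1": 0, "b2": 100}

print(gamma("root", tree))  # -> 0: optimal play from "root" ends in a draw
```

From "root" the maximizer avoids branch "a" (where the opponent can force the −100 loss) and settles for the drawn final state in branch "b".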
Specification of Learning Problems
Example Chess: From the Real World γ to a Model World y

γ(o) ⇝ y(α(o)) ≡ y(x)

y(x) = w0 + w1 · x1 + w2 · x2 + w3 · x3 + w4 · x4 + w5 · x5 + w6 · x6

where
x1 = number of black pawns on board o
x2 = number of white pawns on board o
x3 = number of black pieces on board o
x4 = number of white pieces on board o
x5 = number of black pieces threatened on board o
x6 = number of white pieces threatened on board o

Other approaches to formulate y:
K case base
K set of rules
K neural network
K polynomial function of board features
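For fixed weights, evaluating the linear model function y is a single weighted sum over the six board features. The weight values below are arbitrary placeholders, not learned ones:

```python
def y(x, w):
    """Evaluate y(x) = w0 + w1*x1 + ... + w6*x6 for a feature vector x."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# x = (x1..x6): black pawns, white pawns, black pieces, white pieces,
# black pieces threatened, white pieces threatened (an opening-like board).
x = (8, 8, 16, 16, 0, 2)
# Hypothetical weights, roughly "white's advantage is positive":
w = (0.0, -1.0, 1.0, -3.0, 3.0, 1.0, -1.0)
print(y(x, w))  # -> -2.0: two threatened white pieces tip the balance
```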
Remarks:
K The ideal target function γ interprets the real world, say, a real-world object o, to
“compute” γ(o). This “computation” can be operationalized by a human or by some other
(even arcane) mechanism of the real world.
K To simulate the interesting aspects of the real world by means of a computer, we define a
model world. This model world is restricted to particular—typically easily
measurable—features x that are derived from o, with x = α(o). In the model world, y(x) is the
abstracted and formalized counterpart of γ(o).
K y is called model function or model, α is called model formation function.
K The key difference between an ideal target function γ and a model function y lies in the
complexity and the representation of their respective domains. Examples:
– A chess grand master assesses a board o in its entirety, both intuitively and analytically;
a chess program is restricted to particular features x, x = α(o).
– A human mushroom picker assesses a mushroom o with all her skills (intuitively,
analytically, by tickled senses); a classification program is restricted to a few surface
features x, x = α(o).
Remarks (continued):
K For automated chess playing a real-valued assessment function is needed; such problems are called regression problems. If only a small number of values is to be considered (e.g. school grades), we are given a classification problem. A regression problem can be transformed into a classification problem via domain discretization.
K Regression problems and classification problems often differ with regard to assessing the achieved accuracy or goodness of fit. For regression problems the sum of the squared residuals may be a sensible criterion; for classification problems the number of misclassified examples may be more relevant.
K For classification problems, the ideal target function γ is also called the ideal classifier; similarly, the model function y is called a classifier.
K Decision problems are classification problems with two classes.
K The halting problem for Turing machines is an undecidable classification problem.
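The domain discretization mentioned in the first remark can be sketched for the school-grades example: a real-valued assessment is mapped onto a small ordered set of classes. The thresholds and grade labels below are invented for illustration:

```python
import bisect

def discretize(score, thresholds=(50, 65, 80, 90),
               grades=("E", "D", "C", "B", "A")):
    """Turn a real-valued score into a class label via threshold bins:
    score < 50 -> E, 50 <= score < 65 -> D, ..., score >= 90 -> A."""
    return grades[bisect.bisect_right(thresholds, score)]

print(discretize(72.5))  # -> "C": 72.5 falls into the 65-80 bin
```

After this step, a squared-residual criterion no longer applies; the number of misclassified grades becomes the natural goodness-of-fit measure.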
Specification of Learning Problems [model world]
How to Build a Classifier y

Characterization of the real world:
K O is a set of objects. (example: mails)
K C is a set of classes. (example: spam versus ham)
K γ : O → C is the ideal classifier for O. (typically a human expert)

Classification problem:
K Given some o ∈ O, determine its class γ(o) ∈ C. (example: is a mail spam?)

Acquisition of classification knowledge:
1. Collect real-world examples of the form (o, γ(o)), o ∈ O.
2. Abstract the objects towards feature vectors x ∈ X, with x = α(o).
3. Construct feature vector examples as (x, c(x)), with x = α(o), c(x) = γ(o).
Specification of Learning Problems [real world]
How to Build a Classifier y (continued)

Characterization of the model world:
K X is a set of feature vectors, called feature space. (example: word frequencies)
K C is a set of classes. (as before: spam versus ham)
K c : X → C is the ideal classifier for X. (c is unknown)
K D = {(x1, c(x1)), . . . , (xn, c(xn))} ⊆ X × C is a set of examples.

Machine learning problem:
K Approximate the ideal classifier c (implicitly given via D) by a function y:
– Decide on a model function y : X → C, x ↦ y(x). (y needs to be parameterized)
– Apply statistics, search, theory, and algorithms from the field of machine learning to maximize the goodness of fit between c and y.
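One way to make the fitting step concrete is a deliberately tiny model class: y is a single threshold on a one-dimensional feature space, and the goodness of fit is maximized by minimizing the number of misclassified examples in D. Both the data and the model class are made up for illustration:

```python
def fit_threshold(D):
    """Pick the threshold t for y_t(x) = (x >= t) that misclassifies
    the fewest examples in D = [(x, c(x)), ...] with classes 0/1."""
    candidates = sorted(x for x, _ in D)

    def errors(t):
        # Number of examples where the model disagrees with c (given via D).
        return sum((x >= t) != c for x, c in D)

    return min(candidates, key=errors)

D = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]  # c is implicitly given via D
t = fit_threshold(D)
print(t)  # -> 3.0: classifies all four examples correctly
```

Real model classes (rules, neural networks, linear functions) differ only in how y is parameterized and searched; the structure of the problem — approximate c on D — stays the same.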
Specification of Learning Problems
How to Build a Classifier y (continued)
[Figure: the classifier construction setting — α maps the objects O into the feature space X; γ maps O to the classes C; c maps X to C and is approximated by y (c ≈ y).]

Semantics:
γ      Ideal classifier (a human) for real-world objects.
α      Model formation function.
c      Unknown ideal classifier for vectors from the feature space.
y      Classifier to be learned.
c ≈ y  c is approximated by y (based on a set of examples).
Remarks:
K The feature space X comprises vectors x1, x2, . . ., which can be considered as abstractions
of real-world objects o1, o2, . . ., and which have been computed according to our view of the
real world.
K The model formation function α determines the level of abstraction between o and x,
x = α(o). I.e., α determines the representation fidelity, exactness, quality, or simplification.
K Though α models an object o ∈ O only imperfectly as x = α(o), c(x) must be considered an ideal classifier, since c(x) is defined as γ(o) and hence yields the true real-world class. I.e., c and γ have different domains, but they return the same images.
K The function c is only implicitly given, in the form of the example set D. If c could be stated in an explicit form, e.g. as a function equation, one would have no machine learning problem.
K c(x) is often termed “ground truth” (for x and the underlying classification problem). Observe that this term is justified by the fact that c(x) ≡ γ(o).
Specification of Learning Problems
LMS Algorithm for Fitting y [IGD Algorithm]

Algorithm: LMS (Least Mean Squares)
Input:    D    Training examples of the form (x, c(x)) with target function value c(x) for x.
          η    Learning rate, a small positive constant.
Internal: y(D) Set of y(x)-values computed from the elements x in D given some w.
Output:   w    Weight vector.

LMS(D, η)
 1. initialize_random_weights((w0, w1, ..., wp))
 2. REPEAT
 3.   (x, c(x)) = random_select(D)
 4.   y(x) = w0 + w1 · x1 + ... + wp · xp
 5.   error = c(x) − y(x)
 6.   FOR j = 0 TO p DO
 7.     ∆wj = η · error · xj    // ∀x ∈ D : x0 ≡ 1
 8.     wj = wj + ∆wj
 9.   ENDDO
10. UNTIL(convergence(D, y(D)))
11. return((w0, w1, ..., wp))
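The LMS pseudocode above translates almost line by line into Python. The weight initialization range, the convergence test (a squared-residual bound plus an iteration cap), and the toy data are implementation choices not fixed by the listing:

```python
import random

def lms(D, eta, p, max_iter=10_000, tol=1e-6):
    """Least Mean Squares: fit weights (w0..wp) to examples (x, c(x)),
    where each x is a tuple of p feature values (x0 = 1 is implicit)."""
    w = [random.uniform(-0.05, 0.05) for _ in range(p + 1)]   # step 1
    for _ in range(max_iter):                                  # REPEAT
        x, c = random.choice(D)                                # random_select(D)
        xs = (1.0,) + tuple(x)                                 # prepend x0 = 1
        yx = sum(wj * xj for wj, xj in zip(w, xs))             # y(x)
        error = c - yx
        for j in range(p + 1):                                 # w_j += eta*error*x_j
            w[j] += eta * error * xs[j]
        # convergence(): sum of squared residuals over all of D
        sse = sum((c_ - sum(wj * xj for wj, xj in zip(w, (1.0,) + tuple(x_)))) ** 2
                  for x_, c_ in D)
        if sse < tol:
            break
    return w

random.seed(0)  # reproducibility of the random weight init and sampling
# Toy data generated from y = 2 + 3*x1 (assumed target, for illustration):
D = [((x,), 2 + 3 * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w = lms(D, eta=0.05, p=1)
print(w)  # approximately [2, 3]
```

On this noiseless, realizable toy data the stochastic updates drive the squared residuals below the tolerance, so the returned weights closely recover the generating coefficients.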
Remarks:
K The LMS weight adaptation corresponds to the incremental gradient descent [IGD] algorithm, and approximates the global direction of steepest error descent as used by the batch gradient descent [BGD] algorithm, for which more rigorous statements on convergence are possible.
K Possible criteria for convergence():
– The global error, quantified as the sum of the squared residuals, ∑(x,c(x))∈D (c(x) − y(x))².
– An upper bound on the number of iterations.
Specification of Learning Problems
Design of Learning Systems [p.12, Mitchell 1997]

[Figure: architecture of a learning chess program — a problem (chess board) is passed to the chess program, whose compute_solution() returns a solution (moves, γ(o*)); move analysis via evaluate() turns accepted or improved moves into training examples (x1, c(x1)), ..., (xn, c(xn)); generalize(), here the LMS algorithm, produces a hypothesis (w0, ..., wp); generate_problem() supplies new problems to close the loop.]

Important design aspects:
1. kind of experience
2. fidelity of the model formation function α : O → X
3. class or structure of the model function y
4. learning method for fitting y
Specification of Learning Problems
Related Questions
Model functions y:
K What are important classes of model functions?
K What are methods to fit (= learn) model functions?
K What are measures to assess the goodness of fit?
K How does the number of examples affect the learning process?
K How does noise affect the learning process?
Specification of Learning Problems
Related Questions (continued)
Generic learnability:
K What are the theoretical limits of learnability?
K How can we use nature as a model for learning?
Knowledge acquisition:
K How can we integrate background knowledge into the learning process?
K How can we integrate human expertise into the learning process?

More Related Content

Similar to Machine Learning and Data Mining - Organization, Literature

Chap05 discrete probability distributions
Chap05 discrete probability distributionsChap05 discrete probability distributions
Chap05 discrete probability distributionsUni Azza Aunillah
 
Machine learning Lecture 1
Machine learning Lecture 1Machine learning Lecture 1
Machine learning Lecture 1Srinivasan R
 
week9_Machine_Learning.ppt
week9_Machine_Learning.pptweek9_Machine_Learning.ppt
week9_Machine_Learning.pptbutest
 
林守德/Practical Issues in Machine Learning
林守德/Practical Issues in Machine Learning林守德/Practical Issues in Machine Learning
林守德/Practical Issues in Machine Learning台灣資料科學年會
 
Machine learning for computer vision part 2
Machine learning for computer vision part 2Machine learning for computer vision part 2
Machine learning for computer vision part 2potaters
 
AP & GP (WEEK 1).pptx
AP & GP (WEEK 1).pptxAP & GP (WEEK 1).pptx
AP & GP (WEEK 1).pptxAnneAlyas
 
Minghui Conference Cross-Validation Talk
Minghui Conference Cross-Validation TalkMinghui Conference Cross-Validation Talk
Minghui Conference Cross-Validation TalkWei Wang
 
Dynamic Programming and Reinforcement Learning applied to Tetris Game
Dynamic Programming and Reinforcement Learning applied to Tetris GameDynamic Programming and Reinforcement Learning applied to Tetris Game
Dynamic Programming and Reinforcement Learning applied to Tetris GameSuelen Carvalho
 
Mba205 winter 2017
Mba205 winter 2017Mba205 winter 2017
Mba205 winter 2017smumbahelp
 
Decoding the science of Decision trees
Decoding the science of Decision treesDecoding the science of Decision trees
Decoding the science of Decision treesEdureka!
 
Can Machine Learning Models be Trusted? Explaining Decisions of ML Models
Can Machine Learning Models be Trusted? Explaining Decisions of ML ModelsCan Machine Learning Models be Trusted? Explaining Decisions of ML Models
Can Machine Learning Models be Trusted? Explaining Decisions of ML ModelsDarek Smyk
 
Problem 1 – First-Order Predicate Calculus (15 points)
Problem 1 – First-Order Predicate Calculus (15 points)Problem 1 – First-Order Predicate Calculus (15 points)
Problem 1 – First-Order Predicate Calculus (15 points)butest
 
Deep Learning: Introduction & Chapter 5 Machine Learning Basics
Deep Learning: Introduction & Chapter 5 Machine Learning BasicsDeep Learning: Introduction & Chapter 5 Machine Learning Basics
Deep Learning: Introduction & Chapter 5 Machine Learning BasicsJason Tsai
 
Marco-Calculus-AB-Lesson-Plan (Important book for calculus).pdf
Marco-Calculus-AB-Lesson-Plan (Important book for calculus).pdfMarco-Calculus-AB-Lesson-Plan (Important book for calculus).pdf
Marco-Calculus-AB-Lesson-Plan (Important book for calculus).pdfUniversity of the Punjab
 
Output Units and Cost Function in FNN
Output Units and Cost Function in FNNOutput Units and Cost Function in FNN
Output Units and Cost Function in FNNLin JiaMing
 
课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)butest
 

Similar to Machine Learning and Data Mining - Organization, Literature (20)

Chap05 discrete probability distributions
Chap05 discrete probability distributionsChap05 discrete probability distributions
Chap05 discrete probability distributions
 
Machine learning Lecture 1
Machine learning Lecture 1Machine learning Lecture 1
Machine learning Lecture 1
 
Barr cc slides
Barr cc slidesBarr cc slides
Barr cc slides
 
Introduction to Machine Learning
Introduction to Machine Learning Introduction to Machine Learning
Introduction to Machine Learning
 
week9_Machine_Learning.ppt
week9_Machine_Learning.pptweek9_Machine_Learning.ppt
week9_Machine_Learning.ppt
 
Decision tree
Decision treeDecision tree
Decision tree
 
林守德/Practical Issues in Machine Learning
林守德/Practical Issues in Machine Learning林守德/Practical Issues in Machine Learning
林守德/Practical Issues in Machine Learning
 
Machine learning for computer vision part 2
Machine learning for computer vision part 2Machine learning for computer vision part 2
Machine learning for computer vision part 2
 
AP & GP (WEEK 1).pptx
AP & GP (WEEK 1).pptxAP & GP (WEEK 1).pptx
AP & GP (WEEK 1).pptx
 
Minghui Conference Cross-Validation Talk
Minghui Conference Cross-Validation TalkMinghui Conference Cross-Validation Talk
Minghui Conference Cross-Validation Talk
 
Dynamic Programming and Reinforcement Learning applied to Tetris Game
Dynamic Programming and Reinforcement Learning applied to Tetris GameDynamic Programming and Reinforcement Learning applied to Tetris Game
Dynamic Programming and Reinforcement Learning applied to Tetris Game
 
Mba205 winter 2017
Mba205 winter 2017Mba205 winter 2017
Mba205 winter 2017
 
Decoding the science of Decision trees
Decoding the science of Decision treesDecoding the science of Decision trees
Decoding the science of Decision trees
 
Can Machine Learning Models be Trusted? Explaining Decisions of ML Models
Can Machine Learning Models be Trusted? Explaining Decisions of ML ModelsCan Machine Learning Models be Trusted? Explaining Decisions of ML Models
Can Machine Learning Models be Trusted? Explaining Decisions of ML Models
 
Problem 1 – First-Order Predicate Calculus (15 points)
Problem 1 – First-Order Predicate Calculus (15 points)Problem 1 – First-Order Predicate Calculus (15 points)
Problem 1 – First-Order Predicate Calculus (15 points)
 
Deep Learning: Introduction & Chapter 5 Machine Learning Basics
Deep Learning: Introduction & Chapter 5 Machine Learning BasicsDeep Learning: Introduction & Chapter 5 Machine Learning Basics
Deep Learning: Introduction & Chapter 5 Machine Learning Basics
 
Marco-Calculus-AB-Lesson-Plan (Important book for calculus).pdf
Marco-Calculus-AB-Lesson-Plan (Important book for calculus).pdfMarco-Calculus-AB-Lesson-Plan (Important book for calculus).pdf
Marco-Calculus-AB-Lesson-Plan (Important book for calculus).pdf
 
Output Units and Cost Function in FNN
Output Units and Cost Function in FNNOutput Units and Cost Function in FNN
Output Units and Cost Function in FNN
 
Lecture 1.pptx
Lecture 1.pptxLecture 1.pptx
Lecture 1.pptx
 
课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)课堂讲义(最后更新:2009-9-25)
课堂讲义(最后更新:2009-9-25)
 

Recently uploaded

Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...M56BOOKSTORE PRODUCT/SERVICE
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docxPoojaSen20
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 

Recently uploaded (20)

Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
Separation of Lanthanides/ Lanthanides and Actinides
– experience = possibility to play against itself
K Example: optical character recognition
– task = isolation and classification of handwritten words in bitmaps
– performance measure = percentage of correctly classified words
– experience = collection of correctly classified, handwritten words
K A corpus with labeled examples forms a kind of “compiled experience”.
K Consider the different corpora that are exploited for different learning tasks in the webis group.
[www.uni-weimar.de/medien/webis/corpora]
ML:I-16 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
Learning Paradigms
1. Supervised learning
Learn a function from a set of input-output-pairs. An important branch of supervised
learning is automated classification. Example: optical character recognition.
2. Unsupervised learning
Identify structures in data. Important subareas of unsupervised learning include automated
categorization (e.g. via cluster analysis), parameter optimization (e.g. via expectation
maximization), and feature extraction (e.g. via factor analysis).
3. Reinforcement learning
Learn, adapt, or optimize a behavior strategy in order to maximize the own benefit by
interpreting feedback that is provided by the environment. Example: development of
behavior strategies for agents in a hostile environment.
ML:I-17 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
Example Chess: Kind of Experience [Mitchell 1997]
1. Feedback
– direct: for each board configuration the best move is given.
– indirect: only the final result is given after a series of moves.
2. Sequence and distribution of examples
– A teacher presents important example problems along with a solution.
– The learner chooses from the examples; e.g., pick a board for which the best move is unknown.
– The selection of examples to learn from should follow the (expected) distribution of future problems.
3. Relevance under a performance measure
– How far can we get with experience?
– Can we master situations in the wild? (Playing against itself will not be enough to become world class.)
ML:I-21 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
Example Chess: Ideal Target Function γ [Mitchell 1997]
(a) γ : Boards → Moves
(b) γ : Boards → R
A recursive definition of γ, following a kind of means-ends analysis. Let o ∈ Boards:
1. γ(o) = 100, if o represents a final board state that is won.
2. γ(o) = −100, if o represents a final board state that is lost.
3. γ(o) = 0, if o represents a final board state that is drawn.
4. γ(o) = γ(o∗) otherwise, where o∗ denotes the best final state that can be reached
if both sides play optimally.
Related: minimax strategy, α-β pruning. [Course on Search Algorithms, Stein 1998-2017]
ML:I-23 Introduction © STEIN/LETTMANN 2005-2017
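The recursive definition of γ is plain minimax. A minimal sketch, with chess replaced by a hypothetical toy game (an invention of this sketch, not from the slides): starting at 0, the players alternately add 1 or 2, and whoever reaches 4 wins. Final states are scored +100 (won) or −100 (lost); otherwise the value of the best reachable final state under optimal play is propagated, as in case 4 of the definition.

```python
def gamma(total, maximizing=True):
    """Minimax value of the toy position from the maximizer's view."""
    if total >= 4:
        # the player who just moved reached 4 and won; that is a loss
        # for the side whose turn it now is
        return -100 if maximizing else 100
    # case 4: value of the best reachable final state under optimal play
    values = (gamma(total + step, not maximizing) for step in (1, 2))
    return max(values) if maximizing else min(values)

print(gamma(0))  # → 100: the first player can force a win by moving to 1
```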
Specification of Learning Problems
Example Chess: From the Real World γ to a Model World y
γ(o) ↝ y(α(o)) ≡ y(x)
y(x) = w0 + w1 · x1 + w2 · x2 + w3 · x3 + w4 · x4 + w5 · x5 + w6 · x6
where
x1 = number of black pawns on board o
x2 = number of white pawns on board o
x3 = number of black pieces on board o
x4 = number of white pieces on board o
x5 = number of black pieces threatened on board o
x6 = number of white pieces threatened on board o
Other approaches to formulate y:
K case base
K set of rules
K neural network
K polynomial function of board features
ML:I-26 Introduction © STEIN/LETTMANN 2005-2017
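The linear model function above is a weighted sum over the six board features. A small sketch, with purely hypothetical weights (the slides do not fix any values for w):

```python
def y(x, w):
    """Linear evaluation y(x) = w0 + w1*x1 + ... + w6*x6."""
    assert len(w) == len(x) + 1          # w carries the extra bias w0
    return w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))

x = [8, 8, 7, 7, 0, 1]        # x1..x6 as defined above
w = [0, -1, 1, -3, 3, 2, -2]  # hypothetical weights
print(y(x, w))                # → -2
```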
Remarks:
K The ideal target function γ interprets the real world, say, a real-world object o, to
“compute” γ(o). This “computation” can be operationalized by a human or by some other
(even arcane) mechanism of the real world.
K To simulate the interesting aspects of the real world by means of a computer, we define a
model world. This model world is restricted to particular—typically easily
measurable—features x that are derived from o, with x = α(o). In the model world, y(x) is
the abstracted and formalized counterpart of γ(o).
K y is called model function or model, α is called model formation function.
K The key difference between an ideal target function γ and a model function y lies in the
complexity and the representation of their respective domains. Examples:
– A chess grand master assesses a board o in its entirety, both intuitively and analytically;
a chess program is restricted to particular features x, x = α(o).
– A human mushroom picker assesses a mushroom o with all her skills (intuitively,
analytically, by tickled senses); a classification program is restricted to a few surface
features x, x = α(o).
ML:I-27 Introduction © STEIN/LETTMANN 2005-2017
Remarks (continued):
K For automated chess playing a real-valued assessment function is needed; such kinds of
problems form regression problems. If only a small number of values are to be considered
(e.g. school grades), we are given a classification problem. A regression problem can be
transformed into a classification problem via domain discretization.
K Regression problems and classification problems often differ with regard to assessing the
achieved accuracy or goodness of fit. For regression problems the sum of the squared
residuals may be a sensible criterion; for classification problems the number of
misclassified examples may be more relevant.
K For classification problems, the ideal target function γ is also called ideal classifier;
similarly, the model function y is called classifier.
K Decision problems are classification problems with two classes.
K The halting problem for Turing machines is an undecidable classification problem.
ML:I-28 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems [model world]
How to Build a Classifier y
Characterization of the real world:
K O is a set of objects. (example: mails)
K C is a set of classes. (example: spam versus ham)
K γ : O → C is the ideal classifier for O. (typically a human expert)
Classification problem:
K Given some o ∈ O, determine its class γ(o) ∈ C. (example: is a mail spam?)
Acquisition of classification knowledge:
1. Collect real-world examples of the form (o, γ(o)), o ∈ O.
2. Abstract the objects towards feature vectors x ∈ X, with x = α(o).
3. Construct feature vector examples as (x, c(x)), with x = α(o), c(x) = γ(o).
ML:I-30 Introduction © STEIN/LETTMANN 2005-2017
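The three acquisition steps can be sketched for the spam example. The model formation function alpha and the two toy mails below are illustrative inventions (the slides fix neither); the labels play the role of γ(o), supplied by a human expert.

```python
def alpha(mail_text):
    """Model formation function: map a mail o to a crude feature
    vector x of word counts (a stand-in for word frequencies)."""
    words = mail_text.lower().split()
    return [sum(w == "free" for w in words),  # x1: occurrences of "free"
            sum(w == "win" for w in words)]   # x2: occurrences of "win"

# step 1: real-world examples (o, gamma(o))
mails = [("Win free money now", "spam"),
         ("Meeting notes attached", "ham")]

# steps 2 and 3: feature vector examples (x, c(x)) with x = alpha(o)
D = [(alpha(o), label) for o, label in mails]
print(D)  # → [([1, 1], 'spam'), ([0, 0], 'ham')]
```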
Specification of Learning Problems [real world]
How to Build a Classifier y (continued)
Characterization of the model world:
K X is a set of feature vectors, called feature space. (example: word frequencies)
K C is a set of classes. (as before: spam versus ham)
K c : X → C is the ideal classifier for X. (c is unknown)
K D = {(x1, c(x1)), . . . , (xn, c(xn))} ⊆ X × C is a set of examples.
Machine learning problem:
K Approximate the ideal classifier c (implicitly given via D) by a function y:
– Decide on a model function y : X → C, x → y(x). (y needs to be parameterized)
– Apply statistics, search, theory, and algorithms from the field of machine learning to
maximize the goodness of fit between c and y.
ML:I-32 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
How to Build a Classifier y (continued)
[Diagram: the objects O are mapped by α to the feature space X; γ maps O to the
classes C; c maps X to C and is approximated by y.]
Semantics:
γ Ideal classifier (a human) for real-world objects.
α Model formation function.
c Unknown ideal classifier for vectors from the feature space.
y Classifier to be learned.
c ≈ y c is approximated by y (based on a set of examples).
ML:I-33 Introduction © STEIN/LETTMANN 2005-2017
Remarks:
K The feature space X comprises vectors x1, x2, . . ., which can be considered as
abstractions of real-world objects o1, o2, . . ., and which have been computed according to
our view of the real world.
K The model formation function α determines the level of abstraction between o and x,
x = α(o). I.e., α determines the representation fidelity, exactness, quality, or simplification.
K Though α models an object o ∈ O only imperfectly as x = α(o), c(x) must be considered
as ideal classifier, since c(x) is defined as γ(o) and hence yields the true real-world
classes. I.e., c and γ each have a different domain, but they return the same images.
K The function c is only implicitly given, in the form of the example set D. If c could be stated
in an explicit form, e.g. as a function equation, one would have no machine learning problem.
K c(x) is often termed “ground truth” (for x and the underlying classification problem).
Observe that this term is justified by the fact that c(x) ≡ γ(o).
ML:I-34 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
LMS Algorithm for Fitting y [IGD Algorithm]
Algorithm: LMS (Least Mean Squares)
Input: D Training examples of the form (x, c(x)) with target function value c(x) for x.
η Learning rate, a small positive constant.
Internal: y(D) Set of y(x)-values computed from the elements x in D given some w.
Output: w Weight vector.
LMS(D, η)
1. initialize_random_weights((w0, w1, . . . , wp))
2. REPEAT
3. (x, c(x)) = random_select(D)
4. y(x) = w0 + w1 · x1 + . . . + wp · xp
5. error = c(x) − y(x)
6. FOR j = 0 TO p DO
7. ∆wj = η · error · xj // ∀x ∈ D : x0 ≡ 1
8. wj = wj + ∆wj
9. ENDDO
10. UNTIL(convergence(D, y(D)))
11. return((w0, w1, . . . , wp))
ML:I-35 Introduction © STEIN/LETTMANN 2005-2017
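The pseudocode above can be transcribed almost line by line into Python. A sketch under stated assumptions: the convergence test is replaced by a fixed iteration budget (one of the criteria mentioned in the remarks), and the learning rate, budget, and weight initialization range are choices of this sketch, not values from the slides.

```python
import random

def lms(D, eta=0.01, iterations=10_000, seed=0):
    """D: list of (x, c) pairs with feature list x and target value c.
    Returns the weight vector [w0, ..., wp], w0 being the bias."""
    rng = random.Random(seed)
    p = len(D[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(p + 1)]  # step 1
    for _ in range(iterations):                         # REPEAT ... UNTIL
        x, c = rng.choice(D)                  # step 3: random_select
        xs = [1.0] + list(x)                  # x0 ≡ 1 carries the bias w0
        y = sum(wj * xj for wj, xj in zip(w, xs))   # step 4
        error = c - y                               # step 5
        for j in range(p + 1):                      # steps 6-9
            w[j] += eta * error * xs[j]             # ∆wj = η · error · xj
    return w

# fit the noise-free target c(x) = 1 + 2x from three examples
w = lms([([0.0], 1.0), ([1.0], 3.0), ([2.0], 5.0)])  # w ≈ [1.0, 2.0]
```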
Remarks:
K The LMS weight adaptation corresponds to the incremental gradient descent [IGD]
algorithm, and approximates the global direction of steepest error descent as used by the
batch gradient descent [BGD] algorithm, for which more rigorous statements on
convergence are possible.
K Possible criteria for convergence():
– The global error, quantified as the sum of the squared residuals,
∑(x,c(x))∈D (c(x) − y(x))².
– An upper bound on the number of iterations.
ML:I-36 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
Design of Learning Systems [p.12, Mitchell 1997]
[Diagram: the chess program's compute_solution() maps a problem (chess board) to a
solution (moves, γ(o∗)). A move analysis component, evaluate(), turns accepted and
to-be-improved moves into training examples (x1, c(x1)), . . ., (xn, c(xn)); the LMS
algorithm, generalize(), produces a hypothesis (w0, . . ., wp); generate_problem()
supplies new problems.]
Important design aspects:
1. kind of experience
2. fidelity of the model formation function α : O → X
3. class or structure of the model function y
4. learning method for fitting y
ML:I-40 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
Related Questions
Model functions y:
K What are important classes of model functions?
K What are methods to fit (= learn) model functions?
K What are measures to assess the goodness of fit?
K How does the example number affect the learning process?
K How does noise affect the learning process?
ML:I-41 Introduction © STEIN/LETTMANN 2005-2017
Specification of Learning Problems
Related Questions (continued)
Generic learnability:
K What are the theoretical limits of learnability?
K How can we use nature as a model for learning?
Knowledge acquisition:
K How can we integrate background knowledge into the learning process?
K How can we integrate human expertise into the learning process?
ML:I-42 Introduction © STEIN/LETTMANN 2005-2017