The presentation by Holger Hoos, given during the parallel session 'Methoden en technieken voor data-analyse' ('Methods and techniques for data analysis') at the conference 'Data gedreven Beleidsontwikkeling' ('Data-driven Policy Development') in The Hague on 28 November 2017.
Machine Learning –
Opportunities and Limitations
Holger H. Hoos
LIACS
Universiteit Leiden
The Netherlands
LCDS Conference
2017/11/28
The age of computation
Clear, precise instructions – flawlessly executed
algorithms = recipes for data processing
predictable results, behaviour
performance guarantees
trusted, effective solutions to complex problems
The age of advanced computation – AI
vast amounts of cheap computation
automatically designed algorithms
effective but complex, heuristic, black-box methods
Key idea:
from explicit programming
to learning / automatic adaptation to data
Success stories:
game playing (e.g., Go, poker)
medical diagnosis (lung disease)
transportation (autonomous driving)
energy (demand prediction and trading)
The Machine Learning Revolution
machine learning (ML) = automatic construction
of software that works well on given data
ideas reach back to the 1950s (Alan Turing)
based on statistics, mathematical optimisation
and principled experimentation (heuristic mechanisms)
key ingredient of artificial intelligence (AI)
but: AI is more than ML
Supervised vs unsupervised ML
unsupervised: discover patterns in data
(data mining, e.g., clustering)
supervised: make predictions based on
known training examples (statistical modelling)
Key assumption: training data is representative
of the application scenario
other types of ML exist
(e.g., semi-supervised learning, reinforcement learning)
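The unsupervised case above can be sketched in plain Python: a tiny two-cluster k-means that discovers group structure with no labels given. The data points and the two-cluster restriction are invented for illustration, and the sketch assumes the points actually fall into two separated groups.

```python
# Unsupervised learning sketch: group 1-D data into two clusters
# (a minimal k-means). No labels are given; the grouping is
# discovered from the data alone. Data invented for illustration.

def two_means(xs, iters=10):
    c1, c2 = min(xs), max(xs)              # initial cluster centres
    for _ in range(iters):
        # assign each point to its nearer centre
        a = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        b = [x for x in xs if abs(x - c1) > abs(x - c2)]
        # move each centre to the mean of its cluster
        c1, c2 = sum(a) / len(a), sum(b) / len(b)
    return sorted(a), sorted(b)

print(two_means([1.0, 1.2, 0.8, 9.9, 10.1, 10.0]))
```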
Regression
Example: predict plant growth for a given set
of environmental conditions
Given: set of training examples
= feature values + numerical outputs
Objective: predict output for new feature values
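The regression task above can be sketched in a few lines: fit a line by ordinary least squares to training examples (feature value, numerical output), then predict the output for a new feature value. The numbers are invented purely for illustration.

```python
# Regression sketch: fit y = a*x + b by ordinary least squares to
# training examples, then predict the output for a new feature value.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var                      # slope
    b = mean_y - a * mean_x            # intercept
    return a, b

# training examples: e.g. temperature (feature) -> plant growth (output)
xs = [10, 15, 20, 25, 30]
ys = [1.1, 2.0, 2.9, 4.1, 5.0]

a, b = fit_line(xs, ys)
print(a * 22 + b)   # predicted output for the new feature value 22
```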
Classification
Example: predict whether someone takes a loan,
based on demographic + personal financial data
Given: set of training examples
= feature values + classes
Objective: predict class for new feature values
Important special case: binary classification
= 2 classes (e.g., yes/no)
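A minimal sketch of binary classification: a 1-nearest-neighbour classifier that assigns a new point the class of its closest training example. The features (age, income) and labels are made up for illustration; real loan prediction would of course use far more data.

```python
# Binary classification sketch: 1-nearest-neighbour. Training examples
# pair feature values with a class; a new point gets the class of the
# closest training example. Data invented for illustration.

def nearest_neighbour(train, features):
    def dist(a, b):                    # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist(ex[0], features))
    return label

# (feature values, class): e.g. (age, income) -> takes a loan?
train = [
    ((25, 20), "no"),
    ((40, 60), "yes"),
    ((35, 55), "yes"),
    ((22, 15), "no"),
]

print(nearest_neighbour(train, (38, 58)))   # -> "yes"
```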
Example: Binary classification with decision trees
[Source: www.simafore.com]
Random forests (state-of-the-art method)
[Source: blog.citizennet.com]
Key distinction:
Classification procedure (classifier; ‘model’):
algorithm used for solving a classification problem
e.g., decision tree
Input: feature values
Output: class (yes/no)
Learning procedure:
algorithm used for constructing a classifier
e.g., C4.5 (well-known decision tree learning algorithm)
Input: set of training data
Output: classification procedure (decision tree)
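The distinction can be made concrete in a few lines of plain Python: the learning procedure takes training data and returns a classifier; the classifier takes feature values and returns a class. The learner here is a one-node decision "stump" (a far simpler stand-in for C4.5), with invented data.

```python
# Learning procedure vs. classifier, in miniature. The learner picks
# the threshold on one numeric feature that misclassifies the fewest
# training examples, and returns the resulting classifier.

def learn_stump(examples):                 # learning procedure
    best = None
    for t, _ in examples:                  # try each value as threshold
        errors = sum((x > t) != (cls == "yes") for x, cls in examples)
        if best is None or errors < best[1]:
            best = (t, errors)
    threshold = best[0]
    # the classifier: feature value in, class out
    return lambda x: "yes" if x > threshold else "no"

examples = [(15, "no"), (25, "no"), (40, "yes"), (55, "yes")]
classifier = learn_stump(examples)         # training data -> classifier
print(classifier(50))                      # feature value -> class
```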
Evaluation and Bias
How to evaluate supervised ML algorithms?
Key idea: Assess quality of predictions obtained
(e.g., from a trained binary classifier)
Prediction quality of binary classifiers:
accuracy: expected rate of correct classifications
false positive rate: expected rate of incorrect ‘yes’ predictions
false negative rate: expected rate of incorrect ‘no’ predictions
trade-off (weighted average; ROC curve)
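The three metrics can be computed directly from a classifier's predictions against the true classes; the labels below are invented for illustration.

```python
# Evaluation sketch: accuracy, false positive rate and false negative
# rate of a binary classifier, from true classes vs. predictions.

def rates(true, pred):
    pairs = list(zip(true, pred))
    tp = sum(t == "yes" and p == "yes" for t, p in pairs)
    tn = sum(t == "no" and p == "no" for t, p in pairs)
    fp = sum(t == "no" and p == "yes" for t, p in pairs)
    fn = sum(t == "yes" and p == "no" for t, p in pairs)
    accuracy = (tp + tn) / len(pairs)
    fpr = fp / (fp + tn)        # incorrect 'yes' among true 'no' cases
    fnr = fn / (fn + tp)        # incorrect 'no' among true 'yes' cases
    return accuracy, fpr, fnr

true = ["yes", "yes", "no", "no", "no"]
pred = ["yes", "no", "yes", "no", "no"]
print(rates(true, pred))
```

Note that on an unbalanced set (many more 'no' than 'yes' cases) accuracy alone can look good while the false negative rate is terrible, which is exactly why multiple metrics are needed.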
Caution: Typically, no single ‘correct’ evaluation metric
evaluation metrics can introduce unfairness / bias
especially when training sets are unbalanced
(many more ‘no’ than ‘yes’ cases; prevalence / lack
of certain input feature combinations)
use great care when constructing training sets
use multiple evaluation metrics
perform detailed evaluations (beyond simple metrics)
The problem of overfitting
good performance on training data may not generalise
to previously unseen data: overfitting (well-known problem)
detect overfitting using validation techniques
hold-out validation: evaluate on a set of test cases
strictly separate from the training set
cross-validation: like hold-out, but with many different
training/test splits
prevent overfitting using regularisation techniques
(= modification / specific setting of the ML method used)
Caution: Overfitting can introduce bias!
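The cross-validation idea above can be sketched in plain Python: split the data into k folds, train on k-1 of them, evaluate on the held-out fold, and average the test errors. The "learner" here is deliberately trivial (it predicts the mean of the training outputs), and the data are invented for illustration.

```python
# Cross-validation sketch: average the held-out test error over k
# different training/test splits of the same data set.

def cross_validate(data, k):
    folds = [data[i::k] for i in range(k)]      # k disjoint test sets
    errors = []
    for i in range(k):
        test = folds[i]
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        mean = sum(y for _, y in train) / len(train)   # "trained" model
        errors.append(sum((y - mean) ** 2 for _, y in test) / len(test))
    return sum(errors) / k

data = [(x, 2.0 * x) for x in range(10)]
print(cross_validate(data, 5))   # average held-out squared error
```

A model that scored well on its own training points but badly under this procedure would be overfitting.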
Problematic features
certain (input) features can help improve performance,
but are inappropriate to use
examples: race, gender, sexual orientation
using problematic features in machine learning can cause
(unintentional) discrimination
Easy solution: do not use problematic features
Wrong! Combinations of other, harmless features can yield
equivalent information
especially problematic for deep learning and other
powerful black-box methods
Better solution: careful, detailed evaluation
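A tiny, entirely fictitious example of why dropping the problematic feature is not enough: neither "harmless" feature alone predicts the removed sensitive attribute, but their combination determines it exactly, so a powerful learner can still exploit it.

```python
# Proxy-feature sketch: the sensitive attribute has been removed from
# the inputs, yet a combination of harmless features still reveals it.
# All records are fictitious.

harmless = [("1011", "A"), ("1011", "B"), ("9722", "A"), ("9722", "B")]
sensitive = ["x", "y", "y", "x"]       # the attribute that was dropped

def predicts(keys):
    # can these keys recover the sensitive labels unambiguously?
    table = {}
    for k, s in zip(keys, sensitive):
        table.setdefault(k, set()).add(s)
    return all(len(v) == 1 for v in table.values())

print(predicts([p for p, _ in harmless]))   # postcode alone: fails
print(predicts([j for _, j in harmless]))   # job alone: fails
print(predicts(harmless))                   # the combination: leaks
```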
Explainability & Transparency
Challenge: How can we trust an ML system?
carefully evaluate performance;
identify strengths and weaknesses
(requires detailed evaluation = computational experiments)
understand how it works
understand its output
Key distinction:
understanding a classifier (e.g., decision tree)
vs understanding the training procedure that produced it
Note:
to understand a given classifier (and its output),
we do not need to understand how it was built
understanding what happens at every step
does not mean understanding the behaviour of an algorithm
some classifiers are easier to understand than others
Deep learning
uses neural networks with many layers
(AlphaGo Zero: 84 layers)
idea + research dates back to the 1960s/1970s
successful real-world applications since the 1980s
very popular since ≈ 2012
impressive results in an increasing number of application areas
requires large amounts of data, specialised hardware,
considerable human expertise + experimentation
Caution! Deep learning ≠ machine learning ≠ AI
Deep neural networks are black-box methods
easy to understand the function of each ‘neuron’ in the network;
very hard / impossible to understand the behaviour of the network
lack of transparency / explainability
Possible remedies:
principled, detailed evaluation of behaviour
use alternative methods with similar performance
(e.g., random forests)
trade off performance against explainability
frugal learning (new research direction)
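The per-neuron computation really is simple, which makes the contrast with the network's overall opacity concrete. A minimal sketch of a (very shallow) network, with invented weights: each layer multiplies by weights, adds a bias, and applies a nonlinearity; deep learning stacks many such layers.

```python
# What each 'neuron' computes: weighted sum + bias, passed through a
# nonlinearity (tanh here). Every individual step is transparent;
# the behaviour of a large stack of such layers is not.
import math

def layer(inputs, weights, biases):
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                       # input features
h = layer(x, [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])   # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0])                    # output layer
print(y)
```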
Automated Machine Learning
Machine learning is powerful, but successful application
is far from trivial.
Fundamental problem:
which of the many algorithms (models) applicable to a given
machine learning problem should be used, and with which
hyper-parameter settings?
Example: WEKA contains 39 classification algorithms,
3 × 8 feature selection methods
Solution:
automatically select ML methods and hyper-parameter settings
Automated machine learning (AutoML)
AutoML ...
achieves substantial performance improvements
over solutions hand-crafted by human experts
enables frugal learning (explainable/transparent ML)
helps non-experts effectively apply ML techniques
intense international research focus
(academia + industry)
ongoing research focus at LIACS (Leiden Institute of
Advanced Computer Science);
see ada.liacs.nl/projects, Auto-WEKA.
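The core AutoML loop can be sketched in miniature: enumerate candidate method/hyper-parameter combinations, score each on held-out validation data, and keep the best. The candidates and data here are toy stand-ins; real systems such as Auto-WEKA search vastly larger spaces with far smarter strategies.

```python
# AutoML-style sketch: automatic selection among pre-configured
# candidate classifiers by validation accuracy. Toy data only.

def predict_threshold(t, x):
    return "yes" if x > t else "no"

def predict_majority(label, x):
    return label

def accuracy(predict, data):
    return sum(predict(x) == cls for x, cls in data) / len(data)

valid = [(20, "no"), (45, "yes")]      # held-out validation examples

# the "search space": (description, configured classifier) pairs
candidates = [("threshold t=%d" % t, lambda x, t=t: predict_threshold(t, x))
              for t in (10, 30, 50)]
candidates += [("always %s" % l, lambda x, l=l: predict_majority(l, x))
               for l in ("yes", "no")]

best = max(candidates, key=lambda c: accuracy(c[1], valid))
print(best[0])   # the automatically selected configuration
```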
Take-Home Message
Machine learning can (help to) solve many problems ...
but is no panacea.
Methods and results strongly depend on the quantity
+ quality of input data.
Challenges:
risk of overfitting training data, hidden bias
lack of transparency, explainability
Human expertise: crucial for successful, responsible use
Current + future research (far from solved)
AI should augment, not replace, human expertise!
(Likewise for machine learning.)