Automating Machine Learning - Is it feasible? (Manuel Martín)
Facing a machine learning problem for the first time can be overwhelming. Hundreds of methods exist for tackling problems such as classification, regression or clustering. Selecting the appropriate method is challenging, especially if little prior knowledge is available. In addition, most models require tuning a number of hyperparameters to perform well. Preparing the data for the learning algorithm is also a labour-intensive process that includes cleaning outliers and imperfections, feature selection, data transformations such as PCA, and more. A workflow connecting preprocessing methods and predictive models is called a multicomponent predictive system (MCPS). This talk introduces the problem of automating the composition and optimisation of MCPSs, and how they can be adapted in changing environments.
A decision tree classifies data whose attributes are categorical or ordinal. It is a supervised machine learning method, structured as a tree in which each node's outcome depends on the result at its parent node.
LearnBay provides industrial training in Data Science which is co-developed with IBM.
The document discusses various machine learning classification algorithms including logistic regression, k-nearest neighbors, naive Bayes, support vector machines, decision trees, and random forests. It provides intuitive explanations of how each algorithm works through examples and mathematical formulas. Random forests are ensemble methods that generate multiple decision trees on different samples of the data and use voting to improve predictive performance over a single tree.
Data Science Training in Bangalore | Learnbay.in | Decision Tree | Machine Le... (Learnbay Datascience)
Decision Tree by Learnbay | Data Science Training in Bangalore | Machine Learning Courses. Learnbay offers classroom data science training courses in Bangalore with project and job assistance for working professionals. For more details visit https://www.learnbay.in/shop/courses/data-science-training-courses-bangalore/
The document provides an overview of machine learning and decision tree learning. It discusses how machine learning can be applied to problems that are too difficult to program by hand, such as autonomous driving. It then describes decision tree learning, including how decision trees work, how the ID3 algorithm builds decision trees in a top-down manner by selecting the attribute that best splits the data at each step, and how decision trees can be converted to rules.
This presentation covers Decision Tree as a supervised machine learning technique, talking about Information Gain method and Gini Index method with their related Algorithms.
Least Square Plane and Leastsquare Quadric Surface Approximation by Using Mod... (IOSRJM)
Nowadays, surface fitting is applied in all engineering and medical fields. Kamron Saniee (2007) found a simple expression for multivariate Lagrange interpolation. We derive a least-squares plane and a least-squares quadric surface approximation from N+1 given tabular points when the function is unique. We used the least-squares technique, and this method can also be applied to surface fitting.
This document discusses machine learning decision trees. It outlines the ID3 algorithm for inducing decision trees from data in a top-down manner using information gain. The algorithm selects the attribute with highest information gain at each step to split the data. Overfitting is addressed through reduced error pruning which prunes nodes to minimize error on a validation set. Continuous and multi-valued attributes are handled through discretization. The document also discusses converting decision trees to rules and handling missing attribute values.
The document outlines the process of building decision trees for machine learning. It discusses key concepts like decision tree structure with root, internal and leaf nodes. It also explains entropy and information gain, which are measures of impurity/purity used to select the best attributes to split nodes on. The example of building a decision tree to predict playing tennis is used throughout to demonstrate these concepts in a step-by-step manner.
This document describes how a decision tree algorithm called CART (Classification And Regression Tree) works using the Gini impurity index. It provides an example of building a decision tree to predict whether to play tennis based on 14 data points with 4 attributes: outlook, temperature, humidity, and wind. The document calculates the Gini index for each attribute and uses the attribute with the lowest Gini index at each step to split the data and build the decision tree recursively.
This document provides an introduction to machine learning and decision trees. It defines key concepts like deep learning, artificial intelligence, and machine learning. It then discusses different machine learning algorithms like supervised learning, unsupervised learning, and decision trees. The document explains how decision trees are built by choosing features to split on at each node based on metrics like information gain and entropy. It provides an example of calculating entropy and information gain to select the best feature to split the root node on.
The document discusses discrete-time control systems. It introduces the concept of discrete functions, the z-transform and stability criteria. It also presents how to transform continuous systems to discrete systems using numerical derivatives and the discrete PID controller. The key steps are approximating derivatives as differences, representing systems using the z-transform, and deriving the discrete PID controller transfer function. Stability depends on the z-transform roots being inside the unit circle.
The document discusses the least squares method for fitting curves and lines to datasets. It begins by introducing least squares methods and their applications. It then covers the history of least squares, which was first published by Legendre in 1805 and also developed by Gauss. The document goes on to explain how least squares finds the "best fit" line or curve by minimizing the sum of the squared residuals between the data points and the fitting curve. It provides the equations for computing the coefficients of a linear regression line using the least squares approach. Finally, it generalizes the method to fitting polynomials of various degrees to data.
ID3 and C4.5 are algorithms for generating decision trees, developed by Ross Quinlan and typically used in the machine learning and natural language processing domains. This is an overview of these algorithms with illustrated examples.
A deep introduction to supervised and unsupervised Machine Learning with examples in R.
Techniques covered for Regression:
- Linear Regression
- Polynomial Regression
Techniques covered for Classification:
- Simple and Multiple Logistic Regression
- Linear and Quadratic Discriminant Analysis
- K-Nearest Neighbors
Clustering:
- K-Means clustering
- Hierarchical clustering
Jordan Higher (σ, τ)-Centralizer on Prime Ring (IOSR Journals)
Let R be a ring and σ, τ be endomorphisms of R. In this paper we present and study the concepts of higher (σ, τ)-centralizer, Jordan higher (σ, τ)-centralizer and Jordan triple higher (σ, τ)-centralizer, and their generalizations on the ring. The main results prove that every Jordan higher (σ, τ)-centralizer of a prime ring R is a higher (σ, τ)-centralizer of R, and that if R is a 2-torsion-free ring and σ and τ are commuting endomorphisms, then every Jordan higher (σ, τ)-centralizer is a Jordan triple higher (σ, τ)-centralizer.
2.2 Special types of Correlation
2.3 Point Biserial Correlation rPB
2.3.1 Calculation of rPB
2.3.2 Significance Testing of rPB
2.4 Phi Coefficient (φ )
2.4.1 Significance Testing of phi (φ )
2.5 Biserial Correlation
2.6 Tetrachoric Correlation
2.7 Rank Order Correlations
2.7.1 Rank-order Data
2.7.2 Assumptions Underlying Pearson’s Correlation not Satisfied
2.8 Spearman’s Rank Order Correlation or Spearman’s rho (rs)
2.8.1 Null and Alternate Hypothesis
2.8.2 Numerical Example: for Untied and Tied Ranks
2.8.3 Spearman’s Rho with Tied Ranks
2.8.4 Steps for rS with Tied Ranks
2.8.5 Significance Testing of Spearman’s rho
2.9 Kendall’s Tau (τ)
2.9.1 Null and Alternative Hypothesis
2.9.2 Logic of Kendall’s Tau and Computation
2.9.3 Computational Alternative for Kendall’s Tau
2.9.4 Significance Testing for Kendall’s Tau
This document describes decision tree learning and provides an example to illustrate the process. It begins by introducing decision trees and their use for classification. It then provides details on key concepts like entropy, information gain, and the ID3 algorithm. The example shows calculating entropy and information gain for attributes to determine the root node. It further splits the data based on the root node and calculates entropy and information gain for subtrees until classes can be determined at the leaf nodes. The example builds out the full decision tree to classify whether it is suitable to play tennis based on weather conditions.
Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge
The Building Blocks of QuestDB, a Time Series Database (javier ramirez)
Talk Delivered at Valencia Codes Meetup 2024-06.
Traditionally, databases have treated timestamps just as another data type. However, when performing real-time analytics, timestamps should be first class citizens and we need rich time semantics to get the most out of our data. We also need to deal with ever growing datasets while keeping performant, which is as fun as it sounds.
It is no wonder time-series databases are now more popular than ever before. Join me in this session to learn about the internal architecture and building blocks of QuestDB, an open source time-series database designed for speed. We will also review a history of some of the changes we have gone over the past two years to deal with late and unordered data, non-blocking writes, read-replicas, or faster batch ingestion.
Beyond the Basics of A/B Tests: Highly Innovative Experimentation Tactics You... (Aggregage)
This webinar will explore cutting-edge, less familiar but powerful experimentation methodologies which address well-known limitations of standard A/B Testing. Designed for data and product leaders, this session aims to inspire the embrace of innovative approaches and provide insights into the frontiers of experimentation!
Similar to Machine Learning with Accord Framework (11)
The Ipsos - AI - Monitor 2024 Report.pdf (Social Samosa)
According to Ipsos AI Monitor's 2024 report, 65% of Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
Analysis insight about a Flyball dog competition team's performance (roli9797)
Insights from my analysis of a Flyball dog competition team's performance last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
8. CAN WE PLAY TENNIS
Outlook Temperature Humidity Wind PlayTennis
Sunny Hot High Weak No
Sunny Hot High Strong No
Overcast Hot High Weak Yes
Rain Mild High Weak Yes
Rain Cool Normal Weak Yes
Rain Cool Normal Strong No
Overcast Cool Normal Strong Yes
Sunny Mild High Weak No
Sunny Cool Normal Weak Yes
Rain Mild Normal Weak Yes
Sunny Mild Normal Strong Yes
Overcast Mild High Strong Yes
Overcast Hot Normal Weak Yes
Rain Mild High Strong No
(Dataset from Mitchell, T. M. Machine Learning. McGraw-Hill, 1997, pp. 59-60.)
9. CAN WE PLAY TENNIS
Outlook Temperature Humidity Wind PlayTennis
Sunny Hot High Weak No
Sunny Hot High Strong No
Overcast Hot High Weak Yes
Rain Mild High Weak Yes
Rain Cool Normal Weak Yes
Rain Cool Normal Strong No
Overcast Cool Normal Strong Yes
Sunny Mild High Weak No
Sunny Cool Normal Weak Yes
Rain Mild Normal Weak Yes
Sunny Mild Normal Strong Yes
Overcast Mild High Strong Yes
Overcast Hot Normal Weak Yes
Rain Mild High Strong No
10. CAN WE PLAY TENNIS
Outlook PlayTennis
Sunny No
Sunny No
Overcast Yes
Rain Yes
Rain Yes
Rain No
Overcast Yes
Sunny No
Sunny Yes
Rain Yes
Sunny Yes
Overcast Yes
Overcast Yes
Rain No
11. CAN WE PLAY TENNIS
Outlook PlayTennis
Overcast Yes
Overcast Yes
Overcast Yes
Overcast Yes
Rain Yes
Rain Yes
Rain Yes
Rain No
Rain No
Sunny Yes
Sunny Yes
Sunny No
Sunny No
Sunny No
12. CAN WE PLAY TENNIS
Outlook PlayTennis
Overcast Yes
Overcast Yes
Overcast Yes
Overcast Yes
Rain Yes
Rain Yes
Rain Yes
Rain No
Rain No
Sunny Yes
Sunny Yes
Sunny No
Sunny No
Sunny No
13. CAN WE PLAY TENNIS
Outlook PlayTennis
Overcast Yes
Overcast Yes
Overcast Yes
Overcast Yes
Rain Yes
Rain Yes
Rain Yes
Rain No
Rain No
Sunny Yes
Sunny Yes
Sunny No
Sunny No
Sunny No
entropy: H(x) = -p(Yes) log2 p(Yes) - p(No) log2 p(No)
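The entropy formula on this slide can be sketched as a small Python helper (a hypothetical illustration, separate from the Accord code later in the deck), reproducing the values the following slides compute:

```python
import math

def entropy(yes, no):
    """Two-class Shannon entropy; 0 * log2(0) is taken as 0."""
    total = yes + no
    h = 0.0
    for count in (yes, no):
        p = count / total
        if p > 0:
            h -= p * math.log2(p)
    return h

# Whole dataset: 9 Yes vs 5 No
print(round(entropy(9, 5), 2))   # 0.94
# A pure branch such as Overcast (4 Yes, 0 No) has zero entropy
print(entropy(4, 0))             # 0.0
```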
14. CAN WE PLAY TENNIS
Outlook PlayTennis
Overcast Yes
Overcast Yes
Overcast Yes
Overcast Yes
Rain Yes
Rain Yes
Rain Yes
Rain No
Rain No
Sunny Yes
Sunny Yes
Sunny No
Sunny No
Sunny No
entropy(Overcast) = -1 log2 1 - 0 log2 0 = 0
entropy(Rain) = -0.6 log2 0.6 - 0.4 log2 0.4 = 0.97
entropy(Sunny) = -0.4 log2 0.4 - 0.6 log2 0.6 = 0.97
15. CAN WE PLAY TENNIS
Outlook PlayTennis
Overcast Yes
Overcast Yes
Overcast Yes
Overcast Yes
Rain Yes
Rain Yes
Rain Yes
Rain No
Rain No
Sunny Yes
Sunny Yes
Sunny No
Sunny No
Sunny No
entropy(Overcast) = -1 log2 1 - 0 log2 0 = 0
entropy(Rain) = -0.6 log2 0.6 - 0.4 log2 0.4 = 0.97
entropy(Sunny) = -0.4 log2 0.4 - 0.6 log2 0.6 = 0.97
16. CAN WE PLAY TENNIS
Outlook PlayTennis
Overcast Yes
Overcast Yes
Overcast Yes
Overcast Yes
Rain Yes
Rain Yes
Rain Yes
Rain No
Rain No
Sunny Yes
Sunny Yes
Sunny No
Sunny No
Sunny No
entropy(before) = 0.94, entropy(after) = 0.7
information gain = 0.94 - 0.7 = 0.24
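The information-gain computation for the Outlook split can be checked with a short Python sketch (variable names are mine, not from the deck):

```python
import math

def entropy(yes, no):
    """Two-class Shannon entropy, skipping zero-probability terms."""
    total = yes + no
    return -sum(p * math.log2(p)
                for p in (yes / total, no / total) if p > 0)

# (Yes, No) counts per Outlook value, from the 14-row table
branches = {"Overcast": (4, 0), "Rain": (3, 2), "Sunny": (2, 3)}
n = 14

before = entropy(9, 5)                         # entropy before the split, ~0.94
after = sum((y + no) / n * entropy(y, no)      # weighted entropy after, ~0.69
            for y, no in branches.values())
gain = before - after
print(round(gain, 2))  # 0.25 (the slide rounds the intermediates to 0.94 - 0.7 = 0.24)
```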
17. CAN WE PLAY TENNIS
Outlook Temperature Humidity Wind PlayTennis
Sunny Cool Normal Weak Yes
Sunny Mild Normal Strong Yes
Sunny Hot High Weak No
Sunny Hot High Strong No
Outlook?
Rain
Yes
Outlook Temperature Humidity Wind PlayTennis
Rain Mild High Weak Yes
Rain Cool Normal Weak Yes
Rain Mild Normal Weak Yes
Rain Cool Normal Strong No
18. CAN WE PLAY TENNIS
Outlook Temperature Humidity Wind PlayTennis
Sunny Cool Normal Weak Yes
Sunny Mild Normal Strong Yes
Sunny Hot High Weak No
Sunny Hot High Strong No
Outlook?
Rain
Yes
Wind?
Yes
No
19. CAN WE PLAY TENNIS
Final tree:
Outlook?
- Overcast -> Yes
- Rain -> Wind? (Weak -> Yes, Strong -> No)
- Sunny -> Humidity? (Normal -> Yes, High -> No)
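The finished tree can be written out as a plain if/else function (a hypothetical Python sketch, separate from the Accord code on the next slide) and checked against all 14 training rows; note that Temperature is never tested by the tree:

```python
def play_tennis(outlook, humidity, wind):
    # Root split on Outlook, then Wind under Rain and Humidity under Sunny
    if outlook == "Overcast":
        return "Yes"
    if outlook == "Rain":
        return "Yes" if wind == "Weak" else "No"
    return "Yes" if humidity == "Normal" else "No"  # Sunny

# (Outlook, Humidity, Wind, PlayTennis) for the 14 rows of the dataset
rows = [
    ("Sunny", "High", "Weak", "No"),         ("Sunny", "High", "Strong", "No"),
    ("Overcast", "High", "Weak", "Yes"),     ("Rain", "High", "Weak", "Yes"),
    ("Rain", "Normal", "Weak", "Yes"),       ("Rain", "Normal", "Strong", "No"),
    ("Overcast", "Normal", "Strong", "Yes"), ("Sunny", "High", "Weak", "No"),
    ("Sunny", "Normal", "Weak", "Yes"),      ("Rain", "Normal", "Weak", "Yes"),
    ("Sunny", "Normal", "Strong", "Yes"),    ("Overcast", "High", "Strong", "Yes"),
    ("Overcast", "Normal", "Weak", "Yes"),   ("Rain", "High", "Strong", "No"),
]
correct = sum(play_tennis(o, h, w) == label for o, h, w, label in rows)
print(correct)  # 14: every training row is classified correctly
```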
20. CREATING TREE
// Define the discrete attributes of the tennis dataset
// (attribute name, number of possible symbols)
DecisionVariable[] attributes =
{
    new DecisionVariable("Outlook", 3),     // Sunny, Overcast, Rain
    new DecisionVariable("Temperature", 3), // Hot, Mild, Cool
    new DecisionVariable("Humidity", 2),    // High, Normal
    new DecisionVariable("Wind", 2)         // Weak, Strong
};

int classCount = 2; // PlayTennis: Yes or No

// Create an (as yet untrained) decision tree over these attributes
DecisionTree tree = new DecisionTree(attributes, classCount);