Welcome to Supervised Machine Learning and Data Science.
Algorithms for building models: Support Vector Machines.
Classification algorithm explanation and code in Python (SVM).
Basics of Decision Tree Learning. This slide deck includes the definition of a decision tree, a basic example, the basic construction of a decision tree, and a MATLAB example.
Scikit-Learn is a powerful machine learning library implemented in Python on top of the numeric and scientific computing powerhouses NumPy, SciPy, and matplotlib, enabling extremely fast analysis of small to medium-sized data sets. It is open source, commercially usable, and contains many modern machine learning algorithms for classification, regression, clustering, feature extraction, and optimization. For this reason Scikit-Learn is often the first tool in a Data Scientist's toolkit for machine learning on incoming data sets.
The purpose of this one-day course is to serve as an introduction to Machine Learning with Scikit-Learn. We will explore several clustering, classification, and regression algorithms for a variety of machine learning tasks and learn how to implement these tasks with our data using Scikit-Learn and Python. In particular, we will structure our machine learning models as though we were producing a data product: an actionable model that can be used in larger programs or algorithms, rather than simply as a research or investigation methodology.
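As a first taste of the Scikit-Learn workflow described above, here is a minimal sketch: it fits a small decision tree classifier to the bundled Iris data set and reports held-out accuracy. The dataset choice and the `max_depth=3` setting are illustrative assumptions, not part of the course material.

```python
# Minimal decision tree sketch with Scikit-Learn (illustrative data/params).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Limit depth to keep the tree small and human-readable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The fitted tree can then be inspected (for example with `sklearn.tree.export_text`), which is exactly the transparency benefit the slides below emphasize.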
Classification Using Decision Trees and Rules
Chapter 5
Introduction
• Decision tree learners use a tree structure to model the relationships among the features and the potential outcomes.
• The tree channels a series of branching decisions into a final predicted class value.
• A decision begins at the root node and is then passed through decision nodes that require choices.
• The choices split the data across branches that indicate the potential outcomes of a decision.
• The tree is terminated by leaf nodes that denote the action to be taken as the result of the series of decisions.
Decision Tree Example
Benefits
• The flowchart-like tree structure is not necessarily exclusively for the learner's internal use.
• The resulting structure comes in a human-readable format.
• This provides insight into how and why the model works, or doesn't work well, for a particular task.
• Useful where the classification mechanism needs to be transparent for legal reasons, or where the results need to be shared with others to inform future business practices:
  • Credit scoring models, where the criteria that cause an applicant to be rejected need to be clearly documented and free from bias
  • Marketing studies of customer behavior, such as satisfaction or churn, which will be shared with management or advertising agencies
  • Diagnosis of medical conditions based on laboratory measurements, symptoms, or the rate of disease progression
Applicability
• Widely used machine learning technique
• Can be applied to model almost any type of data, often with excellent results
• Does not fit tasks where the data has a large number of nominal features with many levels, or a large number of numeric features.
  • These result in a large number of decisions and an overly complex tree.
• Decision trees have a tendency to overfit the data, though this can be overcome by adjusting some simple parameters.
Divide and Conquer
• Decision trees are built using a heuristic called recursive partitioning.
• Known as divide and conquer because it splits the data into subsets, which are then split repeatedly into even smaller subsets.
• Splitting stops when the data within the subsets are sufficiently homogeneous, or another stopping criterion has been met.
• The root node represents the entire dataset.
• The algorithm must choose a feature to split upon: the feature most predictive of the target class.
• The algorithm continues to divide and conquer the data, choosing the best candidate feature each time to create another decision node, until a stopping criterion is reached.
Divide and Conquer
• Stopping Conditions
• All (or nearly all) of the examples at the node have the same class
• There are no remaining features to distinguish among the examples
• The tree has grown to a predefined size limit
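The divide-and-conquer procedure and its stopping conditions can be sketched in plain Python. This is a toy illustration: the `grow` function, its dict-based tree encoding, and the take-the-first-feature placeholder for split selection are our assumptions; how to actually pick the most predictive feature is covered later in the deck.

```python
# Sketch of recursive partitioning with the three stopping conditions above.
from collections import Counter

def majority(labels):
    """Most frequent class label in a list."""
    return Counter(labels).most_common(1)[0][0]

def grow(rows, labels, features, depth=0, max_depth=3):
    # Stop: all examples share a class, no features remain, or size limit hit.
    if len(set(labels)) == 1 or not features or depth == max_depth:
        return majority(labels)
    best = features[0]  # placeholder: a real learner picks the most predictive feature
    tree = {}
    for value in set(row[best] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        tree[(best, value)] = grow([rows[i] for i in idx],
                                   [labels[i] for i in idx],
                                   [f for f in features if f != best],
                                   depth + 1, max_depth)
    return tree
```

On a tiny dataset like `grow([{"outlook": "sunny"}, {"outlook": "rain"}], ["no", "yes"], ["outlook"])`, the result is a one-level tree mapping each outlook value to its class.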
Example
• Finding the potential of a movie: Box Office Bust, Mainstream Hit, or Critical Success
• Diagonal lines might have split the data even more cleanly.
• This is a limitation of the decision tree's knowledge representation, which splits on one feature at a time (axis-parallel boundaries).
2. Goal of Classification Algorithm
• Build models with good generalization capabilities, i.e., models that accurately predict the class labels of previously unseen records.
• Classification Algorithms:
• Naïve Bayes Classifier
• Decision Tree
• Rule-based classifiers
• Neural Network
• Support Vector Machine
3. Why decision trees?
• Decision trees are powerful and popular tools for classification and prediction.
• Decision trees represent rules, which can be understood by humans and used in knowledge systems such as databases.
11. Definition
• A decision tree is a classifier in the form of a tree structure. It maps out all possible decision paths in the form of a tree.
– Root node: has no incoming edges and zero or more outgoing edges.
– Internal node (decision node): specifies a test on a single attribute.
– Leaf node: indicates the value of the target attribute.
– Branch (arc/edge): a split on one attribute.
• Decision trees classify instances or examples by starting at the root of the tree and moving through it until a leaf node is reached, based on locally optimal decisions.
21. How do we find the best tree?
• The exponentially large number of possible decision trees makes finding the optimal tree computationally hard.
22. Decision tree
• A decision tree represents a learned target function:
  • Each internal node tests an attribute
  • Each branch corresponds to an attribute value
  • Each leaf node assigns a classification
• Can be represented by a logical formula
• Key questions: (1) Which attribute to start with? (root) (2) Which node to proceed to? (3) When to stop / come to a conclusion?
29. Greedy Decision Tree Algorithm
Step 1: Start with an empty tree
Step 2: Select a feature to split the data on
For each split of the tree:
Step 3: If there is nothing more to split, make predictions
Step 4: Otherwise, go to Step 2 and continue (recurse) on this split
Problem 1: Feature split selection (Step 2)
Problem 2: Stopping condition (Step 3)
30. Design Issues of Decision Tree Induction
• How to classify a leaf node
  • Assign the majority class
  • If the leaf is empty, assign the default class: the most frequent class overall
• Determine how to split the records
  • How to specify the attribute test condition?
  • How to determine the best split?
• Determine when to stop splitting
  • Every attribute has already been included along this path through the tree
  • Stop splitting if all the records belong to the same class or have identical attribute values
  • Stop when each leaf node has uncertainty below some threshold
38. Algorithms
•Many Algorithms:
• Hunt’s Algorithm (one of the earliest)
• ID3 (Iterative Dichotomiser)
• C4.5
• CART (Classification And Regression Tree)
• SLIQ, SPRINT
39. General Structure of Hunt's Algorithm
• The basis of many existing decision tree algorithms.
• Let Dt be the set of training records that reach a node t.
• General procedure:
  • If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  • If Dt contains records with identical attribute values, then t is a leaf node labeled with the majority class yt
  • If Dt is an empty set, then t is a leaf node labeled with the default class yd
  • If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.
Tid  Refund  Marital Status  Taxable Income  Cheat
 1   Yes     Single          125K            No
 2   No      Married         100K            No
 3   No      Single           70K            No
 4   Yes     Married         120K            No
 5   No      Divorced         95K            Yes
 6   No      Married          60K            No
 7   Yes     Divorced        220K            No
 8   No      Single           85K            Yes
 9   No      Married          75K            No
10   No      Single           90K            Yes
40. Hunt's Algorithm
[Figure: the tree is grown in stages on the training data above.]
Step 1: a single leaf predicting the majority class, Don't Cheat.
Step 2: split on Refund (Yes → Don't Cheat; No → Don't Cheat).
Step 3: under Refund = No, split on Marital Status (Single, Divorced → Cheat; Married → Don't Cheat).
Step 4: under Marital Status = Single, Divorced, split on Taxable Income (< 80K → Don't Cheat; >= 80K → Cheat).
(The original figure repeats the training table, partitioned by Refund, alongside each stage.)
41. Hunt's Algorithm
• Empty node (none of the training records have this combination of attribute values):
  • The node is declared a leaf node with the same class label as the majority class of the training records associated with its parent node.
• Non-empty node where all records have the same class, or identical attribute values (except for the class label):
  • The node is declared a leaf node with the same class label as the majority class of the training records associated with this node.
42. Iterative Dichotomiser (ID3)
• Dichotomisation means the act of dividing into two sharply different categories.
[Example tree:]
Outlook
├─ Sunny → Humidity (High → No; Normal → Yes)
├─ Overcast → Yes
└─ Rain → Wind (Strong → No; Weak → Yes)
43. Principled Criterion
• Selection of an attribute to test at each node: choose the most useful attribute for classifying examples.
• Information gain
  • Measures how well a given attribute separates the training examples according to their target classification.
  • This measure is used to select among the candidate attributes at each step while growing the tree.
  • Gain is a measure of how much we can reduce uncertainty (its value lies between 0 and 1).
44. How to Specify Test Condition?
•Depends on attribute types
• Binary
• Nominal
• Ordinal
• Continuous
•Depends on number of ways to split
• 2-way split
• Multi-way split
45. Splitting Based on Binary Attributes
• Binary split: the test condition for a binary attribute generates two potential outcomes.
  Body Temp: {Warm-blooded} | {Cold-blooded}
46. Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as distinct values.
  CarType: {Family} | {Sports} | {Luxury}
• Binary split: divides values into two subsets; need to find the optimal partitioning.
  CarType: {Family, Luxury} | {Sports}   OR   CarType: {Sports, Luxury} | {Family}
Note: CART produces only binary splits, considering all 2^(k-1) - 1 ways of creating a binary partition of k attribute values.
47. Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as distinct values.
  Size: {Small} | {Medium} | {Large}
• Binary split: divides values into two subsets while respecting the order (values are grouped as long as the grouping does not violate the order property of the attribute values). Need to find the optimal partitioning.
  Size: {Small, Medium} | {Large, Extra Large}   OR   Size: {Medium, Large, Extra Large} | {Small}
  But Size: {Small, Large} | {Medium, Extra Large} violates the order property.
48. Splitting Based on Continuous Attributes
• Different ways of handling:
• Discretization to form an ordinal categorical attribute
  • Static: discretize once at the beginning
  • Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
• Binary decision: (A < v) or (A >= v)
  • Consider all possible splits and find the best cut
  • Can be more compute-intensive
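The exhaustive search over candidate cut points can be sketched as follows, using the entropy measure introduced later in the deck and the Taxable Income column from the Hunt's algorithm table. The `best_cut` function and the midpoint candidate rule are illustrative assumptions, not the slides' own code.

```python
# Sketch: find the best binary cut (A < v) vs (A >= v) on a numeric attribute.
import math

def entropy(labels):
    """Entropy of a list of class labels."""
    n = len(labels)
    return sum(-labels.count(c) / n * math.log2(labels.count(c) / n)
               for c in set(labels))

def best_cut(values, labels):
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best_v, best_gain = None, -1.0
    for i in range(1, len(pairs)):
        v = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint candidate cut
        left = [l for x, l in pairs if x < v]
        right = [l for x, l in pairs if x >= v]
        if not left or not right:
            continue
        gain = base - len(left) / len(pairs) * entropy(left) \
                    - len(right) / len(pairs) * entropy(right)
        if gain > best_gain:
            best_v, best_gain = v, gain
    return best_v, best_gain

# Taxable Income column and Cheat labels from the table above (in K)
incomes = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
cut, gain = best_cut(incomes, cheat)
```

On this data the best cut lands between 95K and 100K, illustrating why the search over all splits can be compute-intensive for continuous attributes.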
56. How to determine the Best Split
Before Splitting: 10 records of class 0,
10 records of class 1
Which test condition is the best?
• Class distribution of the records before and after splitting
57. How to determine the Best Split
• Greedy approach: nodes with a homogeneous class distribution are preferred.
• Need a measure of node impurity: the smaller the degree of impurity, the more skewed the class distribution.
• Ideas? Entropy and information gain.
Non-homogeneous → high degree of impurity. Homogeneous → low degree of impurity.
58. Entropy
• A measure of
  • Uncertainty
  • (Im)purity
  • Information content
• Given a collection S:
  Entropy(S) = -p+ log2(p+) - p- log2(p-)
  where p+ is the proportion of positive examples in S and p- is the proportion of negative examples in S.
• The lower the entropy, the less uniform the distribution and the purer the node.
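The two-class entropy formula above is short enough to check directly in code (the function name `entropy2` is ours):

```python
# Two-class entropy: -p+ log2(p+) - p- log2(p-).
import math

def entropy2(p_pos):
    """Entropy of a two-class collection given the positive proportion."""
    if p_pos in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    p_neg = 1.0 - p_pos
    return -p_pos * math.log2(p_pos) - p_neg * math.log2(p_neg)
```

For example, `entropy2(0.5)` is 1.0 (maximum uncertainty) and `entropy2(9/14)` is about 0.940, matching the worked example below.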
59. Information Gain
• Gain tells us how much would be gained by branching on attribute A.
• Information gain is simply the expected reduction in entropy caused by partitioning the examples according to the selected attribute.
• The information gain, Gain(S, A), of an attribute A is defined as
  Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) Entropy(S_v)
  where Values(A) is the set of all possible values for attribute A, and S_v is the subset of S for which attribute A has value v.
64. Example
• S is a collection of 14 examples: 9 positive and 5 negative.
  Entropy([9+, 5-]) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
• Entropy is 0 if all members of S belong to the same class. Entropy is 1 when the collection contains an equal number of positive and negative examples.
65. Entropy
1. The entropy is 0 if the outcome is 'certain'.
2. The entropy is maximum if we have no knowledge of the system (any outcome is equally possible).
S is a sample of training examples; p+ is the proportion of positive examples in S, and p- is the proportion of negative examples in S. Entropy measures the impurity of S:
  Entropy(S) = -p+ log2(p+) - p- log2(p-)
[Figure: entropy of a 2-class problem with regard to the proportion of one of the two groups.]
66. Examples
• Before partitioning, the entropy is
  Info(10/20, 10/20) = -10/20 log(10/20) - 10/20 log(10/20) = 1
• Using the "where" attribute, divide into 2 subsets:
  • Entropy of the first set: Info(home) = -6/12 log(6/12) - 6/12 log(6/12) = 1
  • Entropy of the second set: Info(away) = -4/8 log(4/8) - 4/8 log(4/8) = 1
• Expected entropy after partitioning:
  12/20 × Info(home) + 8/20 × Info(away) = 1
67. Example
• Using the "when" attribute, divide into 3 subsets:
  • Entropy of the first set: Info(5pm) = -1/4 log(1/4) - 3/4 log(3/4)
  • Entropy of the second set: Info(7pm) = -9/12 log(9/12) - 3/12 log(3/12)
  • Entropy of the third set: Info(9pm) = -0/4 log(0/4) - 4/4 log(4/4) = 0 (taking 0 log 0 as 0)
• Expected entropy after partitioning:
  4/20 × Info(1/4, 3/4) + 12/20 × Info(9/12, 3/12) + 4/20 × Info(0/4, 4/4) = 0.65
• Information gain: 1 - 0.65 = 0.35
71. Example
• The information gain due to sorting the original 14 examples by the attribute Wind may then be calculated as follows:
  S = [9+, 5-], S_weak ← [6+, 2-], S_strong ← [3+, 3-]
  Gain(S, Wind) = Entropy(S) - Σ_{v ∈ {Weak, Strong}} (|S_v|/|S|) Entropy(S_v)
                = 0.940 - (8/14) Entropy(S_weak) - (6/14) Entropy(S_strong)
                = 0.940 - (8/14)(0.811) - (6/14)(1.00) = 0.048
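The Gain(S, Wind) computation above can be checked numerically; the helper `H` on class counts is our shorthand for the entropy formula.

```python
# Verify the Wind information-gain example numerically.
import math

def H(pos, neg):
    """Entropy of a collection with `pos` positive and `neg` negative examples."""
    total = pos + neg
    return sum(-p * math.log2(p) for p in (pos / total, neg / total) if p > 0)

# S = [9+, 5-], S_weak = [6+, 2-], S_strong = [3+, 3-]
gain_wind = H(9, 5) - (8 / 14) * H(6, 2) - (6 / 14) * H(3, 3)
```

Here H(9, 5) ≈ 0.940, H(6, 2) ≈ 0.811, and H(3, 3) = 1, giving a gain of about 0.048.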
86. Information Gain: Limitation
• Problematic: attributes with a large number of values (extreme case: an ID attribute)
• Subsets are more likely to be pure if there is a large number of values
• Information gain is biased towards choosing attributes with a large number of values
87. Gain Ratio
• A modification of the information gain that reduces its bias.
• The gain ratio measure penalizes attributes such as customer ID by incorporating a term called split information.
• Split information is sensitive to how broadly and uniformly the attribute splits the data.
88. C4.5
• C4.5, a successor of ID3, uses an extension to information gain known as the gain ratio.
• It overcomes the bias problem.
• It applies a kind of normalization to information gain using a split information value:
  SplitInfo_A(S) = -Σ_i (|S_i|/|S|) log2(|S_i|/|S|)
  SplitInfo_A(S) is the entropy of S with respect to the values of attribute A.
  GainRatio(A) = Gain(A) / SplitInfo_A(S)
• The attribute with the maximum gain ratio is selected as the splitting attribute.
89. Gain ratios for weather data

Attribute     Info   Gain                  Split info            Gain ratio
Outlook       0.693  0.940 - 0.693 = 0.247  info([5,4,5]) = 1.577  0.247/1.577 = 0.156
Temperature   0.911  0.940 - 0.911 = 0.029  info([4,6,4]) = 1.362  0.029/1.362 = 0.021
Humidity      0.788  0.940 - 0.788 = 0.152  info([7,7]) = 1.000    0.152/1.000 = 0.152
Windy         0.892  0.940 - 0.892 = 0.048  info([8,6]) = 0.985    0.048/0.985 = 0.049
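The Outlook column of the table above can be reproduced numerically; the `info` helper is our shorthand for entropy over the branch sizes.

```python
# Reproduce the Outlook gain ratio: gain / split info, with split info
# computed from the branch sizes [5, 4, 5] of the Outlook split.
import math

def info(counts):
    """Entropy of a distribution given subset sizes (e.g. [5, 4, 5])."""
    total = sum(counts)
    return sum(-c / total * math.log2(c / total) for c in counts if c)

split_info = info([5, 4, 5])               # ≈ 1.577
gain_ratio = (0.940 - 0.693) / split_info  # ≈ 0.156
```

The same `info` helper with sizes [7, 7] or [8, 6] reproduces the Humidity and Windy split-info values.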