This document provides an overview of machine learning concepts, including supervised and unsupervised learning, decision trees, and computational learning theory. It discusses what machine learning is, why learning is useful, and major machine learning paradigms such as induction and clustering. It also covers the inductive learning problem and decision tree learning with the ID3 algorithm, using information-theoretic concepts like entropy and information gain to select the best attribute to split on. An example decision tree is constructed for a restaurant domain to classify whether a patron will wait for a table or leave.
ML-DecisionTrees.ppt
1. 1
Machine Learning
Chapter 18.1-18.3, 19.1,
skim 20.4-20.5
CS 63
Adapted from slides by
Tim Finin and
Marie desJardins.
Some material adopted
from notes by Chuck Dyer
2. 2
Outline
• Machine learning
– What is ML?
– Inductive learning
• Supervised
• Unsupervised
– Decision trees
– Computational learning theory
3. 3
What is learning?
• “Learning denotes changes in a system that ...
enable a system to do the same task more
efficiently the next time.” –Herbert Simon
• “Learning is constructing or modifying
representations of what is being experienced.”
–Ryszard Michalski
• “Learning is making useful changes in our minds.”
–Marvin Minsky
4. 4
Why learn?
• Understand and improve efficiency of human learning
– Use to improve methods for teaching and tutoring people (e.g., better
computer-aided instruction)
• Discover new things or structure that were previously
unknown to humans
– Examples: data mining, scientific discovery
• Fill in skeletal or incomplete specifications about a domain
– Large, complex AI systems cannot be completely derived by hand
and require dynamic updating to incorporate new information.
– Learning new characteristics expands the domain of expertise and
lessens the “brittleness” of the system
• Build software agents that can adapt to their users or to
other software agents
6. 6
Major paradigms of machine learning
• Rote learning – One-to-one mapping from inputs to stored
representation. “Learning by memorization.” Association-based
storage and retrieval.
• Induction – Use specific examples to reach general conclusions
• Clustering – Unsupervised identification of natural groups in data
• Analogy – Determine correspondence between two different
representations
• Discovery – Unsupervised, specific goal not given
• Genetic algorithms – “Evolutionary” search techniques, based on
an analogy to “survival of the fittest”
• Reinforcement – Feedback (positive or negative reward) given at
the end of a sequence of steps
7. 7
The inductive learning problem
• Extrapolate from a given set of examples to make accurate
predictions about future examples
• Supervised versus unsupervised learning
– Learn an unknown function f(X) = Y, where X is an input example
and Y is the desired output.
– Supervised learning implies we are given a training set of (X, Y)
pairs by a “teacher”
– Unsupervised learning means we are only given the Xs and some
(ultimate) feedback function on our performance.
• Concept learning or classification
– Given a set of examples of some concept/class/category, determine if
a given example is an instance of the concept or not
– If it is an instance, we call it a positive example
– If it is not, it is called a negative example
– Or we can make a probabilistic prediction (e.g., using a Bayes net)
8. 8
Supervised concept learning
• Given a training set of positive and negative examples of a
concept
• Construct a description that will accurately classify whether
future examples are positive or negative
• That is, learn some good estimate of function f given a
training set {(x1, y1), (x2, y2), ..., (xn, yn)} where each yi is
either + (positive) or - (negative), or a probability
distribution over +/-
9. 9
Inductive learning framework
• Raw input data from sensors are typically preprocessed to
obtain a feature vector, X, that adequately describes all of the
relevant features for classifying examples
• Each x is a list of (attribute, value) pairs. For example,
X = [Person:Sue, EyeColor:Brown, Age:Young, Sex:Female]
• The number of attributes (a.k.a. features) is fixed (positive,
finite)
• Each attribute has a fixed, finite number of possible values (or
could be continuous)
• Each example can be interpreted as a point in an n-
dimensional feature space, where n is the number of
attributes
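As a concrete illustration (the attribute names come from the Sue example above; the dict encoding is my own, not from the slides):

```python
# One preprocessed training example: a fixed-length list of
# (attribute, value) pairs, here written as a dict, plus its label.
x = {"Person": "Sue", "EyeColor": "Brown", "Age": "Young", "Sex": "Female"}
y = "+"  # positive instance of the concept

# With n attributes fixed in advance, every example is a point
# in an n-dimensional feature space.
n = len(x)  # n = 4
```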
10. 10
Inductive learning as search
• Instance space I defines the language for the training and
test instances
– Typically, but not always, each instance i ∈ I is a feature vector
– Features are also sometimes called attributes or variables
– I: V1 x V2 x … x Vk, i = (v1, v2, …, vk)
• Class variable C gives an instance’s class (to be predicted)
• Model space M defines the possible classifiers
– M: I → C, M = {m1, … mn} (possibly infinite)
– Model space is sometimes, but not always, defined in terms of the
same features as the instance space
• Training data can be used to direct the search for a good
(consistent, complete, simple) hypothesis in the model
space
11. 11
Model spaces
• Decision trees
– Partition the instance space into axis-parallel regions, labeled with class
value
• Version spaces
– Search for necessary (lower-bound) and sufficient (upper-bound) partial
instance descriptions for an instance to be a member of the class
• Nearest-neighbor classifiers
– Partition the instance space into regions defined by the centroid instances
(or cluster of k instances)
• Associative rules (feature values → class)
• First-order logical rules
• Bayesian networks (probabilistic dependencies of class on attributes)
• Neural networks
13. 13
Learning decision trees
•Goal: Build a decision tree to classify
examples as positive or negative
instances of a concept using
supervised learning from a training set
•A decision tree is a tree where
– each non-leaf node has associated with it
an attribute (feature)
–each leaf node has associated with it a
classification (+ or -)
–each arc has associated with it one of the
possible values of the attribute at the node
from which the arc is directed
•Generalization: allow for >2 classes
–e.g., {sell, hold, buy}
[Figure: an example decision tree. The root tests Color (red, green, blue); internal nodes test Shape (round, square) and Size (big, small); leaves are labeled + or -.]
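A minimal sketch of this representation in code (the nested-tuple encoding and the particular tree below are my own illustration, not taken from the figure):

```python
# An internal node is (attribute, {value: subtree, ...});
# a leaf is just a classification such as "+" or "-".
tree = ("Color", {
    "red":   ("Shape", {"round": "+", "square": "-"}),
    "green": ("Size",  {"big": "-", "small": "+"}),
    "blue":  "+",
})

def classify(tree, example):
    """Follow the arc matching the example's value for each
    tested attribute until a leaf classification is reached."""
    while isinstance(tree, tuple):
        attribute, branches = tree
        tree = branches[example[attribute]]
    return tree

print(classify(tree, {"Color": "green", "Size": "small"}))  # +
```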
15. 15
Inductive learning and bias
• Suppose that we want to learn a function f(x) = y and we
are given some sample (x,y) pairs, as in figure (a)
• There are several hypotheses we could make about this
function, e.g.: (b), (c) and (d)
• A preference for one over the others reveals the bias of our
learning technique, e.g.:
– prefer piece-wise functions
– prefer a smooth function
– prefer a simple function and treat outliers as noise
16. 16
Preference bias: Ockham’s Razor
• A.k.a. Occam’s Razor, Law of Economy, or Law of
Parsimony
• Principle stated by William of Ockham (1285-1347/49), a
scholastic, that
– “non sunt multiplicanda entia praeter necessitatem”
– or, entities are not to be multiplied beyond necessity
• The simplest consistent explanation is the best
• Therefore, the smallest decision tree that correctly classifies
all of the training examples is best.
• Finding the provably smallest decision tree is NP-hard, so
instead of constructing the absolute smallest tree consistent
with the training examples, construct one that is pretty small
17. 17
R&N’s restaurant domain
• Develop a decision tree to model the decision a patron
makes when deciding whether or not to wait for a table at a
restaurant
• Two classes: wait, leave
• Ten attributes: Alternative available? Bar in restaurant? Is it
Friday? Are we hungry? How full is the restaurant? How
expensive? Is it raining? Do we have a reservation? What
type of restaurant is it? What’s the purported waiting time?
• Training set of 12 examples
• ~ 7000 possible cases
20. 20
ID3
• A greedy algorithm for decision tree construction developed
by Ross Quinlan (1986)
• Top-down construction of the decision tree by recursively
selecting the “best attribute” to use at the current node in the
tree
– Once the attribute is selected for the current node,
generate children nodes, one for each possible value of
the selected attribute
– Partition the examples using the possible values of this
attribute, and assign these subsets of the examples to the
appropriate child node
– Repeat for each child node until all examples associated
with a node are either all positive or all negative
21. 21
Choosing the best attribute
• The key problem is choosing which attribute to
split a given set of examples
• Some possibilities are:
– Random: Select any attribute at random
– Least-Values: Choose the attribute with the smallest
number of possible values
– Most-Values: Choose the attribute with the largest
number of possible values
– Max-Gain: Choose the attribute that has the largest
expected information gain–i.e., the attribute that will
result in the smallest expected size of the subtrees rooted
at its children
• The ID3 algorithm uses the Max-Gain method of
selecting the best attribute
25. 25
Information theory
• If there are n equally probable possible messages, then the
probability p of each is 1/n
• Information conveyed by a message is -log(p) = log(n)
• E.g., if there are 16 messages, then log(16) = 4 and we need 4
bits to identify/send each message
• In general, if we are given a probability distribution
P = (p1, p2, .., pn)
• Then the information conveyed by the distribution (aka entropy
of P) is:
I(P) = -(p1*log(p1) + p2*log(p2) + .. + pn*log(pn))
26. 26
Information theory II
• Information conveyed by distribution (a.k.a. entropy of P):
I(P) = -(p1*log(p1) + p2*log(p2) + .. + pn*log(pn))
• Examples:
– If P is (0.5, 0.5) then I(P) is 1
– If P is (0.67, 0.33) then I(P) is 0.92
– If P is (1, 0) then I(P) is 0
• The more uniform the probability distribution, the greater
its information: More information is conveyed by a message
telling you which event actually occurred
• Entropy is the average number of bits/message needed to
represent a stream of messages
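To make these numbers concrete, here is a minimal sketch of the entropy function in Python (my own helper, using the base-2 logarithms the slides assume):

```python
import math

def entropy(P):
    """I(P) = -(p1*log(p1) + ... + pn*log(pn)), with 0*log(0) taken as 0."""
    return -sum(p * math.log2(p) for p in P if p > 0)

print(entropy([0.5, 0.5]))   # 1.0
print(entropy([2/3, 1/3]))   # ~0.918, the slide's 0.92 for (0.67, 0.33)
print(entropy([1.0, 0.0]))   # 0.0
```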
27. 27
The Entropy Function Relative to
Boolean Classification
[Figure: entropy of a Boolean classification plotted against the proportion of positive examples; it is 0 at proportions 0.0 and 1.0 and peaks at 1.0 when the proportion is 0.5. Example taken from Tom Mitchell's Machine Learning.]
28. 28
Huffman code
• In 1952 MIT student David Huffman devised, in the course of
doing a homework assignment, an elegant coding scheme
which is optimal in the case where all symbols’ probabilities are
integral powers of 1/2.
• A Huffman code can be built in the following manner:
– Rank all symbols in order of probability of occurrence
– Successively combine the two symbols of the lowest
probability to form a new composite symbol; eventually we
will build a binary tree where each node is the probability of
all nodes beneath it
– Trace a path to each leaf, noticing the direction at each node
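A minimal sketch of that procedure (my own Python, using heapq for the repeated "combine the two lowest-probability symbols" step):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability}: repeatedly
    merge the two lowest-probability nodes into a composite node,
    then read off the 0/1 path to each leaf."""
    tiebreak = count()  # avoids comparing subtrees when probabilities tie
    heap = [(p, next(tiebreak), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):       # internal node: recurse both ways
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                             # leaf: record the accumulated path
            code[node] = prefix or "0"
        return code
    return walk(heap[0][2], "")

print(huffman_code({"A": 0.125, "B": 0.125, "C": 0.25, "D": 0.5}))
# e.g. {'A': '000', 'B': '001', 'C': '01', 'D': '1'}; the exact 0/1
# assignment may differ, but codeword lengths match the next slide
```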
29. 29
Huffman code example
Msg. Prob.
A .125
B .125
C .25
D .5

[Figure: the Huffman tree for these four symbols. A (.125) and B (.125) combine into a node of probability .25; that node and C (.25) combine into a node of probability .5; that node and D (.5) form the root (probability 1). Arcs are labeled 0 and 1.]

M   code   length   prob    length x prob
A   000    3        0.125   0.375
B   001    3        0.125   0.375
C   01     2        0.250   0.500
D   1      1        0.500   0.500
                    average message length: 1.750

If we use this code to send many messages (A, B, C, or D) with this
probability distribution, then, over time, the average bits per message
should approach 1.75.
30. 30
So what does the Huffman code have
to do with information theory?
• Shannon’s entropy gives the average length of the smallest
encoding theoretically possible for a weighted alphabet.
• The information theoretical limit to encoding this alphabet
is:
-0.5*log(0.5) – 0.25*log(0.25) – 0.125*log(0.125) – 0.125*log(0.125) = 1.75
• Huffman’s code is optimal in the information-theoretic sense, and
yields encodings which are very close to the theoretical limit.
31. 31
Information for classification
• If a set T of records is partitioned into disjoint exhaustive
classes (C1,C2,..,Ck) on the basis of the value of the class
attribute, then the information needed to identify the class of
an element of T is
Info(T) = I(P)
where P is the probability distribution of partition (C1,C2,..,Ck):
P = (|C1|/|T|, |C2|/|T|, ..., |Ck|/|T|)
[Figure: two partitions of T into classes C1, C2, C3. A partition with classes of roughly equal size has high information (entropy); a partition dominated by one class has low information.]
32. 32
Information for classification II
• If we partition T w.r.t attribute X into sets {T1,T2, ..,Tn} then
the information needed to identify the class of an element of
T becomes the weighted average of the information needed
to identify the class of an element of Ti, i.e. the weighted
average of Info(Ti):
Info(X,T) = Σ |Ti|/|T| * Info(Ti)
[Figure: the same high- vs. low-information contrast, now for the subsets produced by partitioning T on attribute X.]
33. 33
Information gain
• Consider the quantity Gain(X,T) defined as
Gain(X,T) = Info(T) - Info(X,T)
• This represents the difference between
– information needed to identify an element of T and
– information needed to identify an element of T after the value of attribute X
has been obtained
That is, this is the gain in information due to attribute X
• We can use this to rank attributes and to build decision trees where at each
node is located the attribute with greatest gain among the attributes not yet
considered in the path from the root
• The intent of this ordering is:
– To create small decision trees so that records can be identified after only a few
questions
– To match a hoped-for minimality of the process represented by the records
being considered (Occam’s Razor)
34. 34
Computing information gain
[Figure: the 12 restaurant examples (6 wait = Y, 6 leave = N) split two ways: by Patrons (Empty: 2 examples, Some: 4, Full: 6) and by Type (French: 2, Italian: 2, Thai: 4, Burger: 4).]

I(T) = -(.5 log .5 + .5 log .5)
     = .5 + .5 = 1

I(Patrons, T) = 1/6 (0) + 1/3 (0) + 1/2 (-(2/3 log 2/3 + 1/3 log 1/3))
              = 1/2 (2/3 * .6 + 1/3 * 1.6)
              = .47

I(Type, T) = 1/6 (1) + 1/6 (1) + 1/3 (1) + 1/3 (1) = 1

Gain(Patrons, T) = 1 - .47 = .53
Gain(Type, T) = 1 - 1 = 0

Using:
I(T) = -Σ |Ti|/|T| * log(|Ti|/|T|)
Info(X,T) = Σ |Ti|/|T| * Info(Ti)
Gain(X,T) = Info(T) - Info(X,T)
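The same calculation as a short Python sketch (the per-branch class counts below are assumptions consistent with the subset sizes and entropies shown above):

```python
import math

def info(counts):
    """Entropy of a class distribution given as counts, e.g. (6, 6)."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def info_split(subsets):
    """Weighted average entropy of the subsets produced by a split."""
    total = sum(sum(s) for s in subsets)
    return sum(sum(s) / total * info(s) for s in subsets)

T = (6, 6)                                  # 6 wait (Y), 6 leave (N)
patrons = [(0, 2), (4, 0), (2, 4)]          # Empty, Some, Full
by_type = [(1, 1), (1, 1), (2, 2), (2, 2)]  # French, Italian, Thai, Burger

print(info(T) - info_split(patrons))  # ~0.54 (the slide's rounding gives .53)
print(info(T) - info_split(by_type))  # 0.0
```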
35. 35
The ID3 algorithm builds a decision tree, given a set of input attributes R, the class attribute C, and a training set S of records.

function ID3(R: a set of input attributes,
             C: the class attribute,
             S: a training set) returns a decision tree;
begin
  If S is empty, return a single node with value Failure;
  If every example in S has the same value for C,
    return a single node with that value;
  If R is empty, then return a single node with the most
    frequent of the values of C found in examples of S
    [note: there will be errors, i.e., improperly classified records];
  Let D be the attribute with the largest Gain(D,S) among attributes in R;
  Let {dj | j = 1, 2, .., m} be the values of attribute D;
  Let {Sj | j = 1, 2, .., m} be the subsets of S consisting
    respectively of records with value dj for attribute D;
  Return a tree with root labeled D and arcs labeled
    d1, d2, .., dm going respectively to the trees
    ID3(R-{D}, C, S1), ID3(R-{D}, C, S2), .., ID3(R-{D}, C, Sm);
end ID3;
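A runnable Python version of this pseudocode (a sketch under my own naming; examples are assumed to be dicts mapping attribute names to values, with labels in a parallel list):

```python
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(labels).values())

def gain(examples, labels, attr):
    """Information gain of splitting the example set on attr."""
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [l for e, l in zip(examples, labels) if e[attr] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return entropy(labels) - remainder

def id3(examples, labels, attrs):
    if not labels:                        # S is empty
        return "Failure"
    if len(set(labels)) == 1:             # every example agrees on C
        return labels[0]
    if not attrs:                         # R is empty: majority class
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain(examples, labels, a))
    branches = {}
    for value in {e[best] for e in examples}:
        keep = [i for i, e in enumerate(examples) if e[best] == value]
        branches[value] = id3([examples[i] for i in keep],
                              [labels[i] for i in keep],
                              attrs - {best})
    return (best, branches)               # root labeled D, one arc per value
```

Calling id3(examples, labels, set_of_attribute_names) returns a nested (attribute, branches) tree in the same encoding the earlier classify sketch reads.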
36. 36
How well does it work?
Many case studies have shown that decision trees are
at least as accurate as human experts.
–A study for diagnosing breast cancer had humans
correctly classifying the examples 65% of the
time; the decision tree classified 72% correctly
–British Petroleum designed a decision tree for gas-
oil separation for offshore oil platforms that
replaced an earlier rule-based expert system
–Cessna designed an airplane flight controller using
90,000 examples and 20 attributes per example
37. 37
Extensions of the decision tree
learning algorithm
• Using gain ratios
• Real-valued data
• Noisy data and overfitting
• Generation of rules
• Setting parameters
• Cross-validation for experimental validation of performance
• C4.5 is an extension of ID3 that accounts for unavailable
values, continuous attribute value ranges, pruning of
decision trees, rule derivation, and so on
38. 38
Using gain ratios
• The information gain criterion favors attributes that have a large
number of values
– If we have an attribute D that has a distinct value for each
record, then Info(D,T) is 0, thus Gain(D,T) is maximal
• To compensate for this Quinlan suggests using the following
ratio instead of Gain:
GainRatio(D,T) = Gain(D,T) / SplitInfo(D,T)
• SplitInfo(D,T) is the information due to the split of T on the
basis of value of categorical attribute D
SplitInfo(D,T) = I(|T1|/|T|, |T2|/|T|, .., |Tm|/|T|)
where {T1, T2, ..., Tm} is the partition of T induced by value of D
39. 39
Computing gain ratio
I(T) = 1
I(Patrons, T) = .47
I(Type, T) = 1
Gain(Patrons, T) = .53
Gain(Type, T) = 0

SplitInfo(Patrons, T) = -(1/6 log 1/6 + 1/3 log 1/3 + 1/2 log 1/2)
                      = 1/6 * 2.6 + 1/3 * 1.6 + 1/2 * 1
                      = 1.47
SplitInfo(Type, T) = -(1/6 log 1/6 + 1/6 log 1/6 + 1/3 log 1/3 + 1/3 log 1/3)
                   = 1/6 * 2.6 + 1/6 * 2.6 + 1/3 * 1.6 + 1/3 * 1.6
                   = 1.93

GainRatio(Patrons, T) = Gain(Patrons, T) / SplitInfo(Patrons, T) = .53 / 1.47 = .36
GainRatio(Type, T) = Gain(Type, T) / SplitInfo(Type, T) = 0 / 1.93 = 0

[Figure: the same 12-example restaurant split by Patrons and by Type as on the "Computing information gain" slide.]
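And the same numbers in Python (a sketch; the info and info_split helpers from the information-gain example are repeated here so the snippet runs on its own):

```python
import math

def info(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def info_split(subsets):
    total = sum(sum(s) for s in subsets)
    return sum(sum(s) / total * info(s) for s in subsets)

def split_info(subsets):
    """I(|T1|/|T|, .., |Tm|/|T|): entropy of the split itself."""
    return info([sum(s) for s in subsets])

def gain_ratio(T, subsets):
    return (info(T) - info_split(subsets)) / split_info(subsets)

patrons = [(0, 2), (4, 0), (2, 4)]
by_type = [(1, 1), (1, 1), (2, 2), (2, 2)]
print(gain_ratio((6, 6), patrons))  # ~0.37 (the slide's rounding gives .36)
print(gain_ratio((6, 6), by_type))  # 0.0
```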
40. 40
Real-valued data
• Select a set of thresholds defining intervals
• Each interval becomes a discrete value of the attribute
• Use some simple heuristics…
– always divide into quartiles
• Use domain knowledge…
– divide age into infant (0-2), toddler (3 - 5), school-aged (5-8)
• Or treat this as another learning problem
– Try a range of ways to discretize the continuous variable and
see which yield “better results” w.r.t. some metric
– E.g., try midpoint between every pair of values
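A minimal sketch of that last idea (my own helper names): try the midpoint between each adjacent pair of distinct sorted values and keep the threshold whose binary split has the highest information gain.

```python
import math
from collections import Counter

def _entropy(counter):
    total = sum(counter.values())
    return -sum(c / total * math.log2(c / total)
                for c in counter.values() if c)

def best_threshold(values, labels):
    """Return (gain, threshold) for the best midpoint split."""
    pairs = sorted(zip(values, labels))
    base, n, best = _entropy(Counter(labels)), len(pairs), None
    for (v1, _), (v2, _) in zip(pairs, pairs[1:]):
        if v1 == v2:
            continue
        t = (v1 + v2) / 2
        left = Counter(l for v, l in pairs if v <= t)
        right = Counter(l for v, l in pairs if v > t)
        k = sum(left.values())
        g = base - k / n * _entropy(left) - (n - k) / n * _entropy(right)
        if best is None or g > best[0]:
            best = (g, t)
    return best

print(best_threshold([1, 2, 8, 9], ["-", "-", "+", "+"]))  # (1.0, 5.0)
```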
41. 41
Noisy data and overfitting
• Many kinds of “noise” can occur in the examples:
– Two examples have the same attribute/value pairs, but different classifications
– Some values of attributes are incorrect because of errors in the data
acquisition process or the preprocessing phase
– The classification is wrong (e.g., + instead of -) because of some error
– Some attributes are irrelevant to the decision-making process, e.g., color of
a die is irrelevant to its outcome
• The last problem, irrelevant attributes, can result in overfitting
the training example data.
– If the hypothesis space has many dimensions because of a large number of
attributes, we may find meaningless regularity in the data that is
irrelevant to the true, important, distinguishing features
– Fix by pruning lower nodes in the decision tree
– For example, if Gain of the best attribute at a node is below a threshold,
stop and make this node a leaf rather than generating children nodes
42. 42
Pruning decision trees
• Pruning of the decision tree is done by replacing a whole
subtree by a leaf node
• The replacement takes place if a decision rule establishes
that the expected error rate in the subtree is greater than in
the single leaf. E.g.,
– Training: one red true example and two blue false examples
– Test: three red falses and one blue true
– Consider replacing this subtree by a single FALSE node.
• After replacement we will have only two errors instead of
four:
[Figure: three trees labeled Training, Test, and Pruned. Training: a Color node whose red branch holds 1 true / 0 false and whose blue branch holds 0 true / 2 false. Test: the same Color node with red = 1 true / 3 false and blue = 1 true / 1 false. Pruned: a single FALSE leaf (2 success, 4 failure).]
43. 43
Converting decision trees to rules
• It is easy to derive a rule set from a decision tree: write a rule
for each path in the decision tree from the root to a leaf
• In that rule the left-hand side is easily built from the label of
the nodes and the labels of the arcs
• The resulting rules set can be simplified:
– Let LHS be the left hand side of a rule
– Let LHS' be obtained from LHS by eliminating some conditions
– We can certainly replace LHS by LHS' in this rule if the subsets of the
training set that satisfy respectively LHS and LHS' are equal
– A rule may be eliminated by using metaconditions such as “if no other
rule applies”
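A minimal sketch of the path-to-rule translation (using the nested-tuple tree encoding from the earlier classify sketch; names are mine):

```python
def tree_to_rules(tree, conditions=()):
    """One rule per root-to-leaf path: the LHS conjoins the
    (attribute, value) arc labels, the RHS is the leaf's class."""
    if not isinstance(tree, tuple):       # leaf: emit a finished rule
        return [(conditions, tree)]
    attribute, branches = tree
    rules = []
    for value, subtree in branches.items():
        rules += tree_to_rules(subtree, conditions + ((attribute, value),))
    return rules

tree = ("Color", {"red": ("Shape", {"round": "+", "square": "-"}),
                  "blue": "+"})
for lhs, rhs in tree_to_rules(tree):
    print(" AND ".join(f"{a}={v}" for a, v in lhs) or "TRUE", "->", rhs)
# Color=red AND Shape=round -> +
# Color=red AND Shape=square -> -
# Color=blue -> +
```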
44. 44
Evaluation methodology
• Standard methodology:
1. Collect a large set of examples (all with correct classifications)
2. Randomly divide collection into two disjoint sets: training and test
3. Apply learning algorithm to training set giving hypothesis H
4. Measure performance of H w.r.t. test set
• Important: keep the training and test sets disjoint!
• To study the efficiency and robustness of an algorithm, repeat
steps 2-4 for different training sets and sizes of training sets
• If you improve your algorithm, start again with step 1 to avoid
evolving the algorithm to work well on just this collection
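The four steps as a minimal Python sketch (learn and predict are placeholders for any learning algorithm, e.g. the id3 and classify sketches above; the function name is mine):

```python
import random

def evaluate(examples, labels, learn, predict, train_frac=0.8, trials=10):
    """Steps 2-4: random disjoint train/test split, train on the
    training set, measure accuracy on the held-out test set."""
    n = len(examples)
    scores = []
    for _ in range(trials):               # repeat for different splits
        idx = list(range(n))
        random.shuffle(idx)
        cut = int(train_frac * n)
        train, test = idx[:cut], idx[cut:]
        h = learn([examples[i] for i in train], [labels[i] for i in train])
        hits = sum(predict(h, examples[i]) == labels[i] for i in test)
        scores.append(hits / len(test))
    return sum(scores) / trials
```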
45. 46
Summary: Decision tree learning
• Inducing decision trees is one of the most widely used
learning methods in practice
• Can out-perform human experts in many problems
• Strengths include
– Fast
– Simple to implement
– Can convert result to a set of easily interpretable rules
– Empirically valid in many commercial products
– Handles noisy data
• Weaknesses include:
– Univariate splits/partitioning using only one attribute at a time, which
limits the types of trees that can be learned
– Large decision trees may be hard to understand
– Requires fixed-length feature vectors
– Non-incremental (i.e., batch method)
46. 47
Computational learning theory
• Probably approximately correct (PAC) learning:
– Sample complexity (# of examples to “guarantee” correctness)
grows with the size of the model space
• Stationarity assumption: Training set and test sets are drawn
from the same distribution
– Lots of recent work on what to do if this assumption is violated, but
you know something about the relationship between the two
distributions
• Theoretical results apply to fairly simple learning models
(e.g., decision list learning)