Frames are data structures that represent conceptual entities and the relationships between them. A frame consists of slots that describe attributes, with values that can be other frames. For example, a "Person" frame may have slots for "name", "gender", etc. Frames allow complex, hierarchical knowledge to be modeled. They can be used to represent things in the real world as well as abstract concepts.
Module 8
Other representation formalisms
Version 2 CSE IIT, Kharagpur
Lesson 20
Frames - I
8.4 Frames
Frames are descriptions of conceptual individuals. Frames can exist for "real" objects such as "The Watergate Hotel", sets of objects such as "Hotels", or more "abstract" objects such as "Cola-Wars" or "Watergate".
A frame system is a collection of objects. Each object contains a number of slots; a slot represents an attribute, and each slot has a value. The value of an attribute can be another object. Each object is like a C struct: the struct has a name and contains a number of named values (which can be pointers).
Frames are essentially defined by their relationships with other frames. Relationships
between frames are represented using slots. If a frame f is in a relationship r to a frame g,
then we put the value g in the r slot of f.
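The struct analogy can be made concrete with a small sketch (illustrative Python, not part of the lecture): a frame is a named object whose slot values may themselves be other frames, much as a C struct field may be a pointer.

```python
class Frame:
    """A frame: a named object whose slots map attribute names to values."""
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)  # slot values may themselves be Frames

    def get(self, slot):
        return self.slots.get(slot)

# A slot value can be another frame, much like a pointer in a C struct.
beth = Frame("Beth", sex="Female")
adam = Frame("Adam", sex="Male", spouse=beth)

adam.get("spouse").name  # -> 'Beth'
```

Here putting the frame `beth` in the `spouse` slot of `adam` is exactly the "value g in the r slot of f" convention described above.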
For example, suppose we are describing a genealogical tree in which Adam and his spouse Beth have three children: Charles, Donna, and Ellen.
The frame describing Adam might look something like:
Adam:
sex: Male
spouse: Beth
child: (Charles Donna Ellen)
where sex, spouse, and child are slots. Note that a single slot may hold several values
(e.g. the children of Adam).
The genealogical tree would then be described by (at least) seven frames, describing the
following individuals: Adam, Beth, Charles, Donna, Ellen, Male, and Female.
A frame can be considered just a convenient way to represent a set of predicates applied
to constant symbols (i.e., ground instances of predicates). For example, the frame above
could be written:
sex(Adam,Male)
spouse(Adam,Beth)
child(Adam,Charles)
child(Adam,Donna)
child(Adam,Ellen)
More generally, the ground predicate r(f,g) is represented, in a frame-based system, by placing the value g in the r slot of the frame f.
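This translation from frame to ground predicates is mechanical; a sketch (illustrative Python, with a frame taken to be a plain dict keyed by slot name):

```python
def frame_to_predicates(name, slots):
    """Flatten a frame into ground predicates of the form slot(frame, value)."""
    preds = []
    for slot, value in slots.items():
        values = value if isinstance(value, (list, tuple)) else [value]
        for v in values:  # a slot may hold several values, e.g. child
            preds.append(f"{slot}({name},{v})")
    return preds

adam = {"sex": "Male", "spouse": "Beth", "child": ["Charles", "Donna", "Ellen"]}
frame_to_predicates("Adam", adam)
# -> ['sex(Adam,Male)', 'spouse(Adam,Beth)', 'child(Adam,Charles)',
#     'child(Adam,Donna)', 'child(Adam,Ellen)']
```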
Frames can also be regarded as an extension of semantic nets; indeed, it is not clear where
the distinction between a semantic net and a frame lies. Semantic nets were initially used
to represent labelled connections between objects. As tasks became more complex, the
representation needed to be more structured, and the more structured the system, the more
beneficial it becomes to use frames. A frame is a collection of attributes or slots and
associated values that describe some real-world entity. Frames on their own are not
particularly helpful, but frame systems are a powerful way of encoding information to
support reasoning. Set theory provides a good basis for understanding frame systems.
Each frame represents:
• a class (set), or
• an instance (an element of a class).
Consider the example below.
Person
isa: Mammal
Cardinality:
Adult-Male
isa: Person
Cardinality:
Rugby-Player
isa: Adult-Male
Cardinality:
Height:
Weight:
Position:
Team:
Team-Colours:
Back
isa: Rugby-Player
Cardinality:
Tries:
Mike-Hall
instance: Back
Height: 6-0
Position: Centre
Team: Cardiff-RFC
Team-Colours: Black/Blue
Rugby-Team
isa: Team
Cardinality:
Team-size: 15
Coach:
Here the frames Person, Adult-Male, Rugby-Player, Back and Rugby-Team are all classes,
and the frames Mike-Hall and Cardiff-RFC are instances.
Note
• The isa relation is in fact the subset relation.
• The instance relation is in fact the element-of relation.
• The isa attribute possesses a transitivity property. This implies: Mike-Hall is a Back, and a Back is a Rugby-Player, who in turn is an Adult-Male and also a Person.
• Both isa and instance have inverses, which are called subclasses and all-instances.
• There are attributes that are associated with the class or set itself, such as cardinality, and on the other hand there are attributes that are possessed by each member of the class or set.
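The transitivity of isa, and inheritance along it, can be sketched as a slot lookup that climbs the instance/isa chain until a value is found (illustrative Python; the frame contents follow the rugby example above, with a hypothetical default of 0 for Tries):

```python
frames = {
    "Person":       {"isa": "Mammal"},
    "Adult-Male":   {"isa": "Person"},
    "Rugby-Player": {"isa": "Adult-Male"},
    "Back":         {"isa": "Rugby-Player", "Tries": 0},  # default value
    "Mike-Hall":    {"instance": "Back", "Height": "6-0", "Position": "Centre"},
}

def lookup(frame, slot):
    """Return a slot value from the frame itself or, failing that,
    by inheritance along its instance/isa chain."""
    while frame in frames:
        f = frames[frame]
        if slot in f:
            return f[slot]
        frame = f.get("instance") or f.get("isa")  # climb the hierarchy
    return None

lookup("Mike-Hall", "Position")  # -> 'Centre' (stored directly)
lookup("Mike-Hall", "Tries")     # -> 0 (default inherited from Back)
```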
DISTINCTION BETWEEN SETS AND INSTANCES
It is important that this distinction is clearly understood.
Cardiff-RFC can be thought of as a set of players or as an instance of a Rugby-Team.
If Cardiff-RFC were a class then
• its instances would be players;
• it could not be a subclass of Rugby-Team, otherwise its elements would be members of Rugby-Team, which we do not want.
Instead we make it a subclass of Rugby-Player, which allows the players to inherit the correct properties. To let Cardiff-RFC itself inherit information about teams, we also make it an instance of Rugby-Team.
But there is a problem here:
• A class is a set, and its elements have properties.
• We wish to use inheritance to bestow values on its members.
• But there are properties that the set or class itself has, such as the manager of a team.
This is why we need to view Cardiff-RFC both as a subclass of the class of players and as
an instance of the class of teams. We seem to have a Catch-22. Solution: metaclasses.
A metaclass is a special class whose elements are themselves classes.
Now consider our rugby teams in these terms.
The basic metaclass is Class, and this allows us to
• define classes which are instances of other classes, and (thus)
• inherit properties from this class.
Inheritance of default values occurs when one element or class is an instance of a class.
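Under assumed slot names, the metaclass arrangement can be sketched like this: Cardiff-RFC is an instance of Rugby-Team (so it inherits team-level defaults such as Team-size) and at the same time a subclass of Rugby-Player (so its members inherit player properties). This is an illustrative sketch, not the lecture's own notation:

```python
frames = {
    "Class":        {},                                 # the basic metaclass
    "Rugby-Team":   {"instance": "Class", "Team-size": 15},
    "Rugby-Player": {"isa": "Adult-Male"},
    "Cardiff-RFC":  {"instance": "Rugby-Team",          # a team: element of Rugby-Team
                     "isa": "Rugby-Player"},            # and a set of players
}

def inherited(frame, slot):
    """Climb the instance/isa chain until the slot is found."""
    while frame in frames:
        f = frames[frame]
        if slot in f:
            return f[slot]
        frame = f.get("instance") or f.get("isa")
    return None

inherited("Cardiff-RFC", "Team-size")  # -> 15, inherited as an instance of Rugby-Team
```

Making Rugby-Team itself an instance of Class is what licenses treating a class (Cardiff-RFC) as an element of another class.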