The document discusses various data mining methods. It describes data mining as seeking interesting or valuable patterns within large databases. Common data mining methods covered include clustering, regression, rule extraction, and data visualization. Machine learning algorithms often used for health data include logistic regression, support vector machines, decision trees, and artificial neural networks. The document also covers newer techniques such as graph-based data mining, topological data mining, and visual analytics for exploring complex data.
2. I. Data Mining
DM is defined as “the process of seeking interesting or valuable information
(patterns) within large databases”
At first glance, this definition seems more like a new name for statistics
However, DM is actually performed on sets of data that are far larger than
statistical methods can accurately analyze
3. Data Mining methods
DM involves methods at the intersection of artificial intelligence,
machine learning, statistics and database systems
Sometimes, these methods support dimensionality reduction, by mapping the data
onto a set of maximally informative dimensions
Sometimes, they represent definite mathematical models
Often, a combination of methods is used to solve a problem
4. Data Mining methods
Essentially, patterns are often defined relative to the overall model of the data set from
which they are derived
There are many tools involved in data mining that help find these structures
Some of the most important tools include
Clustering - the act of partitioning data sets of many random items into subsets of
smaller size that show commonality between them - by looking at such clusters, analysts
are able to extract statistical models from the data fields
Regression - the method of fitting a curve through a set of points using some goodness-
of-fit criterion - by examining the goodness-of-fit parameters, analysts can
locate and describe patterns (clustering and regression are sketched in code below)
Rule extraction - the method of using relationships between variables to establish some
sort of rule
Data visualization - a family of techniques that can help us explain (understand) trends
and complexity in data much more easily
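To make two of these tools concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are available; the data is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# Clustering: partition many random items into smaller subsets with commonality
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                    for c in ([0, 0], [3, 3], [0, 3])])  # three loose groups
labels = KMeans(n_clusters=3, n_init=10).fit(points).labels_
print("cluster sizes:", np.bincount(labels))

# Regression: fit a curve through points using a goodness-of-fit criterion
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.shape)
slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line
residuals = y - (slope * x + intercept)
print(f"fit: y = {slope:.2f}x + {intercept:.2f}, "
      f"RMSE = {np.sqrt(np.mean(residuals**2)):.2f}")
```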
5. Data Mining methods
most commonly used in health science
Logistic Regression (LR)
Support Vector Machine (SVM)
Apriori and other association rule mining (AR)
Decision Tree algorithms (DT)
Classification algorithms: K-means, SOM (Self-Organizing Map), Naive Bayes
Artificial Neural Networks (ANN)
6. Yet a combination of techniques can elicit a particular mining function
Techniques and their utility:
Apriori & FP Growth - association rule mining for finding frequent item sets (e.g. diseases) in medical databases
ANN & genetic algorithms - extracting patterns, detecting trends, classification
Decision tree algorithms (ID3, C4, C5, CART) - decision support, classification
Combined use of K-means, SOM & Naive Bayes - accurate classification
Combination of SVM, ANN & ID3 - classification
7. Logistic Regression (LR)
A popular method for classifying individuals, given the values of a set of
explanatory variables
Will a subject develop diabetes?
Will a subject respond to a treatment?
It estimates the probability that an individual is in a particular group
LR does not make any assumptions of normality, linearity and homogeneity of
variance for the independent variables
8. Fig. 1. Logistic regression curve
The value produced by logistic regression is a probability between 0.0 and 1.0
If the probability for group membership in the modeled category is above some cut point
(the default is 0.50) - the subject is predicted to be a member of the modeled group
If the probability is below the cut point - the subject is predicted to be a member of the
other group
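A minimal sketch of this cut-point logic (scikit-learn assumed available; the explanatory variable and group labels below are synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 1))                 # one hypothetical explanatory variable
y = (x[:, 0] + rng.normal(scale=0.8, size=200) > 0).astype(int)  # synthetic groups

model = LogisticRegression().fit(x, y)
prob = model.predict_proba(x)[:, 1]           # probability of the modeled group

cut_point = 0.50                              # default cut point from the slide
predicted = (prob >= cut_point).astype(int)   # above: modeled group; below: other
print("predicted members of the modeled group:", int(predicted.sum()))
```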
9. Testing the LR model performances (a fit to a series of data)
Testing the models depending on the probability p
ROC curve
C statistics
GINI coefficient
KS test
Testing the models depending on the cut-off values
Sensitivity (true positive rate)
Specificity (true negative rate)
Accuracy
Type I error (misclassification of a diabetic subject)
Type II error (misclassification of a healthy subject)
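The cut-off-dependent measures above are simple to compute from a confusion matrix; a sketch with hypothetical labels and predictions:

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # hypothetical true classes
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])  # predictions at a chosen cut-off

tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
accuracy = (tp + tn) / len(y_true)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"accuracy={accuracy:.2f}")
```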
10. Linear vs Logistic regression model
In linear regression - the outcome (dependent variable) is continuous - it can
have any of an infinite number of possible values.
In logistic regression - the outcome (dependent variable) has only a limited
number of possible values - it is used when the response variable is categorical
in nature
The logistic model is unavoidable if it fits the data much better than the linear
model
In many situations - the linear model fits just as well, or almost as well as the
logistic model
In fact, in many situations, the linear and logistic model give results that are
practically indistinguishable
11. Fig. 2. Linear vs Logistic regression model
The linear model assumes that the probability p is a linear function of the regressors
The logistic model assumes that the log of the odds p/(1-p) is a linear function of the regressors
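In symbols (a standard formulation consistent with the slide; p is the probability, x_1 ... x_k the regressors, and the betas the coefficients):

```latex
% Linear probability model: p itself is linear in the regressors
p = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k

% Logistic model: the log of the odds of p is linear in the regressors
\log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k
\quad\Longleftrightarrow\quad
p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \dots + \beta_k x_k)}}
```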
12. Support Vector Machine
Supervised ML method
For classification and regression challenges (mostly for classification)
The principal idea the algorithm relies on:
Each data item is plotted as a point in n-dimensional space (n = the number of
features the variable possesses), with the value of each feature being the value of a
particular coordinate
Then, classification is performed - by finding the hyper-plane that differentiates the
two classes very well
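A minimal sketch of that idea (scikit-learn assumed available; two synthetic, well-separated classes in 2-dimensional space):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
class_a = rng.normal(loc=[-2, -2], size=(40, 2))   # each item is a point in 2-D space
class_b = rng.normal(loc=[2, 2], size=(40, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 40 + [1] * 40)

# A linear SVM finds the hyper-plane that differentiates the two classes
clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print(f"hyper-plane: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")
print("predicted class for (3, 3):", clf.predict([[3.0, 3.0]])[0])
```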
13. Supervised ML vs Unsupervised ML
Supervised ML
The major part of practical ML uses supervised learning
When there are input variables (x) and an output variable (Y) - an algorithm is
used to learn the mapping function from the input to the output: Y = f(X)
The goal is to approximate the mapping function so well that when you have
new input data (x) - you can predict the output variables (Y) for that data
It is called supervised learning because the process of an algorithm learning
from the training dataset can be thought of as a teacher supervising the
learning process.
We know the correct answers, the algorithm iteratively makes predictions on the
training data and is corrected by the teacher
Learning stops when the algorithm achieves an acceptable level of
performance
Supervised learning problems can be grouped into regression and classification
problems
Classification - when the output variable is a category, such as “disease” and
“no disease”
Regression - when the output variable is a real value, such as “weight”
Usual methods of Supervised ML are:
Linear regression - for regression problems
Random forest - for classification and regression problems
Support vector machines - for classification problems
Unsupervised ML
When there are only input data (X) and no corresponding output variables
The goal is to model the underlying structure or
distribution in the data - in order to learn more about the
data
It is called unsupervised learning because unlike
supervised learning - there is no known answer and
there is no teacher
Algorithms are left to their own devices to discover and
present the interesting structure in the data
Unsupervised learning problems can be grouped into
clustering and association problems
Clustering - when the problem is to discover the inherent
groupings in the data, such as grouping by purchasing
behavior
Association - when the problem is to discover rules that
describe large portions of your data
Usual methods of Unsupervised ML are:
k-means - for clustering problems
Apriori algorithm - for association rule learning problems
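A compact sketch contrasting the two paradigms (synthetic data; NumPy and scikit-learn assumed available):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(100, 1))

# Supervised: the output Y is known, so we learn the mapping Y = f(X)
Y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=100)
f = LinearRegression().fit(X, Y)
print("prediction for new input x=5:", f.predict([[5.0]])[0])

# Unsupervised: only X is given; we model its structure (here, groupings)
groups = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("discovered group sizes:", np.bincount(groups))
```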
14. Apriori algorithm (AA)
/ other Association Rule Mining (ARM)
ARM - a technique to uncover how items are associated with each other
AA - mining association rules between frequent sets of items in large databases (Fig. 3.)
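A toy illustration of the Apriori pruning idea (not the full algorithm): only itemsets whose sub-items are already frequent can themselves be frequent. The transactions below are hypothetical sets of diagnoses per patient record:

```python
from itertools import combinations

transactions = [
    {"diabetes", "hypertension"},
    {"diabetes", "hypertension", "obesity"},
    {"hypertension", "obesity"},
    {"diabetes", "obesity"},
    {"diabetes", "hypertension"},
]
min_support = 0.4  # "frequent" = appears in at least 40% of the records

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

items = {i for t in transactions for i in t}
frequent_1 = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
# Apriori step: build candidate pairs only from already-frequent single items
frequent_2 = [a | b for a, b in combinations(frequent_1, 2)
              if support(a | b) >= min_support]
print("frequent pairs:", [set(s) for s in frequent_2])
```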
15. Decision Tree (DT) algorithms
A supervised learning algorithm
For classification and regression problems
The DT algorithm tries to solve the problem by using tree representation (Fig. 4.)
A flow-chart-like structure:
Each internal node denotes a test on an attribute
Each branch represents the outcome of a test
Each leaf (a terminal node) holds a class label
The topmost node in a tree is the root node
There are many specific decision-tree algorithms
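A minimal sketch of such a tree (scikit-learn assumed available; the records and attribute names are hypothetical):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records: [age, blood_pressure]; label 1 = disease present
X = [[25, 120], [60, 160], [40, 150], [35, 110], [70, 170], [30, 115]]
y = [0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Each internal node tests an attribute; each leaf holds a class label
print(export_text(tree, feature_names=["age", "blood_pressure"]))
print("prediction for [50, 155]:", tree.predict([[50, 155]])[0])
```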
16. Fig. 4. The DT algorithm simulates the branching logic of a tree
18. Artificial Neural Networks (ANN)
A method of artificial intelligence inspired by and structured according to the human
brain
It is an ML & DM method - a method that learns from examples
Uses retrospective data
It can be used for prediction, classification and pattern recognition (e.g. association
problems)
Prediction - a numeric value is predicted as the output (e.g. blood pressure, age etc.)
and MSE or RMSE error is used as the evaluation measure of model performance
Classification - cases are assigned into two or more categories of the output (e.g.
presence/absence of a disease, treatment outcome, etc.) and classification rate is
used as the evaluation measure of model performance
ANNs have shown success in modelling real-world situations, so they can be used both
for research purposes and in practice as a decision support or simulation tool
19. Biological vs Artificial Neural Network
(Fig. 6.)
Biological neural network - consists of mutually connected biological neurons
A biological neuron - a cell that receives information from other neurons through dendrites,
processes it and sends impulses through the axon and synapses to other neurons in the network
Learning - is performed by changing the weights of synaptic connections - millions of
neurons can process information in parallel
Artificial neural network
An artificial neuron - a processing unit (variable) that receives weighted input from other
variables, transforms the input according to a formula and sends the output to other variables
Learning - is performed by changing the weight values of the variables (the weights wji are
the factors by which the inputs are multiplied)
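A minimal sketch of a single artificial neuron as just described (NumPy assumed available; the inputs and weights are hypothetical):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted input: each input is multiplied by its weight w_ji, then summed
    z = np.dot(weights, inputs) + bias
    # Transform according to a formula (here a sigmoid transfer function)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs from three other variables
w = np.array([0.8, 0.1, -0.4])   # learning = adjusting these weight values
print("neuron output:", neuron(x, w, bias=0.2))
```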
21. Fig. 7. - Generalization ability of the ANN model needs to be tested
It does not rely on results obtained from a single sample - many learning
iterations on the training set take place within the middle (hidden) layer,
which sits between the input and output layers
22. Criteria for distinguishing ANN algorithms
Number of layers
Type of learning
• Supervised - real output values are known from the past and provided in the dataset
• Unsupervised - real output values are not known, and not provided in the dataset, these networks
are used to cluster data in groups by characteristics
Type of connections among neurons
Connection among input and output data
Input and transfer functions
Time characteristics
Learning time
etc.
23. II. Modern computer-based methods
Graph-based DM
Data Visualization and Visual Analytics
Topological DM
Similar techniques that can be used to organize highly complex and
heterogeneous data
Data can be very powerful, if you can actually understand what it's telling
you
It's not easy to get clear takeaways by looking at a slew of numbers and
stats - you need the data presented in a logical, easy-to-understand way -
that's where these techniques come in
24. Graph-based DM
In order to apply graph-based data mining techniques, such as classification and
clustering - it is necessary to define proximity measures between data represented
in the graph form (Fig. 8. and 9.)
There are several within-graph proximity measures
Hyperlink-Induced Topic Search (HITS)
The Neumann Kernel (NK)
Shared Nearest Neighbor (SNN)
25. Fig. 8. - Defining proximity measures makes structure visible
Scatter plots showing the similarity from -1 to 1
26. Fig. 9. - Citation graph by using NK-proximity measures
- n1…n8 vertices (articles)
- edges indicate a citation
A citation matrix C can be formed - if an edge between two
vertices exists, then the matrix cell = 1, else 0
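A minimal sketch of that construction (hypothetical edges among four articles; the co-citation product C^T C computed here is one simple within-graph proximity derived from C, not the full Neumann Kernel):

```python
import numpy as np

edges = [(0, 2), (1, 2), (1, 3), (3, 2)]   # hypothetical (citing, cited) pairs
n = 4

C = np.zeros((n, n), dtype=int)
for citing, cited in edges:
    C[citing, cited] = 1   # cell = 1 if an edge exists, else 0

# Co-citation counts: how often two articles are cited by the same paper
cocitation = C.T @ C
print(cocitation)
```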
27. Fig. 10. - How to generalize mathematically
the pattern of a dalmatian dog?
28. Data Visualization
The human brain processes visual information better than it processes text -
so by using charts, graphs and design elements - data visualization can
help us to explain (understand) trends and stats much more easily (Fig. 10.)
Fig. 10. - The structure of population by age - a commonly used data
visualization procedure in the public health domain
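A minimal sketch of such a chart, usually drawn as a population pyramid (matplotlib assumed available; the counts are hypothetical):

```python
import matplotlib.pyplot as plt

ages = ["0-14", "15-29", "30-44", "45-59", "60-74", "75+"]
male = [420, 510, 480, 390, 280, 120]     # hypothetical counts (thousands)
female = [400, 500, 490, 410, 320, 180]

fig, ax = plt.subplots()
ax.barh(ages, [-m for m in male], label="male")   # males drawn to the left
ax.barh(ages, female, label="female")             # females to the right
ax.set_xlabel("population (thousands)")
ax.set_title("Structure of population by age")
ax.legend()
plt.show()
```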
29. Data visualization
The samples of data being mined are so vast that scatter plots and
histograms will often fall short of representing any information of realistic value
(Fig. 11.)
For that very reason, the analysts concerned with data mining are constantly
looking for better ways to graphically represent data
No matter what tools analysts have at their fingertips - the patterns and
models being mined will only be as good in quality as the data they are
derived from
30. Fig. 11. - Making a graph simpler and easier to understand
31. Application domains of
Data Visualization and Visual Analytics techniques
Visualization of large, complex, multivariate, biological networks
Visual text analytics to classify relevant related work on biological entities
in publication databases (e.g. PubMed)
Visualization for exploring heterogeneous data
and data from multiple data sources
Visual analytics as support for understanding uncertainty
and data quality issues
32. Fig. 12. - Complex data visual analytics computer-based tool
(the personal archive)
33. Fig. 13. - First visualization of the human
Protein-Protein-Interaction structure
34. Topological DM
Applying topological techniques to DM and KDD is a hot and promising future
research area.
Topology has its roots in theoretical mathematics, but within the last decade
computational topology has rapidly gained interest among computer scientists.
It is a study of abstract shapes and spaces and mappings between them. It
originated from the study of geometry and set theory.
Topological methods can be applied to data represented by point clouds, that
is, finite subsets of the n-dimensional Euclidean space.
The input is a sample of some unknown space which one wishes
to reconstruct and understand.
Distinguishing between the ambient (embedding) dimension n, and the intrinsic
dimension of the data is of primary interest towards understanding the intrinsic
structure of data.
35. Topological DM
Geometrical and topological methods are tools allowing us to analyse highly complex
data
Modern data science uses topological methods to find the structural features of data
sets before further supervised or unsupervised analysis
Mathematical formalism, which has been developed for incorporating geometric and
topological techniques, deals with point cloud data sets, i.e. finite sets of points
The point clouds are finite samples taken from a geometric object
Tools from the various branches of geometry and topology are then used to study the
point cloud data sets
Topology provides a formal language for qualitative mathematics, whereas geometry is
mainly quantitative.
Topology studies the relationships of proximity or nearness, whereas geometry can be
regarded as the study of distance functions
These methods create a summary or compressed representation of all of the data
features to help to rapidly uncover particular patterns and relationships in data.
The idea of constructing summaries of entire domains of attributes involves
understanding the relationship between topological and geometric objects
constructed from data using various features
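A minimal sketch of this pipeline at a single scale (NumPy only; the point cloud is a synthetic sample from two circles, and the connectivity count is the simplest such topological summary):

```python
import numpy as np

# A point cloud: a finite sample from two hypothetical circles in the plane
angles = np.linspace(0, 2 * np.pi, 30, endpoint=False)
circle = np.c_[np.cos(angles), np.sin(angles)]
cloud = np.vstack([circle, circle + [5.0, 0.0]])   # second circle shifted right

# Connect points closer than a scale threshold (a Vietoris-Rips-style 1-skeleton)
dists = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
adjacency = dists < 1.0

# Count connected components: a 0-dimensional summary of the cloud's shape
parent = list(range(len(cloud)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i, j in zip(*np.nonzero(adjacency)):
    parent[find(i)] = find(j)
print("connected components:", len({find(i) for i in range(len(cloud))}))
```

Real topological data mining tracks such summaries across many scales rather than at one fixed threshold.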
36. Topological DM
Fig. 14. - Forming the computational structure (below) from the shape one wishes to reconstruct and understand (above)