IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 12, Issue 5 (Jul. - Aug. 2013), PP 45-48
www.iosrjournals.org
K Means Clustering Algorithm for Partitioning Data Sets
Evaluated From Horizontal Aggregations
R. Rakesh Kumar 1, A. Bhanu Prasad 2
1 (M.Tech, Software Engineering, Vardhaman College of Engineering / JNTU-Hyderabad, India)
2 (Associate Professor, Department of IT, Vardhaman College of Engineering / JNTU-Hyderabad, India)
Abstract: Data mining refers to the process of analyzing data from different perspectives and summarizing
it into useful information, which is then used by different users for analyzing the data as well as for
preparing data sets. A data set is a collection of data presented in tabular form. Preparing a data set
involves complex SQL queries, table joins and aggregate functions. A traditional RDBMS manages tables in a
vertical format, and its aggregate functions return one number per aggregated group; this single-value output
is not suitable for preparing a data set. This paper focuses mainly on the k-means clustering algorithm, which
is used to partition data sets after horizontal aggregation, and gives a brief description of the horizontal
aggregation methods, which return a set of numbers per group instead of a single number. Three methods, SPJ,
CASE and PIVOT, are used to evaluate horizontal aggregations. Horizontal aggregation results in large data
sets, and partitioning them into homogeneous clusters is an important task in the system; this is performed
with the k-means clustering algorithm.
Keywords: Aggregations, SQL, pivoting and K-means clustering.
I. Introduction
Data mining is the process of extracting knowledge from large volumes of data. It has attracted a great
deal of attention in the information industry and in society as a whole in recent years, due to the wide
availability of huge amounts of data and the imminent need for turning such data into useful information and
knowledge. Data can be stored in different kinds of databases and information repositories. One such data
repository architecture that has emerged is the data warehouse. Data warehouse technology includes OLAP
(Online Analytical Processing), that is, analysis techniques with functionalities such as summarization,
consolidation and aggregation. Data aggregation is a process in which information is gathered and expressed in
a summary form, and it is used for purposes such as statistical analysis. Aggregation is normally associated
with data reduction in relational databases. The aggregate functions available in SQL are MIN, MAX, AVG,
SUM and COUNT. All these functions return a single number as output; this is called vertical aggregation.
The output of vertical aggregation is helpful for calculations, but most data mining operations require a data
set in a horizontal layout, with many tuples and one variable or dimension per column.
This paper describes three fundamental methods used to evaluate horizontal aggregations: CASE, SPJ (Select-Project-Join) and PIVOT.
Case method:
This method uses the CASE programming construct available in SQL. The CASE statement returns a value,
selected from a set of values, based on Boolean expressions.
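To make the CASE method concrete, the sketch below builds a tiny horizontal aggregation through Python's sqlite3 module. The sales table, its columns and the department values are illustrative assumptions, not taken from the paper.

```python
import sqlite3

# Hypothetical fact table: one row per (store, department, amount).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales(store TEXT, dept TEXT, amount REAL);
INSERT INTO sales VALUES
  ('S1', 'toys', 10), ('S1', 'food', 20),
  ('S2', 'toys', 5),  ('S2', 'food', 7), ('S2', 'food', 3);
""")

# CASE method: one output column per transposing value (here each dept),
# each computed as SUM over a CASE expression, grouped by store.
rows = conn.execute("""
SELECT store,
       SUM(CASE WHEN dept = 'toys' THEN amount ELSE 0 END) AS toys_amount,
       SUM(CASE WHEN dept = 'food' THEN amount ELSE 0 END) AS food_amount
FROM sales
GROUP BY store;
""").fetchall()
print(rows)  # [('S1', 10.0, 20.0), ('S2', 5.0, 10.0)] (order may vary)
```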
SPJ method:
The SPJ method works with relational operators only. The basic idea of the SPJ method is to
build one table with a vertical aggregation for each resulting column, and then join all those tables to
produce the final horizontal table.
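On the same hypothetical table, a sketch of the SPJ idea: one vertically aggregated table per department value, then a join back onto the grouping key. Again, table and column names are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales(store TEXT, dept TEXT, amount REAL);
INSERT INTO sales VALUES
  ('S1', 'toys', 10), ('S1', 'food', 20),
  ('S2', 'toys', 5),  ('S2', 'food', 7), ('S2', 'food', 3);
""")

# SPJ method: one vertical aggregation per transposing value,
# joined back onto the distinct grouping keys (store).
rows = conn.execute("""
SELECT k.store, t.total AS toys_amount, f.total AS food_amount
FROM (SELECT DISTINCT store FROM sales) AS k
LEFT JOIN (SELECT store, SUM(amount) AS total
           FROM sales WHERE dept = 'toys' GROUP BY store) AS t ON t.store = k.store
LEFT JOIN (SELECT store, SUM(amount) AS total
           FROM sales WHERE dept = 'food' GROUP BY store) AS f ON f.store = k.store;
""").fetchall()
print(rows)  # [('S1', 10.0, 20.0), ('S2', 5.0, 10.0)] (order may vary)
```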
Pivot method:
PIVOT is a built-in operator offered by some DBMSs (Database Management Systems) that
transforms rows into columns. The method internally needs to determine how many columns are needed to store
the transposed table, and it can be combined with the GROUP BY clause.
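SQLite, used in the sketches above, has no PIVOT operator, so the snippet below uses pandas pivot_table as a stand-in for the row-to-column transposition that a DBMS PIVOT clause (e.g. SQL Server's) performs; the data frame mirrors the same hypothetical fact table.

```python
import pandas as pd

# Same hypothetical fact table as before, as a data frame.
sales = pd.DataFrame({
    "store":  ["S1", "S1", "S2", "S2", "S2"],
    "dept":   ["toys", "food", "toys", "food", "food"],
    "amount": [10, 20, 5, 7, 3],
})

# Row-to-column transposition analogous to a DBMS PIVOT: one output column
# per distinct dept value, aggregated with SUM and grouped by store.
wide = sales.pivot_table(index="store", columns="dept",
                         values="amount", aggfunc="sum", fill_value=0)
print(wide)  # columns: food, toys; rows: S1 -> (20, 10), S2 -> (10, 5)
```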
Clustering:
Clustering is the process of grouping a set of objects into clusters in such a way that objects in the same
cluster are more similar to each other than to those in other clusters. It is a common technique for statistical
data analysis and is used in many fields such as machine learning, pattern recognition, image analysis, information
retrieval and bioinformatics. Clustering is a main task of exploratory data mining. Clustering can be achieved
by various algorithms, which differ significantly in their notion of what constitutes a cluster and in how to
find clusters efficiently. The appropriate clustering algorithm and parameter settings (including values such as the
distance function to use, a density threshold or the number of expected clusters) depend on the individual data
set and on the intended use of the results.
Horizontal aggregations provide several unique features and advantages:
 They represent a template for generating SQL code from a data mining tool. This automates writing
SQL queries, optimizing them, and testing them for correctness, and thus reduces manual work in the data
preparation phase of a data mining project.
 Automatically generated SQL code is more efficient than SQL code written by an end user. As a
result, data sets can be created in less time.
 The data set can be created entirely inside the DBMS. In modern database environments, it is common to
export denormalized data sets to be further cleaned and transformed outside the DBMS in external tools (e.g.,
statistical packages). Unfortunately, exporting large tables outside a DBMS is very slow, creates
inconsistent copies of the same data and affects database security.
Therefore, horizontal aggregations provide a more efficient, better integrated and more secure solution than
external data mining tools. They require only a small syntax extension to the aggregate functions called in
a SELECT statement. Alternatively, they can be used to generate SQL code from a data mining tool to build
data sets for data mining analysis.
II. Proposed Methodology
Clustering is a method of partitioning a set of objects into clusters such that objects in the same cluster
are more similar to each other than objects in different clusters, according to some defined criteria. Data mining
applications frequently involve categorical data. The biggest advantage of these clustering algorithms is that
they are scalable to very large data sets.
We need different attributes from multiple fact tables in order to construct a new data set from the
range of discrete points of known data sets. In many applications one often has a number of data values,
obtained by experimentation, which are stored in a limited number of databases. It is often required to extract
the particular attributes that are useful from the different fact tables and to perform aggregation on them.
2.1. K-means algorithm
K-means is one of the simplest unsupervised learning algorithms used to solve the well-known
clustering problem. The procedure employs a simple and easy way to classify a given data set through a certain
number of clusters fixed a priori; we assume K clusters. The main idea of the algorithm is to define k
centroids, one for each cluster. These centroids should be placed carefully, because different locations cause
different results, so the better choice is to place them as far away from each other as possible. The next step
is to take each point belonging to the data set and associate it with the nearest centroid. When no
point is pending, the first step is completed and an early grouping is done. At this point we need to recalculate
k new centroids of the clusters resulting from the previous step. Once we have these k new centroids, a new
binding is performed between the same data set points and the nearest new centroids, so a loop has been
generated. As a result of this loop, the k centroids change their location step by step until no
more changes occur; in other words, the centroids do not move any more.
Finally, this algorithm aims at minimizing an objective function, in this case a squared-error function, given
by the following equation:

J = \sum_{j=1}^{k} \sum_{i=1}^{n} \| x_i^{(j)} - c_j \|^2

where \| x_i^{(j)} - c_j \|^2 is a chosen distance measure between a data point x_i^{(j)} and the cluster centre
c_j, and J is an indicator of the distance of the n data points from their respective cluster centres.
The k-means algorithm, a partition-based clustering technique, takes the output of horizontal aggregation as its
input. The PIVOT operator is used to compute the aggregation of particular data values from multiple distinct
fact tables, and PIVOT evaluation is optimized for a large number of fact tables. Establishing database
connectivity and choosing the different tables (files with the .mdb extension) is the first step in this system.
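One possible way to perform that first step from Python is through pyodbc with the Microsoft Access ODBC driver (Windows only); the file path and fact table name below are purely illustrative assumptions.

```python
import pyodbc  # requires the Microsoft Access ODBC driver to be installed

# Hypothetical .mdb database and fact table; adjust the path and names.
conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\sales_facts.mdb;"
)

# Fetch the attributes needed for horizontal aggregation from one fact table.
cursor = conn.cursor()
cursor.execute("SELECT store, dept, amount FROM sales_fact")
rows = cursor.fetchall()
print(len(rows), "rows fetched")
```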
The k-means algorithm consists of the following four steps:
1. Place K points into the space represented by the objects that are being clustered.
These points represent the initial group centroids.
2. Assign each data object to the group that has the closest centroid.
3. When all objects have been assigned, recalculate the positions of the K centroids.
4. Repeat Steps 2 and 3 until the centroids no longer move. This produces a separation of the objects into
clusters from which the metric to be minimized can be calculated.
Fig. 1: Flow chart of the k-means algorithm
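A minimal sketch of these four steps, assuming the data set has already been materialized as a list of numeric tuples (for example, the rows of a horizontally aggregated table); the random seeding of the initial centroids is an assumption, since the paper does not fix a seeding rule.

```python
import math
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal k-means over a list of equal-length numeric tuples."""
    rng = random.Random(seed)
    # Step 1: place K points into the space as the initial centroids.
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(max_iter):
        # Step 2: assign each object to the group with the closest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        # Step 3: recalculate the positions of the K centroids.
        new_centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        # Step 4: repeat Steps 2 and 3 until the centroids no longer move.
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# The seven two-dimensional individuals of the worked example below.
data = [(1.0, 1.0), (1.5, 2.0), (3.0, 4.0), (5.0, 7.0),
        (3.5, 5.0), (4.5, 5.0), (3.5, 4.5)]
print(kmeans(data, k=2))
```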
Step-by-step example of the k-means algorithm:
The k-means algorithm is applied to the large data sets that result from horizontal
aggregation in order to partition them into different clusters. An example with a step-by-step procedure applied
to a small data set is given below.
As a simple illustration of the k-means algorithm, consider the following data set, consisting of the scores of
two variables for each of seven individuals:
Subject A B
1 1.0 1.0
2 1.5 2.0
3 3.0 4.0
4 5.0 7.0
5 3.5 5.0
6 4.5 5.0
7 3.5 4.5
The data set given in the table above is to be grouped into two clusters. As a first step in finding a
sensible initial partition, let the A and B values of the two individuals furthest apart (using the Euclidean
distance measure) define the initial cluster centroids:
          Individual   Mean Vector (centroid)
Group 1   1            (1.0, 1.0)
Group 2   4            (5.0, 7.0)
The remaining individuals are now examined in sequence and allocated to the cluster to which they are closest,
as calculated in terms of Euclidean distance to the cluster mean. The mean vector is recalculated each time a
new member is added. This gives the following series of steps (a short sketch reproducing this incremental
update follows the table):
Step   Cluster 1: individuals   Cluster 1: mean vector (centroid)   Cluster 2: individuals   Cluster 2: mean vector (centroid)
1      1                        (1.0, 1.0)                          4                        (5.0, 7.0)
2      1, 2                     (1.2, 1.5)                          4                        (5.0, 7.0)
3      1, 2, 3                  (1.8, 2.3)                          4                        (5.0, 7.0)
4      1, 2, 3                  (1.8, 2.3)                          4, 5                     (4.2, 6.0)
5      1, 2, 3                  (1.8, 2.3)                          4, 5, 6                  (4.3, 5.7)
6      1, 2, 3                  (1.8, 2.3)                          4, 5, 6, 7               (4.1, 5.4)
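The series of steps above can be reproduced with a short sketch of this incremental (running-mean) assignment, using the same seven individuals and the two seed individuals 1 and 4:

```python
import math

data = {1: (1.0, 1.0), 2: (1.5, 2.0), 3: (3.0, 4.0), 4: (5.0, 7.0),
        5: (3.5, 5.0), 6: (4.5, 5.0), 7: (3.5, 4.5)}

clusters = {1: [1], 2: [4]}       # Group 1 seeded with individual 1, Group 2 with 4
means = {1: data[1], 2: data[4]}  # initial centroids

def mean_of(members):
    xs, ys = zip(*(data[m] for m in members))
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Allocate the remaining individuals in sequence; recompute the mean of the
# receiving cluster after every addition, as in the table above.
for ind in (2, 3, 5, 6, 7):
    nearest = min(means, key=lambda g: math.dist(data[ind], means[g]))
    clusters[nearest].append(ind)
    means[nearest] = mean_of(clusters[nearest])
    print(ind, clusters, means)
# Final means: cluster 1 ~= (1.83, 2.33), cluster 2 ~= (4.13, 5.38),
# shown rounded to one decimal place in the table.
```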
The initial partition has now changed, and the two clusters at this stage have the following
characteristics:
            Individuals   Mean Vector (centroid)
Cluster 1   1, 2, 3       (1.8, 2.3)
Cluster 2   4, 5, 6, 7    (4.1, 5.4)
However, we cannot yet be sure that each individual has been assigned to the right cluster. To check this, we
compare each individual's distance to its own cluster mean with its distance to the mean of the opposite
cluster:
Individual   Distance to mean (centroid) of Cluster 1   Distance to mean (centroid) of Cluster 2
1            1.5                                         5.4
2            0.4                                         4.3
3            2.1                                         1.8
4            5.7                                         1.8
5            3.2                                         0.7
6            3.8                                         0.6
7            2.8                                         1.1
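These distances can be checked with a short calculation; the sketch below uses the rounded centroids (1.8, 2.3) and (4.1, 5.4) from the previous table, so the printed values agree with the table to one decimal place.

```python
import math

individuals = {1: (1.0, 1.0), 2: (1.5, 2.0), 3: (3.0, 4.0), 4: (5.0, 7.0),
               5: (3.5, 5.0), 6: (4.5, 5.0), 7: (3.5, 4.5)}
c1, c2 = (1.8, 2.3), (4.1, 5.4)  # rounded centroids of Cluster 1 and Cluster 2

for i, p in individuals.items():
    d1, d2 = math.dist(p, c1), math.dist(p, c2)
    closer = "Cluster 1" if d1 < d2 else "Cluster 2"
    print(f"{i}: {d1:.1f}  {d2:.1f}  -> closer to {closer}")
# Only individual 3 (2.1 vs 1.8) is closer to the opposite cluster's centroid.
```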
Here, only individual 3 is nearer to the mean of the opposite cluster (Cluster 2) than to that of its own
(Cluster 1). In other words, each individual's distance to its own cluster mean should be smaller than the
distance to the other cluster's mean, which is not the case for individual 3. Thus, individual 3 is relocated
to Cluster 2, which results in the new partition:
            Individuals      Mean Vector (centroid)
Cluster 1   1, 2             (1.3, 1.5)
Cluster 2   3, 4, 5, 6, 7    (3.9, 5.1)
The iterative relocation would now continue from this new partition until no more relocations occur.
However, in the example given above, each individual is now nearer to its own cluster mean than to that of the
other cluster, so the iteration stops, and the latest partition is chosen as the final cluster solution. In this
way the k-means algorithm is performed on the data sets that are evaluated from horizontal aggregations.
III. Conclusion
This paper extended the horizontal aggregation methods with the k-means clustering algorithm.
Conventional horizontal aggregation takes data from a single fact table as input, whereas in this paper the input
to the algorithm is data from multiple fact tables. In order to partition the data set into different clusters, we
used Euclidean distance computation, pivoting a table so that there is one dimension value per row. PIVOT is a
data manipulation operator that makes it easy to compute a wide set of values. It is an extension of GROUP BY with
unique restrictions and optimization opportunities, which makes it easy to introduce incrementally on top of an
existing grouping implementation. The paper also includes a simple example explaining the step-wise procedure of
the k-means clustering algorithm. Optimized k-means is significantly faster than exporting the data set and
running clustering outside the DBMS.