This document proposes a combined clustering approach using the Gravitational Search Algorithm (GSA) and K-means (GSA-KM) along with genetic algorithms. It begins with an introduction to data mining and clustering. Then, it discusses the existing K-means and GSA algorithms and their limitations. The proposed GSA-KM approach applies K-means initially to generate centroids, then uses GSA to refine the clusters. Genetic algorithms are added to further improve efficiency and speed. The approach is implemented in C# using an MS Access database to cluster datasets and compare performance against other algorithms.
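The K-means stage of the pipeline can be sketched in a few lines. This is a minimal, generic K-means (not the paper's GSA-KM hybrid): random initial centroids, nearest-centroid assignment, centroid recomputation. The point data and parameters are illustrative.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: assign each point to its nearest centroid, then recompute."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance to p
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster emptied out
                centroids[i] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return centroids, clusters

points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
centroids, clusters = kmeans(points, 2)
```

In the GSA-KM scheme these centroids would then seed the GSA refinement step rather than being the final answer.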
The document proposes an automatic teller ration (ATR) shop to address issues with the current ration shop system in India. The ATR shop would give people scratch cards instead of ration cards to obtain food materials from an automated machine at any time, avoiding long queues. It would authenticate users by scratching the card and verify their service status before dispensing food and a receipt. The project is currently in the proposal stage and has not yet started.
This document outlines a student project that includes the project title, team members, supervisor details, objectives, abstract, base paper, work plan, and references. The objectives section lists 4 points about the goals of the project. The abstract also provides 4 points summarizing the project. A base paper is cited as inspiration along with a planned work schedule and list of references.
This document describes a canvas-based presentation tool called LandScape that is being developed using Scalable Vector Graphics (SVG) and JavaScript. It discusses the advantages of a canvas-based paradigm over traditional slide-based presentations. LandScape will allow creating and dynamically controlling presentations on an SVG canvas using tools like Inkscape, Raphael.js and Batik. The goal is to create an open, multi-platform tool that is more flexible than slideware but with a lighter feature set than commercial products.
1. The document provides an overview of key concepts in data science and machine learning including the data science process, types of data, machine learning techniques, and Python tools used for machine learning.
2. It describes the typical 6 step data science process: setting goals, data retrieval, data preparation, exploration, modeling, and presentation.
3. Different types of data are discussed including structured, unstructured, machine-generated, graph-based, and audio/video data.
4. Machine learning techniques can be supervised, unsupervised, or semi-supervised depending on whether labeled data is used.
Deep learning is a machine learning technique that uses artificial neural networks with multiple hidden layers to learn representations of data by increasing the level of abstraction from lower to higher layers. It has proven effective for multimedia data mining tasks like image tagging and caption generation. Deep neural networks can extract meaningful patterns from high-dimensional input using convolutional and recurrent layers, whereas shallow networks are limited. While deep learning has achieved good results, supervised approaches require large labeled datasets.
This document summarizes a student project to design and fabricate a brimmed diffuser for a wind turbine. The brimmed diffuser is intended to increase the efficiency and power output of the wind turbine. The design of the brimmed diffuser is presented, including dimensions calculated based on the turbine diameter. Testing showed that the brimmed diffuser increased wind velocity and the power output of the turbine. The project demonstrates the potential for brimmed diffusers to improve wind turbine performance and power generation.
This slide deck covers CRC cards (Class-Responsibility-Collaboration): what a CRC card contains, six examples, their advantages and disadvantages, and how to create one.
Anna University UG/PG PPT Presentation Format
This document proposes creating an institutional repository for Anna University to make its scholarly works more accessible. It details collecting and organizing the university's publications from 1989-2012 across departments and categorizing them by type, subject, and country of publication. Statistics show Anna University publishes nearly 10,000 documents annually, with most being articles published in India. The repository would make Anna University's research more visible globally and promote open access to knowledge. It would cost an estimated 8.42 lakhs over two years to develop and maintain.
Flow-oriented modeling represents how data objects are transformed as they move through a system. A data flow diagram (DFD) is the diagrammatic form used to depict this approach. DFDs show the flow of data through processes and external entities of a system using symbols like circles and arrows. They provide a unique view of how a system works by modeling the input, output, storage and processing of data from level to level.
Dynamic Itemset Counting (DIC) is an algorithm for efficiently mining frequent itemsets from transactional data that improves upon the Apriori algorithm. DIC allows itemsets to begin being counted as soon as it is suspected they may be frequent, rather than waiting until the end of each pass like Apriori. DIC uses different markings like solid/dashed boxes and circles to track the counting status of itemsets. It can generate frequent itemsets and association rules using conviction in fewer passes over the data compared to Apriori.
This document outlines a final project presentation for a mechanical engineering student. The project aims to investigate total pressure distortion patterns downstream of a distortion screen and identify the aerodynamic inlet plane ahead of a compressor. The methodology involves obtaining geometric details of an experimental facility, meshing the fluid domain, imposing boundary conditions from experiments, and obtaining flow solutions using simulation software. Results will be validated with experiments. The presentation covers the project objectives, literature review on distorted intake flows, validation studies, simulation design, solution procedure, results and discussions, conclusions, and suggestions for future work.
Random Scan Displays and Raster Scan Displays
Raster scan displays work by sweeping an electron beam across the screen in horizontal lines from top to bottom. As the beam moves, its intensity is turned on and off to illuminate pixels and form an image. The pixel values are stored in and retrieved from a refresh buffer or frame buffer. Random scan displays draw images using geometric primitives like points and lines based on mathematical equations, directing the electron beam only where needed. Raster displays can render realistic shaded scenes but draw jagged (aliased) lines, while random scan displays produce smooth, higher-resolution lines but cannot display complex shaded images. Both use a video controller and buffer memory to control the display process.
OOAD - UML - Sequence and Communication Diagrams - Lab
The document discusses interaction diagrams, specifically sequence diagrams and communication diagrams. It explains that interaction diagrams show interactions between objects by depicting the messages exchanged. A sequence diagram emphasizes the time ordering of messages, showing objects arranged from left to right and messages ordered from top to bottom. A communication diagram emphasizes the structural organization of objects, showing them as vertices connected by links along which messages pass. Both diagram types are semantically equivalent but visualize information differently based on their focus. Examples of sequence and communication diagrams are provided for processes like patient admission to a hospital.
System Models in Software Engineering SE7
The document discusses various types of system models used in requirements engineering including context models, behavioral models, data models, object models, and how CASE workbenches support system modeling. It describes behavioral models like data flow diagrams and state machine models, data models like entity-relationship diagrams, and object models using the Unified Modeling Language. CASE tools can support modeling through features like diagram editors, repositories, and code generation.
The Sutherland-Hodgman algorithm clips polygons by clipping against each edge of the clipping window in a specific order: left, top, right, bottom. It works by testing each edge of the polygon against the clipping window boundary and either keeping or discarding vertices based on whether they are inside or outside the window. The algorithm results in a clipped polygon that only includes vertices and edge intersections that are inside the clipping window.
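The edge-by-edge clipping described above can be written compactly. This is a sketch of Sutherland-Hodgman against an axis-aligned window, clipping in the order the text gives (left, top, right, bottom); the triangle at the end is an invented test case.

```python
def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip a polygon against each window edge in turn."""
    def clip_edge(points, inside, intersect):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]           # wraps around to close the polygon
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))  # entering the window
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))      # leaving the window
        return out

    def x_cross(p, q, x):                  # intersection with a vertical boundary
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):                  # intersection with a horizontal boundary
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = polygon
    pts = clip_edge(pts, lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin))
    pts = clip_edge(pts, lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax))
    pts = clip_edge(pts, lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax))
    pts = clip_edge(pts, lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin))
    return pts

# A triangle that pokes out of the right side of the unit window
clipped = clip_polygon([(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)], 0, 0, 1, 1)
```

The cut-off corner is replaced by two intersection vertices on the right boundary, so the clipped result is a quadrilateral.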
This document discusses object oriented analysis and design concepts including class diagrams, elaboration, and domain modeling. It describes how class diagrams show object types and relationships, and how elaboration refines requirements through iterative modeling. Elaboration builds the core architecture, resolves risks, and clarifies requirements over multiple iterations. A domain model visually represents conceptual classes and relationships in the problem domain.
Object Oriented Approach for Software Development
This document provides an overview of object-oriented design methodologies. It discusses key object-oriented concepts like abstraction, encapsulation, and polymorphism. It also describes the three main models used in object-oriented analysis: the object model, dynamic model, and functional model. Finally, it outlines the typical stages of the object-oriented development life cycle, including system conception, analysis, system design, class design, and implementation.
The document discusses different types of video display devices, focusing on cathode ray tubes (CRTs). It describes how CRTs work using an electron gun, deflection plates, and phosphor-coated screen to produce images. Color CRT monitors are also covered, explaining how they produce color using either beam penetration or shadow mask methods. Other display types mentioned include direct view storage tubes, flat panel displays, and their key differences from CRTs.
An Approach for Predicting Road Accident Severity
M.Tech final year presentation on predicting road accident severity. The presentation proposes a model for predicting road accident severity based on road environment conditions.
There are three main methods for generating characters in software: the stroke method, the starburst method, and the bitmap method. The stroke method builds each character from a sequence of line and arc drawing functions defined by start and end points. The starburst method selects segments from a fixed pattern of 24 line segments, storing each character as a 24-bit code. The bitmap method stores characters as arrays of 1s and 0s representing pixels, and supports larger font sizes by scaling up the array. All three methods can produce aliased characters, and the starburst method requires extra memory for the 24-bit segment codes.
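The bitmap approach is easy to demonstrate: a glyph is just a pixel array, and scaling it means replicating each pixel. The 5-column "T" glyph below is a made-up example, not taken from any particular font.

```python
# A small 5-wide bitmap glyph for "T", one list per scan row
T_GLYPH = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def scale_glyph(glyph, factor):
    """Enlarge a bitmap glyph by replicating each pixel factor x factor times."""
    out = []
    for row in glyph:
        wide = [bit for bit in row for _ in range(factor)]
        out.extend([wide] * factor)  # duplicated rows share a list; fine read-only
    return out

big = scale_glyph(T_GLYPH, 2)  # 10 rows of 10 pixels
```

This pixel-replication scaling is exactly why enlarged bitmap characters look blocky, whereas stroke-defined characters scale smoothly.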
The document discusses state modeling and state diagrams. It defines states as representations of intervals of time that describe an object's behavioral condition. Events trigger transitions between states. A state diagram uses a graph to represent an object's states and the transitions between them caused by events. It specifies the object's response to input events over time. The document provides examples of how to notationally represent states, transitions, events, and other elements in a state diagram.
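A state diagram boils down to a transition table from (state, event) pairs to next states, which makes it easy to execute. The two-state lamp below is a hypothetical example, not one of the document's diagrams.

```python
# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("Off", "press"): "On",
    ("On", "press"): "Off",
}

def run(start, events, transitions):
    """Replay a sequence of events against a state-transition table."""
    state = start
    for e in events:
        # events with no matching transition leave the state unchanged
        state = transitions.get((state, e), state)
    return state

final = run("Off", ["press", "press", "press"], TRANSITIONS)
```

Replaying the event sequence is exactly the "response to input events over time" the diagram specifies; here three presses from Off leave the lamp On.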
Unit 1 (Modelling Concepts & Class Modeling)
The document discusses object-oriented modeling and design. It covers key concepts like classes, objects, inheritance, polymorphism, and encapsulation. It also discusses the Unified Modeling Language (UML) which provides standard notation for visualizing, specifying, constructing, and documenting models. The document is a lecture on object-oriented concepts for students to understand modeling using classes, objects, and relationships.
The depth buffer method is used to determine visibility in 3D graphics by testing the depth (z-coordinate) of each surface to determine the closest visible surface. It involves using two buffers - a depth buffer to store the depth values and a frame buffer to store color values. For each pixel, the depth value is calculated and compared to the existing value in the depth buffer, and if closer the color and depth values are updated in the respective buffers. This method is implemented efficiently in hardware and processes surfaces one at a time in any order.
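The per-pixel test described above is simple enough to show directly. This sketch processes an arbitrary stream of (x, y, depth, color) fragments in any order, keeping the nearest one per pixel; the fragment values are invented.

```python
def zbuffer_render(width, height, fragments, far=float("inf")):
    """Depth-buffer visibility: keep the nearest fragment at each pixel.

    fragments: iterable of (x, y, depth, color); smaller depth means closer.
    """
    depth = [[far] * width for _ in range(height)]    # depth buffer, init to far
    frame = [[None] * width for _ in range(height)]   # frame buffer, init empty
    for x, y, z, color in fragments:
        if z < depth[y][x]:       # closer than what is currently stored?
            depth[y][x] = z       # update both buffers together
            frame[y][x] = color
    return frame

frame = zbuffer_render(2, 1, [(0, 0, 5.0, "red"),
                              (0, 0, 2.0, "blue"),    # closer, overwrites red
                              (1, 0, 9.0, "green")])
```

Because each fragment is tested independently against the stored depth, surfaces really can arrive in any order, which is why the method maps so well to hardware.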
1. The presentation discusses different types of projections including parallel and perspective projections. Parallel projection involves projectors that are parallel, while perspective projection involves projectors that converge at a point.
2. Within parallel projection, there are orthographic and oblique projections. Orthographic projection uses perpendicular projectors, while oblique projection uses projectors that are not perpendicular. Specific types of oblique projection include cavalier and cabinet.
3. The presentation also derives the equations for parallel and oblique projections. It compares parallel and perspective projections, noting differences in properties like size preservation and foreshortening.
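The contrast in point 3 reduces to two tiny formulas: orthographic projection just drops z, while perspective projection divides by z, which is what produces foreshortening. The center of projection at the origin and view plane z = d are the usual textbook setup, assumed here.

```python
def perspective_project(point, d):
    """Project (x, y, z) onto the view plane z = d through the origin."""
    x, y, z = point
    return (x * d / z, y * d / z)   # the perspective divide

def parallel_project(point):
    """Orthographic projection onto z = 0: simply drop the z coordinate."""
    x, y, z = point
    return (x, y)

# The same (x, y) shrinks under perspective as the point moves away
near = perspective_project((2.0, 2.0, 4.0), 2.0)   # (1.0, 1.0)
far = perspective_project((2.0, 2.0, 8.0), 2.0)    # (0.5, 0.5)
```

Under parallel projection both points would land at (2.0, 2.0): size is preserved and there is no foreshortening.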
This document discusses processes and threads in Perl programming. It defines a process as an instance of a running program, while a thread is a flow of control through a program with a single execution point. Multiple threads can run within a single process and share resources, while processes run independently. The document compares processes and threads, and covers creating and managing threads, sharing data between threads, synchronization, and inter-process communication techniques in Perl like fork, pipe, and open.
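The key claim, that threads share one address space while processes do not, can be demonstrated in Python (used here as a stand-in for the document's Perl examples). Four threads increment one shared counter; a lock plays the role of the synchronization the document discusses.

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    """Each thread increments the shared counter; the lock keeps updates atomic."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 4000 because all four threads mutated the same variable
```

Had these been separate processes (e.g. via fork), each child would get its own copy of `counter`, and sharing the total would require explicit inter-process communication such as a pipe.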
The document describes the steps to construct a domain class model:
1. The first step is to find relevant classes by identifying nouns from the problem domain. Classes often correspond to nouns and should make sense within the application domain.
2. The next steps are to prepare a data dictionary defining each class, find associations between classes corresponding to verbs, and identify attributes and links for each class.
3. The model is then organized and simplified using techniques like inheritance and packages. The model is iteratively refined by verifying queries and reconsidering the level of abstraction.
Static modeling represents the static elements of software such as classes, objects, and interfaces and their relationships. It includes class diagrams and object diagrams. Class diagrams show classes, attributes, and relationships between classes. Object diagrams show instances of classes and their properties. Dynamic modeling represents the behavior and interactions of static elements through interaction diagrams like sequence diagrams and communication diagrams, as well as activity diagrams.
Raster scanning is a process used in television and computer graphics where an image is captured and reconstructed by systematically scanning across it in horizontal lines from top to bottom. Each line, called a scan line, is transmitted as an analog signal or divided into discrete pixels. Pixels are stored in a refresh buffer and then "painted" onto the screen one row at a time, with the beam returning to the left side during horizontal retrace and to the top left for vertical retrace between frames. Raster scanning provides realistic images but at the cost of lower resolution compared to random scanning systems.
The document proposes a new social network-based scheme to help telecom operators prevent churn by providing value-added services. The proposed scheme introduces the concept of user groups, where a group owner can share subscribed services with group members at a discount, providing incentives for both users and service creators. This encourages more service usage and helps operators identify communities and target new service proposals accordingly. The scheme is intended to provide a more flexible charging mechanism for value-added services compared to existing straight-forward monthly subscription models.
The document describes a final year project to develop a mobile and web application called SpringsVision Events for planning and managing social events. A team of 4 students - Syed Absar Karim, Umair Ahmed, Shafaq Yameen, and Zaid Hussain - presented their project to create an online platform for scheduling events, adding social networking features, and mobile support to the supervisor Mr. Nadeem Mahmood. The project aims to provide a useful tool for personal event management and sharing on social media.
This document provides a ratio analysis of Kutwal Foods Pvt. Ltd., an Indian food manufacturing and trading company. It includes the company profile, objectives of the analysis, research methodology used, data interpretation and key findings. The analysis found that the company's gross and net profit ratios have been decreasing in recent years due to rising material costs and low sales margins. It recommends that the company improve its profitability by reducing expenses and utilizing resources more efficiently.
This document discusses optimizing the Eicher 11.10 vehicle through analysis and surveys. The project aims to test parameters of 4 Eicher 11.10 vehicles over 1 month to analyze engine performance, chassis, brakes, etc. and compare to guidelines. Surveys of the company, workshops/dealers, and customers will be conducted. Aerodynamic and safety parameter studies are also planned to optimize cabin design, safety, and mileage. Potential add-on features investigated include an in-built weighing machine, Intelligent Vehicle Highway Systems (IVHS) technology, Cummins CADEC technology, and a driver's manual. The overall goal is to further develop the vehicle's performance and boost its sales.
This project presents an overhead bridge electromagnetic crane that can automatically measure and place objects on a conveyor belt based on their length and height. The crane uses sensors to detect objects, a microcontroller to process the measurements and control the electromagnet and motors, and can operate either automatically or manually via joystick. It is intended to help automate material handling in industries like shipping, steel mills, and petroleum refineries.
First Review for B.Tech Mechanical VIII Sem, VIT University
The document provides a schedule for the first review of final year B.Tech mechanical engineering student projects at VIT University during the winter of 2009-2010. It lists the venue, time, faculty in charge, and projects being reviewed for each of the four parallel sessions on February 16, 2010. It provides instructions to students on preparing and submitting their project reports and presentations for evaluation during the review.
The document summarizes a presentation on e-banking services provided by HDFC Bank in India. It provides background on HDFC Bank's founding and operations, describes the different types of electronic banking available including internet banking, phone banking and mobile banking. It then outlines the objectives and findings of a study conducted on users of ATM and internet banking services, including that most users were satisfied with ATM services but some faced issues like cards getting stuck in machines. Suggestions to address problems include educating users, improving security, and making applications easy to use.
Mudra Communication was founded in 1980 in Mumbai by A.G. Krishnamurthy. It started with one client, Vimal, and has since grown to over 125 clients and become one of the largest advertising agencies in India. Mudra offers a wide range of advertising and marketing services and has over 500 employees across India. In 2011, Mudra reported $62.7 million in advertising revenue and $36.45 million in net income.
The document summarizes a final year project on Copernicus, a 3D interactive learning application. It discusses researching the education sector, conducting teacher and student surveys which showed strong interest, and interviewing teachers who felt it could enhance learning. It also details attending a science fair to demonstrate Copernicus, connecting with an educational organization, and plans to seek investment from innovation funding or companies to help bring Copernicus to market.
General Guidelines - First Review for B.Tech Mechanical VIII Sem, VIT University
This document provides guidelines for students presenting their project work, including:
- The presentation should be in PowerPoint without binding or plastic covers, and students should conduct a mock review with their guide before presenting.
- The presentation should include sections on the project title, introduction, objectives, expected outcomes, literature review, methodology, individual contributions, results, progress, and a Gantt chart work plan.
- References should be in IEEE or ASME style. Guidance is provided on formatting slides, using bullet points, citing figures, and the order and content of slides including the title slide and Gantt chart slide.
Canvas Based Presentation Tool
LandScape is a canvas based presentation tool that uses Scalable Vector Graphics (SVG) and JavaScript. It allows dynamic control of presentations by zooming, panning, and rotating on a large canvas without slide boundaries. Features include importing multiple media formats, templates, and motion paths for transitions. The goal is to create attractive presentations that are not too dependent on features of the viewing program and require only nominal browser resources. It integrates JavaScript manipulation of SVG objects created using the Inkscape editor.
This document provides an overview of an online management system for a firm. The system will incorporate features for firm management, client relationship management, and central management. It will provide clients with online project booking capabilities. The system aims to provide a complete solution for a firm's problems related to financing, accounting, project management, and more. It will help managers, employees, and others work more effectively and efficiently. The system addresses issues with current manual processes that are difficult to manage, maintain records for, and lack security.
The document compares the cost of designing a building using different structural systems, including a dual system without beams, building frame system with beams, and other options. It finds that the dual system without beams has the lowest total cost at $80 million, while the most expensive is the moment resisting system with beams at $120 million. Charts and tables show the cost breakdown by structural element and comparisons of total costs for each system.
This document summarizes research on fatigue crack behavior in adhesive joints. It describes how double cantilever beam (DCB) tests were conducted on aluminum specimen structural joints bonded with adhesive. Tensile loads were applied to the specimens to observe crack behavior in the adhesive. Both experimental testing and ANSYS simulation were used to analyze crack behavior and results were compared. The goal was to understand crack formation in adhesive joints and calculate adhesive bond strength.
David Keith presented a surprising solution to climate change by injecting gases into the atmosphere to reflect sunlight. His presentation was fast-paced, humorous, and self-explanatory, using funny slides and wide gestures. He argued that the climate crisis is not new and this solution could be an obvious way to address it.
The document provides details about the history and development of Coimbatore, India, including:
- Coimbatore was established in 1906 and developed with the establishment of BrookeBond and Devanga High School.
- The first polytechnic college, Arthur Hope College of Technology, was established and focused on engineering education.
- In the early 20th century, Coimbatore saw many innovations from local residents, including early versions of razors, cameras, car engines, and radios.
This document discusses hybrid wireless networks (HWNs) and some of their advantages over traditional wireless networks. It outlines the classification of HWN architectures and some routing protocols that have been proposed for different HWN types. Some key challenges of routing in HWNs are scalability, overhead from the presence of base stations and wired backbones, and high routing overhead. The document proposes looking at overhead and scalability issues for routing in HWNs.
A South Jersey chemical manufacturing plant is seeking a contract chemical engineer on a long-term temporary basis to manage small and moderate sized plant projects, with potential for a full-time position. The engineer should have hands-on project experience at a non-managerial level and expertise in engineering design and implementation. They must be able to multitask on very small and moderate jobs simultaneously and have prior experience working in a chemical plant, preferably with a chemical engineering degree. Interested candidates should email their resume to Lauren Madosky.
Simulation and Design of SRF based Control Algorithm for Three Phase Shunt Ac...
Active power filters are effective in mitigating line current harmonics and compensating for reactive power in the line. There are basically two types of Active Power Filters (APFs): shunt type and series type. Shunt active power filters (SAPFs) are the most important and most widely used filters for industrial purposes, not only because they eliminate harmonic currents but also because they suit a wide range of power ratings. In this paper, Synchronous Reference Frame (SRF) theory is employed to calculate compensating currents while the three-phase source feeds a highly non-linear load. The main objective is to study and investigate the performance of a shunt active power filter using SRF theory. The algorithm is simulated in the MATLAB 7.8 environment using the Simulink and SimPowerSystems toolboxes. The results shown are within the limits of IEEE Standard 519-1992.
This document provides an introduction to data mining techniques. It discusses how data mining emerged due to the problem of data explosion and the need to extract knowledge from large datasets. It describes data mining as an interdisciplinary field that involves methods from artificial intelligence, machine learning, statistics, and databases. It also summarizes some common data mining frameworks and processes like KDD, CRISP-DM and SEMMA.
Data mining involves classification, cluster analysis, outlier mining, and evolution analysis. Classification models data to distinguish classes using techniques like decision trees or neural networks. Cluster analysis groups similar objects without labels, while outlier mining finds irregular objects. Evolution analysis models changes over time. Data mining performance considers algorithm efficiency, scalability, and handling diverse and complex data types from multiple sources.
This document outlines the learning objectives and resources for a course on data mining and analytics. The course aims to:
1) Familiarize students with key concepts in data mining like association rule mining and classification algorithms.
2) Teach students to apply techniques like association rule mining, classification, cluster analysis, and outlier analysis.
3) Help students understand the importance of applying data mining concepts across different domains.
The primary textbook listed is "Data Mining: Concepts and Techniques" by Jiawei Han and Micheline Kamber. Topics that will be covered include introduction to data mining, preprocessing, association rules, classification algorithms, cluster analysis, and applications.
Additional themes of data mining for MSc CS
Data mining involves using computational techniques from machine learning, statistics, and database systems to discover patterns in large data sets. There are several theoretical foundations of data mining including data reduction, data compression, pattern discovery, probability theory, and inductive databases. Statistical techniques like regression, generalized linear models, analysis of variance, and time series analysis are also used for statistical data mining. Visual data mining integrates data visualization techniques with data mining to discover implicit knowledge. Audio data mining uses audio signals to represent data mining patterns and results. Collaborative filtering is commonly used for product recommendations based on opinions of other customers. Privacy and security of personal data are important social concerns of data mining.
Hierarchical clustering methods create a hierarchy of clusters based on distance or similarity measures. They do not require specifying the number of clusters k in advance. Hierarchical methods either merge smaller clusters into larger ones (agglomerative) or split larger clusters into smaller ones (divisive) at each step. This continues recursively until all objects are linked or placed into individual clusters.
This document provides an introduction to data mining. It discusses the history of data mining, which began with early methods like Bayes' Theorem and regression analysis in the 1700s and 1800s. The document then covers why organizations mine data from both commercial and scientific viewpoints. It defines data mining as the extraction of useful patterns from large datasets and explains how it differs from traditional data analysis. Several common data mining tasks like classification, clustering, and association rule mining are also introduced. Finally, the document outlines the typical steps involved in a knowledge discovery process.
This document discusses data mining applications and trends. It covers topics like mining complex data types, other data mining methodologies, and various applications of data mining. Some key applications discussed include using data mining in finance, retail, telecommunications, science/engineering, intrusion detection, and recommender systems. The document also touches on topics like visual data mining, ubiquitous and invisible data mining, and the privacy and social impacts of data mining.
Data mining basics and complete description
This document discusses data mining and provides examples of its applications. It begins by explaining why data is mined from both commercial and scientific viewpoints in order to discover useful patterns and information. It then discusses some of the challenges of data mining, such as dealing with large datasets, high dimensionality, complex data types, and distributed data sources. The document outlines common data mining tasks like classification, clustering, association rule mining, and regression. It provides real-world examples of how these techniques are used for applications like fraud detection, customer profiling, and scientific discovery.
The document discusses trends in data mining research, including mining complex data types like sequences, time series, graphs and networks. It covers various data mining methodologies like statistical data mining, visual data mining and views on the foundations of data mining. Statistical techniques discussed include regression, generalized linear models and discriminant analysis. Visual data mining involves using visualization to gain insights from large datasets and present data mining results.
Data mining refers to extracting knowledge from large amounts of data and involves techniques from machine learning, statistics, and databases. A typical data mining system includes a database, data mining engine, pattern evaluation module, and graphical user interface. The knowledge discovery in data (KDD) process involves data cleaning, integration, selection, transformation, mining, evaluation, and presentation to extract useful patterns from data. KDD is the overall process while data mining is one step, applying algorithms to extract patterns for analysis.
The document provides an overview of data mining and data warehousing concepts through a series of lectures. It discusses the evolution of database technology and data analysis, defines data mining and knowledge discovery, describes data mining functionalities like classification and clustering, and covers data warehouse concepts like dimensional modeling and OLAP operations. It also presents sample queries in a proposed data mining query language.
This document discusses cluster analysis and clustering algorithms. It defines a cluster as a collection of similar data objects that are dissimilar from objects in other clusters. Unsupervised learning is used with no predefined classes. Popular clustering algorithms include k-means, hierarchical, density-based, and model-based approaches. Quality clustering produces high intra-class similarity and low inter-class similarity. Outlier detection finds dissimilar objects to identify anomalies.
Chapter 13. Trends and Research Frontiers in Data Mining
Jiawei Han, Micheline Kamber and Jian Pei, Data Mining: Concepts and Techniques, 3rd ed., The Morgan Kaufmann Series in Data Management Systems, Morgan Kaufmann Publishers, July 2011. ISBN 978-0123814791.
The document provides an introduction to the concept of data mining. It discusses the evolution of data analysis techniques from empirical to computational to data-driven approaches. Data mining is presented as a natural evolution to analyze massive data sets and discover useful patterns. Key aspects of data mining covered include its functionality, types of data and knowledge that can be mined, major issues, and its relationship to other fields such as machine learning, statistics, and databases.
A presentation on recent data mining techniques and future research directions, drawn from recent research papers, prepared in the pre-master's program at Cairo University under the supervision of Dr. Rabie.
1. A Combined Approach for Clustering based on the GSA-KM and Genetic Algorithms
Divakar Raj.M (0901016), Dilip.M (0901015), Kishore Kumar.C (0901036), IV CSE - A
Under the guidance of Mr. P. Perumal, Associate Professor, Department of Computer Science and Engineering (UG)
Data Mining / Clustering 1/33
2. Introduction about Data Mining
• Data mining (knowledge discovery in databases):
– Extraction of interesting (non-trivial, implicit, previously unknown and
potentially useful) information or patterns from data in large databases
• Potential Applications
– Market analysis and management
– Risk analysis and management
– Fraud detection and management
– Text mining (news group, email, documents) and Web analysis
– Intelligent query answering
3. Data Mining: A KDD Process
– Data mining: the core of the knowledge discovery process
– The KDD steps: data cleaning and data integration over the databases; selection of task-relevant data from the data warehouse; data mining; and pattern evaluation
4. Architecture of a Typical Data Mining System
Layered from top to bottom:
– Graphical user interface
– Pattern evaluation
– Data mining engine
– Knowledge base
– Database or data warehouse server (with filtering)
– Data cleaning & data integration
– Databases and data warehouse
5. Data Mining Functionalities
• Concept description: Characterization and discrimination
– Generalize, summarize, and contrast data characteristics, e.g., dry vs. wet regions
• Association (correlation and causality)
– Multi-dimensional vs. single-dimensional association
– age(X, "20..29") ∧ income(X, "20..29K") ⇒ buys(X, "PC")
– contains(T, "computer") ⇒ contains(T, "software")
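The association rules above are usually ranked by support and confidence. A minimal Python sketch (the function and the toy transaction data are invented for this illustration):

```python
def confidence(transactions, lhs, rhs):
    """Confidence of the rule lhs => rhs: among transactions containing
    lhs, the fraction that also contain rhs."""
    has_lhs = [t for t in transactions if lhs <= t]  # set subset test
    if not has_lhs:
        return 0.0
    return sum(1 for t in has_lhs if rhs <= t) / len(has_lhs)

# Toy data for the contains(T, "computer") => contains(T, "software") rule.
txns = [{"computer", "software"}, {"computer"},
        {"computer", "software"}, {"printer"}]
print(confidence(txns, {"computer"}, {"software"}))  # 2 of 3 computer buyers
```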
6. Data Mining Functionalities
• Classification and Prediction
– Finding models (functions) that describe and distinguish classes or
concepts for future prediction
– E.g., classify countries based on climate, or classify cars based on gas
mileage
– Presentation: decision-tree, classification rule, neural network
– Prediction: Predict some unknown or missing numerical values
• Cluster analysis
– Class label is unknown: Group data to form new classes, e.g., cluster
houses to find distribution patterns
– Clustering based on the principle: maximizing the intra-class similarity
and minimizing the interclass similarity
7. Data Mining Functionalities
• Outlier analysis
– Outlier: a data object that does not comply with the general behavior
of the data
– It can be considered as noise or exception but is quite useful in fraud
detection, rare events analysis
• Trend and evolution analysis
– Trend and deviation: regression analysis
– Sequential pattern mining, periodicity analysis
– Similarity-based analysis
8. Issues in Data mining
• Individual Privacy
• Data Integrity
• Relational Database Structure (vs) Multidimensional One
• Issue of Cost
• Mining methodology and user interaction issues
• Performance issues
• Issues relating to the diversity of database types
9. Applications
• Database analysis and decision support
– Market analysis and management
• Target Marketing, Customer Relation Management, Market
Basket Analysis, Cross Selling, Market Segmentation
– Risk analysis and management
• Forecasting, Customer Retention, Improved Underwriting,
Quality Control, Competitive Analysis
10. Applications
• Text mining (news group, email, documents) and Web analysis
• Intelligent query answering
• Sports
• Astronomy
• Internet Web Surf-Aid
11. Clustering
• Clustering is a data mining (machine learning)
technique used to place data elements into related
groups without advance knowledge of the group
definitions
• Set of meaningful sub classes called clusters
12. Cluster Analysis
• Cluster: a collection of data objects
– Similar to one another within the same cluster
– Dissimilar to the objects in other clusters
• Cluster analysis
– Grouping a set of data objects into clusters
• Clustering is unsupervised classification: no predefined classes
• Typical applications
– As a stand-alone tool to get insight into data distribution
– As a preprocessing step for other algorithms
13. What Is Good Clustering?
• A good clustering method will produce high quality clusters with
– high intra-class similarity
– low inter-class similarity
• The quality of a clustering result depends on both the similarity
measure used by the method and its implementation.
• The quality of a clustering method is also measured by its ability to
discover some or all of the hidden patterns
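The intra-/inter-class criterion above can be made concrete with a small 1-D sketch (the helper name and sample clusters are invented for the example):

```python
def cluster_quality(clusters):
    """Average distance of points to their own centroid (intra, want low)
    vs. average distance between centroids (inter, want high), in 1-D."""
    cents = [sum(c) / len(c) for c in clusters]
    n = sum(len(c) for c in clusters)
    intra = sum(abs(p - cents[i])
                for i, c in enumerate(clusters) for p in c) / n
    pairs = [(i, j) for i in range(len(cents))
             for j in range(i + 1, len(cents))]
    inter = sum(abs(cents[i] - cents[j]) for i, j in pairs) / len(pairs)
    return intra, inter

# A good clustering: tight groups, well-separated centroids.
intra, inter = cluster_quality([[1.0, 1.2, 0.8], [5.0, 5.3, 4.7]])
print(intra, inter)  # small intra-cluster spread, centroids about 4 apart
```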
14. Requirements of Clustering in Data Mining
• Scalability
• Ability to deal with different types of attributes
• Discovery of clusters with arbitrary shape
• Minimal requirements for domain knowledge to determine input
parameters
• Able to deal with noise and outliers
• Insensitive to order of input records
• High dimensionality
• Incorporation of user-specified constraints
• Interpretability and Usability
15. Major Clustering Approaches
• Partitioning algorithms: Construct various partitions and then
evaluate them by some criterion
• Hierarchy algorithms: Create a hierarchical decomposition of the
set of data (or objects) using some criterion
• Density-based: based on connectivity and density functions
• Grid-based: based on a multiple-level granularity structure
• Model-based: A model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to the given model
16. Issues of Clustering
• Assessment of results
• Choice of appropriate number of clusters
• Data preparation
• Proximity measures
• Handling outliers
17. General Applications of Clustering
• Pattern Recognition
• Image Processing
• Economic Science (especially market research)
• WWW
– Document classification
– Cluster Weblog data to discover groups of similar access patterns
18. Examples of Clustering Applications
• Marketing: Help marketers discover distinct groups in their
customer bases, and then use this knowledge to develop targeted
marketing programs
• Land use: Identification of areas of similar land use in an earth
observation database
• Insurance: Identifying groups of motor insurance policy holders
with a high average claim cost
• City-planning: Identifying groups of houses according to their
house type, value, and geographical location
• Earth-quake studies: Observed earth quake epicenters should be
clustered along continent faults
19. Literature Survey
[1] An Architecture for Component-Based Design of Representative-Based Clustering Algorithms
Boris Delibašić, Milan Vukićević, Miloš Jovanović, Kathrin Kirchner, Johannes Ruhland, Milija Suknović (2012)
[2] The Research of Imbalanced Data Set of Sample Sampling Method Based on K-Means Cluster and Genetic Algorithm
Yang Yong (2012)
[3] A Combined Approach for Clustering based on K-means and Gravitational Search Algorithms
Abdolreza Hatamlou, Salwani Abdullah, Hossein Nezamabadi-pour (2012)
20. An Architecture for Component-Based Design of Representative-Based Clustering Algorithms
• Based on reusable components
• Components derived from K-means-like algorithms and their extensions
• A new algorithm is built by exchanging components from the original algorithm and their improvements
• Comparison and evaluation are made possible by the representative-based clustering architecture
21. The Research of Imbalanced Data Set of Sample Sampling Method Based on K-Means Cluster and Genetic Algorithm
• K-means is used to cluster the data; within each cluster, a GA carries out validity confirmation and generates new samples
• Enhances the classification performance on imbalanced datasets
• Generates new minority-class samples for the unbalanced data set
• Pays attention to the classification accuracy of minority classes
22. A Combined Approach for Clustering based on K-means and Gravitational Search Algorithms
• A hybrid data clustering algorithm based on GSA and k-means (GSA-KM) is presented
• It uses the advantages of both algorithms
• The performance of GSA-KM is compared with other well-known algorithms:
– K-means
– Genetic Algorithm (GA)
– Simulated Annealing (SA)
– Ant Colony Optimization (ACO)
– Honey Bee Mating Optimization (HBMO)
– Particle Swarm Optimization (PSO)
– Gravitational Search Algorithm (GSA)
• Comparison based on real and standard datasets from the UCI repository
23. Existing System
K-Means
• One of the most efficient and famous clustering algorithms
• Starts with some random or heuristic-based centroids for the desired clusters
• Assigns every data object to the closest centroid
• Iteratively refines the current centroids to reach the (near) optimal ones by
calculating the mean value of data objects within their respective clusters
• The algorithm terminates when any one of the specified termination
criteria is met (e.g., a predetermined maximum number of iterations is
reached, a (near) optimal solution is found, or the maximum search time
is exceeded)
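The K-means loop just described (random initial centroids, closest-centroid assignment, mean-based refinement, termination check) can be sketched as follows. The project itself is implemented in C#; this Python/NumPy version is only an illustrative sketch, not the authors' code.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Basic K-means: random initial centroids, assign-then-update loop.
    Terminates when centroids stop moving or max_iter is reached."""
    rng = np.random.default_rng(seed)
    # Start with k randomly chosen data objects as centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign every data object to its closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Refine centroids as the mean of the objects in each cluster
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # centroids stable: a (near) optimal solution was found
        centroids = new_centroids
    return centroids, labels
```

Because the initial centroids are random, different seeds can yield different final clusterings, which is exactly the initial-state sensitivity discussed on the drawbacks slide.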
24. Existing System
Gravitational Search Algorithm
• Inspired by the physical phenomenon of Gravity
• Based on the interaction of masses in the universe via Newtonian
gravity law
• Attraction depends on the masses of the objects and the distance
between them
• F = G · (M1 · M2) / R²
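A single GSA iteration follows the scheme implied by the slide: fitness-based masses, pairwise attractive forces, then acceleration and stochastic velocity/position updates. The Python sketch below is a simplified illustration of that scheme (not the project's C# implementation, and simplified relative to the full Rashedi et al. formulation, e.g. no decaying G and no Kbest elitism).

```python
import numpy as np

def gsa_step(X, fitness, velocities, G, rng):
    """One simplified GSA iteration for a minimization problem.
    X: (n, d) agent positions; fitness: (n,) objective values (lower is better)."""
    n, _ = X.shape
    best, worst = fitness.min(), fitness.max()
    # Fitness-based masses: better agents are heavier (normalized to sum to 1)
    m = (worst - fitness) / (worst - best + 1e-12)
    M = m / (m.sum() + 1e-12)
    forces = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = X[j] - X[i]
                r = np.linalg.norm(diff) + 1e-12
                # Attraction grows with both masses and decays with distance
                forces[i] += rng.random() * G * M[i] * M[j] * diff / r
    # Acceleration a = F / M, then stochastic velocity and position updates
    acc = forces / (M[:, None] + 1e-12)
    velocities = rng.random((n, 1)) * velocities + acc
    return X + velocities, velocities
```

Heavier (fitter) agents move less and pull lighter agents toward them, which is how the population drifts toward promising regions of the search space.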
25. Drawbacks of Existing System
K – Means
• Performance is highly dependent on the initial state of
centroids
• May converge to a local optimum rather than the global optimum
• The number of clusters is needed as input to the algorithm, i.e.
the number of clusters is assumed known
26. GSA-KM
• Built on three main steps:
1. GSA-KM applies the k-means algorithm to the selected dataset
and tries to produce near-optimal centroids for the desired
clusters
2. The proposed approach produces an initial population of
candidate solutions
3. The GSA algorithm is applied to this population
27. GSA-KM
Ways for production of an initial population
• One candidate solution is the output of the k-means algorithm obtained
in the previous step
• Three of them are created from the dataset itself, and the remaining
solutions are produced randomly
• GSA will be employed for determining an optimal solution for the
clustering problem
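The population construction above can be sketched as below. Each candidate is a (k, d) set of centroids: one from K-means, three derived from the dataset itself, and the rest random. Note that the slides do not specify how the three dataset-based candidates are built, so the choices here (spread over the data range, the global mean, and sampled data objects) are illustrative assumptions; the code is a Python sketch, not the project's C# implementation.

```python
import numpy as np

def initial_population(X, k, kmeans_centroids, pop_size, seed=0):
    """Candidate solutions for GSA, each a (k, d) array of centroids:
    one from K-means, three derived from the dataset (assumed forms),
    and the remainder sampled uniformly at random."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    pop = [np.asarray(kmeans_centroids, dtype=float)]         # from K-means
    frac = np.linspace(0.0, 1.0, k)[:, None]
    pop.append(lo + frac * (hi - lo))                         # spread over the data range
    pop.append(np.tile(X.mean(axis=0), (k, 1)))               # anchored at the global mean
    pop.append(X[rng.choice(len(X), size=k, replace=False)])  # sampled data objects
    while len(pop) < pop_size:
        pop.append(rng.uniform(lo, hi, size=(k, d)))          # random candidates
    return pop
```

Seeding the population with the K-means result is what gives GSA a good starting point in a promising region, as the next slide argues.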
28. Reasons for Efficiency
• Decreases the number of iterations and function evaluations needed to
find a near-global optimum, compared with the original GSA alone
• With a good candidate solution present in the initial population, GSA
can search a promising region of the search space for near-global
optima and therefore find a higher-quality solution than the original
GSA alone
29. Proposed System
• Along with the given GSA-KM, we intend to implement a Genetic
Algorithm to further increase the efficiency and speed of the clustering
• The proposed system will combine the advantages of both and is
expected to be faster and more efficient than the traditional clustering
algorithms and also than GSA-KM
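The slides do not specify which GA operators the proposed system will use, so the sketch below shows one generation of a common textbook real-coded GA over centroid candidates (tournament selection, one-point crossover on flattened centroids, Gaussian mutation) purely for illustration; it is a Python sketch of the general idea, not the project's C# implementation.

```python
import numpy as np

def ga_generation(pop, fitness, rng, mutation_scale=0.1):
    """One GA generation over a population of (k, d) centroid candidates.
    Assumes len(pop) >= 4 so tournament selection can draw four distinct
    parents. Operators are illustrative, not the project's actual choice."""
    n = len(pop)
    shape = pop[0].shape
    flat = [p.ravel() for p in pop]
    next_pop = []
    while len(next_pop) < n:
        # Tournament selection: better (lower fitness) of two random parents
        picks = rng.choice(n, size=4, replace=False)
        a = picks[0] if fitness[picks[0]] < fitness[picks[1]] else picks[1]
        b = picks[2] if fitness[picks[2]] < fitness[picks[3]] else picks[3]
        # One-point crossover on the flattened centroid vectors
        cut = rng.integers(1, flat[0].size)
        child = np.concatenate([flat[a][:cut], flat[b][cut:]])
        # Gaussian mutation keeps the search exploring
        child += rng.normal(scale=mutation_scale, size=child.size)
        next_pop.append(child.reshape(shape))
    return next_pop
```

In the proposed system such a generation step would refine the GSA-KM population between (or after) GSA iterations, trading a little extra work per iteration for faster convergence.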
30. Implementation Details
• Programming language: C#
• Database: MS Access
• The given repository is clustered using K-means and GSA combined
(GSA-KM), and a Genetic Algorithm is used to enhance the performance
• The performance is calculated and compared with other
clustering algorithms
31. References
[1] C.L. Blake, C.J. Merz
UCI repository of machine learning databases
http://www.ics.uci.edu/~mlearn/MLRepository.html
[2] S. Das, A. Abraham, A. Konar
Metaheuristic pattern clustering: an overview
Studies in Computational Intelligence (2009)
[3] L. Kaufman, P.J. Rousseeuw
Finding Groups in Data: An Introduction to Cluster Analysis
John Wiley & Sons, New York, (1990)
[4] M.B. Adil
Modified global k-means algorithm for minimum sum-of-squares clustering problems
Pattern Recognition 41 (10) (2008)
[5] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi
GSA: a gravitational search algorithm
Information Sciences 179 (13) (2009)
32. References
[6] A. Likas, N. Vlassis, J.J. Verbeek
The global k-means clustering algorithm
Pattern Recognition 36 (2) (2003)
[7] M. Mahdavi
Novel meta-heuristic algorithms for clustering web documents
Applied Mathematics and Computation (2008)
[8] M. Moshtaghi
Clustering ellipses for anomaly detection
Pattern Recognition 44 (2008)
[9] B. Saglam, et al.
A mixed-integer programming approach to the clustering problem with an
application in customer segmentation
European Journal of Operational Research 173 (3) (2006)
[10] A.K. Jain
Data clustering: 50 years beyond K-means
Pattern Recognition Letters 31 (8) (2010)