The document proposes a Modified Pure Radix Sort algorithm for large heterogeneous data sets. The algorithm splits the data into numeric and string clusters that are processed in parallel. The numeric process further divides its data into sublists by element length and sorts them concurrently using even/odd logic across digits. The string process identifies common patterns to convert strings to numbers, which are then sorted. This addresses the problems of traditional radix sort through a distributed-computing approach.
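To make the two-cluster split concrete, here is a minimal Python sketch of the partition step and the string-to-number mapping the summary describes; the function names and the digit test are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of the numeric/string split described above.
def split_heterogeneous(data):
    """Separate a mixed data set into numeric and string clusters."""
    numeric, strings = [], []
    for item in data:
        if str(item).lstrip('-').isdigit():
            numeric.append(int(item))
        else:
            strings.append(item)
    return numeric, strings

def strings_to_numbers(strings):
    """Assign each distinct string a number; identical strings share a code,
    and codes preserve lexicographic order so sorting codes sorts strings."""
    codes = {s: i for i, s in enumerate(sorted(set(strings)))}
    return [codes[s] for s in strings]
```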
A fuzzy clustering algorithm for high dimensional streaming data (Alexander Decker)
This document summarizes a research paper that proposes a new dimension-reduced weighted fuzzy clustering algorithm (sWFCM-HD) for high-dimensional streaming data. The algorithm can cluster datasets that have both high dimensionality and a streaming (continuously arriving) nature. It combines previous work on clustering algorithms for streaming data and high-dimensional data. The paper introduces the algorithm and compares it experimentally to show improvements in memory usage and runtime over other approaches for these types of datasets.
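For reference, the classical fuzzy c-means objective that weighted variants such as sWFCM-HD build on is

J_m = \sum_{i=1}^{N}\sum_{j=1}^{C} u_{ij}^{m}\,\lVert x_i - c_j\rVert^{2}, \qquad \sum_{j=1}^{C} u_{ij} = 1,

where u_{ij} is the fuzzy membership of point x_i in cluster j and m > 1 controls fuzziness; the paper's dimension-reduced weighting of this objective is not reproduced in the summary, so the form above is the standard one only.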
This document summarizes a research paper on developing an improved LEACH (Low-Energy Adaptive Clustering Hierarchy) communication protocol for energy efficient data mining in multi-feature sensor networks. It begins with background on wireless sensor networks and issues like energy efficiency. It then discusses the existing LEACH protocol and its drawbacks. The proposed improved LEACH protocol includes cluster heads, sub-cluster heads, and cluster nodes to address LEACH's limitations. This new version aims to minimize energy consumption during cluster formation and data aggregation in multi-feature sensor networks.
This document summarizes a research paper that proposes a new density-based clustering technique called Triangle-Density Based Clustering Technique (TDCT) to efficiently cluster large spatial datasets. TDCT uses a polygon approach where the number of data points inside each triangle of a polygon is calculated to determine triangle densities. Triangle densities are used to identify clusters based on a density confidence threshold. The technique aims to identify clusters of arbitrary shapes and densities while minimizing computational costs. Experimental results demonstrate the technique's superiority in terms of cluster quality and complexity compared to other density-based clustering algorithms.
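A minimal sketch of the density computation the summary describes, assuming a standard sign-based point-in-triangle test; TDCT's actual polygon partitioning and density confidence threshold are not reproduced here.

```python
# Count points inside a triangle via the signs of cross products.
def point_in_triangle(p, a, b, c):
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # same sign (or on an edge) => inside

def triangle_density(points, tri):
    # The per-triangle count is what gets compared against the threshold.
    return sum(point_in_triangle(p, *tri) for p in points)
```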
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
This document summarizes a research paper that proposes a new method for improving both fault tolerance and load balancing in grid computing networks. The method converts the tree structure of grid computing nodes into a distributed R-tree index structure and then applies an entropy estimation technique. This entropy estimation helps discard nodes with high entropy from the tree, reducing complexity. The method then uses thresholding and control algorithms to select optimal route paths based on load balance and fault tolerance. Various optimization techniques like genetic algorithms, ant colony optimization, and particle swarm optimization are also applied to reach better solutions. Experimental results showed the proposed method improved performance over other existing methods.
This document summarizes an article from the International Journal of Computer Engineering and Technology (IJCET) that proposes an algorithm called Replica Placement in Graph Topology Grid (RPGTG) to optimally place data replicas in a graph-based data grid while ensuring quality of service (QoS). The algorithm aims to minimize data access time, balance load among replica servers, and avoid unnecessary replications, while meeting QoS constraints on the number of hops and the deadline to complete requests. The article describes how the algorithm converts the graph structure of the data grid to a hierarchical structure to better manage replica servers, and proposes services to facilitate dynamic replication, including a replica catalog to track replica locations and a replica manager to perform the replication.
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N... (ijasuc)
In wireless sensor networks (WSNs) there are two main problems in employing conventional compression techniques. First, compression performance depends to a large extent on how the routes are organized. Second, the efficiency of an in-network data compression scheme is not determined solely by the compression ratio, but also by the computational and communication overheads. In a compressive data aggregation technique, data is gathered at intermediate nodes where its size is reduced by compression without losing any information from the complete data. In our previous work, we developed an adaptive traffic-aware aggregation technique in which aggregation can switch adaptively between structured and structure-free modes depending on the traffic load. In this paper, as an extension of that work, we provide a cost-effective compressive data gathering technique that handles the traffic load using a structured data aggregation scheme. We also design a technique that effectively reduces the computation and communication costs involved in compressive data gathering. Compressive data gathering provides compressed sensor readings to reduce global data traffic and distributes energy consumption evenly to prolong the network lifetime. Simulation results show that the proposed technique improves the delivery ratio while reducing energy consumption and delay.
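As a rough illustration of the compressive gathering idea (a sketch under the standard compressive-sensing model, not the paper's exact scheme), each node forwards only M weighted partial sums of the readings seen so far, so per-node transmission cost stays fixed regardless of how many readings are aggregated:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 20                      # N sensors, M compressed measurements
phi = rng.standard_normal((M, N))   # shared random measurement matrix
x = rng.random(N)                   # one reading per sensor

# In-network aggregation: node i contributes phi[:, i] * x[i]; nodes along
# the route add the M-length partial sums, and the sink receives y = Phi @ x,
# from which x is recovered with a sparse decoder.
y = sum(phi[:, i] * x[i] for i in range(N))
assert np.allclose(y, phi @ x)
```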
This document summarizes and compares various clustering protocols for wireless sensor networks. It discusses clustering parameters like number of clusters and node mobility. It also classifies clustering algorithms into two main categories: probabilistic (e.g. LEACH) and non-probabilistic (e.g. weight-based and graph-based). Popular probabilistic protocols like LEACH, EEHC and HEED are described. Non-probabilistic protocols discussed include those based on node proximity, weights, and biologically inspired approaches. Overall, the document provides an overview of different clustering algorithm types and compares their advantages and disadvantages.
This document provides an overview of several clustering algorithms. It begins by defining clustering and its importance in data mining. It then categorizes clustering algorithms into four main types: partitional, hierarchical, grid-based, and density-based. For each type, some representative algorithms are described briefly. The document also reviews several popular clustering algorithms like k-means, CLARA, PAM, CLARANS, and BIRCH in more detail. It discusses aspects like the algorithms' time complexity, types of data handled, ability to detect clusters of different shapes, required input parameters, and advantages/disadvantages. Overall, the document aims to guide selection of suitable clustering algorithms for specific applications by surveying their key characteristics.
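For orientation, a minimal k-means loop is sketched below; the surveyed algorithms such as CLARA, PAM, CLARANS, and BIRCH refine or replace steps of this basic scheme. Names here are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centre, then recompute centres.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```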
Abstract— Cloud storage is usually a distributed infrastructure in which data is not stored on a single device but spread across several storage nodes located in different areas. To ensure data availability, some amount of redundancy has to be maintained, but introducing redundancy brings additional costs such as extra storage space and the communication bandwidth required for restoring data blocks. Existing systems treat the storage infrastructure as homogeneous, with all nodes having the same online availability, which leads to efficiency losses. The proposed system treats the distributed storage system as heterogeneous, where each node exhibits different online availability. Monte Carlo sampling is used to measure the online availability of storage nodes, and a parallel version of Particle Swarm Optimization assigns redundant data blocks according to that availability. The optimal data assignment policy reduces the redundancy and its associated cost.
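A hedged sketch of the Monte Carlo step: estimating a node's online availability by resampling its observed up/down history. The sampling model here is an assumption; the paper's estimator may differ.

```python
import random

def estimate_availability(uptime_log, samples=10_000, rng=random.Random(1)):
    """uptime_log: list of (timestamp, online: bool) observations for one node.
    Returns the fraction of sampled observations in which the node was online."""
    hits = sum(rng.choice(uptime_log)[1] for _ in range(samples))
    return hits / samples
```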
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document summarizes a research paper that proposes a hybrid evolutionary clustering approach for optimized routing in mobile ad hoc networks. It uses particle swarm optimization (PSO) and ant colony optimization (ACO) to perform spatial clustering of nodes. Greedy routing is then used to find routes, and when dead ends are encountered, genetic algorithms are applied to find alternative routes. The approach aims to improve greedy routing performance and recovery from dead ends by avoiding the use of floating nodes. Simulation results showed improved greedy routing and fewer concave nodes compared to other methods.
A Survey on Balancing the Network Load Using Geographic Hash Tables (IOSR Journals)
This document summarizes a survey on balancing network load using geographic hash tables. It discusses how geographic hash tables are used to store and retrieve data from nodes in a wireless network. Two approaches to balancing the network load are proposed: 1) An analytical approach that adds new nodes to servers when load exceeds thresholds. 2) A heuristic approach that moves data between nodes to balance load without changing underlying routing protocols. The approaches aim to prevent many requests from going to single nodes. Load balancing improves network lifespan by distributing transmission and reception operations across nodes.
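The core geographic-hash operation can be sketched as follows (the hash construction and names are illustrative assumptions): a data name is hashed to a point in the deployment area, and the datum is stored at the node closest to that point.

```python
import hashlib

def geographic_hash(key, width, height):
    # Map a data name deterministically to a point in the deployment area.
    digest = hashlib.sha1(key.encode()).digest()
    x = int.from_bytes(digest[:4], 'big') / 2**32 * width
    y = int.from_bytes(digest[4:8], 'big') / 2**32 * height
    return x, y

def home_node(key, nodes, width, height):
    # The node nearest the hashed point stores (and serves) the datum.
    hx, hy = geographic_hash(key, width, height)
    return min(nodes, key=lambda n: (n[0]-hx)**2 + (n[1]-hy)**2)
```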
MULTIDIMENSIONAL ANALYSIS FOR QOS IN WIRELESS SENSOR NETWORKS (ijcses)
Nodes in a Mobile Ad-hoc network are connected wirelessly and the network is auto-configuring [1]. This paper introduces the usefulness of a data warehouse as an alternative for managing data collected by WSNs. Wireless sensor networks produce huge quantities of data that need to be processed and homogenised to help researchers and others interested in the information. Collected data is managed and compared with data coming from other sources, and such systems can contribute to technical reporting and decision making. This paper proposes a model to design, extract, transform, and normalize data collected by wireless sensor networks by implementing a multidimensional warehouse for comparing many aspects of a WSN (routing protocol [4], sensor, sensor mobility, cluster, ...). Hence, a data warehouse defined and applied to this context is presented as a useful approach that gives specialists raw data and information for decision processes and lets them navigate from one aspect to another.
Abstract: Energy consumption is one of the constraints in Wireless Sensor Networks (WSNs). Routing protocols are an active area for addressing quality-of-service (QoS) issues such as energy consumption, network lifetime, network scalability, and packet overhead. In the existing system, a hybrid optimization based PEGASIS-DSR optimized routing protocol (PDORP) is presented, which uses the caching and directional transmission concepts of both proactive and reactive routing protocols. The performance of PDORP has been evaluated, and the results indicate that it performs better on the most significant parameters. However, when the existing method is evaluated and validated with nodes that are highly dynamic, as the application may require, it falls short: it finds trusted nodes only in a static environment. To overcome this issue, the proposed system targets dynamic WSNs whose node locations change frequently. PDORP-LC applies local caching (LC) to acquire location information so that path learning can be dynamic without depending on fixed locations. The proposed work operates in a dynamic environment with dynamic derivation of trusted nodes.
Keywords: local caching (LC), Wireless Sensor Networks (WSNs), PEGASIS-DSR optimized routing protocol (PDORP).
Title: Energy Efficient Optimal Paths Using PDORP-LC
Author: ADARSH KUMAR B, BIBIN CHRISTOPHER, ISSAC SAJAN, AJ DEEPA
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
DEVELOPING A NOVEL MULTIDIMENSIONAL MULTIGRANULARITY DATA MINING APPROACH FOR... (cscpconf)
Data mining is one of the most significant tools for discovering association patterns useful in many knowledge domains, yet existing mining techniques have drawbacks. Three main weaknesses of current data-mining techniques are: 1) the entire database must be re-scanned whenever new attributes are added; 2) an association rule may hold at one granularity but fail at a finer one, and vice versa; 3) current methods can find either frequent rules or infrequent rules, but not both at the same time. This research proposes a novel data schema and an algorithm that address these weaknesses while improving the efficiency and effectiveness of data-mining strategies. The crucial mechanisms in each step are clarified in this paper. Finally, experimental results on the efficiency, scalability, information loss, etc. of the proposed approach are presented to demonstrate its advantages.
IRJET- Clustering of Hierarchical Documents based on the Similarity Deduc... (IRJET Journal)
This document discusses techniques for clustering hierarchical documents based on their structural similarity. It summarizes several existing approaches:
1) A tree edit distance-based method that represents trees as paths and computes the distance between subtrees. However, it requires trees to have a pre-specified structure.
2) Chawathe's algorithm that uses pre-order tree traversal and transforms trees into sequences of node labels and depths to calculate distances. It allows efficient assignment of new documents to clusters.
3) The XCLSC algorithm that clusters documents in two phases - grouping structurally similar documents and then searching to further improve clustering results and performance. However, it has high computational requirements.
4) The XPattern and PathXP
Clustering for Stream and Parallelism (DATA ANALYTICS) (Dheeraj Pachauri)
The document summarizes information about a group project involving data stream clustering. It lists the group members and then discusses key concepts related to data stream clustering like requirements for algorithms, common algorithm types and steps, prototypes and windows. It also touches on outliers and applications of clustering.
A Survey Paper on Cluster Head Selection Techniques for Mobile Ad-Hoc Network (IOSR Journals)
This document summarizes several cluster head selection techniques for mobile ad-hoc networks (MANETs). It discusses techniques that select the cluster head based on attributes like node ID, degree of connectivity, mobility, load balancing, and power consumption. Some techniques aim to improve stability and reduce overhead by minimizing cluster changes. Each technique has advantages like simplicity or load balancing, and disadvantages like additional messaging or inability to eliminate ties between nodes. The survey provides a comparison of the techniques on their selection criteria and merits and demerits.
Benefit based data caching in ad hoc networks (synopsis) (Mumbai Academisc)
This document summarizes a research paper that proposes a benefit-based caching algorithm for wireless ad hoc networks. The paper presents two algorithms: (1) A centralized approximation algorithm that provably delivers a solution with benefit of at least 1/4 of the optimal benefit for minimizing total data access cost. (2) A localized distributed algorithm based on the approximation algorithm that can handle node mobility and dynamic traffic conditions. Simulations show the distributed algorithm performs close to the approximation algorithm and outperforms an existing caching technique, especially in more challenging scenarios. The paper provides the first distributed implementation of an approximation algorithm for general cache placement in ad hoc networks.
Communication synchronization in cluster based wireless sensor network a re... (eSAT Journals)
Abstract: Wireless sensor networks are gaining popularity in many sectors. Scalability, low latency, and energy efficiency are challenges a wireless sensor network should meet. Clustering permits sensors to communicate systematically within and among clusters, and cluster-based sensor networks address these challenges by providing flexibility, energy savings, and QoS. Communication efficiency and network performance degrade if inter-cluster and intra-cluster communication are not managed properly. The proposed work uses two approaches to solve this problem: the first uses cycle-based synchronous scheduling to achieve low packet delay and high throughput; the second removes the need for communication synchronization entirely and sends packets with no synchronization delay. A combined scheme can take advantage of both approaches. Keywords: Wireless sensor network, clustering, communication synchronization, QoS.
Dynamic selection of cluster head in in networks for energy management (eSAT Journals)
Abstract: This project presents a Multipath Region Routing (MRR) protocol for energy conservation in Wireless Sensor Networks (WSNs). Large-scale dense WSNs are used in many types of applications that require accurate monitoring, and energy conservation is an important issue in them. To save energy, the MRR protocol balances energy consumption and sustains the network lifespan. Because the cluster head collects data directly from other nodes, energy dissipation is reduced; hence energy is preserved and the network lifetime is extended. Keywords: Clustering; Wireless Sensor Networks; Security; Multipath Region Routing.
Dynamic selection of cluster head in in networks for energy management (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document discusses load balancing strategies for grid computing. It proposes a dynamic tree-based model that represents the grid architecture hierarchically and supports heterogeneity and scalability. It then develops a hierarchical load balancing strategy and algorithms based on neighbourhood properties to decrease communication overhead. Conventional scheduling algorithms like Min-Min, Max-Min, and Sufferage are discussed but found to ignore dynamic network status, which is important for load balancing. Genetic algorithms are also mentioned as a potential solution.
This document summarizes a Real-Time Monitoring and Controlling (RMC) approach for networks. The RMC system allows a network administrator to monitor client systems on a local area network by viewing their IP addresses and system information. It also allows the administrator to control clients by locking their machines or performing shutdown operations from the server. The approach uses a graphical user interface and client-server model to provide real-time monitoring and controlling capabilities for maintaining security and reliability across the network.
1) The document presents MyLearnMate, a computer-based education system that uses touch, drag-and-drop, and ink features to interactively teach math and science concepts to primary school students.
2) Key concepts are taught through activities where students can touch, drag, and drop objects to learn things like days of the week, human anatomy, and the concept of force. Microsoft ink is used for writing practice.
3) A study found that using MyLearnMate improved students' test performance on math and science questions compared to traditional classroom learning. Teachers and students also preferred the interactive learning methods over traditional teaching.
1. The document analyzes science performance and dropout rates in France based on PISA test results from 2006-2009 compared to other developed countries.
2. While France achieved average results in math, its science scores remained below average and did not improve from 2006-2009. Dropout rates in France are about 11%.
3. The study finds that elementary and secondary curricula in France allocate fewer weekly hours to science compared to other core subjects, which may contribute to lower performance and higher dropout rates in science. Remedies discussed include improving teaching quality and fostering students' self-perception in science.
The document summarizes a mathematical algorithm for quickly identifying steganographic signatures in images. It defines key concepts used in the algorithm such as the definition of an image, pixel neighborhood, pixel aberration, etc. The algorithm analyzes any given image and generates a "concentrating suspicion value" (Γ) which is a numerical value indicating how likely the image contains hidden information embedded using concentrating steganographic algorithms. Images with higher Γ values are more likely to contain stego information. The algorithm provides a fast way to filter images for more thorough interrogation.
This document summarizes key points for socio-economic development in Aceh, Indonesia following conflict. It recommends:
1) Developing through participatory planning that engages local communities and innovation.
2) Ensuring political stability and peace by addressing injustices and providing jobs for ex-fighters.
3) Prioritizing micro-economic policies like entrepreneurship programs and credit facilities to revive small businesses.
This document summarizes a research paper about democratic deficit and political participation in Nigeria. It discusses how most Nigerians do not participate in the political process, instead leaving it to political elites and their supporters. This has led to erosion of the social contract and democratic deficit. Leadership has become self-serving, lacking policy direction, corrupt, and developmentally deficient. However, active citizenship can lead to good governance. The paper argues that both citizens and leaders need to be on equal footing in the Nigerian system. Civil society and other groups should encourage political transformation and development through greater citizen participation.
This document discusses issues in sentiment analysis and emotion extraction from text. It provides an overview of natural language processing and its applications. The document then discusses the need for sentiment analysis in areas like artificial intelligence. It proceeds to compare different techniques for emotion extraction from text, including text mining, empirical studies, emotion extraction engines, vector space models, and emotion markup languages. For each technique, it outlines the general approach and provides examples or tables to illustrate how emotions can be identified from text. However, it notes that current applications have not achieved 100% accuracy in realistic sentiment analysis.
Modified Pure Radix Sort for Large Heterogeneous Data Set (IOSR Journals)
The document presents a modified pure radix sort algorithm for sorting large heterogeneous data sets. It discusses problems with traditional radix sort algorithms and previous work optimizing radix sort. The proposed algorithm divides the data into numeric and string clusters. It then distributes the numeric data into subsets of equal length which are sorted in parallel using an approach that bypasses certain digits in each pass. String data is sorted by assigning numbers to identical strings. The algorithm is tested on two machines and shows improved performance over traditional radix sort and quicksort, providing sorting times 10-20% faster for large heterogeneous datasets.
This document summarizes a research paper that proposes a modified pure radix sort algorithm for large heterogeneous data sets in distributed computing environments. It begins with an introduction to sorting and radix sorting. It then reviews previous work on optimizing radix sort, including reducing memory accesses and improving data locality. The paper proposes a new modified pure radix sort algorithm aimed at optimizing problems with radix sort for large heterogeneous data sets through a distributed computing approach using divide and conquer.
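A minimal sequential sketch of the numeric phase described above, assuming non-negative integers: numbers are bucketed by digit count so each bucket can be radix-sorted independently (in the paper's setting, in parallel), and concatenating the buckets in length order yields the sorted list. The even/odd digit-bypassing optimization is not reproduced here.

```python
from collections import defaultdict

def lsd_radix_sort(nums, digits):
    # Least-significant-digit radix sort over fixed-width numbers.
    for d in range(digits):
        buckets = [[] for _ in range(10)]
        for n in nums:
            buckets[(n // 10**d) % 10].append(n)
        nums = [n for b in buckets for n in b]
    return nums

def sort_by_length(nums):
    # Bucket by digit count; each bucket is sortable independently, and for
    # non-negative integers shorter numbers always precede longer ones.
    by_len = defaultdict(list)
    for n in nums:
        by_len[len(str(n))].append(n)
    return [m for length in sorted(by_len)
            for m in lsd_radix_sort(by_len[length], length)]
```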
Empirical Analysis of Radix Sort using Curve Fitting Technique in Personal Co... (IRJET Journal)
The document empirically analyzes the radix sort algorithm using curve fitting techniques on data collected from running radix sort on different data sizes on a personal computer. It implements radix sort in C and runs it 100 times for data sizes ranging from 10,000 to 27,000, recording the average run times. It then uses curve fitting to identify the model that best fits the run time versus data size data points, using R-squared, adjusted R-squared, and root mean square error. The analysis finds that the power model provides the best fit for the data.
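The fitting step can be reproduced with a few lines of SciPy; the runtime numbers below are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

sizes = np.array([10_000, 14_000, 18_000, 22_000, 27_000], float)
times = np.array([0.012, 0.018, 0.024, 0.031, 0.040])  # placeholder seconds

# Fit the power model T(n) = a * n**b and score it with R-squared.
power = lambda n, a, b: a * n**b
(a, b), _ = curve_fit(power, sizes, times, p0=(1e-6, 1.0))
resid = times - power(sizes, a, b)
r2 = 1 - (resid**2).sum() / ((times - times.mean())**2).sum()
print(f"T(n) ~ {a:.3g} * n^{b:.3f},  R^2 = {r2:.4f}")
```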
This document discusses spatial approximate string search in large spatial databases. It proposes the MHR-tree and RSASSOL methods. The MHR-tree embeds min-wise signatures into an R-tree to enable approximate string search in Euclidean space. RSASSOL combines q-gram inverted lists and reference node pruning for exact string search on road networks. Experiments on real datasets with millions of points and hundreds of thousands of nodes show the efficiency and effectiveness of the proposed approaches over baseline methods.
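A small sketch of the min-wise (MinHash) signature construction that the MHR-tree embeds into R-tree nodes; the hash choice, q, and signature length here are assumptions.

```python
import hashlib

def qgrams(s, q=2):
    # The set of overlapping q-grams of a string (assumes len(s) >= q).
    return {s[i:i+q] for i in range(len(s) - q + 1)}

def minhash_signature(s, q=2, k=16):
    # For each of k seeded hash functions, keep the minimum hash over q-grams.
    sig = []
    for seed in range(k):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16)
            for g in qgrams(s, q)))
    return sig

def estimated_jaccard(sig1, sig2):
    # Fraction of agreeing positions estimates q-gram set similarity.
    return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)
```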
This document discusses generating frequent itemsets using the RElim algorithm on Hadoop clusters. It begins with an abstract describing frequent itemset mining and how MapReduce is useful for large-scale data mining applications. It then provides background on Hadoop and MapReduce, describing how it partitions data and computation across clusters. The document introduces association rule mining and describes how traditional algorithms like Apriori have limitations at large scales. It proposes using the RElim algorithm on Hadoop's MapReduce framework to overcome these limitations and efficiently generate frequent itemsets from big data.
EVALUATING CASSANDRA, MONGO DB LIKE NOSQL DATASETS USING HADOOP STREAMING (ijiert bestjournal)
This document summarizes a research paper that evaluates Cassandra and MongoDB NoSQL databases for processing unstructured data using Hadoop streaming. It proposes a system with three stages: data preparation where data is downloaded from Cassandra servers to file systems; data transformation where JSON data is converted to other formats using MapReduce; and data processing where non-Java executables run on the transformed data. The document reviews related work on Cassandra and Hadoop performance and discusses the data models of key-value, document, column-oriented, and graph databases. It concludes that comparing Cassandra and MongoDB can help process unstructured data and outline new approaches.
The document discusses porting a seismic inversion code to run in parallel using standard message passing libraries. It describes three options considered for distributing the large 3D seismic data across processors: mapping the data to a processor grid, treating it as a sparse matrix problem, or distributing the data as 1D vectors assigned to each processor. The third option was chosen as it best preserved the code structure, had regular dependencies, and simplified communications. The parallel code was implemented using the Distributed Data Library (DDL) for data management and the Message Passing Interface (MPI) for basic point-to-point communication between processors. Initial tests on clusters showed near linear speedup on up to 30 processors.
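A toy mpi4py sketch of the chosen option, block-distributing the flattened volume one contiguous chunk per rank; the DDL layer itself is not shown, and the sizes are placeholders.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1_000_000                 # total samples in the flattened 3D volume
counts = [n_global // size + (r < n_global % size) for r in range(size)]
data = np.arange(n_global, dtype='d') if rank == 0 else None

# Each rank receives one contiguous 1D chunk of the global volume.
local = np.empty(counts[rank], dtype='d')
comm.Scatterv([data, counts, MPI.DOUBLE], local, root=0)
print(f"rank {rank} holds {local.size} samples")
```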
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORI ALGORITHM FOR HANDLING VOLUMIN... (acijjournal)
Apriori is one of the key algorithms for generating frequent itemsets. Analysing frequent itemsets is a crucial step in analysing structured data and in finding association relationships between items, and it stands as an elementary foundation for supervised learning, which encompasses classifier and feature extraction methods. Applying this algorithm is crucial to understanding the behaviour of structured data. Most structured data in scientific domains is voluminous, and processing such data requires state-of-the-art computing machines; setting up such an infrastructure is expensive. Hence a distributed environment such as a cluster is employed for tackling such scenarios. The Apache Hadoop distribution is one of the cluster frameworks for distributed environments that helps by distributing voluminous data across a number of nodes in the framework. This paper focuses on the map/reduce design and implementation of the Apriori algorithm for structured data analysis.
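A minimal pure-Python imitation of one map/reduce pass of Apriori, with Hadoop's plumbing (splits, shuffle, HDFS) replaced by in-memory structures: mappers emit (candidate, 1) for every size-k itemset in a transaction, and reducers sum the counts and apply the support threshold.

```python
from itertools import combinations
from collections import defaultdict

def map_phase(transactions, k):
    # Emit (candidate itemset, 1) for each k-subset of each transaction.
    for t in transactions:
        for cand in combinations(sorted(t), k):
            yield cand, 1

def reduce_phase(pairs, min_support):
    # Sum counts per candidate and keep only the frequent ones.
    counts = defaultdict(int)
    for cand, one in pairs:
        counts[cand] += one
    return {c: n for c, n in counts.items() if n >= min_support}

txns = [{'a', 'b', 'c'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
print(reduce_phase(map_phase(txns, 2), min_support=2))
```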
Data Partitioning in Mongo DB with Cloud (IJAAS Team)
Cloud computing offers useful services such as IaaS, PaaS, and SaaS for deploying applications at low cost, making them available anytime, anywhere, with the expectation that they be scalable and consistent. One technique to improve scalability is data partitioning, but existing techniques are not capable of tracking the data access pattern. This paper implements a scalable workload-driven technique for improving the scalability of web applications. The experiments are carried out over the cloud using the NoSQL data store MongoDB to scale out. This approach offers low response time, high throughput, and fewer distributed transactions. The partitioning technique is evaluated using the TPC-C benchmark.
IRJET- Review of Existing Methods in K-Means Clustering Algorithm (IRJET Journal)
The document reviews existing methods for the k-means clustering algorithm. It discusses how k-means clustering works and some of its limitations when dealing with large datasets, such as being dependent on the initial choice of centroids. It then proposes using Hadoop to overcome big data challenges and calculate preliminary centroids for k-means clustering in a distributed manner. Finally, it reviews different techniques that have been proposed in other research to improve k-means clustering, such as methods for selecting better initial centroids or determining the optimal number of clusters.
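One widely cited remedy for the initial-centroid sensitivity mentioned above is k-means++ seeding, named here as a concrete example (the review may cover different selection methods): each next centre is drawn with probability proportional to its squared distance from the nearest centre already chosen.

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]          # first centre uniformly at random
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen centre.
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```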
Iaetsd mapreduce streaming over cassandra datasets (Iaetsd)
This document discusses processing large datasets from Denmark's traffic using Apache Cassandra and MapReduce. It begins with an introduction to big data and how the volume, velocity, and variety of data requires alternative processing methods. Apache Cassandra is introduced as a distributed and scalable NoSQL database for storing large amounts of structured and unstructured data across servers. The document then discusses Cassandra's data model and system architecture. It describes how MapReduce can be used for distributed processing of datasets stored in Cassandra. The paper aims to process traffic datasets from Denmark using Cassandra and MapReduce to help the transportation department monitor traffic.
Hardware Implementations of RS Decoding Algorithm for Multi-Gb/s Communicatio... (RSIS International)
In this paper, we have designed the VLSI hardware for a novel RS decoding algorithm suitable for Multi-Gb/s Communication Systems. Through this paper we show that the performance benefit of the algorithm is truly witnessed when implemented in hardware thus avoiding the extra processing time of Fetch-Decode-Execute cycle of traditional microprocessor based computing systems. The new algorithm with less time complexity combined with its application specific hardware implementation makes it suitable for high speed real-time systems with hard timing constraints. The design is implemented as a digital hardware using VHDL
An OpenCL Method of Parallel Sorting Algorithms for GPU Architecture (Waqas Tariq)
In this paper, we present a comparative performance analysis of different parallel sorting algorithms: Bitonic sort and Parallel Radix Sort. In order to study the interaction between the algorithms and the architecture, we implemented both algorithms in OpenCL and compared their performance with the Quick Sort algorithm, widely regarded as among the fastest sequential sorting algorithms. In our simulation, we used an Intel Core2Duo CPU at 2.67 GHz and an NVidia Quadro FX 3800 as the graphics processing unit.
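For reference, a sequential version of the bitonic sorting network is sketched below; every compare-exchange within a stage is independent of the others, which is what lets the network map naturally onto OpenCL work-items. This sketch is CPU-only.

```python
def bitonic_sort(a):
    # In-place iterative bitonic sort for power-of-two lengths.
    n = len(a)
    assert n & (n - 1) == 0, "bitonic sort needs a power-of-two length"
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):          # each i is an independent work-item
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return a
```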
MapReduce is a programming model for processing large datasets in a distributed system. It allows parallel processing of data across clusters of computers. A MapReduce program defines a map function that processes key-value pairs to generate intermediate key-value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. The MapReduce framework handles parallelization of tasks, scheduling, input/output handling, and fault tolerance.
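As a toy illustration of the model this summary describes, the sketch below runs the map, shuffle-by-key, and reduce phases on a single machine. The function names and the word-count example are our own illustrative choices; a real framework adds the distribution, scheduling, and fault tolerance mentioned above.

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, map_fn, reduce_fn):
    """Toy single-machine illustration of the MapReduce model."""
    # Map phase: each record yields intermediate (key, value) pairs.
    intermediate = chain.from_iterable(map_fn(r) for r in records)
    # Shuffle phase: group all values by their intermediate key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: merge all values associated with the same key.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Classic word count expressed as a map function and a reduce function.
word_count = map_reduce(
    ["to be or not to be"],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda word, counts: sum(counts),
)
print(word_count)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```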
Hot-Spot analysis Using Apache Spark frameworkSupriya .
This document describes using Apache Spark and GeoSpark to process large-scale spatial and spatial-temporal data. It discusses loading spatial data into Resilient Distributed Datasets (RDDs) using GeoSpark APIs and performing operations like spatial range queries, k-nearest neighbor queries, and spatial joins on the data. It also describes implementing hot spot analysis to identify statistically significant hot spots in the spatial data using spatial statistics in Apache Spark. The document outlines the system design, including using Hadoop and Spark on a cluster, and describes experiments run on spatial data to analyze query efficiency and performance at scale.
Effective Sparse Matrix Representation for the GPU ArchitecturesIJCSEA Journal
General-purpose computation on the graphics processing unit (GPU) is prominent in the current high performance computing era. Porting or accelerating data-parallel applications onto the GPU gives a default performance improvement because of the increased number of computational units. Better performance can be obtained if application-specific fine tuning is done with respect to the architecture under consideration. One very widely used computation-intensive kernel is sparse matrix-vector multiplication (SpMV) in sparse-matrix-based applications. Most existing data format representations of sparse matrices were developed with respect to the central processing unit (CPU) or multi-cores. This paper gives a new format for sparse matrix representation with respect to the graphics processor architecture that can give 2x to 5x performance improvement compared to CSR (compressed sparse row format), 2x to 54x compared to COO (coordinate format), and 3x to 10x compared to the CSR vector format for the class of applications that fit the proposed new format. It also gives 10% to 133% improvement in memory transfer (of only the access information of the sparse matrix) between CPU and GPU. The paper gives the details of the new format and its requirements, with complete experimentation details and comparison results.
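For readers unfamiliar with the baseline formats named in this summary, here is a small illustration of ours showing the COO and CSR layouts and an SpMV kernel over CSR; the paper's proposed GPU-specific format is not reproduced here.

```python
import numpy as np

def to_coo(dense):
    """COO: three parallel arrays (row, col, value), one entry per non-zero."""
    rows, cols = np.nonzero(dense)
    return rows, cols, dense[rows, cols]

def to_csr(dense):
    """CSR: column indices and values in row order, plus row pointers."""
    rows, cols, vals = to_coo(dense)
    row_ptr = np.zeros(dense.shape[0] + 1, dtype=int)
    for r in rows:                      # count non-zeros per row...
        row_ptr[r + 1] += 1
    row_ptr = np.cumsum(row_ptr)        # ...then prefix-sum into row pointers
    return row_ptr, cols, vals

def csr_spmv(row_ptr, cols, vals, x):
    """SpMV y = A @ x computed directly from the CSR arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for r in range(len(y)):
        for i in range(row_ptr[r], row_ptr[r + 1]):
            y[r] += vals[i] * x[cols[i]]
    return y

A = np.array([[5, 0, 0], [0, 0, 3], [2, 0, 1]])
row_ptr, cols, vals = to_csr(A)
print(csr_spmv(row_ptr, cols, vals, np.array([1.0, 2.0, 3.0])))  # [5. 9. 5.]
```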
A Parallel Algorithm Template for Updating Single-Source Shortest Paths in La...Subhajit Sahu
Highlighted notes on A Parallel Algorithm Template for Updating Single-Source Shortest Paths in Large-Scale Dynamic Networks.
While doing research work under Prof. Dip Banerjee and Prof. Kishore Kothapalli.
For SSSP the researchers give an update algorithm for handling edge insertions and deletions. They implement it in OpenMP and CUDA and compare with Galois and Gunrock respectively. Each vertex carries an additional "affected" flag; in a later step the "affected" flags are used to iteratively update distances. To avoid loops, disconnected vertices are set to INF. Edge deletions are slower (they need tree repair).
They have shown graphs for 50M and 100M changes, but I couldn't find what batch size they use. Is it 50M/100M?
Later they did mention experiments with batch sizes 15, 30, 50. Is it 50 changes or 50M changes?
IOSR Journal of Computer Engineering (IOSRJCE)
ISSN: 2278-0661 Volume 3, Issue 1 (July-Aug. 2012), PP 20-23
www.iosrjournals.org
Modified Pure Radix Sort for Large Heterogeneous Data Set
A. Avinash Shukla1, B. Anil Kishore Saxena 2
Abstract: We propose a Modified Pure Radix Sort for large heterogeneous data sets. In this research paper we discuss the problems of radix sort, briefly review previous work on radix sort, and present a new modified pure radix sort algorithm for large heterogeneous data sets. We attempt to optimize all the related problems of radix sort through this algorithm. The algorithm works on distributed computing technology and is implemented on the principle of the divide-and-conquer method.
I. Introduction
Sorting is a computational building block of fundamental importance and is the most widely studied
algorithmic problem. The importance of sorting has led to the design of efficient sorting algorithms for a variety
of architectures. Many applications rely on the availability of efficient sorting routines as a basis for their own
efficiency, while some algorithms can be conveniently phrased in terms of sorting. Radix sort is an algorithm that sorts numbers by processing individual digits: n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process the digits of each number starting either from the least significant digit (LSD) or from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving relative order using a stable sort, then sorts by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While LSD radix sort requires the use of a stable sort, MSD radix sort does not (unless stable sorting is desired); MSD radix sort is not stable. Counting sort is commonly used internally by radix sort; a hybrid approach, such as using insertion sort for small bins, improves the performance of radix sort significantly.
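As a concrete reference for the description above, here is a minimal sketch of LSD radix sort using counting sort as the stable per-digit pass. The function names and the choice of base 10 are our own illustrative assumptions, not from the paper.

```python
def counting_sort_by_digit(nums, exp):
    """Stable counting sort of non-negative ints on the decimal digit selected by exp (1, 10, 100, ...)."""
    count = [0] * 10
    output = [0] * len(nums)
    for x in nums:                       # histogram of the current digit
        count[(x // exp) % 10] += 1
    for d in range(1, 10):               # prefix sums give each digit's final slice
        count[d] += count[d - 1]
    for x in reversed(nums):             # backward pass keeps equal digits in order (stability)
        d = (x // exp) % 10
        count[d] -= 1
        output[count[d]] = x
    return output

def lsd_radix_sort(nums):
    """Sort non-negative integers digit by digit, least significant first."""
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:          # one stable pass per digit position
        nums = counting_sort_by_digit(nums, exp)
        exp *= 10
    return nums
```

Each pass is O(n) and at most k passes are made, giving the O(n · k) bound stated above.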
II. Review of Related Literature
Rajeev Raman [1] illustrated the importance of reducing misses in the standard implementation of least-significant-bit-first (LSB) radix sort; the proposed techniques simultaneously reduce cache and TLB misses for LSB radix sort, and all of them yield implementations of LSB radix sort that compare favourably with comparison-based sorting algorithms. Daniel [2] explained the Communication- and Cache-Conscious Radix sort algorithm (C3-Radix sort). C3-Radix sort uses distributed and shared-memory parallel programming models, exploiting memory-hierarchy locality and reducing the amount of communication on distributed-memory computers. C3-Radix sort was implemented and analysed on the SGI Origin 2000 NUMA multiprocessor, with results for up to 16 processors and 64M 32-bit keys. The results show that for data sets that are small compared to the number of processors the MPI implementation is faster, while for large data sets the shared-memory implementation is faster. Shin-Jae Lee [3] addressed the load-imbalance problem present in parallel radix sort: by redistributing the keys in each round of radix sort, each processor holds exactly the same number of keys, thereby reducing the overall sorting time. Load-balanced radix sort is currently the fastest internal sorting method for distributed-memory multiprocessors. However, once the computation time is balanced, the communication time becomes the bottleneck of the overall sorting performance. The proposed algorithm preprocesses the keys by redistribution to eliminate this communication time. Once the keys are localized to each processor, the sorting is confined within the processor, eliminating the need for global redistribution of keys and enabling well-balanced communication and computation across processors. Experimental results with various key distributions indicate significant improvements over balanced radix sort. Jimenez-Gonzalez [4] introduced a new algorithm called Sequential Counting Split Radix sort (SCS-Radix sort). The three important features of SCS-Radix are the dynamic detection of data skew, the exploitation of the memory hierarchy, and execution-time stability when sorting data sets with different characteristics. They claim the algorithm is 1.2 to 45 times faster than radix sort or quicksort. Navarro and Josep [5] focused on improving data locality. CC-Radix improves data locality by dynamically partitioning the data set into subsets that fit in cache level L2; once in that cache level, each subset is sorted with radix sort. The proposed algorithm is about 2 and 1.4 times faster than quicksort and explicit block transfer radix sort, respectively. Nadathur Satish [6] proposed high-performance parallel radix sort and merge sort routines for many-core GPUs, taking advantage of the full programmability offered by CUDA. Their radix sort is the fastest GPU sort and their merge sort the fastest comparison-based GPU sort reported in the literature. For optimal performance, the algorithms exploit substantial fine-grained parallelism and decompose the computation into independent tasks. By exploiting the high-speed on-chip shared memory provided by NVIDIA's GPU architecture and efficient data-parallel primitives, particularly parallel scan, the algorithms are well suited to GPUs. N. Ramprasad and Pallav Kumar Baruah [7] suggested an optimization for the parallel radix sort algorithm, reducing the time complexity of the algorithm and ensuring a balanced load on all processors. They implemented it on the Cell processor, the first implementation of the Cell Broadband Engine Architecture (CBEA), a heterogeneous multi-core processor system; 102,400,000 elements were sorted in 0.49 seconds, at a rate of 207 million per second. Shibdas Bandyopadhyay and Sartaj Sahni [8] developed a new radix sort algorithm for GPUs, GRS, that reads and writes records from/to global memory only once, whereas the existing SDK radix sort algorithm does this twice. Experiments indicate that GRS is 21% faster than the SDK sort while sorting 100M numbers, and faster by between 34% and 55% when sorting 40M records with 1 to 9 32-bit fields. Daniel Jiménez-González, Juan J. Navarro and Josep-L. Larriba-Pey [9] proposed a parallel in-memory 64-bit sort, an important problem in database management systems and in other applications such as Internet search engines and data mining tools. The algorithm is termed Parallel Counting Split Radix sort (PCS-Radix sort). The parallel stages of the algorithm increase data locality, balance the load between processors caused by data skew, and significantly reduce the amount of data communicated. The local stages of PCS-Radix sort are performed only on the bits of the key that have not been sorted during the parallel stages. PCS-Radix sort adapts to any parallel computer by changing three simple algorithmic parameters. They implemented the algorithm on a Cray T3E-900, and the results show that it is more than 2 times faster than the previous fastest 64-bit parallel sorting algorithm; PCS-Radix sort achieves a speed-up of more than 23 on 32 processors relative to the fastest sequential algorithm at hand. Daniel Cederman and Philippas Tsigas [10] presented GPU-Quicksort, an efficient quicksort algorithm suitable for highly parallel multi-core graphics processors. Quicksort had previously been considered an inefficient sorting solution for graphics processors, but GPU-Quicksort often performs better than the fastest known sorting implementations for graphics processors, such as radix and bitonic sort. Quicksort can thus be seen as a viable alternative for sorting large quantities of data on graphics processors.
III. Proposed Modified Radix Sort
It is observed that no single method is optimal for all available data sets, with their varying complexity of size, number of fields, length, etc. Thus an attempt is made to select a set of data sets and optimize the implementation by modifying the basic algorithm. The above-mentioned problems of the sorting algorithm are addressed by the proposed algorithm. The algorithm depends on a distributed computing environment, and its implementation is proposed on many-core machines. The given heterogeneous list is divided between two main processes, one numeric and the other string, and these two processes work simultaneously. Suppose p1 and p2 are the two main processes; each process has a unique processor. Process p1 further distributes its data into different sub-lists according to equal length of elements in a list. These lists are sorted simultaneously using an even/odd logic: passes are applied alternately across the digits. After sorting these lists, all of them are combined and the combined list is sorted again. In the case of p2, a pattern is formed; using the unique pattern, the matching strings are selected. Among these strings, identical strings are given identical numeric values. The proposed algorithm is then applied to these numeric values to sort the given strings.
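As one possible concrete rendering of process p1, the sketch below groups elements by decimal digit count, sorts the sub-lists concurrently, and then sorts the merged result; it reuses lsd_radix_sort from the sketch in the introduction. The grouping criterion, the thread pool standing in for the paper's dedicated processors, and a plain LSD pass order in place of the paper's even/odd digit schedule (which the text leaves underspecified) are all our assumptions.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def numeric_process_p1(nums):
    """Hedged sketch of process p1 for non-negative integers."""
    sublists = defaultdict(list)
    for x in nums:                        # steps i/ii: group elements of equal length
        sublists[len(str(x))].append(x)
    # One worker per sub-list stands in for the paper's one-processor-per-list setup
    # (in CPython, threads are illustrative rather than truly parallel for CPU work).
    with ThreadPoolExecutor() as pool:
        sorted_subs = list(pool.map(lsd_radix_sort, sublists.values()))
    merged = [x for sub in sorted_subs for x in sub]
    return lsd_radix_sort(merged)         # step iv: merge and sort the combined list again
```

Note that for non-negative integers, concatenating the sub-lists in increasing length order would already be sorted, since any d-digit number is smaller than any (d+1)-digit number; the final pass here simply mirrors the paper's explicit merge-and-sort-again step.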
IV. Algorithm for "MRS Sort"
1. Import a large heterogeneous database in any of these formats (Excel sheet, Oracle, MS-Access, SQL Server, etc.).
2. Make 2 clusters (numeric and string) of similar data on the given heterogeneous database.
3. Store these clusters in separate lists. Each list has a unique processor, viz. p1 and p2 respectively.
4. Process p1:
i) Find the length of all elements present in this list.
ii) Separate all elements into different sub-lists according to their equality of length.
iii) Sort these sub-lists separately through the new proposed algorithm. In this proposed algorithm (MRS Sort) a list is sorted by bypassing a digit: for example, after completing one digit cycle on the units place, the next cycle is on the hundreds-place digit (bypassing the tens-place digit); after all the given digits have been covered, the list is sorted. The same process is employed for sorting all the lists.
iv) Conquer (merge) these sub-lists into a single combined list and sort again as explained in step 4(iii).
5. Process p2 (a sketch of one reading of this process follows the algorithm):
i) According to the unique pattern, search for the relevant data in the list.
ii) After getting the relevant data, assign unique numbers to these data; if two data items are the same, they get the same number.
iii) Now apply the modified radix sort of step 4(iii) to these numbers.
iv) After this the list is sorted.
6. End of algorithm.
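The paper does not specify how strings are converted to numbers beyond "same string, same number". The sketch below is one order-preserving choice of ours, not the authors': pad each string to a fixed width and pack its character codes into an integer, so that numeric order matches lexicographic order. It reuses lsd_radix_sort from the introduction's sketch and numeric_process_p1 from the previous one.

```python
def string_process_p2(strings, width=8):
    """Hedged sketch of process p2: strings -> numeric keys -> radix sort -> strings.

    The fixed-width base-256 packing is our assumption (it presumes ASCII-range
    characters, and strings longer than `width` are truncated); the paper only
    requires that identical strings receive identical numbers.
    """
    def encode(s):
        s = s[:width].ljust(width, "\0")   # fixed width keeps the encoding order-preserving
        key = 0
        for ch in s:
            key = key * 256 + ord(ch)      # base-256 positional packing of character codes
        return key

    by_key = {}
    for s in strings:                      # identical strings share one numeric key
        by_key.setdefault(encode(s), []).append(s)
    sorted_keys = lsd_radix_sort(list(by_key))
    return [s for k in sorted_keys for s in by_key[k]]

# Tiny end-to-end illustration of the two clusters formed in step 2:
data = [905, "delta", 7, "alpha", 42, 7301, "delta"]
print(numeric_process_p1([x for x in data if isinstance(x, int)]))  # [7, 42, 905, 7301]
print(string_process_p2([x for x in data if isinstance(x, str)]))   # ['alpha', 'delta', 'delta']
```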
V. Results and Discussions
The graph shows the result of the proposed algorithm. Here 6 sub-lists are used, each with its own number of elements (10; 1,000; 10,000; 100,000; 1,000,000; 10,000,000). These sub-lists are sorted separately on 6 processors with the help of the pure modified radix sort. After that, all the sub-lists are merged and sorted through the proposed algorithm, generating the common graph, shown as time versus number of elements. The result was obtained on a multi-core machine. The proposed MRS algorithm was then run on two different machines and the results observed: MRS Sort proved the best sort for heterogeneous data sets on both machines in all cases, and after MRS Sort, GPU Quick Sort is the best option. Both sorting techniques are complete in themselves, but there are slight differences between the two methods, shown below in the form of graphs for the two machines.
First, the algorithm was run on an Intel Pentium P6200 with Intel HD Graphics, 2 GB DDR3 RAM and a 500 GB HDD, running Windows 7. The results (graph representation) for this machine are as follows.
FIG 1.1
Here four groups are present: Ram1, Ram2, Ram3 and Ram4. Each group has a separate heterogeneous data set. Ram1 represents 1 million heterogeneous data items; the other groups have 5 million, 10 million, 15 million and 20 million heterogeneous data items respectively. All these groups are shown on the x-axis of the graph, and the y-axis shows the time taken (in nanoseconds) for each group.
Second, the algorithm was run on an Intel Xeon server board with Intel HD Graphics, 5 GB DDR3 RAM and a 500 GB HDD, running Windows Server 2008 R2. The results (graph representation) for this machine are as follows.
Here again four groups are present, named Ram1, Ram2, Ram3 and Ram4. Each group has a separate heterogeneous data set: Ram1 represents 1 million heterogeneous data items, and the other groups have 5 million, 10 million, 15 million and 20 million heterogeneous data items respectively. All these groups are shown on the x-axis of the graph, and the y-axis shows the time taken (in nanoseconds) for each group. The results clearly show that under some conditions MRS Sort and GPU Quick Sort give the same results, and under other conditions MRS Sort is just better than the GPU Quick Sort algorithm.