IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document summarizes a paper that presents a novel method for passive resource discovery in cluster grid environments. The method monitors network packet frequency from nodes' network interface cards to identify nodes with available CPU cycles (<70% utilization) by detecting latency signatures from frequent context switching. Experiments on a 50-node testbed showed the method can consistently and accurately discover available resources by analyzing existing network traffic, including traffic passed through a switch. The paper also proposes algorithms for distributed two-level resource discovery, replication and utilization to optimize resource allocation and access costs in distributed computing environments.
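The monitoring idea can be sketched as a threshold test on per-node packet rates observed at the switch. The threshold constant, log format, and function name below are illustrative assumptions for the sketch, not the paper's actual calibration:

```python
from collections import defaultdict

# Assumed calibration point for ~70% CPU utilization; the paper derives its
# own signature from context-switch latency, which this sketch does not model.
PACKETS_PER_SEC_THRESHOLD = 800.0

def discover_available(packet_log):
    """packet_log: iterable of (timestamp, node_id) pairs sniffed at the switch.
    Returns {node_id: True} for nodes whose packet rate suggests spare cycles."""
    counts = defaultdict(int)
    t_min, t_max = float("inf"), float("-inf")
    for ts, node in packet_log:
        counts[node] += 1
        t_min, t_max = min(t_min, ts), max(t_max, ts)
    span = max(t_max - t_min, 1e-9)  # avoid division by zero on tiny logs
    return {node: (n / span) <= PACKETS_PER_SEC_THRESHOLD
            for node, n in counts.items()}
```

Because it only reads traffic already passing through the switch, such a monitor adds no probing load to the cluster, which is the point of a passive approach.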
BRA: a bidirectional routing abstraction for asymmetric mobile ad hoc networks... - Mumbai Academisc
This document summarizes a paper that presents a framework called BRA that provides a bidirectional abstraction of asymmetric mobile ad hoc networks to enable off-the-shelf routing protocols to work. BRA maintains multi-hop reverse routes for unidirectional links, improves connectivity by using unidirectional links, enables reverse route forwarding of control packets, and detects packet loss on unidirectional links. Simulations show packet delivery increases substantially when AODV is layered on BRA in asymmetric networks compared to regular AODV.
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... - ijcsit
This document summarizes a research paper that proposes a new method for improving both fault tolerance and load balancing in grid computing networks. The method converts the tree structure of grid computing nodes into a distributed R-tree index structure and then applies an entropy estimation technique. This entropy estimation helps discard nodes with high entropy from the tree, reducing complexity. The method then uses thresholding and control algorithms to select optimal route paths based on load balance and fault tolerance. Various optimization techniques like genetic algorithms, ant colony optimization, and particle swarm optimization are also applied to reach better solutions. Experimental results showed the proposed method improved performance over other existing methods.
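The entropy-pruning step can be illustrated with plain Shannon entropy over a node's discretized load samples: nodes with erratic (high-entropy) behaviour are discarded from the index. The threshold and data layout are assumptions for the sketch, not the paper's estimator:

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (bits) of a node's discretized load samples."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def prune_high_entropy(node_samples, threshold):
    """Keep only nodes whose load behaviour is predictable (low entropy).
    node_samples: {node_id: list of discretized load observations}."""
    return [node for node, s in node_samples.items()
            if shannon_entropy(s) <= threshold]
```

A node reporting the same load level repeatedly scores entropy 0 and survives; a node bouncing uniformly across four load levels scores 2 bits and is pruned at a 1-bit threshold.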
The document proposes a distributed database system for delay tolerant networks (DTNs). It discusses two key contributions: 1) practical heuristics for query scheduling that optimize for lower delays by moving relevant data to sites with the largest amounts of data or highest likelihood of contact, and 2) a prereplication scheme that reduces latency for on-demand retrieval by actively pre-caching popular data between nodes based on bandwidth availability. The paper evaluates the proposed system through simulations on artificial and real-world DTN traces.
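The two contributions can be sketched roughly as follows; the scoring rules and names are illustrative simplifications of what the paper describes, not its actual heuristics:

```python
def choose_query_site(sites):
    """Query scheduling sketch: sites maps site_id -> (relevant_bytes,
    contact_prob). Prefer the site already holding the most relevant data,
    breaking ties by likelihood of future contact."""
    return max(sites, key=lambda s: (sites[s][0], sites[s][1]))

def prereplicate(popularity, item_size, bandwidth_budget):
    """Prereplication sketch: greedily pre-cache the most popular items
    that fit within a contact's available byte budget."""
    chosen, used = [], 0
    for item in sorted(popularity, key=popularity.get, reverse=True):
        if used + item_size[item] <= bandwidth_budget:
            chosen.append(item)
            used += item_size[item]
    return chosen
```

Both decisions trade delay against data movement: running the query where the data already sits avoids transfers, while pre-caching spends idle contact bandwidth to cut future on-demand latency.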
Propose a Method to Improve Performance in Grid Environment, Using Multi-Crit... - Editor IJCATR
The most important purpose of grid networks is resource sharing in a dynamic and heterogeneous environment. Resources are accessible through various methods, and sharing has mainly computational, scientific and other implications. In order to achieve the grid's purposes and use the available resources in a grid environment, subtasks are distributed among resources and scheduled with the quality of service in mind. The aim is to distribute subtasks among resources so that maximum QoS is obtained. In this study, a method is presented that takes three parameters into account: the send and transfer time between the RMS and a resource, the processing time of a subtask on the resource, and the load of tasks already queued at the resource. A multi-criteria decision is then made using the TOPSIS method, and the resulting priority of the resources determines their assignment to subtasks. Finally, response time, as an efficiency parameter, is improved and optimized by the optimal assignment of resources to subtasks.
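TOPSIS itself is a standard procedure: normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. A minimal sketch, assuming all three criteria above are costs (smaller is better):

```python
import math

def topsis_rank(matrix, weights, benefit):
    """matrix[i][j]: score of resource i on criterion j.
    weights: criterion weights. benefit[j]: True if larger is better
    (the paper's three criteria are all costs, so False throughout).
    Returns resource indices, best first."""
    m, n = len(matrix), len(matrix[0])
    # Vector normalization, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Relative closeness to the ideal solution
    scores = []
    for row in v:
        d_pos, d_neg = math.dist(row, ideal), math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return sorted(range(m), key=lambda i: scores[i], reverse=True)
```

A resource that is cheapest on every criterion gets closeness 1 and is assigned first; the worst gets 0.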
TAXONOMY OF OPTIMIZATION APPROACHES OF RESOURCE BROKERS IN DATA GRIDS - ijcsit
A novel taxonomy of replica selection techniques is proposed. We studied several data grid approaches whose data-management selection strategies differ. The aim of the study is to determine the common concepts, observe their performance, and compare their performance with our strategy.
Performance evaluation and estimation model using regression method for hadoo... - redpel dot com
Performance evaluation and estimation model using regression method for Hadoop word count.
For more IEEE papers / full abstracts / implementations, visit www.redpel.com
International Refereed Journal of Engineering and Science (IRJES) - irjes
A leading international journal for the publication of new ideas, state-of-the-art research results and fundamental advances in all aspects of Engineering and Science. IRJES is an open access, peer reviewed international journal whose primary objective is to provide the academic community and industry a venue for the submission of original research and applications.
International Journal of Engineering and Science Invention (IJESI) - inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
RSDC (Reliable Scheduling Distributed in Cloud Computing) - IJCSEA Journal
This document summarizes the PPDD algorithm for scheduling divisible loads originating from multiple sites in distributed computing environments. The PPDD algorithm is a two-phase approach that first derives a near-optimal load distribution and then considers actual communication delays when transferring load fractions. It guarantees a near-optimal solution and improved performance over previous algorithms like RSA by avoiding unnecessary load transfers between processors.
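The flavor of the first phase can be illustrated with the classic divisible-load rule: split the load in proportion to processor speed so that finish times equalize when communication delays are ignored. This is a generic sketch of that rule, not the PPDD algorithm itself:

```python
def near_optimal_fractions(speeds, total_load):
    """Phase-one flavor: split a divisible load so each processor's fraction
    is proportional to its speed (units of load per second). Ignoring
    communication delays, all processors then finish at the same time;
    PPDD's second phase would adjust for the actual transfer costs."""
    s = sum(speeds)
    return [total_load * sp / s for sp in speeds]
```

With speeds [1, 3] and 8 units of load, the split [2, 6] makes both processors finish in 2 seconds.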
A New Architecture for Group Replication in Data Grid - Editor IJCATR
Nowadays, grid systems are a vital technology for running programs with high performance and solving large-scale problems in science, engineering and business. In grid systems, heterogeneous computational resources and data must be shared between independent organizations that are geographically scattered. A data grid is a type of grid that relates computational and storage resources. Data replication is an efficient way to obtain high performance and high availability in a data grid by saving numerous replicas in different locations, e.g. grid sites. In this research, we propose a new architecture for dynamic group data replication. In our architecture, we add two components to the OptorSim architecture: a Group Replication Management component (GRM) and a Management of Popular Files Group component (MPFG). OptorSim was developed by the European Data Grid project to evaluate replication algorithms. Using this architecture, popular file groups are replicated to grid sites at the end of each predefined time interval.
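The end-of-interval step might look like the following tally-and-pick sketch; the function and data structures are hypothetical stand-ins for what the GRM/MPFG components track inside OptorSim:

```python
from collections import Counter

def popular_groups(access_log, file_group, top_k=2):
    """End-of-interval step: tally accesses per file group over the interval
    and return the groups whose files should be replicated next interval.
    access_log: list of accessed file names; file_group: {file: group}."""
    tally = Counter(file_group[f] for f in access_log)
    return [g for g, _ in tally.most_common(top_k)]
```

Replicating at group granularity exploits the observation that files accessed together tend to be requested together again, so one decision moves a whole working set.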
AN EFFICIENT BANDWIDTH OPTIMIZATION AND MINIMIZING ENERGY CONSUMPTION UTILIZI... - IJCNCJournal
Bandwidth utilization plays a vital role in a Wireless Sensor Network (WSN) that transmits data packets from a source peer to a prospective destination peer without packet loss or time delay. In conventional systems, two key requirements, low delay and high data reliability, cannot be satisfied concurrently: peers transfer fewer data packets and are optimized only for a regular bandwidth rate. Moreover, bandwidth consumption in network routers influences the quality of service (QoS). To overcome these issues, an Efficient Reliability and Interval Discrepant Routing (ERIDR) algorithm is proposed to optimize bandwidth utilization in the router network with the help of a bandwidth optimizer. The bandwidth optimizer allocates the required bandwidth for data transmission to each peer simultaneously to ensure bandwidth efficiency. The proposed design optimizes the bandwidth utilization of every peer and increases data processing via a higher bandwidth rate, which reduces time delay and minimizes energy consumption. The proposed method establishes a high-bandwidth-rate router to transmit data concurrently from source peer to destination peer (peer-to-peer) without packet loss by initializing a host IP address for every peer. Experimental evaluations show the proposed methodology reduces Average Delay (AD) by 3.32, Execution Time (ET) by 0.05, Energy Consumption (EC) by 5.44 and Bandwidth Utilization (BU) by 0.28 compared to existing methodologies.
Abstract: Energy consumption is one of the main constraints in Wireless Sensor Networks (WSNs). Routing protocols are an active area for addressing quality-of-service (QoS) issues such as energy consumption, network lifetime, network scalability and packet overhead. In the existing system, a hybrid optimization based PEGASIS-DSR optimized routing protocol (PDORP) is presented, which uses the cache and directional transmission concepts of both proactive and reactive routing protocols. The performance of PDORP has been evaluated, and the results indicate that it performs better on the most significant parameters. The performance of the existing method is checked when it is evaluated and validated with nodes that are highly dynamic in nature, based on the application requirement. The current system finds trusted nodes only in a static environment. To overcome this issue, the proposed system is applied to dynamic WSNs where locations change frequently. PDORP-LC is applied with local caching (LC) to acquire location information so that path learning can be dynamic without depending on a fixed location. The proposed work operates in a dynamic environment with dynamic derivation of trusted nodes.
Keywords: local caching (LC), Wireless Sensor Networks (WSNs), PEGASIS-DSR optimized routing protocol (PDORP).
Title: Energy Efficient Optimal Paths Using PDORP-LC
Author: ADARSH KUMAR B, BIBIN CHRISTOPHER, ISSAC SAJAN, AJ DEEPA
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
Density Based Clustering Approach for Solving the Software Component Restruct... - IRJET Journal
This document presents research on using the DBSCAN clustering algorithm to solve the problem of software component restructuring. It begins with an abstract that introduces DBSCAN and describes how it can group related software components. It then provides background on software component clustering and describes DBSCAN in more detail. The methodology section outlines the 4 phases of the proposed approach: data collection and processing, clustering with DBSCAN, visualization and analysis, and final restructuring. Experimental results show that DBSCAN produces more evenly distributed clusters compared to fuzzy clustering. The conclusion is that DBSCAN is a better technique for software restructuring as it can identify clusters of varying shapes and sizes without specifying the number of clusters in advance.
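For reference, DBSCAN's core loop is short enough to sketch in full. This is a generic minimal implementation (neighborhoods include the point itself, so `min_pts` counts the point too), not the paper's code:

```python
def dbscan(points, eps, min_pts, dist):
    """Minimal DBSCAN sketch. Returns one label per point:
    0, 1, ... for clusters, -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                # expand the cluster through core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # reclaim noise as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)
    return labels
```

Because clusters grow by density reachability rather than toward a fixed centroid count, the number of clusters falls out of the data, which is exactly the property the paper relies on for restructuring.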
Using Neighbor’s State Cross-correlation to Accelerate Adaptation in Docitiv... - paperpublications3
Abstract: In a WSN, sensor nodes have a limited energy budget; therefore this paper focuses mainly on power saving using the docition paradigm. Docition is a teacher-student paradigm proposed to improve cognitive radio. Although it improves infrastructure-based networks, it has a weakness in ad-hoc mobile networks. The energy constraints and the full mobility of the network complicate the selection of an appropriate teacher for a student. By selecting the wrong teacher, there is a high probability that the taught information is faulty, and thus the student radio diverges from the best state. This causes a large energy loss, while the most important concern in ad-hoc networks is the energy limitation. In this paper, we propose dynamic docition for teacher selection based on the auto-correlation degree of the teacher candidate's environment and the cross-correlation degree between the teacher candidate's and the student's environments. We validate our approach in the context of coexistence between WSN and WiFi. The WSN detects, models and exploits the unused time slots in the electromagnetic spectrum left by WiFi, using dynamic docition. The simulation results show that dynamic docition outperforms existing docition in mobile networks. The improvements are shown through a lower link overhead percentage (20% less overhead) and a lower packet loss ratio (30% improvement).
Keywords: docitive; online prediction problem; WSN; Pareto model; IEEE 802.11 b/g; cognitive radio.
Title: Using Neighbor’s State Cross-correlation to Accelerate Adaptation in Docitive WSN
Author: Dr. Charbel Nicolas
ISSN 2349-7815
International Journal of Recent Research in Electrical and Electronics Engineering (IJRREEE)
Paper Publications
This document lists several Java/J2EE/J2ME projects related to utility computing environments, schema matching, fuzzy ontology generation, wireless sensor networks, wireless MAC protocols, distributed cache updating, selfish routing, collaborative key agreement, TCP congestion control, global roaming in mobile networks, GPS-based emergency response systems, network intrusion detection, honey pots, voice over IP, vehicle tracking, SIP-based teleconferencing, online security systems, and location-aided routing in ad hoc networks. The projects cover a wide range of topics related to distributed systems, wireless networks, and Internet applications.
Grid computing can involve a large number of computational tasks, which require trustworthy computational nodes. Load balancing in grid computing is a technique that optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled and geographically dispersed. Optimizing this process must include maximizing overall resource utilization with a balanced load on each processing unit while decreasing the overall time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but the problem with these genetic algorithms is that they are quite slow when a large number of tasks needs to be processed. In this paper we give a novel approach of parallel genetic algorithms for enhancing the overall performance and optimization of managing the whole load-balancing process across grid nodes.
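A serial genetic algorithm for the underlying assignment problem can be sketched as below; a parallel (island-model) variant would evolve several such populations concurrently and periodically migrate the best chromosomes between them. All parameters and names here are illustrative:

```python
import random

def makespan(assign, task_cost, node_speed):
    """Finish time of the slowest node under a task-to-node assignment."""
    load = [0.0] * len(node_speed)
    for t, n in enumerate(assign):
        load[n] += task_cost[t] / node_speed[n]
    return max(load)

def ga_balance(task_cost, node_speed, pop=30, gens=200, seed=0):
    """Toy GA: a chromosome assigns each task to a node; fitness is the
    (negated) makespan. Elitist selection keeps the best half, one-point
    crossover and point mutation produce the rest."""
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_cost), len(node_speed)
    popn = [[rng.randrange(n_nodes) for _ in range(n_tasks)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: makespan(c, task_cost, node_speed))
        survivors = popn[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # point mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_nodes)
            children.append(child)
        popn = survivors + children
    return min(popn, key=lambda c: makespan(c, task_cost, node_speed))
```

The parallelization the paper argues for attacks exactly the slow part: the fitness evaluations inside the generation loop are independent and can be distributed across islands or worker nodes.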
This document summarizes an article from the International Journal of Computer Engineering and Technology (IJCET) that proposes a new dynamic data replication and job scheduling strategy for data grids. The strategy aims to improve data access time and reduce bandwidth consumption by replicating data based on file popularity, storage limitations at nodes, and data category. It replicates more popular files that are in the same category as frequently accessed data to nodes close to where jobs are run. This is intended to optimize performance by locating data and jobs close together. The document provides context on related work and outlines the proposed system architecture and replication/scheduling approach.
MAP REDUCE BASED ON CLOAK DHT DATA REPLICATION EVALUATION - ijdms
Distributed databases and data replication are effective ways to increase the accessibility and reliability of unstructured, semi-structured and structured data in order to extract new knowledge. Replication offers better performance and greater availability of data. With the advent of Big Data, new storage and processing challenges are emerging. To meet these challenges, Hadoop and DHTs compete in the storage domain, and MapReduce and others in distributed processing, each with their strengths and weaknesses. We propose an analysis of the circular and radial replication mechanisms of the CLOAK DHT. We evaluate their performance through a comparative study of simulation data. The results show that radial replication is better for storage, unlike circular replication, which gives better search results.
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N... - ijasuc
In wireless sensor networks (WSNs) there are two main problems in employing conventional compression techniques. First, the compression performance depends to a large extent on how the routes are organized. Second, the efficiency of an in-network data compression scheme is not determined solely by the compression ratio, but also by the computational and communication overheads. In compressive data aggregation, data is gathered at intermediate nodes where its size is reduced by applying a compression technique without losing any information from the complete data. In our previous work, we developed an adaptive traffic-aware aggregation technique in which the aggregation can switch adaptively between structured and structure-free, depending on the traffic load. In this paper, as an extension of our previous work, we provide a cost-effective compressive data gathering technique to handle the traffic load, using a structured data aggregation scheme. We also design a technique that effectively reduces the computation and communication costs involved in the compressive data gathering process. Compressive data gathering provides compressed sensor readings to reduce global data traffic and distributes energy consumption evenly to prolong the network lifetime. Simulation results show that our proposed technique improves the delivery ratio while reducing energy and delay.
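The compressive gathering step can be sketched as each node folding its reading into m running weighted sums with reproducible pseudo-random coefficients, so every link carries m values regardless of how many nodes are upstream. This is a generic compressive-sensing-style sketch with illustrative names, not the paper's scheme:

```python
import random

def compressive_gather(readings, m, seed=0):
    """Each node i multiplies its reading by m seeded pseudo-random
    coefficients and adds them into m running sums forwarded along the
    route; the sink receives y = Phi @ x. Knowing the seed, the sink can
    rebuild the same measurement matrix Phi; the sparse-recovery
    reconstruction step is omitted here."""
    sums = [0.0] * m
    for i, x in enumerate(readings):
        rng = random.Random(f"{seed}-{i}")  # node i's reproducible coefficients
        for j in range(m):
            sums[j] += rng.gauss(0.0, 1.0) * x
    return sums
```

Because each node does only m multiply-adds and each link carries exactly m values, the per-node cost is flat across the route, which is what evens out energy consumption compared to concatenating raw readings toward the sink.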
This document provides an overview of wireless sensor networks, including their architecture, requirements, and differences from conventional networks. Wireless sensor networks consist of dense deployments of sensor nodes that self-organize into a collaborative network. The nodes have stringent limitations on energy, computing power, and bandwidth. They monitor physical conditions and transmit aggregated sensor data in a multi-hop fashion to sink nodes for collection and analysis. Routing protocols are critical given the constraints of wireless sensor networks.
This document presents research on developing real-time adaptive feedforward control for an XY table system. The researchers identified mathematical models of the XY table plant using system identification techniques. They obtained both minimum phase and non-minimum phase models by varying the sampling time during data collection. They then developed and tested two types of feedforward zero phase error tracking controllers on the system models. Experimental results showed that the modified controller improved tracking performance for both minimum and non-minimum phase systems.
This document summarizes a study on the effect of process parameters on the strength of resistance spot welded aluminum alloy A5052 sheets with cover plates. The researchers welded 1mm thick aluminum alloy sheets using resistance spot welding with cold-rolled steel cover plates. They investigated the effect of welding current, time, and electrode force on nugget diameter, tensile-shear strength, and failure mode. Increasing welding current and time increased nugget diameter and strength, while increasing electrode force decreased nugget diameter and strength. Hardness was lowest in the nugget region compared to the base metal. Interfacial failure occurred for smaller nuggets and nugget pullout failure for larger nuggets.
The document discusses a study on using melt-densified post-consumer recycled plastic bags as lightweight aggregate in concrete. Melt-densified aggregates (MDA) were prepared by melting plastic bags in a furnace at 160°C. Concrete mixtures with 0-20% replacement of conventional aggregates with MDA were tested. Test results showed that as MDA replacement increased, compressive strength and density decreased. With 20% replacement, compressive strength dropped 44% and density dropped 3.2%. The study concludes that MDA can partially replace conventional aggregates to reduce concrete weight while providing a way to dispose of plastic waste.
This document summarizes a study that examines heat and mass transfer over a vertical plate in a porous medium with Soret and Dufour effects, a convective surface boundary condition, chemical reaction, and magnetic field. The governing equations for the fluid flow, heat transfer, and mass transfer are presented. Similarity solutions are used to transform the governing partial differential equations into ordinary differential equations, which are then solved numerically. The results are presented graphically to show the influence of various parameters on velocity, temperature, concentration, skin friction, Nusselt number, and Sherwood number.
This document discusses the optimization and simulation of Josephson junctions as switching elements. It begins with an introduction stating the need to optimize Josephson junction parameters and characteristics to increase switching speed. It then provides a brief review of the Josephson effect and properties of Josephson junctions. The rest of the document details a theoretical approach to estimating junction parameters, characteristics, and optimization methods, and using computer simulation to model the dynamic response of Josephson junctions to better understand their switching performance and speed.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
RSDC (Reliable Scheduling Distributed in Cloud Computing)IJCSEA Journal
This document summarizes the PPDD algorithm for scheduling divisible loads originating from multiple sites in distributed computing environments. The PPDD algorithm is a two-phase approach that first derives a near-optimal load distribution and then considers actual communication delays when transferring load fractions. It guarantees a near-optimal solution and improved performance over previous algorithms like RSA by avoiding unnecessary load transfers between processors.
A New Architecture for Group Replication in Data GridEditor IJCATR
Nowadays, grid systems are vital technology for programs running with high performance and problems solving with largescale
in scientific, engineering and business. In grid systems, heterogeneous computational resources and data should be shared
between independent organizations that are scatter geographically. A data grid is a kind of grid types that make relations computational
and storage resources. Data replication is an efficient way in data grid to obtain high performance and high availability by saving
numerous replicas in different locations e.g. grid sites. In this research, we propose a new architecture for dynamic Group data
replication. In our architecture, we added two components to OptorSim architecture: Group Replication Management component
(GRM) and Management of Popular Files Group component (MPFG). OptorSim developed by European Data Grid projects for
evaluate replication algorithm. By using this architecture, popular files group will be replicated in grid sites at the end of each
predefined time interval.
AN EFFICIENT BANDWIDTH OPTIMIZATION AND MINIMIZING ENERGY CONSUMPTION UTILIZI...IJCNCJournal
Bandwidth utilization plays a vital role in a Wireless Sensor Network (WSN) that transmits data packets from a source peer to the intended destination peer without packet loss or time delay. In a conventional system, the two main requirements of low delay and high data
reliability cannot be satisfied concurrently, so each peer transfers fewer data packets and is limited to a regular bandwidth rate. Moreover, bandwidth contention at network routers degrades quality of service (QoS). To overcome these issues, an Efficient Reliability and Interval Discrepant Routing (ERIDR) algorithm is proposed to optimize bandwidth utilization in the router network with the help of a bandwidth optimizer. The
bandwidth optimizer allocates the required bandwidth for data transmission to each peer simultaneously to ensure bandwidth efficiency. The proposed design optimizes the bandwidth utilization of every peer and increases data throughput via a higher bandwidth rate, which reduces time delay and minimizes energy consumption. The proposed method establishes a high-bandwidth-rate router to transmit data concurrently from source peer to destination peer (peer-to-peer) without packet loss by initializing a host IP address for every peer. In experimental evaluations, the proposed methodology reduces Average Delay (AD) by 3.32, Execution Time (ET) by 0.05, Energy Consumption (EC) by 5.44 and Bandwidth Utilization (BU) by 0.28
compared to existing methodologies.
Abstract: Energy consumption is one of the main constraints in Wireless Sensor Networks (WSNs). Routing protocols are an active area for addressing quality-of-service (QoS) issues such as energy consumption, network lifetime, network scalability and packet overhead. In the existing system, a hybrid-optimization-based PEGASIS-DSR optimized routing protocol (PDORP) is presented, which uses the cache and directional transmission concepts of both proactive and reactive routing protocols. The performance of PDORP has been evaluated, and the results indicate that it performs better on the most significant parameters. However, the existing method finds trusted nodes only in a static environment, while application requirements may demand nodes that are highly dynamic in nature. To overcome this issue, the proposed system is applied to dynamic WSNs in which node locations change frequently. PDORP-LC applies local caching (LC) to acquire location information so that path learning can be dynamic without depending on a fixed location. The proposed work operates in a dynamic environment with dynamic derivation of trusted nodes.
Keywords: local caching (LC), Wireless Sensor Networks (WSNs), PEGASIS-DSR optimized routing protocol (PDORP).
Title: Energy Efficient Optimal Paths Using PDORP-LC
Author: ADARSH KUMAR B, BIBIN CHRISTOPHER, ISSAC SAJAN, AJ DEEPA
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
Density Based Clustering Approach for Solving the Software Component Restruct...IRJET Journal
This document presents research on using the DBSCAN clustering algorithm to solve the problem of software component restructuring. It begins with an abstract that introduces DBSCAN and describes how it can group related software components. It then provides background on software component clustering and describes DBSCAN in more detail. The methodology section outlines the 4 phases of the proposed approach: data collection and processing, clustering with DBSCAN, visualization and analysis, and final restructuring. Experimental results show that DBSCAN produces more evenly distributed clusters compared to fuzzy clustering. The conclusion is that DBSCAN is a better technique for software restructuring as it can identify clusters of varying shapes and sizes without specifying the number of clusters in advance.
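As a rough illustration of why DBSCAN fits this task, here is a minimal 1-D DBSCAN over a single per-component feature value; the feature is a stand-in, and the paper's actual feature set and implementation are richer:

```python
def dbscan(values, eps, min_pts):
    """Minimal DBSCAN over 1-D feature values (e.g. one coupling metric
    per software component). Labels: -1 = noise, otherwise a cluster id."""
    labels = [None] * len(values)          # None = not yet visited
    def neighbors(i):
        return [j for j in range(len(values)) if abs(values[i] - values[j]) <= eps]
    cid = 0
    for i in range(len(values)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # noise for now (may become a border point)
            continue
        labels[i] = cid                    # new core point starts a cluster
        seeds = list(nbrs)
        while seeds:                       # density-reachable expansion
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid            # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:         # j is also core: keep expanding
                seeds.extend(jn)
        cid += 1
    return labels
```

Note that, as the summary says, the number of clusters is never specified in advance; only the density parameters `eps` and `min_pts` are.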
Using Neighbor’s State Cross-correlation to Accelerate Adaptation in Docitiv...paperpublications3
Abstract: In a WSN, sensor nodes have a limited energy budget; therefore this paper focuses mainly on power saving using the docition paradigm. Docition is a teacher-student paradigm proposed to improve cognitive radio. Although it improves infrastructure-based networks, it has a weakness in ad-hoc mobile networks: the energy constraints and the full mobility of the network complicate the selection of an appropriate teacher for a student. By selecting the wrong teacher, there is a high probability that the taught information is faulty, and thus the student radio diverges from the best state. This causes a large energy loss, even though the most important concern in ad-hoc networks is the energy limitation. In this paper, we propose dynamic docition for teacher selection based on the auto-correlation degree of the teacher candidate's environment and the cross-correlation degree between the teacher candidate's and the student's environments. We validate our approach in the context of coexistence between WSN and WiFi: the WSN detects, models and exploits the unused time slots in the electromagnetic spectrum left by WiFi, using dynamic docition. The simulation results show that dynamic docition outperforms the existing docition in mobile networks. The improvements are shown through a lower link overhead percentage (20% less overhead) and a lower packet loss ratio (30% improvement).
Keywords: Docitive; Online Prediction Problem; WSN; Pareto model; IEEE 802.11b/g; cognitive radio.
Title: Using Neighbor’s State Cross-correlation to Accelerate Adaptation in Docitive WSN
Author: Dr. Charbel Nicolas
ISSN 2349-7815
International Journal of Recent Research in Electrical and Electronics Engineering (IJRREEE)
Paper Publications
This document lists several Java/J2EE/J2ME projects related to utility computing environments, schema matching, fuzzy ontology generation, wireless sensor networks, wireless MAC protocols, distributed cache updating, selfish routing, collaborative key agreement, TCP congestion control, global roaming in mobile networks, GPS-based emergency response systems, network intrusion detection, honey pots, voice over IP, vehicle tracking, SIP-based teleconferencing, online security systems, and location-aided routing in ad hoc networks. The projects cover a wide range of topics related to distributed systems, wireless networks, and Internet applications.
Grid computing can involve a large number of computational tasks, which require trustworthy computational nodes. Load balancing in grid computing is a technique that optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but it differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled and geographically dispersed. Optimizing this process must include maximizing overall resource utilization, balancing the load on each processing unit and decreasing the overall completion time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but these genetic algorithms are quite slow when a large number of tasks need to be processed. In this paper we present a novel approach based on parallel genetic algorithms for enhancing the overall performance and optimization of load balancing across grid nodes.
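A tiny genetic algorithm for the task-to-node assignment problem can illustrate the general approach. This is a sequential sketch with assumed operators — the paper's contribution is a *parallel* GA, which this does not show, and all names are hypothetical:

```python
import random

def makespan(assignment, task_costs, n_nodes):
    """Fitness: the load of the busiest node (lower is better)."""
    loads = [0.0] * n_nodes
    for task, node in enumerate(assignment):
        loads[node] += task_costs[task]
    return max(loads)

def ga_balance(task_costs, n_nodes, pop=30, gens=50, seed=0):
    """Tiny GA: a chromosome maps each task to a node; selection keeps the
    half with the lowest makespan, offspring come from one-point
    crossover plus a single-gene mutation."""
    rng = random.Random(seed)
    n = len(task_costs)
    popu = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda c: makespan(c, task_costs, n_nodes))
        popu = popu[: pop // 2]                       # keep the elite half
        while len(popu) < pop:
            a, b = rng.sample(popu[: pop // 2], 2)    # pick two elite parents
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                 # one-point crossover
            child[rng.randrange(n)] = rng.randrange(n_nodes)  # mutation
            popu.append(child)
    return min(popu, key=lambda c: makespan(c, task_costs, n_nodes))
```

A parallel variant would evaluate the fitness of the population (or evolve separate islands) concurrently, which is where the speed-up for large task sets comes from.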
This document summarizes an article from the International Journal of Computer Engineering and Technology (IJCET) that proposes a new dynamic data replication and job scheduling strategy for data grids. The strategy aims to improve data access time and reduce bandwidth consumption by replicating data based on file popularity, storage limitations at nodes, and data category. It replicates more popular files that are in the same category as frequently accessed data to nodes close to where jobs are run. This is intended to optimize performance by locating data and jobs close together. The document provides context on related work and outlines the proposed system architecture and replication/scheduling approach.
MAP REDUCE BASED ON CLOAK DHT DATA REPLICATION EVALUATIONijdms
Distributed databases and data replication are effective ways to increase the accessibility and reliability of
unstructured, semi-structured and structured data in order to extract new knowledge. Replication offers better
performance and greater availability of data. With the advent of Big Data, new storage and processing
challenges are emerging.
To meet these challenges, Hadoop and DHTs compete in the storage domain, and MapReduce and others in
distributed processing, each with their strengths and weaknesses.
We propose an analysis of the circular and radial replication mechanisms of the CLOAK DHT. We
evaluate their performance through a comparative study of simulation data. The results show that
radial replication performs better for storage, while circular replication gives better search results.
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N...ijasuc
In wireless sensor networks (WSNs) there are two main problems in employing conventional compression
techniques. First, the compression performance depends to a large extent on the organization of the routes.
Second, the efficiency of an in-network data compression scheme is not determined solely by the compression
ratio, but also by the computational and communication overheads. In compressive data
aggregation, data is gathered at intermediate nodes, where its size is reduced by applying a
compression technique without losing any of the information in the complete data. In our previous work, we
developed an adaptive traffic-aware aggregation technique in which aggregation can switch
between structured and structure-free modes adaptively, depending on the traffic load. In this
paper, as an extension to our previous work, we provide a cost-effective compressive data gathering
technique to handle the traffic load, using a structured data aggregation scheme. We also design a
technique that effectively reduces the computation and communication costs involved in the compressive
data gathering process. Compressive data gathering provides compressed sensor
readings to reduce global data traffic and distributes energy consumption evenly to prolong the network
lifetime. Through simulation results, we show that our proposed technique improves the delivery ratio while
reducing energy consumption and delay.
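The core idea of compressive gathering — forwarding a few random projections instead of all raw readings — can be sketched as follows. The sink-side sparse reconstruction is omitted, and the function and parameter names are hypothetical:

```python
import random

def compressive_gather(readings, m, seed=0):
    """Combine N sensor readings x into M < N random projections
    y = Phi * x; only the M aggregated values travel toward the sink."""
    rng = random.Random(seed)
    n = len(readings)
    # Every node derives the same Phi from the shared seed, so the sink
    # can rebuild it for sparse reconstruction (recovery omitted here).
    phi = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    return [sum(p * x for p, x in zip(row, readings)) for row in phi]
```

Because each relay forwards `m` combined values rather than every upstream reading, nodes near the sink carry roughly the same traffic as leaf nodes, which is what evens out the energy consumption.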
This document provides an overview of wireless sensor networks, including their architecture, requirements, and differences from conventional networks. Wireless sensor networks consist of dense deployments of sensor nodes that self-organize into a collaborative network. The nodes have stringent limitations on energy, computing power, and bandwidth. They monitor physical conditions and transmit aggregated sensor data in a multi-hop fashion to sink nodes for collection and analysis. Routing protocols are critical given the constraints of wireless sensor networks.
This document presents research on developing real-time adaptive feedforward control for an XY table system. The researchers identified mathematical models of the XY table plant using system identification techniques. They obtained both minimum phase and non-minimum phase models by varying the sampling time during data collection. They then developed and tested two types of feedforward zero phase error tracking controllers on the system models. Experimental results showed that the modified controller improved tracking performance for both minimum and non-minimum phase systems.
This document summarizes a study on the effect of process parameters on the strength of resistance spot welded aluminum alloy A5052 sheets with cover plates. The researchers welded 1mm thick aluminum alloy sheets using resistance spot welding with cold-rolled steel cover plates. They investigated the effect of welding current, time, and electrode force on nugget diameter, tensile-shear strength, and failure mode. Increasing welding current and time increased nugget diameter and strength, while increasing electrode force decreased nugget diameter and strength. Hardness was lowest in the nugget region compared to the base metal. Interfacial failure occurred for smaller nuggets and nugget pullout failure for larger nuggets.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
The document discusses a study on using melt-densified post-consumer recycled plastic bags as lightweight aggregate in concrete. Melt-densified aggregates (MDA) were prepared by melting plastic bags in a furnace at 160°C. Concrete mixtures with 0-20% replacement of conventional aggregates with MDA were tested. Test results showed that as MDA replacement increased, compressive strength and density decreased. With 20% replacement, compressive strength dropped 44% and density dropped 3.2%. The study concludes that MDA can partially replace conventional aggregates to reduce concrete weight while providing a way to dispose of plastic waste.
This document summarizes a study that examines heat and mass transfer over a vertical plate in a porous medium with Soret and Dufour effects, a convective surface boundary condition, chemical reaction, and magnetic field. The governing equations for the fluid flow, heat transfer, and mass transfer are presented. Similarity solutions are used to transform the governing partial differential equations into ordinary differential equations, which are then solved numerically. The results are presented graphically to show the influence of various parameters on velocity, temperature, concentration, skin friction, Nusselt number, and Sherwood number.
This document discusses the optimization and simulation of Josephson junctions as switching elements. It begins with an introduction stating the need to optimize Josephson junction parameters and characteristics to increase switching speed. It then provides a brief review of the Josephson effect and properties of Josephson junctions. The rest of the document details a theoretical approach to estimating junction parameters, characteristics, and optimization methods, and using computer simulation to model the dynamic response of Josephson junctions to better understand their switching performance and speed.
This document summarizes a research paper that proposes techniques for fast recovery from dual-link failures in mobile ad hoc networks (MANETs). It discusses link protection and path protection schemes, and proposes two approaches for dual-link failure recovery: failure dependent link protection (FDLP) and failure independent link protection (FILP). The paper develops an integer linear program and heuristic algorithm based on minimum cost path routing to solve the backup link mutual exclusion problem for dual-link failures. It evaluates the performance of the proposed approaches through simulations, finding that FDLP achieves a higher packet delivery ratio than FILP.
This document summarizes a research paper that proposes a new routing protocol called Vector-Based Forwarding (VBF) for underwater sensor networks. VBF is a location-based routing approach that uses the position of nodes to efficiently route data through a network of moving underwater sensors. It forms redundant forwarding paths between source and destination nodes to improve reliability. VBF also uses a self-adaptation algorithm to allow nodes to discard redundant packets to reduce energy consumption. The paper argues that VBF provides robust, scalable and energy efficient routing for underwater sensor networks.
This document analyzes the performance of energy detection algorithms for spectrum sensing in cognitive radio systems. It discusses how energy detection works by formulating the spectrum sensing problem as a binary hypothesis test to determine if a primary user is present or absent. It finds that increasing the signal-to-noise ratio, sample size, or dynamic detection threshold can improve detection performance. However, it also notes that energy detection is very sensitive to noise uncertainty, which can seriously degrade performance, especially in low signal-to-noise environments. A dynamic thresholding approach is proposed to improve robustness to noise uncertainty.
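The binary hypothesis test at the heart of energy detection can be sketched as follows; the fixed `threshold` here is a placeholder for the dynamic, noise-variance-driven threshold the paper proposes:

```python
def energy_statistic(samples):
    """Average energy T = (1/N) * sum |x[n]|^2 of the sensed samples."""
    return sum(abs(s) ** 2 for s in samples) / len(samples)

def primary_user_present(samples, threshold):
    """Binary hypothesis test: decide H1 (primary user present) when the
    energy statistic exceeds the detection threshold, else decide H0."""
    return energy_statistic(samples) > threshold
```

The noise-uncertainty problem the document describes shows up here directly: if the true noise variance is misjudged, the fixed threshold sits in the wrong place, which is why a dynamic threshold tied to an estimate of the noise level is proposed.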
Report of the CPMI on Violence Against Women — Paulo Veras
- The Joint Parliamentary Commission of Inquiry (CPMI) was created to investigate the situation of violence against women in Brazil and to examine allegations of omission by public authorities in protecting women in situations of violence.
- The report presents the initial and final composition of the Commission, listing the titular and substitute members from the Federal Senate and the Chamber of Deputies for each party or bloc.
- The Commission's term was extended twice, and it concluded its work in August 2013.
Inmaculada Garbajosa Barroso Y Tamara Garbajosa BarrosoInmaculadayTamara
Jack was the youngest brother in a family of pirates. Unlike his older brothers, who had discovered their talents as a craftsman and a treasure hunter respectively, Jack had not yet discovered his own. His grandfather told him about a treasure hidden on a nearby, inaccessible island and encouraged him to go find it in order to discover his talent. Though doubtful, Jack set out on the journey. Facing sharks, birds and aggressive crabs, he…
This document summarizes a research paper that proposes a new resource scheduling algorithm called STRS for cloud computing environments. STRS aims to optimally allocate data resources across computational clusters in a distributed system to minimize data access costs. It does this through two distributed algorithms: STRSA runs at each parent node to determine optimal data allocation to child nodes, and STRSD runs at each child node to determine optimal data de-allocation. The paper also proposes an intra-cluster replication algorithm called ORPNDA that uses heuristic expansion-shrinking methods to determine optimal partial data replication within each cluster. Experimental results show STRS and ORPNDA significantly outperform general frequency-based replication schemes.
This document summarizes an article from the International Journal of Computer Engineering and Technology (IJCET) that proposes an algorithm called Replica Placement in Graph Topology Grid (RPGTG) to optimally place data replicas in a graph-based data grid while ensuring quality of service (QoS). The algorithm aims to minimize data access time, balance load among replica servers, and avoid unnecessary replications, while restricting QoS in terms of the number of hops and the deadline to complete requests. The article describes how the algorithm converts the graph structure of the data grid to a hierarchical structure to better manage replica servers, and proposes services to facilitate dynamic replication, including a replica catalog to track replica locations and a replica manager to perform replication.
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORIALGORITHM FOR HANDLING VOLUMIN...acijjournal
Apriori is one of the key algorithms for generating frequent itemsets. Analysing frequent itemsets is a crucial
step in analysing structured data and in finding association relationships between items. This stands as an
elementary foundation for supervised learning, which encompasses classifier and feature extraction
methods. Applying this algorithm is crucial for understanding the behaviour of structured data. Most of the
structured data in the scientific domain is voluminous, and processing such data requires state-of-the-art
computing machines. Setting up such an infrastructure is expensive, hence a distributed environment
such as a clustered setup is employed for tackling such scenarios. The Apache Hadoop distribution is one of
the cluster frameworks for distributed environments that helps by distributing voluminous data across a
number of nodes in the framework. This paper focuses on the map/reduce design and implementation of the
Apriori algorithm for structured data analysis.
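The map/reduce split for candidate counting can be sketched with plain Python generators — a hypothetical simplification of what a Hadoop job would do, with the candidate-pruning step between Apriori passes omitted:

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transactions, k):
    """Mapper: for each transaction in this split, emit
    (candidate k-itemset, 1) for every k-item combination."""
    for t in transactions:
        for cand in combinations(sorted(t), k):
            yield cand, 1

def reduce_phase(pairs, min_support):
    """Reducer: sum the counts per itemset and keep the frequent ones."""
    counts = defaultdict(int)
    for cand, c in pairs:
        counts[cand] += c
    return {cand: c for cand, c in counts.items() if c >= min_support}
```

In a real Hadoop deployment each mapper would process one block of the distributed transaction file, and the shuffle phase would route all counts for the same itemset to one reducer; the logic per record is exactly what is shown here.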
Analyse the performance of mobile peer to Peer network using ant colony optim...IJCI JOURNAL
The document describes analyzing the performance of a mobile peer-to-peer network using ant colony optimization. It proposes using a distributed spanning tree (DST) structure to improve efficiency by reducing the large number of messages. The DST is optimized using ant colony optimization to give an optimal solution. Simulation results show the approach reduces the number of messages, average delay, and increases packet delivery ratio in the network.
An asynchronous replication model to improve data available into a heterogene...Alexander Decker
This document summarizes a research paper that proposes an asynchronous replication model to improve data availability in heterogeneous systems. The proposed model uses a loosely coupled architecture between main and replication servers to reduce dependencies. It also supports heterogeneous systems, allowing different parts of an application to run on different systems for better performance. This makes it a cost-effective solution for data replication across different system types.
A Survey of File Replication Techniques In Grid SystemsEditor IJCATR
A grid is a type of parallel and distributed system designed to provide reliable access to data
and computational resources in wide-area networks. These resources are distributed across different geographical
locations. Efficient data sharing in global networks is complicated by erratic node failures, unreliable network
connectivity and limited bandwidth. Replication is a technique used in grid systems to improve
applications' response time and to reduce bandwidth consumption. In this paper, we present a survey of
basic and new replication techniques that have been proposed by other researchers, followed by a full
comparative study of these replication strategies.
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
ANALYSE THE PERFORMANCE OF MOBILE PEER TO PEER NETWORK USING ANT COLONY OPTIM...ijcsity
A mobile peer-to-peer computer network is one in which each computer in the network can act as a
client or a server for the other computers in the network. The communication process among the nodes in a
mobile peer-to-peer network requires a large number of messages. To reduce this message-passing overhead, we
propose an interconnection structure called the Distributed Spanning Tree (DST), which improves the efficiency
of the mobile peer-to-peer network. The proposed method improves data availability and consistency
across the entire network and also reduces data latency and the number of message passes required for
any specific application in the network. To further enhance the effectiveness of the proposed system, the
DST network is optimized with the Ant Colony Optimization method, which yields the optimal solution for the
DST and provides increased availability, enhanced consistency and scalability of the network. The
simulation results show that the approach reduces the number of messages sent for any specific application and the average
delay, and increases the packet delivery ratio in the network.
Abstract— Cloud storage is usually a distributed infrastructure, where data is not stored on a single device but is spread across several storage nodes located in different areas. To ensure data availability, some amount of redundancy has to be maintained, but introducing data redundancy leads to additional costs, such as the extra storage space and communication bandwidth required for restoring data blocks. In the existing system, the storage infrastructure is considered homogeneous, where all nodes in the system have the same online availability, which leads to efficiency losses. The proposed system considers a heterogeneous distributed storage system in which each node exhibits a different online availability. Monte Carlo sampling is used to measure the online availability of the storage nodes, and a parallel version of Particle Swarm Optimization is used to assign redundant data blocks according to their online availability. The optimal data assignment policy reduces the redundancy and its associated cost.
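The Monte Carlo availability estimate can be sketched as follows. The inverse-availability assignment below is a naive stand-in for the paper's parallel PSO search, and all names are hypothetical:

```python
import random

def estimate_availability(p_online, trials=10000, seed=1):
    """Monte Carlo estimate of a node's online availability: sample
    Bernoulli(p_online) observations and average them."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_online for _ in range(trials))
    return hits / trials

def assign_redundancy(node_probs, blocks):
    """Give each node a share of redundant blocks inversely proportional
    to its estimated availability (less-available nodes need more copies)."""
    est = {n: estimate_availability(p) for n, p in node_probs.items()}
    total = sum(1 - a for a in est.values())
    return {n: round(blocks * (1 - a) / total) for n, a in est.items()}
```

The PSO in the paper searches over full assignment vectors with availability-aware fitness rather than using this closed-form share, but the availability input it optimizes over is estimated exactly as in `estimate_availability`.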
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
The document summarizes research on vertical fragmentation, allocation, and re-fragmentation in distributed object relational database systems. It proposes an algorithm for vertical fragmentation and allocation that considers the usage of attributes and methods by queries at different sites. The algorithm forms usage matrices, calculates affinity between methods, clusters methods, and partitions the data into fragments that are allocated to sites where they see the most demand. It also describes handling update queries by redirecting them to a server for processing and then propagating the updates to relevant fragments.
A novel cost-based replica server placement for optimal service quality in ...IJECEIAES
Replica server placement is one of the crucial concerns for the geographic diversity associated with placement problems in content delivery networks (CDNs). After reviewing the existing literature, it is noted that most studies address the placement problem in conventional CDNs rather than cloud-based CDN architectures, and the few studies on replica selection are still in the nascent stages of development. Moreover, such models are not benchmarked or practically assessed to prove their effectiveness. Hence, the proposed study introduces a novel computational framework for cloud-based CDNs that facilitates cost-effective replica server management for enhanced service delivery. Implemented using an analytical research methodology, the simulation outcomes show that the proposed scheme offers reduced cost, reduced resource dependencies, reduced latency, and faster processing time in contrast to existing models of replica server placement.
An Integrated Inductive-Deductive Framework for Data Mapping in Wireless Sens...M H
Wireless sensor networks (WSNs) have an intrinsic interdependency with the environments in which they operate. The part of the world with which an application is concerned is defined as that application's domain. This paper advocates that the application domain of a WSN can serve as a supplement to analysis, interpretation, and visualisation methods and tools. We believe it is critical to elevate the capabilities of the data mapping services proposed in [1] to make use of the special characteristics of an application domain. In this paper, we propose an adaptive Multi-Dimensional Application Domain-driven (M-DAD) mapping framework that is suitable for mapping an arbitrary number of sense modalities and is capable of utilising the relations between different modalities, as well as other parameters of the application domain, to improve the mapping performance. M-DAD starts with an initial user-defined model that is maintained and updated throughout the network lifetime. The experimental results demonstrate that the M-DAD mapping framework performs as well as or better than mapping services without its extended capabilities.
IRJET- Improving Data Availability by using VPC Strategy in Cloud Environ...IRJET Journal
This document discusses improving data availability in cloud environments using virtual private cloud (VPC) strategies and data replication strategies (DRS). It proposes using VPC to define private networks in public clouds and deploying cloud resources into those private networks for improved security and control. It also proposes using DRS to store multiple copies of data across different nodes to increase data availability, reduce bandwidth usage, and provide fault tolerance. The proposed approach identifies popular data files for replication, selects the best storage sites based on factors like request frequency, failure probability, and storage usage, and decides when to replace replicas to optimize resource usage. A simulation showed that this hybrid VPC and DRS approach improved performance metrics such as response time, network usage, and load balancing.
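The site-selection step can be illustrated with a hypothetical weighted score over the three factors mentioned; the weights, field names, and functions are illustrative, not the paper's:

```python
def site_score(req_freq, fail_prob, storage_used, w=(0.5, 0.3, 0.2)):
    """Hypothetical weighted score: prefer sites with high request
    frequency, low failure probability and low storage usage.
    All inputs are assumed normalized to [0, 1]."""
    return w[0] * req_freq + w[1] * (1 - fail_prob) + w[2] * (1 - storage_used)

def best_site(sites):
    """Pick the storage site with the highest score for the next replica."""
    return max(sites, key=lambda s: site_score(s["freq"], s["fail"], s["used"]))
```

A replica-replacement policy would apply the same kind of score in reverse, evicting the replica whose site ranks lowest when storage pressure demands it.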
Data warehouses store integrated and consistent data in a subject-oriented data repository dedicated
especially to supporting business intelligence processes. However, keeping these repositories updated usually
involves complex and time-consuming processes, commonly denominated Extract-Transform-Load (ETL) tasks.
These data-intensive tasks normally execute in a limited time window, and their computational requirements
tend to grow over time as more data is dealt with. Therefore, we believe that a grid environment could suit
rather well as the backbone of the technical infrastructure, with the clear financial advantage of
using already-acquired desktop computers normally present in the organization. This article proposes a
different approach to distributing ETL processes in a grid environment, taking into account
not only the processing performance of its nodes but also the existing bandwidth, to estimate the grid's
availability in the near future and thereby optimize workflow distribution.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Hm2413291336
P.R. RAJESH KUMAR, B. LALITHA / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 4, July-August 2012, pp. 1329-1336

A CPU Resource Discovery Solution Using Cluster Computing

P.R. RAJESH KUMAR, (M.Tech) (CSE), Dept. of CSE, JNTUACE, JNTU Anantapur, Anantapur, Andhra Pradesh, India - 515001
B. LALITHA, Assistant Professor, Dept. of CSE, JNTUACE, JNTU Anantapur, Anantapur, Andhra Pradesh, India - 515001
Abstract

In computational clusters, communication is the dominant concern. To reduce communication cost and improve performance, researchers have studied data placement in distributed systems. Data items, however, become correlated through client accesses, and this correlation ultimately affects data placement. Early research on improving the performance of mobile transactions addressed replica placement for correlated data. This paper first discusses how replica allocation decisions can be made locally at each replica site in a tree network, using only the data access knowledge of its neighbors, and second develops a novel replication cost model for correlated resources in an Internet environment. Building on the earlier cost models and algorithms, we construct an inter-cluster resource discovery and utilization algorithm (ICRDU) for correlated data in the Internet environment, and then devise an intra-cluster resource replication and utilization algorithm (ICRRU) that makes replica placement decisions efficiently. The algorithms obtain near-optimal solutions for the correlated data model and produce a performance improvement. Our experimental studies show that the intra-cluster resource replication and utilization algorithm significantly outperforms the general frequency-based replication schemes.

1. Introduction

With advances in network technologies, applications increasingly serve widely distributed users. However, today's Internet still cannot guarantee quality of service, and congestion can cause prolonged delays and unsatisfied customers. The problem is more severe when accesses involve large amounts of data. Replication techniques are commonly used to reduce communication latency by bringing data close to clients; web caching is a successful example. When data may be updated, however, the problem becomes more complicated: the more replicas in the system, the higher the update cost. Data must therefore be placed carefully to avoid unnecessary overhead.

Many research efforts have addressed replica placement in computational clusters [1, 4, 7, 14, 15]. Generally, the access pattern guides the placement decision. Static [1] and dynamic [2, 7, 15] placement decisions, as well as full and partial replication schemes, have been investigated. Partial replication lets clients replicate subsets of the resources and is generally required in environments showing strong access locality [8, 11]. Most replica placement algorithms do not specifically presume full or partial replication [5, 15]; in practice, most placement algorithms can be applied to both cases by managing the granularity of the resources considered. Each of these algorithms, however, presumes that resources are independent of one another. In many applications, a single request or transaction may access multiple resources, which introduces correlation among the resources. For example, for a read transaction accessing multiple resources, holding all of these resources at the local site avoids communication overhead; but if only part of the accessed data set is replicated at the local site, replication brings little benefit, because the read transaction must still be forwarded to retrieve the missing resources. The same applies to an update transaction accessing multiple resources: if only part of the updated data set is replicated, a message is still required to forward the update request. For better replica allocation, it is therefore necessary to take the access correlation among resources into account.

In [12], we dealt with the replica placement issues of correlated resources in mobile environments, considering mobile networks with a star topology. That work further presumes that the base station node always holds the total data set (comprising all resources that may be required by the mobile nodes), which is necessary to minimize the connection time of the mobile units. When the general Internet environment is considered, several further issues arise. First, connection time need not be considered for Internet-based clients, and partial replication is needed. Moreover, a more complex topology must be considered for Internet accesses, so a more sophisticated placement algorithm for correlated resources is therefore required.
A crucial issue for placement algorithms in such an environment is who executes the intensive placement computation. Some placement algorithms are centralized [6, 14], which severely overloads the central decision-making site. Distributed replica placement algorithms, such as [10, 15], allow each replica site to make localized decisions; they can react to changes in access patterns and naturally divide the computation. In this paper, we construct a dynamic replica placement algorithm that (1) considers a tree topology network, (2) considers the correlation among resources accessed by the same requests, and (3) provides a distributed algorithm that performs localized analysis to reduce the computation cost. Moreover, our algorithm is adaptive and comprises two schemes. To obtain optimal solutions, a distributed branch-and-bound algorithm is used to calculate the optimal set of resource replicas to be assigned to the computational clusters. Recall that in commercial applications, transaction access patterns follow the "80/20" rule [3]: just 20% of the transaction types account for 80% of the total number of transactions. When the transaction access pattern is stable, the associated set of resources is considerably small. Lanier Watkins et al. [] discussed a passive solution to the memory resource discovery problem in computational clusters. Compared to existing models, this solution is more robust, secure, and scalable. Its prime benefits are low message complexity, scalability, load balancing, and low maintenance, but the solution is limited in dealing with the resource requirements of the clusters. So the optimal algorithm is obtainable in this case. To deal with frequently changing access patterns, we propose resource discovery, replication, and utilization algorithms that obtain near-optimal solutions, motivated by the work of Lanier Watkins et al. [].

The remainder of this paper is organized as follows. Section 2 explains our system model and problem definition, based on the system model defined in [12]. Section 3 presents the transaction-based cost model in a distributed environment. Section 4 gives a distributed partial replication algorithm (ICRDU). Section 5 presents an intra-cluster resource replication and utilization partial replication algorithm (ICRRU). Section 6 reports the experimental results, and Section 7 concludes the paper.

2. Resource Allocation System Format

Studies reveal that networks can be decomposed into connected autonomous systems under separate administrative control [9]. These autonomous systems are generally treated as clusters. In this way, we model the underlying topology of the widely distributed system as a cluster-based general graph. To scale the system, we presume that there is a data server in every cluster. In this research, we consider only the data replica placement on the computational clusters; the replica assignment inside each cluster can be treated separately. We presume that the actual copy of the entire set of N resources to be accessed by users resides at a primary data server, denoted O. Let DO denote the N resources on the data server O; then DO = {d1, d2, ..., dN} and |DO| = N. When applications use data stored at this primary server, they generally follow shortest-path-tree routing to the data source positioned at the primary server O [9]. Studies reveal that almost all routes are stable over days or weeks, so the routing paths can be viewed as a tree topology rooted at the primary server O. We therefore model replication in the widely distributed system as a tree graph, denoted T, rooted at the primary server O. To expedite efficient data access by users on the Internet, a subset of the resources in DO is replicated on the computational clusters in T.

Let Dx denote the resources on a data server x; then Dx ⊆ DO, and |Dx| is the number of replicated resources on x. For a set of resources Ω, let R(Ω) denote the resident set of Ω, i.e., the set of computational clusters holding Ω or a superset of Ω: R(Ω) = {x ∈ T | Ω ⊆ Dx}. The distributed system supports both read and update requests, and each request can access an arbitrary number of resources in DO. Each data server x in T has a number of clients connected to it, and the requests issued by these clients can be viewed as requests from the data server x. We presume that clients can issue both read and update requests. Let t denote a transaction, which can be a read transaction or an update transaction. Let D(t) denote the set of resources read or updated by t, D(t) ⊆ DO; D(t) is designated the transaction-data set of t. For a transaction t accessing D(t), issued at a data server v: if D(t) ⊆ Dv, then t is served locally; otherwise, t is sent to the closest data server that can serve it. For an update transaction t, the data server that performs t must send the update to the other computational clusters through the edges of the subtree spanning all computational clusters x in T where D(t) ∩ Dx ≠ φ. Our aim is to optimally allocate replicas of the resources in DO to the computational clusters in T so that the aggregate access cost is minimized for a given client access pattern. We define a replica placement with minimal cost as an optimal replica placement of DO, OptPl(DO). Observe that optimal placement solutions may not be unique.
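The system model above can be sketched as follows. This is a hypothetical Python illustration (the `Server` class and the `resident_set` and `serve` names are ours, not the paper's); for simplicity, forwarding walks only the path toward the root O rather than searching for the literally closest capable server in T.

```python
# Sketch of the Section 2 system model: a tree of data servers rooted
# at the primary server O, each holding a subset D_x of the resources.

class Server:
    def __init__(self, name, data, parent=None):
        self.name = name
        self.data = set(data)   # D_x: resources replicated at x
        self.parent = parent    # edge toward the primary server O

def resident_set(servers, omega):
    """R(omega): the servers holding omega or a superset of it."""
    omega = set(omega)
    return {s.name for s in servers if omega <= s.data}

def serve(server, transaction_data):
    """A transaction t is served locally iff D(t) is a subset of D_x;
    otherwise it is forwarded toward a server that can serve it
    (here: simply up the root path)."""
    t = set(transaction_data)
    node = server
    while node is not None:
        if t <= node.data:
            return node.name
        node = node.parent
    return None  # not servable on this root path

O = Server("O", {"d1", "d2", "d3", "d4"})  # primary holds all of D_O
x = Server("x", {"d1", "d2"}, parent=O)    # partial replica
y = Server("y", {"d1"}, parent=x)

servers = [O, x, y]
print(resident_set(servers, {"d1", "d2"}))  # {'O', 'x'}
print(serve(y, {"d1"}))                     # 'y' (served locally)
print(serve(y, {"d1", "d3"}))               # 'O' (forwarded to root)
```

The last call illustrates the correlation problem from the Introduction: because only part of the transaction-data set {d1, d3} is replicated at y, the read must be forwarded anyway.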
For a data set Ω, R(Ω) in an OptPl(DO) is an optimal resident set of Ω. In this research, we do not consider node capacity constraints, message losses, node failures, or consistency maintenance issues. The communication cost incurred by replica placement and de-allocation themselves is also not considered.

3. Operation-Based Limited Replication Rate Format

In this section, we define operation-based limited replication rate formats for correlated resources in a tree graph, based on the cost models defined in [Tum06a]. Consider two computational clusters x and y, where data server y is the parent data server of x. We emphasize only the replica allocation decision at y and the replica de-allocation decision at x, because the allocation and de-allocation models and approaches can be applied in a similar fashion to the other computational clusters in the tree T.

Consider a set of resources Ω, Ω ⊆ DO. If Ω is replicated to x, a read transaction t with D(t) ⊆ Ω can be served locally, and one message is saved; we regard this as one unit of benefit of replicating Ω on x. However, because of the replication of Ω on x, an update transaction t with D(t) ∩ Ω ≠ φ must also be sent to x; we regard this as one unit of cost incurred by replicating Ω on x.

We define update_a(Ω, x) as the cost if Ω is replicated from x's parent data server y to x:
    update_a(Ω, x) = Σ |W(S)|, where (S ∩ Ω ≠ φ) ∧ (S ∩ Dx = φ).
We define read_a(Ω, x) as the additional benefit if Ω is replicated to x:
    read_a(Ω, x) = Σ |Q(S)|, where (S ⊆ Ω ∪ Dx) ∧ (S ∩ Ω ≠ φ).
We then define cost_a(Ω, x) as the data object access cost for replicating Ω at x from y:
    cost_a(Ω, x) = update_a(Ω, x) − read_a(Ω, x).
Let S^a_opt denote the set that minimizes cost_a(Ω, x), i.e., S^a_opt = arg min_Ω cost_a(Ω, x). Transaction-based partial replication replicates S^a_opt to x if cost_a(S^a_opt, x) < 0.

In ICRDU, a node x may de-allocate a data set Ω only if x is a leaf node of R(Ω), based on its knowledge of the replicas on its child nodes, and x has held the replica of Ω since the end of the last time period. We denote by Dd_x the data set on x such that x is a leaf node of R(Dd_x) and x has held the replicas of Dd_x since the end of the last time period. A set of resources Ω ⊆ Dd_x may require de-allocation from x if replicating Ω does not pay off in terms of communication cost. Let update_d(Ω, x) denote the de-allocation benefit of de-allocating Ω; in essence, it comprises all update transactions from y accessing a subset of the resources in Ω:
    update_d(Ω, x) = Σ |W(S)|, where (S ⊆ Ω) ∧ (Ω ⊆ Dd_x) ∧ (S ≠ φ).
Let read_d(Ω, x) denote the de-allocation cost; in essence, it comprises all read transactions issued by x accessing a subset of Dx and some resources in Ω:
    read_d(Ω, x) = Σ |Q(S)|, where (S ∩ Ω ≠ φ) ∧ (S ⊆ Dx) ∧ (Ω ⊆ Dd_x).
The replica de-allocation access cost, cost_d(Ω, x), is the difference between the read cost and the update benefit of de-allocating Ω from x:
    cost_d(Ω, x) = read_d(Ω, x) − update_d(Ω, x).
Let S^d_opt denote the set that minimizes cost_d(Ω, x), i.e., S^d_opt = arg min_Ω cost_d(Ω, x). Transaction-based partial replication de-allocates S^d_opt from x if cost_d(S^d_opt, x) < 0.

4. ICRDU Algorithm

From [13], the replica placement problem in T can be solved by assigning replicas in each subtree separately, and the placement of replicas on each data server in T can be decided independently of all but its neighboring computational clusters. Moreover, the set of resources on v, Dv, is a subset of Dw, where w is the parent data server of v in T. The replica placement on v can therefore be treated as dependent only on w and independent of the other computational clusters. In this way, the placement problem can be solved by a distributed optimal partial replication algorithm, ICRDU (comprising ICRDUA and ICRDUD, Fig. 1). At the end of time period τi, the parent data server y makes the replica allocation decision for its child data server x using ICRDUA, based on its knowledge of the replicas on x obtained at the end of the previous time period τi−1. Upon receiving the replicas allocated by y, data server x makes the replica de-allocation decision for itself using ICRDUD. In ICRDUD, data server x may de-allocate a set of resources Ω only if x is a leaf data server of R(Ω) and it has held the replicas of the resources in Ω since the end of time period τi−1. Let S^a_opt(y, x, i) denote the data set computed by ICRDUA at data server y for its child x at the end of τi, and let S^d_opt(x, i) denote the data set computed by ICRDUD at data server x for itself at the end of τi.
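The allocation side of the Section 3 cost model can be sketched in Python as follows. This is an illustrative reading of the definitions (the function names and the dict-based transaction logs are our assumptions): Q(S) and W(S), the read and update transactions whose transaction-data set is S, are represented as dicts mapping each set S to its count, and Ω is worth replicating when cost_a is negative.

```python
# Sketch of the Section 3 allocation costs: cost_a = update_a - read_a.

def update_a(omega, d_x, writes):
    """Extra update cost if omega is replicated to x: updates that
    touch omega but previously did not reach x (S ∩ D_x = φ)."""
    return sum(n for s, n in writes.items()
               if s & omega and not (s & d_x))

def read_a(omega, d_x, reads):
    """Read benefit: reads that become locally servable at x
    (S ⊆ omega ∪ D_x and S ∩ omega ≠ φ)."""
    return sum(n for s, n in reads.items()
               if s <= (omega | d_x) and s & omega)

def cost_a(omega, d_x, reads, writes):
    """Replicate omega to x when this is negative."""
    return update_a(omega, d_x, writes) - read_a(omega, d_x, reads)

d_x = frozenset({"d1"})                  # x already holds d1
reads = {frozenset({"d1", "d2"}): 10}    # frequent correlated read
writes = {frozenset({"d2"}): 3}          # occasional update of d2

omega = frozenset({"d2"})
print(cost_a(omega, d_x, reads, writes))  # 3 - 10 = -7, so replicate
```

The de-allocation costs read_d, update_d, and cost_d from the same section would be coded analogously, with the subset conditions on Dd_x in place of the conditions above.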
Note that, to avert replication oscillation, ICRDU requires that the replica de-allocation decision at x be made after x has obtained the replicas from its parent data server y within the same time period τi. Moreover, ICRDUA and ICRDUD require computing a lower-bound cost for each new transaction-data union S_TranListIndex ∪ S. One easy lower bound is read_a(S_super, x) − update_a(S_TranListIndex ∪ S, x), where S_super = S ∪ S_TranListIndex ∪ ... ∪ S_{TranList.Length−1}, and S_i is the data set accessed by TranList[i], TranListIndex ≤ i ≤ n.

    TranList: the set of distinct transactions that access data;
    TranListIndex: initialized to 0;
    S: the candidate set during the search, initialized to φ;
    MinCost: cost_a(S^a_opt(y, x, i), x), initialized to ∞;
    MinCost_d: cost_d(S^d_opt(x, i), x), initialized to ∞;
    LowerBoundCost(S_new) and LowerBoundCost_d(S_new): defined following the algorithm;

    ICRDUA(S, TranListIndex):
      if (TranListIndex < TranList.Length) {
        S_TranListIndex = TranList[TranListIndex].getDataObjectSet();
        S_new = S ∪ S_TranListIndex;
        if (LowerBoundCost(S_new) < MinCost)
          ICRDUA(S_new, TranListIndex + 1);
        ICRDUA(S, TranListIndex + 1);
      } else if (cost_a(S, x) < MinCost) {
        MinCost = cost_a(S, x);
        S^a_opt(y, x, i) = S − Dx;
      }

    ICRDUD(S, TranListIndex):
      if (TranListIndex < TranList.Length) {
        S_TranListIndex = TranList[TranListIndex].getDataObjectSet();
        S_new = S ∪ S_TranListIndex;
        if (LowerBoundCost_d(S_new) < MinCost_d)
          ICRDUD(S_new, TranListIndex + 1);
        ICRDUD(S, TranListIndex + 1);
      } else if (cost_d(S, x) < MinCost_d) {
        MinCost_d = cost_d(S, x);
        S^d_opt(x, i) = S;
      }

Fig 1: ICRDU algorithm (ICRDUA and ICRDUD)

Theorem 1. Let L denote the tree level. ICRDU stabilizes and converges to the optimal placement in 2L time periods.
Proof: The proof is omitted because of space limitations. Kindly refer to [13] for the full proof.

5. ICRRU Algorithm

Since the run time of the optimal replication algorithms grows exponentially with the number of transactions, we construct the intra-cluster resource replication and utilization algorithms, ICRRU, which employ the heuristic Expansion-Shrinking algorithms discussed in [12].
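The branch-and-bound search of Fig. 1, whose exponential worst case motivates the heuristics, can be sketched as follows. This is a hypothetical, simplified illustration (the names and the toy cost function are ours): it enumerates unions of transaction-data sets and prunes any branch whose lower bound cannot improve on the best cost found so far.

```python
# Sketch of the Fig. 1 branch-and-bound search over transaction-data
# set unions; cost_a and lower_bound stand in for the Section 3 model.

def branch_and_bound(tran_sets, cost_a, lower_bound):
    best = {"cost": 0, "set": frozenset()}  # replicate only if cost < 0

    def search(s, index):
        if index < len(tran_sets):
            s_new = s | tran_sets[index]
            if lower_bound(s_new) < best["cost"]:
                search(s_new, index + 1)     # include TranList[index]
            search(s, index + 1)             # exclude TranList[index]
        elif cost_a(s) < best["cost"]:
            best["cost"], best["set"] = cost_a(s), s

    search(frozenset(), 0)
    return best["set"], best["cost"]

# Toy cost: replicating {"d1","d2"} together saves 5 messages;
# anything containing "d3" costs 4 in extra updates.
def cost_a(s):
    c = 0
    if {"d1", "d2"} <= s:
        c -= 5
    if "d3" in s:
        c += 4
    return c

tran_sets = [frozenset({"d1"}), frozenset({"d2"}), frozenset({"d3"})]
lower_bound = lambda s: cost_a(s) - 5  # optimistic (admissible) bound

best_set, best_cost = branch_and_bound(tran_sets, cost_a, lower_bound)
print(sorted(best_set), best_cost)  # ['d1', 'd2'] -5
```

The correlated pair is only found when the two singleton sets are considered jointly, which is exactly why the search must explore unions of transaction-data sets rather than rank items individually.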
The heuristic Expansion-Shrinking algorithms use the cost functions defined in Section 3. Moreover, to make ICRRU stabilize, we combine the Set-Expansion and Set-Shrinking algorithms. The data server first performs the Set-Expansion algorithm, which computes a data set that is a subset of the optimal data set computed by the optimal algorithm. It then performs the Set-Shrinking algorithm, presuming that the data set computed by Set-Expansion has already been allocated to the child data server (or de-allocated from itself). Finally, the data sets computed by the two algorithms are merged into the final data set to be allocated from y to x, or de-allocated from x. The replica allotment procedure of the ICRRU algorithm (ICRRU-A) and the replica withhold procedure (ICRRU-W) are shown in Figs. 2 and 3, respectively.

ICRRU is defined as follows. At the end of time period τi, the parent data server y makes the replica allocation decision for its child data server x by performing ICRRU-A, based on its knowledge of the replicas on x obtained at the end of the previous time period τi−1. After receiving the replicas from y, data server x performs ICRRU-W and makes the replica de-allocation decision for itself. Here, data server x may de-allocate a set of resources Ω only if x is a leaf data server of R(Ω) and it has held the replicas of the resources in Ω since the end of time period τi−1. Observe that, to avert replication oscillation, ICRRU requires that the de-allocation decision at x be made after x has obtained the replicas from its parent data server y within the same time period τi.

    S^a_heu, *S^a_heu, **S^a_heu, S: data sets, initialized to φ;
    *L_y: a record log vector (a copy of L_y made for cost computing);
    **L_y: a temporary record log vector, initialized to φ;
    /* Set-Expansion */
    while (*L_y ≠ φ)
      among the data sets recorded in *L_y, choose the data set S such that |S − S^a_heu| is minimal;
      if (S ⊆ **S^a_heu) delete the record with S from *L_y;
      else if (cost_a(S^a_heu ∪ S, x) < cost_a(S^a_heu, x)) then S^a_heu = S^a_heu ∪ S;
      else S^a_heu remains unchanged;
      if the size of S^a_heu has increased, delete the record with S from *L_y and move all records from **L_y to *L_y;
      else move the record with S from *L_y to **L_y;
    *S^a_heu = S^a_heu; move all records from **L_y to *L_y, and set S^a_heu = φ;
    /* Set-Shrinking: assume *S^a_heu has been allocated to x, i.e. *D_x = D_x ∪ *S^a_heu, and re-do the log reduction in *L_y */
    while (*L_y ≠ φ)
      among the data sets recorded in *L_y, choose the data set S such that |S − S^a_heu| is minimal;
      if (cost_a((D_y − *D_x) − (S ∪ S^a_heu), x) < cost_a((D_y − *D_x) − S^a_heu, x)) then S^a_heu = S^a_heu ∪ S;
      if the size of S^a_heu has increased, delete the record with S from *L_y and move all records from **L_y to *L_y;
      else move the record with S from *L_y to **L_y;
    **S^a_heu = (D_y − *D_x) − S^a_heu;
    if (cost_a(**S^a_heu, x) ≥ 0) **S^a_heu = φ;
    S^a_heu = *S^a_heu ∪ **S^a_heu;

Fig 2: ICRRU algorithm for replica allotment (ICRRU-A)

    S^d_heu, *S^d_heu, **S^d_heu, S: data sets, initialized to φ;
    *L_x: a record log vector (a copy of L_x made for cost computing);
    **L_x: a temporary record log vector, initialized to φ;
    /* Set-Expansion */
    while (*L_x ≠ φ)
      among the data sets recorded in *L_x, choose the data set S such that |S − S^d_heu| is minimal;
      if (S ⊆ **S^d_heu) delete the record with S from *L_x;
      else if (cost_d(S^d_heu ∪ S, x) < cost_d(S^d_heu, x)) then S^d_heu = S^d_heu ∪ S;
      else S^d_heu remains unchanged;
      if the size of S^d_heu has increased, delete the record with S from *L_x and move all records from **L_x to *L_x;
      else move the record with S from *L_x to **L_x;
    *S^d_heu = S^d_heu; move all records from **L_x to *L_x, and set S^d_heu = φ;
    /* Set-Shrinking: assume *S^d_heu has been de-allocated from x, i.e. *D_x = D_x − *S^d_heu, and re-do the log reduction in *L_x */
    while (*L_x ≠ φ)
      among the data sets recorded in *L_x, choose the data set S such that |S − S^d_heu| is minimal;
      if (cost_d(*D_x − (S ∪ S^d_heu), x) < cost_d(*D_x − S^d_heu, x)) then S^d_heu = S^d_heu ∪ S;
      if the size of S^d_heu has increased, delete the record with S from *L_x and move all records from **L_x to *L_x;
      else move the record with S from *L_x to **L_x;
    **S^d_heu = *D_x − S^d_heu;
    if (cost_d(**S^d_heu, x) ≥ 0) **S^d_heu = φ;
    S^d_heu = *S^d_heu ∪ **S^d_heu;

Fig 3: ICRRU algorithm for replica withhold (ICRRU-W)

Theorem 2. Let L denote the tree level. ICRRU stabilizes in at most 2L time periods.
Proof: The proof is omitted because of space confinement. Kindly refer to [13] for the full proof.

6. Simulation Results

In the simulation, the ICRRU algorithm is compared with the commonly used distributed frequency-based partial replication (DFPR) scheme to study its performance and effectiveness.
on message saving. Observe that the frequency-based algorithm defined in [12] can easily be adapted to the distributed solution, DFPR, which is fairly direct: each node makes a local decision depending on the access frequency. The tree network that we chose comprises 20 nodes with a height of 6. Initially, the root node O (which indicates the cluster HMSC) hosts all the resources in data set DO. Each node x in the tree is randomly allocated a partial set of the resources Dx ⊆ DO, with the requirement that Dx ⊆ Dy if y is an ancestor of node x in the tree network. The metric that we take into consideration is the product of the number of messages and the number of hops these messages are sent in order to process the requests in the tree network. For example, if a request that is issued locally can be served locally, then the number of messages is counted as 0; if the request is served at its parent, then one message is counted.

In this simulation, we utilize one PC platform to simulate 20 server clusters, and the period of execution for each experiment spans 10 time periods. The simulated distributed algorithm requires retaining the history logs for every request (most of this information is not required in a real system). Because of the excessive I/O, the simulation process becomes very time-consuming (observe that the time is not because of the algorithm itself). To help avert biased access patterns, multiple access patterns are produced randomly and the data collection process is redone for each pattern. In order to balance the confidence level of the experimental results against the time for the simulation study, we repeat the procedure 100 times (i.e., producing 100 distinct access patterns). When the number of resources and the number of transactions increase, the system is likely to overload and we have to reduce the repetition to 10 times (i.e., utilizing only 10 distinct access patterns). The final data given in the following subsections are the averages of these trials.

6.1 Transaction Generation
We presume that the resources accessed by most of the transactions follow various patterns. Moreover, because of access locality, we presume that the patterns will be stable for some time periods. In this experimental study, we first produce a series of transactions and scrutinize the resources they access. The set of resources accessed by a transaction is defined as a resource bundle. We produce transactions till M distinct resource bundles are identified. Observe that the resource bundles may have overlapping resources, but no two resource bundles are identical. For producing a transaction, the number of resources to be accessed by the transaction, indicated as μ, is first ascertained. μ is bounded by TS and follows a Zipf distribution. More specifically, the chance of a transaction having data set size μ is proportional to 1/μ^(Z_S), where Z_S is the skew parameter of the Zipf distribution [13]. Then the μ resources are ascertained. Let NS indicate the total number of resources we have taken into consideration in the experimental study.

Each data object is ranked. The chance that a data object of rank r is accessed by a transaction is proportional to 1/r^(Z_D) (also a Zipf distribution), where Z_D is the skew parameter. Lastly, whether a transaction is read-only or an update is ascertained by the ratio R/W in a uniform distribution, where R is the total number of read transactions issued by PMC and W is the aggregate number of update transactions issued by nodes other than PMC. From the first batch of transactions produced, we get M resource bundles and utilize them as the basis to formulate an access distribution. The M resource bundles are separated into two clusters: the read cluster, having M1 resource bundles for the read transactions, and the write cluster, having M2 resource bundles for the update transactions, where M1 + M2 = M and M1/M2 = R/W. For each cluster, 0.5·M1 (or 0.5·M2) virtual resource bundles are appended, and the pre-generated M1 (or M2) resource bundles along with the virtual resource bundles are ranked. The transactions are produced in a similar way as discussed in [12]. For the simulation, we presume that within a time limit T, TN transactions are issued from each data server in the tree.

In this manner, in each time period, all computational clusters issue TN transactions that are randomly produced depending on the same access pattern. To make the model allocation decision for a child data server, each data server records the number of read transactions sent from the child data server and the number of update transactions sent to the child data server. In a similar fashion, recording is also done to make the model de-allocation decision for a child data server. If a read transaction issued locally or sent from a child can be served locally, then the data server records the transaction in its local log. Apart from that, it also records the update transactions propagated from its parent in the local transaction log. After the time period, a model allocation decision is made for each child data server depending on the transaction log recorded for that child data server, and resources are replicated and assigned to the child data server if required. Then all computational clusters make the
de-allocation decision depending on their local transaction logs. Performance data are calculated from the allocated transactions and the model set at each data server in each time period, just before it gets another model set from its parent. Observe that in this simulation the number of resources selected is small, because of the amount of memory needed for recording logs for each data server in each time period. A much larger number of resources can be selected when ICRRU is operated in a real system, as indicated in [12].

6.2 The Stabilization
The intra cluster resource replication and utilization algorithm is compared with the distributed frequency-based algorithm to observe the stability of the two algorithms. Fig. 4 gives the number of messages necessary for processing every transaction issued in the system for the two algorithms. The number of messages needed in the system drops very quickly in the first 4 time periods for both algorithms. The percentage by which the number of messages declines in the 1st time period is much greater than that of the 2nd time period, which is in turn greater than that of the 3rd time period, and so on. After the 4th time period, the number of messages required in the system remains stable for both algorithms, at less than 40% of the messages. At any time period, ICRRU requires fewer messages than the ICRDU algorithm, but the difference is negligible.

Fig 4: The performance of the cost stabilization by ICRDU and ICRRU

6.3 Scalability in terms of Resources and Transactions
The intra cluster resource replication and utilization algorithm ICRRU is compared with the distributed frequency-based algorithm DFPR, and we calculate the effect of NS (the number of resources in DO) on their performance. The parameters in this experiment are set as follows: Z_D = 0.2, Z_S = 0.5, R/W = 0.9, TS = 3, TN = 50·NS. M and NS change from 10 to 100 (M is adjusted to make sure that almost all resources are accessed). Fig. 5 indicates the number of messages necessary for processing all the transactions issued by the system for the two algorithms. As NS increases, the number of messages necessary for the two algorithms increases, because the number of transactions also increases. However, the difference in the number of messages necessary for the two algorithms remains stable (the number of messages necessary for resource discovery and replication with ICRRU&ICRDU is 17% of that needed for resource discovery alone [16]), even though the variation in the number of messages increases.

Fig 5: Scalability of resource discovery and usage [16] and ICRRU&ICRDU

7. Summary
In this paper, we scrutinize the dynamic designation of correlated resources in computational clusters. We model the topology of the network formed by the computational clusters in the distributed system as a tree. We first show that model designation decisions can be made locally at each model site in a tree network, using data access knowledge of its neighbors. We then develop a new replication cost model for correlated resources in an Internet environment. Based on the cost model and the algorithms used in previous research, we develop an inter cluster resource discovery and utilization algorithm (ICRDU) for correlated data in an Internet environment. ICRDU has two sub-algorithms, ICRDUA and ICRDUD, and it is shown that ICRDU stabilizes and converges in 2L time periods, where L is the height of the tree.

Then an intra cluster resource replication and utilization algorithm (ICRRU) is developed for making model placement decisions in an efficient manner. The ICRRU has two sub-algorithms, namely,
ICRRU for model allotment (ICRRU-A) and ICRRU for model withhold (ICRRU-W), and it is indicated that ICRRU stabilizes in 2L time periods. The algorithm obtains near-optimal solutions for the correlated data model and produces significant performance gains. The simulation results indicate that the intra cluster resource replication and utilization algorithm significantly outperforms the general resource discovery and utilization schemes.

References
[1] P. Apers. Data allocation in distributed database systems. ACM Transactions on Database Systems, Vol. 13, No. 3, Sept. 1988.
[2] A. Bestavros and C. Cunha. Server-initiated document dissemination for the WWW. IEEE Data Engineering Bulletin, 19(3), Sept. 1996, pp. 3-11.
[3] S. Ceri, S. B. Navathe, and G. Wiederhold. Distribution design of logical database schemas. IEEE Transactions on Software Engineering, Vol. SE-9, No. 4, 1983.
[4] D. Dowdy and D. Foster. Comparative models of the file assignment problem. ACM Computing Surveys, 14(2), 1982.
[5] Y. Huang, P. Sistla, and O. Wolfson. Data replication for mobile computers. In Proceedings of the 1994 ACM SIGMOD, May 1994.
[6] K. Kalpakis, K. Dasgupta, and O. Wolfson. Optimal placement of models in trees with read, write, and storage costs. IEEE Transactions on Parallel and Distributed Systems, Vol. 12, No. 6, 2001.
[7] D. Kossmann. The state of the art in distributed query processing. ACM Computing Surveys (CSUR), Vol. 32, Issue 4, December 2000.
[8] Z. Lu and K. S. McKinley. Partial collection replication versus caching for information retrieval systems. In Proceedings of the ACM International Conference on Research and Development in Information Retrieval, Athens, Greece, July 2000.
[9] V. Paxson. End-to-end routing behavior in the Internet. IEEE/ACM Transactions on Networking, 5(5), 1997, pp. 601-615.
[10] J. Sidell, P. Aoki, A. Sah, C. Staelin, M. Stonebraker, and A. Yu. Data replication in Mariposa. In Proceedings of the IEEE Conference on Data Engineering (New Orleans, LA, Feb. 1996), pp. 485-494.
[11] A. Sousa, F. Pedone, R. Oliveira, and F. Moura. Partial replication in the database state machine. In Proceedings of the IEEE International Symposium on Network Computing and Applications, 2001.
[12] M. Tu, P. Li, L. Xiao, I. Yen, and F. Bastani. Model placement algorithms for mobile transaction systems. IEEE Transactions on Knowledge and Data Engineering, Vol. 18, No. 7, 2006.
[13] M. Tu. A data management framework for secure and dependable data grid. Ph.D. Dissertation, UT Dallas, July 2006. http://www.utdallas.edu/~tumh2000/ref/Thesis-Tu.pdf
[14] O. Wolfson and A. Milo. The multicast policy and its relationship to modeled data placement. ACM Transactions on Database Systems, Vol. 16, No. 1, 1991.
[15] O. Wolfson, S. Jajodia, and Y. Huang. An adaptive data replication algorithm. ACM Transactions on Database Systems, Vol. 22, No. 2, 1997, pp. 255-314.
[16] L. Watkins, W. H. Robinson, and R. A. Beyah. A passive solution to the memory resource discovery problem in computational clusters. IEEE Transactions on Network and Service Management, 7(4), pp. 218-230, 2012.
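For concreteness, the transaction-generation procedure of Section 6.1 can be sketched in Python as follows. This is an illustrative sketch rather than the authors' simulator: the function names are ours, the defaults mirror the parameter setting of Section 6.3 (TS = 3, Z_S = 0.5, Z_D = 0.2, R/W = 0.9), and we read the R/W ratio as reads-to-updates, so a transaction is read-only with probability R/W divided by (1 + R/W).

```python
import random


def zipf_weights(n, skew):
    """Weight of rank r (1..n) proportional to 1 / r**skew (Zipf distribution)."""
    return [1.0 / (r ** skew) for r in range(1, n + 1)]


def generate_transaction(num_objects, ts=3, z_s=0.5, z_d=0.2, rw=0.9, rng=random):
    """Produce one transaction as (kind, resource_bundle).

    The data-set size mu is bounded by ts, with P(mu) proportional to
    1/mu**z_s; each of the mu distinct ranked objects is then drawn with
    probability proportional to 1/r**z_d, where r is the object's rank.
    """
    # Draw the bundle size mu in 1..ts from a Zipf distribution with skew z_s.
    mu = rng.choices(range(1, ts + 1), weights=zipf_weights(ts, z_s))[0]
    # Draw mu distinct objects by rank, each with probability ~ 1/r**z_d.
    ranks = range(1, num_objects + 1)
    weights = zipf_weights(num_objects, z_d)
    bundle = set()
    while len(bundle) < mu:
        bundle.add(rng.choices(ranks, weights=weights)[0])
    # One plausible reading of the R/W ratio: P(read) = rw / (1 + rw),
    # consistent with M1/M2 = R/W for the read and write clusters.
    kind = "read" if rng.random() < rw / (1.0 + rw) else "update"
    return kind, frozenset(bundle)
```

Repeating this draw until M distinct bundles appear yields the resource bundles used to formulate the access distribution; the two Zipf skews separately control how strongly transaction sizes and object popularity are concentrated.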