This document summarizes a research paper that proposes a method for improving both fault tolerance and load balancing in grid computing networks. The method converts the tree structure of grid computing nodes into a distributed R-tree index and then applies an entropy estimation technique, which discards high-entropy nodes from the tree to reduce complexity. Thresholding and control algorithms then select optimal route paths based on load balance and fault tolerance. Optimization techniques such as genetic algorithms, ant colony optimization, and particle swarm optimization are also applied to reach better solutions. Experimental results showed that the proposed method outperformed existing methods.
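The paper's exact entropy estimator is not reproduced in this summary; as a minimal sketch of the pruning idea, Shannon entropy can be computed over a hypothetical per-node load distribution and used to discard unpredictable nodes (the `load_dist` field and the threshold value are illustrative assumptions, not from the paper):

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def prune_high_entropy_nodes(nodes, threshold):
    """Keep only nodes whose load-distribution entropy is at or below the
    threshold; high-entropy (unpredictable) nodes are discarded."""
    return [n for n in nodes if shannon_entropy(n["load_dist"]) <= threshold]

nodes = [
    {"id": "A", "load_dist": [1.0]},        # perfectly predictable: entropy 0
    {"id": "B", "load_dist": [0.5, 0.5]},   # entropy 1 bit
    {"id": "C", "load_dist": [0.25] * 4},   # entropy 2 bits: discarded
]
kept = prune_high_entropy_nodes(nodes, threshold=1.0)
```

A node with a uniform load distribution over many states carries maximal entropy, which is the sense in which such nodes add complexity to the index.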
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Grid is a type of parallel and distributed system designed to provide reliable access to data and computational resources over wide-area networks. These resources are distributed across different geographical locations. Efficient data sharing in global networks is complicated by erratic node failures, unreliable network connectivity, and limited bandwidth. Replication is a technique used in grid systems to improve application response time and reduce bandwidth consumption. In this paper, we present a survey of basic and new replication techniques proposed by other researchers, followed by a full comparative study of these replication strategies.
RSDC (Reliable Scheduling Distributed in Cloud Computing) - IJCSEA Journal
In this paper we present a reliable scheduling algorithm for cloud computing environments. The algorithm classifies jobs and considers their request and acknowledge times in a qualification function. Evaluating previous algorithms shows that job scheduling has been performed using parameters associated with a failure rate. In the proposed algorithm, in addition to those parameters, further important parameters are used so that jobs can be scheduled differently based on them. The mechanism divides a major job into sub-jobs; to balance the jobs, the request and acknowledge times are calculated separately, and each job is scheduled by computing these times as a shared job. As a result, system efficiency increases, the algorithm's real time improves compared with other algorithms, and the total processing time in cloud computing is reduced.
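The abstract does not give the qualification function's exact form. A minimal greedy sketch, assuming the function rewards low combined request/acknowledge time and low failure rate (all node figures below are invented), might look like:

```python
def qualification(req_time, ack_time, failure_rate):
    """Higher when combined request/acknowledge time and failure rate are low."""
    return 1.0 / (req_time + ack_time + failure_rate)

def schedule_subjobs(subjobs, nodes):
    """Greedy assignment: each sub-job goes to the currently best-qualified
    node; a node's pending load is folded into its request time so that
    work spreads out instead of piling onto one node."""
    plan = {}
    for name, cost in subjobs:
        best = max(nodes, key=lambda n: qualification(n["req"] + n["load"],
                                                      n["ack"], n["fail"]))
        plan[name] = best["id"]
        best["load"] += cost
    return plan

nodes = [
    {"id": "A", "req": 1.0, "ack": 1.0, "fail": 0.1, "load": 0.0},
    {"id": "B", "req": 1.5, "ack": 1.0, "fail": 0.1, "load": 0.0},
]
plan = schedule_subjobs([("j1", 1.0), ("j2", 1.0), ("j3", 1.0)], nodes)
```

With these figures, node A takes the first sub-job, its growing load makes B preferable for the second, and the third returns to A.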
The advent of Big Data has brought new processing and storage challenges, which are often addressed through distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect some resources to fail during use. Load balancing and task scheduling are important steps in determining the performance of parallel applications, hence the need for load-balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load-balancing strategy at two levels: intra-scheduler load balancing, to avoid use of the large-scale communication network, and inter-scheduler load balancing, to regulate the load of the whole system. The strategy improves the average response time of CLOAK-Reduce application tasks with minimal communication. We focus on three performance indicators: response time, process latency, and running time of MapReduce tasks.
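As an illustration of the two-level idea (not the CLOAK-Reduce algorithm itself, whose details the abstract does not give), unit tasks can be shifted first among a scheduler's own workers, and only then between schedulers:

```python
def imbalance(loads):
    """Spread between the most and least loaded entries."""
    return max(loads) - min(loads)

def rebalance(loads, threshold):
    """Move unit tasks from the heaviest to the lightest entry until the
    spread is within the threshold (a unit move needs a spread >= 2 to help)."""
    loads = list(loads)
    while imbalance(loads) > threshold and imbalance(loads) >= 2:
        loads[loads.index(max(loads))] -= 1
        loads[loads.index(min(loads))] += 1
    return loads

# Level 1: intra-scheduler balancing among one scheduler's own workers,
# touching only the local network.
s1 = rebalance([8, 2, 2], threshold=1)
s2 = rebalance([1, 1, 1], threshold=1)
# Level 2: inter-scheduler balancing over per-scheduler totals, which is
# what would cross the large-scale communication network.
totals = rebalance([sum(s1), sum(s2)], threshold=2)
```

Keeping level 2 to aggregate totals is what limits traffic on the wide-area network: only the residual imbalance that local moves cannot fix is regulated globally.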
An H-K Clustering Algorithm for High Dimensional Data Using Ensemble Learning - ijitcs
Advances to traditional clustering algorithms solve problems such as the curse of dimensionality and data sparsity across multiple attributes. The traditional H-K clustering algorithm removes the randomness and a priori choice of the initial centers in K-means clustering, but applying it to high dimensional data causes the dimensional-disaster problem due to high computational complexity. Advanced clustering algorithms such as subspace and ensemble clustering improve performance on high dimensional datasets from different aspects and to different extents, yet each improves performance from only a single perspective. The objective of the proposed model is to improve the performance of traditional H-K clustering and overcome its limitations of high computational complexity and poor accuracy on high dimensional data by combining three approaches: subspace clustering and ensemble clustering with H-K clustering.
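A minimal sketch of the H-K idea itself, i.e. hierarchical (agglomerative) initialization feeding K-means so that the initial centers are deterministic rather than random; the subspace and ensemble stages of the proposed model are omitted:

```python
import math

def centroid(cluster):
    """Mean point of a cluster."""
    n = len(cluster)
    return tuple(sum(p[d] for p in cluster) / n for d in range(len(cluster[0])))

def hk_init(points, k):
    """Hierarchical step of H-K: agglomeratively merge the two closest
    clusters (single linkage) until k remain, then return their centroids
    as deterministic seeds for K-means."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # j > i, so popping j leaves i valid
    return [centroid(c) for c in clusters]

def kmeans(points, centers, iters=10):
    """Refinement step: plain Lloyd iterations from the H-K seeds."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            groups[nearest].append(p)
        centers = [centroid(g) if g else c for g, c in zip(groups, centers)]
    return centers

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = sorted(kmeans(pts, hk_init(pts, 2)))
```

The O(n^2) pairwise distance scan in the hierarchical step is exactly the computational cost that becomes prohibitive in high dimensions, which motivates the subspace/ensemble combination the abstract describes.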
An Integrated Distributed Clustering Algorithm for Large Scale WSN
S. R. Boselin Prabhu, S. Sophia, S. Arthi and K. Vetriselvi
An Efficient Connection between Statistical Software and Database Management System
Sunghae Jun
Pragmatic Approach to Component Based Software Metrics Based on Static Methods
S. Sagayaraj and M. Poovizhi
SDI System with Scalable Filtering of XML Documents for Mobile Clients
Yi Yi Myint and Hninn Aye Thant
An Easy yet Effective Method for Detecting Spatial Domain LSB Steganography
Minati Mishra and Flt. Lt. Dr. M. C. Adhikary
Minimizing the Time of Detection of Large (Probably) Prime Numbers
Dragan Vidakovic, Dusko Parezanovic and Zoran Vucetic
Design of ATL Rules for Transforming UML 2 Sequence Diagrams into Petri Nets
Elkamel Merah, Nabil Messaoudi, Dalal Bardou and Allaoua Chaoui
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Grid computing can involve many computational tasks, which require trustworthy computational nodes. Load balancing in grid computing optimizes the overall process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing, but differs from conventional distributed computing in that it tends to be heterogeneous, more loosely coupled, and geographically dispersed. Optimizing this process must include maximizing overall resource utilization, balancing the load on each processing unit, and decreasing overall completion time. Evolutionary algorithms such as genetic algorithms have been studied for implementing load balancing across grid networks, but these genetic algorithms are quite slow when a large number of tasks must be processed. In this paper we give a novel approach based on parallel genetic algorithms for enhancing the overall performance and optimization of load balancing across grid nodes.
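The abstract does not detail the parallel GA. The sketch below uses an island model, a common way to parallelize GAs, with makespan (finish time of the busiest node) as the fitness to minimize; task costs, population size, and operator rates are illustrative, and the islands run sequentially here although each could run in its own process:

```python
import random

def makespan(assignment, costs, n_nodes):
    """Fitness to minimize: finish time of the busiest node."""
    loads = [0.0] * n_nodes
    for task, node in enumerate(assignment):
        loads[node] += costs[task]
    return max(loads)

def evolve_island(pop, costs, n_nodes, generations=50, rng=None):
    """One GA island: elitist selection, one-point crossover, point mutation."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, costs, n_nodes))
        survivors = pop[: len(pop) // 2]          # elitism: best half kept
        children = []
        while len(survivors) + len(children) < len(pop):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(len(costs))
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if rng.random() < 0.2:                # point mutation
                child[rng.randrange(len(costs))] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, costs, n_nodes))

costs = [4, 3, 3, 2, 2, 2]   # illustrative task costs; total is 16, so the
n_nodes = 2                  # best possible makespan on 2 nodes is 8
seed = random.Random(7)
islands = [[[seed.randrange(n_nodes) for _ in costs] for _ in range(20)]
           for _ in range(3)]
best = min((evolve_island(pop, costs, n_nodes, rng=random.Random(i))
            for i, pop in enumerate(islands)),
           key=lambda a: makespan(a, costs, n_nodes))
```

Because islands evolve independently and only their final champions are compared, the model maps naturally onto one worker process per island, which is the speed-up the paper targets for large task counts.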
A New Architecture for Group Replication in Data Grid - Editor IJCATR
Nowadays, grid systems are a vital technology for running programs with high performance and solving large-scale problems in science, engineering, and business. In grid systems, heterogeneous computational resources and data are shared between independent, geographically scattered organizations. A data grid is a kind of grid that relates computational and storage resources. Data replication is an efficient way in a data grid to obtain high performance and high availability by saving numerous replicas in different locations, e.g., grid sites. In this research, we propose a new architecture for dynamic group data replication. We add two components to the OptorSim architecture: a Group Replication Management (GRM) component and a Management of Popular Files Group (MPFG) component. OptorSim was developed by the European DataGrid project to evaluate replication algorithms. With this architecture, a popular files group is replicated to grid sites at the end of each predefined time interval.
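A toy sketch of the interval-driven behaviour the GRM/MPFG components suggest: access counts are gathered during a window, and at the end of the interval the most popular file group is replicated to every site. The class interface and group names are invented for illustration, not taken from the paper's architecture:

```python
from collections import Counter

class GroupReplicationManager:
    """Counts file-group accesses during an interval, then replicates the
    most popular group to every grid site at the interval boundary."""
    def __init__(self, sites):
        self.sites = {s: set() for s in sites}   # site -> replicated groups
        self.access_counts = Counter()

    def record_access(self, group):
        self.access_counts[group] += 1

    def end_of_interval(self):
        """Replicate the most popular group and reset the counting window."""
        if not self.access_counts:
            return None
        popular = self.access_counts.most_common(1)[0][0]
        for replicas in self.sites.values():
            replicas.add(popular)
        self.access_counts.clear()
        return popular

grm = GroupReplicationManager(["site1", "site2"])
for g in ["climate", "climate", "genome"]:
    grm.record_access(g)
chosen = grm.end_of_interval()
```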
Data warehouses store integrated and consistent data in a subject-oriented repository dedicated especially to supporting business intelligence processes. However, keeping these repositories updated usually involves complex and time-consuming processes, commonly denominated Extract-Transform-Load (ETL) tasks. These data-intensive tasks normally execute in a limited time window, and their computational requirements tend to grow over time as more data is dealt with. Therefore, we believe that a grid environment could suit rather well as the backbone of the technical infrastructure, with the clear financial advantage of using already acquired desktop computers normally present in the organization. This article proposes a different approach to distributing ETL processes in a grid environment, taking into account not only the processing performance of its nodes but also the available bandwidth, to estimate grid availability in the near future and thereby optimize workflow distribution.
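The paper's estimation model is not given in the abstract. As a hedged sketch, nodes could be scored by a weighted blend of normalized processing performance and bandwidth, with the score decaying as tasks are placed; the 50/50 weighting and the halving penalty are invented for illustration:

```python
def availability_score(node, alpha=0.5):
    """Blend of normalized processing performance and link bandwidth.
    The 50/50 weighting is an illustrative assumption, not the paper's model."""
    return alpha * node["cpu"] + (1 - alpha) * node["bandwidth"]

def distribute_etl(tasks, nodes):
    """Assign each ETL task to the node with the best remaining score; a
    placed task halves the node's score as a crude busy-ness model."""
    plan = {}
    score = {n["id"]: availability_score(n) for n in nodes}
    for task in tasks:
        best = max(score, key=score.get)
        plan[task] = best
        score[best] *= 0.5
    return plan

# Two hypothetical organizational desktops: one CPU-strong, one well-connected.
nodes = [{"id": "desk1", "cpu": 0.9, "bandwidth": 0.4},
         {"id": "desk2", "cpu": 0.6, "bandwidth": 0.8}]
plan = distribute_etl(["extract", "transform", "load"], nodes)
```

Folding bandwidth into the score is what distinguishes this from a pure CPU ranking: a fast desktop on a slow link can lose out to a slower but well-connected one, matching the abstract's emphasis on both factors.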
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N... - ijasuc
In wireless sensor networks (WSNs) there are two main problems in employing conventional compression techniques. First, compression performance depends to a large extent on the organization of the routes. Second, the efficiency of an in-network data compression scheme is not solely determined by the compression ratio, but also by the computational and communication overheads. In compressive data aggregation, data is gathered at intermediate nodes, where its size is reduced by applying compression without losing any information. In our previous work, we developed an adaptive traffic-aware aggregation technique in which aggregation switches adaptively between structured and structure-free modes depending on the traffic load. In this paper, as an extension to that work, we provide a cost-effective compressive data gathering technique to handle the traffic load using a structured data aggregation scheme. We also design a technique that effectively reduces the computation and communication costs involved in compressive data gathering. Compressive data gathering provides compressed sensor readings that reduce global data traffic and distributes energy consumption evenly to prolong the network lifetime. Simulation results show that our proposed technique improves the delivery ratio while reducing energy consumption and delay.
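The general compressive-sensing idea behind compressive data gathering is that each link forwards a fixed number m of random projections of the readings instead of the raw values, which is what evens out per-node traffic. A sketch of the measurement step only (reconstruction needs a sparse-recovery solver and is omitted; the readings are made up):

```python
import random

def compressive_gather(readings, m, seed=0):
    """Each of the m measurements is a random +/-1-weighted sum of all n
    readings, so every link forwards m values regardless of subtree size."""
    rng = random.Random(seed)
    n = len(readings)
    phi = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]  # sensing matrix
    y = [sum(phi[i][j] * readings[j] for j in range(n)) for i in range(m)]
    return phi, y

readings = [5.0, 0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0]  # sparse: 2 of 8 nonzero
phi, y = compressive_gather(readings, m=4)
```

Because sensor readings are typically sparse in some basis, far fewer measurements than readings (m < n) can suffice for recovery at the sink, which is where the traffic saving comes from.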
The growth of the Internet of Things and wireless technology has led to enormous data generation for various applications such as healthcare, scientific, and data-intensive workloads. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, so an efficient fault tolerance mechanism is required. Data replication is an efficient fault tolerance mechanism that has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient with respect to the dynamic application requirements of cloud-based SANs, incurring latency and hence higher data transmission cost. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that helps minimize latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that BGDRP minimizes data access latency, computation time, and cost over state-of-the-art techniques.
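BGDRP's actual optimization is not described in the abstract. As a rough illustration of bipartite replica-to-site placement, a greedy cheapest-edge assignment under site capacities (replica names, site capacities, and costs are all invented; a true minimum-cost matching would use e.g. the Hungarian algorithm) might look like:

```python
def place_replicas(replicas, sites, cost):
    """Greedy sketch of bipartite replica placement: repeatedly take the
    cheapest unused (replica, site) edge, respecting site capacity."""
    edges = sorted((cost[r][s], r, s) for r in replicas for s in sites)
    placed = {}
    capacity = dict(sites)
    for c, r, s in edges:
        if r not in placed and capacity[s] > 0:
            placed[r] = s
            capacity[s] -= 1
    return placed

# cost[r][s]: e.g. expected access latency of replica r if stored at site s.
cost = {"r1": {"s1": 1, "s2": 5},
        "r2": {"s1": 2, "s2": 3}}
sites = {"s1": 1, "s2": 1}          # each site can hold one replica
plan = place_replicas(["r1", "r2"], sites, cost)
```

Here r1 claims the cheap slot on s1, and the capacity limit pushes r2 to s2, showing how the bipartite structure couples placement decisions across replicas.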
A Platform for Large-Scale Grid Data Service on Dynamic High-Performance Netw... - Tal Lavian Ph.D.
Data-intensive Grid applications often deal with multiple terabytes and even petabytes of data. For them to be effectively deployed over distances, it is crucial that Grid infrastructures learn how to best exploit high-performance networks (such as agile optical networks). The network footprint of these Grid applications shows pronounced peaks and valleys in utilization, prompting a radical overhaul of traditional network provisioning styles such as peak provisioning, point-and-click, or operator-assisted provisioning. A Grid stack must become capable of dynamically orchestrating a complex set of variables related to application requirements, data services, and network provisioning services, all within a rapidly and continually changing environment. Presented here is a platform that addresses some of these issues. This service platform closely integrates a set of large-scale data services with those for dynamic bandwidth allocation through a network resource middleware service, using an OGSA-compliant interface allowing direct access by external applications. Recently, this platform was implemented as an experimental research prototype on a unique wide-area optical networking testbed incorporating state-of-the-art photonic components. The paper, which presents initial results of research conducted on this prototype, indicates that these methods have the potential to address multiple major challenges related to data-intensive applications. Given the complexities of this topic, especially where scheduling is required, only selected aspects of this platform are considered in this paper.
FDMC: Framework for Decision Making in Cloud for Efficient Resource Management - IJECEIAES
Effective resource management is one of the critical success factors for precise virtualization in cloud computing in the presence of dynamic user demands. A review of existing research on resource management in the cloud shows that there is still large scope for enhancement: existing techniques do not fully exploit the features of virtual machines when performing resource allocation. This paper presents FDMC, a Framework for Decision Making in Cloud, that gives VMs better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster task processing and thereby withstands increasing traffic. The study outcome was compared with existing systems and shows that FDMC performs better in terms of task allocation time, number of cores wasted, amount of storage wasted, and communication cost.
Analysing Mobile Random Early Detection for Congestion Control in Mobile Ad-h... - IJECEIAES
This paper proposes and analyses a technique for congestion control in mobile ad hoc networks. The technique is based on a new hybrid approach that combines clustering and queuing. In clustering, the cluster head generally transfers the data following a queuing method based on RED (Random Early Detection); the mobile environment makes it Mobile RED (MRED), since node mobility leads to unpredictable queue sizes. The technique is simulated in Network Simulator 2 (NS2) for various scenarios, and the results are compared with the NRED (Neighbourhood Random Early Detection) queuing technique for congestion control. The results show that MRED performs comparatively better.
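Classic RED computes a drop probability from the averaged queue length: no drops below a minimum threshold, certain drop above a maximum, and a linear ramp in between. The mobility adjustment below is purely hypothetical, illustrating one way an unpredictable queue might shrink the thresholds; the abstract does not give MRED's actual rule:

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED drop probability for an averaged queue length."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def mred_thresholds(min_th, max_th, mobility):
    """Hypothetical mobile adjustment: higher node mobility means a less
    predictable queue, so start dropping earlier by shrinking both thresholds."""
    scale = 1.0 / (1.0 + mobility)
    return min_th * scale, max_th * scale

mn, mx = mred_thresholds(5.0, 15.0, mobility=1.0)
p = red_drop_probability(5.0, mn, mx, max_p=0.1)
```

At mobility 0 the thresholds reduce to plain RED, so the adjustment only changes behaviour when nodes actually move.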
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
TASK-DECOMPOSITION BASED ANOMALY DETECTION OF MASSIVE AND HIGH-VOLATILITY SES... - ijdpsjournal
The Science Information Network (SINET) is a Japanese academic backbone network for more than 800 universities and research institutions. SINET traffic is characteristically enormous and highly variable. In this paper, we present task-decomposition based anomaly detection for the massive and high-volatility session data of SINET. Three main features are discussed: task scheduling, traffic discrimination, and histogramming. We adopt a task-decomposition based dynamic scheduling method to handle SINET's massive session data stream. In the experiment, we analysed SINET traffic from 2/27 to 3/8 and detected some anomalies using LSTM-based time-series data processing.
Abstract— Cloud storage is usually a distributed infrastructure in which data is not stored on a single device but spread across several storage nodes located in different areas. To ensure data availability, some amount of redundancy has to be maintained, but introducing redundancy incurs additional costs such as extra storage space and the communication bandwidth required for restoring data blocks. Existing systems treat the storage infrastructure as homogeneous, assuming all nodes have the same online availability, which leads to efficiency losses. The proposed system considers a heterogeneous distributed storage system in which each node exhibits a different online availability. Monte Carlo sampling is used to measure the online availability of storage nodes, and a parallel version of Particle Swarm Optimization is used to assign redundant data blocks according to their availability. The optimal data assignment policy reduces redundancy and its associated cost.
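The Monte Carlo step can be sketched as sampling a node's online state at random instants and taking the hit fraction as the availability estimate. The 70%-online node below is a made-up stand-in for a real availability trace, and the PSO assignment stage is omitted:

```python
import random

def estimate_availability(is_online, samples=10_000, seed=1):
    """Monte Carlo estimate: probe the node's state at uniformly random
    (normalized) times and report the fraction of probes that found it online."""
    rng = random.Random(seed)
    hits = sum(is_online(rng.random()) for _ in range(samples))
    return hits / samples

# Hypothetical node that is online for the first 70% of the normalized day.
node_online = lambda t: t < 0.7
est = estimate_availability(node_online)
```

The standard error shrinks as 1/sqrt(samples), so 10,000 probes put the estimate within about half a percentage point of the true availability, which is the figure the block-assignment optimizer would then consume.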
The advent of Big Data has seen the emergence of new processing and storage challenges. These challenges are often solved by distributed processing. Distributed systems are inherently dynamic and unstable, so it is realistic to expect that some resources will fail during use. Load balancing and task scheduling is an important step in determining the performance of parallel applications. Hence the need to design load balancing algorithms adapted to grid computing. In this paper, we propose a dynamic and hierarchical load balancing strategy at two levels: Intrascheduler load balancing, in order to avoid the use of the large-scale communication network, and interscheduler load balancing, for a load regulation of our whole system. The strategy allows improving the average response time of CLOAK-Reduce application tasks with minimal communication. We first focus on the three performance indicators, namely response time, process latency and running time of MapReduce tasks.
A h k clustering algorithm for high dimensional data using ensemble learningijitcs
Advances made to the traditional clustering algorithms solves the various problems such as curse of
dimensionality and sparsity of data for multiple attributes. The traditional H-K clustering algorithm can
solve the randomness and apriority of the initial centers of K-means clustering algorithm. But when we
apply it to high dimensional data it causes the dimensional disaster problem due to high computational
complexity. All the advanced clustering algorithms like subspace and ensemble clustering algorithms
improve the performance for clustering high dimension dataset from different aspects in different extent.
Still these algorithms will improve the performance form a single perspective. The objective of the
proposed model is to improve the performance of traditional H-K clustering and overcome the limitations
such as high computational complexity and poor accuracy for high dimensional data by combining the
three different approaches of clustering algorithm as subspace clustering algorithm and ensemble
clustering algorithm with H-K clustering algorithm.
An Integrated Distributed Clustering Algorithm for Large Scale WSN...................................................1
S. R. Boselin Prabhu, S. Sophia, S. Arthi and K. Vetriselvi
An Efficient Connection between Statistical Software and Database Management System ................... 1
Sunghae Jun
Pragmatic Approach to Component Based Software Metrics Based on Static Methods ......................... 1
S. Sagayaraj and M. Poovizhi
SDI System with Scalable Filtering of XML Documents for Mobile Clients ............................................... 1
Yi Yi Myint and Hninn Aye Thant
An Easy yet Effective Method for Detecting Spatial Domain LSB Steganography .................................... 1
Minati Mishra and Flt. Lt. Dr. M. C. Adhikary
Minimizing the Time of Detection of Large (Probably) Prime Numbers ................................................... 1
Dragan Vidakovic, Dusko Parezanovic and Zoran Vucetic
Design of ATL Rules for TransformingUML 2 Sequence Diagrams into Petri Nets..................................... 1
Elkamel Merah, Nabil Messaoudi, Dalal Bardou and Allaoua Chaoui
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
Grid computing can involve lot of computational tasks which requires trustworthy computational nodes. Load balancing in grid computing is a technique which overall optimizes the whole process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing but different from conventional distributed computing in a manner that it tends to be heterogeneous, more loosely coupled and dispersed geographically. Optimization of this process must contains the overall maximization of resources utilization with balance load on each processing unit and also by decreasing the overall time or output. Evolutionary algorithms like genetic algorithms have studied so far for the implementation of load balancing across the grid networks. But problem with these genetic algorithm is that they are quite slow in cases where large number of tasks needs to be processed. In this paper we give a novel approach of parallel genetic algorithms for enhancing the overall performance and optimization of managing the whole process of load balancing across the grid nodes.
A New Architecture for Group Replication in Data GridEditor IJCATR
Nowadays, grid systems are vital technology for programs running with high performance and problems solving with largescale
in scientific, engineering and business. In grid systems, heterogeneous computational resources and data should be shared
between independent organizations that are scatter geographically. A data grid is a kind of grid types that make relations computational
and storage resources. Data replication is an efficient way in data grid to obtain high performance and high availability by saving
numerous replicas in different locations e.g. grid sites. In this research, we propose a new architecture for dynamic Group data
replication. In our architecture, we added two components to OptorSim architecture: Group Replication Management component
(GRM) and Management of Popular Files Group component (MPFG). OptorSim developed by European Data Grid projects for
evaluate replication algorithm. By using this architecture, popular files group will be replicated in grid sites at the end of each
predefined time interval.
Data Warehouses store integrated and consistent data in a subject-oriented data repository dedicated
especially to support business intelligence processes. However, keeping these repositories updated usually
involves complex and time-consuming processes, commonly denominated as Extract-Transform-Load tasks.
These data intensive tasks normally execute in a limited time window and their computational requirements
tend to grow in time as more data is dealt with. Therefore, we believe that a grid environment could suit
rather well as support for the backbone of the technical infrastructure with the clear financial advantage of
using already acquired desktop computers normally present in the organization. This article proposes a
different approach to deal with the distribution of ETL processes in a grid environment, taking into account
not only the processing performance of its nodes but also the existing bandwidth to estimate the grid
availability in a near future and therefore optimize workflow distribution.
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N...ijasuc
In wireless sensor network (WSN) there are two main problems in employing conventional compression
techniques. The compression performance depends on the organization of the routes for a larger extent.
The efficiency of an in-network data compression scheme is not solely determined by the compression
ratio, but also depends on the computational and communication overheads. In Compressive Data
Aggregation technique, data is gathered at some intermediate node where its size is reduced by applying
compression technique without losing any information of complete data. In our previous work, we have
developed an adaptive traffic aware aggregation technique in which the aggregation technique can be
changed into structured and structure-free adaptively, depending on the load status of the traffic. In this
paper, as an extension to our previous work, we provide a cost effective compressive data gathering
technique to enhance the traffic load, by using structured data aggregation scheme. We also design a
technique that effectively reduces the computation and communication costs involved in the compressive
data gathering process. The use of compressive data gathering process provides a compressed sensor
reading to reduce global data traffic and distributes energy consumption evenly to prolong the network
lifetime. By simulation results, we show that our proposed technique improves the delivery ratio while
reducing the energy and delay
The growth of internet of things and wireless technology has led to enormous generation of data for various application uses such as healthcare, scientific and data intensive application. Cloud based Storage Area Network (SAN) has been widely in recent time for storing and processing these data. Providing fault tolerant and continuous access to data with minimal latency and cost is challenging. For that efficient fault tolerant mechanism is required. Data replication is an efficient mechanism for providing fault tolerant mechanism that has been considered by exiting methodologies. However, data replica placement is challenging and existing method are not efficient considering application dynamic requirement of cloud based storage area network. Thus, incurring latency, due to which induce higher cost of data transmission. This work present an efficient replica placement and transmission technique using Bipartite Graph based Data Replica Placement (BGDRP) technique that aid in minimizing latency and computing cost. Performance of BGDRP is evaluated using real-time scientific application workflow. The outcome shows BGDRP technique minimize data access latency, computation time and cost over state-of-art technique.
A Platform for Large-Scale Grid Data Service on Dynamic High-Performance Netw...Tal Lavian Ph.D.
Data intensive Grid applications often deal with multiple terabytes and even petabytes of data. For them to be effectively deployed over distances, it is crucial that Grid infrastructures learn how to best exploit high-performance networks
(such as agile optical networks). The network footprint of these Grid applications show pronounced peaks and valleys in utilization, prompting for a radical overhaul of traditional network provisioning styles such as peak-provisioning, point-and-click or operator-assisted provisioning. A Grid stack must become capable to dynamically orchestrate a complex set of variables related to application requirements, data services, and network provisioning services, all within a rapidly and continually changing environment. Presented here is a platform that addresses some of these issues. This service platform closely integrates a set of large-scale data services with those for dynamic bandwidth allocation, through a network resource middleware service, using an OGSA-compliant interface allowing direct access by external applications. Recently, this platform has been implemented as an experimental research prototype on a unique wide area optical networking testbed incorporating state-of-the-art photonic
components. The paper, which presents initial results of research conducted on this prototype, indicates that these methods have the potential to address multiple major challenges related to data intensive applications. Given the complexities of this topic, especially where scheduling is required, only selected aspects of this platform are considered in this paper.
FDMC: Framework for Decision Making in Cloud for Efficient Resource Management IJECEIAES
Effective resource management is one of the critical success factors for a precise virtualization process in cloud computing in the presence of dynamic user demands. A review of existing research on resource management in the cloud shows that there is still large scope for enhancement: existing techniques do not fully exploit the features of virtual machines when performing resource allocation. This paper presents a framework called FDMC, or Framework for Decision Making in Cloud, that gives VMs better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster task processing and thereby withstands increasing traffic. The study outcome was compared with existing systems, showing that FDMC performs better in terms of task allocation time, cores wasted, storage wasted, and communication cost.
Analysing Mobile Random Early Detection for Congestion Control in Mobile Ad-h...IJECEIAES
This research paper proposes and analyses a technique for congestion control in mobile ad hoc networks. The technique is based on a new hybrid approach that combines clustering and queuing. In clustering, the cluster head transfers the data following a queuing method based on RED (Random Early Detection); the mobile environment makes it Mobile RED (MRED), since node mobility leads to an unpredictable queue size. The technique is simulated in Network Simulator 2 (NS2) for various scenarios, and the results are compared with the NRED (Neighbourhood Random Early Detection) queuing technique for congestion control. It has been observed that MRED comparatively improves the results.
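For context, the classic RED rule that MRED builds on can be sketched as below: the drop probability rises linearly between two thresholds on the averaged queue size. The threshold and `max_p` values are illustrative defaults, and MRED's mobility-aware adjustments are not shown.

```python
# Minimal sketch of the RED (Random Early Detection) drop rule.
# Parameter values are illustrative, not MRED's actual settings.

def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """No drops below min_th, forced drop above max_th,
    linearly increasing probability in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def ewma(avg, sample, weight=0.2):
    """Exponentially weighted moving average of the instantaneous queue size,
    as RED uses to smooth out bursts."""
    return (1 - weight) * avg + weight * sample
```

An average queue of 10 packets falls halfway between the thresholds, giving a drop probability of 0.05 with these defaults.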
TASK-DECOMPOSITION BASED ANOMALY DETECTION OF MASSIVE AND HIGH-VOLATILITY SES...ijdpsjournal
The Science Information Network (SINET) is a Japanese academic backbone network for more than 800 universities and research institutions. SINET traffic is characteristically enormous and highly variable. In this paper, we present a task-decomposition based anomaly detection of the massive and high-volatility session data of SINET. Three main features are discussed: task scheduling, traffic discrimination, and histogramming. We adopt a task-decomposition based dynamic scheduling method to handle the massive session data stream of SINET. In the experiment, we analysed SINET traffic from 2/27 to 3/8 and detected several anomalies using LSTM-based time-series data processing.
Cloud storage is usually a distributed infrastructure in which data is not stored on a single device but spread across several storage nodes located in different areas. To ensure data availability, some amount of redundancy has to be maintained, but redundancy brings additional costs such as extra storage space and the communication bandwidth required for restoring data blocks. Existing systems treat the storage infrastructure as homogeneous, with all nodes having the same online availability, which leads to efficiency losses. The proposed system instead treats the distributed storage system as heterogeneous, where each node exhibits a different online availability. Monte Carlo sampling is used to measure the online availability of storage nodes, and a parallel version of Particle Swarm Optimization assigns redundant data blocks according to that availability. The resulting optimal data assignment policy reduces redundancy and its associated cost.
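The Monte Carlo availability step described above can be sketched as follows: repeatedly sample whether each node is online and average the outcomes. The uptime probabilities, trial count, and function names are illustrative assumptions, and the PSO assignment stage is omitted.

```python
# Sketch of Monte Carlo estimation of per-node online availability.
# true_uptime values are illustrative; in practice samples would come
# from observed node up/down history rather than a known probability.

import random

def estimate_availability(true_uptime, trials=10000, seed=42):
    """Estimate each node's online probability by random sampling."""
    rng = random.Random(seed)
    hits = {n: 0 for n in true_uptime}
    for _ in range(trials):
        for n, p in true_uptime.items():
            if rng.random() < p:          # node observed online this trial
                hits[n] += 1
    return {n: hits[n] / trials for n in true_uptime}

est = estimate_availability({"n1": 0.9, "n2": 0.5})
```

Nodes with a higher estimated availability would then receive more of the redundant blocks in the assignment stage.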
Load balancing functionality is crucial for the best Grid performance and utilization. Accordingly, this paper presents a new meta-scheduling method called TunSys, inspired by the natural phenomena of heat propagation and thermal equilibrium. TunSys is based on a Grid polyhedron model with a sphere-like structure, used to ensure load balancing through a local neighborhood propagation strategy. Experimental results compared with FCFS, DGA, and HGA are encouraging in terms of system performance, scalability, and load-balancing efficiency.
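The heat-propagation analogy suggests a diffusion-style update in which each node exchanges load with its neighbors in proportion to the load difference, converging toward equilibrium. The sketch below illustrates that idea only; it is not TunSys's actual propagation rule, and the `alpha` step size and graph are assumptions.

```python
# Illustrative diffusion-style load balancing on a node graph, mimicking
# heat flow: each step, a node sends a fraction of its surplus to each
# less-loaded neighbor. Not TunSys's actual rule.

def diffuse(load, neighbors, alpha=0.25, steps=50):
    """load: {node: load}; neighbors: {node: [adjacent nodes]} (symmetric)."""
    load = dict(load)
    for _ in range(steps):
        delta = {n: 0.0 for n in load}
        for n, adj in neighbors.items():
            for m in adj:
                # Transfer proportional to the load difference (like heat flow);
                # the reverse direction is handled when m is visited.
                flow = alpha * (load[n] - load[m])
                if flow > 0:
                    delta[n] -= flow
                    delta[m] += flow
        for n in load:
            load[n] += delta[n]
    return load
```

On a small line graph with all the load on one node, repeated steps conserve the total while spreading it until every node holds roughly the average, the discrete analogue of thermal equilibrium.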
International Journal of Grid Computing & Applications (IJGCA)ijgca
Service-oriented computing is a popular design methodology for large-scale business computing systems. Grid computing enables the sharing of distributed computing and data resources such as processing, networking, and storage capacity to create a cohesive resource environment for executing distributed applications in service-oriented computing. Grid computing also represents a more business-oriented orchestration of relatively homogeneous and powerful distributed computing resources to optimize the execution of time-consuming processes. Grid computing has received significant and sustained research interest in the design and deployment of large-scale, high-performance computational systems in e-Science and business. The objective of the journal is to serve both as the premier venue for presenting foremost research results in the area and as a forum for introducing and exploring new concepts.
CLOUD COMPUTING – PARTITIONING ALGORITHM AND LOAD BALANCING ALGORITHM ijcseit
The tremendous usage of the internet has placed huge volumes of data on the network, and end-users must obtain the best service without compromising network performance. As the cloud provides different services on a leasing basis, many companies are migrating from their own infrastructure to the cloud; this migration should not compromise cloud performance. Cloud performance can be improved through an excellent load-balancing strategy that keeps end users satisfied. This paper describes a method by which a cloud can be partitioned, together with a comparative study of different algorithms for balancing the dynamic load. A comparison between the Ant Colony and Honey Bee algorithms shows which algorithm is optimal under normal load, while the simpler round robin algorithm is applied when partitions are in the idle state.
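The dispatch scheme described, round robin for idle partitions and a smarter strategy otherwise, might be sketched as below. The least-loaded rule stands in for the Ant Colony / Honey Bee algorithms, and the idle threshold and class names are assumptions for illustration.

```python
# Illustrative dispatcher: round robin when the partition is idle,
# a smarter strategy (stood in by least-loaded selection) under load.

import itertools

class PartitionBalancer:
    def __init__(self, nodes, idle_threshold=0.2):
        self.nodes = nodes                 # {name: current load in [0, 1]}
        self.idle_threshold = idle_threshold
        self._rr = itertools.cycle(sorted(nodes))

    def partition_idle(self):
        return all(l <= self.idle_threshold for l in self.nodes.values())

    def pick_node(self):
        if self.partition_idle():
            return next(self._rr)          # round robin in the idle state
        # Stand-in for ACO / Honey Bee: send work to the least-loaded node.
        return min(self.nodes, key=self.nodes.get)
```

An idle partition simply cycles through its nodes, while a loaded one steers each request toward the node with the most spare capacity.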
Mobile Agents based Energy Efficient Routing for Wireless Sensor Networks Eswar Publications
Energy efficiency and prolonged network lifetime are among the major concerns in wireless sensor networks. The energy consumption rate of sensor nodes can be reduced in various ways: data aggregation, result sharing, and filtration of aggregated data among sensor nodes deployed in unattended regions are some of the most researched areas in the field. Data aggregation minimizes the information transferred from source to sink to reduce network traffic and remove congestion; result sharing focuses on sharing information among agents pertinent to the tasks at hand; and filtration removes redundant information from aggregated data. Various algorithms exist for data aggregation and filtration using different mobile agents. In the proposed work, the same mobile agent performs both tasks, data aggregation and data filtration. This approach advocates the sharing of resources and reduces the energy consumption of sensor nodes.
Propose a Method to Improve Performance in Grid Environment, Using Multi-Crit...Editor IJCATR
The most important purpose of grid networks is resource sharing in a dynamic and heterogeneous environment, accessible through various methods and with mainly computational and scientific implications. To achieve grid goals and use the resources available in the grid environment, subtasks are distributed among resources and scheduled with quality of service in mind, so that maximum QoS is obtained. In this study, a method is presented that takes three parameters into account: the send and transfer time between the RMS and a resource, the processing time of a subtask on the resource, and the load of tasks queued at the resource. A multi-criteria decision is made using the TOPSIS method, and the resulting priority of the resources determines their assignment to subtasks. Finally, response time, as an efficiency parameter, is improved and optimized through this optimal assignment of resources to subtasks.
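The TOPSIS ranking step can be sketched as follows for the three cost criteria the abstract names (transfer time, processing time, queued load). The weights and resource values are illustrative; all three criteria are treated as costs, where lower is better.

```python
# Compact TOPSIS sketch: normalize, weight, measure distance to the
# ideal-best and ideal-worst alternatives, and score by relative closeness.

import math

def topsis(matrix, weights, benefit):
    """matrix: rows = alternatives, cols = criteria.
    benefit[j] is True if criterion j is 'larger is better'."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    weighted = [
        [w * v / n for v, w, n in zip(row, weights, norms)]
        for row in matrix
    ]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    scores = []
    for row in weighted:
        d_best = math.dist(row, ideal)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Three resources x (transfer time, processing time, queued load):
scores = topsis(
    [[4.0, 10.0, 3.0], [2.0, 12.0, 1.0], [5.0, 8.0, 4.0]],
    weights=[0.4, 0.4, 0.2],
    benefit=[False, False, False],
)
```

The highest-scoring resource is assigned the subtask; with these figures the second resource wins because its low transfer time and queue outweigh its slower processing.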
An advanced ensemble load balancing approach for fog computing applications IJECEIAES
Fog computing has emerged as a viable concept for expanding the capabilities of cloud computing to the periphery of the network, allowing efficient processing and analysis of data from internet of things (IoT) devices. Load balancing is essential in fog computing because it ensures optimal resource utilization and performance among distributed fog nodes. This paper proposes an ensemble-based load-balancing approach for fog computing environments. The advanced ensemble load balancing approach (AELBA) uses real-time monitoring and analysis of fog node metrics, such as resource utilization, network congestion, and service response times, to facilitate effective load distribution. These metrics are fed into a centralized load-balancing controller, which dynamically adjusts the load distribution across fog nodes based on the ensemble's collective decision-making. The performance of the proposed approach is evaluated and compared to traditional fog load-balancing techniques using extensive simulation experiments. The results demonstrate that the ensemble-based approach outperforms individual load-balancing algorithms in terms of response time, resource utilization, and scalability, and that it adapts to dynamic fog environments, providing efficient load balancing even under varying workload conditions.
An optimized cost-based data allocation model for heterogeneous distributed ...IJECEIAES
Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components, and storage helps to improve results. However, concerns such as slow response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase total system cost and complicate the data allocation procedure for a geographically dispersed setup. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. Jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is computed by summing the costs of communication, computation, and network. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation.
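The cost model described, where each job runs on the engine with the lowest execution cost and the system total sums computation, communication, and network costs, can be sketched as below. All cost figures and names are illustrative assumptions, and the swarm-intelligence reduction step is omitted.

```python
# Sketch of the cost model: cheapest-engine job assignment, then a system
# total of computation + communication + network costs. Figures illustrative.

def assign_jobs(exec_cost):
    """exec_cost[job][engine] = computation cost; pick the cheapest engine."""
    return {job: min(costs, key=costs.get) for job, costs in exec_cost.items()}

def total_cost(assignment, exec_cost, comm_cost, net_cost):
    computation = sum(exec_cost[j][e] for j, e in assignment.items())
    return computation + comm_cost + net_cost

exec_cost = {"j1": {"e1": 4, "e2": 7}, "j2": {"e1": 6, "e2": 3}}
plan = assign_jobs(exec_cost)                               # j1 -> e1, j2 -> e2
cost = total_cost(plan, exec_cost, comm_cost=2, net_cost=1) # 4 + 3 + 2 + 1 = 10
```

A swarm-intelligence optimizer would then search over alternative assignments and partitions to lower this total further.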
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORIALGORITHM FOR HANDLING VOLUMIN...acijjournal
Apriori is one of the key algorithms for generating frequent itemsets. Analysing frequent itemsets is a crucial step in analysing structured data and in finding association relationships between items, and it stands as an elementary foundation for supervised learning, which encompasses classifier and feature extraction methods. Applying this algorithm is crucial to understanding the behaviour of structured data. Most structured data in the scientific domain are voluminous, and processing such data requires state-of-the-art computing machines; setting up such an infrastructure is expensive, so a distributed environment such as a clustered setup is employed for such scenarios. Apache Hadoop is one such cluster framework for distributed environments, distributing voluminous data across a number of nodes in the framework. This paper focuses on the map/reduce design and implementation of the Apriori algorithm for structured data analysis.
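The map/reduce counting step of Apriori can be sketched on a single machine as below: each "map" counts candidate k-itemsets in its split of the transactions, and the "reduce" sums the partial counts, keeping the frequent itemsets. Hadoop-specific plumbing is omitted, and the data and support threshold are illustrative.

```python
# Single-machine sketch of map/reduce candidate counting for Apriori.

from collections import Counter
from itertools import combinations

def map_count(split, k):
    """Emit (itemset, count) pairs for all k-item candidates in one split."""
    c = Counter()
    for txn in split:
        for itemset in combinations(sorted(txn), k):
            c[itemset] += 1
    return c

def reduce_counts(partials, min_support):
    """Sum the mappers' partial counts; keep itemsets meeting min_support."""
    total = Counter()
    for p in partials:
        total.update(p)
    return {s: n for s, n in total.items() if n >= min_support}

splits = [
    [{"a", "b", "c"}, {"a", "b"}],       # split handled by mapper 1
    [{"a", "c"}, {"a", "b", "c"}],       # split handled by mapper 2
]
frequent = reduce_counts((map_count(s, 2) for s in splits), min_support=3)
```

In a real Hadoop job, each `map_count` call runs on a different node over its HDFS split, and the framework shuffles the itemset keys to the reducers.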
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading i...1crore projects
A Survey of File Replication Techniques In Grid Systems Editor IJCATR
Grid is a type of parallel and distributed systems that is designed to provide reliable access to data and computational resources in wide area networks. These resources are distributed in different geographical locations. Efficient data sharing in global networks is complicated by erratic node failure, unreliable network connectivity and limited bandwidth. Replication is a technique used in grid systems to improve the applications’ response time and to reduce the bandwidth consumption. In this paper, we present a survey on basic and new replication techniques that have been proposed by other researchers. After that, we have a full comparative study on these replication strategies.
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Cloud computing is a new computing paradigm that aims to transform computing into a utility, just as electricity was first generated at home and evolved to be supplied by a few utility providers. Load balancing here is a mapping strategy that efficiently distributes the task load across multiple computational resources in the network, based on system status, to improve performance. The objective of this research paper is to show the results of a Hybrid DEGA, in which GA is implemented after DE.
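The DE-then-GA scheme might be sketched on a toy minimization problem as follows. The sphere fitness function, population sizes, and operator parameters are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative DE-then-GA hybrid on the sphere function (minimize sum x_i^2),
# mirroring the scheme of running GA after DE. Parameters are assumptions.

import random

rng = random.Random(0)
DIM, POP = 5, 20
fitness = lambda x: sum(v * v for v in x)        # lower is better

def de_step(pop, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation with greedy per-slot selection."""
    new = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [
            a[d] + F * (b[d] - c[d]) if rng.random() < CR else x[d]
            for d in range(DIM)
        ]
        new.append(trial if fitness(trial) < fitness(x) else x)
    return new

def ga_step(pop, mut=0.1):
    """One elitist GA generation: keep best half, refill via crossover."""
    elite = sorted(pop, key=fitness)[: POP // 2]
    children = []
    while len(children) < POP - len(elite):
        p1, p2 = rng.sample(elite, 2)
        cut = rng.randrange(1, DIM)
        child = p1[:cut] + p2[cut:]
        if rng.random() < mut:
            child[rng.randrange(DIM)] += rng.gauss(0, 0.1)
        children.append(child)
    return elite + children

pop = [[rng.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(30):
    pop = de_step(pop)                   # DE phase first...
for _ in range(30):
    pop = ga_step(pop)                   # ...then GA refines the DE output
best = min(pop, key=fitness)
```

Because the GA phase is elitist, it can only preserve or improve the best solution handed over by the DE phase, which is the intent of running the two in sequence.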
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to appropriate processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is NP-Hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA II) has been estimated through a program written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, which proves the effectiveness of the model.
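The bi-objective comparison at the heart of an NSGA-II style scheduler, Pareto dominance over makespan and fault tolerance, can be sketched as below, with fault tolerance modeled as a node failure probability to minimize. The schedule values are illustrative.

```python
# Sketch of Pareto dominance and the non-dominated front for a bi-objective
# scheduler: minimize makespan and minimize failure probability.

def dominates(a, b):
    """a, b are (makespan, failure_prob); both objectives minimized.
    a dominates b if it is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(solutions):
    """Keep only schedules not dominated by any other schedule."""
    return [
        s for s in solutions
        if not any(dominates(o, s) for o in solutions if o is not s)
    ]

schedules = [
    (100, 0.05),   # fast and reliable
    (100, 0.20),   # dominated: same makespan, worse reliability
    (80, 0.30),    # faster but riskier -> non-dominated
    (120, 0.02),   # slower but safest -> non-dominated
]
front = pareto_front(schedules)
```

NSGA-II builds on exactly this relation, repeatedly sorting the population into successive non-dominated fronts before applying crowding-distance selection.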
COLLEGE BUS MANAGEMENT SYSTEM PROJECT REPORT.pdfKamal Acharya
The College Bus Management system is completely developed by Visual Basic .NET Version. The application is connect with most secured database language MS SQL Server. The application is develop by using best combination of front-end and back-end languages. The application is totally design like flat user interface. This flat user interface is more attractive user interface in 2017. The application is gives more important to the system functionality. The application is to manage the student’s details, driver’s details, bus details, bus route details, bus fees details and more. The application has only one unit for admin. The admin can manage the entire application. The admin can login into the application by using username and password of the admin. The application is develop for big and small colleges. It is more user friendly for non-computer person. Even they can easily learn how to manage the application within hours. The application is more secure by the admin. The system will give an effective output for the VB.Net and SQL Server given as input to the system. The compiled java program given as input to the system, after scanning the program will generate different reports. The application generates the report for users. The admin can view and download the report of the data. The application deliver the excel format reports. Because, excel formatted reports is very easy to understand the income and expense of the college bus. This application is mainly develop for windows operating system users. In 2017, 73% of people enterprises are using windows operating system. So the application will easily install for all the windows operating system users. The application-developed size is very low. The application consumes very low space in disk. Therefore, the user can allocate very minimum local disk space for this application.
International Journal of Computer Science & Information Technology (IJCSIT) Vol 7, No 6, December 2015
DOI: 10.5121/ijcsit.2015.7602
AN ENTROPIC OPTIMIZATION TECHNIQUE IN
HETEROGENEOUS GRID COMPUTING USING
BIONIC ALGORITHMS
Saad M. Darwish, Adel A. El-zoghabi, and Moustafa F. Ashry
Information Technology Department, Graduate Studies and Research Institute, University
of Alexandria. 163 Horreya Avenue, El-Shatby, 21526 P.O. Box 832, Alexandria, Egypt
ABSTRACT
The wide usage of the Internet and the availability of powerful computers and high-speed networks as low-cost
commodity components have deeply affected the way we use computers today: these technologies have made it
possible to use multi-owner, geographically distributed resources to address large-scale problems in many
areas such as science, engineering, and commerce. The new paradigm of Grid computing has evolved from
research on these topics. The performance and utilization of the grid depend on a complex and highly dynamic
procedure for optimally balancing the load among the available nodes. In this paper, we suggest a novel
two-dimensional figure of merit that depicts the network effects on load balance and fault tolerance
estimation to improve network utilization. The enhancement of fault tolerance is obtained by adaptively
decreasing the replication time and message cost; load balance, in turn, is improved by adaptively decreasing
the mean job response time. Finally, an analysis of the Genetic Algorithm, Ant Colony Optimization, and
Particle Swarm Optimization is conducted with regard to their solutions, issues, and improvements concerning
load balancing in computational grids. Consequently, a significant improvement in system utilization was
attained. Experimental results demonstrate that the proposed method's performance surpasses that of other
methods.
KEYWORDS
Grid Computing; Big Data; Bionic Algorithm; Load Balancing; Fault Tolerance; R-tree.
1. INTRODUCTION
Lately, the progressive expansion of large, complex, heterogeneous, and multi-dimensional data
has strained the capacity of existing data management software and hardware infrastructure.
Nowadays, many different types of data sources, such as sensors, scientific instruments, and the
Internet, contribute to this data boom. The relatively slower development of new and efficient
data models to process complex and large-scale data poses huge challenges that require immediate
attention from academia, research, and industry [1] [2]. As traditional data models, which are
basically relational in nature, cannot handle today's data needs, new data-science technologies
have given rise to the fast emergence of a wide range of non-relational data models, popularly
known today as NoSQL and/or Big Data models [3]. Grid technology has emerged as a new era of
large-scale distributed computing with a high-performance orientation. Grid resource management
is the process of identifying requirements, matching resources to applications, allocating those
resources, and scheduling and monitoring grid resources over time so that grid applications run
as efficiently as possible [4]. Resource discovery represents the first phase of resource
management, whereas scheduling and monitoring are the next steps. The scheduling process leads a
job to the appropriate resource, and the monitoring process observes the resources. Heavily
loaded resources act as senders of tasks, while lightly loaded resources act as receivers; tasks
migrate from heavily loaded nodes to lightly loaded ones. As the resources are dynamic in nature,
their load varies with changes in the grid's configuration, so the load balancing of tasks
significantly influences the grid's performance [5] [6].
Load balancing yields a low-cost, distributed scheme that balances the load among all the
processors. Effective load balancing algorithms are fundamentally important in enhancing the
global throughput of grid resources. Multiple techniques, algorithms, and policies have been
introduced for applying load balancing in grid environments. Load balancing processes can be
classified as centralized or decentralized, dynamic or static, and periodic or non-periodic [7].
Regarding fault tolerance, the main problems facing grid environments are feasibility,
reliability, scalability, and connectivity [8] [9]. A fault can be tolerated depending on its
behavior or the way it occurs. There are three different methods to achieve fault tolerance in a
system: replication, check-pointing, and scheduling/redundancy [10].
Fractal-transform-based self-similarity will remain a critical issue for the cost of data storage
and for transmission times, especially given the huge interest in images, sound, video sequences,
computer animations, and volume visualization [11]. The fractal dimension of a cloud of points
measures the inherent size of the data in a space. This method has a significant disadvantage,
namely its long computing time, which several proposed methods can reduce. The most common
approach for reducing the computational complexity is to organize the domain blocks into a tree
structure, which enables faster searching than linear search [9]. This approach reduces the order
of complexity from O(N) to O(log N). Instead of dividing space in some manner, it is also possible
to group the objects into a hierarchic organization based on a rectangular approximation of their
location. The R-tree [12] is the most popular index structure for multi-dimensional data. Each
type of R-tree can be distinctive in some aspects of performance, such as query retrieval,
insertion cost, application specificity, and so on. Accordingly, the subsequent search in the
domain pool is replaced by multi-dimensional nearest-neighbor searching, which runs in logarithmic
time.
In this paper, a new method based on a balanced tree structure overlaid on a grid network is
proposed. Furthermore, a fault-tolerant technique that exploits the replication of non-leaf nodes
to ensure tree connectivity in the presence of crashes is presented. The method is based on a
distributed R-tree combined with the entropy of each node. Consequently, all ineffectual route
paths are discarded from the pool, creating a more stable network. The proposed method improves
the performance of the network and hence improves the fault tolerance and load balancing
algorithms when applied to grid computing networks. The rest of this paper is organized as
follows: Section 2 presents a literature survey and related works. Section 3 presents the proposed
method that improves both fault tolerance and load balance in grid computing, followed by
experimental results and discussion in Section 4. In Section 5, conclusions of the present work
are outlined.
2. RELATED WORK
At the service level of the grid software infrastructure, workload and resource management are
the two main functions provided. The problem of load balancing in grid computing is handled by
assigning loads in a grid while taking into account the communication overhead incurred in
gathering load information. A load index is used as a decision factor in the process of
scheduling jobs within and among clusters. To solve this problem, a decentralized model for a
heterogeneous grid has been suggested: a collection of clusters connected by a ring topology of
Grid managers that are in charge of managing a dynamic pool of processing elements (computers or
processors) [7].
Resource scheduling for tasks involves the scheduling of a set of independent tasks [5]. Other
works have been based on scheduling technologies using economic/market-based models [6]. As job
scheduling is known to be NP-complete, the use of heuristics is an effective method that
practically matches its difficulty. Single-heuristic approaches to the problem include Local
Search, Simulated Annealing, and Tabu Search [6]. In recent years, some new bionic algorithms
have become hot research topics, such as the Genetic Algorithm (GA), Particle Swarm Optimization
(PSO), Ant Colony Optimization (ACO), the Bees Algorithm (BA), and the Artificial Fish Swarm
Algorithm (AFSA) [2]. In the Ant Colony algorithm, each submitted job issues an ant that searches
through the network to find the best node to deliver the job to. Ants deposit information as a
pheromone at each node they pass through; this eventually helps other ants find their paths more
efficiently. In the particle swarm algorithm, each node in the network individually acts as a
particle that sends or receives jobs from its neighbors to optimize its load locally, resulting
in a partially global optimization of the load within the whole network under time-constraint
limitations [17].
In [14], a hierarchical model of load distribution is introduced. It consists of a tree-based
Grid model together with three load balancing algorithms at various levels of the hierarchical
model. The load balancing strategy is a hierarchical bottom-up methodology in which the
intra-site load is balanced first, followed by intra-cluster and finally intra-grid load
balancing. To run complementary load balancing for batch jobs without specific execution
deadlines, an agent-based self-organization scheme is proposed in [18]. In [19], a combination of
intelligent agents and a multi-agent scheme is applied to both global grid load balancing and
local grid resource scheduling. Another approach, proposed in [15], is based on a fault-tolerant
hybrid load balancing strategy that reaches job assignments with optimal computing-node
utilization and minimum response time.
In this paper, a tree-based structure with domain-range entropy is proposed to reduce the
complexity of the grid computing network, exploiting its similarity to the tolerated and balanced
R-tree index structure, which can be searched in logarithmic time. This improves both the load
balancing and fault tolerance algorithms by enhancing connectivity, communication delay, and
network bandwidth usage, in addition to handling resource availability and resource
unpredictability.
3. THE PROPOSED METHOD
The optimization procedure of the complete adaptive proposed method is shown in Figure 1. The
first step is to estimate the Grid Computing Service (GCS) parameters and then map this grid
structure into a DR-tree index structure enhanced by the entropy method to reduce the completion
time of the decision maker. Finally, a threshold device selects the route path depending on two
parameters: the load balance controller and the fault tolerance controller. A migration
controller is used to improve fault tolerance, and a self-stabilizing controller is used to
improve load balance in a cumulative manner; this method builds on the one first introduced in
[20]. Three different optimization techniques are applied to the proposed system to reach the
optimum solution, namely the Genetic Algorithm, Ant Colony Optimization, and Particle Swarm
Optimization. Every user submits computing jobs with their hardware requirements to the GCS. The
GCS then replies to the user by sending the results after finishing the processing of the jobs.
In the first step, GCS estimation analyzes the network parameters by determining the three-level
top-down view of the grid computing model, following the method introduced in [7], as shown in
Figure 2. Level 0: Local Grid Manager (LGM); the network is subdivided into geographical areas,
where each LGM manages a group of Site Managers (SMs). Level 1: Site Manager (SM); every SM is
assigned the management of a dynamically configured cluster of processing elements (computers or
processors), i.e., processing elements may join or leave the cluster at any time. Level 2:
Processing Elements (PE); any public or private PC or workstation can register with any SM to
join the grid system and offer its computing resources to be exploited by the grid users. As soon
as it joins the grid, a computing element triggers the GCS system, which returns to the SM some
information about its resources, such as CPU speed.
Figure 1. The complete adaptive proposed procedure.
[Figure 1 shows the following components: Grid Computing Service Estimation; DR-Tree Index
Structure; Entropy Estimation; Fault Tolerance Estimation; Load Balance Estimation; Threshold
Device; Migration Controller; Self-Stabilizing Controller; Route Path Selector; Optimization
Techniques; and the Grid Users at both ends.]
Figure 2. Grid computing model structure.
3.1. Distributed R-tree (DR-tree)
First, we convert the tree model of grid computing nodes into a DR-tree owing to their similar
features, and then treat the new tree under the DR-tree conditions. R-trees [13] are defined as
height-balanced trees handling objects whose representation can be confined to a poly-space
rectangle. In other words, an R-tree can be described as a data structure capable of efficiently
indexing a multi-dimensional space. An R-tree has the following structural properties: the root
has from 2 to M children; every internal node has from m to M children (m ≤ M/2); and all leaves
are at the same level.
Distributed R-trees (DR-trees) extend the R-tree index structure with similar self-organized
nodes arranged in a virtual balanced tree based on semantic relations. The structure maintains
the features of the R-tree index structure: search time logarithmic in the network size and
bounded degree per node. Physical machines connected to the system are referred to as p-nodes
(short for physical nodes). A DR-tree is defined as a virtual structure distributed over a set of
p-nodes. In the following, terms related to the DR-tree take the prefix "v-"; consequently,
DR-tree nodes are denoted v-nodes. The implementation of the DR-tree proposed in [16] maintains
two distribution invariants. The main points in the composition of a DR-tree are the join/leave
procedures. A p-node creates a v-leaf as soon as it joins the system. After that, it contacts
another p-node to embed its v-leaf in the existing DR-tree. During this insertion, some v-nodes
may split, as shown in Algorithm 1.
[Figure 2 shows a high-speed network connecting the LGMs (Level 0), with SMs on each site/cluster
LAN (Level 1) and the PEs beneath them (Level 2).]
Algorithm 1
1: void onSplit(n: v-node)
2: if n is the v-root then
3: create a new v-root
4: set n.v_father = the new v-root
5: end if
6: select a subset D of m of n's v-children
7: create a new v-node n′
8: remove the children in D from n.v_children
9: set n′.v_children = D
10: set n′.v_father = n.v_father
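The split procedure can be made concrete in runnable form. The sketch below is an illustration under stated assumptions, not the authors' implementation: the `VNode` class, the choice to move the last m children into the new sibling, and the parent bookkeeping are hypothetical details the paper leaves open.

```python
class VNode:
    """A v-node of the virtual DR-tree overlay (illustrative sketch)."""
    def __init__(self, children=None, father=None):
        self.v_children = children or []
        self.v_father = father
        for c in self.v_children:
            c.v_father = self

    def is_v_root(self):
        return self.v_father is None


def on_split(n, m):
    """Split an overfull v-node n, keeping m..M children per node
    (Algorithm 1). Returns the newly created sibling v-node."""
    if n.is_v_root():
        # A splitting v-root gets a fresh v-root as its new father.
        VNode(children=[n])
    # Move a subset D of n's children (here: the last m) to a new v-node.
    d = n.v_children[-m:]
    n.v_children = n.v_children[:-m]
    new_node = VNode(children=d, father=n.v_father)
    n.v_father.v_children.append(new_node)
    return new_node
```

Splitting an overfull root first creates a fresh v-root, so the height balance and the 2-to-M root-degree property are preserved.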
3.2. Entropy
According to Shannon, entropy [11] is defined over a set of events S = {x_1, x_2, …, x_n}, where
p(x_i) = p_i represents the probability of occurrence of each event. These probabilities,
P = {p_1, p_2, …, p_n}, satisfy p_i ≥ 0 and ∑_{i=1}^{n} p_i = 1, and the entropy takes the form:

H(p_1, p_2, …, p_n) = H(S) = − ∑_{i=1}^{n} p_i log p_i (1)

Entropy can be defined as the average self-information, that is, the mean (expected) amount of
information per occurrence of an event x_i. In the context of message coding, entropy represents
the minimum bound on the average number of bits per input value. The function H has the following
lower and upper limits:

0 = H(1, 0, 0, …, 0) ≤ H(p_1, p_2, …, p_n) ≤ H(1/n, 1/n, …, 1/n) = log n (2)
Searching the pool of domain blocks is time consuming, as good approximations are obtained when
many domain blocks are allowed. This method consists of omitting the domain blocks having high
entropy from the domain pool; hence, all the unnecessary domains disappear from the pool, yielding
a more productive domain pool. This reduces the overhead of the network by decreasing the number
of searched nodes and thus improves the performance of grid computing networks. During GCS
estimation, the grid manager will trigger Algorithm 1 by picking a parameter ε, after which
Algorithm 2 is executed.
Algorithm 2
1: Initialization: choose parameter ε
2: Divide the input grid into n v-nodes
3: for (j = 1; j ≤ n; j++) {
4: H = entropy(V_Node_j)
5: if (H ≤ ε)
6: execute Algorithm 1 }
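Equations (1)-(2) and the entropy filter of Algorithm 2 can be sketched together in a few lines. This is a hedged illustration rather than the paper's code: the way a probability distribution is attached to a grid node (here, the normalized loads of its children) is an assumption, since the paper does not fix how the p_i are obtained.

```python
import math

def entropy(probs):
    """Shannon entropy H(S) = -sum(p_i * log2(p_i)), Eq. (1)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def low_entropy_nodes(node_loads, eps):
    """Algorithm 2's filter: keep a node only if its entropy is <= eps;
    high-entropy nodes are discarded from the domain pool."""
    kept = []
    for name, loads in node_loads.items():
        total = sum(loads)
        if entropy([x / total for x in loads]) <= eps:
            kept.append(name)
    return kept

# Hypothetical load profiles: a skewed profile ("A") has low entropy,
# while a uniform one ("B") attains the upper bound log2(4) = 2 of Eq. (2).
pool = {"A": [8, 1, 1], "B": [1, 1, 1, 1]}
```

With eps = 1.5, only node "A" survives the filter, mirroring how ineffectual (high-entropy) route paths are dropped from the pool.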
3.3. Optimization Techniques
A load balancing algorithm should optimize the usage of the resources available in the Grid,
whether computational or data resources, in addition to the time or cost related to these
resources. The Grid environment yields a dynamic search space, and hence this optimality
represents a partial optimal solution that improves performance. The proposed approaches optimize
the parameter ε by combining the genetic algorithm, ant colony optimization, and particle swarm
optimization, following the initialization process in Algorithm 2 above, to accomplish optimum
resource utilization. These three optimization methods were discussed in detail earlier in [21].
3.3.1. Genetic Algorithm (GA)
GA belongs to the group of Evolutionary Algorithms (EA). Evolutionary algorithms exploit the
three key principles of natural evolution: natural selection, reproduction, and species
diversity, preserved by the differences between each generation and the previous one. A genetic
algorithm usually works with a group of individuals representing possible solutions to the task.
Each individual is assessed against the desired solution, and the selection principle is applied
using a fitness criterion. The best-suited individuals create the next generation. The huge
diversity of problems in the engineering sphere, as in other fields, implies the usage of
algorithms of different types, with multiple characteristics and settings. Algorithm 3 shows the
GA procedure used to select the optimum parameter ε.
Algorithm 3
1: Initialize a population of 60 random schedules assigning individual grid-user jobs, from the
longest down to the smallest, to the fastest processors.
2: Evaluate the fitness of all individuals.
3: While the termination condition is not met do
4: Select the fitter individuals (minimum execution time) for reproduction.
5: Cross over individuals using two-point crossover.
6: Mutate individuals using a simple swap operator.
7: Evaluate the fitness of the modified individuals, keeping the relevant fitness values.
8: Generate a new population.
9: End while.
10: Initiate the authorized task sequence Si for each processor Pi.
11: Concatenate the sequences Si; a permutation sequence of tasks will be assigned to the processors.
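As a concrete illustration of Algorithm 3, the following sketch evolves job-to-processor schedules. It is an assumption-laden toy, not the authors' code: the makespan fitness, the truncation selection, and the job-size and processor-speed values are illustrative choices; the population of 60 random schedules, the two-point crossover, and the swap mutation follow steps 1, 5, and 6.

```python
import random

def makespan(schedule, job_sizes, speeds):
    """Fitness: completion time of the slowest processor (lower is better)."""
    loads = [0.0] * len(speeds)
    for job, proc in enumerate(schedule):
        loads[proc] += job_sizes[job] / speeds[proc]
    return max(loads)

def ga_schedule(job_sizes, speeds, pop_size=60, generations=100, seed=0):
    rng = random.Random(seed)
    n_jobs, n_procs = len(job_sizes), len(speeds)
    pop = [[rng.randrange(n_procs) for _ in range(n_jobs)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: makespan(s, job_sizes, speeds))
        survivors = pop[:pop_size // 2]          # selection (step 4)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            i, j = sorted(rng.sample(range(n_jobs), 2))
            child = a[:i] + b[i:j] + a[j:]       # two-point crossover (step 5)
            x, y = rng.sample(range(n_jobs), 2)  # swap mutation (step 6)
            child[x], child[y] = child[y], child[x]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: makespan(s, job_sizes, speeds))
```

Each individual is a list mapping a job index to a processor index; selection keeps the half of the population with the smallest makespan, in the spirit of step 4.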
3.3.2. Ant Colony Optimization (ACO) algorithm
Ant Colony Optimization (ACO) is a population-based metaheuristic approach for dealing with
optimization problems. ACO produces good solutions to optimization problems by redistributing
work among the nodes by means of ants. The ants traverse the grid network and leave pheromones on
the path; as soon as they reach the destination, the ants update the pheromone tables. The
pheromone trail-laying and trail-following behavior of real ants is the main inspiration for ACO.
The ants move from node to node, exploiting the information presented by the pheromone values and
thus incrementally building the resultant solution. Algorithm 4 shows the Ant Colony algorithm
procedure used to select the optimum parameter ε.
Algorithm 4
1: Input: parameters for the ACO.
2: Output: optimal solution to the problem.
3: Begin
4: Initialize the pheromone
5: While the stopping criterion is not satisfied do
6: Position each ant at a starting node
7: Repeat
8: For each ant do
9: Choose the next node by applying the state transition rule
10: End for
11: Until every ant has built a solution
12: Update the pheromone
13: End while
14: End
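A toy version of Algorithm 4, in which ants pick the node to deliver a job to, can be sketched as follows. It is illustrative only: the state-transition weights (pheromone × 1/load), the deposit rule (inversely proportional to the chosen node's load), and the evaporation rate are assumed details, since Algorithm 4 leaves them parameterized.

```python
import random

def aco_assign(loads, ants=30, rounds=20, evap=0.5, seed=0):
    """Toy ACO in the spirit of Algorithm 4: ants probabilistically pick a
    node (state transition weighted by pheromone * 1/load), then deposit
    pheromone inversely proportional to the chosen node's load."""
    rng = random.Random(seed)
    n = len(loads)
    pher = [1.0] * n
    for _ in range(rounds):
        deposits = [0.0] * n
        for _ in range(ants):
            # State transition rule: roulette wheel over pheromone/load.
            weights = [pher[i] / loads[i] for i in range(n)]
            total = sum(weights)
            r, acc, choice = rng.random() * total, 0.0, n - 1
            for i, w in enumerate(weights):
                acc += w
                if r <= acc:
                    choice = i
                    break
            deposits[choice] += 1.0 / loads[choice]
        # Pheromone update: evaporation plus this round's deposits.
        pher = [(1 - evap) * pher[i] + deposits[i] for i in range(n)]
    return max(range(n), key=lambda i: pher[i])  # best node to deliver to
```

For hypothetical loads [9.0, 2.0, 7.0], the positive feedback concentrates pheromone on the lightly loaded node at index 1.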
3.3.3. Particle Swarm Optimization (PSO) algorithm
As the Particle Swarm Optimization technique is simple and able to successfully tackle these
problems, it is applied in many optimization and search problems. PSO optimizes an objective
function by repeatedly amending a swarm of solution vectors, known as particles, using a special
memory management technique. Each particle is altered by using the memory of the individual and
the swarm's best information. The swarm regularly amends its best observed solution and converges
to an optimum by virtue of the collective intelligence of these particles. Each element of a
particle's position vector is either 0 or 1. Additionally, each particle possesses a
D-dimensional velocity vector whose elements range over [−Vmax, Vmax]. Velocities are interpreted
as probabilities that a bit will be in one state or the other. When the algorithm starts, a
collection of particles with their velocity vectors is generated randomly. The algorithm's aim is
then to obtain optimal or near-optimal solutions with respect to its predefined fitness function.
The velocity vector is amended at each time step using two best positions, pbest and nbest, and
the velocity vectors are then used to update the positions of the particles. pbest and nbest are
D-dimensional vectors with elements of 0 and 1, exactly like particle positions, and represent
the algorithm's memory. The personal best position, pbest, is the best position the particle has
visited, whereas nbest is the best position visited by the particle and its neighbors since the
first time step. When the whole swarm is the neighborhood of a particle, nbest is named the
global best (star neighborhood topology); if smaller neighborhoods are specified for each
particle (e.g., ring neighborhood topology), then nbest is called the local best. Algorithm 5
shows the Particle Swarm Optimization procedure used to select the optimum parameter ε.
Algorithm 5
1: Randomly create and initialize particles and their positions, stored in an m × n matrix, where
'm' denotes the number of nodes and 'n' denotes the number of jobs.
2: Calculate the estimated time-to-complete values for each node using the range-based matrix.
3: Calculate the fitness of each particle and search for pbest and gbest.
4: Estimate Xk, the updated position of the particle; if its value is greater than the fitness
value of pbestk, replace pbestk with Xk.
5: Replace nbest with pbest if the fitness value of nbest is smaller than that of pbest.
6: Repeat until the maximum velocity is reached.
7: If the maximum velocity is satisfied, stop.
8: Otherwise, update the position matrix and the velocity matrix.
9: End.
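The binary PSO just described can be sketched minimally. This is an illustrative reconstruction under assumptions: the sigmoid mapping from velocity to bit probability is the standard binary-PSO rule implied by "velocities are interpreted as probabilities that a bit will be in one state or the other", the star topology makes nbest the global best, and the fitness function and parameters are hypothetical.

```python
import math
import random

def binary_pso(fitness, dim, particles=20, steps=60, vmax=4.0, seed=0):
    """Minimal binary PSO: bit-vector positions, velocities clamped to
    [-vmax, vmax] and mapped through a sigmoid to bit probabilities,
    pbest/gbest memory (star topology, so nbest is the global best)."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(particles)]
    V = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(particles)]
    pbest = [x[:] for x in X]
    pfit = [fitness(x) for x in X]
    g = min(range(particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(steps):
        for i in range(particles):
            for d in range(dim):
                # Velocity pulled toward pbest and gbest (the memory).
                V[i][d] += rng.random() * (pbest[i][d] - X[i][d]) \
                         + rng.random() * (gbest[d] - X[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                # Sigmoid of velocity = probability the bit becomes 1.
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))
                X[i][d] = 1 if rng.random() < prob else 0
            f = fitness(X[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = X[i][:], f
                if f < gfit:
                    gbest, gfit = X[i][:], f
    return gbest, gfit
```

On a toy fitness that counts zero bits (so minimizing it maximizes the ones), the swarm converges toward the all-ones vector.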
3.4. Fault Tolerance
In the typical DR-tree operation [13], when a p-node fails, all its sub-trees are re-placed into
the non-faulty structure (p-node by p-node) to ensure the DR-tree invariants. The proposed
algorithm guarantees fault tolerance and also preserves the DR-tree design with its invariants by
replicating the non-leaf v-nodes. The v-node replication pattern is as follows: every p-node
holds a replica of the v-father of its top v-node, while the p-root holds no replica.
3.4.1. Fault Tolerance Estimation
This section investigates the cost of replication as presented in [9] and then applies it to the
proposed system. Let cost_s be defined as the cost of updating replicas at split time. As each
v-node has M − 1 replicas and each update costs one message, we have:

cost_s = m + 2(M − 1) (3)

A p-node joining the system may trigger between 0 and ⌈log_m N⌉ splits when adding its v-leaf to
the v-children of another v-node, denoted in the sequel the joined v-node. The latter has: between
m and M v-children; (⌈log_m N⌉ − 1) v-ancestors; and between m − 1 and M − 1 replicas. We define
hereafter an upper bound on the number of updates, assuming each v-node has M − 1 replicas. A
v-node can have m to M v-children and therefore has M − m + 1 possible numbers of v-children. A
split happens only when it has exactly M v-children. The probability for a v-node to split is p,
where:

p = 1 / (M − m + 1) (4)

The probability for a p-node to generate k splits is the probability p_k that the associated
v-node and its k − 1 first v-ancestors have exactly M v-children while its k-th ancestor does not
split. Hence:

p_k = p^k (1 − p) (5)

Let cost_r denote the average cost of replica updates when a p-node joins a DR-tree:

cost_r = (1 − p)(M − 1) + ∑_{k=1}^{⌈log_m N⌉} p_k (M − 1 + k · cost_s) (6)

The first term represents the case where no splits are produced, i.e., only the M − 1 replicas of
the joined v-node are to be updated, while the second corresponds to the other cases. We could
have distinguished the case where the v-root splits, which has a different probability since the
root has M − 1 possible numbers of v-children. However, for m > 2, this probability is smaller
than p, so an upper bound is assumed in order to simplify the calculation.
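The expected replication cost can be evaluated numerically. The sketch below assumes the formulas (3)-(6) as reconstructed above; the values of m, M, and N are hypothetical sample inputs, not values used by the authors.

```python
import math

def replication_cost(m, M, N):
    """Expected replica-update cost for a joining p-node, Eqs. (3)-(6)."""
    cost_s = m + 2 * (M - 1)            # cost of one split, Eq. (3)
    p = 1.0 / (M - m + 1)               # split probability, Eq. (4)
    kmax = math.ceil(math.log(N, m))    # at most ceil(log_m N) splits
    cost_r = (1 - p) * (M - 1)          # no-split case
    for k in range(1, kmax + 1):
        p_k = p**k * (1 - p)            # probability of k splits, Eq. (5)
        cost_r += p_k * (M - 1 + k * cost_s)
    return cost_r
```

For m = 4, M = 8, N = 1000, the split probability is p = 0.2 and the expected cost stays between 11 and 12 messages, small compared with the worst case of ⌈log_m N⌉ splits.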
3.4.2. Migration Controller
The reinsertion policy and the replication policy [16], which use internal v-nodes, are used to
evaluate DR-tree insertion operations. When the crash of one non-leaf p-node is generated for
each DR-tree, the cost of system restoration, in terms of number of messages and stabilization
time, is calculated under both the reinsertion and replication policies.
(1) Stabilization time: the reinsertion mechanism balances the system in a number of cycles
proportional to both the depth of the participation graph and the level of the crashed p-node. As
the stabilization time is the time of the longest reinsertion, it is proportional to log_m(N).
(2) The message cost of the restoration phase: the number of messages required to rebalance the
system after a non-leaf p-node crash. The costs are of different magnitudes, so with the
reinsertion policy the message distribution is much skewed, which results in a high standard
deviation.
3.5. Load Balance
As explained earlier in [14], every SM collects the information of any PE joining or leaving the
grid system and then transmits it to its parent LGM. This means that a processing element only
needs to communicate when joining or leaving its site. The system workload is balanced between
the processing elements by using the collected information to efficiently utilize the whole
system's resources and thereby minimize the response time of user jobs. By applying this policy,
not only the connectivity but also the system performance is improved. This is accomplished by
minimizing the communication overhead incurred in capturing system information before making a
load balancing decision. The following parameters are defined for the GCS model to formalize the
load balancing policy:
1. Job: every job in the system is represented by a job ID, a job size in bytes, and a number of
job instructions.
2. Processing Element Capacity (PEC_ij): the number of jobs per second that the j-th PE in the
i-th site can process at full capacity, measured assuming an average number of job instructions
and using the PE's CPU speed.
3. Site Processing Capacity (SPC_i): the number of jobs that can be processed by the i-th site
per second; it is measured by summing the PECs of all the PEs managed by the i-th site.
4. Local Grid Manager Processing Capacity (LPC): the number of jobs per second that an LGM can
process, measured by summing the SPCs of all the sites managed by that LGM.
3.5.1. Load Balance Estimation
The load balancing policy as presented in [7] is a multi-level one and can be explained at each
level of the grid architecture as follows:
A. Load Balancing At Level 0: Local Grid Manager
As mentioned earlier, the LGM maintains information about all of its responsible SMs in terms of
their processing capacities (SPCs). The LPC can be considered the total processing capacity of an
LGM, obtained by summing the SPCs of all the sites managed by that LGM. Depending on the total
processing capacity SPC_i of every site, the LGM scheduler controls the workload balance among
the group of site managers (SMs). Where N is defined as the number of jobs arriving at an LGM in
the steady state, the i-th site workload (S_iWL), i.e., the number of jobs to be allocated to the
i-th site manager, is calculated as follows:

S_iWL = N × SPC_i / LPC (7)
B. Load Balancing At Level 1: Site/Cluster Manager
Every SM stores information about the PECs of all the processing elements in its cluster. The
total site processing capacity SPC_i is the sum of the PECs of all the processing elements
included in that site. The SM scheduler uses the same policy as the LGM scheduler to balance the
load, where M is defined as the number of jobs arriving at an SM in the steady state. Under the
policy of distributing the site workload among the group of processing elements based on their
processing capacities, the throughput of every PE is maximized and its resource utilization
improved. The number of jobs to be allocated to the j-th PE of the i-th site is defined as the PE
workload (PE_ijWL), calculated as follows:

PE_ijWL = M × PEC_ij / SPC_i (8)
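Equations (7) and (8) apply the same proportional rule at two levels, which a short sketch makes explicit. The capacities and job counts below are hypothetical sample values.

```python
def proportional_allocation(total_jobs, capacities):
    """Eqs. (7)/(8): share jobs in proportion to capacity.
    Level 0: capacities are the SPC_i, whose total is the LPC.
    Level 1: capacities are the PEC_ij, whose total is the SPC_i."""
    cap_sum = sum(capacities)
    return [total_jobs * c / cap_sum for c in capacities]

# Hypothetical LGM with N = 120 jobs and three sites (SPCs in jobs/s).
site_jobs = proportional_allocation(120, [10.0, 20.0, 30.0])
# The third site (SPC = 30 of LPC = 60) then spreads its share over its PEs.
pe_jobs = proportional_allocation(site_jobs[2], [5.0, 10.0, 15.0])
```

The site holding half the total capacity receives half the jobs (60), and the same rule recursively splits that share among its PEs.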
3.5.2. Self-Stabilizing Controller
In order to calculate the mean job response time, we assume a single-LGM scenario as a simplified grid model. In this scenario, we examine the time spent by a job in the processing elements. Algorithm 6 is executed to calculate the traffic intensity ρij and hence the expected mean job response time:
Algorithm 6
1: Obtain λ, µ, where λ is the external job arrival rate from grid clients to the LGM and µ is the LGM processing capacity.
2: Calculate the system traffic intensity ρ = λ/µ. For the system to be stable, ρ must be less than 1.
3: For i = 1 to m
4: Calculate λi, µi, where λi is the job arrival rate from the LGM to the ith SM controlled by that LGM, and µi is the ith SM processing capacity.
5: Calculate the traffic intensity of the ith SM, ρi = λi/µi.
6: For j = 1 to n
7: Calculate λij, µij, where λij is the job arrival rate from the ith SM to the jth PE controlled by that SM, and µij is the processing capacity of the jth PE controlled by the ith SM.
8: Calculate the traffic intensity of the jth PE controlled by the ith SM, ρij = λij/µij.
9: End
10: End
11: Calculate the expected mean job response time, E[Tg].
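Algorithm 6 can be sketched directly in code. The rates below are illustrative assumptions; the function only performs the nested traffic-intensity calculations and stability checks described above.

```python
# Sketch of Algorithm 6: compute the traffic intensity at the LGM level,
# at every SM, and at every PE, checking the stability condition rho < 1.

def traffic_intensities(lam, mu, sm_rates, pe_rates):
    """lam, mu: LGM external arrival rate and processing capacity.
    sm_rates: list of (lambda_i, mu_i) pairs, one per SM.
    pe_rates: pe_rates[i] is a list of (lambda_ij, mu_ij) pairs for SM i."""
    rho = lam / mu                                  # step 2
    assert rho < 1, "system unstable: rho >= 1"
    rho_sm, rho_pe = [], []
    for i, (li, mi) in enumerate(sm_rates):         # steps 3-5
        rho_sm.append(li / mi)
        rho_pe.append([lij / mij for lij, mij in pe_rates[i]])  # steps 6-8
    return rho, rho_sm, rho_pe

# Assumed example: one LGM, two SMs, two PEs per SM.
rho, rho_sm, rho_pe = traffic_intensities(
    lam=90, mu=100,
    sm_rates=[(45, 60), (45, 50)],
    pe_rates=[[(22.5, 30), (22.5, 30)], [(22.5, 25), (22.5, 25)]])
```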
Jobs arrive consecutively from clients to the LGM according to a time-invariant Poisson process. The inter-arrival times are independent and identically exponentially distributed with arrival rate λ jobs/second. Simultaneous arrivals are excluded. Each PE in the dynamic site pool is represented by an M/M/1 queue. Jobs arriving at the LGM are distributed among the sites regulated by that LGM with routing probability

PrSi = SPCi / LPC.
Complying with the load balancing policy (LBP), if i is the site number, λi = λ × PrSi = λ × SPCi / LPC. Similarly, the arrivals at site i are distributed among the PEs managed by that site with routing probability PrPEij = PECij / SPCi according to the LBP, where j is the PE number and i is the site number, so that λij = λi × PrPEij = λi × PECij / SPCi. As the arrivals at the LGM follow a Poisson process, the arrivals at the PEs also follow a Poisson process.
Let us consider that the service times at the jth PE in the ith SM are exponentially distributed with fixed service rate µij jobs/second, representing the PE's processing capacity (PEC) in our load balancing policy. The service discipline is First Come, First Served. To calculate the expected mean job response time, let E[Tg] denote the mean time spent by a job in the grid at arrival rate λ, and let E[Ng] denote the total number of jobs in the system. The mean time spent by a job in the grid is then given by equation 9 as follows:
E [Ng] = λ × E [Tg] (9)
E[Ng] can be measured by adding the mean number of jobs in every PE at all grid sites. So,

E[Ng] = Σi Σj E[NPEij]

where i = 1, 2, …, m indexes the site managers handled by an LGM, j = 1, 2, …, n indexes the processing elements handled by an SM, and E[NPEij] is the mean number of jobs in processing element number j at site number i. Because every PE is represented as an M/M/1 queue,

E[NPEij] = ρij / (1 − ρij)

where ρij = λij/µij and µij = PECij for PE number j at site number i. Referring to equation 9, the expected mean job response time is given by:
E[Tg] = (1/λ) × E[Ng] = (1/λ) × Σi Σj E[NPEij] (10)
Note that the stability condition for PEij is ρij < 1.
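Equations (9) and (10) can be checked numerically. The sketch below treats each PE as an M/M/1 queue, sums E[NPEij] = ρij/(1 − ρij) over all PEs, and applies Little's law; the rates in the example are assumed values.

```python
# Expected mean job response time per equation (10): sum the mean queue
# lengths of every M/M/1 PE and divide by the external arrival rate lambda.

def mean_response_time(lam, pe_rates):
    """lam: external arrival rate; pe_rates[i] lists (lambda_ij, mu_ij) per PE."""
    n_g = 0.0
    for site in pe_rates:
        for lij, mij in site:
            rho_ij = lij / mij
            assert rho_ij < 1, "stability requires rho_ij < 1"
            n_g += rho_ij / (1 - rho_ij)   # E[N_PEij] for an M/M/1 queue
    return n_g / lam                       # Little's law: E[T_g] = E[N_g]/lambda

# One LGM, one site, one PE: lambda_ij = 5, mu_ij = 10 gives rho_ij = 0.5,
# so E[N_g] = 1 job and E[T_g] = 1/5 = 0.2 seconds.
t_g = mean_response_time(5, [[(5, 10)]])
```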
3.6. Threshold Device
A novel 2-D figure of merit is used to test the network performance; an example is shown in Figure 3. It divides the fault tolerance (FT) error – load balance (LB) error space into different performance regions as follows. The FT estimation falls into three threshold intervals (FT error is a count that depends on the grid size): (1) from 0 to 30, indicating a Good system; (2) from 30 to 60, indicating a Medium system; (3) above 60, indicating a Bad system. The LB estimation falls into three threshold intervals (LB error is measured in seconds, based on the mean job response time): (1) from 0 to 0.1, indicating a Good system; (2) from 0.1 to 0.3, indicating a Medium system; (3) above 0.3, indicating a Bad system. Figure 3 is thus divided into 9 regions: (1) GG: good for both FT and LB estimation. (2) GM: good for FT, medium for LB. (3) GB: good for FT, bad for LB. (4) MG: medium for FT, good for LB. (5) MM: medium for both. (6) MB: medium for FT, bad for LB. (7) BG: bad for FT, good for LB. (8) BM: bad for FT, medium for LB. (9) BB: bad for both FT and LB estimation.
Figure 3. The proposed 2-D Figure of Merit.
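The threshold device's region lookup can be sketched as below, using the thresholds stated above. How the boundary values (exactly 30, 60, 0.1, 0.3) are assigned is not specified in the text, so the half-open intervals here are an assumption.

```python
# Classify a (FT error, LB error) point into one of the nine regions of the
# 2-D figure of merit. Boundary handling (half-open intervals) is assumed.

def grade_ft(ft_error):
    """FT error: 0-30 Good, 30-60 Medium, above 60 Bad."""
    return "G" if ft_error < 30 else "M" if ft_error < 60 else "B"

def grade_lb(lb_error):
    """LB error in seconds: 0-0.1 Good, 0.1-0.3 Medium, above 0.3 Bad."""
    return "G" if lb_error < 0.1 else "M" if lb_error < 0.3 else "B"

def region(ft_error, lb_error):
    """Return the two-letter region label, e.g. 'GG' or 'MB'."""
    return grade_ft(ft_error) + grade_lb(lb_error)
```

For example, a system with FT error 10 and LB error 0.05 lands in region GG, while FT error 70 with LB error 0.2 lands in BM.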
Finally, to improve the fault tolerance estimation we decrease the replication time and message cost, which increases the probability of job completion. To improve the load balance estimation we decrease the mean job response time, which increases the throughput in jobs/second.
4. RESULTS AND DISCUSSION
A simulation model was built in MATLAB to evaluate the performance of the grid computing system under the proposed algorithm. The model consists of one local grid manager that manages a number of site managers, which in turn manage a number of processing elements (workstations or processors). All simulations were performed on a PC (Dual Core processor, 2.3 GHz, 2 GB RAM) running Windows 7 Professional. Figure 4 shows the comparison between the load balance of the proposed method and the old algorithm mentioned before in [4] at different arrival rates. The mean job response time was calculated for different random distributions (Exponential, Uniform, Normal and Poisson), and after many trials the same results were obtained for all distributions. The improvement ratio (gain) is shown in Figure 5; the maximum improvement ratio is 98%.
Figure 4. Load Balance Estimation Compared With The Old Algorithm.
[Plot: mean job response time in seconds versus arrival rate (400–1800 jobs/second); curves: Proposed-Alg, Old-Alg.]
Figure 5. Load Balance Improvement Ratio.
The results of the second experiment are shown in Figure 6. Here the fault tolerance estimation is proportional to the grid size, and the grid size achieved by the proposed method is better than that of the old algorithm mentioned in [7]; the values were measured at different entropy levels. The threshold device selects the best route path for the processing element from all the members of the grid and then decides which system is stable with respect to the load balance and fault tolerance policies. When this best selected route path is applied and fed back to select the best value of the entropy threshold, the load balance remains the same, confirming the stability condition of the DR-tree for the system, while the fault tolerance improves at different entropy threshold values, with a fault tolerance improvement ratio of 33%. The last experiment shows that when the entropy threshold is decreased below 80% of its value, the stability of the system is affected and the output results become inaccurate; this is shown in Figure 7. The total final improvement ratios are 98% for load balance and 33% for fault tolerance, which are very good enhancement ratios.
[Plot for Figure 5: improvement ratio of mean job response time versus arrival rate (400–1800 jobs/second).]
Figure 6. Fault Tolerance Estimation with different Entropy Levels Compared With The Old Algorithm.
Figure 7. Path Selector with 2-D Figure of Merit with different Entropy Levels Compared With The Old
Algorithm.
Finally, optimization algorithms were applied to improve system utilization, using the following parameters: number of dimensions = 20, number of particles = 64, number of iterations = 1000. Figures 8, 9 and 10 show the fitness functions of GA, ACO and PSO respectively. Comparing the three optimization algorithms with respect to system utilization, the experimental study shows that they perform almost identically, but the PSO algorithm gives the best performance, decreasing system utilization by about 75%, as shown in Figure 11.
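The paper does not give the GA/ACO/PSO implementation details, so the following is a generic PSO sketch using only the stated swarm parameters (20 dimensions, 64 particles, 1000 iterations). The sphere function stands in for the unstated utilization fitness, and the inertia weight w and acceleration coefficients c1, c2 are common textbook defaults, not values from the paper.

```python
import random

# Minimal particle swarm optimization: each particle tracks its personal best,
# and all particles are attracted toward the global best found so far.
def pso(fitness, dims=20, particles=64, iters=1000, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)                                  # reproducible runs
    pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(particles)]
    vel = [[0.0] * dims for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:                  # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                 # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in fitness: the sphere function, minimized at the origin.
sphere = lambda x: sum(v * v for v in x)
```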
[Plot for Figure 6: grid size versus arrival rate (400–1800 jobs/second); curves: No-Entropy, Entropy-100%, Entropy-90%, Entropy-80%, Entropy-70%, Old-Alg.]
[Plot for Figure 7: mean load balance estimation error in seconds versus mean fault tolerance estimation error; curves: No-Entropy, Entropy-100%, Entropy-90%, Entropy-80%, Entropy-70%, Old-Alg.]
Figure 8. Fitness function of GA.
Figure 9. Fitness function of ACO.
Figure 10. Fitness function of PSO.
Figure 11. System utilization
5. CONCLUSION
This paper proposes a new adaptive procedure based on an advanced fractal transform that enhances the tree model structure of the grid computing environment to improve the network performance parameters affected by both fault tolerance and load balance. First, the fault tolerance and load balance estimations were calculated based on the fractal transform. A simulator of the fault-tolerant load balancing approach for the grid environment was built, with one local grid manager managing 9 sites, each site having 1 to 3 processing elements, and each computing element having a queue length from 1 to 50; the results were illustrated in a 2-D figure of merit. The grid computing routing protocol is enhanced by improving both the fault tolerance and load balance estimations in this novel 2-D figure of merit. The fault tolerance estimation is improved by decreasing replication time and message cost, which increases the probability of job completion; the load balance estimation is improved by decreasing the mean job response time, which increases the throughput in jobs/second. Finally, the improvement ratios are 98% for load balance and 33% for fault tolerance, which are very good enhancement ratios. When system utilization was compared across the three optimization algorithms GA, ACO and PSO, they gave almost the same results, but PSO produced the best performance, decreasing system utilization by about 75%. Further experimentation could implement the system on a real grid computing network and study its performance.
REFERENCES
[1] S. Sharma, U. S. Tim, J. Wong, S. Gadia & S. Sharma, (2014) "A Brief Review on Leading Big Data
Models", Data Science Journal, Vol. 13, No. 0, pp. 138-157.
[2] S. Sharma, R. Shandilya, S. Patnaik & A. Mahapatra (2015a). Leading NoSQL models for handling
Big Data: a brief review, International Journal of Business Information Systems, Inderscience, Vol.
18, No. 4.
[3] S. Sharma, U. S. Tim, J. Wong, S. Gadia, R. Shandily & S. Peddoju (2015b). Classification and
Comparison of NoSQL Big Data Models, International Journal of Big Data Intelligence, Inderscience,
Vol. 2, No. 2.
[4] F. Berman, G. Fox & A. Hey, (2003) Grid Computing: Making the Global Infrastructure a Reality,
Wiley Series in Communications Networking & Distributed Systems, pp. 809-824.
[5] S. Elavarasi, J. Akilandeswari & B. Sathiyabhama, (2011) “A Survey on Partition Clustering
Algorithms”, International Journal of Enterprise Computing and Business Systems (Online)
(IJECBS), Vol. 1, Issue 1, pp. 1-14.
[6] R. Sharma, V. Soni, M. Mishra & P. Bhuyan, (2010) “A Survey of Job Scheduling and Resource
Management in Grid Computing”, World Academy of Science Engineering and Technology, Vol. 64,
pp. 461-466.
[7] S. El-Zoghdy, (2011) “A Load Balancing Policy for Heterogeneous Computational Grids”,
International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 2, No. 5, pp.
93-100.
[8] A. Kumar, R. Yadav, Ranvijay & A. Jain, (2011) “Fault Tolerance in Real Time Distributed System”,
International Journal on Computer Science and Engineering (IJCSE), Vol. 3, No. 2, pp. 933-939.
[9] M. Valero, L. Arantes, M. Potop-Butucaru & P. Sens, (2011) “Enhancing Fault Tolerance of
Distributed R-Tree”, 5th Latin-American Symposium on Dependable Computing (LADC 2011), pp.
25-34.
[10] Y. Zhao & C. Li, (2008) “Research on the Distributed Parallel Spatial Indexing Schema Based on R-
Tree”, International Archives of the Photogrammetric, Remote Sensing and Spatial Information
Sciences. Vol. XXXVII, Part B2, pp. 1113-1118.
[11] M. Hassaballah, M. Makky & Y. Mahdy, (2005) “A Fast Fractal Image Compression Method Based
Entropy”, Electronic Letters on Computer Vision and Image Analysis, Vol. 5, No. 1, pp. 30-40.
[12] V. Vaddella & R. Inampudi, (2010) “Fast Fractal Compression of Satellite and Medical Images Based
on Domain-Range Entropy”, Journal of Applied Computer Science & Mathematics, Vol. 4, No. 9, pp.
21-26.
[13] L. Balasubramanian & M. Sugumaran, (2012) "A State-of-Art in R-Tree Variants for Spatial
Indexing", International Journal of Computer Applications, Vol. 42, No. 20, pp. 35-41.
[14] B. Yagoubi & Y. Slimani, (2006) "Dynamic Load Balancing Strategy for Grid Computing", World
Academy of Science Engineering and Technology, Vol. 19, pp. 90-95.
[15] J. Balasangameshwara & N. Raju, (2012) "A Hybrid Policy for Fault Tolerant Load Balancing in Grid
Computing Environments", Journal of Network and Computer Applications (JNCA 2012), Vol. 35,
Issue 1, pp. 412-422.
[16] S. Bianchi, A. Datta, P. Felber, & M. Gradinariu, (2007) "Stabilizing Peer-to-Peer Spatial Filters",
27th International Conference on Distributed Computing Systems (ICDCS'07), pp. 27-36.
[17] R. Mukhopadhyay, D. Ghosh, & N. Mukherjee, (2010) "A Study on the Application of Existing Load
Balancing Algorithms for Large, Dynamic, Heterogeneous Distributed Systems", Recent Advances in
Software Engineering, Parallel and Distributed Systems ACM 2010, pp. 238-243.
[18] J. Cao, (2004) "Self-organizing Agents for Grid Load Balancing", Proceedings of the 5th IEEE/ACM
International Workshop on Grid Computing (GRID '04), pp. 388-395.
[19] J. Cao, D. Spooner, S. Jarvis, & G. Nudd, (2005) "Grid Load Balancing Using Intelligent Agents",
Future Generation Computer Systems, Vol. 21, Issue 1, pp. 135-149.
[20] Saad M. Darwish, Adel A. El-zoghabi, & Moustafa F. Ashry, (2015) "Improving Fault Tolerance and
Load Balancing in Heterogeneous Grid Computing Using Fractal Transform", World Academy of
Science, Engineering and Technology, International Journal of Computer and Information
Engineering Volume 2, Number 5, 2015, International Science Index Volume 2, Number 5, 2015
waset.org/abstracts/18473, International Conference on Distributed Computing and Networking
(ICDCN 2015), Tokyo, Japan, May 2015, pp. 820-840.
[21] R. Rajeswari & N. Kasthuri, (2013) "Comparative Survey on Load Balancing Techniques in Computational Grids", International Journal of Scientific & Engineering Research (IJSER), Vol. 4, Issue 9, pp. 79-92.
AUTHORS
Saad M. Darwish received his Ph.D. degree from Alexandria University, Egypt. His research and professional interests include image processing, optimization techniques, security technologies, and machine learning. He has published in journals and conferences and served on the TPC of many international conferences. Since February 2012, he has been an associate professor in the Department of Information Technology, Institute of Graduate Studies and Research, Egypt.
Adel A. El-Zoghabi received his Ph.D. degree from Alexandria University, Egypt. His research and professional interests include image processing, optimization techniques, security technologies, and machine learning. He has published in journals and conferences and served on the TPC of many international conferences. Since April 2013, he has been head of the Department of Information Technology, Institute of Graduate Studies and Research, Egypt.
Moustafa F. Ashry received his M.Sc. degree in communication engineering from the Arab Academy for Science and Technology (AASTMT), Egypt. His research and professional interests include communication networks, ad-hoc networks, routing protocols, optimization techniques, and security technologies. Since February 2012, he has been a Ph.D. student in the Department of Information Technology, Institute of Graduate Studies and Research, Egypt.