This document summarizes a research paper on partitioning and load-balancing algorithms for cloud computing. It discusses how cloud partitioning can improve load balancing and overall system performance, and reviews algorithms such as ant colony optimization and honey bee foraging. The paper presents a model for cloud partitioning based on geographic regions and provides experimental results comparing the ant colony and honey bee algorithms, finding that honey bee performs better, especially at higher load thresholds. The paper concludes that load balancing is essential for optimizing cloud resource utilization.
The tremendous growth of Internet usage has placed huge volumes of data on the network, and end users must obtain the best possible service without any compromise in network performance. Because the cloud provides different services on a leasing basis, many companies are migrating from their own infrastructure to the cloud. This migration should not degrade cloud performance, which can be maintained through an effective load-balancing strategy that keeps end users satisfied. The paper describes a method by which a cloud can be partitioned and presents a comparative study of algorithms for balancing dynamic load. The comparison between the Ant Colony and Honey Bee algorithms shows which algorithm is optimal under normal load, while the simpler round-robin algorithm is applied when partitions are in the idle state.
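The round-robin dispatch mentioned for idle partitions can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation; job and node names are hypothetical:

```python
from itertools import cycle

def round_robin_dispatch(jobs, nodes):
    """Assign each job to the next node in circular order."""
    order = cycle(nodes)
    return {job: next(order) for job in jobs}

# Four jobs spread over three nodes: the fourth wraps back to n1.
assignment = round_robin_dispatch(["j1", "j2", "j3", "j4"], ["n1", "n2", "n3"])
```

Round robin ignores current load entirely, which is why it is reserved for the idle state, where every node is equally good.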
A Novel Switch Mechanism for Load Balancing in Public Cloud (IJMER)
In a cloud computing environment, one of the core design principles is dynamic scalability, which guarantees that a cloud storage service can handle growing amounts of application data flexibly or be readily enlarged. By integrating several private and public cloud services, hybrid clouds can effectively provide dynamic scalability of services and data migration. Load balancing is a method of dividing computing load among numerous hardware resources. Because job arrival patterns are unpredictable and node capacities in the cloud differ, load control is crucial for improving system performance and maintainability. This paper presents a switch mechanism for load balancing in cloud computing. The load-balancing model in this work targets a public cloud with numerous nodes and distributed computing resources spread across many geographical areas; the model therefore divides the public cloud environment into several cloud partitions. When the cloud environment is very large and complex, these divisions simplify load balancing. A main controller chooses a suitable partition for each arriving job, while the balancer of each cloud partition chooses the best load-balancing strategy.
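The main controller's partition choice can be sketched as a load-status check. The threshold value and partition names below are illustrative assumptions, not values from the paper:

```python
def choose_partition(partitions, threshold=0.8):
    """Pick the least-loaded partition whose load ratio is below the
    threshold; return None if every partition is overloaded, in which
    case the job would wait in a queue."""
    candidates = [p for p in partitions if p["load"] < threshold]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p["load"])["name"]

parts = [
    {"name": "us-east", "load": 0.92},   # over threshold, skipped
    {"name": "eu-west", "load": 0.35},   # lightest eligible partition
    {"name": "ap-south", "load": 0.61},
]
best = choose_partition(parts)
```

The per-partition balancer would then apply its own strategy to pick a node inside the chosen partition.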
Public Cloud Partition Using Load Status Evaluation and Cloud Division Rules (IJSRD)
With the growth of cloud computing, load balancing has an important impact on performance, and cloud computing efficiency depends on a good load balancer. Different situations call for different partitioning strategies for the public cloud. This paper partitions the public cloud using two mechanisms: load status evaluation and cloud division rules. Load status is evaluated by measuring the number of cloudlets arriving at a datacenter, while cloud division rules are based on the geographical location each cloudlet comes from. Partitioning the public cloud by geographical location improves load-balancing performance. The proposed system is implemented with the CloudSim 3.0 simulator.
Abstract— Cloud storage is usually a distributed infrastructure in which data is not stored on a single device but is spread across several storage nodes located in different areas. To ensure data availability, some amount of redundancy has to be maintained, but this redundancy introduces additional costs such as extra storage space and the communication bandwidth required for restoring data blocks. Existing systems treat the storage infrastructure as homogeneous, assuming all nodes have the same online availability, which leads to efficiency losses. The proposed system instead treats the distributed storage system as heterogeneous, with each node exhibiting different online availability. Monte Carlo sampling is used to measure the online availability of storage nodes, and a parallel version of Particle Swarm Optimization is used to assign redundant data blocks according to that availability. The resulting data assignment policy reduces redundancy and its associated cost.
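The Monte Carlo availability estimate in the abstract above amounts to probing a node at random times and taking the empirical hit rate. A toy sketch under that assumption (the probe function and trial count are hypothetical):

```python
import random

def estimate_availability(is_online, trials=10000, seed=42):
    """Estimate a node's online availability by Monte Carlo sampling:
    probe the node at `trials` random instants and return the empirical
    fraction of probes that found it online."""
    rng = random.Random(seed)
    hits = sum(is_online(rng.random()) for _ in range(trials))
    return hits / trials

# Hypothetical node that is online for 70% of the (normalized) day.
avail = estimate_availability(lambda t: t < 0.7)
```

A block-assignment optimizer such as PSO would then favor placing redundant blocks on nodes with higher estimated availability.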
Energy efficiency in virtual machines allocation for cloud data centers with ... (IJECEIAES)
Energy usage in data centers is a challenging and complex issue because computing applications and data are growing so quickly that ever-larger servers and disks are needed to process them within the required time. Many approaches to virtual machine placement have been proposed in recent years. This study proposes a new approach to allocating virtual machines to physical hosts that both minimizes the number of active physical hosts and avoids SLA violations. Compared with the other algorithms, the proposed method achieves better results.
UnaCloud is an opportunistic cloud infrastructure (IaaS) that provides on-demand computing capabilities using commodity desktops. Although UnaCloud tries to maximize the use of idle resources for deploying virtual machines, it does not use energy-efficient resource-allocation algorithms. In this paper, we design and implement several energy-aware techniques that operate in an energy-efficient way while still guaranteeing performance to users. Performance tests with different algorithms and scenarios, using real trace workloads from UnaCloud, show how different policies can change energy consumption patterns and reduce energy consumption in opportunistic cloud infrastructures. The results show that some algorithms can reduce energy consumption by up to 30% beyond the savings already earned by the opportunistic environment.
Data Dissemination in Wireless Sensor Networks: A State-of-the-Art Survey (CSCJournals)
A wireless sensor network is a network of tiny nodes with wireless sensing capacity for data collection, processing, and communication with the base station. This paper discusses the overall mechanism of data dissemination, from data collection at the sensor nodes, through clustering of sensor nodes and data aggregation at the cluster heads, to disseminating data to the base station. The overall motive of the paper is to conserve energy so that the lifetime of the network is extended; it highlights existing algorithms and open research gaps in efficient data dissemination.
WIRELESS SENSOR NETWORK CLUSTERING USING PARTICLE SWARM OPTIMIZATION FOR RED... (IJMIT JOURNAL)
A wireless sensor network (WSN) is composed of a large number of small nodes with limited functionality. The most important issue in this type of network is its energy constraints, and among the many solutions studied in this area, clustering is one of the most effective. The goal of clustering is to divide the network into sections, each of which has a cluster head (CH) that undertakes data collection, aggregation, and transmission to the base station. In this paper, we introduce a new approach for clustering sensor networks based on the Particle Swarm Optimization (PSO) algorithm with an optimized fitness function that aims to extend network lifetime. The parameters used in this algorithm are residual energy density, the distance from the base station, and the intra-cluster distance from the cluster head. Simulation results show that the proposed method is more effective than protocols such as LEACH, CHEF, and PSO-MV in terms of network lifetime and energy consumption.
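A PSO fitness function of the kind described above typically combines the three stated parameters into one score. A toy version is sketched below; the weights and the exact combination are illustrative assumptions, not the paper's formula:

```python
import math

def cluster_fitness(heads, members, base_station, w=(0.4, 0.3, 0.3)):
    """Score a candidate set of cluster heads: reward residual energy,
    penalize total distance to the base station and total intra-cluster
    distance (each member charged to its nearest head)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    energy = sum(h["energy"] for h in heads)
    bs_dist = sum(dist(h["pos"], base_station) for h in heads)
    intra = sum(min(dist(m, h["pos"]) for h in heads) for m in members)
    return w[0] * energy - w[1] * bs_dist - w[2] * intra
```

In a full PSO run, each particle encodes one candidate head set and the swarm moves toward head sets with higher fitness.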
ITA: The Improved Throttled Algorithm of Load Balancing on Cloud Computing (IJCNCJournal)
Cloud computing has made the information technology industry boom; it is a great solution for businesses that want to save costs while ensuring quality of service. One of the key factors in successful cloud computing is the load-balancing technique used in the load balancer to minimize time costs and optimize costs economically. This paper proposes an algorithm that improves task processing time and thereby enhances load-balancing capacity in cloud computing. The algorithm, named the Improved Throttled Algorithm (ITA), is an improvement of the Throttled Algorithm. The paper uses the Cloud Analyst tool for simulation, comparing ITA against Equally Load, Round Robin, Throttled, and TMA. The simulation results show that ITA improves task processing time and request handling time and reduces datacenter cost compared to these popular algorithms. The improvement comes from selecting available virtual machines from an index table in order of priority, which keeps response and processing times stable, limits idle resources, and minimizes cloud costs compared with the selected algorithms.
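The index-table selection that ITA relies on can be sketched as a priority-ordered scan for the first available VM. This is a schematic reading of the abstract, not the published pseudocode; table entries are hypothetical:

```python
def pick_vm(index_table):
    """Scan a priority-ordered index table of (vm_id, available) pairs
    and return the id of the first available VM; None means every VM is
    busy and the request must queue."""
    for vm_id, available in index_table:
        if available:
            return vm_id
    return None

table = [("vm0", False), ("vm1", True), ("vm2", True)]
chosen = pick_vm(table)  # vm1: first available entry in priority order
```

Keeping the table sorted by priority is what makes selection deterministic and keeps response times stable.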
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N... (ijasuc)
In wireless sensor networks (WSNs) there are two main problems in employing conventional compression techniques: compression performance depends to a large extent on how routes are organized, and the efficiency of an in-network compression scheme is determined not only by the compression ratio but also by the computational and communication overheads. In compressive data aggregation, data is gathered at intermediate nodes where its size is reduced by compression without losing any information from the complete data. In our previous work, we developed an adaptive traffic-aware aggregation technique in which aggregation can switch adaptively between structured and structure-free modes depending on traffic load. In this paper, as an extension of that work, we provide a cost-effective compressive data-gathering technique to handle the traffic load, using a structured data aggregation scheme, and we design a technique that effectively reduces the computation and communication costs involved in compressive data gathering. Compressive data gathering provides compressed sensor readings that reduce global data traffic and distribute energy consumption evenly to prolong network lifetime. Simulation results show that the proposed technique improves the delivery ratio while reducing energy and delay.
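Classic compressive data gathering has each sensor fold its reading, weighted by pseudo-random coefficients, into a fixed number of running sums, so every node forwards the same amount of data regardless of network size. The sketch below shows that accumulation step only (sparse reconstruction at the sink is omitted); the ±1 weights and parameters are illustrative assumptions, not this paper's scheme:

```python
import random

def compressive_gather(readings, m, seed=0):
    """Accumulate n sensor readings into m compressed measurements:
    each reading x_i is multiplied by a pseudo-random +/-1 weight and
    added to each of the m running sums carried along the route."""
    rng = random.Random(seed)
    sums = [0.0] * m
    for x in readings:
        for j in range(m):
            sums[j] += rng.choice((-1.0, 1.0)) * x
    return sums

# Four sensors, three measurements: traffic is m = 3 values per hop,
# independent of the number of sensors upstream.
measurements = compressive_gather([1.0, 2.0, 3.0, 0.0], m=3)
```

Sharing the seed lets the sink regenerate the same weight matrix and recover the sparse reading vector.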
ADVANCED DIFFUSION APPROACH TO DYNAMIC LOAD-BALANCING FOR CLOUD STORAGE (ijdpsjournal)
Load balancing has become a critical function in cloud storage systems, which consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate and performance of any load-balancing algorithm deteriorate as the number of nodes, the diameter of the network, and the communication overhead increase. This paper therefore presents an approach that aims to scale the system out, not up: the system can be expanded by adding more nodes, without increasing the power of each node, while overall performance still increases. Our proposal also improves performance not only by considering the parameters that affect algorithm performance but also by simplifying the structure of the network that executes the algorithm. The proposal was evaluated through mathematical analysis as well as computer simulations and compared with the centralized approach and the original diffusion technique; results show that it outperforms both in terms of throughput and response time. Finally, we prove that the proposal converges to a state of equilibrium in which the loads of all in-domain nodes are balanced, since each node receives an amount of load proportional to its capacity. We therefore conclude that the approach is fair and simple, and that no node is privileged.
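A single round of the diffusion technique this paper builds on can be sketched as follows. For simplicity the sketch assumes equal node capacities and a synchronous round; the diffusion coefficient alpha is a hypothetical parameter:

```python
def diffusion_step(loads, neighbors, alpha=0.5):
    """One synchronous diffusion round: each node pushes a fraction of
    its load surplus toward each lighter neighbor. Total load is
    conserved, and with equal capacities repeated rounds converge to
    the uniform (equilibrium) load."""
    new = dict(loads)
    for u, nbrs in neighbors.items():
        for v in nbrs:
            if loads[u] > loads[v]:
                flow = alpha * (loads[u] - loads[v]) / len(nbrs)
                new[u] -= flow
                new[v] += flow
    return new

# Two neighboring nodes with loads 10 and 0 equalize in one round here.
loads = {"a": 10.0, "b": 0.0}
neighbors = {"a": ["b"], "b": ["a"]}
balanced = diffusion_step(loads, neighbors)
```

For heterogeneous capacities, as in the paper, the flow would instead be scaled so each node's equilibrium load is proportional to its capacity.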
Data gathering in wireless sensor networks using intermediate nodes (IJCNCJournal)
Energy consumption is an essential concern in Wireless Sensor Networks (WSNs). A major cause of energy consumption in WSNs is data aggregation: the process of collecting data from sensor nodes and transmitting it to the sink node or base station. An effective way to perform this task is clustering, in which nodes are grouped into clusters and a number of nodes, called cluster heads, are responsible for gathering data from the other nodes, aggregating it, and transmitting it to the Base Station (BS).
In this paper we present a new algorithm focused on reducing the transmission path between sensor nodes and cluster heads. Compared with the well-known LEACH-C algorithm, this technique achieves better utilization and conservation of the available power resources.
Every cluster comprises a leader known as the cluster head, which may be chosen by the sensor nodes in the individual cluster or pre-assigned by the user. The main advantages of clustering are the transmission of aggregated data to the base station, scalability to huge numbers of nodes, and reduced energy consumption. Fundamentally, clustering can be classified into centralized, distributed, and hybrid clustering. In centralized clustering, the cluster head is fixed and the rest of the nodes in the cluster act as member nodes. In distributed clustering, the cluster head is not fixed; the role keeps shifting from node to node within the cluster on the basis of some parameters. Hybrid clustering combines the centralized and distributed mechanisms. This paper gives a brief overview of the clustering process in wireless sensor networks, surveys the well-evaluated distributed clustering algorithm Low Energy Adaptive Clustering Hierarchy (LEACH) and its successors, and proposes a hybrid distributed clustering model that overcomes the drawbacks of these existing algorithms to attain energy efficiency on a larger scale.
Energy Aware Clustering Protocol (EACP) (IJCNCJournal)
Energy saving to prolong network life is an important design issue when developing a new routing protocol for a wireless sensor network, and clustering is a key technique for maximizing network lifetime and scalability. Most routing and data dissemination protocols for WSNs assume a homogeneous network architecture, in which all sensors have the same capabilities in terms of battery power, communication, sensing, storage, and processing. Recently, there has been growing interest in heterogeneous sensor networks, especially for real deployments. This research paper proposes a new energy-aware clustering protocol (EACP) for heterogeneous wireless sensor networks. Heterogeneity is introduced in EACP by using two types of nodes: normal and advanced. Cluster heads for normal nodes are elected with a probability scheme based on the residual and average energy of the normal nodes, which ensures that only normal nodes with high residual energy can become cluster heads in a round. Advanced nodes use a separate probability-based scheme for cluster head election, and when they are not serving as cluster heads they act as gateways for normal cluster heads and relay their data load to the base station. Finally, a sleep state is suggested for some sensor nodes during the cluster formation phase to save network energy. The performance of EACP is compared with SEP, and simulation results show a better stability period, network life, and energy saving than SEP.
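An energy-weighted election scheme of the kind EACP describes for normal nodes is commonly formulated as a LEACH-style probability scaled by a node's residual energy relative to the round's average. The sketch below uses that common formulation; the paper's exact expression may differ:

```python
def election_probability(p_opt, residual, average):
    """Probability that a normal node elects itself cluster head this
    round: the optimal head fraction p_opt, weighted by the node's
    residual energy relative to the network's average energy, capped
    at 1. Nodes above average are proportionally more likely to win."""
    if average <= 0.0:
        return 0.0
    return min(1.0, p_opt * residual / average)

# A node with twice the average energy is twice as likely to be elected.
```

This is what ensures that only nodes with high residual energy tend to become cluster heads in any given round.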
LOAD BALANCING AND ENERGY EFFICIENCY IN WSN BY CLUSTER JOINING METHOD (IAEME Publication)
In any WSN, the life of the network depends on the life of its sensor nodes, so proper load balancing is very useful for improving network lifetime. Tree-based routing protocols such as GSTEB use dynamic tree structures for routing without forming any clusters, but for larger networks this scheme is not always feasible. The proposed work uses a cluster-based routing method. The cluster head is selected so that it is close to the base station and has the maximum residual energy among the nodes selected for cluster formation. Cluster size is controlled with a location-based cluster-joining method: nodes select their nearest cluster head based on the signal strength received from the cluster head and the distance between node and cluster head. A node joins the head with the highest signal strength that is closest to the base station, which limits cluster size and reduces extra energy consumption. In addition, the cluster formation process starts only when data becomes available due to an event. The proposed protocol therefore performs better than existing tree-based protocols such as GSTEB in terms of energy efficiency.
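The cluster-joining rule just described can be sketched as a two-key selection: strongest received signal first, proximity to the base station as the tiebreaker. Field names and the RSSI values below are illustrative assumptions:

```python
def choose_cluster_head(heads):
    """Join the cluster head with the strongest received signal (RSSI,
    in dBm, higher is stronger); among equally strong heads, prefer the
    one closer to the base station."""
    return max(heads, key=lambda h: (h["rssi"], -h["bs_dist"]))["id"]

heads = [
    {"id": "h1", "rssi": -60, "bs_dist": 40},
    {"id": "h2", "rssi": -55, "bs_dist": 80},
    {"id": "h3", "rssi": -55, "bs_dist": 30},
]
joined = choose_cluster_head(heads)  # h3: strongest signal, nearer BS
```

Because strong signal correlates with short range, this rule naturally keeps clusters small, which is the stated energy-saving mechanism.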
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers, and various other network devices. There is thus a need for techniques that achieve green networking in computer architecture and lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits: saving energy cuts costs and lowers emissions of greenhouse gases, which are among the major threats to the environment. 'Greening', as the name suggests, is the process of constructing a network architecture so as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review paper: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
The wide usage of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components have had a deep impact on the way we use computers today: these technologies have made it possible to use multi-owner, geographically distributed resources to address large-scale problems in areas such as science, engineering, and commerce. The paradigm of Grid computing evolved from research on these topics. Grid performance and utilization depend on a complex and highly dynamic procedure of optimally balancing load among the available nodes. In this paper, we suggest a novel two-dimensional figure of merit that captures network effects on load balance and fault-tolerance estimation to improve network utilization. Fault tolerance is enhanced by adaptively decreasing replication time and message cost, while load balance is improved by adaptively decreasing mean job response time. Finally, Genetic Algorithm, Ant Colony Optimization, and Particle Swarm Optimization are analyzed with regard to their solutions, issues, and improvements concerning load balancing in computational grids. A significant improvement in system utilization was attained, and experimental results demonstrate that the proposed method's performance surpasses that of other methods.
UnaCloud is an opportunistic based cloud infrastructure
(IaaS) that allows to access on-demand computing
capabilities using commodity desktops. Although UnaCloud
tried to maximize the use of idle resources to deploy virtual
machines on them, it does not use energy-efficient resource
allocation algorithms. In this paper, we design and implement
different energy-aware techniques to operate in an energyefficient
way and at the same time guarantee the performance
to the users. Performance tests with different algorithms and
scenarios using real trace workloads from UnaCloud, show how
different policies can change the energy consumption patterns
and reduce the energy consumption in opportunistic cloud
infrastructures. The results show that some algorithms can
reduce the energy-consumption power up to 30% over the
percentage earned by opportunistic environment.
Data Dissemination in Wireless Sensor Networks: A State-of-the Art SurveyCSCJournals
A wireless sensor network is a network of tiny nodes with wireless sensing capacity for data collection processing and further communicating with the Base Station this paper discusses the overall mechanism of data dissemination right from data collection at the sensor nodes, clustering of sensor nodes, data aggregation at the cluster heads and disseminating data to the Base Station the overall motive of the paper is to conserve energy so that lifetime of the network is extended this paper highlights the existing algorithms and open research gaps in efficient data dissemination.
WIRELESS SENSOR NETWORK CLUSTERING USING PARTICLES SWARM OPTIMIZATION FOR RED...IJMIT JOURNAL
Wireless sensor networks (WSN) is composed of a large number of small nodes with limited functionality. The most important issue in this type of networks is energy constraints. In this area several researches have been done from which clustering is one of the most effective solutions. The goal of clustering is to divide network into sections each of which has a cluster head (CH). The task of cluster heads collection, data aggregation and transmission to the base station is undertaken. In this paper, we introduce a new approach for clustering sensor networks based on Particle Swarm Optimization (PSO) algorithm using the optimal fitness function, which aims to extend network lifetime. The parameters used in this algorithm are residual energy density, the distance from the base station, intra-cluster distance from the cluster head. Simulation results show that the proposed method is more effective compared to protocols such as (LEACH, CHEF, PSO-MV) in terms of network lifetime and energy consumption.
ITA: The Improved Throttled Algorithm of Load Balancing on Cloud ComputingIJCNCJournal
Cloud computing makes the information technology industry boom. It is a great solution for businesses who want to save costs while ensuring the quality of service. One of the key issues that make cloud computing successful is the load balancing technique used in the load balancer to minimize time costs and optimize costs economically. This paper proposes an algorithm to enhance the processing time of tasks so that it can help improve the load balancing capacity on cloud computing. This algorithm, named as Improved Throttled Algorithm (ITA), is an improvement of Throttled Algorithm. The paper uses the Cloud Analyst tool to simulate. The selected algorithms are used to compare: Equally Load, Round Robin, Throttled and TMA. The simulation results show that the proposed algorithm ITA has improved the processing time of tasks, time spent processing requests and reduced the cost of Datacenters compared to the selected popular algorithms as above. The improvement of ITA is because of selecting virtual machines in an index table that is available but in order of priority. It helps response times and processing times remain stable, limits the idling resources, and cloud costs are minimized compared to selected algorithms.
A COST EFFECTIVE COMPRESSIVE DATA AGGREGATION TECHNIQUE FOR WIRELESS SENSOR N... (ijasuc)
In wireless sensor networks (WSNs), there are two main problems in employing conventional compression techniques: the compression performance depends to a large extent on the organization of the routes, and the efficiency of an in-network data compression scheme is determined not solely by the compression ratio but also by the computational and communication overheads. In the compressive data aggregation technique, data is gathered at some intermediate node, where its size is reduced by applying a compression technique without losing any information from the complete data. In our previous work, we developed an adaptive traffic-aware aggregation technique in which the aggregation can switch adaptively between structured and structure-free modes, depending on the load status of the traffic. In this paper, as an extension to our previous work, we provide a cost-effective compressive data gathering technique to manage the traffic load, using a structured data aggregation scheme. We also design a technique that effectively reduces the computation and communication costs involved in the compressive data gathering process. The compressive data gathering process provides compressed sensor readings to reduce global data traffic and distributes energy consumption evenly to prolong the network lifetime. Simulation results show that our proposed technique improves the delivery ratio while reducing energy consumption and delay.
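The core compressive-gathering idea, forwarding m random projections of n readings instead of the raw values, can be illustrated with a small sketch. The Gaussian measurement matrix is an assumption for illustration; real recovery would need a sparse reconstruction step that is not shown here.

```python
import random

def compressive_gather(readings, m, seed=0):
    """Toy compressive data gathering: a relay forwards m random
    projections y = Phi @ x of the n sensor readings instead of all
    n raw values (m < n), cutting per-hop traffic from n to m.
    Recovery at the sink would use sparse reconstruction (not shown)."""
    rng = random.Random(seed)
    n = len(readings)
    # Hypothetical dense Gaussian measurement matrix Phi (m x n).
    phi = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    return [sum(phi[i][j] * readings[j] for j in range(n)) for i in range(m)]
```

With n = 10 readings and m = 4 measurements, each relay transmits 4 values instead of 10, which is the traffic reduction the abstract refers to.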
Mobile Data Gathering with Load Balanced Clustering and Dual Data Uploading i... (1 Crore Projects)
ADVANCED DIFFUSION APPROACH TO DYNAMIC LOAD-BALANCING FOR CLOUD STORAGE (ijdps journal)
Load-balancing techniques have become a critical function in cloud storage systems, which consist of complex heterogeneous networks of nodes with different capacities. However, the convergence rate of any load-balancing algorithm, as well as its performance, deteriorates as the number of nodes in the system, the diameter of the network, and the communication overhead increase. Therefore, this paper presents an approach that aims at scaling the system out, not up: allowing the system to be expanded by adding more nodes, without the need to increase the power of each node, while at the same time increasing the overall performance of the system. Our proposal also aims at improving performance not only by considering the parameters that affect the algorithm's performance but also by simplifying the structure of the network that executes the algorithm. The proposal was evaluated through mathematical analysis as well as computer simulations, and it was compared with the centralized approach and the original diffusion technique. Results show that our solution outperforms them in terms of throughput and response time. Finally, we proved that our proposal converges to a state of equilibrium in which the loads on all in-domain nodes are equalized, since each node receives an amount of load proportional to its capacity. We therefore conclude that this approach is fair and simple, and that no node is privileged.
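The classical diffusion iteration that this work builds on can be sketched for equal-capacity nodes: each node repeatedly exchanges a fraction of the load difference with its neighbors, and loads converge to the common average. The step size and topology below are illustrative, not the paper's advanced scheme.

```python
def diffuse(load, neighbors, alpha=0.25, steps=200):
    """Synchronous diffusion load balancing on an undirected graph:
    l_i(t+1) = l_i(t) + alpha * sum_j (l_j(t) - l_i(t)) over neighbors j.
    With alpha * max_degree < 1 the loads converge to the average while
    the total load is conserved (equal-capacity nodes assumed)."""
    load = list(load)
    for _ in range(steps):
        # Build the new state from the old one (synchronous update).
        load = [
            load[i] + sum(alpha * (load[j] - load[i]) for j in neighbors[i])
            for i in range(len(load))
        ]
    return load
```

On a 4-node ring with all load initially on one node, the iteration drives every node toward the average load while the sum stays constant, which matches the equilibrium property claimed in the abstract.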
Data gathering in wireless sensor networks using intermediate nodes (IJCNC Journal)
Energy consumption is an essential concern in Wireless Sensor Networks (WSNs). The major cause of energy consumption in WSNs is data aggregation. Data aggregation is a process of collecting data from sensor nodes and transmitting these data to the sink node or base station. An effective way to perform such a task is by using clustering. In clustering, nodes are grouped into clusters where a number of nodes, called cluster heads, are responsible for gathering data from other nodes, aggregating them, and transmitting them to the Base Station (BS). In this paper we present a new algorithm focused on reducing the transmission path between sensor nodes and cluster heads. Proper utilization and conservation of the available power resources are achieved with this technique compared to the well-known LEACH-C algorithm.
Every cluster comprises a leader, known as the cluster head. The cluster head is either chosen by the sensor nodes in the individual cluster or pre-assigned by the user. The main advantages of clustering are the transmission of aggregated data to the base station, scalability for a huge number of nodes, and reduced energy consumption. Fundamentally, clustering can be classified into centralized clustering, distributed clustering, and hybrid clustering. In centralized clustering, the cluster head is fixed; the rest of the nodes in the cluster act as member nodes. In distributed clustering, the cluster head is not fixed; the cluster head role keeps shifting from node to node within the cluster on the basis of some parameters. Hybrid clustering is the combination of both centralized and distributed clustering mechanisms. This paper gives a brief overview of the clustering process in wireless sensor networks. The well-evaluated distributed clustering algorithm Low Energy Adaptive Clustering Hierarchy (LEACH) and its successors are reviewed. To overcome the drawbacks of these existing algorithms, a hybrid distributed clustering model has been proposed for attaining energy efficiency on a larger scale.
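For reference, LEACH's distributed cluster-head election uses the well-known threshold T(n) = p / (1 - p (r mod 1/p)) for nodes that have not yet served as head in the current epoch. A minimal sketch, with p and the random seed as illustrative parameters:

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r, where p is the desired
    fraction of cluster heads: nodes that have not served as head in the
    current epoch of 1/p rounds become head with this probability."""
    return p / (1 - p * (r % int(round(1 / p))))

def elect_heads(node_ids, p, r, seed=0):
    """Each eligible node independently draws a random number and becomes
    a cluster head for round r if it falls below the threshold."""
    rng = random.Random(seed)
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]
```

Note how the threshold rises over an epoch (reaching 1 in the last round), which guarantees every node serves as head once per epoch and rotates the energy burden.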
Energy Aware Clustering Protocol (EACP) (IJCNC Journal)
Energy saving to prolong the network lifetime is an important design issue when developing a new routing protocol for wireless sensor networks. Clustering is a key technique for this and helps maximize network lifetime and scalability. Most routing and data dissemination protocols for WSNs assume a homogeneous network architecture, in which all sensors have the same capabilities in terms of battery power, communication, sensing, storage, and processing. Recently, there has been interest in heterogeneous sensor networks, especially for real deployments. This paper proposes a new energy aware clustering protocol (EACP) for heterogeneous wireless sensor networks. Heterogeneity is introduced in EACP by using two types of nodes: normal and advanced. In EACP, cluster heads for normal nodes are elected with the help of a probability scheme based on the residual and average energy of the normal nodes. This ensures that only normal nodes with high residual energy can become cluster heads in a round. Advanced nodes use a separate probability-based scheme for cluster head election, and they further act as gateways for normal cluster heads, transmitting their data load to the base station when not serving as cluster heads. Finally, a sleep state is suggested for some sensor nodes during the cluster formation phase to save network energy. The performance of EACP is compared with SEP, and simulation results show better stability period, network lifetime, and energy savings than SEP.
LOAD BALANCING AND ENERGY EFFICIENCY IN WSN BY CLUSTER JOINING METHOD (IAEME Publication)
In any WSN, the network lifetime depends on the lifetime of its sensor nodes, so proper load balancing is very useful for improving network life. Tree-based routing protocols like GSTEB use dynamic tree structures for routing without forming clusters; for larger networks, that scheme is not always feasible. In this proposed work a cluster-based routing method is used. The cluster head is selected such that it is close to the base station and has the maximum residual energy among the nodes selected for cluster formation. The size of a cluster is controlled by a location-based cluster joining method: nodes select their nearest cluster head based on the signal strength from the cluster head and the distance between the node and the cluster head. Nodes connect to the head with the highest signal strength that is closest to the base station, which minimizes cluster size and reduces extra energy consumption. In addition, the cluster formation process starts only when data become available due to an event. The proposed protocol therefore performs better than existing tree-based protocols like GSTEB in terms of energy efficiency.
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers, and various other network devices. Thus there is a need for techniques to obtain green networking in computer architecture, which can lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits. Saving energy leads to cost-cutting and lower emission of greenhouse gases, which are apparently one of the major threats to the environment. 'Greening', as the name suggests, is the process of constructing network architecture in such a way as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review paper: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
AN ENTROPIC OPTIMIZATION TECHNIQUE IN HETEROGENEOUS GRID COMPUTING USING BION... (ijcsit)
The wide usage of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components have had a deep impact on the way we use computers today: these technologies have facilitated the usage of multi-owner, geographically distributed resources to address large-scale problems in many areas such as science, engineering, and commerce. The new paradigm of grid computing has evolved from research on these topics. Performance and utilization of the grid depend on a complex and excessively dynamic procedure of optimally balancing the load among the available nodes. In this paper, we suggest a novel two-dimensional figure of merit that depicts the network effects on load balance and fault tolerance estimation to improve network utilization. The enhancement of fault tolerance is obtained by adaptively decreasing replication time and message cost. On the other hand, load balance is improved by adaptively decreasing the mean job response time. Finally, an analysis of Genetic Algorithm, Ant Colony Optimization, and Particle Swarm Optimization is conducted with regard to their solutions, issues, and improvements concerning load balancing in computational grids. Consequently, a significant improvement in system utilization was attained. Experimental results demonstrate that the proposed method's performance surpasses the other methods.
An advanced ensemble load balancing approach for fog computing applications (IJECE, IAES)
Fog computing has emerged as a viable concept for expanding the capabilities of cloud computing to the periphery of the network, allowing for efficient processing and analysis of data from internet of things (IoT) devices. Load balancing is essential in fog computing because it ensures optimal resource utilization and performance among distributed fog nodes. This paper proposes an ensemble-based load-balancing approach for fog computing environments. The advanced ensemble load balancing approach (AELBA) uses real-time monitoring and analysis of fog node metrics, such as resource utilization, network congestion, and service response times, to facilitate effective load distribution. These metrics are fed into a centralized load-balancing controller, which, based on the ensemble's collective decision-making, dynamically adjusts the load distribution across fog nodes. The performance of the proposed ensemble load-balancing approach is evaluated and compared to traditional fog load-balancing techniques using extensive simulation experiments. The results demonstrate that our ensemble-based approach outperforms individual load-balancing algorithms in response time, resource utilization, and scalability. It adapts to dynamic fog environments, providing efficient load balancing even under varying workload conditions.
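The ensemble's collective decision-making could be sketched as a weighted vote over per-metric rankings of the fog nodes. The Borda-style scoring, the policy set, and the weights below are assumptions for illustration, not AELBA's actual mechanism.

```python
def ensemble_pick(metrics, policies, weights):
    """Toy ensemble decision: each policy ranks every fog node from its
    metrics (lower score = better node), and a weighted Borda-style vote
    selects the node for the next request."""
    scores = {node: 0.0 for node in metrics}
    for policy, w in zip(policies, weights):
        ranked = sorted(metrics, key=policy)          # best node first
        for rank, node in enumerate(ranked):
            scores[node] += w * (len(ranked) - rank)  # Borda-style points
    return max(scores, key=scores.get)
```

Each policy here stands in for one monitored metric (CPU utilization, response time, and so on); combining several rankings makes the choice less sensitive to any single noisy metric.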
A Prolific Scheme for Load Balancing Relying on Task Completion Time (IJECE, IAES)
In networks with a lot of computation, load balancing gains increasing significance. To offer various resources, services, and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be addressed in networks with large amounts of computation is load balancing. Load is the number of tasks 't' performed by a computing system, and it can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning the load between the nodes should enhance resource utilization and minimize computation time. This can be accomplished by a uniform distribution of load to all the nodes. A load balancing method should guarantee that each node in a network performs an almost equal amount of work, pertinent to its capacity and availability of resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning the load to the nodes in the network.
Cloud computing is an on-demand service in which shared resources, information, software, and other devices are provided to the end user as per their requirement at a specific time. A cloud consists of several elements such as clients, datacenters, and distributed servers. Many clients and end users are involved in a cloud environment, and they may make requests to the cloud system simultaneously, making it difficult for the cloud to manage the entire load at once. The load can be CPU load, memory load, delay, or network load. This might inconvenience the clients, as there may be a delay in response time, or it might affect the performance and efficiency of the cloud environment. So the concept of load balancing is very important in cloud computing to improve the efficiency of the cloud. Good load balancing makes cloud computing more efficient and improves user satisfaction. This paper gives an approach to balance the incoming load in a cloud environment by making partitions of the public cloud.
Development of a Suitable Load Balancing Strategy In Case Of a Cloud Computi... (IJMER)
Cloud computing is an attractive technology in the field of computer science. Gartner's report says that the cloud will bring changes to the IT industry. The cloud is changing our lives by providing users with new types of services. Users get service from a cloud without paying attention to the details. NIST defined cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing. Cloud computing is efficient and scalable, but maintaining the stability of processing so many jobs in the cloud computing environment is a very complex problem, with load balancing receiving much attention from researchers. Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial to improve system performance and maintain stability. Depending on whether the system dynamics are important, load balancing schemes can be either static or dynamic. Static schemes do not use the system information and are less complex, while dynamic schemes bring additional costs for the system but can change as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information; thus, the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy. The load balancing model given in this research article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. Thus, this model divides the public cloud into several cloud partitions. When the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses the suitable partitions for arriving jobs, while the balancer for each cloud partition chooses the best load balancing strategy.
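The partition model just described, a main controller routing jobs by partition status with round robin applied when a partition is idle, can be sketched as follows. The status values and the stand-in strategy for normal partitions are illustrative assumptions.

```python
import itertools

class PartitionBalancer:
    """Sketch of the partition model described above: the main controller
    routes a job to a partition by status, an idle partition assigns
    nodes round-robin, and overloaded partitions are skipped."""

    IDLE, NORMAL, OVERLOADED = "idle", "normal", "overloaded"

    def __init__(self, partitions):
        # partitions: {name: {"status": ..., "nodes": [node, ...]}}
        self.partitions = partitions
        self._rr = {name: itertools.cycle(p["nodes"])
                    for name, p in partitions.items()}

    def dispatch(self, job):
        # Prefer idle partitions: simple round robin is enough there.
        for name, p in self.partitions.items():
            if p["status"] == self.IDLE:
                return name, next(self._rr[name])
        # Otherwise pick a normal partition; min() stands in for the
        # partition balancer's dynamic best-load strategy.
        for name, p in self.partitions.items():
            if p["status"] == self.NORMAL:
                return name, min(p["nodes"])
        return None, None  # all partitions overloaded: the job waits
```

A refresh step (not shown) would periodically recompute each partition's status from its nodes' load, which is what lets the controller's choice track the system state.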
ANALYSIS ON LOAD BALANCING ALGORITHMS IMPLEMENTATION ON CLOUD COMPUTING ENVIR... (AM Publications)
Cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive; the cloud is just a metaphor for the Internet. The elements involved in cloud computing are clients, datacenters, and distributed servers. One of the main problems in cloud computing is load balancing. Balancing the load means distributing the workload among several nodes evenly so that no single node is overloaded. Load can be of any type: CPU load, memory capacity, or network load. In this paper we present a load balancing architecture and an algorithm that further improves on the load balancing problem by minimizing response time. We propose an enhanced version of the existing regulated load balancing approach for cloud computing by combining the randomization and greedy load balancing algorithms. To check the performance of the proposed approach, we used the CloudAnalyst simulator. Through simulation analysis, it has been found that the proposed improved version of the regulated load balancing approach shows better performance in terms of cost, response time, and data processing time.
Dynamic Cloud Partitioning and Load Balancing in Cloud (Shyam Hajare)
Cloud computing is an emerging and transformational paradigm in the field of information technology. It mostly focuses on providing various services on demand; resource allocation and secure data storage are some of them. Storing huge amounts of data and accessing data from such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers static as well as dynamic parameters can improve the performance of the cloud service provider and improve user satisfaction. Implementing the model can provide a dynamic way of resource selection, depending on the situation of the cloud environment at the time of accessing cloud provisions, based on cloud partitioning. This model can provide an effective load balancing algorithm over the cloud environment, better refresh-time methods, and better load status evaluation methods.
A load balancing strategy for reducing data loss risk on cloud using remodif... (IJECE, IAES)
Cloud computing always deals with new problems to fulfill the demands of challenging organizations around the whole world. Reducing response time without the risk of data loss is a very critical issue for user requests on cloud computing. Load balancing ensures quick response of virtual machines (VMs), proper usage of VMs, throughput, and minimal cost of VMs. This paper introduces a re-modified throttled algorithm (RTMA) that reduces the risk of data hampering and data loss by considering the availability of VMs, which increases the system's performance. The response time of virtual machines has been considered in our work so that data will not overflow the VMs while the migration process is running. Thus, the data migration process becomes efficient and reliable. We have completed the overall simulation of our proposed algorithm on the Cloud Analyst tool and successfully reduced the risk of data loss as well as maintained the response time.
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (IJCATR)
The aim of cloud computing is to share a large number of resources and pieces of equipment for computing and storing knowledge and information for great scientific sources. The scheduling algorithm is therefore regarded as one of the most important challenges and problems in the cloud. To solve the task scheduling problem in this study, the ant colony optimization (ACO) algorithm was adapted from social theories with a fair and accurate resource allocation approach based on machine performance and capacity. This study was intended to decrease the runtime and executive costs. It was also meant to optimize the use of machines and reduce their idle time. Finally, the proposed method was compared with the Berger and greedy algorithms. The simulation results indicate that the proposed algorithm reduced the makespan and executive cost when tasks were added. It also increased fairness and load balancing. Moreover, it made optimal use of machines possible and increased user satisfaction. According to the evaluations, the proposed algorithm improved the makespan by 80%.
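A minimal ACO-style task-to-VM assignment, with pheromone evaporation and best-solution reinforcement, might look like the sketch below. The probability rule, heuristic, and parameters are illustrative, not the paper's formulation.

```python
import random

def aco_assign(tasks, vms, iters=30, evap=0.5, seed=1):
    """Toy ACO task scheduling: each ant assigns every task to a VM with
    probability proportional to pheromone * heuristic (the heuristic
    favors lightly loaded VMs), and the best-makespan assignment found so
    far reinforces its pheromone trail after evaporation."""
    rng = random.Random(seed)
    tau = [[1.0] * len(vms) for _ in tasks]       # pheromone per (task, vm)
    best, best_span = None, float("inf")
    for _ in range(iters):
        load = [0.0] * len(vms)
        assign = []
        for t, cost in enumerate(tasks):
            weights = [tau[t][v] / (1.0 + load[v]) for v in range(len(vms))]
            v = rng.choices(range(len(vms)), weights=weights)[0]
            load[v] += cost / vms[v]              # runtime on a VM of given speed
            assign.append(v)
        span = max(load)                          # makespan of this ant's schedule
        if span < best_span:
            best, best_span = assign, span
        for t, v in enumerate(best):              # evaporate, then reinforce best
            for u in range(len(vms)):
                tau[t][u] *= evap
            tau[t][v] += 1.0 / best_span
    return best, best_span
```

Reinforcing only the best schedule biases later ants toward low-makespan assignments, which is the load-balancing effect the abstract reports.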
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECE, IAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that have to be met by the system. In this research paper, we focus on the cloud server maintenance and scheduling process; to do so, we use an interactive broadcasting energy-efficient computing technique along with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and, with that, consume more and more energy. Power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically, depending on application demands, and to support cloud computing by optimizing the servers in use.
Evaluation of load balancing approaches for Erlang concurrent application in ... (TELKOMNIKA Journal)
Cloud systems accommodate computing environments including PaaS (platform as a service), SaaS (software as a service), and IaaS (infrastructure as a service) that enable cloud services. A cloud system allows multiple users to employ computing services through browsers, which reflects an alternative service model that shifts the local computing workload to a distant site. Cloud virtualization is another characteristic of clouds; it delivers virtual computing services, imitates the functionality of physical computing resources, and supports an elastic load balancing management that provides a flexible model of on-demand services. Virtualization allows organizations to achieve high levels of reliability, accessibility, and scalability by being able to execute applications on multiple resources simultaneously. In this paper we use a queuing model to consider flexible load balancing and evaluate performance metrics such as mean queue length, throughput, mean waiting time, utilization, and mean traversal time. The model is aware of the arrival of concurrent applications with an Erlang distribution. Simulation results regarding the performance metrics are investigated. The results point out that in cloud systems both fairness and load balancing must be considered carefully.
TASK SCHEDULING USING AMALGAMATION OF METAHEURISTIC SWARM OPTIMIZATION ALGOR... (Journal For Research)
Cloud computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources, and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to the user while achieving higher server utilization and energy competence. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on Metaheuristic Swarm Optimization Algorithms (MSOA) to deal with the problem of VM placement and task scheduling in a cloud environment. MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS. The proposed algorithm is evaluated using the CloudSim simulator. The results show a reduction in makespan and an increase in the utilization ratio for the proposed MSOACS algorithm compared with SOA algorithms and Randomized Allocation (RA).
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments (IOSR-JCE)
A brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Richard's adventures in two entangled wonderlands (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1" circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R_1/2 ~ 50-200 pc, stellar masses of M⋆ ~ 10^7-10^8 M⊙, and star-formation rates of SFR ~ 0.1-1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity, which marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ~2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing by which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) is reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22 nt and 61 nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that these transcripts must cause the silencing through RNA-RNA interactions.
Types of RNAi (non-coding RNA)
miRNA
Length: 23-25 nt
Trans-acting
Binds its target mRNA with mismatches
Causes translational inhibition
siRNA
Length: 21 nt
Cis-acting
Binds its target mRNA with a perfectly complementary sequence
piRNA (Piwi-interacting RNA)
Length: 25-36 nt
Expressed in germ cells
Regulates transposon activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kDa) RNA–multi-protein complex that triggers mRNA degradation in response to siRNA.
Unwinding of the double-stranded siRNA is carried out by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN :
1. PAZ (PIWI/Argonaute/Zwille): recognition of the target mRNA.
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
CLOUD COMPUTING – PARTITIONING ALGORITHM AND LOAD BALANCING ALGORITHM
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 4, No. 5, October 2014
DOI: 10.5121/ijcseit.2014.4504
Anisaara Nadaph (1) and Prof. Vikas Maral (2)
(1) Department of Computer Engineering, K.J College of Engineering and Management Research, Pune
(2) Department of Computer Engineering, K.J College of Engineering and Management Research, Pune
ABSTRACT
Tremendous usage of the internet has placed huge volumes of data on the network, and end-users must obtain the best service without compromising network performance. As the cloud provides different services on a leasing basis, many companies are migrating from their own infrastructure to the cloud. This migration should not compromise the performance of the cloud, which can be improved by an excellent load balancing strategy that keeps the end user satisfied. This paper presents a method by which a cloud can be partitioned, together with a comparative study of different algorithms for balancing dynamic load. The comparison between the Ant Colony and Honey Bee algorithms shows which algorithm is optimal under normal load conditions, while the simple round robin algorithm is applied when the partitions are in the idle state.
KEYWORDS
cloud, central controller system (CCS), partition status collector
1. INTRODUCTION
Due to the versatile use of the internet, cloud computing is becoming the backbone of soft computing. When a server is overloaded, arriving jobs should be diverted to a server in the normal (underloaded) state so that the available resources are utilized to the maximum. Cloud computing has given the IT sector a new direction for utilizing resources in an organized manner, as a user pays only for usage; it combines techniques from grid computing, utility computing, and autonomic computing.
The cloud architecture can be divided into two sections:
i. Front end – the client computer or application that connects to the back end
ii. Back end – servers, data centers, or data storage units
The two are connected by a network, the internet. A central manager monitors the traffic for efficient performance of the system. The architecture for balancing load depends on whether the system is static or dynamic: a static system does not store the current status of the system, so that status is immaterial to its design, whereas a dynamic system accumulates the current system information and acts according to the current status of the system.
2. LITERATURE SURVEY
This section surveys the different strategies for partitioning the cloud and the different algorithms available for balancing the load.
2.1. Partitioning Techniques
There are many partitioning techniques, such as:
i. Relocation Algorithms
ii. Probabilistic Clustering
iii. K-medoids Methods
iv. K-means Methods
v. Density-Based Algorithms
vi. Density-Based Connectivity Clustering
vii. Density Functions Clustering.
All of these techniques have drawbacks that make them inefficient for this project. In this paper the servers are simply divided based on geographic location, considering a region-wise division of servers. This simplifies checking the traffic rate of a server: some parts of the globe have night while others have day, so the rate of traffic differs from region to region. Cloud computing is a growing technology in the field of computer science. Gartner [12] reports that the cloud will change the scenario in the IT sector. Cloud computing is changing life with various types of services; users get service from a cloud without paying attention to the details [11]. NIST [5] defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. More and more people are paying attention to cloud computing [6]. Cloud computing is resourceful and scalable, but maintaining the stability of processing numerous jobs in a cloud computing environment is a very complex problem, with load balancing receiving much consideration. In a dynamic system, jobs can arrive in an unpredictable pattern and the capabilities of each node in the cloud vary; for the load balancing problem, workload control is therefore critical to improve system performance and maintain stability. The load balancing model given in this paper is aimed at a public cloud that has abundant nodes spread across geographically isolated areas. Thus, the model divides the public cloud into a number of partitions. These partitions simplify load balancing when the environment is very large and complex. The cloud has a main manager that chooses the appropriate partition for arriving jobs, while for every arriving job the balancer decides the best load balancing policy from the Ant Colony and Honey Bee load balancing algorithms [3][2].
2.2. Load Balancing
Load balancing is a technique that improves the performance of available networks and resources by providing the highest throughput with the least response time. By dividing the load between servers, data can be sent and received with negligible delay. A simple example of load balancing in daily life concerns websites: without load balancing, users could experience long delays, timeouts, and slow system response times. Load balancing solutions usually apply redundant servers, which help distribute the communication traffic so that the website remains available without delay. There are different load balancing algorithms, which can be broadly divided into two groups: static load balancing [4] and dynamic load balancing.
There has been much research on load balancing for cloud partitioning. Load balancing in cloud computing was described in a white paper by Adler [8], who introduced the methods used for load balancing in the cloud. Load balancing in the cloud is still a new problem that requires new architectures to adapt to many enhancements. Chaczko et al. described the role that load balancing plays in improving performance and maintaining stability.
2.3. Ant Colony Algorithm
This algorithm is based on the natural behavior of ants, in which a special substance called pheromone is laid down by an ant that goes out in search of food, and the remaining ants follow the pheromone [13][10]. The path with the maximum pheromone is selected, as that will be the shortest path [13][10]. Just as an ant selects the shortest path, in the ant colony algorithm [4] each ant maintains a record set, and arriving traffic is diverted to the path with the highest probability, in the same way that a natural ant selects the path with the highest pheromone.
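The paper gives no code for this step, so the following Python sketch only illustrates the general idea described above: paths are chosen with probability proportional to their pheromone level, the chosen path is reinforced, and all paths evaporate slightly. The dictionary-based data structures and the deposit/evaporation constants are illustrative assumptions, not the authors' implementation.

```python
import random

def select_path(pheromone):
    """Pick a path with probability proportional to its pheromone level,
    mimicking how real ants favour trails with stronger pheromone."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    acc = 0.0
    for path, level in pheromone.items():
        acc += level
        if r <= acc:
            return path
    return path  # fallback for floating-point rounding

def update_pheromone(pheromone, chosen, deposit=1.0, evaporation=0.1):
    """Evaporate every trail a little, then reinforce the chosen one."""
    for path in pheromone:
        pheromone[path] *= (1.0 - evaporation)
    pheromone[chosen] += deposit
```

Over many selections, a path with a stronger trail is chosen far more often, which is the positive-feedback mechanism the section describes.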
2.4. Honey Bee Method
A honey-bee colony consists of [9] a queen and foragers. Foragers are of two types: employed bees (bees with a job) and unemployed bees. Unemployed bees, also called scout bees, move around in search of food without any prior knowledge; employed bees are those to whom the scout bees give their information. Scout bees move in all directions in search of food, and when they come back they perform the waggle dance [7] on the dance floor to inform the remaining bees about the distance, direction, and amount of food available. Based on this dance, patches (groups of honey bees) move out to collect the food.
3. PROPOSED WORK
3.1. System Architecture
There is a central controller system (CCS) at which all requests arrive. The load balancer having the minimum load is then selected, after which the nodes on which the data is to be processed are chosen. This selection is based on the Ant Colony Algorithm or the Honey Bee Algorithm.
3.2. System Model
There are several cloud computing methodologies; in this paper we focus on the public cloud. This (public) cloud is based on a standard cloud computing architecture, with a service provider providing the service. A large public cloud consists of many nodes located in distinct geographic areas. To manage a large cloud spread over large geographic areas, cloud partitioning is used: the public cloud consists of partitions whose divisions are based on geographic locations. The load balancing algorithm is based on this cloud partitioning architecture. After establishing the cloud regions, load balancing is invoked. When a job arrives at the system, the central controller system decides which cloud region the job enters; the decision is based on the maximum memory available, and if the same amount of memory is available, on the number of requests. The partition monitoring controller (load balancer) then decides on a strategy to allocate the jobs to the nodes in that particular region. According to the monitoring controller, when the load status of a cloud region is normal, the partition can be managed locally; but if the load status of a cloud partition is not normal, jobs should be relocated to another region.
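As a rough illustration of the routing rule just described (choose the region with the maximum available memory; break ties on the smaller number of pending requests), consider the following Python sketch. The dictionary field names and data layout are assumptions for the example, not the paper's implementation.

```python
def choose_region(regions):
    """Pick the region with the most free memory; if several tie,
    prefer the one with the fewest pending requests (assumed tie-break
    from the text; field names are illustrative)."""
    return max(regions, key=lambda r: (r["free_memory"], -r["pending_requests"]))

# Example usage: three hypothetical regions with different loads.
regions = [
    {"name": "asia", "free_memory": 300, "pending_requests": 5},
    {"name": "europe", "free_memory": 500, "pending_requests": 9},
    {"name": "america", "free_memory": 500, "pending_requests": 2},
]
```

Sorting on the tuple `(free_memory, -pending_requests)` implements exactly the two-level decision: memory first, request count only as a tie-breaker.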
3.3. System Flow
The flow diagram below provides a systematic description of the entire system with the Ant Colony Algorithm and Honey Bee Behavior. The flow is as shown in the figure: first a request arrives at the central controller system; then, depending on the free memory of the load balancers, the balancer is selected; and then the node for processing is selected by the Ant Colony or Honey Bee algorithm. For request instances with jobs of different sizes, both algorithms are studied and the comparative results are shown.
Figure 1 – Flow Diagram for Load Balancing using the Ant Colony Algorithm or the Honey Bee Algorithm
4. MATHEMATICAL MODEL
This section describes the mathematical model for the entire system, which includes the selection of a load balancer by the central controller system and then the selection of nodes by the balancer for processing, using the Ant Colony Algorithm and Honey Bee Behavior.
4.1 Selection of balancer
The function used for selection of the load balancer [1] is
m_utli(B) = sum of Memory(n) over all nodes n attached to B (1)
where Memory(n) is the memory utilized by node n and m_utli(B) is the memory utilized by load balancer B.
a. If m_utli(B) = 0, the balancer is in the Idle state.
b. If m_utli(B) > 0 and m_utli(B) <= threshold load, the balancer is in the Normal state.
c. If m_utli(B) > threshold, the balancer is in the Overloaded state.
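The three-way classification in rules (a)-(c) can be sketched directly in Python; the function name and string labels are illustrative, not from the paper.

```python
def balancer_state(m_util, threshold):
    """Classify a load balancer from its utilised memory m_utli(B),
    following rules (a)-(c): zero means idle, at or below the
    threshold means normal, above it means overloaded."""
    if m_util == 0:
        return "idle"
    if m_util <= threshold:
        return "normal"
    return "overloaded"
```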
4.2 Selection of node in Idle state
The selection of a node is done in round robin fashion, as shown in equation (2):
∀ node(i) : empty(node(i)) → select(node(i)) (2)
where i ranges over the nodes a to d attached to balancer A, and likewise over the nodes a to d attached to balancer B.
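A minimal sketch of this round robin dispatch in Python follows; the node names a-d match the text, while the job labels are illustrative assumptions.

```python
from itertools import cycle

def round_robin_dispatch(nodes, jobs):
    """Assign jobs to nodes in strict cyclic order, as used while
    the partition is in the idle state."""
    assignment = {}
    order = cycle(nodes)  # endless a, b, c, d, a, b, ...
    for job in jobs:
        assignment[job] = next(order)
    return assignment
```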
4.3. Selection of node in Normal state
4.3.1 Load Balancing by the Ant Colony Algorithm [4]
Let the system for the choice of node using the Ant Colony Algorithm on a cloud partition be defined by S. The five parameters of the system S are Input, Output, Function, Success, and Failure; hence it is defined as
S = {I, O, F, S, Fi}
where I = input to the system, O = output of the system, F = function used in the system, S = condition for success, and Fi = condition for failure of the system.
For this system, I = the set of jobs that arrive from the central controller, and the output O = the selected node; the system is in the success state if a node is obtained, and in the failure state if no node is obtained for storage of the job. For selection of a node using the Ant Colony Algorithm, the function F [10] is computed as follows:
a. Initially a randomized selection is done; if memory space is available, that node is selected.
b. If the randomly selected node does not have sufficient space, then the node having the maximum entry in the forward pheromone table is selected, as shown in equation (3). Here fp is the table holding the forward pheromone values used to select the next node, and max is a variable used to track the maximum pheromone, initialized to 0:
if fp(i) > max then max = fp(i), next_node = i (3)
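Steps (a) and (b) can be sketched together in Python as follows. The dictionaries for free space and the forward pheromone table fp are illustrative assumptions; the random first attempt and the maximum-pheromone fallback follow the two steps described above.

```python
import random

def select_node(nodes, free_space, fp, job_size):
    """Step (a): try a randomly chosen node; if it has room, take it.
    Step (b): otherwise scan the forward pheromone table fp and pick
    the node with the maximum entry that still has enough space."""
    candidate = random.choice(nodes)
    if free_space[candidate] >= job_size:
        return candidate
    best, max_fp = None, 0.0
    for node in nodes:
        if fp[node] > max_fp and free_space[node] >= job_size:
            max_fp = fp[node]
            best = node
    return best
```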
4.3.2 Load Balancing by Honey Bee Behavior [2]
Let S1 be the system for the choice of node using Honey Bee Behavior. The system S1 has five parameters, Input, Output, Function, Success, and Failure, given as S1 = {I, O, F, S, Fi}.
The input to the balancer for selection of a node by honey bee behavior is a job of varying size; the output is the node selected. The function applied for node selection is given in equation (4):
∀ node(i) : M_Free(node(i)) is maximum → select(node(i)) (4)
where i ranges over the nodes a to d attached to balancer A, and likewise over the nodes a to d attached to balancer B.
• Preemptive scheduling is needed when the load balancer is selected but the server nodes do not have sufficient space for execution of the job. In that case the jobs with the same priority are checked, and the node having the maximum number of jobs of the same priority is selected, as in equation (5):
next_node = the node with maximum Tn for the arriving job's priority (5)
In equation (5), Tn is the total number of jobs of the same priority; the Tn value is calculated for each server for each priority, which ranges from 1 to m, and the node having the largest number of jobs of the same priority as the arriving job is assigned for processing.
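The honey bee selection rule, equation (4) with the preemptive fallback of equation (5), can be sketched as follows. The node dictionaries (with a free-memory figure and a list of queued job priorities) are illustrative assumptions, not the authors' data model.

```python
def honey_bee_select(nodes, job_size, job_priority):
    """Equation (4): pick the node with maximum free memory among those
    that can hold the job. Equation (5) fallback: if no node has room,
    pick the node holding the most jobs of the arriving job's priority
    (the count Tn) for preemptive scheduling."""
    fitting = [n for n in nodes if n["free"] >= job_size]
    if fitting:
        return max(fitting, key=lambda n: n["free"])
    # preemptive fallback: largest Tn for this priority
    return max(nodes, key=lambda n: n["jobs"].count(job_priority))
```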
5. EXPERIMENTAL RESULT
The overall memory of each balancer is calculated by adding the memory space of each of its servers. Initially all servers are in the Idle state, and hence the round robin algorithm is used; when the servers reach the Normal state, the Ant Colony Optimization or Honey Bee Behavior algorithm is used. A job is assigned to the central controller system (CCS); the CCS then decides (based on the balancer with maximum space) the balancer to which the job is transferred. For this, either the Ant Colony or the Honey Bee algorithm is used, and for each job the time required to assign a node is calculated for both; the results obtained are as follows. Figures 2 and 3 give the experimental results when the maximum threshold of the server is set to 100 KB, whereas figures 4 and 5 give the results when the maximum threshold is set to 300 KB.
Figure 2 – Ant Colony Optimization (upper threshold 100 KB)
Figure 3 – Honey Bee Behavior (upper threshold 100 KB)
Figure 4 – Ant Colony Optimization (upper threshold 300 KB)
Figure 5 – Honey Bee Behavior (upper threshold 300 KB)
6. CONCLUSIONS
This paper presented an analysis of the Honey Bee and Ant Colony load-balancing algorithms for cloud partitioning. It was observed that centralized allocation is not efficient for balancing load across all nodes in a system, so a partitioning approach is required that balances load across the network. For this reason, the Honey Bee and Ant Colony algorithms were considered in this comparative study. Both (honey bee foraging and ant behavior) are based on swarm intelligence. When resource and system architecture are considered, new issues will arise. Both Ant Colony and Honey Bee appear better as the number of processing requests increases, so further study is required as future work. There are a number of algorithms available for load balancing, and it is not known how to select the appropriate balancing technique for a given application that will provide a suitable configuration for it. The experiment leads to the conclusion that Honey Bee gives better results than the Ant Colony optimization technique.
REFERENCES
[1] Gaochao Xu, Junjie Pang, and Xiaodong Fu, "A Load Balancing Model Based on Cloud Partitioning for the Public Cloud", IEEE Transactions on Cloud Computing, 2013.
[2] Dhinesh Babu L.D. and P. Venkata Krishna, "Honey bee behavior inspired load balancing of tasks in cloud computing environments", Applied Soft Computing, www.elsevier.com/locate/asoc, 2013.
[3] Lizhe Wang, Jie Tao, Marcel Kunze, "Scientific Cloud Computing: Early Definition and Experience", The 10th IEEE International.
[4] Kumar Nishant, Pratik Sharma, Vishal Krishna, Chhavi Gupta, and Kunwar Pratap Singh, "Load Balancing of Nodes in Cloud Using Ant Colony Optimization", 2012 14th International Conference on Modelling and Simulation.
[5] P. Mell and T. Grance, "The NIST Definition of Cloud Computing", http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf, 2012.
[6] Microsoft Academic Research, "Cloud computing", http://libra.msra.cn/Keyword/6051/cloud-computing?query=cloud%20computing, 2012.
[7] Nyree Lemmens and Steven de Jong, Vrije Universiteit van Brussel, Belgium, "A bee algorithm for multi-agent systems: Recruitment and navigation combined".
[8] Adler, "Load balancing in the cloud: Tools, tips and techniques", http://www.rightscale.com/info_center/white-papers/Load-Balancing-in-the-Cloud.pdf, 2012.
[9] Brian R. Johnson and James C. Nieh, "Modeling the Adaptive Role of Negative Signaling in Honey Bee Intraspecific Competition", SpringerLink, 2010.
[10] Ratan Mishra and Anant Jaiswal, "Ant Colony Optimization: A Solution of Load Balancing in Cloud", International Journal of Web & Semantic Technology, Vol. 3, No. 2, April 2012.
[11] Rouse, "Public cloud", http://searchcloudcomputing.techtarget.com/definition/public-cloud, 2012.
[12] A. Bhadani and S. Chaudhary, "Performance evaluation of web servers using central load balancing policy over virtual machine on cloud", Proceedings of the Third Annual ACM Bangalore Conference, January 2010.
[13] http://mute-et.sourceforge.net/howAnts.shtml