Cloud computing has fueled a boom in the information technology industry. It is an attractive solution for businesses that want to save costs while ensuring quality of service. One of the keys to its success is the load balancing technique used in the load balancer to minimize time costs and optimize economic costs. This paper proposes an algorithm that improves task processing time and thereby the load balancing capacity of cloud computing. The algorithm, named the Improved Throttled Algorithm (ITA), is a refinement of the Throttled Algorithm. The paper uses the Cloud Analyst tool for simulation, comparing against selected algorithms: Equally Load, Round Robin, Throttled, and TMA. The simulation results show that ITA improves the processing time of tasks and the time spent processing requests, and reduces datacenter cost compared with these popular algorithms. The improvement in ITA comes from selecting available virtual machines from an index table in priority order, which keeps response and processing times stable, limits idle resources, and minimizes cloud cost relative to the selected algorithms.
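The abstract above credits ITA's gains to scanning an index table of virtual machines in priority order and picking the first available one. The paper's exact data structures are not given; a minimal Python sketch of that throttled-style selection, with illustrative names, might look like this:

```python
# Hypothetical sketch of throttled-style VM selection from an index table.
# VMs are kept in priority order; the first AVAILABLE one is chosen and
# marked BUSY until its task completes. Names are illustrative only.

AVAILABLE, BUSY = 0, 1

class ThrottledBalancer:
    def __init__(self, vm_ids):
        # index table keeps VMs in priority order (lowest id = highest priority)
        self.index_table = {vm: AVAILABLE for vm in sorted(vm_ids)}

    def allocate(self):
        for vm, state in self.index_table.items():
            if state == AVAILABLE:
                self.index_table[vm] = BUSY   # busy until the task completes
                return vm
        return None  # all VMs busy; caller should queue the request

    def release(self, vm):
        self.index_table[vm] = AVAILABLE

lb = ThrottledBalancer([2, 0, 1])
print(lb.allocate())  # 0  (highest-priority available VM)
print(lb.allocate())  # 1
lb.release(0)
print(lb.allocate())  # 0  (freed, so chosen again before 2)
```

Because freed VMs are reconsidered from the top of the table, load concentrates on the high-priority machines and lower-priority ones stay idle until needed, which matches the abstract's claim about limiting idle resources.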
Using Grid Technologies in the Cloud for High Scalability — mabuhr
An unstated assumption is that clouds are scalable. But are they? Stick thousands upon thousands of machines together and there are a lot of potential bottlenecks just waiting to choke off your scalability supply. And if the cloud is scalable what are the chances that your application is really linearly scalable? At 10 machines all may be well. Even at 50 machines the seas look calm. But at 100, 200, or 500 machines all hell might break loose. How do you know?
You know through real-life testing. These kinds of tests are brutally hard and complicated. Who wants to do all the incredibly precise and difficult work of producing cloud scalability tests? GridDynamics has stepped up to the challenge and has just released their Cloud Performance Reports.
PROCESS OF LOAD BALANCING IN CLOUD COMPUTING USING GENETIC ALGORITHM — ecij
In the current generation, cloud computing has become a powerful, prominent, and fast-moving technology. IT companies have already changed the way they buy and design hardware through this technology, and as a high-utility service it can also make software more attractive. Load balancing is one of the most active research topics in modern cloud technology. In this paper, various proposed algorithms for load balancing in cloud computing are surveyed and compared to give a gist of the latest directions in this research area, and it is shown that using a Genetic Algorithm yields the most flexible balance.
Intelligent Workload Management in Virtualized Cloud Environment — IJTET Journal
Cloud computing is a rising high-performance computing environment with a huge-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of the cloud environment under a deadline constraint, a task scheduling model is constructed that reduces the system's power consumption and execution time while improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, selection operators, and the arrangement of Pareto solutions. The model is built on the open-source cloud simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains a better solution, balancing the load across multiple objectives.
CONTEXT-AWARE DECISION MAKING SYSTEM FOR MOBILE CLOUD OFFLOADING — IJCNCJournal
In this study, a mobile cloud offloading system has been developed that decides whether a process should run on the cloud or on the mobile platform, using a context-aware decision algorithm. The low performance and battery consumption of mobile devices have been fundamental challenges in mobile computing. To overcome these challenges, recent advances in mobile cloud computing propose a selective mobile-to-cloud offloading service that moves a mobile application from a slow mobile device to a fast server in the cloud at run time. Determining whether a process should run on the cloud is an important issue, and power consumption and time limits are vital to that decision. Since calculating power consumption is very difficult, this study uses the PowerTutor application, a dynamic power measurement and modelling tool; process completion time is the other important factor.
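The decision criterion sketched in the abstract, weighing power consumption against completion time, might be condensed as follows. All figures and parameter names are hypothetical, not PowerTutor output or the paper's actual model:

```python
# Illustrative offloading decision: run remotely only if the cloud path is
# expected to beat local execution on BOTH energy and completion time.
# Parameter names and numbers are invented for the sketch.

def should_offload(local_energy_j, local_time_s,
                   tx_energy_j, cloud_time_s, network_time_s):
    remote_energy = tx_energy_j               # device pays only for transfer
    remote_time = cloud_time_s + network_time_s
    return remote_energy < local_energy_j and remote_time < local_time_s

# Heavy task on a slow device: offloading wins on both criteria.
print(should_offload(12.0, 8.0, 3.0, 1.0, 2.0))   # True
# Tiny task: network overhead dominates, so keep it local.
print(should_offload(0.5, 0.2, 3.0, 0.1, 2.0))    # False
```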
A Comparative Study of Load Balancing Algorithms for Cloud Computing — IJERA Editor
Cloud computing is a fast-growing technology in both industry and academia. Users can access cloud services and pay based on resource usage. Balancing the load with minimum response time, maximum throughput, and good resource utilization is a major task for a cloud service provider. Many load balancing algorithms have been proposed to assign user requests to cloud resources efficiently. In this paper, three load balancing algorithms are simulated in Cloud Analyst and their results are compared.
Power consumption prediction in cloud data center using machine learning — IJECEIAES
The flourishing development of the cloud computing paradigm provides many services in the industrial business world, and power consumption by cloud data centers is one of the crucial issues for service providers. With rapid technology enhancements in cloud environments and growing data centers, power utilization is expected to grow unabated. A diverse set of connected devices, engaged with the ubiquitous cloud, results in unprecedented power utilization by the data centers, accompanied by increased carbon footprints: nearly a million physical machines (PMs) are running across data centers, along with five to six million virtual machines (VMs), and in the next five years the power needs of this domain are expected to spiral up to 5% of global power production. Reducing VM power consumption diminishes PM power, but data-center power consumption still shifts year by year, and sudden fluctuations in power utilization can cause outages in cloud data centers, so prediction methods can aid cloud vendors. This paper forecasts VM power consumption with regressive predictive analysis, a Machine Learning (ML) technique, using a Multi-Layer Perceptron (MLP) regressor, which achieves 91% accuracy during the prediction process.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET... — IJCNCJournal
Classic information processing has been replaced by cloud computing in many settings, as cloud computing becomes more popular and grows faster than other computing models; it provides on-demand services for users. Reliability and energy consumption are two pressing, trade-off-laden challenges in the cloud computing environment that require careful attention and research. This paper proposes an Auto Resource Management (ARM) scheme that enhances reliability by reducing Service Level Agreement (SLA) violations and reduces the energy consumed by cloud computing servers. ARM consists of three components: a static/dynamic threshold, a virtual machine selection policy, and a short-term resource-utilization prediction method. The paper presents the Minimum Utilization Non-Negative (MUN) virtual machine selection policy and the Rate of Change (RoC) dynamic threshold, and also proposes a method for choosing the static threshold value. To improve ARM performance, the paper proposes Short Prediction Resource Utilization (SPRU), which improves decision making by including resource utilization at both the future and the current time. The results show that SPRU enhanced the decision-making process for managing cloud computing resources and reduced energy consumption and SLA violations. The proposed scheme was tested under real workload data on the CloudSim simulator.
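The abstract names a Minimum Utilization Non-Negative (MUN) VM selection policy without spelling it out. One plausible reading, sketched here purely as an assumption rather than the paper's definition, is to migrate the VM with the smallest non-negative utilization from an overloaded host, shedding load at minimal migration cost:

```python
# Assumed interpretation of a MUN-style selection policy: among VMs whose
# measured utilization is non-negative (i.e. valid), pick the least-utilized
# one as the migration candidate. This is a sketch, not the paper's algorithm.

def select_vm_mun(vm_utilization):
    """vm_utilization: {vm_id: cpu share in [0, 1]}; returns VM to migrate."""
    candidates = {vm: u for vm, u in vm_utilization.items() if u >= 0}
    if not candidates:
        return None          # no valid measurements on this host
    return min(candidates, key=candidates.get)

print(select_vm_mun({"vm1": 0.42, "vm2": 0.07, "vm3": 0.55}))  # vm2
```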
Elastic neural network method for load prediction in cloud computing grid — IJECEIAES
Cloud computing still has no standard definition, but it concerns on-demand delivery of resources and services over the Internet or a network, and it has gained much popularity in the last few years due to rapid growth in technology and the Internet. Many technical challenges remain to be tackled, such as virtual machine migration, server consolidation, fault tolerance, scalability, and availability. This research is most concerned with balancing server load: spreading the load among the nodes of a distributed system improves resource utilization and job response time and enhances scalability and user satisfaction. A load rebalancing algorithm with dynamic resource allocation is presented to adapt to the changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive error smoothing to build an evolving system that predicts virtual machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using a cloud simulator and compared against approaches suggested in previous work. The experimental results show that the suggested method significantly outperforms those approaches.
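The EANN itself is beyond a snippet, but the prediction step it feeds, estimating the next VM load from a decayed history of observations, can be illustrated with plain exponential smoothing as a simplified stand-in (the smoothing factor is arbitrary, and this is not the paper's method):

```python
# Simplified stand-in for neural load prediction: exponential smoothing.
# Recent observations are weighted more heavily; alpha is illustrative.

def predict_next(loads, alpha=0.5):
    """loads: sequence of observed VM load fractions; returns next estimate."""
    estimate = loads[0]
    for x in loads[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

print(predict_next([0.2, 0.4, 0.8]))  # approximately 0.55
```

A balancer could compare such per-VM estimates and rebalance before the predicted load actually arrives, which is the role the EANN plays in the paper.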
Cloud Computing Load Balancing Algorithms Comparison Based Survey — INFOGAIN PUBLICATION
Cloud computing is an online, Internet-based form of computing. This paradigm has increased the use of networks, where the capacity of one node can be used by another. The cloud provides on-demand services over distributed resources such as databases, servers, software, and infrastructure on a pay-as-you-go basis. Load balancing is one of the vexing problems in a distributed environment: the service provider's resources must balance the load of client requests. Different load balancing algorithms have been proposed to manage the service provider's resources efficiently and effectively. This paper presents a comparison of various policies used for load balancing.
Dynamic Cloud Partitioning and Load Balancing in Cloud — Shyam Hajare
Cloud computing is an emerging and transformational paradigm in the field of information technology. It mostly focuses on providing various services on demand, among them resource allocation and secure data storage. Storing huge amounts of data and retrieving it from such stores is a new challenge; distributing and balancing the load over the cloud using cloud partitioning can ease the situation. Implementing load balancing that considers both static and dynamic parameters can improve the performance of the cloud service provider and user satisfaction. The implemented model provides a dynamic way of selecting resources depending on the state of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. This model can provide an effective load balancing algorithm for the cloud environment, better refresh-time methods, and better load status evaluation methods.
Load Balancing in Auto Scaling Enabled Cloud Environments — neirew J
Cloud computing is growing in popularity and has been continuously updated with improvements. Auto scaling is one such improvement: it helps maintain the availability of a customer's subscribed cloud system. How an auto scaling mechanism interacts with the many existing mechanisms in a cloud system is an issue that needs to be considered, because a new part added to a stable system rarely comes without drawbacks. In this paper, we consider how existing load balancing and auto scaling affect each other. For this purpose, we modeled a cloud system with an auto scaler and a load balancer and implemented simulations based on the constructed model. Based on the simulation results, we propose how to choose load balancers for a subscribed cloud system with an auto scaling service.
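The auto scaler modeled above is not specified in the abstract; a common threshold-based sketch, with illustrative bounds, conveys the kind of mechanism such simulations exercise:

```python
# Minimal threshold autoscaler sketch (bounds are illustrative): scale out
# when average VM utilization exceeds an upper bound, scale in below a
# lower bound, and otherwise hold steady.

def autoscale(vm_count, avg_util, upper=0.8, lower=0.3, min_vms=1):
    if avg_util > upper:
        return vm_count + 1                      # scale out
    if avg_util < lower and vm_count > min_vms:
        return vm_count - 1                      # scale in
    return vm_count                              # within band: no change

print(autoscale(4, 0.9))  # 5
print(autoscale(4, 0.2))  # 3
print(autoscale(4, 0.5))  # 4
```

The interaction the paper studies arises because a load balancer changes per-VM utilization, which is exactly the signal such a scaler reacts to, so the two mechanisms can amplify or dampen each other.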
UnaCloud is an opportunistic cloud infrastructure (IaaS) that provides on-demand access to computing capacity using commodity desktops. Although UnaCloud tries to maximize the use of idle resources by deploying virtual machines on them, it does not use energy-efficient resource allocation algorithms. In this paper, we design and implement different energy-aware techniques that operate in an energy-efficient way while still guaranteeing performance to users. Performance tests with different algorithms and scenarios, using real trace workloads from UnaCloud, show how different policies can change energy consumption patterns and reduce energy consumption in opportunistic cloud infrastructures. The results show that some algorithms can reduce energy consumption by up to 30% relative to the baseline opportunistic environment.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT — IJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements that keep varying. This dynamic cloud environment demands sophisticated algorithms to solve the task allotment problem, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution that satisfies all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
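One plausible way to combine the two algorithms as described, order the ready queue by burst time, then serve it round-robin with a fixed quantum, can be sketched as follows. The quantum and task set are illustrative, not the paper's:

```python
# Hypothetical RR+SJF hybrid: tasks enter the queue in burst-time order
# (SJF), then run round-robin with a fixed quantum so long tasks cannot
# starve short ones. Returns each task's completion time.
from collections import deque

def hybrid_schedule(burst_times, quantum=2):
    """burst_times: {task: burst}; returns {task: completion time}."""
    queue = deque(sorted(burst_times, key=burst_times.get))  # SJF ordering
    remaining = dict(burst_times)
    clock, completion = 0, {}
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])
        clock += run
        remaining[task] -= run
        if remaining[task] == 0:
            completion[task] = clock
        else:
            queue.append(task)        # round-robin requeue
    return completion

print(hybrid_schedule({"A": 5, "B": 1, "C": 3}))  # {'B': 1, 'C': 6, 'A': 9}
```

The short job B finishes immediately, as pure SJF would arrange, while the quantum keeps A from monopolizing the processor, which is the starvation benefit of Round Robin.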
A Review on Scheduling in Cloud Computing — ijujournal
Cloud computing provides software, infrastructure, and platform as a service on a pay-per-use basis, driven by client requirements. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables various cloud services to support framework implementation. This survey covers a wide range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud, and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Public Cloud Partition Using Load Status Evaluation and Cloud Division Rules — IJSRD
With the growth of cloud computing, load balancing has an important impact on performance; cloud computing efficiency depends on a good load balancer. In many situations the load balancer must partition the cloud, and different situations need different strategies for public cloud partitioning. In this paper we partition the public cloud using two mechanisms: load status evaluation and cloud division rules. Load status is evaluated by the number of cloudlets arriving at a datacenter, and cloud division rules are based on the geographical location a cloudlet comes from. Partitioning the public cloud on the basis of geographical location improves the performance of load balancing in cloud computing. We implement the proposed system with the CloudSim 3.0 simulator.
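The division rules described, partition by the requester's geographic region and then balance inside the partition, can be sketched as follows. Datacenter and region names are invented for illustration:

```python
# Sketch of cloud-division routing: each request is assigned to a partition
# by the requester's geographic region, then served round-robin by the
# datacenters inside that partition. All names are illustrative.

class PartitionedBalancer:
    def __init__(self, partitions, default="OTHER"):
        self.partitions = partitions          # region -> list of datacenters
        self.default = default
        self.counters = {region: 0 for region in partitions}

    def route(self, region):
        if region not in self.partitions:
            region = self.default             # unknown regions fall back
        dcs = self.partitions[region]
        i = self.counters[region]
        self.counters[region] = i + 1
        return dcs[i % len(dcs)]              # round-robin inside partition

lb = PartitionedBalancer({"ASIA": ["dc1"], "EUROPE": ["dc2", "dc3"],
                          "OTHER": ["dc4"]})
print(lb.route("EUROPE"))  # dc2
print(lb.route("EUROPE"))  # dc3
print(lb.route("MARS"))    # dc4 (falls back to OTHER)
```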
Time Efficient VM Allocation using KD-Tree Approach in Cloud Server Environment — rahulmonikasharma
Cloud computing is an incipient and quickly evolving model, with new expenses and capabilities being announced frequently. As the number of users and the variety of services grow, resources must be allocated to virtual machines completely and with minimum latency. Allocating virtual cloud computing resources to users is a key technical issue because user demand is dynamic in nature and therefore requires dynamic resource allocation, which in turn needs a well-balanced scheduling algorithm for the resource allocation technique. The aim of this work is to allocate resources to scientific-experiment requests coming from multiple users, where customized virtual machines (VMs) are launched on suitable hosts available in the cloud. Properly programmed cloud scheduling is therefore extremely important, and it is significant to develop efficient scheduling methods for appropriately allocating VMs onto physical resources. The proposed formulation reduces the time complexity to O(log n) by adopting a KD-Tree.
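The paper's own KD-Tree formulation is not reproduced here; as a sketch of the idea only, a toy 2-D k-d tree over hosts' free (cpu, ram) capacity finds the nearest-capacity host in O(log n) time on average, instead of a linear scan over all hosts:

```python
# Toy 2-D k-d tree over hosts' free (cpu cores, ram GB): a VM request is
# matched to the host whose free capacity is nearest to the request vector.
# A sketch of the technique, not the paper's algorithm.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                     # split at the median
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    if node is None:
        return best
    d = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or d(node["point"]) < d(best):
        best = node["point"]
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else \
                (node["right"], node["left"])
    best = nearest(near, target, best)
    if diff ** 2 < d(best):        # other side may still hold a closer host
        best = nearest(far, target, best)
    return best

hosts = [(4, 8), (8, 16), (16, 32), (2, 4)]    # free (cpu, ram) per host
tree = build(hosts)
print(nearest(tree, (7, 15)))                  # (8, 16)
```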
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME... — IJCNCJournal
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving response time for user requests on cloud computing is a critical issue in combating bottlenecks: bandwidth to and from cloud service providers is one such bottleneck, and with the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, this paper proposes the Throttled Modified Algorithm (TMA) for improving the response time of VMs on cloud computing to improve performance for end users. We have simulated the proposed algorithm with the Cloud Analyst simulation tool, and the algorithm improved the response times and processing time of the cloud data center.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
A classic information processing has been replaced by cloud computing in more studies where cloud computing becomes more popular and growing than other computing models. Cloud computing works for providing on-demand services for users. Reliability and energy consumption are two hot challenges and tradeoffs problem in the cloud computing environment that requires accurate attention and research. This paper proposes an Auto Resource Management (ARM) scheme to enhance reliability by reducing the Service Level Agreement (SLA) violation and reduce energy consumed by cloud computing servers. In this context, the ARM consists of three compounds, they are static/dynamic threshold, virtual machine selection policy, and short prediction resource utilization method. The Minimum Utilization Non-Negative (MUN) virtual machine selection policy and Rate of Change (RoC) dynamic threshold present in this paper. Also, a method of choosing a value as the static threshold is proposed. To improve ARM performance, the paper proposes a Short Prediction Resource Utilization (SPRU) that aims to improve the process of decision making by including the resources utilization of future time and the current time. The output results show that SPRU enhanced the decision-making process for managing cloud computing resources and reduced energy consumption and the SLA violation. The proposed scheme tested under real workload data over the CloudSim simulator.
Elastic neural network method for load prediction in cloud computing gridIJECEIAES
Cloud computing still has no standard definition, yet it is concerned with Internet or network on-demand delivery of resources and services. It has gained much popularity in last few years due to rapid growth in technology and the Internet. Many issues yet to be tackled within cloud computing technical challenges, such as Virtual Machine migration, server association, fault tolerance, scalability, and availability. The most we are concerned with in this research is balancing servers load; the way of spreading the load between various nodes exists in any distributed systems that help to utilize resource and job response time, enhance scalability, and user satisfaction. Load rebalancing algorithm with dynamic resource allocation is presented to adapt with changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive smoothing errors, to build an evolving system to predict Virtual Machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using cloud simulator and made comparisons with previously suggested approaches in the previous work. The experimental results show that suggested method betters present approaches significantly and all these approaches.
Cloud Computing Load Balancing Algorithms Comparison Based SurveyINFOGAIN PUBLICATION
Cloud computing is an online primarily based computing. This computing paradigm has increased the employment of network wherever the potential of 1 node may be used by alternative node. Cloud provides services on demand to distributive resources like info, servers, software, infrastructure etc. in pay as you go basis. Load reconciliation is one amongst the vexing problems in distributed atmosphere. Resources of service supplier have to be compelled to balance the load of shopper request. Totally different load reconciliation algorithms are planned so as to manage the resources of service supplier with efficiency and effectively. This paper presents a comparison of assorted policies used for load reconciliation.
Dynamic Cloud Partitioning and Load Balancing in Cloud Shyam Hajare
Cloud computing is the emerging and transformational paradigm in the field of information technology. It mostly focuses in providing various services on demand and resource allocation and secure data storage are some of them. To store huge amount of data and accessing data from such metadata is new challenge. Distributing and balancing of the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing by considering static as well as dynamic parameters can improve the performance cloud service provider and can improve the user satisfaction. Implementation the model can provide dynamic way of resource selection de-pending upon different situation of cloud environment at the time of accessing cloud provisions based on cloud partitioning. This model can provide effective load balancing algorithm over the cloud environment, better refresh time methods and better load status evaluation methods.
Load Balancing in Auto Scaling Enabled Cloud Environmentsneirew J
Cloud computing is growing in popularity and it has been continuously updated with more improvements.
Auto scaling is one of such improvements that help to maintain the availability of customer’s subscribed
cloud system. The appearance of an auto scaling mechanism in the cloud system with many existing system
mechanisms is an issue that needs to be considered. Because, normally, there is no free drawbacks
whenever a new part is added to a certain stable system. In this paper, we consider how existing load
balancing and auto scaling impact on each other. For the purpose, we have modeled a cloud system with
an auto scaler and a load balancer and implementing simulations based on the constructed model. Also
based on the results from the computer simulations we proposed about choosing load balancers for
subscribed cloud system with auto scaling service.
UnaCloud is an opportunistic cloud infrastructure (IaaS) that provides on-demand access to computing capabilities using commodity desktops. Although UnaCloud tries to maximize the use of idle resources by deploying virtual machines on them, it does not use energy-efficient resource allocation algorithms. In this paper, we design and implement different energy-aware techniques to operate in an energy-efficient way while guaranteeing performance to the users. Performance tests with different algorithms and scenarios, using real trace workloads from UnaCloud, show how different policies can change energy consumption patterns and reduce energy consumption in opportunistic cloud infrastructures. The results show that some algorithms can reduce energy consumption by up to 30% beyond the savings already earned by the opportunistic environment.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to solve the problem of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
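The Round-Robin-plus-Shortest-Job-First combination described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual algorithm: tasks are first ordered by burst time (the SJF step, cutting average waiting time), then served with a fixed time quantum (the RR step, so long tasks cannot starve short ones).

```python
from collections import deque

def hybrid_schedule(tasks, quantum):
    """Round Robin over a shortest-job-first ordered queue.

    tasks: list of (name, burst_time). Returns per-task finish times;
    the makespan is the largest finish time.
    """
    # SJF step: order the ready queue by burst time.
    queue = deque(sorted(tasks, key=lambda t: t[1]))
    clock, finish = 0, {}
    # RR step: each task runs at most `quantum` per turn.
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            queue.append((name, remaining - run))
        else:
            finish[name] = clock
    return finish

finish = hybrid_schedule([("t1", 8), ("t2", 2), ("t3", 4)], quantum=3)
```

The shortest task (`t2`) finishes first, while the quantum keeps the long task (`t1`) from monopolizing the queue.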
A Review on Scheduling in Cloud Computing
Cloud computing provides software, infrastructure, and platform as a service to clients on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion; scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey covers a wide range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud, and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Public Cloud Partition Using Load Status Evaluation and Cloud Division Rules
With the growth of cloud computing, load balancing has an important impact on performance; cloud computing efficiency depends on a good load balancer. In many situations the load balancer must partition the cloud, and different situations need different strategies for public cloud partitioning. In this paper we partition the public cloud using two mechanisms: load status evaluation and cloud division rules. Load status evaluation measures the number of cloudlets arriving at a datacenter, while cloud division rules are based on the geographical location the cloudlets come from. By partitioning the public cloud on the basis of geographical location, we improve the performance of load balancing in cloud computing. We implement the proposed system with the help of the CloudSim 3.0 simulator.
Time Efficient VM Allocation using KD-Tree Approach in Cloud Server Environment
Cloud computing is an emerging and quickly evolving model, with new capabilities and pricing being announced frequently. As the number of users and the variety of services grow, virtual machine resources must be allocated completely and with minimum latency. Allocating virtual cloud computing resources to cloud users is a key technical issue because user demand is dynamic in nature and therefore requires dynamic resource allocation. Improving allocation requires a correctly balanced scheduling algorithm for the resource allocation technique. The aim of this work is to allocate resources to scientific-experiment requests coming from multiple users, where customized virtual machines (VMs) are launched on suitable hosts available in the cloud. Properly programmed cloud scheduling is therefore extremely important, and it is significant to develop efficient scheduling methods for appropriately allocating VMs onto physical resources. The proposed algorithm reduces allocation time complexity to O(log n) by adopting a KD-Tree.
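The KD-Tree idea above, selecting a VM whose free capacity best matches a request in O(log n) expected time, can be sketched as a nearest-neighbour search over (free CPU, free memory) points. The two-dimensional capacity model and all names here are illustrative assumptions, not the paper's implementation.

```python
import math

class KDNode:
    def __init__(self, point, vm_id, axis, left=None, right=None):
        self.point, self.vm_id, self.axis = point, vm_id, axis
        self.left, self.right = left, right

def build(points, depth=0):
    # points: list of ((cpu_free, mem_free), vm_id)
    if not points:
        return None
    axis = depth % 2
    points.sort(key=lambda p: p[0][axis])
    mid = len(points) // 2
    return KDNode(points[mid][0], points[mid][1], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    # Standard KD-tree nearest-neighbour descent with branch pruning,
    # giving O(log n) expected lookup on a balanced tree.
    if node is None:
        return best
    d = math.dist(node.point, target)
    if best is None or d < best[0]:
        best = (d, node.vm_id)
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if abs(diff) < best[0]:  # the other side could still hold a closer VM
        best = nearest(far, target, best)
    return best

tree = build([((4, 8), "vm1"), ((2, 2), "vm2"), ((8, 16), "vm3")])
# A request needing about 2 CPU / 3 GB picks the closest-capacity VM.
_, vm = nearest(tree, (2, 3))
```

Rebalancing the tree as VM capacities change is the part such a system would actually have to engineer; the sketch only shows the lookup.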
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME...
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving response time for user requests on cloud computing is a critical issue in combating bottlenecks: bandwidth to and from cloud service providers is a bottleneck, and with the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, this paper proposes the Throttled Modified Algorithm (TMA) for improving the response time of VMs on cloud computing, to improve performance for the end user. We have simulated the proposed algorithm with the Cloud Analyst simulation tool, and the algorithm improved the response times and processing time of the cloud data center.
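The Throttled family that TMA (and the ITA of the title paper) builds on maintains an index table of VM availability and allocates the first available VM. Below is a minimal sketch of that base mechanism with assumed names; the actual TMA/ITA modifications, such as priority ordering of the table, are not reproduced here.

```python
class ThrottledBalancer:
    """Keeps an index table of VM states; allocates the first
    available VM and marks it busy until the task completes."""

    def __init__(self, vm_ids):
        # Index table: VM id -> available?
        self.table = {vm: True for vm in vm_ids}

    def allocate(self):
        for vm, available in self.table.items():
            if available:
                self.table[vm] = False
                return vm
        return None  # all busy: the caller must queue the request

    def release(self, vm):
        self.table[vm] = True

lb = ThrottledBalancer(["vm0", "vm1"])
a = lb.allocate()   # first available VM
b = lb.allocate()   # next available VM
c = lb.allocate()   # None - every VM is busy
lb.release(a)
d = lb.allocate()   # the released VM becomes available again
```

Scanning the table from the front is what keeps response times stable: a freed VM is reused immediately instead of spinning up idle capacity.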
Load Balancing Algorithm to Improve Response Time on Cloud Computing
Load balancing techniques in cloud computing can be applied at different levels; there are two main ones: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is the policy of allocating physical servers to virtual machines, while load balancing on virtual machines is the policy of allocating resources from physical servers to virtual machines for the tasks or applications running on them. Depending on whether the user's request is for SaaS (Software as a Service), PaaS (Platform as a Service), or IaaS (Infrastructure as a Service), an appropriate load balancing policy applies. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion avoided. Load balancing should also be performed between different datacenters in the cloud to ensure minimum transfer time. In this paper, we propose a virtual-machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Maxmin [6], and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
STUDY THE EFFECT OF PARAMETERS TO LOAD BALANCING IN CLOUD COMPUTING
The rapid growth in cloud service users and in the number of services offered increases the load on the servers at cloud datacenters. This issue is becoming a challenge for researchers and requires the effective use of a load balancing technique, not only to balance resources across servers but also to reduce the negative impact on end-user service. Current load balancing techniques have solved various problems, such as: (i) load balancing after a server is overloaded; (ii) load balancing and load forecasting for resource allocation; (iii) improving the parameters affecting load balancing in the cloud. Studying how to improve these parameters has great significance for improving system performance through load balancing, and from there more effective load balancing methods can be proposed to increase system performance. Therefore, in this paper we research some parameters affecting load balancing performance on cloud computing.
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUD
Nowadays, the demand for using resources and services via intranet systems or the Internet is growing rapidly. The resulting problem is how to use these resources effectively in terms of time and quality. Network QoS and its economics are therefore real concerns, and cloud computing was born as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud are challenging tasks. Currently, many load balancing algorithms for clouds have been proposed by authors, scholars, and experts. These existing methods are mostly natural and heuristic; the application of AI and modern data-mining technologies to load balancing is not yet popular due to the particular characteristics of the cloud. In this paper, we propose an algorithm to reduce the processing time (makespan) on cloud computing, helping load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-Means to cluster the VMs in the cloud; the load balancer then allocates requests to VMs in the most reasonable way, so that requests with the least processing time are allocated to the VMs with the lowest usage. We name this proposal MCCVA (Makespan Classification & Clustering VM Algorithm). We have experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. MCCVA shows the big potential of AI and data mining in load balancing, which can be developed further to achieve ever better QoS.
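The clustering half of the MCCVA pipeline can be illustrated with a tiny 1-D k-means over VM loads; the SVM request-classification step is assumed to have already produced a makespan class. Everything here (function names, the two-cluster setup) is an illustrative assumption, not the authors' code.

```python
def kmeans_1d(values, k, iters=20):
    # Tiny 1-D k-means: cluster VMs by current load.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            groups[i].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

def allocate(request_makespan_class, vm_loads, k=2):
    """Short-makespan requests (class 0) go to the lowest-load
    cluster; longer classes map to higher-load clusters."""
    centroids = sorted(kmeans_1d(list(vm_loads.values()), k))
    target = centroids[min(request_makespan_class, k - 1)]
    # Pick the concrete VM nearest the chosen cluster centre.
    return min(vm_loads, key=lambda vm: abs(vm_loads[vm] - target))

vm_loads = {"vm1": 0.9, "vm2": 0.2, "vm3": 0.8, "vm4": 0.1}
vm = allocate(0, vm_loads)  # shortest class -> least-loaded cluster
```

The real MCCVA classifies the request first (SVM) and clusters in more dimensions; the point of the sketch is only the class-to-cluster mapping.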
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING TO OPTIMIZE RESPONSE TIME
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily measure the response time of their requests, and it has become one of the important QoS metrics. Explored further, response time can drive solutions for distributing and load balancing resources with better efficiency, which is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA to predict the coming response time, giving a better way to resolve resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response time, showing that prediction is a valuable direction for load balancing.
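APRA's idea, predict the next response time and compare it to a threshold, can be illustrated with a simple AR(1) least-squares forecast standing in for the full ARIMA model (an assumption made purely to keep the sketch self-contained; the names are hypothetical).

```python
def ar1_forecast(series):
    # Least-squares fit of x[t] = a*x[t-1] + b, then one-step forecast.
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    b = my - a * mx
    return a * series[-1] + b

def should_redirect(response_times, threshold_ms):
    """Redirect new requests away from a VM whose *predicted*
    next response time would exceed the threshold."""
    return ar1_forecast(response_times) > threshold_ms

history = [100, 120, 140, 160, 180]  # a steadily degrading VM
redirect = should_redirect(history, threshold_ms=190)
```

On this linear history the forecast is 200 ms, so the balancer redirects before the VM actually breaches the threshold, which is the point of prediction-based balancing.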
THE EFFECT OF THE RESOURCE CONSUMPTION CHARACTERISTICS OF CLOUD APPLICATIONS ...
Auto scaling is a service provided by the cloud service provider that provisions temporary resources to the subscriber's systems to prevent overloading. Many auto scaling methods have been proposed and applied; among them, solutions based on low-level metrics are commonly used in industry systems. Resource statistics are the basis for detecting an overload situation and provisioning additional resources in a timely manner. However, the effectiveness of these methods depends heavily on the accuracy of the overload calculation from low-level metrics. Existing solutions usually equate overload with a shortage of CPU resources, but resource demand comes from the running application, and each application demands different resource types, with different CPU, memory, and I/O ratios, so overload cannot be judged statistically on CPU consumption alone. Our point of view is that, even when based on low-level resources, the source for calculation and forecasting should be the resource-demand characteristics of the application. In this paper, we develop an empirical model to assess the effect of an application's resource consumption characteristics on the efficiency of low-metric auto scaling solutions, and propose an auto scaling solution calculated from statistics of multiple resource types. The simulation results show that the proposed multi-resource solution performs better.
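The paper's core argument, that scaling decisions should look at several resource types rather than CPU alone, reduces to a simple decision rule. The metric names and thresholds below are illustrative assumptions, not the paper's calibrated values.

```python
def needs_scale_out(metrics, thresholds):
    """Scale out when ANY monitored resource is over its threshold,
    not just CPU - memory- or I/O-bound applications overload too.
    Returns the list of overloaded resources (empty = no scaling)."""
    return [r for r, v in metrics.items() if v > thresholds[r]]

# A memory-bound application: CPU looks healthy, memory does not.
metrics = {"cpu": 0.35, "memory": 0.92, "io": 0.40}
thresholds = {"cpu": 0.80, "memory": 0.85, "io": 0.85}
overloaded = needs_scale_out(metrics, thresholds)
```

A CPU-only policy would see 35% utilization and do nothing here, which is exactly the failure mode the abstract describes.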
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ...
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Migration costs due to supply failure are treated as one objective; each task is a particle, and an appropriate fitness function over the schedule (the arrangement of particles) identifies the assignment with the least total expense. In addition, a weight is assigned to each expenditure, reflecting the importance of that cost. The data used to simulate the proposed method are series of academic and research data prepared from the Internet, and MATLAB is used for the simulation. We simulate two problems: in the first, we consider four tasks on four vehicles and divide the tasks; in the second, we make the problem more complicated and consider six tasks on four vehicles. We record PSO's output for both problems over various iterations. Finally, the particle dispersion and the output of the cost function were computed for each particle.
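A minimal continuous PSO loop of the kind the article applies to scheduling costs can be sketched as follows; the particle encoding, inertia/acceleration weights, and toy cost function are assumptions of this sketch, not the article's MATLAB setup.

```python
import random

def pso_min(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(1)  # deterministic for the example
    # Each particle is a candidate assignment vector to be costed.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=cost)[:]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)[:]
    return gbest

# Toy weighted cost: total expense is a weighted sum of per-task
# costs, minimised at x = 0 for every task.
weights = [1, 2, 3]
best = pso_min(lambda x: sum(w * v * v for w, v in zip(weights, x)), dim=3)
```

The weighted sum mirrors the article's idea of granting each expenditure a weight reflecting its importance; the swarm converges toward the minimum-cost arrangement.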
DYNAMIC ALLOCATION METHOD FOR EFFICIENT LOAD BALANCING IN VIRTUAL MACHINES FO...
This paper proposes a dynamic resource allocation method for cloud computing. Cloud computing is a model for delivering information technology services in which resources are retrieved from the Internet through web-based tools and applications rather than a direct connection to a server. Users can set up and boot the required resources and pay only for what they use. Providing a mechanism for efficient resource management and assignment will therefore be an important objective of cloud computing. In this project we propose a dynamic scheduling and consolidation mechanism that allocates resources based on the load of virtual machines (VMs) on Infrastructure as a Service (IaaS). This method enables users to dynamically add and/or delete one or more instances on the basis of the load and the conditions specified by the user. Our objective is to develop an effective load balancing algorithm using virtual machine monitoring to maximize or minimize different performance parameters (throughput, for example) for clouds of different sizes (virtual topology depending on the application requirement).
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
ANALYSIS ON LOAD BALANCING ALGORITHMS IMPLEMENTATION ON CLOUD COMPUTING ENVIR...
Cloud computing means storing and accessing data and programs over the Internet instead of on your computer's hard drive; the cloud is just a metaphor for the Internet. The elements involved in cloud computing are clients, data centers, and distributed servers. One of the main problems in cloud computing is load balancing: distributing the workload evenly among several nodes so that no single node is overloaded. Load can be of any type: CPU load, memory capacity, or network load. In this paper we present a load balancing architecture and an algorithm that further improves load balancing by minimizing response time. We propose an enhanced version of the existing regulated load balancing approach for cloud computing by combining the randomization and greedy load balancing algorithms. To check the performance of the proposed approach, we used the Cloud Analyst simulator. Through simulation analysis, the proposed improved version of the regulated load balancing approach shows better performance in terms of cost, response time, and data processing time.
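The "randomization plus greedy" combination mentioned above resembles the classic power-of-d-choices strategy: sample a few nodes at random, then greedily send the request to the least loaded of them. The sketch below is an assumed interpretation with hypothetical names, not the authors' exact algorithm.

```python
import random

def pick_node(loads, sample_size=2, rng=random):
    """Randomization step: sample a few candidate nodes.
    Greedy step: choose the least-loaded candidate."""
    candidates = rng.sample(list(loads), sample_size)
    return min(candidates, key=loads.get)

random.seed(42)
loads = {"n1": 12, "n2": 3, "n3": 7, "n4": 9}
node = pick_node(loads)
loads[node] += 1  # account for the dispatched request
```

Sampling keeps the dispatcher cheap (no full scan of all nodes), while the greedy pick among the sample is known to sharply reduce load imbalance compared with purely random placement.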
Abstract: An efficient task scheduling method can meet users' requirements, improve resource utilization, and thereby increase the overall performance of the cloud computing environment. Cloud computing has new features such as flexibility and virtualization. In this paper we propose a two-level task scheduling method based on load balancing in cloud computing. This task scheduling method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator demonstrate. Keywords: cloud computing; task scheduling; virtualization.
Title: A Task Scheduling Algorithm in Cloud Computing
Author: Ali Bagherinia
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
Cloud computing is the next generation of computation; in all probability, people will have everything they need on the cloud. Cloud computing provides resources to clients on demand, and those resources can be software or hardware. Cloud computing architectures are distributed and parallel, serving the needs of multiple clients in various situations; this distributed design deploys resources distributively to deliver services efficiently to users in different geographical regions. Clients in a distributed environment generate requests randomly at any processor, so the major drawback of this randomness relates to task assignment: unequal task assignment to processors creates imbalance, with some processors overloaded and many others underloaded. The objective of load balancing is to transfer load from overloaded processes to underloaded ones transparently. Load balancing is one of the central issues in cloud computing: to achieve high performance, minimum response time, and a high resource utilization ratio, we need to transfer tasks between nodes in the cloud network, moving tasks from overloaded nodes to underloaded or idle ones. In the following sections we discuss cloud computing, load balancing techniques, and the proposed work of our load balancing system. The proposed load balancing algorithm is simulated on the Cloud Analyst toolkit, and performance is analyzed on the parameters of overall response time, data transfer, average data center servicing time, and total cost of usage. Results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load, and Throttled.
The case studies performed show more data transfer with minimum response time.
Similar to ITA: The Improved Throttled Algorithm of Load Balancing on Cloud Computing
Vehicular Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance overall routing efficiency and reduce overhead. Our analysis found that the current OLSR protocol has problems such as redundancy of HELLO and TC message packets and failure to update routing information in time, so a WS-OLSR routing protocol based on an improved MPR selection algorithm is proposed. First, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Second, a new MPR selection algorithm is proposed that considers link stability and node properties. Finally, we evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MPR selection algorithm compared to traditional approaches.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ...
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application; however, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type, and the fulfillment of application requirements such as reliability and timeliness. Nevertheless, energy constraints and the unreliable nature of wireless communications make QoS provisioning in such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular-traffic applications of WBANs, designed to significantly enhance efficiency compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May 2024 - Top 10 Read Articles in Computer Networks & Communications
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of computer networks and data communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establish new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm...
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors with a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method formulates topology control as a nonlinear programming (NP) problem with the objective of optimizing two metrics: maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing a genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C...
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy preservation services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation Systems
In Vehicular Ad-Hoc Networks (VANETs), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages between nearby vehicles at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and those of their associated owners or drivers. They prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent parties could use to monitor the vehicle over a long period; acquiring this shared data could enable the reconstruction of vehicle trajectories, posing a risk to the driver's privacy. Developing effective and scalable privacy-preserving protocols for vehicle network communication, protocols that reduce the transmission of confidential data while ensuring the required level of communication, is therefore of the highest priority. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme uses a silent period, changing a vehicle's pseudonym periodically based on the tracking of neighboring vehicles. A pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET; by changing it regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicates that APV achieves a clear improvement in privacy metrics, offering enhanced safety for vehicles during transportation in the smart city.
April 2024 - Top 10 Read Articles in Computer Networks & Communications
DEF: Deep Ensemble Neural Network Classifier for Android Malware DetectionIJCNCJournal
Malware is one of the threats to security of computer networks and information systems. Since malware instances are available sufficiently, there is increased interest among researchers on usage of Artificial Intelligence (AI). Of late AI-enabled methods such as machine learning (ML) and deep learning paved way for solving many real-world problems. As it is a learning-based approach, accumulated training samples help in improving thequality of training and thus leveraging malware detection accuracy. Existing deep learning methods are focusing on learning-based malware detection systems. However, there is need for improving the state of the art through ensemble approach. Towards this end, in this paper we proposed a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples. From given malware instance a grayscale image is generated. There is another process to extract the opcode sequences. Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) techniques are used to obtain grayscale image and opcode sequence respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected fromthe Internet sources and Microsoft are used for theempirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework. Another algorithm named Pre-Process is proposed to assist the EL-AML algorithm for obtaining intermediate features required by CNN and LSTM.Empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IOT TrafficIJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic, Big Data are challenging to store, deal with and analyse using a single computer. In this paper we developed parallel implementation using a High Performance Computer (HPC) for the Non-Negative Matrix Factorization technique as an engine for an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets of order of millions samples are distributed evenly on all the computing cores for both storage and speedup purpose. The distribution of computing tasks involved in the Matrix Factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-IDS-NMF give better results than the traditional ML-based intrusion detection systems. We could train the HPC model with datasets of one million samples in only 31 seconds instead of the 40 minutes using one processor), that is a speed up of 87 times. Moreover, we have got an excellent detection accuracy rate of 98% for KDD dataset.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ...IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve asingle application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic typeand the fulfillment of application requirements, such asreliability and timeliness. Never the less, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning under such networks a non-trivial task. In this context, the current paper introduces a novel Medium AccessControl (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm...IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors that have a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method is to formulate topology control algorithm as a nonlinear programming (NP) problem with the objective to optimizing two metrics, maximum communication range, and desired degree. This NP problem is solved at the SDWSN controller by employing the genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C...IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy preservation services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation SystemsIJCNCJournal
In -Vehicle Ad-Hoc Network (VANET), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular Network safety applications facilitate the transmission of messages between vehicles that are near each other, at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles’ identities and their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking the vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent groups could use to Monitor the vehicle over a long period. The acquisition of this shared data has the potential to facilitate the reconstruction of vehicle trajectories, thereby posing a potential risk to the privacy of the driver. Addressing the critical challenge of developing effective and scalable privacy-preserving protocols for communication in vehicle networks is of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper aims to propose an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity. 
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicates that the efficiency of APV is a better improvement in privacy metrics. It is evident that the AVP offers enhanced safety for vehicles during transportation in the smart city.
DEF: Deep Ensemble Neural Network Classifier for Android Malware DetectionIJCNCJournal
Malware is one of the threats to security of computer networks and information systems. Since malware instances are available sufficiently, there is increased interest among researchers on usage of Artificial Intelligence (AI). Of late AI-enabled methods such as machine learning (ML) and deep learning paved way for solving many real-world problems. As it is a learning-based approach, accumulated training samples help in improving thequality of training and thus leveraging malware detection accuracy. Existing deep learning methods are focusing on learning-based malware detection systems. However, there is need for improving the state of the art through ensemble approach. Towards this end, in this paper we proposed a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples. From given malware instance a grayscale image is generated. There is another process to extract the opcode sequences. Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) techniques are used to obtain grayscale image and opcode sequence respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected fromthe Internet sources and Microsoft are used for theempirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework. Another algorithm named Pre-Process is proposed to assist the EL-AML algorithm for obtaining intermediate features required by CNN and LSTM.Empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF based Intrusion Detection System for Big Data IoT TrafficIJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic, Big Data are challenging to store, deal with and analyse using a single computer. In this paper we developed parallel implementation using a High Performance Computer (HPC) for the Non-Negative Matrix Factorization technique as an engine for an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets of order of millions samples are distributed evenly on all the computing cores for both storage and speedup purpose. The distribution of computing tasks involved in the Matrix Factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-IDS-NMF give better results than the traditional ML-based intrusion detection systems. We could train the HPC model with datasets of one million samples in only 31 seconds instead of the 40 minutes using one processor), that is a speed up of 87 times. Moreover, we have got an excellent detection accuracy rate of 98% for KDD dataset.
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered...IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this WORK aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. Still, with novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. As a whole, the work contributes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered...IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this WORK aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. Still, with novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. As a whole, the work contributes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
** Connect, Collaborate, And Innovate: IJCNC - Where Networking Futures Take ...IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
ITA: The Improved Throttled Algorithm of Load Balancing on Cloud Computing
International Journal of Computer Networks & Communications (IJCNC) Vol.14, No.1, January 2022
DOI: 10.5121/ijcnc.2022.14102
ITA: THE IMPROVED THROTTLED ALGORITHM OF
LOAD BALANCING ON CLOUD COMPUTING
Hieu N. Le¹ and Hung C. Tran²
¹Department of Information Technology, Ho Chi Minh City Open University, Ho Chi Minh City, Vietnam
²Posts and Telecommunication Institute of Technology, Ho Chi Minh City, Vietnam
ABSTRACT
Cloud computing has made the information technology industry boom. It is a great solution for businesses that want to save costs while ensuring quality of service. One of the key issues behind the success of cloud computing is the load balancing technique used in the load balancer to minimize time costs and optimize economic costs. This paper proposes an algorithm that enhances the processing time of tasks and thereby helps improve load balancing capacity on cloud computing. The algorithm, named the Improved Throttled Algorithm (ITA), is an improvement of the Throttled Algorithm. The paper uses the Cloud Analyst tool for simulation and compares ITA against four selected algorithms: Equally Load, Round Robin, Throttled, and TMA. The simulation results show that the proposed ITA improves the processing time of tasks and the time spent processing requests, and reduces datacenter cost compared to the selected popular algorithms. The improvement of ITA comes from selecting virtual machines from an index table of available machines in order of priority. This keeps response times and processing times stable, limits idle resources, and minimizes cloud costs compared to the selected algorithms.
KEYWORDS
Cloud Computing, Load Balancing, Processing Time, Improved Throttled Algorithm.
1. INTRODUCTION
Nowadays, as demand for shared resources and services on the Internet increases rapidly, ensuring time efficiency, service quality, and cost savings has become a primary goal. To meet that demand, the Cloud Computing model was launched in June 2007, with Amazon promoting its research and deployment. Shortly thereafter, with the participation of large companies such as Microsoft, Google, and IBM, cloud computing began to thrive. Cloud computing [1], [2], [3], [4] is a trend in the development of computer networks, inheriting earlier network and distributed computing concepts to integrate machine resources on demand in a convenient and faster way. It allows users to deliver services, release resources easily, reduce communication with suppliers, and pay only for what they use (pay-per-use).
According to [4], QoS in the cloud includes the following models: the load balancer, the system model, and the applications of QoS models. From this we can see that load balancing is one of the important factors in ensuring quality of service on the cloud, and it is one of the research directions for cloud development and improvement. To improve the performance of cloud services, resource management [5] faces basic issues such as resource allocation, resource response, connecting to resources, discovering unused resources, mapping corresponding resources, modelling resources, providing resources, and planning resource utilization. In particular, the plan for resource use is
based on connections over time and the processing time of the service. From there, by studying how to improve processing time, we arrive at a solution for request allocation in load balancing. This is one of the research directions for improving cloud technology and making the cloud ever more complete and advanced.
Cloud computing [6], [7], [8], [9] has been evolving since the 1980s through several phases, including grid and utility computing, the Application Service Provider model, and Software as a Service. It was not until 2006 that the term "cloud computing" really emerged strongly. During this time, Amazon released its Elastic Compute Cloud (EC2) service, which allowed people to rent computing and processing power to run their enterprise applications, and browser-based enterprise applications started being delivered that year. Three years later, in 2009, Google App Engine was born and became one of the historic milestones in the development of cloud computing. However, this constant growth also brings problems to cloud computing, especially the problem of overload. Therefore, many load balancing algorithms have appeared to solve the overload problem.
In this paper, we propose the ITA, the Improved Throttled Algorithm, which enhances load balancing with better task processing time. The article includes the following sections: Section I is the introduction; Section II discusses related work; Section III presents the proposed ITA algorithm; Section IV presents the simulation results and evaluation; Section V is the conclusion.
2. RELATED WORK
Researchers have been working on load balancing and have proposed many algorithms to solve these issues on cloud computing. To study cloud computing, Rajkumar Buyya and co-authors [10] reported on their Cloud Analyst tool, a simulation tool that lets people create and test their own cloud environments. In this report, they integrated some well-known algorithms. This tool opened a new generation of cloud computing studies, especially of the load balancer. In 2018, D. Asir et al. [11] once again evaluated CloudSim and its related tools such as Cloud Analyst. Their paper showed a very positive prospect for CloudSim. They concluded that CloudSim is a great tool for developing and studying the cloud before deploying it, and that CloudSim helps model and simulate the cloud computing environment efficiently.
After the appearance of CloudSim and its tools, much research has been conducted and performed well in this simulation environment. In 2012, Klaithem Al Nuaimi et al. [12] surveyed load balancers in the cloud and gave readers the big picture. They introduced the challenges in load balancing and reviewed the existing load balancing algorithms (static and dynamic). Their study also evaluated some well-known algorithms such as CLBDM (Central Load Balancing Decision Model), the MapReduce algorithm, WLC (Weighted Least Connection), ESWLC (Exponential Smooth Forecast based on Weighted Least Connection), the dual-direction downloading algorithm from FTP servers (DDFTP), Load Balancing Min-Min (LBMM), and the Opportunistic Load Balancing algorithm (OLB). In 2015, Abhay Kumar et al. [13] proposed a new static load balancer for the cloud, combining the Monitoring Load Balancing Algorithm with the Throttled Load Balancing Algorithm. The simulation in that study was compared to the Throttled Algorithm using Overall Response Time and datacenter Processing Time, and the results were better than the Throttled Algorithm.
Given the focus of this study, we would like to introduce the recent algorithms that are popular and well-known in load balancing on cloud computing.
2.1. Round-Robin Algorithm
N. Swarnkar et al. [14] discussed the Round-Robin approach to load balancing. The Round-Robin load balancing technique is the simplest technique; it was built for and is widely used in time-sharing systems. Round-Robin performs a fair load distribution over the VMs in a rotating circular order. Its advantages: it is a simple algorithm, it ensures fair and timely handling of tasks, and it responds quickly if the VMs' processing powers are assumed to be equal. There is no starvation in this algorithm. The disadvantages of Round-Robin: each node has a fixed time interval, there is no flexibility or scalability, some nodes can carry heavy loads while others sit idle, and it does not save the previous allocation state of the VMs.
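The rotating circular distribution described above can be sketched as follows. This is an illustrative sketch, not the Cloud Analyst implementation; the VM ids are hypothetical.

```python
from itertools import cycle

def make_round_robin(vm_ids):
    """Return a selector that hands out VM ids in a rotating circular order."""
    ring = cycle(vm_ids)          # endless rotation over the fixed VM list
    return lambda: next(ring)

pick = make_round_robin([0, 1, 2])
assignments = [pick() for _ in range(6)]   # six incoming requests
print(assignments)  # [0, 1, 2, 0, 1, 2]
```

Note that the selector ignores how loaded each VM currently is, which is exactly the disadvantage listed above: heavily loaded VMs keep receiving requests at the same rate as idle ones.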
2.2. Weighted Round-Robin Algorithm
S. Swaroop Moharana et al. [15] discussed the Weighted Round-Robin algorithm for load balancing in the cloud environment. The Weighted Round-Robin algorithm performs circular distribution based on the Round-Robin algorithm, but relies on the capacity of each virtual machine, recorded in a weight table, to distribute the load to the virtual machines accordingly. Advantages of this algorithm: it improves on Round-Robin by combining rotation with a weight table of the virtual machines' processing capacities, and it is more efficient than the original Round-Robin when the virtual machines' processing powers differ. Its disadvantages: no flexibility or scalability, and it does not save the previous allocation state of the virtual machines.
Figure 1. Weighted Round-Robin Algorithm Model
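The weight-table idea can be sketched as below; a VM with weight 2 receives twice as many requests per rotation as a VM with weight 1. The weights and VM names are assumptions for illustration, not values from [15].

```python
def weighted_round_robin(weight_table):
    """Yield VM ids in rotation, repeating each id according to its weight."""
    while True:
        for vm_id, weight in weight_table.items():
            for _ in range(weight):
                yield vm_id

# vm0 is assumed twice as powerful as vm1, so it carries weight 2.
gen = weighted_round_robin({"vm0": 2, "vm1": 1})
first_six = [next(gen) for _ in range(6)]
print(first_six)  # ['vm0', 'vm0', 'vm1', 'vm0', 'vm0', 'vm1']
```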
2.3. Active Monitoring Load Balancer Algorithm
The Active Monitoring Load Balancer Algorithm [16] aims to maintain load balance across all available virtual machines. The algorithm maintains information about each virtual machine and the number of requests currently allocated to it. When a request needs to be allocated to a virtual machine, the intermediate datacenter controller identifies the virtual machine with the least load. If more than one VM qualifies, the first virtual machine ID is selected. The datacenter controller then sends the request to the virtual machine identified by that ID and records the allocation. When the datacenter controller receives a response from the Cloudlet, it reduces that virtual machine's allocation count by one. Advantages of this algorithm: execution adapts to the state of the system, load is shifted from heavily loaded VMs to lightly loaded ones, and response time improves significantly. Disadvantages: it relies only on the current load of the VMs when allocating new requests, choosing a less-loaded VM for the next allocation, which does not always give good results.
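The least-load selection with the first-ID tie-break can be sketched as below. This is an illustrative reading of [16]; the allocation-count table is an assumed data structure.

```python
def select_least_loaded(alloc_counts):
    """Return the id of the VM with the fewest allocated requests.

    Ties are broken by the lowest VM id, mirroring the 'first VM ID' rule.
    """
    return min(sorted(alloc_counts), key=lambda vm_id: alloc_counts[vm_id])

counts = {0: 3, 1: 1, 2: 1}        # current allocations per VM
vm = select_least_loaded(counts)   # VMs 1 and 2 tie; the first id wins
counts[vm] += 1                    # the controller records the new allocation
print(vm, counts)  # 1 {0: 3, 1: 2, 2: 1}
```

On task completion the controller would decrement `counts[vm]` again, keeping the table consistent with the in-flight requests.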
2.4. Throttled Algorithm
Makroo et al. [17] discussed the Throttled load balancing method for the cloud environment. The Throttled algorithm balances the load by maintaining a table of virtual machines and their status. When the datacenter needs to allocate a request to a virtual machine, the load balancer chooses the first VM found in the list of available VMs. If a virtual machine is found, the request is allocated to it. If no VM is found, the load balancer returns the value -1 to the datacenter, which then puts the request/task into a queue to wait for a VM to become available. This is a dynamic load balancing algorithm. Its advantages: the list of virtual machines is maintained with the status of the VMs, good performance, and effective use of resources. Its disadvantages: it needs to scan all VMs; it is not effective if the idle VM is at the bottom of the list, because the algorithm selects the first idle virtual machine from the list; and it does not consider the current load of the virtual machines.
Figure 2. Throttled Algorithm Model
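The first-available scan and the -1 sentinel can be sketched as follows. This is a minimal sketch; the layout of the state table is an assumption, not the authors' code.

```python
def throttled_select(vm_states):
    """Scan the VM status table and return the first available VM id, else -1."""
    for vm_id, busy in vm_states.items():
        if not busy:
            vm_states[vm_id] = True   # mark the chosen VM as allocated
            return vm_id
    return -1                         # all busy: the datacenter queues the request

states = {0: True, 1: False, 2: False}
print(throttled_select(states))                # 1 (first available in the table)
print(throttled_select({0: True, 1: True}))    # -1, so the request waits in a queue
```

The full scan on every request is the weakness noted above: with the idle VM at the bottom of a long table, each allocation still walks the whole list.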
2.5. Other Algorithms
In 2013, Shridhar G. Domanal et al. [18] proposed a modification of the Throttled algorithm. The algorithm makes incoming requests available to virtual machines through a status list of all virtual machines. This status list is constantly updated, which helps improve the average system response time.
In 2014, Rakesh Kumar Mishra et al. [19] proposed two options for choosing a datacenter to reduce processing time. The first suggestion is to select a preferred datacenter based on Round-Robin. Compared to the random selection algorithm, the datacenter's processing time is better, but resources are not well handled. The second proposal is based on an extended Round-Robin to choose a datacenter, with better processing time than random selection, though this depends on the datacenter configuration and scheduling costs. Instead of selecting a random datacenter, the algorithm can select the datacenter in a Round-Robin way to distribute requests uniformly across all datacenters in the cloud. This leads to more resource usage, but datacenters may have different processing speeds and costs.
In 2017, Imtiyaz Ahmad et al. [20] proposed the Advanced Throttled Load Balancing Algorithm. A priority is assigned to each VM, calculated from the capacity of the VM and the number and size of the tasks assigned to it. The improved scheduler selects the VM with the highest priority among the available VMs. A priority threshold is also set to avoid overloading: if a VM's priority is below the threshold, no task is allocated to that VM.
In 2018, Nguyen Xuan Phi et al. [21] proposed the TMA, the Throttled Modified Algorithm, to reduce response time and processing time on cloud computing. This algorithm uses two indexes to
describe the status of the VMs: 0 stands for an available VM and 1 stands for an unavailable VM. The VM state is maintained in two index tables instead of scanning the status of all VMs. The algorithm only needs to allocate the request to a virtual machine in the available table, which reduces the response time and processing time. The gain of the algorithm increases significantly as the number of virtual machines increases.
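The two-index bookkeeping can be sketched as below. This is our illustrative reading of TMA [21] with assumed names and data structures, not the authors' code.

```python
from collections import deque

available = deque([0, 1, 2])   # index table of VMs in state 0 (available)
busy = set()                   # index table of VMs in state 1 (busy)

def tma_allocate():
    """Allocate from the available table only, so no full VM scan is needed."""
    if not available:
        return -1              # all VMs busy: the request is queued
    vm_id = available.popleft()
    busy.add(vm_id)
    return vm_id

def tma_release(vm_id):
    """On task completion, move the VM back to the available table."""
    busy.discard(vm_id)
    available.append(vm_id)

print(tma_allocate())  # 0
tma_release(0)
print(sorted(busy), list(available))  # [] [1, 2, 0]
```

Allocation is O(1) regardless of how many VMs exist, which matches the observation that the gain grows with the number of virtual machines.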
In 2018, the article by Gupta, S. et al. [22] aimed to maximize throughput and increase the performance of cloud services by proposing a new load balancing algorithm. The authors compared Throttled, Round-Robin, and their proposed Advanced Throttled Algorithm (ATA). ATA is also an improvement of the Throttled algorithm that distributes workload evenly between VMs. The load balancer maintains an index table containing all the requests currently allocated to each VM; a new arriving request is assigned to the VM with the least load, and the load balancer deallocates the VM after completion of the task. For simulation the authors used the Cloud Analyst tool, and the results showed that the proposed algorithm was more efficient than the existing static and dynamic load balancing algorithms.
In 2019, Tran Cong Hung et al. [23] proposed the MMSIA algorithm to improve the Max-Min scheduling algorithm. It improves the execution time of requests by clustering requests by size and clustering VMs by utilization percentage. The algorithm then allocates the cluster of largest requests to the VM with the smallest utilization percentage, repeating until the request list is empty. In the simulation results, the MMSIA algorithm improved the execution time.
In 2020, the research article by Abiodun K. Moses et al. [24] proposed an approach that uses both the Max-Min and Round-Robin algorithms (MMRR). With this algorithm, requests with long execution times are allocated with Max-Min, and requests with the smallest execution times are allocated with Round-Robin. The authors used the Cloud Analyst tool to implement the proposed technique and conducted a comparative analysis to optimize cloud services for clients. The findings showed that MMRR made significant changes to cloud computing. According to the simulation results, the loading time of the datacenter was good for both Throttled and MMRR, while Round-Robin was the worst. The proposed MMRR performed better than the other tested algorithms in overall response time (227.26 ms) and cost-effectiveness (89%).
Also in 2020, Badshaha P. Mulla et al. [25] studied VM allocation in the heterogeneous cloud. The authors proposed an enhancement in load balancing for efficient VM allocation in a heterogeneous cloud. Their proposal allocates independent user tasks or requests to the available virtual machines in the datacenter efficiently to maintain proper load balance, aiming to minimize both the response time and the processing time of requests. The proposed work was simulated on a Facebook-style cloud-based social networking application on the Internet. Their results showed a significant reduction in response time, which in turn reduced the datacenter processing time, compared to the Throttled and Round-Robin algorithms.
In 2021, T. J. B. Durga Devi et al. [26] proposed an application of a neuro-fuzzy inference system to load balancing. In this article, the authors also address the security of VMs in the cloud environment; load balancing corresponds to an NP-hard optimization problem. Following Forbes on the introduction of the General Data Protection Regulation, security in the cloud continues to be an issue; the existing system uses a fuzzy-based hybrid load balancer, but this is not satisfactory. The authors focus on opportunities for improving CPU utilization and turnaround time, and in terms of security they propose MANFIS (Modified Adaptive Neuro-Fuzzy Inference System). The parameters of MANFIS are optimized with the Firefly Algorithm. Security is
imposed on user authentication by using Enhanced Elliptic Curve Cryptography, a password-less mechanism to authenticate users. The experimental results show improved and satisfactory security on the cloud, as well as better performance in resource utilization, cost, and execution time compared with the existing system.
In 2021, an overview by Dalia Abdulkareem Shafiq et al. [27] showed that load balancing is an important aspect of the cloud computing environment: it helps enhance workload distribution and utilize resources efficiently, and especially improves the response time for cloud users. The paper notes that many issues relate to LB, such as task scheduling, migration, and resource utilization. The authors surveyed and analysed the past six years of research and studies on load balancing. The review also shows the potential of intelligent approaches such as Artificial Intelligence and Machine Learning for LB on the cloud. This study helps researchers identify research problems related to load balancing, especially further reducing the response time and avoiding server failures. Another benefit of this study for our proposal concerns simulation tools and experimental environments: it shows that CloudSim and Cloud Analyst are the best tools for this field of study.
Recent studies and research on load balancing are developing widely, but most of them are heuristic-based improvements of existing algorithms, owing to the characteristics of the cloud. On the cloud, the LB must work in real time to serve user requests as fast as possible, so researchers and cloud providers cannot directly integrate AI or Machine Learning techniques into the LB. In this direction, we can improve the Throttled Algorithm by addressing what it lacks: a priority order over the VMs. The related works above give us many ideas for enhancing the performance of LB on the cloud; therefore, we propose our ITA, an improvement of the Throttled Algorithm.
3. PROPOSED ALGORITHM
Currently, due to the dynamic characteristics of the Throttled Algorithm, it is mentioned and improved in load balancing by many articles and research works. With the goals of reducing response time and cloud processing time, reducing resource waste, and reducing load imbalance, we also propose an improvement of the Throttled algorithm, named the Improved Throttled Algorithm (ITA). In this study, the proposed algorithm builds on the work of [21] to help enhance response time and request processing time, and to reduce datacenter cost. With this algorithm, we select a virtual machine for load balancing by using the available index with priority. This approach gives the load balancer faster processing time and minimizes idle resources.
3.1. Research Model
As an algorithm improved from Throttled, ITA chooses the first available virtual machine to assign the task to the selected resource. In particular, the selected virtual machine is the one calculated to be least used (minimum usage), where usage is calculated from the total processing time (makespan) of its most recent task.
Figure 3. Research Model of ITA
Usage is calculated from the total processing time of the most recent task. Datacenter cost is defined as the total of the data transfer cost and the virtual machine rental cost. With this research model, we make the following assumptions: the load balancer knows the input requirements in advance; the load balancer knows in advance the list of virtual machines and their capabilities (the virtual machines have the same configuration of RAM, memory, and CPU; the physical hosts have the same configuration of RAM, memory, CPU, and bandwidth, and the datacenters differ geographically). The targets of this research model: minimize the response time and the processing time (makespan); help process requests faster; limit load imbalance.
3.2. Steps of the Algorithm
The sequence of the ITA algorithm is explained and described in the following steps:
Step 1: The ITA load balancer performs load balancing by maintaining and constantly updating the usage of the virtual machine list.
- An index table contains the information of the virtual machines (VMs) in the available state '0' (Available Index).
- An index table contains the information of the virtual machines (VMs) in the busy state '1' (Busy Index).
In the beginning, all virtual machines (VMs) are in the "Available Index" table and the "Busy Index" table is empty.
Step 2: The datacenter controller (DCC) receives a new processing task from a request.
Step 3: The datacenter controller (DCC) queries the ITA load balancer for the next allocation.
Step 4: The ITA load balancer finds the virtual machine (VM) ID in the "Available Index" table with the smallest usage, then returns this virtual machine ID to the datacenter controller (DCC).
- The DCC pushes the request to the VM identified by that ID for processing and notifies the ITA load balancer of this allocation. The ITA load balancer moves that VM ID into the "Busy Index" table and waits for the next request from the DCC.
- If the "Available Index" table is empty (all VMs are busy), the ITA load balancer returns the value -1 to the DCC, and the DCC puts the request into a queue for subsequent allocation.
Step 5: After the VM has processed the request and the datacenter controller (DCC) receives the response, the DCC notifies the ITA load balancer, which updates the "Available Index" table and the usage list of this table.
Step 6: If there are more requests, the datacenter controller repeats from Step 3; this repeats until the "Available Index" table is empty.
Pseudocode of the getNextAvailableVm function in Step 4 of the ITA algorithm:
Begin
    vmId = -1; min = 0; i = 0
    For each vm in VMList
        usage = vm.getUsage()
        If i == 0 then
            min = usage
            vmId = vm.getId()
        Else If usage < min then
            min = usage
            vmId = vm.getId()
        End If
        i++
    End For
    Allocate(vmId)
    Return vmId
End
Figure 4. ITA Operation Diagram
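A runnable version of the getNextAvailableVm pseudocode can be sketched as below. This is a Python sketch rather than the paper's CloudSim Java classes; the VM structure and the usage values are assumptions for illustration.

```python
class VM:
    """Minimal stand-in for a CloudSim VM; usage is the recent-task makespan."""
    def __init__(self, vm_id, usage):
        self.vm_id = vm_id
        self.usage = usage

def get_next_available_vm(available_index):
    """ITA Step 4: return the id of the least-used VM in the Available Index."""
    vm_id, min_usage = -1, None
    for vm in available_index:
        if min_usage is None or vm.usage < min_usage:
            min_usage = vm.usage
            vm_id = vm.vm_id
    return vm_id   # -1 when the Available Index is empty (request is queued)

vms = [VM(0, 12.5), VM(1, 3.2), VM(2, 7.8)]
print(get_next_available_vm(vms))  # 1, the VM with the smallest usage
print(get_next_available_vm([]))   # -1
```

Unlike the original Throttled scan, the choice here is ordered by usage, so a VM that just finished a long task is deprioritized in favour of lightly used ones.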
4. SIMULATION RESULTS
This study uses the Cloud Analyst tool to implement the ITA algorithm for simulation and testing. The experimental results show that our proposed method improves the processing time compared to the TMA [21], Round-Robin, Equally Load, and original Throttled algorithms.
Figure 5. Cloud Analyst Tools with 5 Algorithms
4.1. Simulation Environment
We simulate the cloud environment using the CloudSim library (an open-source library suite provided at http://www.cloudbus.org/) built into Cloud Analyst, programming in the Java language. We created 4 different cloud simulation environments, corresponding to 4 scenarios, and then created a random request environment from UserBases in different geographical regions served by the same cloud. We then implemented the ITA algorithm in the simulation environment. Similarly, we ran the proposed algorithm and compared it with four algorithms: Equally Spread Current Execution Load, Round Robin, Throttled, and TMA.
The proposed ITA algorithm is built by creating a ThrottledVmITALoadBalancer class inheriting from the VmLoadBalancer class, updating a number of load parameters (CPU, RAM, etc.) and related properties. The getNumber method calculates the usage of the virtual machine, and we adjust the built-in functions to match the proposed algorithm.
4.2. Evaluation Criteria
We run the cloud simulation with the above parameters using the load balancing algorithms available in Cloud Analyst (Round Robin, Throttled, and Equally Spread Current Execution Load), the additional TMA algorithm [21], and the proposed algorithm, with the same input, and compare the outputs, especially the response time and the datacenter cost.
With these simulation results, we use 4 key output variables to evaluate the performance of the 5 algorithms: Overall Response Time (ORT), Datacenter Processing Time (DPT), Datacenter Request Servicing Time (DRST), and Datacenter Cost (DCC). According to CloudSim and the Cloud Analyst tool, the overall response time is derived from the series of response times recorded while serving requests from the cloud users. Response time is the sum of the processing time and the data transfer time over the network. The processing time is the time a machine/VM needs to process the task arriving with a request; this processing time is recorded locally at the datacenter. The request servicing time is calculated as the serving time of the datacenter, excluding the transfer time to the user. The datacenter cost is calculated from the
sum of all work done in the VMs; the work may be of various kinds, such as ALU actions, CU actions, storage actions, etc. Each action requires some power and time, and this amount is the cost of an action; in VMs we can express it in MIPS. If a job needs more MIPS, it costs more. The total cost is the sum of the costs of all actions. For each variable, we record the maximum, minimum, and average values after processing the users' requests.
ORT = DPT + [Transfer Time]
DCC = [Action Cost] × [Total Actions − Total MIPS]
For response time, we use the response time of the virtual machines to evaluate performance. The lower the predicted response time of the cloud, the better the efficiency of the algorithm, the lower the cost, and the better the technique. Requests are represented by Cloudlets in CloudSim, and the size of each Cloudlet is randomly generated by the Java random function. The number of Cloudlets ranges from 100 to 1500.
Table 1. Parameters of Request/Cloudlet
Length: 100 ~ 1700 | File Size: 5000 ~ 45000 | Output Size: 450 ~ 750 | Number of CPUs (PEs): 1
4.3. Simulation Results
We run tests with 4 cases of different inputs (different UserBases, datacenters, and VM quantities), corresponding to 4 scenarios. In Scenario 1, the simulation environment includes 1 datacenter with 20 virtual machines and 2 UserBases. In Scenario 2, it includes 1 datacenter with 5 virtual machines and 3 UserBases. In Scenario 3, it includes 1 datacenter with 5 virtual machines and 4 UserBases. Finally, in Scenario 4, it includes one datacenter with 50 virtual machines, one datacenter with 5 virtual machines, and 5 UserBases.
The detailed parameters of the simulated environments are displayed in Figures 6 and 7. The experimental results of the 4 scenarios are in Table 2.
Figure 6. Scenario 1's configuration (1 datacenter, 20 VMs, and 2 UserBases) and Scenario 2's configuration (1 datacenter, 5 VMs, and 3 UserBases)
Figure 7. Scenario 3's configuration (1 datacenter, 5 VMs, and 4 UserBases) and Scenario 4's configuration (2 datacenters, 50 + 5 VMs, and 5 UserBases)
Table 2. Experimental results of 4 Scenarios
In Table 2, we can see that the results of the ITA algorithm are always at the minimum level among the 5 algorithms, across all 4 scenarios with different parameters. We can see the stability of ITA despite the varying inputs, because the 4 selected scenarios were chosen randomly and are very different from each other.
Figure 8. Result simulation of Scenario 1
Figure 8 represents the 1st scenario. It shows that the ITA algorithm achieves much better results than the Throttled, TMA, Round-Robin, and Equally Load algorithms with the same input data. For the 3 parameters (Average Overall Response Time, Average Data Center Processing Time, and Average DC Request Servicing Time), the ITA algorithm is the best-performing algorithm. We can easily see that the Throttled, TMA, and ITA algorithms are dynamic load balancers, so they perform better than the group of static load balancers such as Round-Robin and Equally Load.
Figure 9. Result simulation of Scenario 2
Figure 9 represents the 2nd scenario. We can clearly see that ITA is among the best of the 5 algorithms: its Average Data Center Processing Time and Average DC Request Servicing Time have the smallest value (0.56) in the observed range (0.56 – 0.75). None of the 5 selected algorithms does better than ITA. In this case, the original Throttled Algorithm may perform worst because of the number of VMs; we set only 5 VMs for this case, and the disadvantages of Throttled become visible.
In scenarios 3 and 4, we do not see a big difference among the 5 algorithms because the configurations are more complicated: there are 4 and 5 UserBases with different geographical characteristics. All 5 algorithms then perform well, and the 3 parameters (Average Overall Response Time, Average Data Center Processing Time, Average DC Request Servicing Time) are roughly equal. But the datacenter cost shows that ITA is the best one. We can see more detail in Figure 10.
Figure 10. Total datacenter Cost of 4 Scenarios
Overall, Figure 10 shows that ITA yields the best cost savings for the cloud. ITA keeps the lowest cost in all 4 scenarios, which demonstrates the improvement it brings.
This part presented a simulation of the proposed ITA algorithm, its parameters, and the 4 given scenarios based on user request processing in a cloud environment, recording the response time, processing time, and cost of the cloud. Simulating ITA with different datacenter parameters and different numbers of virtual machines, under loads from 100 to 1500 requests, ITA reduces the cost of the cloud and allocates requests evenly across the virtual machines. Running the ITA simulation with a larger number of VMs on different datacenters also gives good results compared to the other algorithms, particularly in terms of cost. However, ITA shows its best performance when it serves requests from the same userbase.
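To make the mechanism behind these results concrete, the following is a minimal sketch (not the authors' implementation) of the ITA selection idea described in this paper: the load balancer keeps an index table of available VMs ordered by priority, and always assigns an incoming request to the first available VM in that order. Class and method names (`ITALoadBalancer`, `allocate`, `release`) are hypothetical, and "processing power" is assumed as the priority criterion.

```python
class ITALoadBalancer:
    """Sketch of ITA-style VM selection: scan a priority-ordered index table
    and pick the first available (non-busy) VM."""

    def __init__(self, vm_priorities):
        # vm_priorities: dict {vm_id: processing_power}; higher is better.
        # The index table is sorted once, by priority, in descending order.
        self.index_table = sorted(vm_priorities,
                                  key=vm_priorities.get, reverse=True)
        self.busy = set()

    def allocate(self):
        # First available VM in priority order gets the request.
        for vm_id in self.index_table:
            if vm_id not in self.busy:
                self.busy.add(vm_id)
                return vm_id
        return None  # all VMs busy; the request must wait in a queue

    def release(self, vm_id):
        # Called when the VM finishes processing its request.
        self.busy.discard(vm_id)


balancer = ITALoadBalancer({"vm0": 1000, "vm1": 2500, "vm2": 1500})
print(balancer.allocate())  # strongest VM is chosen first
```

Unlike the original Throttled algorithm, which scans the index table in a fixed arbitrary order, ordering the table by VM capability means the strongest machines absorb the most requests, which is consistent with the stable processing times and lower idle-resource costs reported above.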
5. CONCLUSIONS
This article presents an overview of the cloud and of popular and recent load balancing techniques used in cloud computing environments. It also studies the approach of improving cloud computing performance by improving the load balancer. Using the GUI tool of Cloud Analyst, we implement and simulate load balancing techniques including Round Robin, Throttled, TMA (Throttled Modified Algorithm), Equally Load, and the proposed ITA (Improved Throttled Algorithm). The results show that ITA meets its goals: improved response time, limited resource starvation, more requests routed to virtual machines with strong processing power, and reduced cost. It balances the load more effectively than popular algorithms such as Round Robin, Throttled, and Equally Load. The datacenter cost of this proposal is always lower than that of the others, which shows that ITA can be used in practice and has potential for further development. Because it is based on the Throttled algorithm, which is widely deployed in current networks, ITA can be applied to a real datacenter for testing, and it merits further research and implementation.
CONFLICTS OF INTEREST
The authors declare no conflict of interest.
ACKNOWLEDGMENTS
The authors would like to thank the Posts and Telecommunications Institute of Technology, Ho Chi Minh City Campus, Vietnam. We also greatly appreciate the support from Ho Chi Minh City Open University, Vietnam.
AUTHORS
Tran Cong Hung was born in Vietnam in 1961. He received a B.E. in Electronic and Telecommunication Engineering with first-class honors from Ho Chi Minh City University of Technology, Vietnam, in 1987, and a B.E. in Informatics and Computer Engineering from the same university in 1995. He received a Master of Engineering degree in Telecommunications Engineering from the postgraduate department of Hanoi University of Technology, Vietnam, in 1998, and his Ph.D. in Hanoi.
Hieu Le Ngoc has been working in the IT industry as an IT system architect since 2010. He is currently an IT lecturer at HCMC Open University (Vietnam). His major research interest is cloud computing and cloud efficiency for better service; his minor interests include education, IT education, language teaching (Chinese and English), business, and economics.
More info: https://www.researchgate.net/profile/Hieu-Le-24