The rapid growth in the number of cloud users and of the services offered to them increases the load on the servers in cloud datacenters. This has become a challenge for researchers and calls for effective load balancing techniques that not only balance resources across servers but also reduce the negative impact on end-user service. Current load balancing techniques have addressed several problems, such as: (i) load balancing after a server becomes overloaded; (ii) load balancing and load forecasting for resource allocation; (iii) improving the parameters that affect load balancing in the cloud. The study of these parameters is of great significance for improving system performance through load balancing, and from it more effective load balancing methods can be proposed. Therefore, in this paper we investigate several parameters that affect load balancing performance in cloud computing.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING (IJCCSA)
Load balancing techniques in cloud computing can be applied at different levels. There are two main levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is a policy for allocating physical servers to virtual machines, while load balancing on virtual machines is a policy for allocating resources from physical servers to the virtual machines running tasks or applications. Depending on whether the user's request concerns SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), an appropriate load balancing policy is applied. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different datacenters in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Max-min [6] and Throttled [8] algorithms, and the results show that our algorithm achieves better response times.
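The VM-level policies compared in this abstract share a common core: for each incoming task, pick a target VM based on current load. A minimal sketch of such an allocator follows; it is a simplified stand-in with hypothetical names, not the paper's actual algorithm.

```python
# Minimal VM-level load balancer: assign each task to the currently
# least-loaded VM. Illustrative sketch only.

def allocate(tasks, vm_loads):
    """tasks: list of task lengths; vm_loads: current load per VM (mutated)."""
    assignment = []
    for length in tasks:
        vm = min(range(len(vm_loads)), key=lambda i: vm_loads[i])  # least-loaded VM
        vm_loads[vm] += length
        assignment.append(vm)
    return assignment

loads = [0, 0, 0]
print(allocate([5, 3, 8, 2], loads))  # → [0, 1, 2, 1]
print(loads)                          # → [5, 5, 8]
```

Throttled-style policies additionally cap how many tasks a VM may hold; this greedy variant only minimizes the instantaneous load difference.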
PROCESS OF LOAD BALANCING IN CLOUD COMPUTING USING GENETIC ALGORITHM (ECIJ)
In the current generation, cloud computing has become a powerful and prominent technology. IT companies have already changed the way they buy and design hardware because of it, and it is a high-utility technology that can also make software more attractive. Load balancing is one of the most active research topics in cloud technology today. In this paper, the topic of load balancing in cloud computing is surveyed: various proposed algorithms are examined and compared to give an overview of the latest work in this research area, and a genetic algorithm is presented as a flexible approach to achieving balance.
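A genetic algorithm for load balancing typically encodes a task-to-VM mapping as a chromosome and evolves it to minimize makespan. The toy sketch below illustrates that idea with a one-point crossover and random mutation; the encoding, operators and fitness here are illustrative assumptions, not the paper's exact GA.

```python
import random

# Toy genetic algorithm for load balancing: evolve a task->VM mapping
# that minimizes makespan. Illustrative only.

def makespan(chrom, tasks, n_vms):
    loads = [0.0] * n_vms
    for t, vm in zip(tasks, chrom):
        loads[vm] += t
    return max(loads)

def evolve(tasks, n_vms, pop_size=30, gens=100, seed=1):
    rnd = random.Random(seed)
    pop = [[rnd.randrange(n_vms) for _ in tasks] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: makespan(c, tasks, n_vms))
        survivors = pop[: pop_size // 2]          # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rnd.sample(survivors, 2)
            cut = rnd.randrange(1, len(tasks))
            child = a[:cut] + b[cut:]             # one-point crossover
            if rnd.random() < 0.2:                # mutation
                child[rnd.randrange(len(tasks))] = rnd.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: makespan(c, tasks, n_vms))

tasks = [4, 8, 1, 3, 5, 2]
best = evolve(tasks, n_vms=3)
print(makespan(best, tasks, 3))
```

With these tasks the best achievable makespan is 8 (the largest single task), and a seeded run converges at or near that bound.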
Dynamic Cloud Partitioning and Load Balancing in Cloud (Shyam Hajare)
Cloud computing is an emerging and transformational paradigm in the field of information technology. It focuses mainly on providing services on demand; resource allocation and secure data storage are among them. Storing huge amounts of data and accessing it through such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers both static and dynamic parameters can improve the performance of the cloud service provider and increase user satisfaction. Implementing the model provides a dynamic way of selecting resources depending on the state of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. The model provides an effective load balancing algorithm over the cloud environment, better refresh-time methods and better load status evaluation methods.
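The load status evaluation such partitioning models rely on can be sketched as classifying each partition as idle, normal or overloaded against two thresholds, then routing requests to idle partitions first. The threshold values and structure below are illustrative assumptions, not the paper's definitions.

```python
# Sketch of load-status evaluation for cloud partitioning: classify each
# partition by its load ratio, then prefer idle partitions for new requests.
# Threshold values are illustrative.

IDLE_THRESHOLD = 0.2
OVERLOAD_THRESHOLD = 0.8

def load_status(used, capacity):
    ratio = used / capacity
    if ratio < IDLE_THRESHOLD:
        return "idle"
    if ratio > OVERLOAD_THRESHOLD:
        return "overloaded"
    return "normal"

def pick_partition(partitions):
    """Prefer idle partitions, then the least-loaded normal one."""
    idle = [p for p in partitions if load_status(*p["load"]) == "idle"]
    if idle:
        return idle[0]["name"]
    normal = [p for p in partitions if load_status(*p["load"]) == "normal"]
    if normal:
        return min(normal, key=lambda p: p["load"][0] / p["load"][1])["name"]
    return None  # all overloaded: caller should queue or rebalance

parts = [{"name": "A", "load": (90, 100)},
         {"name": "B", "load": (50, 100)},
         {"name": "C", "load": (10, 100)}]
print(pick_partition(parts))  # → C
```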
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZED RESPONSE TIME (IJCCSA)
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, so it has become one of the important QoS measures. Explored further, response time can drive solutions for distributing and load balancing resources more efficiently, which is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the upcoming response time and thus resolve resource allocation more effectively using a threshold value. The experimental results are promising and valuable for load balancing with predicted response times, showing that prediction is a worthwhile direction for load balancing.
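APRA's core step, forecasting the next response time and comparing it to a threshold, can be sketched as follows. A naive moving-average forecaster stands in for the full ARIMA(p,d,q) model the paper uses; the names and threshold are hypothetical.

```python
# Simplified stand-in for APRA's prediction step: forecast the next
# response time from recent history, then flag the VM if the forecast
# crosses a threshold. A real ARIMA model would replace the naive
# moving-average forecast used here.

def forecast_next(history, window=3):
    recent = history[-window:]
    return sum(recent) / len(recent)

def should_offload(history, threshold_ms):
    return forecast_next(history) > threshold_ms

rts = [120, 130, 150, 180, 210]   # observed response times (ms)
print(forecast_next(rts))          # → 180.0
print(should_offload(rts, 160))    # → True
```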
Cloud computing: Review over various scheduling algorithms (IJEEE)
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computing resources to end users and customers. Due to the complexity of the computing environment, the use of elaborate logic and task scheduling algorithms has increased, which makes operating a cloud network costly. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable to the cloud computing environment. In this paper, we review recently proposed research on energy-saving scheduling techniques, and we also study various scheduling algorithms and the issues related to them in cloud computing.
Cloud computing is an emerging technology. It processes huge amounts of data, so the scheduling mechanism plays a vital role in it. The proposed protocol is designed to minimize switching time, improve resource utilization and improve server performance and throughput. The method is based on scheduling jobs in the cloud so as to resolve the drawbacks of existing protocols. Priorities are assigned to jobs to obtain better performance while minimizing waiting time and switching time. Every effort has been made to manage job scheduling so that the drawbacks of existing protocols are resolved and the efficiency and throughput of the server are improved.
Task scheduling is an important aspect of improving resource utilization in cloud computing. This paper proposes a divide-and-conquer based approach to the heterogeneous earliest finish time (HEFT) algorithm. The proposed system works in two phases. In the first phase, it ranks the incoming tasks with respect to their size. In the second phase, it assigns and manages each task on a virtual machine with consideration of that machine's idle time, which helps achieve more effective resource utilization in cloud computing. Experimental results using the CyberShake scientific workflow show that the proposed divide-and-conquer HEFT (DCHEFT) performs better than HEFT in terms of task finish time and response time.
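The two-phase idea behind a HEFT-style scheduler can be sketched as: rank tasks by size, then assign each to the VM that would finish it earliest given that VM's current ready time. This is a minimal illustration; the paper's ranking and divide-and-conquer details are richer.

```python
# Sketch of a two-phase HEFT-style scheduler: (1) rank tasks by size,
# (2) assign each to the VM with the earliest finish time, accounting
# for when each VM becomes free. Illustrative only.

def heft_sketch(tasks, vm_speeds):
    ranked = sorted(tasks, reverse=True)          # phase 1: rank by size
    ready = [0.0] * len(vm_speeds)                # when each VM is next free
    schedule = []
    for size in ranked:                           # phase 2: earliest finish time
        finish = [ready[i] + size / vm_speeds[i] for i in range(len(vm_speeds))]
        vm = finish.index(min(finish))
        ready[vm] = finish[vm]
        schedule.append((size, vm))
    return schedule, max(ready)

sched, makespan = heft_sketch([10, 4, 7, 2], [1.0, 2.0])
print(makespan)  # → 8.0
```

Here VM 1 is twice as fast as VM 0; the largest task goes to the fast VM and the schedule finishes at time 8.0.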
Public Cloud Partition Using Load Status Evaluation and Cloud Division Rules (IJSRD)
With the growth of cloud computing, load balancing has an important impact on performance, and cloud computing efficiency depends on a good load balancer. In many situations the load balancer performs cloud partitioning, and different situations call for different public cloud partitioning strategies. In this paper we partition the public cloud using two mechanisms: load status evaluation and cloud division rules. Load status is evaluated from the number of cloudlets arriving at a datacenter, while cloud division rules are based on the geographical location a cloudlet comes from. By partitioning the public cloud on the basis of geographical location, we improve the performance of load balancing in cloud computing. We implement the proposed system with the help of the CloudSim 3.0 simulator.
In this research paper, we model a local broker policy based on workload profiles in a network cloud, using workload-based applications. To handle these applications, two scheduling policies are used: Random Non-overlap and Workload-profile based. We compare the two scheduling policies on three parameters: mean execution time, mean response time and mean waiting time. The workload-profile based policy gave better results than the Random Non-overlap policy in terms of these time performance parameters.
Time Efficient VM Allocation using KD-Tree Approach in Cloud Server Environment (rahulmonikasharma)
Cloud computing is a new and quickly evolving model, with new capabilities being announced frequently. With the growth of users on the cloud and the expanding variety of services, complete resource allocation with minimum latency for virtual machines is necessary. Allocating virtual cloud computing resources to cloud users is a key technical issue because user demand is dynamic in nature, which in turn requires dynamic resource allocation. To improve allocation, a correctly balanced scheduling algorithm for the resource allocation technique is needed. The aim of this work is to allocate resources to scientific experiment requests coming from multiple users, where customized virtual machines (VMs) are launched on suitable hosts available in the cloud. Properly programmed scheduling in the cloud is therefore extremely important, and it is significant to develop efficient scheduling methods for appropriately allocating VMs to physical resources. The proposed formulation reduces the time complexity to O(log n) by adopting a KD-Tree.
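The KD-Tree idea can be sketched by indexing hosts by their free (CPU, RAM) capacity and finding a host near the requested demand in O(log n) on average. This is a simplified illustration of the data structure, not the paper's allocation algorithm; the host figures are invented.

```python
# Sketch of KD-Tree based VM placement: index hosts by free (CPU, RAM)
# capacity, then find the host nearest a requested demand.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    if node is None:
        return best
    d = sum((a - b) ** 2 for a, b in zip(node["point"], target))
    if best is None or d < best[0]:
        best = (d, node["point"])
    axis = node["axis"]
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if diff ** 2 < best[0]:              # other side may still hold a closer host
        best = nearest(far, target, best)
    return best

hosts = [(4, 8), (8, 16), (2, 4), (16, 32)]  # free (CPU cores, RAM GB) per host
print(nearest(build(hosts), (7, 15))[1])     # → (8, 16)
```

In practice a library structure such as `scipy.spatial.KDTree` would replace this hand-rolled version.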
A survey of various scheduling algorithms in cloud computing environment (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Elastic neural network method for load prediction in cloud computing grid (IJECE, IAES)
Cloud computing still has no standard definition, but it concerns on-demand delivery of resources and services over the Internet or a network. It has gained much popularity in the last few years due to rapid growth in technology and the Internet. Many technical challenges within cloud computing are yet to be tackled, such as virtual machine migration, server association, fault tolerance, scalability and availability. This research is chiefly concerned with balancing server load: spreading load between the nodes of a distributed system helps utilize resources, improve job response time, enhance scalability and raise user satisfaction. A load rebalancing algorithm with dynamic resource allocation is presented to adapt to the changing needs of a cloud environment. The research presents a modified elastic adaptive neural network (EANN) with modified adaptive error smoothing, building an evolving system to predict virtual machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using a cloud simulator and made comparisons with the approaches suggested in previous work. The experimental results show that the suggested method significantly outperforms the existing approaches.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs (inventionjournals)
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives closed-form solutions for the load fractions to be assigned to each machine, taking computation and energy cost into account. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit for the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
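As background, the classical DLT closed form in its simplest special case (n machines, negligible communication delay; the paper's cost-aware solution is more general) chooses the load fractions so that all machines finish simultaneously:

```latex
\begin{align}
\alpha_1 w_1 &= \alpha_2 w_2 = \dots = \alpha_n w_n,
\qquad \sum_{i=1}^{n} \alpha_i = 1, \\
\Rightarrow \quad \alpha_i &= \frac{1/w_i}{\sum_{j=1}^{n} 1/w_j},
\end{align}
```

where $\alpha_i$ is the fraction of the divisible load assigned to machine $i$ and $w_i$ is the time that machine would need to process the entire load, so faster machines receive proportionally larger fractions.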
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Load Balancing in Auto Scaling Enabled Cloud Environments (neirew J)
Cloud computing is growing in popularity and is continuously being updated with improvements. Auto scaling is one such improvement, helping to maintain the availability of a customer's subscribed cloud system. How an auto scaling mechanism interacts with the many existing mechanisms in a cloud system is an issue that needs to be considered, because a new part added to a stable system rarely comes without drawbacks. In this paper, we consider how existing load balancing and auto scaling impact each other. For this purpose, we modeled a cloud system with an auto scaler and a load balancer and implemented simulations based on the constructed model. Based on the results of the computer simulations, we make proposals about choosing load balancers for a subscribed cloud system with an auto scaling service.
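The interaction studied here can be modeled minimally as a threshold-based auto scaler adding or removing VMs while a round-robin load balancer spreads requests. The class below is a toy model with invented thresholds and capacities, purely to illustrate how the two mechanisms can work against each other (the round-robin pointer keeps hitting old VMs even after a scale-out).

```python
# Toy model: a threshold-based auto scaler plus a round-robin balancer.
# Thresholds and VM capacity are illustrative assumptions.

class Cluster:
    def __init__(self, capacity_per_vm=10):
        self.vms = [0]             # current load per VM
        self.cap = capacity_per_vm
        self.rr = 0                # round-robin pointer

    def dispatch(self, load):      # round-robin load balancer
        vm = self.rr % len(self.vms)
        self.vms[vm] += load
        self.rr += 1

    def autoscale(self):           # scale out/in on average utilization
        util = sum(self.vms) / (self.cap * len(self.vms))
        if util > 0.8:
            self.vms.append(0)     # scale out: add an empty VM
        elif util < 0.2 and len(self.vms) > 1:
            self.vms.pop()         # scale in: remove one VM

c = Cluster()
for load in [4, 5, 3, 2]:
    c.dispatch(load)
    c.autoscale()
print(len(c.vms), c.vms)  # → 2 [12, 2]
```

Note how the newly added VM receives little of the load, a simple instance of the balancer/auto-scaler interplay the paper examines.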
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing provides software, infrastructure and platform as a service to clients on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey covers a comprehensive range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate view of grid, cloud and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Cloud computing is the next generation of computation; in all probability, people will have everything they need on the cloud. Cloud computing provides resources to clients on demand; the resources may be software or hardware. Cloud computing architectures are distributed and parallel, and serve the needs of multiple clients in various situations. This distributed design deploys resources distributively to deliver services efficiently to users in different geographical regions. Clients in a distributed environment generate requests randomly at any processor, and the major drawback of this randomness relates to task assignment: unequal task assignment to processors creates imbalance, i.e. some processors are overloaded while others are underloaded. The objective of load balancing is to transfer load from overloaded processes to underloaded ones transparently. Load balancing is one of the central issues in cloud computing: to achieve high performance, minimum response time and a high resource utilization ratio, tasks must be transferred between nodes in the cloud network, and load balancing techniques are used to distribute tasks from overloaded nodes to underloaded or idle ones. In the following sections we discuss cloud computing, load balancing techniques and the proposed work of our load balancing system. The proposed load balancing algorithm is simulated on the Cloud Analyst toolkit. Performance is analyzed on the parameters of overall response time, data transfer, average data center servicing time and total cost of usage. Results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load and Throttled. On the basis of the case studies performed, the results show more data transfer with minimum response time.
Application of selective algorithm for effective resource provisioning in clo... (IJCCSA)
The continued demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. The cloud computing environment involves high-cost infrastructure on the one hand and requires large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively. In this paper we discuss a selective algorithm for allocating cloud resources to end users on demand. The algorithm is based on min-min and max-min, two conventional task scheduling algorithms, and uses certain heuristics to select between them so that the overall makespan of tasks on the machines is minimized. Tasks are scheduled on machines in either a space-shared or a time-shared manner. We evaluate our provisioning heuristics using a cloud simulator called CloudSim, and also compare our approach with the statistics obtained when resources are provisioned in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of tasks on a given set of VMs decreases significantly in different scenarios.
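A selective heuristic between min-min and max-min can be sketched as: if a few large tasks dominate the batch, schedule largest-first (max-min flavor) so they do not end up serialized late; otherwise schedule smallest-first (min-min flavor). The switching rule below (compare the largest task to twice the mean) is an illustrative assumption, not the paper's exact heuristic, and the per-task assignment is a simple greedy to the least-loaded VM.

```python
# Sketch of a selective min-min / max-min heuristic on identical VMs.
# The switching rule and greedy assignment are illustrative assumptions.

def schedule(tasks, n_vms):
    big_tasks_dominate = max(tasks) > 2 * (sum(tasks) / len(tasks))
    order = sorted(tasks, reverse=big_tasks_dominate)  # True: largest first
    ready = [0.0] * n_vms
    for t in order:
        vm = ready.index(min(ready))   # greedy: least-loaded VM
        ready[vm] += t
    return max(ready)                  # makespan

print(schedule([1, 1, 1, 12], 2))  # one big task dominates → largest-first
print(schedule([3, 4, 5, 4], 2))   # balanced batch → smallest-first
```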
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (IJCCSA)
The fast development of information and communication technology has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is very important to achieving that goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one challenge is selecting the placement approach for the migrated virtual machines on appropriate nodes. In this paper, an approach to reducing energy consumption in cloud data centers is proposed. It adapts the harmony search algorithm to migrate virtual machines, performing placement by sorting the nodes and virtual machines in descending order of priority, where priority is calculated from the workload. The proposed approach is simulated, and the evaluation results show a reduction in virtual machine migrations, increased efficiency and reduced energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
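The placement step described in this abstract, sorting nodes and migrated VMs by workload-based priority in descending order and placing each VM on the first node with room, can be sketched as below. The priority definition and capacities are illustrative assumptions; the paper wraps this step inside a harmony search loop.

```python
# Sketch of priority-sorted VM placement: biggest workloads placed first,
# on the node with the most free capacity that fits. Illustrative only.

def place(vms, nodes):
    """vms: {name: workload}; nodes: {name: free_capacity} (mutated)."""
    placement = {}
    for vm, load in sorted(vms.items(), key=lambda kv: -kv[1]):
        for node, _ in sorted(nodes.items(), key=lambda kv: -kv[1]):
            if nodes[node] >= load:
                placement[vm] = node
                nodes[node] -= load
                break
    return placement

vms = {"vm1": 6, "vm2": 3, "vm3": 5}
nodes = {"n1": 10, "n2": 8}
print(place(vms, nodes))  # → {'vm1': 'n1', 'vm3': 'n2', 'vm2': 'n1'}
```

Placing large workloads first reduces the chance that a big VM is left without a node, which in turn avoids extra migrations.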
Performance comparison of AODV and OLSR using 802.11a and DSRC (802.11p) pro... (IJCNC Journal)
A Vehicular Ad Hoc Network (VANET) is a network formed purely among vehicles, without any communication infrastructure such as base stations or access points. Frequent topological change due to high mobility is one of the main issues in VANETs. In this paper we evaluate the Ad-hoc On-Demand Distance Vector (AODV) and Optimized Link State Routing (OLSR) routing protocols using 802.11a and 802.11p in a realistic urban scenario. For the comparison, we chose five performance metrics: path availability, end-to-end delay, number of created paths, path length and path duration. Simulation results show that, for most of the metrics evaluated, OLSR outperforms AODV when using 802.11p, and that 802.11p is more efficient in urban VANETs.
A survey of various scheduling algorithm in cloud computing environment
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Elastic neural network method for load prediction in cloud computing grid
Cloud computing still has no standard definition, yet it is concerned with on-demand delivery of resources and services over the Internet or a network. It has gained much popularity in the last few years due to rapid growth in technology and the Internet. Many technical challenges within cloud computing are yet to be tackled, such as virtual machine migration, server association, fault tolerance, scalability, and availability. What concerns us most in this research is balancing server load: the way of spreading load between the various nodes that exist in any distributed system, which helps utilize resources, improve job response time, and enhance scalability and user satisfaction. A load rebalancing algorithm with dynamic resource allocation is presented to adapt to the changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive error smoothing, to build an evolving system that predicts virtual machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using a cloud simulator and made comparisons with approaches suggested in previous work. The experimental results show that the suggested method significantly outperforms the existing approaches.
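The prediction step can be illustrated independently of the neural network itself. The sketch below is a deliberately simplified stand-in for the paper's EANN: one-step-ahead VM load forecasting by exponential smoothing, with a hypothetical smoothing constant.

```python
def predict_load(history, alpha=0.5):
    """One-step-ahead VM load forecast by exponential smoothing.
    A simplified stand-in for the paper's elastic adaptive neural
    network; alpha is a hypothetical smoothing constant."""
    forecast = history[0]
    for observed in history[1:]:
        # Blend the newly observed load with the running forecast.
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast

print(predict_load([40, 60, 50, 70]))  # → 60.0
```

A load balancer would feed each VM's recent utilization samples into such a predictor and migrate work away from VMs whose forecast exceeds a threshold, which is the role the EANN plays in the paper.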
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs
The important challenge in a cloud computing environment is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives a closed-form solution for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule the jobs in such a way that the cloud provider gains maximum benefit for the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
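Under the simplest DLT setting (all machines start together, communication and energy costs ignored), the closed-form load split makes every machine finish at the same moment, so each fraction is inversely proportional to that machine's time per unit load. A minimal sketch of that baseline, not the paper's full cost model:

```python
def load_fractions(unit_times):
    """Closed-form load split under a simplified Divisible Load Theory
    model: all machines finish simultaneously, so each fraction is
    inversely proportional to its per-unit computation time.
    (Hypothetical simplification: communication/energy costs omitted.)"""
    inv = [1.0 / t for t in unit_times]
    total = sum(inv)
    return [x / total for x in inv]

print(load_fractions([1.0, 2.0, 4.0]))  # faster machines get larger shares
```

For per-unit times 1, 2 and 4, the fractions come out 4/7, 2/7 and 1/7, so each machine's processing time is identical (4/7 of the job at speed 1 takes exactly as long as 1/7 at a quarter of the speed).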
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Load Balancing in Auto Scaling Enabled Cloud Environments
Cloud computing is growing in popularity and is continuously updated with improvements. Auto scaling is one such improvement; it helps maintain the availability of a customer's subscribed cloud system. How an auto scaling mechanism interacts with the many existing mechanisms of a cloud system is an issue that needs to be considered, because a new component rarely comes free of drawbacks when added to a stable system. In this paper, we consider how existing load balancing and auto scaling impact each other. For this purpose, we have modeled a cloud system with an auto scaler and a load balancer and implemented simulations based on the constructed model. Based on the results of these computer simulations, we propose guidance on choosing load balancers for a subscribed cloud system with an auto scaling service.
A Review on Scheduling in Cloud Computing
Cloud computing provides software, infrastructure and platform as a service on a pay-per-use basis, according to client requirements. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables the various cloud services to support framework implementation. This paper presents a far-reaching survey of the different types of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling. The survey gives an elaborate idea of grid, cloud and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Cloud computing is the next generation of computation; in all probability, people will have everything they need on the cloud. Cloud computing provides resources, both software and hardware, to clients on demand. Cloud computing architectures are distributed and parallel, and serve the requirements of multiple clients in various situations. This distributed design deploys resources distributively to deliver services efficiently to users in different geographical locations. Clients in a distributed setting generate requests randomly at any processor, so the major disadvantage of this randomness relates to task assignment: unequal task assignment to the processors creates imbalance, i.e., some processors are overloaded while many are underloaded. The objective of load balancing, one of the central issues in cloud computing, is to transfer load from overloaded processes to underloaded ones transparently. To achieve high performance, minimum response time and a high resource utilization ratio, we need to transfer tasks between nodes in the cloud network; a load balancing technique is used to distribute tasks from overloaded nodes to underloaded or idle nodes. In the following sections we discuss cloud computing, load balancing techniques, and the proposed work of our load balancing system. The proposed load balancing algorithm is simulated with the Cloud Analyst toolkit. Performance is analyzed on the parameters of overall response time, data transfer, average datacenter servicing time and total cost of usage. Results are compared with three existing load balancing algorithms, namely Round Robin, Equally Spread Current Execution Load, and Throttled. Results of the case studies performed show more data transfer with minimum response time.
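Of the three baselines named above, Round Robin is the simplest and can be sketched in a few lines: requests are handed to VMs in fixed rotation, ignoring current load. Names below are illustrative, not from any particular simulator.

```python
from itertools import cycle

def round_robin_assign(requests, vms):
    """Round Robin baseline: hand requests to VMs in fixed rotation,
    with no regard to each VM's current load."""
    rotation = cycle(vms)                      # endless vm0, vm1, ..., vm0, ...
    return [(req, next(rotation)) for req in requests]

print(round_robin_assign(["r1", "r2", "r3"], ["vm0", "vm1"]))
```

Its weakness, which Equally Spread Current Execution Load and Throttled address, is visible immediately: if r1 and r3 are heavy, vm0 receives both regardless of how busy it already is.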
Application of selective algorithm for effective resource provisioning in clo...
Modern-day continued demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. The cloud computing environment involves high-cost infrastructure on one hand and needs large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selective algorithm for on-demand allocation of cloud resources to end users. This algorithm is based on min-min and max-min, two conventional task scheduling algorithms. The selective algorithm uses certain heuristics to select between the two so that the overall makespan of tasks on the machines is minimized. The tasks are scheduled on machines in either a space-shared or time-shared manner. We evaluate our provisioning heuristics using a cloud simulator called CloudSim, and also compare our approach to the statistics obtained when provisioning was done in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of tasks on a given set of VMs decreases significantly in different scenarios.
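The two heuristics the selective algorithm chooses between can be sketched as follows. This is an illustrative implementation of textbook min-min and max-min over estimated completion times, not the authors' exact selection rule.

```python
def schedule(tasks, vm_speeds, pick_max=False):
    """Min-min (default) or max-min (pick_max=True) scheduling sketch.
    Each round computes every unscheduled task's earliest completion
    time over all VMs, then commits the task with the smallest
    (min-min) or largest (max-min) such time. Returns the makespan."""
    ready = [0.0] * len(vm_speeds)      # current finish time of each VM
    remaining = list(tasks)
    while remaining:
        best = None                     # (completion_time, task, vm)
        for t in remaining:
            # Earliest completion time of task t across all VMs.
            c, vm = min((ready[v] + t / s, v) for v, s in enumerate(vm_speeds))
            if best is None or (c > best[0] if pick_max else c < best[0]):
                best = (c, t, vm)
        c, t, vm = best
        ready[vm] = c                   # commit the chosen task to its VM
        remaining.remove(t)
    return max(ready)                   # makespan

print(schedule([4, 2, 8], [1, 2]))                 # min-min makespan
print(schedule([4, 2, 8], [1, 2], pick_max=True))  # max-min makespan
```

With tasks of size 4, 2 and 8 on VMs of speed 1 and 2, min-min yields a makespan of 7 while max-min yields 5: exactly the kind of gap a selective heuristic can exploit by choosing per workload.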
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
Fast development of information and communication technology has established a new computational paradigm known as cloud computing. One of the main issues considered by cloud infrastructure providers is to
minimize the costs and maximize the profitability. Energy management in the cloud data centers is very
important to achieve such goal. Energy consumption can be reduced either by releasing idle nodes or by
reducing the virtual machines migrations. To do the latter, one of the challenges is to select the placement
approach of the migrated virtual machines on the appropriate node. In this paper, an approach to reduce
the energy consumption in cloud data centers is proposed. This approach adapts harmony search
algorithm to migrate the virtual machines. It performs the placement by sorting the nodes and virtual
machines based on their priority in descending order. The priority is calculated based on the workload.
The proposed approach is simulated. The evaluation results show the reduction in the virtual machine
migrations, the increase of efficiency and the reduction of energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
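The sorting step described above, with nodes and virtual machines ordered by workload-based priority in descending order, can be sketched as a simple placement pass. The harmony search refinement itself is omitted, and all names below are illustrative.

```python
def place_vms(vm_loads, node_capacities):
    """Priority-ordered placement sketch echoing the paper's sorting
    step: heaviest VMs first, each onto the node with the most free
    capacity. Returns {vm_index: node_index}, or None if infeasible.
    (The harmony search optimization of the paper is not modeled.)"""
    free = list(node_capacities)
    placement = {}
    for vm in sorted(range(len(vm_loads)), key=lambda i: -vm_loads[i]):
        node = max(range(len(free)), key=lambda j: free[j])
        if free[node] < vm_loads[vm]:
            return None                 # even the freest node cannot host it
        free[node] -= vm_loads[vm]
        placement[vm] = node
    return placement

print(place_vms([3, 5, 2], [6, 6]))
```

Placing the heaviest VMs first tends to pack nodes tightly, which is what lets idle nodes be released and energy consumption reduced.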
Performance comparison of aodv and olsr using 802.11 a and dsrc (802.11p) pro...
A Vehicular Ad Hoc Network (VANET) is a network formed purely among vehicles without presence of any
communication infrastructure as base stations and/or access point. Frequent topological changes due to
high mobility is one of the main issues in VANETs. In this paper we evaluate Ad-hoc On-Demand Distance
Vector (AODV) and Optimized Link State Routing (OLSR) routing protocols using 802.11a and 802.11p in
a realistic urban scenario. For this comparison, we chose five performance metrics: Path Availability, End-to-End Delay, Number of Created Paths, Path Length and Path Duration. Simulation results show that, for most of the metrics evaluated, OLSR outperforms AODV when using 802.11p, and that 802.11p is more efficient in urban VANETs.
Centrality-Based Network Coder Placement For Peer-To-Peer Content Distribution
Network coding has been shown to achieve optimal multicast throughput, yet at an expensive computation
cost: every node in the network has to code. Interested in minimizing resource consumption of network
coding while maintaining its performance, in this paper, we propose a practical network coder placement
algorithm which achieves comparable content distribution time as network coding, and at the same time,
substantially reduces the number of network coders compared to a full network coding solution in which all
peers have to encode, i.e. become encoders. Our algorithm is derived from two key elements. First, it is
based on the insight that coding at upstream peers eliminates information duplication to downstream peers,
which results in efficient content distribution. Second, our placement strategy exploits centrality
characteristics of the network topology to quickly determine key positions to place encoders. Performance
evaluation using various topology and algorithm parameters confirms the effectiveness of our proposed
method.
A multi tenant platform for sms integrated services
This paper presents the design and development of a Linux based infrastructure and platform for a
wholesale service provider to offer SMS integrated services to retail providers for resale. The multi-tenant
platform, capable of providing multiple types of SMS integrated services, was designed and built mainly
with off-the-shelf components and open source software. The platform is highly reliable and flexible,
enabling fast provision of customisable services by integrating SMS, voice, VoIP, and web services in
innovative ways. This paper describes the design of the platform and presents four major application areas.
Qos group based optimal retransmission medium access protocol for wireless se...
This paper presents a Group Based Optimal Retransmission Medium Access (GORMA) protocol that combines collision avoidance (CA) and energy management for low-cost, short-range, low-data-rate and low-energy sensor node applications in environment monitoring, agriculture, industrial plants, etc. GORMA focuses on an efficient MAC protocol that provides autonomous Quality of Service (QoS) to sensor nodes in a one-hop QoS retransmission group and in two QoS groups in WSNs where the source nodes have no receiver circuits: they can only transmit data to a sink node and cannot receive any control signals from it. GORMA provides QoS to nodes that work independently at predefined times by allowing them to transmit each packet an optimal number of times within a given period. Our simulation results show that GORMA maximizes the delivery probability of the one-hop QoS group and the two QoS groups while minimizing energy consumption.
Reducing the Peak to Average Power Ratio of Mimo-Ofdm Systems
In this paper, we proposed a particle swarm optimization (PSO) based partial transmit sequence (PTS)
technique in order to achieve the lowest Peak-to-Average Power Ratio(PAPR) in Multiple Input Multiple
Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems. Our approach consists of
applying the PSO based PTS on each antenna of the system in order to find the optimal phase factors,
which is a straightforward method to get the minimum PAPR in such a system. The simulation results
demonstrate that the PSO based PTS algorithm when applied to MIMO-OFDM systems with a wide range
of phase factors, tends to give a high performance. In addition, there is no need to increase the number of
particles of the PSO algorithm to enhance the performance of the system. As a result of this, the complexity
of finding the minimum PAPR is kept at a reasonable level.
A Compact Dual Band Dielectric Resonator Antenna For Wireless Applications
This paper presents the design of a dual band rectangular Dielectric Resonator Antenna (DRA)
coupled to narrow slot aperture that is fed by microstrip line. The fundamental TE111 mode and
higher-order TE113 mode are excited with their resonant frequencies respectively. These
frequencies can be controlled by changing the DRA dimensions. A dielectric resonator with high
permittivity is used to miniaturize the global structure. The proposed antenna is designed to have
dual band operation suitable for both DCS (1710 - 1880 MHz) and WLAN (2400 - 2484 MHz)
applications. The return loss, radiation pattern and gain of the proposed antenna are evaluated.
Reasonable agreement between simulation and experimental results is obtained.
Mesh pull backup parent pools for video-on-demand multicast trees
Resilient multicast is a challenging issue for overlay trees, particularly because of high churn. In this work, we propose a mechanism that allows scalable video multicast. While the regular operation involves tree-push of the video, any node that loses its parent on the tree solicits video from a predetermined backup set of nodes in a mesh-pull fashion. The main idea is to allocate less bandwidth for backup to improve
bandwidth utilization while maintaining the best possible video quality. The choice of essential design
parameters are studied together with seamlessness of the streaming under variety of fault scenarios.
Simulation results indicate the optimality of the proposed approach as far as resiliency, bandwidth
utilization, delay and video quality are concerned.
Analytical Modelling of Localized P2P Streaming Systems under NAT Consideration
NAT was designed to work with the Internet's client-server structure. The emergence of Peer-to-Peer (P2P) networks and applications revealed an incompatibility between P2P applications and NAT. Many methods have been developed and implemented to solve connectivity between peers behind NAT devices. Nevertheless, various NAT types cannot communicate with one another. In this work, we are going to
study the impact of NAT types on the start-up delay time of peers in P2P streaming systems. We will
demonstrate the ability of NATing to expel peers in P2P live streaming systems. A new neighbour selecting
algorithm will be proposed. This algorithm will utilize NAT-types configurations as a parameter. We have
utilized NS2 simulator to show the performance of the new algorithm in increasing the connectivity,
reducing the number of expelled peers and implementing of locality.
Probability Density Functions of the Packet Length for Computer Networks With...
The research on Internet traffic classification and identification, with application on prevention of attacks
and intrusions, increased considerably in the past years. Strategies based on statistical characteristics of
the Internet traffic, that use parameters such as packet length (size) and inter-arrival time and their
probability density functions, are popular. This paper presents a new statistical modeling for packet length,
which shows that it can be modeled using a probability density function that involves a normal or a beta
distribution, according to the traffic generated by the users. The proposed functions have parameters that depend on the type of traffic and can be used as part of an Internet traffic classification and identification strategy. The models can be used to compare, simulate and estimate computer network traffic, as well as to generate synthetic traffic and estimate the packet processing capacity of Internet routers.
A FRAMEWORK FOR SOFTWARE-AS-A-SERVICE SELECTION AND PROVISIONING
As cloud computing is increasingly transforming the information technology landscape, organizations and
businesses are exhibiting strong interest in Software-as-a-Service (SaaS) offerings that can help them
increase business agility and reduce their operational costs. They increasingly demand services that can
meet their functional and non-functional requirements. Given the plethora and the variety of SaaS
offerings, we propose, in this paper, a framework for SaaS provisioning, which relies on brokered Service
Level agreements (SLAs), between service consumers and SaaS providers. The Cloud Service Broker (CSB)
helps service consumers find the right SaaS providers that can fulfil their functional and non-functional
requirements. The proposed selection algorithm ranks potential SaaS providers by matching their offerings
against the requirements of the service consumer using an aggregate utility function. Furthermore, the CSB
is in charge of conducting SLA negotiation with selected SaaS providers, on behalf of service consumers,
and performing SLA compliance monitoring.
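The ranking step of the broker can be sketched as a weighted aggregate utility over normalized attribute scores. The attribute names, weights and scores below are hypothetical, not taken from the paper.

```python
def rank_providers(offerings, weights):
    """Rank SaaS providers by an aggregate utility function, as the
    broker described above does. Each offering is (name, scores) where
    scores maps attribute -> normalized value in [0, 1]; weights maps
    attribute -> importance. Both are illustrative placeholders."""
    def utility(scores):
        # Weighted sum over the consumer's non-functional requirements.
        return sum(weights[a] * scores.get(a, 0.0) for a in weights)
    return sorted(offerings, key=lambda p: utility(p[1]), reverse=True)

providers = [
    ("SaaS-A", {"availability": 0.9, "price": 0.4}),
    ("SaaS-B", {"availability": 0.7, "price": 0.9}),
]
weights = {"availability": 0.6, "price": 0.4}
print(rank_providers(providers, weights)[0][0])  # → SaaS-B
```

With these weights, SaaS-B's utility (0.42 + 0.36 = 0.78) edges out SaaS-A's (0.54 + 0.16 = 0.70), so the broker would open SLA negotiation with SaaS-B first.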
Analysis of Hierarchical Scheduling for Heterogeneous Traffic over Network
Scheduling real-time and non-real-time packets at network nodes has an important impact on reducing processing overhead, queuing delay and response time. Most existing packet scheduling algorithms used in networks are based on First-In First-Out (FIFO), non-preemptive priority, or preemptive priority scheduling. However, these algorithms incur large processing overhead, queuing delay and response time, and do not adapt to changes in data traffic. In this paper, we present a new hierarchical scheduling algorithm for priority assignment, Hierarchical Hybrid EDF/FIFO, which can not only serve real-time traffic but also provide best-effort service to non-real-time traffic. To examine our approach, we carried out an analytical study to express the worst-case queuing delay and worst-case response time for different traffic classes. The simulation results showed that Hierarchical Hybrid EDF/FIFO achieved the minimum packet delay and an adequate packet loss for non-real-time traffic when compared with Hierarchical FIFO. In general, the performance of our approach comes close to that of Hierarchical EDF, which confirms its effectiveness.
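A minimal sketch of such a hybrid policy (assumed behavior: real-time packets served earliest-deadline-first, the best-effort FIFO served only when no real-time packet waits) might look like this; it omits the paper's hierarchical structure and its analytical model.

```python
import heapq
from collections import deque

class HybridEDFFIFO:
    """Hybrid EDF/FIFO sketch: real-time packets are dequeued in
    earliest-deadline-first order; non-real-time packets get FIFO
    best-effort service only when the real-time queue is empty.
    Illustrative only, not the paper's full hierarchical scheduler."""
    def __init__(self):
        self.rt = []          # min-heap of (deadline, packet)
        self.be = deque()     # FIFO queue of best-effort packets

    def enqueue(self, packet, deadline=None):
        if deadline is None:
            self.be.append(packet)              # best-effort traffic
        else:
            heapq.heappush(self.rt, (deadline, packet))

    def dequeue(self):
        if self.rt:
            return heapq.heappop(self.rt)[1]    # earliest deadline first
        if self.be:
            return self.be.popleft()            # FIFO when no RT pending
        return None

s = HybridEDFFIFO()
s.enqueue("web")                    # best effort
s.enqueue("voice", deadline=5)
s.enqueue("video", deadline=2)
print(s.dequeue(), s.dequeue(), s.dequeue())  # → video voice web
```

The "video" packet with the tighter deadline jumps ahead of "voice", and both preempt the best-effort "web" packet, which matches the serve-real-time-first rule described above.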
Enhanced aodv route discovery and route establishment for qos provision for r...
A MANET is a temporary connection of mobile nodes via wireless links with no centralized base station. We developed a protocol with an enhanced route discovery mechanism that avoids pre-transmission delay. When a source node wants to communicate with another node, it broadcasts an RREQ. EAODV gives priority to source nodes of real-time transmissions: when an RREQ packet is sent to a neighbor node, the route request for a real-time transmission is accepted on a priority basis. As a result, the packet drop ratio decreases, throughput increases as more packets are received at the destination, and the delivery ratio also increases, thereby improving QoS.
The Extended Clustering Ad Hoc Routing Protocol (Ecrp)
Ad hoc networks are a collection of mobile nodes communicating via wireless channels without any fixed
infrastructure. Because of their ease and low cost of building, ad hoc networks have a lot of attractive
applications in different fields. The topology of ad hoc networks changes dynamically, and each node in the
network can act as a host or router. With the increase in the number of wireless devices and large amount
of traffic to be exchanged, the demand for scalable routing protocols has increased. This paper presents a
scalable routing protocol, based on AODV protocol, called the Extended Clustering Ad Hoc Routing
Protocol (ECRP). This is a hybrid protocol, which combines reactive and proactive approaches in routing.
The protocol uses the Global Positioning System to determine the position of certain nodes in the network.
The evaluation methodology and simulation results obtained show that the protocol is efficient and scales well in large networks.
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME...
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving the response time of user requests on cloud computing is a critical issue in combating bottlenecks: in cloud computing, bandwidth to and from cloud service providers is a bottleneck, and with the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, in this paper we propose the Throttled Modified Algorithm (TMA) for improving the response time of VMs on cloud computing, so as to improve performance for the end user. We have simulated the proposed algorithm with the CloudAnalyst simulation tool, and the algorithm improved the response times and processing time of the cloud data center.
Load Balancing Algorithm to Improve Response Time on Cloud Computing
Load balancing techniques in cloud computing can be applied at different levels, the two main ones being load balancing on physical servers and load balancing on virtual servers. Load balancing on physical servers is the policy of allocating physical servers to virtual machines, while load balancing on virtual machines is the policy of allocating resources from a physical server to virtual machines for the tasks or applications running on them. Depending on whether the user's request on cloud computing is for SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), a suitable load balancing policy applies. When receiving tasks, the cloud data center has to allocate them efficiently so that response time is minimized and congestion avoided. Load balancing should also be performed between different datacenters in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Maxmin [6] and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
ITA: THE IMPROVED THROTTLED ALGORITHM OF LOAD BALANCING ON CLOUD COMPUTING
Cloud computing has made the information technology industry boom. It is a great solution for businesses that want to save costs while ensuring quality of service. One of the key issues that makes cloud computing successful is the load balancing technique used in the load balancer to minimize time costs and optimize economic costs. This paper proposes an algorithm to improve the processing time of tasks and thereby the load balancing capacity of cloud computing. This algorithm, named the Improved Throttled Algorithm (ITA), is an improvement of the Throttled Algorithm. The paper uses the Cloud Analyst tool for simulation, with Equally Load, Round Robin, Throttled and TMA as the algorithms selected for comparison. The simulation results show that the proposed ITA improves the processing time of tasks and the time spent processing requests, and reduces the cost of datacenters compared to the selected popular algorithms. The improvement of ITA comes from selecting available virtual machines from an index table kept in order of priority: this keeps response times and processing times stable, limits idle resources, and minimizes cloud costs compared to the selected algorithms.
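The selection rule credited for ITA's improvement, scanning a priority-ordered index table and taking the first available VM, can be sketched as follows; the table layout is a hypothetical simplification.

```python
def ita_allocate(index_table):
    """Pick a VM following the ITA idea described above: scan an index
    table kept in priority order and return the first available VM.
    `index_table` is a hypothetical list of (vm_id, available) pairs,
    already sorted by priority (highest first)."""
    for vm_id, available in index_table:
        if available:
            return vm_id       # first free VM in priority order
    return None                # all VMs busy: the request must queue

table = [("vm0", False), ("vm1", True), ("vm2", True)]
print(ita_allocate(table))  # → vm1
```

Because the scan always starts from the highest-priority entry, high-priority VMs are reused as soon as they free up, which is what keeps response times stable and low-priority resources from idling.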
Cloud computing is a new computing paradigm that aims to transform computing into a utility, just as electricity was first generated at home and evolved to be supplied by a few utility providers. Load balancing is a mapping strategy that efficiently equilibrates the task load across multiple computational resources in the network, based on system status, to improve performance. The objective of this research paper is to show the results of a hybrid DEGA, in which GA is implemented after DE.
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUD
Nowadays, the demand for using resources and services via intranet systems or the Internet is growing rapidly, and the accompanying problem is how to use these resources effectively in terms of time and quality. Since network QoS and its economics are central concerns, cloud computing was born as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud are challenging tasks. Currently there are many load balancing algorithms applied in clouds, proposed by authors, scholars and experts; these existing methods are mostly natural and heuristic, while the application of AI or modern data-mining technologies to load balancing is not yet popular due to the particular characteristics of the cloud. In this paper, we propose an algorithm to reduce the processing time (makespan) on cloud computing, helping load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-means to cluster the VMs in the cloud; the load balancer then allocates the requests to the VMs in the most reasonable way, so that the request with the least processing time is allocated to the VM with the lowest usage. We name this new proposal MCCVA - Makespan Classification & Clustering VM Algorithm. We have experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. With MCCVA, we can see the big potential of AI and data mining in load balancing, and we can further develop load balancing with AI to achieve ever better QoS results.
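The pairing idea can be illustrated without the SVM and K-means machinery: rank requests by predicted processing time, rank VMs by current usage, and match shortest to least loaded. The sketch below replaces classification and clustering with plain sorting, so it is only a rough analogue of MCCVA.

```python
def mccva_assign(request_times, vm_usages):
    """Sketch of the MCCVA pairing rule: requests ranked by predicted
    processing time are matched to VMs ranked by current usage, so the
    shortest request goes to the least-loaded VM. The paper's SVM
    classifier and K-means clustering are replaced here by sorting.
    Returns {request_index: vm_index}."""
    reqs = sorted(range(len(request_times)), key=lambda i: request_times[i])
    vms = sorted(range(len(vm_usages)), key=lambda j: vm_usages[j])
    # Cycle through the usage-ordered VMs as requests get longer.
    return {r: vms[k % len(vms)] for k, r in enumerate(reqs)}

print(mccva_assign([5, 1, 3], [0.7, 0.2]))
```

Here the 1-unit request goes to the VM at 20% usage, the 3-unit request to the VM at 70%, and the 5-unit request wraps back to the least-loaded VM: the "shortest work to emptiest machine" pattern the paper's classifier and clusterer approximate at scale.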
A cloud computing scheduling and its evolutionary approaches (nooriasukmaningtyas)
Despite the increasing use of cloud computing technology, which offers unique features
to serve its customers, exploiting its full potential is very difficult due to many
problems and challenges. Resource scheduling is one of these challenges: researchers
still find it difficult to determine which scheduling algorithms are appropriate and
effective, and which help increase the performance of the system in accomplishing its
tasks. This paper provides a broad and detailed examination of resource scheduling
algorithms in the cloud computing environment and highlights the advantages and
disadvantages of some algorithms, to help researchers select the best algorithm for
scheduling a particular workload so as to satisfy quality of service, guarantee good
utilization of the cloud resources, and minimize the make-span.
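As a concrete example of the make-span objective these surveys use to compare schedulers, here is a minimal sketch of greedy longest-processing-time (LPT) list scheduling on identical machines; the task lengths are illustrative values, not data from any surveyed paper.

```python
import heapq

def lpt_makespan(tasks, machines):
    # Greedy LPT list scheduling: sort tasks by decreasing length and always
    # give the next task to the currently least-loaded machine (a min-heap).
    loads = [0.0] * machines
    heapq.heapify(loads)
    for t in sorted(tasks, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    # The make-span is the finishing time of the busiest machine.
    return max(loads)

best = lpt_makespan([7, 5, 4, 4, 3, 3, 2], 3)  # -> 10
```

LPT is a classic 4/3-approximation for minimizing make-span on identical machines, which is why make-span appears as the baseline metric in most scheduling comparisons.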
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing serves client requirements by providing software, infrastructure and
platform as a service on a pay-per-use basis. The main goal of scheduling is to achieve
accuracy and correctness in task completion, and scheduling in the cloud environment
enables the various cloud services that support framework implementation. This survey
covers a wide range of scheduling algorithms in the cloud computing environment,
including workflow scheduling and grid scheduling, and gives an elaborated view of grid,
cloud and workflow scheduling aimed at minimizing the energy cost and improving the
efficiency and throughput of the system.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Migration
costs due to supply failure are treated as one objective; each task is a small particle, recognized through an
appropriate fitness (schedule) function describing the particle arrangement, whose cost is the minimum total expense. In addition, a weight
is assigned to each expenditure to reflect the importance of its cost. The data used to simulate the proposed method are a series of
academic and research data prepared from the Internet, and MATLAB software is used for the simulation. We simulate two cases:
in the first, four tasks are divided among four machines; in the second, the problem is made more complicated with
six tasks on four machines. We record the PSO output for both cases over various iterations. Finally, the particle dispersion as well
as the output of the cost function were computed for each particle.
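A minimal sketch of the particle-swarm idea applied to cost-aware task placement: particles hold continuous positions that are rounded to machine indices, and the fitness adds a weighted migration penalty, echoing the weighted-cost objective above. The cost matrix, weights and PSO parameters here are invented for illustration and are not the paper's data.

```python
import random

# Hypothetical costs: COST[t][m] = expense of running task t on machine m,
# plus a weighted migration penalty when a task leaves its current machine.
COST = [[4, 2, 8, 6], [1, 9, 3, 7], [6, 5, 2, 9], [8, 3, 7, 1]]
CURRENT = [0, 0, 1, 2]      # where each task runs now
W_MIGRATE = 0.5             # weight reflecting the importance of migration cost

def fitness(assign):
    run = sum(COST[t][m] for t, m in enumerate(assign))
    migrate = sum(1 for t, m in enumerate(assign) if m != CURRENT[t])
    return run + W_MIGRATE * migrate

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    n_tasks, n_m = len(COST), len(COST[0])
    random.seed(1)
    pos = [[random.uniform(0, n_m - 1) for _ in range(n_tasks)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    # Round a continuous position to a concrete machine assignment.
    decode = lambda p: [min(n_m - 1, max(0, round(x))) for x in p]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: fitness(decode(p)))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_tasks):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(decode(pos[i])) < fitness(decode(pbest[i])):
                pbest[i] = pos[i][:]
            if fitness(decode(pbest[i])) < fitness(decode(gbest)):
                gbest = pbest[i][:]
    return decode(gbest), fitness(decode(gbest))

best_assign, best_cost = pso()
```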
ANALYSIS ON LOAD BALANCING ALGORITHMS IMPLEMENTATION ON CLOUD COMPUTING ENVIR... (AM Publications)
Cloud computing means storing and accessing data and programs over the Internet instead of your computer's hard drive; the cloud is just a metaphor for the Internet. The elements involved in cloud computing are clients, data centers and distributed servers. One of the main problems in cloud computing is load balancing: distributing the workload evenly among several nodes so that no single node is overloaded. The load can be of any type: CPU load, memory capacity or network load. In this paper we present a load balancing architecture and algorithm that further improve on the load balancing problem by minimizing response time. We propose an enhanced version of the existing regulated load balancing approach for cloud computing by combining randomized and greedy load balancing. To check the performance of the proposed approach, we used the Cloud Analyst simulator. Simulation analysis shows that the proposed improved version of the regulated load balancing approach performs better in terms of cost, response time and data processing time.
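One standard way to combine randomization with greedy selection, consistent with (though not necessarily identical to) the approach above, is the "power of two choices" rule: sample two nodes at random and greedily assign the task to the less loaded of the two. The node count and task stream below are illustrative.

```python
import random

def two_choice_assign(task_load, loads, rng):
    # Randomized + greedy: sample two distinct nodes uniformly at random,
    # then greedily place the task on the less-loaded of the pair.
    a, b = rng.sample(range(len(loads)), 2)
    pick = a if loads[a] <= loads[b] else b
    loads[pick] += task_load
    return pick

rng = random.Random(42)
loads = [0.0] * 10            # ten nodes, initially idle
for _ in range(1000):         # a stream of unit-sized tasks
    two_choice_assign(1.0, loads, rng)
```

Compared with purely random placement, this keeps the maximum node load exponentially closer to the average, which is why random-plus-greedy hybrids tend to improve response time.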
LOAD BALANCING IN AUTO SCALING-ENABLED CLOUD ENVIRONMENTS (ijccsa)
Cloud computing is growing in popularity and is continuously being improved. Auto scaling
is one such improvement, helping to maintain the availability of a customer's subscribed
cloud system. How an auto scaling mechanism interacts with the many existing mechanisms
in a cloud system is an issue that needs to be considered, because a new part added to a
stable system rarely comes without drawbacks. In this paper, we consider how existing
load balancing and auto scaling affect each other. For this purpose, we modeled a cloud
system with an auto scaler and a load balancer and ran simulations based on the
constructed model. Based on the simulation results, we propose guidance on choosing load
balancers for a subscribed cloud system with an auto scaling service.
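A toy model of the interplay the paper studies: a round-robin load balancer in front of a VM pool managed by a threshold auto scaler. The capacities, thresholds and load-decay model are illustrative assumptions, not the paper's simulation model.

```python
class Pool:
    """Cloud pool with a round-robin balancer and a threshold auto scaler."""

    def __init__(self, capacity=10.0, up=0.8, down=0.3):
        self.capacity, self.up, self.down = capacity, up, down
        self.vms = [0.0, 0.0]        # current load per VM
        self.next = 0

    def dispatch(self, load):
        # Round-robin load balancing across the current pool.
        i = self.next % len(self.vms)
        self.vms[i] += load
        self.next += 1

    def autoscale(self):
        # Scale out when average utilization is high, scale in when it is low.
        util = sum(self.vms) / (self.capacity * len(self.vms))
        if util > self.up:
            self.vms.append(0.0)
        elif util < self.down and len(self.vms) > 1:
            removed = self.vms.pop()
            self.dispatch(removed)   # drain the removed VM's remaining load

    def tick(self, decay=0.5):
        # Work completes over time: each VM sheds half its load per tick.
        self.vms = [v * decay for v in self.vms]
```

The interaction the paper highlights is visible even here: the balancer's placement decisions change the utilization signal the scaler sees, and scaling events change the set of targets the balancer rotates over.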
Cloud Computing Load Balancing Algorithms Comparison Based Survey (INFOGAIN PUBLICATION)
Cloud computing is an online, Internet-based form of computing. This computing paradigm has increased the use of the network, where the capacity of one node may be used by another node. The cloud provides on-demand services over distributed resources such as data, servers, software and infrastructure on a pay-as-you-go basis. Load balancing is one of the vexing problems in a distributed environment: the service provider's resources must balance the load of client requests. Different load balancing algorithms have been proposed to manage the service provider's resources efficiently and effectively. This paper presents a comparison of various policies used for load balancing.
Hybrid Based Resource Provisioning in Cloud (Editor IJCATR)
Data centres often contain machines of different capacities with different energy consumption characteristics.
Analysing public cloud workloads of different priorities and the performance requirements of various applications
reveals some invariants about the cloud, and cloud data centres become capable of sensing an opportunity to present
a different program. In our proposed work, we use a hybrid method for resource provisioning in data centres. The
method allocates resources according to working conditions and to the energy available for power consumption, and
governs the allocation process behind the cloud storage.
A Prolific Scheme for Load Balancing Relying on Task Completion Time (IJECEIAES)
In networks with heavy computation, load balancing gains increasing significance. The ultimate aim is to facilitate the sharing of services and resources on the network over the Internet, offering various resources, services and applications. A key issue to be focused on and addressed in such networks is load balancing. Load is the number of tasks 't' performed by a computation system, and can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning load between nodes should enhance resource utilization and minimize computation time; this can be accomplished by a uniform distribution of load across all the nodes. A load balancing method should guarantee that each node in the network performs an almost equal amount of work pertinent to its capacity and availability of resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning the load to the nodes in the network.
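The stated goal above, each node doing almost equal work pertinent to its capacity, can be illustrated with capacity-proportional load splitting (largest-remainder rounding keeps the shares integral). This is a generic sketch of that goal, not the paper's E-TS task-subtraction procedure, and the capacities are illustrative.

```python
def distribute(total_load, capacities):
    # Split an integer total load across nodes in proportion to capacity,
    # so each node does roughly equal work relative to what it can handle.
    cap_sum = sum(capacities)
    shares = [total_load * c / cap_sum for c in capacities]
    floors = [int(s) for s in shares]
    remainder = total_load - sum(floors)
    # Largest-remainder rounding: leftover units go to the nodes whose
    # fractional share was largest.
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - floors[i], reverse=True)
    for i in order[:remainder]:
        floors[i] += 1
    return floors
```

For example, 10 unit tasks over nodes with capacities 1, 1 and 2 gives the double-capacity node twice the work of the others.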
Similar to STUDY THE EFFECT OF PARAMETERS TO LOAD BALANCING IN CLOUD COMPUTING (20)
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
2024.06.01 Introducing a competency framework for languag learning materials ...
STUDY THE EFFECT OF PARAMETERS TO LOAD BALANCING IN CLOUD COMPUTING
International Journal of Computer Networks & Communications (IJCNC) Vol.8, No.3, May 2016
DOI: 10.5121/ijcnc.2016.8303
STUDY THE EFFECT OF PARAMETERS TO LOAD BALANCING IN CLOUD COMPUTING
Tran Cong Hung and Nguyen Xuan Phi
Posts and Telecommunications Institute of Technology, Vietnam
ABSTRACT
The rapid growth in the number of users of cloud services, and in the number of services offered to each
user, increases the load on the servers in cloud datacenters. This issue has become a challenge for
researchers, and it requires load balancing techniques that are used effectively, not only to balance
resources across servers but also to reduce the negative impact on end-user service. Current load
balancing techniques solve various problems, such as: (i) load balancing after a server has become
overloaded; (ii) load balancing with load forecasting for the allocation of resources; and (iii) improving
the parameters that affect load balancing in the cloud. Studying and improving these parameters is of
great significance for improving system performance through load balancing, and it provides the basis
for proposing more effective load balancing methods that increase system performance. Therefore, in this
paper we study some of the parameters that affect load balancing performance in cloud computing.
KEYWORDS
Load Balancing; Virtual Machines; Load Balancing Parameters; Cloud Computing.
1. INTRODUCTION
Cloud computing is a rapidly growing technology in today's Internet world. With its advent, a new age of
the Internet has truly begun, and cloud computing now stands behind many of the applications and
services running on the Internet. Although the number of Internet users is still huge compared to the
number who use cloud-based services, the population of cloud computing users keeps growing. Internet
giants such as Google, Microsoft, and Amazon, as well as many small-scale businesses, have started using
the cloud, offering their user base a cloud experience for work, business, research, and other purposes.
Cloud computing allows users to run software applications without installing them on their own
computers; end users simply pay a subscription fee for the cloud services. This means that a business pays
only a service fee to use the software, without buying hardware infrastructure or storage for its business
purposes; everything else is the responsibility of the cloud service provider, so the business can focus on
its main objective. In cloud computing, software is served directly from the provider's servers and end
users access it through the Internet, which has increased the efficiency of their businesses many times
over. Typical cloud services include Gmail (Google), EC2 (Amazon), Azure (Microsoft), and the Bluemix
cloud platform (IBM). With so many conveniences, the number of cloud computing users will keep
increasing, and with it the number of requests arriving at cloud service providers at any given time. The
number of user requests can be considered the load on the servers, so the processing capacity of the
servers will inevitably become overloaded. Possible solutions are to increase the number of servers, to
increase the processing power of each server, or, in many cases, to schedule the requests across the
servers. Increasing server processing power is costly, is only a temporary solution, and in many cases is
unnecessary. The preferred solution is therefore to study load (resource) balancing algorithms for cloud
computing, which help service providers considerably reduce the cost of hardware infrastructure.
With that approach, we study the parameters that affect load balancing performance in the cloud, as the
basis for proposing methods that achieve higher load balancing performance. This paper is organized as
follows: Section 1 is the introduction; Section 2 presents related work; Section 3 studies the effect of the
parameters using the CloudSim tool; Section 4 presents the simulation results; and Section 5 concludes
the paper.
2. RELATED WORK
In [1], P. Srinivasa Rao et al. discuss the elements needed for an effective balancing approach in which
every node holds information about all other nodes. When a node receives a job, it has to query the status
of the other nodes to find the least-used node to forward the work to, and when all nodes are being
queried, overload occurs. Having nodes broadcast announcements about their status also places a huge
load on the network, and time is wasted at each node performing status queries. Besides this, the current
state of the network also affects load balancing performance: in a complex network with multiple subnets,
having a node locate and query every other node is a fairly complex task. Thus, querying the status of
nodes in the cloud affects load balancing performance.
Agraj Sharma et al. [2] showed that response time greatly affects load balancing performance in the
cloud. Their work outlined two outstanding issues of previous algorithms: (i) load balancing occurs only
after a server is overloaded; and (ii) continuously retrieving information about available resources
increases computational cost and bandwidth consumption. The authors therefore proposed an algorithm
that uses the response time of requests to decide how to assign incoming requests to servers
appropriately; this approach reduces queries for information about available resources and reduces
communication and computation on each server. The results, simulated in Eclipse with Java IDE 1.6,
demonstrated the correctness of the proposed algorithm.
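The response-time-based idea can be sketched as follows: instead of querying servers for available resources, the balancer keeps a moving average of each server's observed response times and routes the next request to the server with the lowest average. This is an illustrative sketch, not the implementation of [2]; the class name, the exponential moving average, and the `alpha` weight are our assumptions.

```python
# Illustrative sketch of response-time-based request allocation (after [2]).
# Not the authors' implementation: names and the moving-average rule are ours.

class ResponseTimeBalancer:
    def __init__(self, servers, alpha=0.3):
        self.alpha = alpha  # weight given to the newest observation
        # start with optimistic estimates so every server gets tried
        self.avg_rt = {s: 0.0 for s in servers}

    def pick_server(self):
        # choose the server with the lowest estimated response time;
        # no query to the servers about CPU/RAM is needed
        return min(self.avg_rt, key=self.avg_rt.get)

    def record_response(self, server, response_time):
        # exponential moving average of observed response times
        old = self.avg_rt[server]
        self.avg_rt[server] = (1 - self.alpha) * old + self.alpha * response_time

balancer = ResponseTimeBalancer(["s1", "s2"])
balancer.record_response("s1", 120.0)  # s1 observed slow
balancer.record_response("s2", 40.0)   # s2 observed fast
print(balancer.pick_server())  # → s2
```

Because the decision uses only response times the balancer already observes, it avoids the bandwidth cost of polling servers, which is the point made in [2].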
The Min-Min algorithm [3] minimizes the completion time of tasks at each network node, but it does not
consider the workload of each resource. The authors therefore propose the Load Balance Improved
Min-Min (LBIMM) algorithm to overcome this weakness. Ignoring the workload of each resource leads to
some resources being overloaded while others sit idle; the work done on each resource is therefore a
factor affecting load balancing performance in the cloud. The traditional Min-Min algorithm is simple and
forms the basis of current scheduling algorithms in cloud computing. LBIMM [3], which extends it to
consider the workload of each resource and so overcomes the limitation of traditional Min-Min, gives
better experimental results, shown in Table 1.
Table 1. Comparison of the Min-Min and LBIMM algorithms [3]

| Scheduling Algorithm               | Min-Min | LBIMM  |
| Number of Tasks                    | 10      | 10     |
| Number of Resources                | 5       | 5      |
| Makespan (sec)                     | 19.625  | 12.5   |
| Average Resource Utilization Ratio | 45.29%  | 82.29% |
In this experiment, with the same number of tasks and resources, LBIMM gives a lower Makespan (the
execution time on each resource) and a higher Average Resource Utilization Ratio (the average
percentage of resource use) than Min-Min. This again shows that the workload on cloud resources has a
great influence on resource balancing in the cloud. The parameters considered here are the Average
Resource Utilization Ratio and the Makespan.
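The classical Min-Min idea behind Table 1 can be sketched: repeatedly pick the (task, resource) pair with the smallest completion time, where completion time is the resource's current finish time plus the task's execution time on it. This is a minimal sketch under the simplifying assumption that execution time equals task length divided by resource speed; LBIMM's extra rebalancing of tasks from heavily loaded resources is intentionally omitted.

```python
# Minimal sketch of the classical Min-Min scheduling idea (after [3]).
# Assumes execution time = task_length / resource_speed; LBIMM's
# rebalancing step for overloaded resources is omitted here.

def min_min(task_lengths, resource_speeds):
    finish = [0.0] * len(resource_speeds)  # current finish time per resource
    assignment = {}                        # task index -> resource index
    unscheduled = set(range(len(task_lengths)))
    while unscheduled:
        # among all (task, resource) pairs, take the minimum completion time
        t, r = min(
            ((t, r) for t in unscheduled for r in range(len(resource_speeds))),
            key=lambda p: finish[p[1]] + task_lengths[p[0]] / resource_speeds[p[1]],
        )
        finish[r] += task_lengths[t] / resource_speeds[r]
        assignment[t] = r
        unscheduled.remove(t)
    return assignment, max(finish)  # makespan = latest finish time

assignment, makespan = min_min([4, 2, 8], [1, 2])
print(assignment, makespan)
```

Note how the sketch reproduces Min-Min's known weakness: the fast resource attracts every task while the slow one stays idle, which is exactly the imbalance LBIMM is designed to correct.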
The study of maximizing throughput and reducing latency in cloud computing also concerns Dhinesh
Babu L.D. et al. [4]. Because balancing resources in cloud computing is an NP-hard optimization
problem, load balancing of independent tasks is a very important priority in cloud scheduling algorithms.
With their proposed HBB-LB algorithm, whose basic parameters are throughput and waiting time in the
queue, the authors demonstrated the effect of these parameters on job scheduling performance and on
avoiding overload in the cloud environment.
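The rebalancing idea behind honey-bee-inspired load balancing can be loosely sketched: tasks removed from an overloaded VM behave like scout bees and are submitted to underloaded VMs, with task priorities taken into account. The sketch below only illustrates the load and priority bookkeeping; the capacity threshold, the removal order, and all names are our assumptions, not details of [4].

```python
# Loose sketch of the rebalancing idea behind honey-bee-inspired
# load balancing (after [4]). Thresholds, removal order, and names
# are illustrative assumptions, not the original algorithm.

def rebalance(vms, capacity):
    """vms: {vm_name: [(priority, load), ...]}; mutates and returns vms."""
    removed = []
    for name, tasks in vms.items():
        # shed lowest-priority tasks from each overloaded VM
        while tasks and sum(l for _, l in tasks) > capacity:
            tasks.sort(key=lambda t: t[0])
            removed.append(tasks.pop(0))
    # reassign higher-priority removed tasks first, like scout bees
    removed.sort(key=lambda t: t[0], reverse=True)
    for task in removed:
        target = min(vms, key=lambda n: sum(l for _, l in vms[n]))
        vms[target].append(task)
    return vms

vms = {"vm1": [(1, 30), (3, 50), (2, 40)], "vm2": [(2, 10)]}
rebalance(vms, capacity=80)
print(vms)
```

In this toy run, vm1 sheds its two lowest-priority tasks, and the reassignment spreads the load so both VMs end at or below the capacity threshold.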
In [5], Gaochao Xu and his team argue that load balancing in the cloud computing environment has a
significant impact on system performance: good load balancing makes cloud computing more effective
and improves user satisfaction. Their work introduces a load balancing model for the public cloud based
on the concept of cloud partitioning, with a switch mechanism to choose different strategies for different
situations. Dividing the cloud into partitions contributes to building an effective load-balancing strategy.

Figure 1. Relationship of the cloud partitions [5]

Figure 1 shows the relationship between cloud partitions after the cloud has been divided into separate
partitions [5]. Each partition has its own load balancer, and the overall balance is controlled by a Main
Controller, which decides which cloud partition will serve a request; the information about each partition
(state, load level, bandwidth, ...) is managed centrally by the Main Controller. As we can see, dividing the
cloud into separate partitions in order to improve load balancing efficiency can improve the overall
performance of the server system.
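The Main Controller's choice of partition can be sketched roughly: each partition reports a load degree, and the controller skips overloaded partitions and prefers idle ones. The status thresholds and the selection rule below are our illustration of the idea, not the exact strategy of [5].

```python
# Rough sketch of a Main-Controller-style partition choice (after [5]).
# Load-degree thresholds and the selection rule are illustrative.

def choose_partition(partitions):
    # partitions: {name: load_degree in [0, 1]}, a simplified stand-in
    # for the state information the Main Controller keeps per partition
    IDLE, OVERLOAD = 0.3, 0.8
    idle = [p for p, d in partitions.items() if d < IDLE]
    if idle:
        return idle[0]                      # any idle partition will do
    normal = {p: d for p, d in partitions.items() if d < OVERLOAD}
    if normal:
        return min(normal, key=normal.get)  # least-loaded normal partition
    return None                             # every partition is overloaded

print(choose_partition({"p1": 0.9, "p2": 0.5, "p3": 0.4}))  # → p3
```

Keeping only a load degree per partition at the controller, rather than full per-server state, is what makes the centralized decision cheap.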
Hiren H. Bhatt et al. [6] propose an enhanced load balancing algorithm using flexible load sharing
(FLS). The authors note that in a distributed environment, load balancing across nodes and task
scheduling are the main concerns for designers. Figure 2 shows the nodes of a domain in the cloud
scenario.

Figure 2. Nodes of a domain in the cloud.

The domain of node A includes nodes F and H, and the domain of B includes H, I, and D. With FLS, if H
is in the domain of A, then the domain of node H will include node A, and the nodes then exchange
information with one another. A third partition function partitions the nodes into domains so that, using
the remote location, information can be transferred to another domain. This research shows a basic
technique for grouping virtual machines: dividing the virtual machines into different groups and sharing
load flexibly makes it easy to distribute the workload across the virtual machines, and thus reduces
overloaded VMs. The parameters in this study are virtual machine grouping and load sharing.
Ritu Kapur [7] proposes the Load Balanced Resource Scheduling (LBRS) algorithm. The idea of the
algorithm is to consider the importance of resource scheduling policies together with load balancing for
cloud resources. The main goals are to maximize CPU utilization and throughput; to minimize response
time, waiting time, turnaround time, and resource cost; and to obey the fairness principle. The parameters
mentioned here are QoS metrics: throughput, response time, waiting time, etc.
We find that studying the parameters that affect load balancing in the cloud makes very practical sense:
it can not only increase the system's processing performance through load balancing, but also provides
the scientific basis for designing effective algorithms that improve load balancing performance in cloud
computing. Some key parameters are throughput, response time, latency, and execution time. Our main
goal is to find methods that maximize throughput, reduce response time, minimize waiting time, and
reduce execution time, in order to improve performance for cloud datacenters.
Table 2. Analysis of existing algorithms

| Author | Method | Parameters | Findings |
| Srinivasa Rao et al. [1] | Dynamic load balancing with a centralized monitoring capability | Node status, Processor, Job requests | Minimizes the processing overhead on the processors and the communication/network overhead on the network |
| Agraj Sharma et al. [2] | Load balancing considering only the response time of each request | Response time, Threshold, Prediction time | Considers the current responses and their variations to decide the allocation of new incoming requests, eliminating the need to collect further data from any other source and so saving communication bandwidth |
| Huankai Chen et al. [3] | Load Balance Improved Min-Min (LBIMM) | Makespan, Cloud task scheduling, Average Resource Utilization Ratio | Decreases the completion time of tasks and improves the load balance of resources |
| Dhinesh Babu L.D. et al. [4] | Honey bee behavior inspired load balancing (HBB-LB) | Priorities of tasks, Honey bee behavior | Not only balances the load, but also takes into consideration the priorities of tasks removed from heavily loaded virtual machines |
| Gaochao Xu et al. [5] | Load balancing model for the public cloud based on cloud partitioning, with a switch mechanism to choose different strategies for different situations | Public cloud, Cloud partition | Dividing the cloud into separate partitions improves the overall performance of the server system |
| Hiren H. Bhatt et al. [6] | Flexible load sharing algorithm (FLS) | Virtual machine grouping, Load sharing | Distributes the workload across the virtual machines, reducing overloaded VMs |
| Ritu Kapur [7] | Load Balanced Resource Scheduling algorithm (LBRS) | Throughput, Response time, Waiting time | Considers the importance of resource scheduling policies and load balancing for cloud resources |
3. CLOUDSIM: SIMULATION TOOL FOR CLOUD COMPUTING
The CloudSim toolkit, announced by its authors in [8], is very useful for simulating cloud computing
environments. We used the CloudSim tool for our experimental simulations.
3.1. THE BASIC COMPONENTS OF CLOUDSIM
Figure 3. Layered CloudSim architecture [8].
Figure 4. CloudSim class design diagram [8].
The functions of the main classes in CloudSim [8] are:
• Cloudlet: models application services such as content delivery, social networking, and business
data processing.
• CloudletScheduler: an abstract class extended by implementations of different policies that
determine the share of a VM's processing power given to Cloudlets. Two policies are provided:
space-shared and time-shared.
• Vm (virtual machine): the unit that handles work in the cloud system. It is created with its
parameters before work is assigned to it by the datacenter; these parameters are image size,
virtual memory, MIPS, bandwidth, and number of CPUs.
• VmAllocationPolicy: represents the allocation of virtual machines to servers. Its main function
is to select a server in the data center that still has enough memory and storage for the virtual
machines to be deployed on it.
• VmScheduler: an abstract class, implemented by a Host as space-shared or time-shared, that
allocates processing cores to VMs.
• Datacenter: models the core infrastructure (hardware) services provided by cloud providers
(e.g., Amazon, Azure, App Engine); in other words, this is the layer containing the virtual
servers that host different applications.
• DatacenterBroker (Cloud Broker): acts as a broker, performing the negotiation between SaaS
providers and cloud providers, controlled by QoS requirements. The Broker represents the SaaS
provider: it discovers suitable cloud providers by querying the CIS (Cloud Information Service)
layer and negotiates online commitments to allocate resources/services that meet the
application's QoS. Researchers and developers must extend this class to evaluate and test
various brokering policies. The difference between the Broker and the CloudCoordinator is that
the Broker represents clients (i.e., it decides how to increase the client's efficiency), while the
CloudCoordinator acts on behalf of the data center (i.e., it tries to maximize the overall
performance of the data center without considering the needs of specific customers).
• BwProvisioner: its main role is to allocate network bandwidth to the set of virtual machines
competing for it in the data center. Cloud system developers and researchers can extend this
class with their own prioritization (QoS) policies to reflect application needs.
BwProvisioningSimple allows a virtual machine to take as much bandwidth as it demands and
uses; however, it is limited by the total bandwidth of the server.
• CloudCoordinator: periodically checks the status of resources in the data center and makes load
decisions based on them. Its components include special sensors and monitoring set up to handle
the load. Monitoring of data center resources is done by the updateDatacenter() service, which
sends queries to the sensors; service/resource discovery is done by setDatacenter(), and this
method can be extended to implement different protocols and mechanisms (multicast, broadcast,
peer-to-peer). This component can also be extended to simulate cloud services such as Amazon's
EC2 Load Balancer, and developers can extend this class to implement inter-cloud provisioning
policies.
• DatacenterCharacteristics: contains the configuration information of the resources in the data
center.
• Host: models a physical resource (computer/host) and includes the following information: the
amount of memory, the list of processing cores (to represent multi-core machines), the policies
for sharing processing power among the VMs, and the provisioning policies for the memory and
bandwidth of the VMs.
• NetworkTopology: reads a BRITE file to create the network topology needed for the simulation;
this information is used to simulate network latency in CloudSim.
• RamProvisioner: this is the class that represents the memory allocation policy the main
(RAM) to the virtual machine. This class approval idle memory for the VM Host
• SanStorage: This class represents the network of Data Center storage (eg Amazon S3,
Azure blob storage). SanStorage simulation can be used to store and extract information
through the web interface without depending of time and space.
• Sensor: This interface is used by CloudCoordinator to monitor specific parameters (energy
consumption, resource utilization, etc.). It is defined by methods to: i) set the
minimum and maximum thresholds for a performance parameter; and ii) periodically update
the measurements. This class can therefore be used to model real monitoring services offered
by cloud vendors, such as Amazon CloudWatch and Microsoft Azure FabricController. Each data
center can create one or more sensors, with each sensor responsible for monitoring a
particular performance parameter.
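The threshold-based monitoring contract described for the Sensor interface can be sketched as follows. This is an illustrative class with hypothetical names (ThresholdSensor, withinThresholds), not the actual CloudSim API:

```java
// Hypothetical sketch of a Sensor-style monitor: set min/max thresholds for
// one performance parameter and periodically update the measurement.
public class ThresholdSensor {
    private final String parameter;   // e.g. "cpuUtilization"
    private final double min;         // minimum acceptable value
    private final double max;         // maximum acceptable value
    private double lastMeasurement;   // most recent periodic reading

    public ThresholdSensor(String parameter, double min, double max) {
        this.parameter = parameter;
        this.min = min;
        this.max = max;
    }

    // Periodic measurement update, as the Sensor contract requires.
    public void update(double measurement) {
        this.lastMeasurement = measurement;
    }

    // A coordinator would poll this to decide whether to rebalance load.
    public boolean withinThresholds() {
        return lastMeasurement >= min && lastMeasurement <= max;
    }

    public String getParameter() { return parameter; }
}
```

A coordinator holding one such sensor per monitored parameter matches the "one or more sensors per data center" arrangement described above.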
3.2. NETWORK TRAFFIC FLOW IN SIMULATION
Modeling the network topology that connects the simulated cloud entities (hosts, storage,
users) is important because the latency of the network architecture directly affects the
customer's experience of the service.
Figure 5. Network communication flow [8]
As Figure 5 shows, CloudSim maintains a delay matrix: a message sent from entity i to entity j
at time t arrives at time t + d, where d is the latency between the two entities. Simulating
with a latency matrix is easier than modeling individual routers and switches, and the matrix
entries can alternatively represent weights or bandwidth between entities in the data center.
CloudSim reads the network topology from a BRITE file, which covers entities such as the
hosts, data centers and Cloud Broker.
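The arrival-time rule t + d can be illustrated with a minimal sketch. The DelayMatrix name and the latency values below are hypothetical, not the CloudSim implementation:

```java
// Minimal sketch of the delay-matrix idea: a message from entity i to
// entity j sent at time t arrives at t + latency[i][j].
public class DelayMatrix {
    private final double[][] latency; // latency[i][j] = delay from i to j

    public DelayMatrix(double[][] latency) {
        this.latency = latency;
    }

    // Arrival time = send time + pairwise latency.
    public double arrivalTime(int from, int to, double sendTime) {
        return sendTime + latency[from][to];
    }
}
```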
4. SIMULATION RESULTS
In this scenario, we simulate a data center with only one physical host and create 10 virtual
machines (VMs) on that host. The aim is to record and compare the time taken by a Cloudlet
(considered a job) on each virtual machine when simulation parameters such as the cloudlet
length and the bw, ram and mips of the virtual machine are changed. The simulation is run on
CloudSim, where a Cloudlet can be considered a unit of work (job); depending on its size, a
virtual machine handles it quickly or slowly. Correspondingly, the Cloudlet in Figure 4 is
the input of the system, and we measure the total execution time (Time Execution) from the
moment CloudSim receives the Cloudlet until it finishes. Time Execution is measured in
milliseconds. The mips parameter represents the number of million instructions per second.
The parameters of the VMs and the Cloudlet used in the simulation are as follows (Table 3):
Table 3. Parameters of VMs and Cloudlet

VM description
    mips = 20           Million instructions per second
    size = 5120         Image size (MB)
    ram = 128           VM memory (MB)
    bw = 1000           Bandwidth
    pesNumber = 1       Number of CPUs
    vmm = "Windows"     VMM name

Cloudlet properties
    pesNumber = 1       Number of CPUs
    length = 40000      Length of this Cloudlet (size)
    fileSize = 300      Input file size of this Cloudlet before submitting to a CloudResource
    outputSize = 300    Output size of this Cloudlet after submitting to and executing on a
                        CloudResource

4.1 THE EFFECT OF THE MIPS PARAMETER

With 10 virtual machines created with the same parameters, we obtained the results shown in
Table 4.

Table 4. Results of the simulation when the parameters are equal

Cloudlet ID   STATUS    Data center ID   VM ID   Time   Start Time   Finish Time
1             SUCCESS   2                1       2000   0.1          2000.1
2             SUCCESS   2                2       2000   0.1          2000.1
3             SUCCESS   2                3       2000   0.1          2000.1
4             SUCCESS   2                4       2000   0.1          2000.1
5             SUCCESS   2                5       2000   0.1          2000.1
6             SUCCESS   2                6       2000   0.1          2000.1
7             SUCCESS   2                7       2000   0.1          2000.1
8             SUCCESS   2                8       2000   0.1          2000.1
9             SUCCESS   2                9       2000   0.1          2000.1
10            SUCCESS   2                10      2000   0.1          2000.1
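As a sanity check on Table 4, a simplified model of execution time (cloudlet length divided by VM mips, ignoring scheduling overhead) reproduces the 2000 ms value: 40000 / 20 = 2000. The class below is illustrative, not part of CloudSim:

```java
// Simplified execution-time model for a single-core VM:
// time = cloudlet length (million instructions) / VM speed (MIPS).
public class ExecTime {
    public static double executionTime(double lengthMI, double mips) {
        return lengthMI / mips;
    }
}
```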
From Table 4 we see that when the input parameters are the same, the obtained times are
equal. Next, we change the values of the mips parameter (Table 5); the results are shown in
Table 6 and Figure 6.
Table 5. Change of the mips parameter

VM 1     VM 2   VM 3     VM 4       VM 5     VM 6     VM 7     VM 8       VM 9     VM 10
mips*2   mips   mips*4   mips*2.5   mips*3   mips*5   mips*6   mips*3.5   mips*7   mips*4.5
Table 6. Simulation results when the mips parameter is changed

Cloudlet ID   STATUS    Data center ID   VM ID   Time     Start Time   Finish Time
9             SUCCESS   2                9       285.71   0.1          285.81
7             SUCCESS   2                7       333.33   0.1          333.43
6             SUCCESS   2                6       399.99   0.1          400.09
10            SUCCESS   2                10      444.44   0.1          444.54
3             SUCCESS   2                3       500      0.1          500.1
8             SUCCESS   2                8       571.43   0.1          571.53
5             SUCCESS   2                5       666.66   0.1          666.76
4             SUCCESS   2                4       800      0.1          800.1
1             SUCCESS   2                1       1000     0.1          1000.1
2             SUCCESS   2                2       2000     0.1          2000.1
Figure 6. Execution time of the VMs when mips is changed
From Figure 6 we see that VM9 (with mips*7) has the highest mips, so it is the first choice
for the cloud to carry out the work and therefore has the smallest execution time; the
remaining VMs are selected in descending order of mips, and VM2, which has the smallest mips,
has the greatest execution time. Figure 6 shows the correlation between mips and execution
time: the mips parameter clearly affects the execution time of the virtual machines in the
cloud. Therefore, research on increasing the mips value can help avoid traffic deadlock, or
in other words, improve the performance of load balancing in the cloud.
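The times in Table 6 also follow the simplified length/mips model: each VM's effective mips is the base value (20) multiplied by its factor from Table 5. The class below is an illustrative sketch with a hypothetical name, not CloudSim code:

```java
// Simplified model for Section 4.1: execution time with a per-VM mips factor.
// effective mips = baseMips * factor (factors taken from Table 5).
public class MipsScaling {
    public static double execTime(double lengthMI, double baseMips, double factor) {
        return lengthMI / (baseMips * factor);
    }
}
```

For example, VM9 (factor 7) gives 40000 / (20 * 7) ≈ 285.71 and VM3 (factor 4) gives 40000 / 80 = 500, matching Table 6.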
4.2 THE EFFECT OF THE LENGTH PARAMETER
In CloudSim, the length parameter represents the size of the Cloudlet (job). Here, we
simulate splitting the length value across the virtual machines, which models tasks being
subdivided and assigned to virtual machines running in the cloud.
Table 6. Length values for the VMs

VM 1   length/400    VM 6    length/600
VM 2   length/300    VM 7    length/800
VM 3   length/700    VM 8    length/200
VM 4   length/500    VM 9    length/100
VM 5   length/100    VM 10   length/50
With the parameters in Table 6 (the other parameters unchanged), we obtained the results
shown in Table 7 and Figure 7.
Table 7. Simulation results when the length parameter is changed

Cloudlet ID   STATUS    Data center ID   VM ID   Time   Start Time   Finish Time
7             SUCCESS   2                7       2.5    0.1          2.51
3             SUCCESS   2                3       2.85   0.1          2.95
6             SUCCESS   2                6       3.3    0.1          3.4
4             SUCCESS   2                4       4      0.1          4.1
1             SUCCESS   2                1       5      0.1          5.1
2             SUCCESS   2                2       6.65   0.1          6.75
8             SUCCESS   2                8       10     0.1          10.1
5             SUCCESS   2                5       20     0.1          20.1
9             SUCCESS   2                9       20     0.1          20.1
10            SUCCESS   2                10      40     0.1          40.1
Figure 7. Execution time when the length of the VMs is changed
Figure 7 shows that the greater a virtual machine's length value, the greater its execution
time. The length value represents the size of the job arriving at the cloud. VM5, VM9 and
VM10 have length values length/100 = 400, length/100 = 400 and length/50 = 800 respectively
(with length = 40000, from Table 6), so their execution times are 20 ms, 20 ms and 40 ms
(Figure 7). VM6 and VM7 have smaller length values than VM5, so their execution times are
3.3 ms and 2.5 ms, smaller than VM5's 20 ms. The execution times of VM5, VM9 and VM10 are
larger than those of the other VMs because their length values are larger. Moreover, VM7 has
the minimum length value (in Table 6) and therefore the minimal execution time, while VM10
has the greatest length value and therefore the greatest execution time. This shows that
studying algorithms that can split a job before it is assigned to the virtual machines is
extremely important: it reduces the bottleneck phenomenon and increases the processing speed
of the system, or in other words improves the performance of load balancing and the balance
of resources.
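The job-splitting results in Table 7 follow the same simplified model used above: the base length (40000 MI) is divided by the per-VM factor from Table 6, and the resulting piece runs at the base mips (20). The sketch below is illustrative, not the CloudSim scheduler:

```java
// Simplified model for Section 4.2: a job of lengthMI is split by a per-VM
// divisor (Table 6), and the piece runs at the VM's mips rating.
public class JobSplit {
    public static double execTime(double lengthMI, double divisor, double mips) {
        return (lengthMI / divisor) / mips;
    }
}
```

For example, VM7 gives (40000 / 800) / 20 = 2.5 and VM10 gives (40000 / 50) / 20 = 40, matching Table 7 and Figure 7.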
5. CONCLUSION
Cloud computing is a promising technology and a model for providing the resources needed by
service-oriented customers. Platform virtualization is one of the essential characteristics
of cloud computing; virtualization touches the factors affecting business performance, such
as information technology resources, hardware, software, operating systems and services.
Cloud computing is characterized by distributed computing, and this also carries the risk of
bottlenecks during the allocation of resources for load balancing. Preventing the risk of
overloading in cloud computing keeps resources highly available. Access requests can arrive
simultaneously, and "competition" for the idle resources of the cloud cannot be avoided. We
have simulated and analyzed the impact of cloud parameters on the performance of load
balancing. From this, we found that the makespan (runtime) parameter is of great significance
for the cloud data center. The task of researchers is therefore to study effective load
balancing algorithms that reduce the makespan of the virtual machines. Research on adjusting
these parameters to improve load balancing performance is also research aimed at solving the
imbalance problem in cloud computing environments.
REFERENCES
[1] P. Srinivasa Rao, V.P.C Rao and A. Govardhan, "Dynamic Load Balancing With Central
Monitoring of Distributed Job Processing System", International Journal of Computer
Applications (0975-8887), Volume 65, No. 21, March 2013.
[2] Agraj Sharma and Sateesh K. Peddoju, "Response Time Based Load Balancing in Cloud
Computing", 2014 International Conference on Control, Instrumentation, Communication and
Computational Technologies (ICCICCT).
[3] Huankai Chen, Frank Wang, Na Helian and Gbola Akanmu, "User-Priority Guided Min-Min
Scheduling Algorithm For Load Balancing in Cloud Computing", Parallel Computing
Technologies, 2013 National Conference.
[4] Dhinesh Babu and Venkata Krishna P., "Honey bee behavior inspired load balancing of
tasks in cloud computing environments", Applied Soft Computing, vol. 13, 2013, pp. 2292-2303.
[5] Gaochao Xu, Junjie Pang and Xiaodong Fu, "A Load Balancing Model Based on Cloud
Partitioning for the Public Cloud", Tsinghua Science and Technology, ISSN 1007-0214 04/12,
pp. 34-39, Volume 18, Number 1, February 2013.
[6] Hiren H. Bhatt and Hitesh A. Bheda, "Enhance Load Balancing using Flexible Load Sharing
in Cloud Computing", 2015 1st International Conference on Next Generation Computing
Technologies (NGCT-2015), Dehradun, India, 4-5 September 2015.
[7] Ritu Kapur, "A Workload Balanced Approach for Resource Scheduling in Cloud Computing",
2015 Eighth International Conference on Contemporary Computing (IC3), 20-22 Aug. 2015.
[8] Rodrigo N. Calheiros, Rajiv Ranjan, Anton Beloglazov, Cesar A. F. De Rose and Rajkumar
Buyya, "CloudSim: a toolkit for modeling and simulation of cloud computing environments and
evaluation of resource provisioning algorithms", Software: Practice and Experience, 2011,
41:23-50, published online 24 August 2010 in Wiley Online Library
(wileyonlinelibrary.com). DOI: 10.1002/spe.995.
AUTHORS
Tran Cong Hung was born in Vietnam in 1961. He received the B.E. in electronic and
telecommunication engineering with first class honors from HOCHIMINH University of
Technology, Vietnam, in 1987, and the B.E. in informatics and computer engineering from the
same university in 1995. He received the Master of Engineering degree in telecommunications
engineering from the postgraduate department of Hanoi University of Technology, Vietnam, in
1998, and the Ph.D. from Hanoi University of Technology, Vietnam, in 2004. His main research
areas are B-ISDN performance parameters and measuring methods, QoS in high-speed networks,
and MPLS. He is currently an Associate Professor, Ph.D., at the Faculty of Information
Technology II, Posts and Telecoms Institute of Technology in HOCHIMINH, Vietnam.

Nguyen Xuan Phi was born in Vietnam in 1980. He received his Master's degree from the Posts &
Telecommunications Institute of Technology in Ho Chi Minh, Vietnam, in 2012, majoring in
Networking and Data Transmission. He is currently a Ph.D. candidate in Information Systems at
the Posts & Telecommunications Institute of Technology, Vietnam. He is working at the
Information Technology Center of AGRIBANK in Ho Chi Minh, Vietnam.