Cloud computing gives on-demand, metered access to computing resources; through virtualization it offers users fast, elastic resources that adapt to a wide range of applications. To ensure productive computation, task scheduling is critical in a cloud infrastructure environment. The main aims of task execution are to reduce execution time and conserve infrastructure; for large applications in particular, workflow scheduling has drawn considerable attention in both business and scientific domains. Hence, in this research work we design and develop an optimal load balancing in parallel computing (OLBP) mechanism to distribute the load: first, the different parameters of the workload are computed, and then the loads are distributed. The OLBP mechanism treats makespan time and energy as constraints, and task offloading is done considering server speed. This balances the workflow; the OLBP mechanism is evaluated using the CyberShake workflow dataset and outperforms existing workflow mechanisms.
An efficient cloudlet scheduling via bin packing in cloud computing (IJECE, IAES)
In this ever-developing technological world, one way to manage and deliver services is through cloud computing, a massive web of heterogeneous autonomous systems that comprise an adaptable computational design. Cloud computing can be improved through task scheduling, though it is the most challenging aspect to improve. Better task scheduling can improve response time, reduce power consumption and processing time, enhance makespan and throughput, and increase profit by reducing operating costs and raising system reliability. This study aims to improve job scheduling by transforming the job scheduling problem into a bin packing problem. Three modified implementations of bin-packing algorithms for task scheduling (MBPTS) were proposed, based on the minimisation of makespan. The results, obtained with the open-source simulator CloudSim, demonstrated that the proposed MBPTS was able to balance loads, reduce waiting time and makespan, and improve resource utilisation in comparison with current scheduling algorithms such as particle swarm optimisation (PSO) and first come first serve (FCFS).
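The abstract above maps task scheduling onto bin packing. As a rough illustration of that idea (not the paper's actual MBPTS algorithms), a first-fit-decreasing heuristic can pack tasks onto VMs so that the largest per-VM load, i.e. the makespan, stays small; the task lengths, capacity, and fallback rule below are invented for the example.

```python
def first_fit_decreasing(task_lengths, vm_capacity, num_vms):
    """Assign tasks (longest first) to the first VM with spare capacity;
    the makespan is the largest resulting per-VM load."""
    loads = [0] * num_vms
    assignment = {}
    # Sort tasks by length, largest first (the "decreasing" part).
    for task_id, length in sorted(enumerate(task_lengths),
                                  key=lambda t: t[1], reverse=True):
        # First fit: first VM whose load stays within capacity.
        vm = next((v for v in range(num_vms)
                   if loads[v] + length <= vm_capacity), None)
        if vm is None:
            # No VM has room; fall back to the least-loaded VM.
            vm = min(range(num_vms), key=loads.__getitem__)
        loads[vm] += length
        assignment[task_id] = vm
    return loads, assignment

loads, _ = first_fit_decreasing([7, 5, 4, 3, 2, 2], vm_capacity=12, num_vms=2)
print(max(loads))  # makespan: 12
```

Sorting before packing is what distinguishes first-fit-decreasing from plain first-fit, and it typically yields a noticeably tighter makespan.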
Recently, much interest has been shown by researchers in improving workload scheduling on cloud platforms. However, executing scientific workflows on a cloud platform is time-consuming and expensive. As users are charged by the hour of usage, much research has emphasized minimizing processing time to reduce cost. The processing cost can also be reduced by minimizing energy consumption, especially when resources are heterogeneous in nature; very limited work has considered optimizing cost, energy, and processing time together while meeting task quality of service (QoS) requirements. This paper presents a cost and performance aware workload scheduling (CPA-WS) technique for heterogeneous cloud platforms: a cost optimization model that minimizes processing time and energy dissipation for task execution. Experiments are conducted using two widely used workflows, Inspiral and CyberShake. The outcomes show that CPA-WS significantly reduces energy, time, and cost in comparison with standard workload scheduling models.
Multi-objective tasks scheduling using bee colony algorithm in cloud computing (IJECE, IAES)
This document presents a new approach for scheduling multi-objective tasks in cloud computing using an artificial bee colony algorithm. The proposed algorithm aims to optimize response time, schedule length ratio, and efficiency. It models tasks as bees that are assigned to processing elements in data centers to minimize completion time while balancing resource loads. The results showed the bee colony algorithm achieved better performance than other scheduling methods in cloud computing environments.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIME (IJCCSA)
To improve the performance of cloud computing, many parameters and issues must be considered, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Resource planning can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive more efficient resource distribution and load balancing, one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict upcoming response times, giving a more effective way of resolving resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response time, showing that prediction is a valuable direction for load balancing.
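The APRA idea of routing requests by predicted response time can be sketched as follows. Note this is only illustrative: a simple exponential-smoothing forecast stands in for the paper's ARIMA model, and the node names, histories, and threshold are made up.

```python
def forecast(history, alpha=0.5):
    """One-step-ahead exponential-smoothing forecast (stand-in for ARIMA)."""
    level = history[0]
    for value in history[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def pick_node(histories, threshold):
    """Route to a node predicted to stay under the response-time threshold;
    among those, pick the lowest predicted response time. If every node is
    predicted over threshold, fall back to the least-bad one."""
    predictions = {node: forecast(h) for node, h in histories.items()}
    under = {n: p for n, p in predictions.items() if p < threshold}
    pool = under if under else predictions
    return min(pool, key=pool.get)

# Invented per-node response-time histories (milliseconds).
histories = {"node-a": [120, 130, 125, 160], "node-b": [90, 95, 100, 98]}
print(pick_node(histories, threshold=150))  # prints "node-b"
```

A real deployment would refit the time-series model periodically and predict several steps ahead, but the threshold-gated routing decision has the same shape.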
The cloud environment offers an appropriate setting for running a huge range of scientific applications. However, the major challenge in existing workflows is to assign resources to tasks in a well-organized way, so that finishing time is low and the load on every virtual machine is balanced. To overcome this problem, GA_MINMIN has been proposed, combining the features of the GA and Min-Min scheduling algorithms. The algorithm is fundamentally a three-layer structure: at the first level, a genetic algorithm distributes resources in an optimized way; at the second level, the execution order of the tasks is resolved based on their size with the help of Min-Min; at the third level, all virtual machines run in parallel so that task response time is decreased. The proposed algorithm has been executed in a simulation environment.
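The Min-Min layer described above can be sketched on its own (the GA layer is omitted here). The execution-time matrix below is invented; `min_min` repeatedly schedules the task with the smallest achievable completion time.

```python
def min_min(exec_times):
    """Min-Min scheduling. exec_times[t][m] is the time of task t on
    machine m; on each round, schedule the (task, machine) pair with the
    smallest completion time given current machine ready times."""
    num_machines = len(exec_times[0])
    ready = [0.0] * num_machines            # when each machine frees up
    unscheduled = set(range(len(exec_times)))
    schedule = {}
    while unscheduled:
        task, machine = min(
            ((t, m) for t in unscheduled for m in range(num_machines)),
            key=lambda tm: ready[tm[1]] + exec_times[tm[0]][tm[1]])
        ready[machine] += exec_times[task][machine]
        schedule[task] = machine
        unscheduled.remove(task)
    return schedule, max(ready)

schedule, makespan = min_min([[3, 5], [2, 4], [6, 1]])
print(makespan)  # 5
```

Min-Min favours short tasks first, which is why GA_MINMIN layers a genetic search on top to counteract its tendency to overload the fastest machine.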
A cloud computing scheduling and its evolutionary approaches
Despite the increasing use of cloud computing technology because of the unique features it offers its customers, exploiting its full potential is difficult due to many problems and challenges; resource scheduling is one of them. Researchers still find it difficult to determine which scheduling algorithms are appropriate and effective and help increase system performance. This paper provides a broad and detailed examination of resource scheduling algorithms in the cloud computing environment and highlights the advantages and disadvantages of some algorithms, to help researchers select the best algorithm to schedule a particular workload so as to satisfy quality of service, guarantee good utilization of cloud resources, and minimize the makespan.
Reliable and efficient webserver management for task scheduling in edge-cloud... (IJECE, IAES)
Developing cloud webserver management that executes workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task. A body of work has been presented for scheduling workflows in heterogeneous cloud environments. Moreover, rapid developments in cloud computing such as edge-cloud computing create new ways to schedule workflows in a heterogeneous cloud environment to process different tasks such as IoT, event-driven applications, and other network applications. Current workflow scheduling methods fail to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel web server resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments conducted on complex bioinformatics workflows show a significant reduction in cost and energy by the proposed REWM in comparison with standard webserver management methodologies.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in Cloud (IRJET Journal)
This document describes using a meta-heuristic optimization algorithm, the Bat Algorithm (BA), to schedule workflows in cloud computing environments. The BA optimizes a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better, finding schedules with lower execution times and higher reliability within the given budget constraints. The BA suits this problem because, like other metaheuristics, it can efficiently search large solution spaces and automatically focus on promising regions.
Optimization of energy consumption in cloud computing datacenters (IJECE, IAES)
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure, and services. This has led to the establishment of datacenters with substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenters using energy-efficient allocation of tasks to resources. It develops formal optimization models that minimize the energy consumption of computational resources and evaluates existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. We then use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability (SAT) based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models save up to 38% in energy consumption compared with common techniques such as round robin. Results also show that generic ILP solvers outperform SAT-based ILP solvers, especially as the number of tasks and resources grows.
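On a toy instance, the kind of assignment the ILP models capture can be imitated by exhaustive search: pick the task-to-core assignment minimizing total active-plus-idle core power under a capacity constraint. The power figures and horizon below are invented, and the brute-force search merely stands in for a real ILP solver.

```python
from itertools import product

def total_power(assign, task_times, p_active, p_idle, horizon):
    """Power over the horizon: each core draws p_active while busy and
    p_idle for the rest of the horizon."""
    busy = [0.0] * len(p_active)
    for task, core in enumerate(assign):
        busy[core] += task_times[task]
    return sum(p_active[c] * busy[c] + p_idle[c] * (horizon - busy[c])
               for c in range(len(p_active)))

def best_assignment(task_times, p_active, p_idle, horizon):
    """Exhaustively search feasible task-to-core assignments (each core's
    busy time must fit in the horizon) for the minimum-power one."""
    cores = range(len(p_active))
    feasible = (a for a in product(cores, repeat=len(task_times))
                if all(sum(task_times[t] for t, c in enumerate(a) if c == core)
                       <= horizon for core in cores))
    return min(feasible, key=lambda a: total_power(
        a, task_times, p_active, p_idle, horizon))

# Two tasks, two cores: packing both onto the cheap core wins here,
# because the expensive core's idle draw dominates otherwise.
print(best_assignment([2, 3], p_active=[10, 20], p_idle=[1, 5], horizon=5))
```

The brute force is exponential in the number of tasks, which is exactly why the paper hands the same formulation to ILP and SAT solvers instead.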
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNC Journal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to address the problem of task allotment. The overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
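A minimal sketch of the combined idea, assuming the simplest possible reading of the abstract: order tasks shortest-first, then serve them round-robin with a fixed quantum so long tasks cannot starve the queue. Burst times and the quantum are illustrative.

```python
from collections import deque

def sjf_round_robin(bursts, quantum):
    """SJF ordering + round-robin service: queue tasks shortest-first,
    give each up to `quantum` units per turn, requeue unfinished ones.
    Returns the completion time of each task."""
    queue = deque(sorted(range(len(bursts)), key=lambda t: bursts[t]))
    remaining = list(bursts)
    clock = 0
    finish = [0] * len(bursts)
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])
        clock += run
        remaining[task] -= run
        if remaining[task] > 0:
            queue.append(task)       # not done: back of the queue
        else:
            finish[task] = clock
    return finish

print(sjf_round_robin([6, 2, 4], quantum=3))  # [12, 2, 9]
```

The shortest-first ordering keeps average waiting time low (task 1 finishes at time 2), while the quantum guarantees the long task still makes steady progress.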
Multi-objective load balancing in cloud infrastructure through fuzzy based de... (IAES IJAI)
Cloud computing has become a popular technology which influences not only product development but also makes technology business easy. Services such as infrastructure, platform, and software reduce the complexity of the technology requirements of any ecosystem. As the users of cloud-based services increase, the complexity of the back-end technologies also increases. Heterogeneous user requirements for various configurations create different load-unbalancing issues. Hence, effective load balancing in a cloud system with respect to time and space becomes crucial, as imbalance adversely affects system performance. Since user requirements and expected performance are multi-objective, decision-making tools such as fuzzy logic yield good results, as they use human procedural knowledge in decision making. Overall system performance can be further improved by dynamic resource scheduling using an optimization technique such as a genetic algorithm.
A combined computing framework for load balancing in multi-tenant cloud eco-... (IJECE, IAES)
Since the world is getting digitalized, cloud computing has become a core part of it; massive amounts of data are processed, stored, and transferred over the internet daily. Cloud computing has become popular because of its quality and enhanced capability to improve data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in existing cloud traffic management approaches and in how data is managed during service execution. The study introduces two distinct research models, a data center virtualization framework under a multi-tenant cloud ecosystem (DCVF-MT) and a collaborative workflow of multi-tenant load balancing (CW-MTLB), with analytical research modeling. The execution flow considers a set of algorithms for both models that address the core problems of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome illustrates that DCVF-MT outperforms the one-to-one approach by approximately 24.778% in traffic scheduling, yields a 40.33% performance improvement in managing cloudlet handling time, and attains an overall 8.5133% performance improvement in resource cost optimization, which is significant for ensuring the adaptability of the frameworks in future cloud applications where adequate virtualization and resource mapping will be required.
Demand-driven Gaussian window optimization for executing preferred population... (IJECE, IAES)
Scheduling is one of the essential enabling techniques for cloud computing, facilitating efficient resource utilization among the jobs scheduled for processing. However, it experiences performance overheads due to inappropriate provisioning of resources to requesting jobs. It is essential that cloud performance be achieved through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window, where heterogeneous jobs are scheduled in round-robin fashion on different cloud clusters. The clusters are heterogeneous, having datacenters with varying server capacity. Performance evaluation results show that the proposed algorithm enhances the QoS of the computing model. Allocating jobs to specific clusters improves system throughput and reduces latency.
Scientific workload execution on a distributed computing platform such as a cloud environment is time-consuming and expensive. A scientific workload has task dependencies with different service level agreement (SLA) prerequisites at different levels. Existing workload scheduling (WS) designs are not efficient at assuring SLA at the task level and, moreover, induce higher costs because most scheduling mechanisms reduce either time or energy alone. To reduce cost, both energy and makespan must be optimized together when allocating resources. No prior work has considered optimizing energy and processing time together while meeting task-level SLA requirements. This paper presents a task level energy and performance assurance workload scheduling (TLEPA-WS) algorithm for the distributed computing environment. TLEPA-WS guarantees energy minimization while meeting the performance requirement of the parallel application in a distributed computational environment. Experimental results show a significant reduction in energy and makespan, thereby reducing the cost of workload execution in comparison with various standard workload execution models.
A Review on Scheduling in Cloud Computing
Cloud computing provides software, infrastructure, and platform as a service on a pay-per-use basis according to client requirements. The main goal of scheduling is to achieve accuracy and correctness in task completion; scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey covers a far-reaching range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud, and workflow scheduling aimed at minimizing energy cost and improving the efficiency and throughput of the system.
A Review on Scheduling in Cloud Computing
This document reviews scheduling techniques in cloud computing. It discusses key concepts like virtualization and different scheduling algorithms. The review surveys various scheduling algorithms for tasks, workflows, real-time applications and energy optimization. It analyzes algorithms for load balancing, fault tolerance and resource utilization to improve performance metrics like makespan, cost and energy consumption. The document concludes that effective scheduling is important in cloud computing to provide on-demand services and complete tasks accurately and on time.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Task scheduling is an important aspect of improving resource utilization in cloud computing. This paper proposes a divide-and-conquer based approach to the heterogeneous earliest finish time (HEFT) algorithm. The proposed system works in two phases. In the first phase, it ranks the incoming tasks with respect to their size. In the second phase, it assigns and manages each task on a virtual machine, taking the idle time of the respective virtual machine into consideration. This yields more effective resource utilization in cloud computing. Experimental results using the CyberShake scientific workflow show that the proposed divide-and-conquer HEFT (DCHEFT) performs better than HEFT in terms of task finish time and response time.
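The two phases described above can be sketched roughly as follows. This is not the paper's DCHEFT implementation; it is a simplified list-scheduling stand-in that ranks tasks by size and places each on the VM giving the earliest finish time, with invented task sizes and VM speeds.

```python
def rank_and_assign(task_sizes, vm_speeds):
    """Phase 1: rank tasks by size, largest first. Phase 2: place each
    task on the VM yielding the earliest finish time, given when that VM
    becomes idle and how fast it runs."""
    available = [0.0] * len(vm_speeds)   # when each VM next goes idle
    schedule = []
    for task in sorted(range(len(task_sizes)),
                       key=lambda t: task_sizes[t], reverse=True):
        finish = [available[v] + task_sizes[task] / vm_speeds[v]
                  for v in range(len(vm_speeds))]
        vm = min(range(len(vm_speeds)), key=finish.__getitem__)
        available[vm] = finish[vm]
        schedule.append((task, vm))
    return schedule, max(available)

schedule, makespan = rank_and_assign([10, 4, 6], vm_speeds=[2, 1])
print(makespan)  # 7.0
```

Accounting for each VM's idle time, rather than its raw speed alone, is what lets the slower VM absorb mid-sized tasks while the fast VM is still busy.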
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT
Currently cloud computing has turned into a promising technology and has become a great key to satisfying flexible, service-oriented, online provisioning and storage of computing resources and user information at lower expense, with a dynamic framework on a pay-per-use basis. In this technology the job scheduling problem is a critical issue: for well-organized managing and handling of resources and administration, scheduling plays a vital role. This paper sets out an improved hyper-heuristic scheduling approach to schedule resources, taking account of computation time and makespan, with two detection operators. The operators are used to select the low-level heuristics automatically. A Conditional Revealing Algorithm (CRA) is applied to find job failures while allocating the resources. We believe the proposed hyper-heuristic achieves better results than the individual heuristics.
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT
The document proposes a hyper-heuristic method for scheduling jobs in a cloud environment. It combines two low-level heuristics - Ant Colony Optimization and Particle Swarm Optimization - and uses two operators, intensification and diversity revealing, to select the heuristics. It also uses a conditional revealing operator to identify job failures while allocating resources. The hyper-heuristic aims to achieve better results than individual heuristics in terms of lower makespan time.
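The intensification/diversification selection of low-level heuristics can be sketched generically. This toy loop is an assumption, not the paper's method: it keeps the current heuristic while it improves the solution and switches after a run of non-improving steps; the example heuristics simply shrink an integer toward zero.

```python
def hyper_heuristic(state, cost, heuristics, iterations=200, patience=5):
    """Generic hyper-heuristic loop. `heuristics` is a list of functions
    state -> state; keep applying the current one while it improves the
    cost (intensification), switch to the next after `patience`
    non-improving steps (diversification)."""
    best, best_cost = state, cost(state)
    idx, stall = 0, 0
    for _ in range(iterations):
        candidate = heuristics[idx](best)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost, stall = candidate, c, 0   # intensify
        else:
            stall += 1
            if stall >= patience:                      # diversify
                idx = (idx + 1) % len(heuristics)
                stall = 0
    return best, best_cost

# Toy low-level "heuristics" on an integer state, standing in for
# real schedule-mutating moves such as ACO or PSO steps.
decrement = lambda x: x - 1 if x > 0 else x
halve = lambda x: x // 2
best, best_cost = hyper_heuristic(40, abs, [decrement, halve], iterations=100)
print(best)  # 0
```

In a real scheduler, `state` would be a task-to-VM mapping, `cost` the makespan, and the low-level heuristics full metaheuristic moves; the selection machinery above stays the same.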
Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy... (IRJET Journal)
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
Elastic neural network method for load prediction in cloud computing grid (IJECEIAES)
Cloud computing still has no standard definition, yet it is concerned with on-demand delivery of resources and services over the Internet or a network. It has gained much popularity in the last few years due to rapid growth in technology and the Internet. Many technical challenges within cloud computing are yet to be tackled, such as virtual machine migration, server association, fault tolerance, scalability, and availability. Our main concern in this research is balancing server load; spreading the load between the various nodes of a distributed system helps to utilize resources, improve job response time, and enhance scalability and user satisfaction. A load-rebalancing algorithm with dynamic resource allocation is presented to adapt to the changing needs of a cloud environment. This research presents a modified elastic adaptive neural network (EANN) with modified adaptive smoothing of errors to build an evolving system that predicts virtual machine load. To evaluate the proposed balancing method, we conducted a series of simulation studies using a cloud simulator and made comparisons with previously suggested approaches. The experimental results show that the suggested method significantly outperforms the existing approaches.
The document proposes an Earthquake Disaster Based Resource Scheduling (EDBRS) framework for efficiently allocating cloud computing resources during earthquake disasters. The framework aims to minimize execution costs and times of cloud workloads by prioritizing urgent workloads related to emergency response. It models the resource scheduling problem and considers factors like workload deadlines, resource speeds and costs. The framework also presents algorithms for optimally assigning equal-length and variable-length workloads across multiple public and private cloud resources to balance performance and cost. The goal is to efficiently allocate cloud resources to disaster response zones based on urgency to reduce loss of life during earthquakes.
This document proposes an Earthquake Disaster Based Resource Scheduling (EDBRS) framework for efficiently allocating cloud computing resources during earthquake disasters. The framework prioritizes resource allocation based on the urgency of workloads, with more urgent workloads related to earthquake response and rescue receiving resources first. An algorithm is proposed that schedules resources to workloads based on this urgency criterion. The algorithm aims to reduce the execution time and costs of cloud workloads submitted during disasters as compared to existing scheduling algorithms. The performance of the proposed algorithm is evaluated using CloudSim simulation software, and it is shown to outperform existing algorithms.
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (Editor IJCATR)
This document summarizes a research paper that proposes an optimized ant colony optimization (ACO) algorithm for task scheduling in cloud computing. The goal is to minimize makespan and cost while improving fairness and load balancing. The ACO algorithm is adapted to prioritize and fairly allocate tasks to machines based on their performance. Simulations show the proposed ACO algorithm reduces makespan by 80% compared to Berger and greedy algorithms. It also increases processor utilization and balances loads across machines better than the other algorithms. The researchers conclude the optimized ACO approach improves resource usage and user satisfaction for task scheduling in cloud computing.
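The fair-allocation idea summarized above can be illustrated with a minimal pheromone-guided scheduler. This is a generic ACO sketch, not the paper's actual algorithm; the task lengths, machine speeds, and parameter values are illustrative assumptions.

```python
import random

def aco_schedule(task_lens, speeds, ants=20, iters=50, evap=0.5, seed=1):
    """Pheromone-guided task-to-machine assignment that minimises makespan."""
    random.seed(seed)
    n, m = len(task_lens), len(speeds)
    tau = [[1.0] * m for _ in range(n)]          # pheromone per (task, machine)
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ant in range(ants):
            finish = [0.0] * m
            assign = []
            for t in range(n):
                # desirability = pheromone * heuristic (prefer fast, idle machines)
                w = [tau[t][j] * speeds[j] / (1.0 + finish[j]) for j in range(m)]
                j = random.choices(range(m), weights=w)[0]
                finish[j] += task_lens[t] / speeds[j]
                assign.append(j)
            span = max(finish)
            if span < best_span:
                best, best_span = assign, span
        # evaporate all trails, then reinforce the best-so-far assignment
        for t in range(n):
            for j in range(m):
                tau[t][j] *= (1.0 - evap)
        for t, j in enumerate(best):
            tau[t][j] += 1.0 / best_span
    return best, best_span

# toy instance: five task lengths, one slow and one fast machine
assign, makespan = aco_schedule([4, 8, 2, 6, 3], [1.0, 2.0])
```

Reinforcing only the best-so-far tour is one common ACO variant; the original paper may use a different pheromone update.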
Because of the rapid growth in technology breakthroughs, including
multimedia and cell phones, Telugu character recognition (TCR) has recently
become a popular study area. It is still necessary to construct automated and
intelligent online TCR models, even if many studies have focused on offline
TCR models. The Telugu character dataset construction and validation using
an Inception and ResNet-based model are presented. The collection of 645
letters in the dataset includes 18 Achus, 38 Hallus, 35 Othulu, 34×16
Guninthamulu, and 10 Ankelu. The proposed technique aims to efficiently
recognize and identify distinctive Telugu characters online. This model's main
pre-processing steps to achieve its goals include normalization, smoothing,
and interpolation. Improved recognition performance can be attained by using
stochastic gradient descent (SGD) to optimize the model's hyperparameters.
Predicting human emotions in real-world scenarios requires investigating human subjects. A significant number of psychological affects (feelings) must be produced before human emotions are directly released. The development of affect theory suggests that one must be aware of one's sentiments and emotions to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data onto actual feelings. Any change in emotional affect directly elicits a response in the body's physiological systems. The approach is based on the notion of Gaussian mixture models (GMM). The
statistical reaction following data processing, quantitative findings on emotion
labels, and coincidental responses with training samples all directly impact the
outcomes that are accomplished. In terms of statistical parameters such as
population mean and standard deviation, the suggested method is evaluated
compared to a technique considered state-of-the-art. The proposed system determines an individual's emotional state after a minimum of six learning iterations of the Gaussian expectation-maximization (GEM) statistical model, in which the iterations continue until the error tends to zero. Each of these improves predictions while simultaneously increasing the amount of value extracted.
Similar to Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment
Optimization of energy consumption in cloud computing datacenters (IJECEIAES)
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenter using energy efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of Generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models have saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance when compared to SAT-based ILP solvers especially as the number of tasks and resources grow in size.
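The kind of model described above can be sketched as a tiny exhaustive search standing in for an ILP solver. The power figures, core counts, and one-core-per-task assumption are illustrative, not the paper's formulation; a powered-on server's idle cores draw idle power, while a fully empty server is switched off.

```python
from itertools import product

def placement_power(assign, cores, p_active=10.0, p_idle=4.0):
    """Power of one task-to-server assignment: active cores draw p_active,
    idle cores on powered-on servers draw p_idle, empty servers are off."""
    used = [0] * len(cores)
    for s in assign:
        used[s] += 1                      # one core per task (assumption)
    power = 0.0
    for s, c in enumerate(cores):
        if used[s] == 0:
            continue                      # server powered off
        power += used[s] * p_active + (c - used[s]) * p_idle
    return power

def min_power_placement(n_tasks, cores, **kw):
    """Brute-force search over all assignments (tiny instances only)."""
    best, best_p = None, float("inf")
    for assign in product(range(len(cores)), repeat=n_tasks):
        # capacity constraint: no server may exceed its core count
        if any(assign.count(s) > c for s, c in enumerate(cores)):
            continue
        p = placement_power(assign, cores, **kw)
        if p < best_p:
            best, best_p = assign, p
    return best, best_p

# three one-core tasks, two 4-core servers: consolidation beats round robin
best, power = min_power_placement(3, cores=[4, 4])
```

Consolidating all three tasks on one server (3 active + 1 idle core) costs 34.0 here, versus 50.0 for a round-robin 2/1 split, which mirrors the abstract's observation that the optimized model beats round robin.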
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to resolve the problem of task allotment. The overall performance of cloud systems is rooted in the efficiency of task-scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
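A minimal sketch of such a hybrid, assuming jobs enter the Round Robin cycle in Shortest-Job-First order; the burst times and quantum below are illustrative, not from the paper.

```python
from collections import deque

def sjf_round_robin(bursts, quantum=4):
    """Round Robin over an SJF-ordered ready queue: the shortest jobs enter
    the cycle first (cutting average waiting time), while the time quantum
    prevents starvation of long jobs. Returns completion time per job,
    keyed by the job's original index."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # SJF ordering
    queue = deque((i, bursts[i]) for i in order)
    clock, finish = 0, {}
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))   # preempted, back of the queue
        else:
            finish[i] = clock
    return finish

finish = sjf_round_robin([10, 3, 7], quantum=4)
# job 1 (burst 3) finishes first at t=3; job 0 (burst 10) finishes last at t=20
```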
Multi-objective load balancing in cloud infrastructure through fuzzy based de... (IAESIJAI)
Cloud computing has become a popular technology that influences not only
product development but also makes technology business easy. Services such
as infrastructure, platform, and software can reduce the complexity of the
technology requirements of any ecosystem. As the number of users of
cloud-based services increases, the complexity of the back-end technologies
also increases. The heterogeneous requirements of users, in terms of various
configurations, create load-unbalancing issues. Hence, effective load
balancing in a cloud system with respect to time and space becomes crucial,
as imbalance adversely affects system performance. Since user requirements
and expected performance are multi-objective, decision-making tools like
fuzzy logic yield good results, as fuzzy logic uses human procedural
knowledge in decision making. The overall system performance can be further
improved by dynamic resource scheduling using an optimization technique
such as a genetic algorithm.
A combined computing framework for load balancing in multi-tenant cloud eco-... (IJECEIAES)
Since the world is getting digitalized, cloud computing has become a core part of it. Massive data is processed, stored, and transferred over the internet on a daily basis. Cloud computing has become quite popular because of its superlative quality and enhanced capability to improve data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in the existing cloud traffic management approaches and in how data is managed during service execution. The study introduces two distinct research models: the data center virtualization framework under a multi-tenant cloud ecosystem (DCVF-MT) and the collaborative workflow of multi-tenant load balancing (CW-MTLB), with analytical research modeling. The sequence of execution flow considers a set of algorithms for both models that address the core problem of load balancing and resource allocation in the cloud computing (CC) ecosystem. The research outcome illustrates that DCVF-MT outperforms the one-to-one approach by approximately 24.778% in traffic scheduling. It also yields a 40.33% performance improvement in managing cloudlet handling time. Moreover, it attains an overall 8.5133% performance improvement in resource cost optimization, which is significant for ensuring the adaptability of the frameworks in futuristic cloud applications where adequate virtualization and resource mapping will be required.
Demand-driven Gaussian window optimization for executing preferred population... (IJECEIAES)
Scheduling is one of the essential enabling techniques for cloud computing, facilitating efficient resource utilization among the jobs scheduled for processing. However, it experiences performance overheads due to inappropriate provisioning of resources to requesting jobs. It is essential that the performance of the cloud be accomplished through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window in which heterogeneous jobs are scheduled in round-robin fashion on different cloud clusters. The clusters are heterogeneous, having datacenters with varying server capacity. Performance evaluation results show that the proposed algorithm has enhanced the QoS of the computing model. Allocation of jobs to specific clusters has improved the system throughput and reduced the latency.
Scientific workload execution on a distributed computing platform such as a
cloud environment is time-consuming and expensive. A scientific workload
has task dependencies with different service level agreement (SLA)
prerequisites at different levels. Existing workload scheduling (WS) designs
are not efficient in assuring SLA at the task level and, moreover, induce
higher costs, as the majority of scheduling mechanisms reduce either time or
energy alone. To reduce cost, both energy and makespan must be optimized
together when allocating resources. No prior work has considered optimizing
energy and processing time together while meeting task-level SLA
requirements. This paper presents a task-level energy and performance
assurance workload scheduling (TLEPA-WS) algorithm for the distributed
computing environment. TLEPA-WS guarantees energy minimization while
meeting the performance requirement of the parallel application under a
distributed computational environment. Experimental results show a
significant reduction in energy and makespan, thereby reducing the cost of
workload execution in comparison with various standard workload execution
models.
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing provides software, infrastructure, and platform as a service
on a pay-per-use basis according to client requirements. The main goal of
scheduling is to achieve accuracy and correctness in task completion.
Scheduling in the cloud environment enables the various cloud services to
support framework implementation. Thus, a far-reaching survey of the
different types of scheduling algorithms in the cloud computing environment
is presented, including workflow scheduling and grid scheduling. The survey
gives an elaborate idea of grid, cloud, and workflow scheduling to minimize
the energy cost and improve the efficiency and throughput of the system.
A Review on Scheduling in Cloud Computing (ijujournal)
This document reviews scheduling techniques in cloud computing. It discusses key concepts like virtualization and different scheduling algorithms. The review surveys various scheduling algorithms for tasks, workflows, real-time applications and energy optimization. It analyzes algorithms for load balancing, fault tolerance and resource utilization to improve performance metrics like makespan, cost and energy consumption. The document concludes that effective scheduling is important in cloud computing to provide on-demand services and complete tasks accurately and on time.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Task scheduling is an important aspect of improving the utilization of resources in cloud computing. This paper proposes a divide-and-conquer-based approach for the heterogeneous earliest finish time (HEFT) algorithm. The proposed system works in two phases. In the first phase, it assigns ranks to the incoming tasks with respect to their size. In the second phase, it assigns and manages each task on a virtual machine with consideration of the idle time of the respective virtual machine. This helps to achieve more effective resource utilization in cloud computing. Experimental results using the CyberShake scientific workflow demonstrate that the proposed divide-and-conquer HEFT (DCHEFT) performs better than HEFT in terms of task finish time and response time.
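The two-phase idea can be sketched as follows. This is a simplified stand-in, not the paper's DCHEFT implementation: tasks are ranked by size and each ranked task is handed to the virtual machine that becomes idle earliest; the task sizes and VM speeds are illustrative.

```python
import heapq

def rank_and_assign(task_sizes, vm_speeds):
    """Phase 1: rank tasks by size (largest first).
    Phase 2: give each ranked task to the VM that becomes idle earliest,
    tracking when each VM frees up via a min-heap."""
    ranked = sorted(range(len(task_sizes)),
                    key=lambda i: task_sizes[i], reverse=True)
    idle = [(0.0, v) for v in range(len(vm_speeds))]   # (time VM frees up, vm id)
    heapq.heapify(idle)
    assignment = {}
    for t in ranked:
        free_at, v = heapq.heappop(idle)               # earliest-idle VM
        assignment[t] = v
        heapq.heappush(idle, (free_at + task_sizes[t] / vm_speeds[v], v))
    finish = {v: done for done, v in idle}             # one entry per VM remains
    return assignment, finish

# four tasks on one slow and one fast VM
assignment, finish = rank_and_assign([4, 8, 2, 6], vm_speeds=[1.0, 2.0])
```

With these inputs the biggest task (index 1) lands on VM 0 and the rest pack onto the faster VM 1, finishing at times 8.0 and 6.0 respectively.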
Early diagnosis of cancer is a major requirement for patients and a
complicated job for the oncologist; a patient diagnosed early is more likely
to survive. For a few decades, fuzzy logic has emerged as an emphatic
technique for identifying diseases such as different types of cancer, since
the recognition of cancer mostly operates under inexactness, inaccuracy, and
vagueness. This paper presents the design and implementation of a fuzzy
expert system (FES) for the detection of prostate cancer. Specifically,
(FES) and its implementation for the detection of prostate cancer. Specifically,
prostate-specific antigen (PSA), prostate volume (PV), age, and percentage
free PSA (%FPSA) are used to determine prostate cancer risk (PCR), while
PCR serves as an output parameter. Mamdani fuzzy inference method is used
to calculate a range of PCR. The system provides a scale of risk of prostate
cancer and clears the path for the oncologist to determine whether their
patients need a biopsy. The system is fast, requiring minimal calculation and
comparatively little time, which reduces mortality and morbidity; it is more
reliable than other economical systems and can be used frequently by doctors.
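The Mamdani pipeline described above (fuzzify the inputs, fire rules with min, aggregate with max, defuzzify by centroid) can be sketched with just two inputs and two rules. All membership ranges and rule choices here are illustrative assumptions, not the paper's calibrated FES.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def prostate_risk(psa, age):
    """Two-rule Mamdani sketch on a 0..100 risk scale (illustrative ranges).
    R1: IF psa is high AND age is high THEN risk is high
    R2: IF psa is low                  THEN risk is low"""
    psa_low = tri(psa, -1, 0, 6)
    psa_high = tri(psa, 4, 10, 21)
    age_high = tri(age, 50, 75, 101)
    r1 = min(psa_high, age_high)      # AND is min in Mamdani inference
    r2 = psa_low
    num = den = 0.0
    for r in range(101):              # discretised output universe 0..100
        mu = max(min(r1, tri(r, 50, 100, 151)),   # clipped "high risk" set
                 min(r2, tri(r, -51, 0, 50)))     # clipped "low risk" set
        num += mu * r
        den += mu
    return num / den if den else 0.0  # centroid defuzzification

high = prostate_risk(psa=12, age=80)  # elevated PSA, older patient
low = prostate_risk(psa=1, age=55)    # low PSA
```

A real FES would add PV and %FPSA as inputs and many more rules; the structure stays the same.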
The biomedical profession has gained importance due to the rapid and accurate diagnosis of clinical patients using computer-aided diagnosis (CAD) tools.
The diagnosis and treatment of Alzheimer’s disease (AD) using complementary multimodalities can improve the quality of life and mental state of patients.
In this study, we integrated a lightweight custom convolutional neural network
(CNN) model and nature-inspired optimization techniques to enhance the performance, robustness, and stability of progress detection in AD. A multi-modal
fusion database approach was implemented, including positron emission tomography (PET) and magnetic resonance imaging (MRI) datasets, to create a fused
database. We compared the performance of custom and pre-trained deep learning models with and without optimization and found that employing natureinspired algorithms like the particle swarm optimization algorithm (PSO) algorithm significantly improved system performance. The proposed methodology,
which includes a fused multimodality database and optimization strategy, improved performance metrics such as training, validation, test accuracy, precision, and recall. Furthermore, PSO was found to improve the performance of
pre-trained models by 3-5% and custom models by up to 22%. Combining different medical imaging modalities improved the overall model performance by
2-5%. In conclusion, a customized lightweight CNN model and nature-inspired
optimization techniques can significantly enhance progress detection, leading to
better biomedical research and patient care.
Class imbalance is a pervasive issue in the field of disease classification from
medical images. It is necessary to balance out the class distribution while training a model. However, in the case of rare medical diseases, images from affected
patients are much harder to come by compared to images from non-affected
patients, resulting in unwanted class imbalance. Various processes of tackling
class imbalance issues have been explored so far, each having its fair share of
drawbacks. In this research, we propose an outlier detection based image classification technique which can handle even the most extreme case of class imbalance. We have utilized a dataset of malaria parasitized and uninfected cells. An
autoencoder model titled AnoMalNet is trained with only the uninfected cell images at the beginning and then used to classify both the affected and non-affected
cell images by thresholding a loss value. We have achieved an accuracy, precision, recall, and F1 score of 98.49%, 97.07%, 100%, and 98.52% respectively,
performing better than large deep learning models and other published works.
As our proposed approach can provide competitive results without needing the
disease-positive samples during training, it should prove to be useful in binary
disease classification on imbalanced datasets.
Recently, plant identification has become an active trend due to encouraging
results achieved in plant species detection and plant classification fields
among numerous available plants using deep learning methods. Therefore,
plant classification analysis is performed in this work to address the problem
of accurate plant species detection in the presence of multiple leaves together,
flowers, and noise. Thus, a convolutional neural network based deep feature
learning and classification (CNN-DFLC) model is designed to analyze
patterns of plant leaves and perform classification using generated fine-grained feature weights. The proposed CNN-DFLC model precisely estimates which plant species a given image belongs to. Several layers and
blocks are utilized to design the proposed CNN-DFLC model. Fine-grained
feature weights are obtained using convolutional and pooling layers. The
obtained feature maps in training are utilized to predict labels and model
performance is tested on the Vietnam plant image (VPN-200) dataset. This
dataset consists of a total number of 20,000 images and testing results are
achieved in terms of classification accuracy, precision, recall, and other
performance metrics. The mean classification accuracy obtained using the
proposed CNN-DFLC model is 96.42% considering all 200 classes from the
VPN-200 dataset.
Big data as a service (BDaaS) platform is widely used by various
organizations for handling and processing the high volume of data generated
from different internet of things (IoT) devices. Data generated from these IoT
devices are kept in the form of big data with the help of cloud computing
technology. Researchers are putting efforts into providing a more secure and
protected access environment for the data available on the cloud. In order to
create a safe, distributed, and decentralised environment in the cloud,
blockchain technology has emerged as a useful tool. In this research paper, we
have proposed a system that uses blockchain technology as a tool to regulate
data access that is provided by BDaaS platforms. We are securing the access
policy of data by using a modified form of ciphertext policy-attribute based
encryption (CP-ABE) technique with the help of blockchain technology. For
secure data access in BDaaS, algorithms have been created using a mix of CP-ABE with blockchain technology. The proposed smart contract algorithms are
implemented using Eclipse 7.0 IDE and the cloud environment has been
simulated on the CloudSim tool. Key generation time, encryption time,
and decryption time have been calculated and compared with an access control
mechanism without blockchain technology.
Internet of things (IoT) has become one of the eminent phenomena in human
life, along with its collaboration with wireless sensor networks (WSNs). Due
to enormous growth in the domain, there has been a demand to address
various issues such as energy consumption, redundancy, and
overhead. Data aggregation (DA) is considered the basic mechanism for
reducing energy consumption and communication overhead; however,
security plays an important role where node security is essential due to the
volatile nature of WSN. Thus, we design and develop proximate node aware
secure data aggregation (PNA-SDA). In the PNA-SDA mechanism, additional
data is used to secure the original data, and further information is shared with
the proximate node; moreover, further security is achieved by updating the
state each time. Moreover, the node that does not have updated information is
considered as the compromised node and discarded. PNA-SDA is evaluated
considering different parameters such as average energy consumption and
average number of dead nodes; also, a comparative analysis is carried out with the
existing model in terms of throughput and correct packet identification.
Drones offer a promising direction in security applications since they are
capable of conducting autonomous aerial investigations. A recent
advancement in unmanned aerial vehicle (UAV) communication is the internet
of drones combined with 5G networks. Because of the rapid adoption of
advanced computing frameworks alongside 5G networks, user
information is continuously refreshed and shared. Thus, safety
and confidentiality are vital among clients, requiring an efficient authentication
methodology built on a robust security key. Conventional procedures
suffer from several limitations in handling attack patterns during
data transmission over internet of drones (IoD) environmental
frameworks. A hyperelliptic curve (HEC) cryptography-based
authentication system is proposed to provide protected data services among
drones. The proposed method has been compared with the existing methods
in terms of packet loss rate, computational cost, and delay and thereby
provides better insight into efficient and secure communication. Finally, the
simulation results show that our strategy is efficient in both computation and
communication.
Monitoring behavior, numerous actions, or any such information is considered
as surveillance and is done for information gathering, influencing, managing,
or directing purposes. Citizens employ surveillance to safeguard their
communities. Governments do this for the purposes of intelligence collection,
including espionage, crime prevention, the defense of a method, a person, a
group, or an item; or the investigation of criminal activity. Using an internet
of things (IoT) rover instead of humans, an area can be secured with better
secrecy and efficiency, providing an additional layer of safety. In this
paper, there is a discussion about an IoT rover for remote surveillance based
around a Raspberry Pi microprocessor which will be able to monitor a
closed/open space. This rover will allow safer survey operations and would
help to reduce the risks involved with it.
In a world where climate change looms large the spotlight often shines on
greenhouse gases, but the shadow of man-made aerosols should not be
underestimated. These tiny particles play a pivotal role in disrupting Earth's
radiative equilibrium, yet many mysteries surround their influence on various
physical aspects of our planet. The root of these mysteries lies in the limited
data we have on aerosol sources, formation processes, conversion dynamics,
and collection methods. Aerosols, composed of particulate matter (PM),
sulfates, and nitrates, hold significant sway across the hemisphere. Accurate
measurement demands the refinement of in-situ, satellite, and ground-based
techniques. As aerosols interact intricately with the environment, their full
impact remains an enigma. Enter a groundbreaking study in Morocco that
dared to compare an internet of things (IoT) system with satellite-based
atmospheric models, with a focus on fine particles below 10 and 2.5
micrometers in diameter. The initial results, particularly in regions abundant
with extraction pits, shed light on the IoT system's potential to decode
aerosols' role in the grand narrative of climate change. These findings inspire
hope as we confront the formidable global challenge of climate change.
The use of technology has a significant impact in reducing the consequences of
accidents. Sensors, small components that detect interactions experienced by
various components, play a crucial role in this regard. This study focuses on
how the MPU6050 sensor module can be used to detect the movement of
people who are falling, defined as the inability of the lower body, including
the hips and feet, to support the body effectively. An airbag system is
proposed to reduce the impact of a fall. The data processing method in this
study involves the use of a threshold value to identify falling motion. The
results of the study have identified a threshold value for falling motion,
including an acceleration relative (AR) value of less than or equal to 0.38 g,
an angle slope of more than or equal to 40 degrees, and an angular velocity
of more than or equal to 30 °/s. The airbag system is designed to inflate
faster than the time of impact, with a gas flow rate of 0.04876 m³/s and an
inflating time of 0.05 s. The overall system has a specificity value of 100%,
a sensitivity of 85%, and an accuracy of 94%.
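The threshold-based fall test described above can be sketched as a simple predicate. The threshold values (0.38 g, 40 degrees, 30 °/s) come from the study; the function and variable names are illustrative assumptions.

```python
def is_falling(acceleration_relative_g, angle_slope_deg, angular_velocity_dps):
    """Return True when all three thresholds indicate a falling motion."""
    return (acceleration_relative_g <= 0.38      # near free-fall acceleration
            and angle_slope_deg >= 40            # large body tilt
            and angular_velocity_dps >= 30)      # fast rotation

# Example readings: free-fall-like acceleration, large tilt, fast rotation.
print(is_falling(0.30, 55, 42))  # falling -> True
print(is_falling(0.95, 5, 3))    # normal walking -> False
```

In a real deployment these inputs would come from the MPU6050's accelerometer and gyroscope, sampled continuously; the airbag trigger would fire on the first True result.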
The fundamental principle of the paper is that the soil moisture sensor obtains
the moisture content level of the soil sample. The water pump is automatically
activated if the moisture content is insufficient, which causes water to flow
into the soil. The water pump is immediately turned off when the moisture
content is high enough. Smart home, smart city, smart transportation, and
smart farming are just a few of the new intelligent ideas that internet of things
(IoT) includes. The goal of this method is to increase productivity and
decrease manual labour among farmers. In this paper, we present a system for
monitoring and regulating water flow that employs a soil moisture sensor to
keep track of soil moisture content as well as the land’s water level to keep
track of and regulate the amount of water supplied to the plant. The device
also includes an automated LED lighting system.
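The moisture-threshold pump logic described above can be sketched with a hysteresis band so the pump does not rapidly toggle near a single cutoff. The threshold values and names are assumptions; a real system reads the sensor through an ADC and drives the pump through a relay.

```python
DRY_THRESHOLD = 30   # percent moisture below which the pump turns on
WET_THRESHOLD = 60   # percent moisture at which the pump turns off

def pump_command(moisture_percent, pump_on):
    """Hysteresis control: turn on when dry, off when wet enough."""
    if moisture_percent < DRY_THRESHOLD:
        return True
    if moisture_percent >= WET_THRESHOLD:
        return False
    return pump_on  # between thresholds, keep the current state

print(pump_command(20, False))  # dry soil -> True (pump on)
print(pump_command(75, True))   # wet soil -> False (pump off)
```

The gap between the two thresholds is the design choice that prevents chattering when the reading hovers around a single setpoint.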
In order to provide sensing services to low-powered IoT devices, wireless sensor networks (WSNs) organize specialized transducers into networks. Energy usage is one of the most important design concerns in WSN because it is very hard to replace or recharge the batteries in sensor nodes. For an energy-constrained network, the clustering technique is crucial in preserving battery life. By strategically selecting a cluster head (CH), a network's load can be balanced, resulting in decreased energy usage and extended system life. Although clustering has been predominantly used in the literature, the concept of chain-based clustering has not yet been explored. As a result, in this paper, we employ a chain-based clustering architecture for data dissemination in the network. Furthermore, for CH selection, we employ the coati optimisation algorithm, which was recently proposed and has demonstrated significant improvement over other optimization algorithms. In this method, the parameters considered for selecting the CH are energy, node density, distance, and the network’s average energy. The simulation results show tremendous improvement over the competitive cluster-based routing algorithms in the context of network lifetime, stability period (first node dead), transmission rate, and the network's power reserves.
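A cluster-head fitness score over the four parameters named above (residual energy, node density, distance, and the network's average energy) might look like the sketch below. The weights and the scoring formula are assumptions for illustration; the coati optimisation algorithm itself searches over candidate assignments scored in this spirit, and in practice the inputs would be normalized to comparable scales.

```python
def ch_fitness(energy, density, distance, avg_energy, w=(0.4, 0.2, 0.3, 0.1)):
    """Higher is better: favour high energy and density, penalize distance."""
    w1, w2, w3, w4 = w
    return (w1 * energy + w2 * density
            - w3 * distance + w4 * (energy / avg_energy))

nodes = [
    {"id": 1, "energy": 0.9, "density": 5, "distance": 40},
    {"id": 2, "energy": 0.5, "density": 8, "distance": 15},
    {"id": 3, "energy": 0.7, "density": 3, "distance": 60},
]
avg_e = sum(n["energy"] for n in nodes) / len(nodes)
best = max(nodes, key=lambda n: ch_fitness(n["energy"], n["density"],
                                           n["distance"], avg_e))
print("selected cluster head:", best["id"])
```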
The construction industry is an industry that is always surrounded by
uncertainties and risks. It is an industry with a complex, tedious layout and techniques characterized by
unpredictable circumstances. It comprises a variety of human talents and the
coordination of different areas and activities associated with it. In this
competitive era of the construction industry, delays and cost overruns are
common in nearly every project, and their causes are often the same.
One of the problems we try to address is the improper
handling of materials at the construction site. In this paper, we propose
developing a system that is capable of tracking construction material on site
that would benefit the contractor and client for better control over inventory
on-site and to minimize loss of material that occurs due to theft and misplacing
of materials.
Today, health monitoring relies heavily on technological advancements. This
study proposes a low-power wide-area network (LPWAN) based, multi-nodal
health monitoring system to monitor vital physiological data. The suggested
system consists of two nodes, an indoor node, and an outdoor node, and the
nodes communicate via long range (LoRa) transceivers. Outdoor nodes use an
MPU6050 module, heart rate, oxygen pulse, temperature, and skin resistance
sensors and transmit sensed values to the indoor node. We transferred the data
received by the master node to the cloud using the Adafruit cloud service. The
system can operate with a coverage of 4.5 km, where the optimal distance
between outdoor sensor nodes and the indoor master node is 4 km. To further
predict fall detection, various machine learning classification techniques have
been applied. Upon comparing various classifier techniques, the decision tree
method achieved an accuracy of 0.99864 with a training and testing ratio of
70:30. By developing accurate prediction models, we can identify high-risk
individuals and implement preventative measures to reduce the likelihood of
a fall occurring. Remote monitoring of the health and physical status of elderly
people has proven to be the most beneficial application of this technology.
The effectiveness of adaptive filters is mainly dependent on the design
techniques and the adaptation algorithm. The most common adaptation
technique used is least mean square (LMS) due to its computational simplicity.
The application depends on the adaptive filter configuration used; such filters
are well known for system identification and real-time applications. In this work, a
modified delayed μ-law proportionate normalized least mean square
(DMPNLMS) algorithm has been proposed. It is an improved version of the
µ-law proportionate normalized least mean square (MPNLMS) algorithm.
The algorithm is realized using Ladner-Fischer type of parallel prefix
logarithmic adder to reduce the silicon area. The simulation and
implementation of very large-scale integration (VLSI) architecture are done
using MATLAB, Vivado suite and complementary metal–oxide–
semiconductor (CMOS) 90 nm technology node using Cadence RTL and
Genus Compiler respectively. The DMPNLMS method exhibits a reduction
in mean square error, a higher rate of convergence, and more stability. The
synthesis results demonstrate that it is area and delay effective, making it
practical for applications where a faster operating speed is required.
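For context, the plain LMS update that the proposed DMPNLMS variant builds on is the tap recursion w ← w + μ·e·x, shown below in the classic system-identification configuration. This sketches ordinary LMS only, not the proposed algorithm; the toy "unknown system", step size, and names are illustrative assumptions.

```python
import random

random.seed(0)
unknown = [0.5, -0.3, 0.2]   # unknown FIR system to identify
w = [0.0, 0.0, 0.0]          # adaptive filter taps
mu = 0.05                    # step size

buf = [0.0, 0.0, 0.0]        # tap-delay line of recent inputs
for _ in range(5000):
    x = random.uniform(-1, 1)
    buf = [x] + buf[:-1]                            # shift input in
    d = sum(h * u for h, u in zip(unknown, buf))    # desired signal
    y = sum(wi * u for wi, u in zip(w, buf))        # filter output
    e = d - y                                       # estimation error
    w = [wi + mu * e * u for wi, u in zip(w, buf)]  # LMS tap update

print([round(wi, 2) for wi in w])  # taps converge toward [0.5, -0.3, 0.2]
```

Proportionate variants such as MPNLMS scale each tap's step size individually (favouring large taps), which speeds convergence for sparse systems; the delayed form trades latency for pipelined hardware.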
The increasing demand for faster, robust, and efficient devices, and the development of enabling technology for mass production, confronts industrial circuit-design research with challenges like size, efficiency, power, and scalability. This paper presents a design and analysis of a low-power, high-speed full adder using negative capacitance field-effect transistors. A comprehensive study is performed with adiabatic logic and reversible logic. The performance of the full adder is studied with the metal oxide semiconductor field-effect transistor (MOSFET) and the negative capacitance field-effect transistor (NCFET). The NCFET-based full adder offers low power and high speed compared with the conventional MOSFET. The complete design and analysis are performed using Cadence Virtuoso. The adiabatic logic offers a low delay of 0.023 ns and the reversible logic offers a low power of 7.19 mW.
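The function being implemented above is the standard one-bit full adder; its behavioural truth logic (not a transistor-level model) can be stated in a few lines:

```python
def full_adder(a, b, cin):
    """Return (sum, carry_out) for one-bit inputs a, b and carry-in cin."""
    s = a ^ b ^ cin                      # sum is the three-way XOR
    cout = (a & b) | (cin & (a ^ b))     # carry when two or more inputs are 1
    return s, cout

# Full truth table
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```

The NCFET/MOSFET and adiabatic/reversible comparisons in the paper are different physical realizations of exactly this Boolean function.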
The global agriculture system faces significant challenges in meeting the
growing demand for food production, particularly given projections that
global food demand will increase by about 70% by 2050. Hydroponic farming is an
increasingly popular technique in this field, offering a promising solution to
these challenges. This paper presents an improvement on the traditional
hydroponic method by providing a system that monitors and controls the key
growth elements to help plants grow smoothly. The proposed system is
efficient and user-friendly enough to be used by anyone.
It combines a traditional hydroponic system,
an automatic control system and a smartphone. The primary objective is to
develop a smart system capable of monitoring and controlling potential
hydrogen (pH) levels, a key factor that affects hydroponic plant growth.
Ultimately, this paper offers an alternative approach to address the challenges
of the existing agricultural system and promote the production of clean,
disease-free, and healthy food for a better future.
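The pH monitoring-and-control loop described above reduces to keeping readings inside a target band and dosing when they drift out. The band (5.5 to 6.5, a typical hydroponic range) and the dosing commands below are assumptions about one reasonable implementation, not the paper's exact design.

```python
PH_LOW, PH_HIGH = 5.5, 6.5   # assumed acceptable band for hydroponic growth

def ph_action(ph_reading):
    """Decide which dosing pump (if any) to trigger for a pH reading."""
    if ph_reading < PH_LOW:
        return "dose pH-up solution"
    if ph_reading > PH_HIGH:
        return "dose pH-down solution"
    return "no action"

print(ph_action(5.0))  # too acidic  -> dose pH-up solution
print(ph_action(6.0))  # in range    -> no action
print(ph_action(7.2))  # too basic   -> dose pH-down solution
```

In the proposed system this decision would run on the controller, with the current reading and action pushed to the smartphone app.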
One advantage of the open computing language (OpenCL) software framework is its ability to run on different architectures. Field programmable gate arrays (FPGAs) are a high-speed computing architecture used for computation acceleration. This work develops a set of eight benchmarks (memory synchronization functions, explained in this study) using an OpenCL framework to study the effect of memory access time on overall performance when targeting the general FPGA computing platform. The results indicate the best synchronization mechanism to be adopted to synthesize the proposed design on the FPGA computation architecture. The proposed research results also demonstrate the effectiveness of using a task-parallel model approach to avoid using high-cost synchronization mechanisms within proposed designs that are constructed on the general FPGA computation platform.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMS
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
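The risk calculation performed in an assessment like the one above is commonly the product of likelihood and impact ratings per asset. The sketch below shows that pattern; the asset names, 1-5 rating scales, example ratings, and level cutoffs are illustrative assumptions, not values from the paper.

```python
def risk_score(likelihood, impact):
    """Both inputs on a 1-5 scale; return (score, coarse rating)."""
    score = likelihood * impact
    if score >= 15:
        level = "high"
    elif score >= 8:
        level = "medium"
    else:
        level = "low"
    return score, level

# Hypothetical smart-irrigation assets with (likelihood, impact) ratings.
assets = {
    "soil moisture sensor": (4, 3),  # exposed in the field, moderate impact
    "irrigation actuator": (2, 5),   # less exposed, severe impact if hijacked
    "gateway": (3, 4),
}
for name, (lik, imp) in assets.items():
    print(name, "->", risk_score(lik, imp))
```

Ranking assets by this score is what drives the prioritisation of the countermeasures the paper recommends.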
6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of machine learning.
Advanced control scheme of doubly fed induction generator for wind turbine us...
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
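Of the three controllers compared above, the proportional integral (PI) controller is the simplest. The sketch below shows a discrete PI loop tracking a unit reference on a toy first-order plant; the gains, time step, and plant model are illustrative assumptions, not the paper's DFIG tuning.

```python
def make_pi(kp, ki, dt):
    """Return a stateful discrete PI controller step function."""
    integral = 0.0
    def step(setpoint, measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt          # accumulate the integral term
        return kp * error + ki * integral
    return step

pi = make_pi(kp=2.0, ki=0.5, dt=0.01)
y = 0.0
for _ in range(5000):
    u = pi(1.0, y)          # track a unit power reference
    y += 0.01 * (u - y)     # toy first-order plant, Euler integration
print(round(y, 2))
```

The integral term is what removes the steady-state error that a pure proportional controller would leave; the paper's sliding mode variants trade this simplicity for robustness to parameter changes.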
A review on techniques and modelling methodologies used for checking electrom...
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Low power architecture of logic gates using adiabatic techniques
The growing significance of portable systems and the need to limit power consumption in ultra-large-scale-integration chips of very high density have recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of basic gates, one can improve the performance of the whole system. In this paper, proposed circuit designs of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches, and their results are analyzed for power dissipation, delay, power-delay product, and rise time, and compared with other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with the DC-DB PFAL technique outperform the modified PFAL techniques at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
Embedded machine learning-based road conditions and driving behavior monitoring
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and Long Short-Term Memory (LSTM) algorithms. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
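The four metrics reported above are all derived from confusion-matrix counts. The sketch below shows the standard formulas; the counts are made-up illustrative numbers, not the paper's DNP3 results.

```python
def metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of flagged intrusions, how many real
    recall = tp / (tp + fn)             # of real intrusions, how many caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts for a balanced test split.
acc, prec, rec, f1 = metrics(tp=95, fp=5, fn=5, tn=95)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

For intrusion detection, recall matters most (a missed attack is costlier than a false alarm), which is why papers report all four rather than accuracy alone.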
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, the study examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. It critically analyzes primary and secondary research documents to elaborate the role of
China's geo-economic outreach in Central Asian countries and its future prospects. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence over other
governments, thanks to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
International Conference on NLP, Artificial Intelligence, Machine Learning an...
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows setups to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farmer support the theoretical work and highlight its benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Optimized load balancing mechanism in parallel computing for workflow in cloud computing environment
International Journal of Reconfigurable and Embedded Systems (IJRES)
Vol. 12, No. 2, July 2023, pp. 276~286
ISSN: 2089-4864, DOI: 10.11591/ijres.v12.i2.pp276-286
Journal homepage: http://ijres.iaescore.com
Optimized load balancing mechanism in parallel computing for
workflow in cloud computing environment
Asma Anjum, Asma Parveen
Department of Computer Science and Engineering, Khaja Banda Nawaz College of Engineering, Kalaburagi, India
Article Info ABSTRACT
Article history:
Received Jul 29, 2022
Revised Oct 15, 2022
Accepted Dec 10, 2022
Cloud computing gives on-demand access to computing resources in a
metered and dynamically scalable way; it empowers clients to access fast and
flexible resources through virtualization and is widely adaptable to various
applications. Further, to provide assurance of productive computation, task
scheduling is very important in the cloud infrastructure environment.
Moreover, the main aim of the task execution phase is to reduce execution
time and conserve infrastructure; further, considering huge applications,
workflow scheduling has drawn considerable attention in business as well as
scientific areas. Hence, in this research work, we design and develop an
optimized load balancing in parallel computation, aka optimal load balancing
in parallel computing (OLBP), mechanism to distribute the load; at first,
different parameters of the workload are computed, and then loads are
distributed. Further, the OLBP mechanism considers makespan time and
energy as constraints, and task offloading is done considering the server
speed. This provides balancing of the workflow; further, the OLBP
mechanism is evaluated using the CyberShake workflow dataset and
outperforms existing workflow mechanisms.
Keywords:
Cloud computing
Load balancing
Makespan
OLBP
Parallel computation
This is an open access article under the CC BY-SA license.
Corresponding Author:
Asma Anjum
Department of Computer Science and Engineering, Khaja Banda Nawaz College of Engineering
Rauza Buzurg, Kalaburagi, Karnataka, India
Email: asmacs13@gmail.com
1. INTRODUCTION
Cloud computing (CC) is a specific computing paradigm that provides complete
management and services over the internet; further, cloud computing is also referred to as a fine computing
environment with the characteristics of being convenient, shared, and on-demand, with services that include
servers, storage, applications, and networks. Moreover, these can be provided in a rapid and
efficient manner without much effort [1]. The cloud provides various applications and resources; a virtual machine (VM) distributes
various resources such as processing cores, the system bus, and so on; however, these resources are restricted
by processing power [2]. Cloud computing has various applications from an industrial as
well as an academic point of view; moreover, in order to take advantage of computing
resources, workflows are designed. Workflows comprise a large number of dependent and independent tasks and
possess huge applications for scientific as well as business purposes; these applications include bio-
informatics, medicine, weather forecasting, astronomy, and so forth [3], [4].
Figure 1 shows a simple workflow; a scientific workflow is presented in the performance
evaluation section. Figure 1 shows that there are 9 tasks. A simple workflow is easy to schedule;
however, scientific workflows are quite complicated, for example the Montage, Epigenomics, SIPHT, and CyberShake
workflows applied in various research areas like earthquakes, astronomy, and so on. Moreover, a scientific workflow
involves complex data of various sizes and requires higher processing power; nevertheless, by
providing on-demand virtual resources, the cloud-computing paradigm helps in handling these workflows.
Figure 1. Simple workflow
Early work on load balancing was focused more on traditional approaches like dynamic load
balancing, round robin, ant colony optimization, and first come first serve (FCFS). However, the most common
approaches used were first input first output (FIFO) and weighted round robin (WRR). The simulation and
modelling of cloud computing environments consisting of inter-networked and
single clouds is also supported, exposing a customized interface to implement load balancing and scheduling
policies in the VMs as well as a provisioning mechanism for VM allocation; see Abdullahi et al. [5] and Yaghmaee et al. [6]
for more details. Moreover, considering the importance of workflow scheduling, several studies were
carried out, which are extensively discussed in the literature review section; a few approaches are single or
hybrid, like meta-heuristic mechanisms, which produce better results in comparison with other models.
Moreover, methods like heterogeneous earliest finish time (HEFT) constitute a heterogeneous mechanism where each
task is mapped in priority order to a virtual machine, as explained in [7]–[9]. Another method, the gravitational
search algorithm (GSA), utilizes the law of gravitation: it starts with a random set of particles, each particle
provides a solution, and masses are computed using the fitness function, as explained by researchers [10]–[14].
Figure 2 shows the workflow of scheduling. Power consumption, simplicity, and critical-task handling are among
the various challenges derived from the cloud model [13], [14].
Figure 2. Workflow in cloud computing
Int J Reconfigurable & Embedded Syst, Vol. 12, No. 2, July 2023: 276-286
Workflow scheduling is an NP-complete problem that has been widely researched for various paradigms, including cluster computing and grid environments; a cloud-based environment allows scientific workflows to be scheduled efficiently. However, for large numbers of tasks and machines, obtaining the optimal solution through exact methods such as brute force is really expensive and lengthy. Metaheuristic approaches have been proposed, but they suffer from their own demerits. Motivated by these observations, in this research work we design and develop an optimized load balancing in parallel computation (OLBP) mechanism to distribute the load efficiently and improve workflow scheduling. The contributions of this work are as follows: first, we carry out an extensive survey of existing mechanisms for optimizing energy and makespan. Second, an optimized load-balancing scheme in parallel computing is designed so that loads are distributed evenly; OLBP performs task offloading based on server speed. Third, execution speed and energy consumption are taken as constraints and optimized jointly with the load-balancing approach. Finally, OLBP is evaluated on the CyberShake scientific workflow and its variants; four distinct instances are created and compared with an existing protocol to demonstrate the efficiency of the proposed mechanism. The first section of this paper gives the background of computation and the need for cloud computing, addresses the importance of scheduling mechanisms, and concludes with the motivation and value of the research. The second section examines the various current protocols and their shortcomings; the third section develops OLBP and its mathematical formulation. In the fourth section, OLBP is assessed over numerous instances.
2. LITERATURE SURVEY
In the cloud computing paradigm, task scheduling aims to identify the optimal mapping between tasks and virtual machines in accordance with the cloud model and user goals such as low execution time and high productivity. This can be achieved through load balancing and optimization mechanisms, as shown by Yin et al. [15] and Gao et al. [16]. Early single-optimization approaches include Min-Min by Patel et al. [17] and Max-Min by Mao et al. [18] and Kong et al. [19]; an improvement was presented in [20], where a segmented Min-Min approach was developed. There, tasks were pre-ordered by expected completion time, the resulting sequence was segmented, and Min-Min was applied to each segment; the algorithm outperformed plain Min-Min when task lengths varied dramatically, since longer tasks were executed first. Etminani and Naghibzadeh [21] adopted a scheduling mechanism that selects between Max-Min and Min-Min based on the standard deviation (SD) of expected completion times on the virtual machines. Devipriya and Ramesh [22] developed an improved Max-Min algorithm that schedules tasks with large execution times onto virtual machines with lower computing speed, successfully minimizing overall task execution time. Although these improved versions of Min-Min and Max-Min reduce task execution time and are easy to implement, they lead to load imbalance, and long-term overload degrades the quality of service, which is critical in workflow scheduling; for more details refer to [23], [24].
Multi-objective optimization approaches were created to address these drawbacks. The researchers in [25]–[27] created an LB-ACO algorithm that uses the ant colony optimization (ACO) mechanism; non-dominated sorting is then used to produce a Pareto solution set reflecting the trade-off between load balancing and makespan in a cloud computing environment. Furthermore, a task-scheduling mechanism based on a genetic-ACO hybrid was created in [28]. As described by Cui and Zhang [29], a genetic algorithm (GA) based task scheduling approach was developed using a two-dimensional coding mechanism, with genetic mutation and crossover operations designed to produce new offspring and improve population diversity. Li et al. [30] and Xiao [31] integrated three mechanisms, i.e., artificial bee colony (ABC), GA, and ACO, with a sharing module for exchanging the optimal solutions found by the three algorithms over a common solution space, improving convergence accuracy. Wu and Wang [32] developed an estimation of distribution algorithm (EDA) approach for solving parallel scheduling with priority constraints. From the literature survey we observe that most methods ignore load balancing and focus on a single objective such as task execution time or power optimization, as noted by Aziza and Krichen [33] and Long et al. [34].
3. METHOD
In this section, we design and develop an optimized load balancing mechanism in parallel computation; it distributes the load efficiently so that the various workflow-related constraints are fulfilled. Task scheduling and resource provisioning are the two obstacles that load balancing poses in any workflow. In addition, the key goals of cloud service providers are to deliver dependable services at an affordable price with improved resource usage, availability, and energy efficiency. For independent tasks the order of execution does not matter, but for dependent tasks the execution order is a priority: predecessor tasks must execute first, which makes scheduling more complicated. A directed acyclic graph (DAG) represents the workflow, where nodes indicate tasks and edges indicate the dependency between two tasks. Figure 3 shows the load balancing architecture, which comprises users, the data center controller (DCC), the load balancer where the proposed approach operates, the virtual machine (VM) manager which manages VMs, the VM monitor which tracks VM activity, and the physical servers; in this work we focus on designing the load-balancing part.
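To make the DAG view of a workflow concrete, the minimal sketch below (task names and edges are invented, loosely following the nine-task example of Figure 1) derives a dependency-respecting execution order using only the standard library:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical nine-task workflow: each key is a task (node), each
# value is the set of tasks it depends on (incoming edges).
workflow = {
    "t1": set(),
    "t2": {"t1"}, "t3": {"t1"}, "t4": {"t1"},
    "t5": {"t2"}, "t6": {"t3"}, "t7": {"t4"},
    "t8": {"t5", "t6"},
    "t9": {"t7", "t8"},
}

# Any valid schedule must execute every predecessor before its successor.
order = list(TopologicalSorter(workflow).static_order())
print(order)  # starts with "t1", ends with "t9"
```

Independent tasks (e.g. t2, t3, t4) may run in parallel on different VMs; dependent ones are serialized by the order above.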
Figure 3. General load balancing
3.1. Optimized load balancing in parallel computation (OLBP) steps
In this section, we go through the steps of the optimized load balancing algorithm. The steps of the OLBP algorithm are:
Step 1: Initiate.
Step 2: Create the analytical paradigm for parallel and heterogeneous computing.
Step 3: Formulate the power utilization.
Step 4: Divide the designed analytical model into the two constraints.
Step 5: Use the task's fixed arrival rate to calculate the service rate.
Step 6: Calculate the number of waiting tasks, the task response time, the busy servers, the server usage, and the server speed.
Step 7: Design the load balancing through server speed; task offloading is then carried out considering makespan and power consumption.
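The speed-aware dispatch idea in the last two steps can be sketched as a greedy policy: every task goes to the heterogeneous server that would finish it earliest. This is only a simplified illustration (the task sizes, speeds, and the `dispatch` helper are invented; the full OLBP also weighs energy):

```python
def dispatch(task_sizes, server_speeds):
    """Greedily assign each task to the server with the earliest finish."""
    free_at = [0.0] * len(server_speeds)   # when each server becomes free
    schedule = []
    for size in task_sizes:
        # estimated finish time on server s: current backlog + size/speed
        s = min(range(len(server_speeds)),
                key=lambda s: free_at[s] + size / server_speeds[s])
        free_at[s] += size / server_speeds[s]
        schedule.append((s, free_at[s]))
    return schedule, max(free_at)          # makespan = latest finish time

schedule, makespan = dispatch([4.0, 2.0, 6.0, 2.0], [2.0, 1.0])
print(schedule, makespan)  # makespan 5.0: the faster server absorbs more load
```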
3.2. System model
To design the load balancing mechanism for workflows, a heterogeneous server model is used. Consider $m$ heterogeneous server groups $R_1, R_2, \ldots, R_m$ with speeds $r_1, r_2, \ldots, r_m$ and sizes $l_1, l_2, \ldots, l_m$; group $R_h$ consists of $l_h$ identical servers of speed $r_h$. Tasks arrive at rate $\Lambda_h$ with independent and identically distributed inter-arrival times, and the load distribution mechanism is partitioned into sub-mechanisms. Let $\bar{q}$ denote the average task size, so the mean execution time is $\bar{w}_h = \bar{q}/r_h$ and

$$\zeta_h = \frac{1}{\bar{w}_h} = \frac{r_h}{\bar{q}}$$

is the service rate, i.e., the average number of tasks finished by a server of $R_h$ in a given unit of time. The server utilization can then be computed as in (1); it can be read as the fraction of time a server is busy. Let $o_{h,j}$ be the probability that there are $j$ tasks in $R_h$, as given in (2).
$$\chi_h = \frac{\Lambda_h \bar{q}}{l_h r_h} \tag{1}$$

$$o_{h,j} = \begin{cases} o_{h,0}\,\dfrac{(l_h \chi_h)^j}{j!}, & j \le l_h \\[6pt] o_{h,0}\,\dfrac{l_h^{\,l_h}\, \chi_h^{\,j}}{l_h!}, & j > l_h \end{cases} \tag{2}$$

$$o_{h,0} = \left( \sum_{k=0}^{l_h - 1} \frac{(l_h \chi_h)^k}{k!} + \frac{(l_h \chi_h)^{l_h}}{l_h!} \cdot \frac{1}{1 - \chi_h} \right)^{-1} \tag{3}$$
Further, the probability of queueing in $R_h$ is given by (4):

$$O_{p,h} = o_{h,0}\,\frac{l_h^{\,l_h}}{l_h!} \cdot \frac{\chi_h^{\,l_h}}{1 - \chi_h} \tag{4}$$

The average number of tasks in $R_h$ is computed through (5):

$$\bar{L}_h = \sum_{j=0}^{\infty} j\,o_{h,j} = l_h \chi_h + \frac{\chi_h}{1 - \chi_h}\,O_{p,h} \tag{5}$$

The average task response time in the model is given as (6), which expands to (7):

$$S_h = \bar{w}_h \left( 1 + \frac{O_{p,h}}{l_h (1 - \chi_h)} \right) \tag{6}$$

$$S_h = \frac{\bar{q}}{r_h} \left( 1 + o_{h,0}\,\frac{l_h^{\,l_h - 1}}{l_h!} \cdot \frac{\chi_h^{\,l_h}}{(1 - \chi_h)^2} \right) \tag{7}$$
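Equations (1)-(7) are the standard multi-server (M/M/$l_h$) queueing formulas for one homogeneous group. As a numerical sketch (all parameter values below are invented for illustration), they can be evaluated directly:

```python
from math import factorial

def mmc_metrics(arrival, task_size, servers, speed):
    """Utilization, Erlang-C queueing probability, mean tasks, response time."""
    chi = arrival * task_size / (servers * speed)                  # Eq. (1)
    assert chi < 1, "the group must be stable"
    p0 = 1.0 / (sum((servers * chi) ** k / factorial(k)
                    for k in range(servers))
                + (servers * chi) ** servers
                / (factorial(servers) * (1 - chi)))                # Eq. (3)
    queue_prob = (p0 * servers ** servers / factorial(servers)
                  * chi ** servers / (1 - chi))                    # Eq. (4)
    mean_tasks = servers * chi + chi / (1 - chi) * queue_prob      # Eq. (5)
    response = (task_size / speed) * (1 + queue_prob
                                      / (servers * (1 - chi)))     # Eq. (6)
    return chi, queue_prob, mean_tasks, response

chi, qp, mean_tasks, response = mmc_metrics(arrival=3.0, task_size=1.0,
                                            servers=2, speed=2.0)
print(chi, mean_tasks, response)  # Little's law: mean_tasks == arrival * response
```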
3.3. Power consumption
Circuit delay and power dissipation in digital complementary metal-oxide-semiconductor (CMOS) circuits are modelled using a straightforward technique; the dynamic power consumption can be written as (8):

$$O = z B U^2 e \tag{8}$$

where $z$ is the activity factor, $B$ the loading capacitance, $U$ the supply voltage, and $e$ the clock frequency. Because the system here is heterogeneous, server group $R_h$ runs at speed $r_h$, and a server running at speed $r$ consumes dynamic power $\vartheta_h r^{\delta_h}$ in addition to a static power $O_h^{*}$. Two separate scenarios are considered. In the first scenario, a server consumes dynamic power only while it is executing a task, so the average power of $R_h$ is given by (9):

$$O_h = l_h O_h^{*} + \Lambda_h \bar{q}\, \vartheta_h r_h^{\,\delta_h - 1} \tag{9}$$

In the second scenario, even when there is no task left to complete, the servers continue to run at the specified speed $r_h$, so the consumption is calculated using (10):

$$O_h = l_h \left( O_h^{*} + \vartheta_h r_h^{\,\delta_h} \right) \tag{10}$$
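A small numeric sketch of (8) and the always-on scenario (10); every constant below is illustrative, not taken from the paper:

```python
def dynamic_power(z, B, U, e):
    """Eq. (8): dynamic CMOS power from activity factor z, load
    capacitance B, supply voltage U, and clock frequency e."""
    return z * B * U ** 2 * e

def power_always_on(l, static, theta, r, delta):
    """Eq. (10), second scenario: all l servers run at speed r even when
    idle, so each contributes its static power plus theta * r**delta."""
    return l * (static + theta * r ** delta)

p_dyn = dynamic_power(z=0.5, B=1e-9, U=1.1, e=2e9)
p_run = power_always_on(l=4, static=2.0, theta=0.5, r=2.0, delta=3.0)
print(p_dyn, p_run)  # roughly 1.21 W dynamic; 24.0 for the always-on group
```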
3.4. Optimal load balancing in parallel computing
Consider server group $R_h$ with $l_h$ servers whose speed is $r_{h,j}$ when there are $j$ tasks queued in the system; the execution speed must not be zero while tasks are present. The speed scheme $(r_{h,0}, r_{h,1}, r_{h,2}, \ldots)$ of $R_h$ directly represents and reflects the power and speed dependency on the workload. Three distinct special cases arise: if $r_{h,1} = r_{h,2} = r_{h,3} = \cdots = r_h$, the group uses a unit speed scheme; if $r_{h,0} = 0$, the servers are idle when the queue is empty; if $r_{h,0} = r_h$, the group is in constant speed mode. Additionally, the arrival rate (AR) and the state-dependent service rate (SR) are processed for further load balancing depending on speed and power, where $j$ refers to the number of tasks within the server model:

$$\zeta_{h,j} = \begin{cases} \dfrac{j\, r_{h,j}}{\bar{q}}, & 1 \le j \le l_h - 1 \\[6pt] \dfrac{l_h\, r_{h,j}}{\bar{q}}, & j \ge l_h \end{cases} \tag{11}$$
$$o_{h,j} = o_{h,0}\, \Lambda_h^{\,j} \left( \zeta_{h,1}\, \zeta_{h,2} \cdots \zeta_{h,j} \right)^{-1} \tag{12}$$

where $o_{h,0}$ can be formulated through (13):

$$o_{h,0} = \left( 1 + \sum_{j=1}^{l_h - 1} \frac{(\Lambda_h \bar{q})^j}{j!} \left( r_{h,1} r_{h,2} \cdots r_{h,j} \right)^{-1} + \sum_{j=l_h}^{\infty} \frac{(\Lambda_h \bar{q})^j}{l_h!\; l_h^{\,j - l_h}} \left( r_{h,1} r_{h,2} \cdots r_{h,j} \right)^{-1} \right)^{-1} \tag{13}$$
The average number of tasks in $R_h$ is given by (14):

$$\bar{M}_h = \sum_{j=1}^{\infty} j\, o_{h,j} \tag{14}$$

Little's law then provides the task's response time (RT):

$$M_h = \bar{M}_h\, \Lambda_h^{-1} \tag{15}$$

The average number of busy servers in $R_h$ is later calculated as (16):

$$\mathcal{A}_h = \sum_{j=0}^{l_h - 1} j\, o_{h,j} + \sum_{j=l_h}^{\infty} l_h\, o_{h,j} \tag{16}$$

Consequently, the server usage and the average execution speed follow as (17) and (18):

$$\chi_h = \mathcal{A}_h\, l_h^{-1} \tag{17}$$

$$\bar{r}_h = \sum_{j=0}^{\infty} o_{h,j}\, j\, r_{h,j} \tag{18}$$
Finally, the average power consumption is formulated for the two scenarios described in section 3.3:

$$O_h = l_h O_h^{*} + \vartheta_h \left( \sum_{j=0}^{l_h - 1} j\, o_{h,j}\, r_{h,j}^{\,\delta_h} + \sum_{j=l_h}^{\infty} l_h\, o_{h,j}\, r_{h,j}^{\,\delta_h} \right) \tag{19}$$

$$O_h = l_h O_h^{*} + l_h \vartheta_h \sum_{j=0}^{\infty} o_{h,j}\, r_{h,j}^{\,\delta_h} \tag{20}$$
Further, we compute the optimized server speed. Consider a speed scheme for $R_h$ represented as $\varphi_h = (a_{h,1}, \ldots, a_{h,c_h-1};\; r_{h,1}, \ldots, r_{h,c_h})$: the thresholds $a_{h,i}$ partition the queue length into $c_h$ regions, and within region $i$ every busy server of $R_h$ runs at speed $r_{h,i}$. The state-dependent service rate can then be formulated as (21):

$$\zeta_{h,j} = \begin{cases} \dfrac{j\, r_{h,1}}{\bar{q}}, & 1 \le j \le l_h - 1 \\[6pt] \dfrac{l_h\, r_{h,1}}{\bar{q}}, & l_h \le j \le a_{h,1} \\[6pt] \dfrac{l_h\, r_{h,i}}{\bar{q}}, & a_{h,i-1} + 1 \le j \le a_{h,i}, \quad 2 \le i \le c_h - 1 \\[6pt] \dfrac{l_h\, r_{h,c_h}}{\bar{q}}, & j \ge a_{h,c_h - 1} + 1 \end{cases} \tag{21}$$
Further, closed-form solutions are derived for quantities such as average power, average task response time, server utilization, and average execution speed; the average number of tasks is formulated as (22), and the task response time follows as (23):

$$\ddot{O}_h = \sum_{j=1}^{\infty} j\, o_{h,j} \tag{22}$$

$$\ddot{S}_h = \ddot{O}_h / \Lambda_h \tag{23}$$
Further, considering these parameters, we design the workload strategy for load balancing.
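As a numerical cross-check of (11)-(15), the birth-death chain with state-dependent speeds can be evaluated by truncating the infinite state space (this sketch and its parameter values are illustrative; with a constant speed it reduces to the homogeneous model of section 3.2):

```python
def queue_metrics(arrival, task_size, servers, speed_of, max_states=2000):
    """Steady-state empty probability, mean number of tasks, response time."""
    def zeta(j):                       # Eq. (11): service rate in state j
        return min(j, servers) * speed_of(j) / task_size
    weights = [1.0]                    # Eq. (12): o_{h,j} up to a constant
    for j in range(1, max_states):
        weights.append(weights[-1] * arrival / zeta(j))
    total = sum(weights)               # normalisation constant, Eq. (13)
    probs = [w / total for w in weights]
    mean_tasks = sum(j * p for j, p in enumerate(probs))    # Eq. (14)
    return probs[0], mean_tasks, mean_tasks / arrival       # Eq. (15)

# constant speed 2.0 in every state: an ordinary two-server M/M/c queue
p0, mean_tasks, response = queue_metrics(3.0, 1.0, 2, lambda j: 2.0)
print(p0, mean_tasks, response)
```

For these parameters the constant-speed result matches the closed-form M/M/2 values, which is a useful sanity check on the truncation.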
3.5. Workload designing strategy
In this section, we design and develop the workload balancing strategy, which optimizes two sub-problems, i.e., makespan time and energy; both are associated with load balancing. Optimized task offloading is then designed to solve the makespan and energy problems.
3.5.1. Makespan problem
In this section, we focus on the load balancing mechanism with makespan minimization: given the makespan objective, a load distribution, i.e., the set of arrival rates assigned to the server groups, must be found such that (24) holds:

$$F(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) = \Lambda, \qquad \chi_h < 1 \quad \forall\, 1 \le h \le m \tag{24}$$

The load balance is achieved through server speed and is dynamic. Given $m$ server groups with sizes and speed schemes $\varphi_h = (a_{h,1}, \ldots, a_{h,c_h-1};\; r_{h,1}, \ldots, r_{h,c_h})$ of $R_h$, power consumptions $O_1^{*}, O_2^{*}, \ldots, O_m^{*}$, task execution requirement $\bar{q}$, and total arrival rate $\Lambda$, the load distribution must satisfy (25) and is optimized in the process, leading to the Lagrange conditions (26) and (27):

$$F(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) = \Lambda_1 + \Lambda_2 + \cdots + \Lambda_m \tag{25}$$

$$\nabla S(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) = \Phi\, \nabla F(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) \tag{26}$$

$$\frac{\partial S}{\partial \Lambda_h} = \Phi\, \frac{\partial F}{\partial \Lambda_h} = \Phi \tag{27}$$

$\Phi$ is the Lagrange multiplier: given $\Phi$, we find the $\Lambda_1, \Lambda_2, \ldots, \Lambda_m$ satisfying (27), and verifying the constraint (24) generates the value of $\Phi$.
3.5.2. Energy optimization
Consider $m$ server groups with sizes $l_1, l_2, \ldots, l_m$ and speed schemes $\psi_h = (a_{h,1}, a_{h,2}, \ldots, a_{h,c_h-1};\; r_{h,1}, r_{h,2}, \ldots, r_{h,c_h})$ of $R_h$, $\forall\, 1 \le h \le m$. Given the power parameters $O_1^{*}, O_2^{*}, \ldots, O_m^{*}$, the task execution requirement $\bar{q}$, and the total arrival rate $\Lambda$, a load balance must be achieved such that the average energy consumption is minimized under the constraint (28):

$$\Lambda_1 + \Lambda_2 + \cdots + \Lambda_m = \Lambda, \qquad \chi_h < 1 \quad \forall\, 1 \le h \le m \tag{28}$$
Further, the above problem is minimized through a Lagrange multiplier, as denoted in (29):

$$\nabla S(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) = \Phi\, \nabla F(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) \tag{29}$$

$$\frac{\partial S}{\partial \Lambda_h} = \Phi\, \frac{\partial F}{\partial \Lambda_h} = \Phi$$
Moreover, it is observed that $\partial S / \partial \Lambda_h$ is a complex function of $\Lambda_h$, which makes an analytical solution highly improbable; hence the problem is solved through the following steps: (a) since $\partial S / \partial \Lambda_h$ is an increasing function of $\Lambda_h$ for a given $\Phi$, a bisection approach is used to find each $\Lambda_h$; (b) the computed sum $\Lambda_1 + \Lambda_2 + \cdots + \Lambda_m$ is used to verify the condition $F(\Lambda_1, \Lambda_2, \ldots, \Lambda_m) = \Lambda$; (c) this verification is employed to find $\Phi$, again through the bisection method; (d) the optimized derivative is given as (30):

$$\frac{\partial S}{\partial \Lambda_h} = \frac{1}{\Lambda} \left( O_h + \frac{\partial O_h}{\partial \Lambda_h} \right) \tag{30}$$
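Steps (a)-(c) can be sketched with a simplified server model. Here each group is treated as a single-server queue with service rate $\mu_h$ (a deliberate simplification of the paper's model; the rates below are invented), for which the inner condition $\partial S / \partial \Lambda_h = \Phi$ has a closed form, leaving only the outer bisection on $\Phi$:

```python
from math import sqrt

def balance(mus, total_arrival, iters=200):
    """Split total_arrival across single-server queues so that the
    marginal delay derivative mu/(mu - L)^2 is equalised (Lagrange)."""
    def loads(phi):
        # solve mu / (mu - L)**2 == phi for L, clipped at zero
        return [max(0.0, mu - sqrt(mu / phi)) for mu in mus]
    lo, hi = 1e-9, 1e9
    for _ in range(iters):             # outer bisection on Phi
        phi = (lo + hi) / 2
        if sum(loads(phi)) < total_arrival:
            lo = phi                   # a larger Phi admits more load
        else:
            hi = phi
    return loads((lo + hi) / 2)

loads = balance([4.0, 2.0], 3.0)
print(loads, sum(loads))  # the faster server receives the larger share
```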
4. PERFORMANCE EVALUATION
Cloud computing resources have become one of the most successful real-time computing models in active use: they are accessible, affordable, and reachable from anywhere in the world over the internet. One of their many uses is scheduling scientific workflows, whose interdependent and independent tasks bring a range of benefits and drawbacks. However, excessive energy consumption can impair the performance of the underlying computing devices, and makespan is another significant constraint; we therefore introduced OLBP for heterogeneous computing systems to efficiently reduce energy consumption while also delivering superior performance on these goals. The proposed model runs on Windows 10 64-bit with 16 GB of RAM and an Intel Core i5-4460 processor clocked at 3.20 GHz. The project is simulated in Eclipse Neon using the CloudSim simulator, with the editor and code in Java [34]. We test the effectiveness of the proposed model using the CyberShake scientific workflow in four different instances, comparing total time and power consumption, and provide graphical representations of the findings in terms of task count, execution time, and energy consumption. We consider workflow instances with 30, 50, 100, and 1000 tasks.
4.1. Energy consumption
Energy is one of the most important parameters in any optimization problem. Figure 4 shows the energy consumption of the existing and proposed methods: for the first instance, the existing model requires 3495.42 whereas the proposed model requires 1303.740369; for the second and third instances the existing model requires 8518.395166 and 18966.33731 whereas OLBP requires 1330.921945 and 1436.836915, respectively; for the fourth instance the existing model requires 236303.2878 and the proposed OLBP requires 3228.604586. A further parameter, average power, is used to evaluate the models: the less power required, the more efficient the model. Figure 5 shows the average power comparison, where the existing model requires 19.14, 20.11, 20.30, and 20.11 whereas the proposed model requires 15.81, 15.90, 15.90, and 15.90 for the four instances, respectively.
Figure 4. Energy consumption
Figure 5. Average power
4.2. Makespan time
Here, for the first, second, third, and fourth instances, the existing model takes makespan times of 6359.41, 14448.9, 30124.41, and 74543, whereas OLBP requires makespan times of 2938.22, 2953.86, 3116.22, and 4794.39, respectively. Figure 6 displays the power sum comparison, and Figure 7 shows the makespan comparison.
Figure 6. Power sum
Figure 7. Makespan
4.3. Comparative analysis
In this section, we compare the effectiveness of the existing and proposed methods over several parameters, including energy consumption, power sum, average power, and makespan time, on the CyberShake workflow. From the results above, in terms of energy the proposed model consumes 62% less for instance 1, 84.37% less for instance 2, 92.42% less for instance 3, and 98.63% less for instance 4. Considering average power, the proposed model is 17.40%, 20.96%, 21.70%, and 20.96% more efficient than the existing mechanism for instances 1 to 4, respectively. Considering the power sum, the proposed model consumes 61.83%, 83.41%, 91.90%, and 94.91% less than the existing protocol for instances 1 to 4, respectively. Finally, comparing makespan time, the proposed model is 53.79% more efficient for instance 1, 79.55% for instance 2, 89.65% for instance 3, and 93.56% for instance 4. Throughout the comparison it is observed that, for every parameter, performance degrades as the workflow variant changes and the number of tasks increases; however, the proposed model either remains nearly constant or degrades comparatively less.
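The reduction percentages follow directly from the raw readings; e.g. a quick arithmetic check for the instance-1 energy values reported in section 4.1:

```python
existing, proposed = 3495.42, 1303.740369   # instance-1 energy readings
reduction = (existing - proposed) / existing * 100
print(f"{reduction:.2f}% less energy")      # about 62.7%, reported as 62%
```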
5. CONCLUSION
Workflow management systems are eminent in the construction of various applications, such as e-commerce and scientific applications, through the automation of inter-enterprise and intra-enterprise business processes. The dynamic business environment and increasing global competition have led researchers to design scalable and flexible business environments, which can be achieved through load balancing. Furthermore, the high energy consumption and long makespan times of cloud data centers are one of the barriers to green cities and a pressing issue that must be resolved; to date, many load-balancing algorithms have been created in an effort to cut down energy use and process execution time. In this research effort, OLBP is established, which schedules and distributes the workload so that the key parameters of power, energy, and makespan time are optimized. To evaluate OLBP, four distinct instances are designed, each comprising workflow variants with a random number of virtual machines. The comparative analysis indicates that OLBP outperforms the existing model in terms of average power, energy consumption, and makespan time. Although OLBP outperforms the existing protocol, much remains to be investigated; in future work we will focus on optimizing other performance parameters such as fault tolerance, reliability, and cost.
REFERENCES
[1] F. Tao, Y. Cheng, L. D. Xu, L. Zhang, and B. H. Li, “CCIoT-CMfg: Cloud computing and internet of things-based cloud
manufacturing service system,” IEEE Transactions on Industrial Informatics, vol. 10, no. 2, pp. 1435–1442, May 2014, doi:
10.1109/TII.2014.2306383.
[2] M. Masdari and M. Zangakani, “Green cloud computing using proactive virtual machine placement: challenges and issues,”
Journal of Grid Computing, vol. 18, no. 4, pp. 727–759, Dec. 2020, doi: 10.1007/s10723-019-09489-9.
[3] L. Mashayekhy, M. M. Nejad, D. Grosu, Q. Zhang, and W. Shi, “Energy-aware scheduling of MapReduce jobs for big data
applications,” IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 10, pp. 2720–2733, Oct. 2015, doi:
10.1109/TPDS.2014.2358556.
[4] Y. Wu, X. Tan, L. Qian, D. H. K. Tsang, W. Z. Song, and L. Yu, “Optimal pricing and energy scheduling for hybrid energy
trading market in future smart grid,” IEEE Transactions on Industrial Informatics, vol. 11, no. 6, pp. 1585–1596, Dec. 2015, doi:
10.1109/TII.2015.2426052.
[5] M. Abdullahi, M. A. Ngadi, S. I. Dishing, S. M. Abdulhamid, and B. I. E Ahmad, “An efficient symbiotic organisms search
algorithm with chaotic optimization strategy for multi-objective task scheduling problems in cloud computing environment,”
Journal of Network and Computer Applications, vol. 133, pp. 60–74, May 2019, doi: 10.1016/j.jnca.2019.02.005.
[6] M. H. Yaghmaee, M. Moghaddassian, and A. L. Garcia, “Autonomous two-tier cloud-based demand side management approach
with microgrid,” IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1109–1120, Jun. 2017, doi:
10.1109/TII.2016.2619070.
[7] M. Hosseinzadeh, M. Y. Ghafour, H. K. Hama, B. Vo, and A. Khoshnevis, “Multi-objective task and workflow scheduling
approaches in cloud computing: A comprehensive review,” Journal of Grid Computing, vol. 18, no. 3, pp. 327–356, Sep. 2020,
doi: 10.1007/s10723-020-09533-z.
[8] X. Li, P. Garraghan, X. Jiang, Z. Wu, and J. Xu, “Holistic virtual machine scheduling in cloud datacenters towards minimizing
total energy,” IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 6, pp. 1317–1331, Jun. 2018, doi:
10.1109/TPDS.2017.2688445.
[9] H. Yan et al., “Cost-efficient consolidating service for aliyun’s cloud-scale computing,” IEEE Transactions on Services
Computing, vol. 12, no. 1, pp. 117–130, Jan. 2019, doi: 10.1109/TSC.2016.2612186.
[10] X. Zhu, L. T. Yang, H. Chen, J. Wang, S. Yin, and X. Liu, “Real-time tasks oriented energy-aware scheduling in virtualized
clouds,” IEEE Transactions on Cloud Computing, vol. 2, no. 2, pp. 168–180, Apr. 2014, doi: 10.1109/TCC.2014.2310452.
[11] G. Juve, A. Chervenak, E. Deelman, S. Bharathi, G. Mehta, and K. Vahi, “Characterizing and profiling scientific workflows,”
Future Generation Computer Systems, vol. 29, no. 3, pp. 682–692, Mar. 2013, doi: 10.1016/j.future.2012.08.015.
[12] H. Chen, X. Zhu, G. Liu, and W. Pedrycz, “Uncertainty-aware online scheduling for real-time workflows in cloud service
environment,” IEEE Transactions on Services Computing, vol. 14, no. 4, pp. 1167–1178, Jul. 2021, doi:
10.1109/TSC.2018.2866421.
[13] G. Xie, J. Jiang, Y. Liu, R. Li, and K. Li, “Minimizing energy consumption of real-time parallel applications using downward and
upward approaches on heterogeneous systems,” IEEE Transactions on Industrial Informatics, vol. 13, no. 3, pp. 1068–1078, Jun.
2017, doi: 10.1109/TII.2017.2676183.
[14] H. Rong, H. Zhang, S. Xiao, C. Li, and C. Hu, “Optimizing energy consumption for data centers,” Renewable and Sustainable
Energy Reviews, vol. 58, pp. 674–691, May 2016, doi: 10.1016/j.rser.2015.12.283.
[15] Y. Yin, F. Yu, Y. Xu, L. Yu, and J. Mu, “Network location-aware service recommendation with random walk in cyber-physical
systems,” Sensors (Switzerland), vol. 17, no. 9, p. 2059, Sep. 2017, doi: 10.3390/s17092059.
[16] H. Gao, Y. Duan, H. Miao, and Y. Yin, “An approach to data consistency checking for the dynamic replacement of service
process,” IEEE Access, vol. 5, pp. 11700–11711, 2017, doi: 10.1109/ACCESS.2017.2715322.
[17] G. Patel, R. Mehta, and U. Bhoi, “Enhanced load balanced min-min algorithm for static meta task scheduling in cloud
computing,” Procedia Computer Science, vol. 57, pp. 545–553, 2015, doi: 10.1016/j.procs.2015.07.385.
[18] Y. Mao, X. Chen, and X. Li, “Max–min task scheduling algorithm for load balance in cloud computing,” in Proceedings of
International Conference on Computer Science and Information Technology, vol. 255, 2014, pp. 457–465, doi: 10.1007/978-81-322-1759-6_53.
[19] X. Kong, C. Lin, Y. Jiang, W. Yan, and X. Chu, “Efficient dynamic task scheduling in virtualized data centers with fuzzy
prediction,” Journal of Network and Computer Applications, vol. 34, no. 4, pp. 1068–1077, Jul. 2011, doi:
10.1016/j.jnca.2010.06.001.
[20] M. Y. Wu, W. Shu, and H. Zhang, “Segmented Min-min: A static mapping algorithm for meta-tasks on heterogeneous computing
systems,” in Proceedings of the Heterogeneous Computing Workshop, HCW, 2000, pp. 375–385, doi: 10.1109/hcw.2000.843759.
[21] K. Etminani and M. Naghibzadeh, “A min-min max-min selective algorithm for grid task scheduling,” in 2007 3rd IEEE/IFIP International Conference in Central Asia on Internet, ICI 2007, Sep. 2007, pp. 1–7, doi: 10.1109/canet.2007.4401694.
[22] S. Devipriya and C. Ramesh, “Improved max-min heuristic model for task scheduling in cloud,” in Proceedings of the 2013
International Conference on Green Computing, Communication and Conservation of Energy, ICGCE 2013, Dec. 2013, pp. 883–
888, doi: 10.1109/ICGCE.2013.6823559.
[23] E. K. Tabak, B. B. Cambazoglu, and C. Aykanat, “Improving the performance of independent task assignment heuristics MinMin, MaxMin and Sufferage,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 5, pp. 1244–1256, May 2014, doi: 10.1109/TPDS.2013.107.
[24] Y. Yin, S. Aihua, G. Min, X. Yueshen, and W. Shuoping, “QoS prediction for web service recommendation with network
location-aware neighbor selection,” International Journal of Software Engineering and Knowledge Engineering, vol. 26, no. 4,
pp. 611–632, May 2016, doi: 10.1142/S0218194016400040.
[25] M. A. Tawfeek, A. El-Sisi, A. E. Keshk, and F. A. Torkey, “Cloud task scheduling based on ant colony optimization,” in
Proceedings - 2013 8th International Conference on Computer Engineering and Systems, ICCES 2013, Nov. 2013, pp. 64–69,
doi: 10.1109/ICCES.2013.6707172.
[26] H. Gao, D. Chu, Y. Duan, and Y. Yin, “Probabilistic model checking-based service selection method for business process
modeling,” International Journal of Software Engineering and Knowledge Engineering, vol. 27, no. 6, pp. 897–923, Aug. 2017,
doi: 10.1142/S0218194017500334.
[27] A. Gupta and R. Garg, “Load balancing based task scheduling with ACO in cloud computing,” in 2017 International Conference
on Computer and Applications, ICCA 2017, Sep. 2017, pp. 174–179, doi: 10.1109/COMAPP.2017.8079781.
[28] C. Y. Liu, C. M. Zou, and P. Wu, “A task scheduling algorithm based on genetic algorithm and ant colony optimization in cloud
computing,” in Proceedings - 13th International Symposium on Distributed Computing and Applications to Business, Engineering
and Science, DCABES 2014, Nov. 2014, pp. 68–72, doi: 10.1109/DCABES.2014.18.
[29] Y. Cui and Z. Xiaoqing, “Workflow tasks scheduling optimization based on genetic algorithm in clouds,” in 2018 3rd IEEE
International Conference on Cloud Computing and Big Data Analysis, ICCCBDA 2018, Apr. 2018, pp. 6–10, doi:
10.1109/ICCCBDA.2018.8386458.
[30] K. Li, G. Xu, G. Zhao, Y. Dong, and D. Wang, “Cloud task scheduling based on load balancing ant colony optimization,” in
Proceedings - 2011 6th Annual ChinaGrid Conference, ChinaGrid 2011, Aug. 2011, pp. 3–9, doi: 10.1109/ChinaGrid.2011.17.
[31] F. U. Xiao, “Task scheduling scheme based on sharing mechanism and swarm intelligence optimization algorithm in cloud computing,” Computer Science, vol. 45, no. 1, pp. 303–307, 2018.
[32] C. G. Wu and L. Wang, “A multi-model estimation of distribution algorithm for energy efficient scheduling under cloud
computing system,” Journal of Parallel and Distributed Computing, vol. 117, pp. 63–72, Jul. 2018, doi:
10.1016/j.jpdc.2018.02.009.
[33] H. Aziza and S. Krichen, “Bi-objective decision support system for task-scheduling based on genetic algorithm in cloud
computing,” Computing, vol. 100, no. 2, pp. 65–91, Feb. 2018, doi: 10.1007/s00607-017-0566-5.
[34] W. Long, L. Yuqing, and X. Qingxin, “Using cloudsim to model and simulate cloud computing environment,” in Proceedings -
9th International Conference on Computational Intelligence and Security, CIS 2013, Dec. 2013, pp. 323–328, doi:
10.1109/CIS.2013.75.
BIOGRAPHIES OF AUTHORS
Asma Anjum is a full-time research scholar at Visvesvaraya Technological University, Belagavi, Karnataka. She completed her B.E. and M.Tech in Computer Science and Engineering in 2015 and 2017, respectively. She has five international publications, three of which are IEEE conference papers and Scopus indexed. Her research interests include networking and cloud computing. She can be contacted at email: asmacs13@gmail.com.
Dr. Asma Parveen graduated in Electrical Engineering in 1993, completed her post-graduation in Computer Science and Engineering in 2004, and was awarded a Ph.D. in Computer Science and Engineering in 2016. She has published many research papers in leading international journals and conference proceedings. She can be contacted at email: drasma.cse@gmail.com.