This document presents a new approach for scheduling multi-objective tasks in cloud computing using an artificial bee colony algorithm. The proposed algorithm optimizes response time, schedule length ratio, and efficiency. It models tasks as bees that are assigned to processing elements in data centers, minimizing completion time while balancing resource loads. The results show that the bee colony algorithm achieves better performance than other scheduling methods in cloud computing environments.
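As an illustration only (the paper's own pseudocode is not reproduced here), the sketch below implements a generic artificial-bee-colony search over task-to-VM assignments scored by makespan. The parameter values, task lengths, and VM speeds are assumptions, not the authors' settings.

```python
import random

def makespan(assignment, task_len, vm_speed):
    """Completion time of the busiest VM under a task -> VM mapping."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def abc_schedule(task_len, vm_speed, food_sources=20, cycles=200, limit=10):
    n, m = len(task_len), len(vm_speed)
    # Each food source is a candidate schedule (task -> VM mapping).
    sources = [[random.randrange(m) for _ in range(n)] for _ in range(food_sources)]
    stale = [0] * food_sources
    for _ in range(cycles):
        for i, src in enumerate(sources):
            # Employed/onlooker phase: perturb one task's placement.
            neigh = src[:]
            neigh[random.randrange(n)] = random.randrange(m)
            if makespan(neigh, task_len, vm_speed) < makespan(src, task_len, vm_speed):
                sources[i], stale[i] = neigh, 0
            else:
                stale[i] += 1
            # Scout phase: abandon sources that stopped improving.
            if stale[i] > limit:
                sources[i] = [random.randrange(m) for _ in range(n)]
                stale[i] = 0
    return min(sources, key=lambda s: makespan(s, task_len, vm_speed))
```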
An efficient cloudlet scheduling via bin packing in cloud computing (IJECEIAES)
In this ever-developing technological world, one way to manage and deliver services is through cloud computing, a massive web of heterogeneous autonomous systems with an adaptable computational design. Cloud computing can be improved through task scheduling, although this is its most challenging aspect to improve. Better task scheduling can improve response time, reduce power consumption and processing time, enhance makespan and throughput, and increase profit by reducing operating costs and raising system reliability. This study aims to improve job scheduling by transforming the job scheduling problem into a bin packing problem. Three modified implementations of bin packing algorithms were proposed for task scheduling (MBPTS) based on minimisation of makespan. The results, based on the open-source simulator CloudSim, demonstrated that the proposed MBPTS was able to optimise load balance, reduce waiting time and makespan, and improve resource utilisation in comparison with current scheduling algorithms such as particle swarm optimisation (PSO) and first come first serve (FCFS).
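The abstract does not spell out the three MBPTS variants, so as a hedged sketch of the general idea, the snippet below treats VMs as bins and cloudlets as items in a first-fit-decreasing style, placing each cloudlet on the VM whose resulting finish time stays lowest. Names and units (MI, MIPS) are assumptions.

```python
def first_fit_decreasing(cloudlets, vm_speeds):
    """Assign cloudlet lengths (MI) to VMs (MIPS), keeping makespan low."""
    finish = [0.0] * len(vm_speeds)
    plan = {}
    # Largest cloudlets first, as in classic bin packing heuristics.
    for cid, length in sorted(enumerate(cloudlets), key=lambda c: -c[1]):
        # Choose the VM that would finish this cloudlet earliest.
        vm = min(range(len(vm_speeds)), key=lambda v: finish[v] + length / vm_speeds[v])
        finish[vm] += length / vm_speeds[vm]
        plan[cid] = vm
    return plan, max(finish)
```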
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (Editor IJCATR)
The aim of cloud computing is to share a large number of resources and pieces of equipment to compute and store knowledge and information for major scientific sources. The scheduling algorithm is therefore regarded as one of the most important challenges in the cloud. To solve the task scheduling problem, this study adapted the ant colony optimization (ACO) algorithm with a fair and accurate resource allocation approach based on machine performance and capacity. The study was intended to decrease runtime and execution costs, optimize the use of machines, and reduce their idle time. The proposed method was compared with the Berger and greedy algorithms. The simulation results indicate that the proposed algorithm reduced the makespan and execution cost as tasks were added, increased fairness and load balancing, made optimal use of machines possible, and increased user satisfaction. According to the evaluations, the proposed algorithm improved the makespan by 80%.
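The paper's exact pheromone and heuristic rules are not given in the abstract; the following is a minimal, assumption-laden sketch of how ACO is typically applied to task-to-machine assignment, biasing each ant by pheromone and by machine speed and current load, then reinforcing the best schedule found.

```python
import random

def aco_assign(task_len, vm_speed, ants=10, rounds=50, rho=0.1, q=1.0):
    """Pheromone-guided task -> VM assignment that favours low makespan."""
    n, m = len(task_len), len(vm_speed)
    tau = [[1.0] * m for _ in range(n)]          # pheromone per (task, vm)
    best, best_span = None, float("inf")
    for _ in range(rounds):
        for _ in range(ants):
            load = [0.0] * m
            tour = []
            for t in range(n):
                # Desirability: pheromone x heuristic (faster, less-loaded VM).
                w = [tau[t][v] * vm_speed[v] / (1.0 + load[v]) for v in range(m)]
                vm = random.choices(range(m), weights=w)[0]
                load[vm] += task_len[t] / vm_speed[vm]
                tour.append(vm)
            span = max(load)
            if span < best_span:
                best, best_span = tour, span
        # Evaporate, then reinforce the best-so-far schedule.
        tau = [[(1 - rho) * p for p in row] for row in tau]
        for t, vm in enumerate(best):
            tau[t][vm] += q / best_span
    return best, best_span
```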
Task Scheduling using Hybrid Algorithm in Cloud Computing Environments (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING TO OPTIMIZE RESPONSE TIME (ijccsa)
To improve the performance of cloud computing, many parameters and issues should be considered, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily determine the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive solutions for distributing and load balancing resources with better efficiency, which is one of the most promising research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict upcoming response times, giving a more effective way of resolving resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response times and show that prediction is a useful direction for load balancing.
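APRA's model orders and threshold policy are not detailed above, so the sketch below only illustrates the stated mechanism: forecast each host's next response time with an ARIMA model (here via statsmodels) and route around hosts whose prediction crosses a threshold. The function names and the (1, 1, 1) order are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def predict_next_response(history_ms, order=(1, 1, 1)):
    """Forecast the next response time from one host's history (ms)."""
    fit = ARIMA(np.asarray(history_ms, dtype=float), order=order).fit()
    return float(fit.forecast(steps=1)[0])

def pick_host(histories, threshold_ms=200.0):
    """Route to the host with the best predicted response time; hosts
    predicted above the threshold are treated as overloaded."""
    forecasts = {h: predict_next_response(s) for h, s in histories.items()}
    ok = {h: f for h, f in forecasts.items() if f < threshold_ms} or forecasts
    return min(ok, key=ok.get)
```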
A combined computing framework for load balancing in multi-tenant cloud eco-... (IJECEIAES)
Since the world is becoming digitalized, cloud computing has become a core part of it. Massive amounts of data are processed, stored, and transferred over the internet daily. Cloud computing has become popular because of its quality and its capability to improve data management, offering better computing resources and data to its user bases (UBs). However, there are many issues in existing cloud traffic management approaches and in how data is managed during service execution. The study introduces two distinct research models: a data center virtualization framework under a multi-tenant cloud ecosystem (DCVF-MT) and a collaborative workflow of multi-tenant load balancing (CW-MTLB), with analytical research modeling. The execution flow considers a set of algorithms for both models that address the core problems of load balancing and resource allocation in the cloud computing (CC) ecosystem. The results illustrate that DCVF-MT outperforms the one-to-one approach by approximately 24.778% in traffic scheduling and yields a 40.33% improvement in cloudlet handling time. Moreover, it attains an overall 8.5133% improvement in resource cost optimization, which is significant for the adaptability of the frameworks to future cloud applications where adequate virtualization and resource mapping will be required.
Cloud computing gives on-demand access to computing resources in a metered and dynamically adapted way; it lets the client access fast and flexible resources through virtualization and is widely adaptable to various applications. To assure productive computation, task scheduling is very important in a cloud infrastructure environment. The main aim of task execution is to reduce execution time and conserve infrastructure; for large applications, workflow scheduling has drawn considerable attention in both business and scientific areas. Hence, this research work designs and develops an optimized load balancing in parallel computation, aka optimal load balancing in parallel computing (OLBP), mechanism to distribute the load: first, different workload parameters are computed, and then loads are distributed. The OLBP mechanism treats makespan time and energy as constraints, and task offloading is done considering server speed. This balances the workflow; the OLBP mechanism is evaluated on the CyberShake workflow dataset and outperforms the existing workflow mechanism.
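OLBP's internals are only outlined above. As a rough sketch of the stated ingredients (speed-aware offloading under makespan and energy constraints), the snippet splits a workload in proportion to server speed and then checks both constraints; the function name and the simple active-power energy model are assumptions.

```python
def split_by_speed(total_work, speeds, power_watts, max_makespan, max_energy):
    """Split work across servers in proportion to speed, then verify the
    makespan and energy constraints the OLBP abstract mentions."""
    total_speed = sum(speeds)
    shares = [total_work * s / total_speed for s in speeds]
    # Equal finish times by construction: share_i / speed_i is constant.
    makespan = shares[0] / speeds[0]
    energy = sum(p * makespan for p in power_watts)  # simple active-power model
    if makespan > max_makespan or energy > max_energy:
        raise ValueError("constraints violated; offload part of the load")
    return shares, makespan, energy
```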
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands sophisticated algorithms to solve the task allotment problem. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
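One plausible reading of the combination described, not the paper's exact algorithm, is sketched below: order tasks by burst time (SJF) and then serve them with a Round Robin quantum so long tasks cannot starve short ones. The quantum value and return format are assumptions.

```python
from collections import deque

def sjf_round_robin(burst_ms, quantum_ms=50):
    """Order tasks by burst time (SJF), then serve with an RR quantum
    so long tasks cannot starve short ones."""
    queue = deque(sorted(enumerate(burst_ms), key=lambda t: t[1]))
    clock, finish = 0, {}
    while queue:
        tid, remaining = queue.popleft()
        run = min(quantum_ms, remaining)
        clock += run
        if remaining > run:
            queue.append((tid, remaining - run))  # back of the queue, RR-style
        else:
            finish[tid] = clock
    return finish  # completion time per task id
```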
A hybrid approach for scheduling applications in cloud computing environment (IJECEIAES)
Cloud computing plays an important role in our daily life. It has a direct and positive impact on sharing and updating data, knowledge, storage, and scientific resources between various regions. Cloud computing performance is heavily based on the job scheduling algorithms used for queue management in modern scientific applications, and researchers consider cloud computing a popular platform for new applications. These scheduling algorithms help design efficient queues in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance the performance of cloud computing and reduce queue waiting delay for jobs. The proposed algorithm tries to avoid some significant challenges that hold back the development of cloud applications. Our experimental results show that the proposed scheme achieves outstanding improvement rates, with a reduction in waiting time for jobs in the queue.
Multi-objective load balancing in cloud infrastructure through fuzzy based de... (IAESIJAI)
Cloud computing became a popular technology that influences not only product development but also makes the technology business easy. Services for infrastructure, platform, and software can reduce the complexity of the technology requirements of any ecosystem. As the number of users of cloud-based services increases, the complexity of the back-end technologies also increases. Heterogeneous user requirements for various configurations create different load-unbalancing issues. Hence, effective load balancing in a cloud system with respect to time and space becomes crucial, as imbalance adversely affects system performance. Since user requirements and expected performance are multi-objective, decision-making tools like fuzzy logic yield good results because they use human procedural knowledge in decision making. Overall system performance can be further improved by dynamic resource scheduling using an optimization technique such as a genetic algorithm.
Demand-driven Gaussian window optimization for executing preferred population... (IJECEIAES)
Scheduling is one of the essential enabling techniques for cloud computing; it facilitates efficient resource utilization among the jobs scheduled for processing. However, it suffers performance overheads due to inappropriate provisioning of resources to requesting jobs, so cloud performance must be achieved through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window where heterogeneous jobs are scheduled round-robin on different cloud clusters. The clusters are heterogeneous, having datacenters with varying server capacity. Performance evaluation results show that the proposed algorithm enhances the QoS of the computing model: allocating jobs to specific clusters improves system throughput and reduces latency.
Optimization of energy consumption in cloud computing datacenters (IJECEIAES)
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure, and services. This has led to the establishment of datacenters with substantial energy demands. This work investigates the optimization of energy consumption in cloud datacenters using energy-efficient allocation of tasks to resources. It develops formal optimization models that minimize the energy consumption of computational resources and evaluates existing optimization solvers on these models. Integer linear programming (ILP) techniques are used to model the scheduling problem; the objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. These models are then used for a detailed performance comparison between a selected set of generic ILP solvers and 0-1 Boolean satisfiability (SAT) based solvers on the ILP formulations. Simulation results indicate that in some cases the developed models saved up to 38% in energy consumption when compared with common techniques such as round robin. Results also showed that generic ILP solvers outperformed SAT-based ILP solvers, especially as the number of tasks and resources grows.
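The paper's full ILP is not reproduced in the abstract. A minimal PuLP sketch of the same general shape (binary task-to-core placement, minimising active plus idle core power under capacity constraints) shows how such a model is posed for a generic ILP solver; all data shapes and names here are invented for illustration.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def schedule_ilp(tasks, cores, load, capacity, p_active, p_idle):
    """tasks, cores: id lists; load[t]: demand; capacity[c]: core capacity;
    p_active[c]/p_idle[c]: power draw when a core is used / left idle."""
    prob = LpProblem("energy_aware_scheduling", LpMinimize)
    x = LpVariable.dicts("x", [(t, c) for t in tasks for c in cores], cat=LpBinary)
    on = LpVariable.dicts("on", cores, cat=LpBinary)  # core hosts any task
    # Objective: power of active cores plus idle power of the rest.
    prob += lpSum(on[c] * p_active[c] + (1 - on[c]) * p_idle[c] for c in cores)
    for t in tasks:                       # every task placed exactly once
        prob += lpSum(x[(t, c)] for c in cores) == 1
    for c in cores:                       # capacity, linked to activation
        prob += lpSum(load[t] * x[(t, c)] for t in tasks) <= capacity[c] * on[c]
    prob.solve()
    return {t: next(c for c in cores if x[(t, c)].value() == 1) for t in tasks}
```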
Resource-efficient workload task scheduling for cloud-assisted internet of th... (IJECEIAES)
One of the most challenging tasks in an internet of things (IoT)-cloud environment is resource allocation. The cloud provides resources such as virtual machines, computational cores, and networks for executing the various IoT tasks. Some existing methods execute IoT tasks through an optimal resource management system, but these methods are not efficient. Hence, this research presents a resource-efficient workload task scheduling (RWTS) model for a cloud-assisted IoT environment. The model executes IoT tasks with few resources to strike a good tradeoff, achieves high performance using fewer cloud resources, and computes the resources, such as bandwidth and computational cores, required to execute each IoT task. It focuses mainly on reducing energy consumption and also provides a task scheduling model for IoT tasks in the IoT-cloud environment. Experiments were conducted using the Montage workflow, with results reported for execution time, power sum, average power, and energy consumption. Compared with the existing model, the RWTS model performs better as task sizes increase.
A cloud computing scheduling and its evolutionary approaches (nooriasukmaningtyas)
Despite the increasing use of cloud computing technology, which offers unique features to serve its customers, exploiting its full potential is very difficult due to many problems and challenges, and resource scheduling is one of them. Researchers still find it difficult to determine which scheduling algorithms are appropriate and effective in increasing system performance. This paper provides a broad and detailed examination of resource scheduling algorithms in the cloud computing environment and highlights the advantages and disadvantages of several algorithms, helping researchers select the best algorithm to schedule a particular workload so as to satisfy quality of service, guarantee good utilization of cloud resources, and minimize the makespan.
Reliable and efficient webserver management for task scheduling in edge-cloud... (IJECEIAES)
Developing cloud webserver management that executes workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task, although a body of work has been presented on scheduling workflows in heterogeneous cloud environments. The rapid development of cloud computing, such as edge-cloud computing, creates new ways to schedule workflows in a heterogeneous cloud environment to process tasks such as IoT, event-driven applications, and other network applications. Current workflow scheduling methods fail to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel webserver resource management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments on complex bioinformatics workflows show that the proposed REWM significantly reduces cost and energy in comparison with a standard webserver management methodology.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Any migration cost due to supply failure is considered as one objective; each task is a small particle, recognized through an appropriate fitness (scheduling) function describing the particle arrangement, that costs the least total expense. In addition, a weight reflecting the importance of each cost is assigned to each expenditure. The data used to simulate the proposed method are academic and research data series prepared from the Internet, and MATLAB is used for the simulation. We simulate two problems: in the first, four tasks are divided among four machines; in the second, the problem is made more complicated with six tasks on four machines. PSO's output is recorded for both problems over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each particle.
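The article's fitness function and weights are not given above; as a hedged sketch, the snippet below runs a generic PSO over task-to-machine assignments with a cost-table fitness, rounding each continuous particle coordinate to a machine index. All parameter values are placeholders.

```python
import random

def pso_schedule(cost, n_tasks, n_machines, particles=15, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """cost[t][m]: expense of running task t on machine m (incl. migration)."""
    def fitness(pos):
        return sum(cost[t][int(round(p)) % n_machines] for t, p in enumerate(pos))
    swarm = [[random.uniform(0, n_machines - 1) for _ in range(n_tasks)]
             for _ in range(particles)]
    vel = [[0.0] * n_tasks for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i, pos in enumerate(swarm):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[d])
                             + c2 * r2 * (gbest[d] - pos[d]))
                pos[d] += vel[i][d]
            if fitness(pos) < fitness(pbest[i]):
                pbest[i] = pos[:]
        gbest = min(pbest + [gbest], key=fitness)[:]
    return [int(round(p)) % n_machines for p in gbest]
```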
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME... (IJCNCJournal)
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving the response time of user requests in cloud computing is critical for combating bottlenecks: bandwidth to and from cloud service providers is a bottleneck, and with the rapid growth in the scale and number of applications, this access is often threatened by overload. Therefore, this paper proposes the Throttled Modified Algorithm (TMA) for improving the response time of VMs in cloud computing to improve performance for end users. We simulated the proposed algorithm with the CloudAnalyst simulation tool, and the algorithm improved the response and processing times of the cloud data center.
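The TMA modifications are not detailed in the abstract, so the sketch below shows only the baseline throttled policy that such algorithms build on: an index table marks each VM available or busy, requests go to the first available VM, and requests are queued when none is free. The class and method names are assumptions.

```python
class ThrottledBalancer:
    """Baseline throttled load balancer: an availability index per VM."""
    def __init__(self, n_vms):
        self.available = [True] * n_vms
        self.waiting = []                      # requests parked when all busy

    def allocate(self, request_id):
        for vm, free in enumerate(self.available):
            if free:
                self.available[vm] = False     # mark busy in the index table
                return vm
        self.waiting.append(request_id)        # throttle: queue the request
        return None

    def release(self, vm):
        if self.waiting:                       # hand the VM to a queued request
            self.waiting.pop(0)
            return vm                          # VM stays busy with next request
        self.available[vm] = True
        return None
```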
Task scheduling is an important aspect of improving resource utilization in cloud computing. This paper proposes a divide-and-conquer-based approach to the heterogeneous earliest finish time (HEFT) algorithm. The proposed system works in two phases: in the first phase, it ranks incoming tasks by size; in the second phase, it assigns each task to a virtual machine with consideration of that machine's idle time. This yields more effective resource utilization in cloud computing. Experimental results on the CyberShake scientific workflow show that the proposed divide-and-conquer HEFT (DCHEFT) performs better than HEFT in terms of task finish time and response time.
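A simplified sketch of the two phases just described, not the paper's DCHEFT itself: rank tasks by size, then place each on the VM that falls idle first, so VM idle time drives the choice. The ranking rule and idle-time bookkeeping are assumptions.

```python
def dcheft_like(task_sizes, vm_speeds):
    """Phase 1: rank tasks by size. Phase 2: place each task on the VM
    that becomes idle first, so idle time drives the assignment."""
    idle_at = [0.0] * len(vm_speeds)           # when each VM next falls idle
    schedule = []
    for tid, size in sorted(enumerate(task_sizes), key=lambda t: -t[1]):
        vm = min(range(len(vm_speeds)), key=lambda v: idle_at[v])
        start = idle_at[vm]
        idle_at[vm] = start + size / vm_speeds[vm]
        schedule.append((tid, vm, start, idle_at[vm]))
    return schedule  # (task, vm, start, finish) tuples
```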
Recently, much interest has been devoted to improving workload scheduling on cloud platforms. However, executing scientific workflows on a cloud platform is time-consuming and expensive. As users are charged by the hour of usage, much research has emphasized minimizing processing time to reduce cost. The processing cost can also be reduced by minimizing energy consumption, especially when resources are heterogeneous, yet very limited work has considered optimizing cost together with energy and processing time while meeting task quality-of-service (QoS) requirements. This paper presents a cost- and performance-aware workload scheduling (CPA-WS) technique for heterogeneous cloud platforms, with a cost optimization model that minimizes the processing time and energy dissipated in executing tasks. Experiments are conducted using two widely used workflows, Inspiral and CyberShake. The outcome shows that CPA-WS significantly reduces energy, time, and cost in comparison with a standard workload scheduling model.
Cloud computing has become an important topic in high-performance distributed computing. Task scheduling is one of the most significant issues in cloud computing, where the user pays for resource use based on time. Distributing cloud resources among users' applications should therefore maximize resource utilization and minimize task execution time. The goal of task scheduling is to assign tasks to appropriate resources so as to optimize one or more performance parameters (e.g., completion time, cost, resource utilization). Scheduling belongs to the category of NP-complete problems, so heuristic algorithms can be applied. In this paper, an enhanced dependent task scheduling algorithm based on a genetic algorithm (DTGA) is introduced for mapping and executing an application's tasks, with the aim of minimizing completion time. The performance of the proposed algorithm has been evaluated using the WorkflowSim toolkit and the Standard Task Graph Set (STG) benchmark.
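As a hedged illustration of the genetic-algorithm machinery (not DTGA itself, and deliberately ignoring the inter-task precedence constraints that DTGA handles), the sketch below evolves task-to-machine mappings against a makespan fitness with elitism, one-point crossover, and per-gene mutation. All rates and sizes are placeholders.

```python
import random

def ga_schedule(etc, generations=100, pop_size=30, mut_rate=0.05):
    """etc[t][m]: expected time to complete task t on machine m.
    Chromosome: task -> machine mapping; fitness: makespan."""
    n_tasks, n_machines = len(etc), len(etc[0])
    def span(ch):
        load = [0.0] * n_machines
        for t, m in enumerate(ch):
            load[m] += etc[t][m]
        return max(load)
    pop = [[random.randrange(n_machines) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=span)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_tasks)  # one-point crossover
            child = a[:cut] + b[cut:]
            for t in range(n_tasks):            # mutation: reassign a task
                if random.random() < mut_rate:
                    child[t] = random.randrange(n_machines)
            children.append(child)
        pop = elite + children
    return min(pop, key=span)
```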
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have played a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years was carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13, and consider women's contributions in various sectors while taking geographic dispersion into account. The analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The results highlight how women have influenced climate-related policies and actions, point out research gaps, and offer recommendations on increasing the role of women in addressing climate change and achieving sustainability. This initiative aims to highlight the significance of gender equality and to encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
Active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), a diesel generator, and a micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used to convert the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, with this switching strategy, low-order harmonics are not produced in the output current and voltage, so the size of the output filter can be reduced. The suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates excellent performance and favorable response across various load alteration scenarios. The strategy is examined in several scenarios on MG test systems, and the simulation results are discussed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and attract significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to these research trends by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level on smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is presented. To that effect, we searched the Web of Science (WoS) and Scopus databases, obtaining 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. Given the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ... (IJECEIAES)
One of the problems associated with power systems is the islanding condition, which must be rapidly and properly detected to prevent negative consequences for the system's protection, stability, and security. This paper offers a thorough overview of several islanding detection strategies, divided into two categories: classic approaches, including local and remote approaches, and modern techniques, including techniques based on signal processing and computational intelligence. Each approach is compared and assessed on several factors, including implementation costs, non-detected zones, declining power quality, and response times, using the analytical hierarchy process (AHP). Comparing all criteria together, the multi-criteria decision-making analysis gives overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in a power system with respect to the aforementioned factors. The proposed hierarchy model is studied and examined using Expert Choice software.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi... (IJECEIAES)
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research is developing an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker eclipse mosquito using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experimental results produce as many as 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted value on the test data. The best results are obtained using a two-layer LSTM configuration model, each with 60 neurons and a lookback setting 6. This model produces an R 2 value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R 2 value was also evaluated for each attribute used as input, with a result of values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees which both are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey and are known for having a unique flavor profile. Problem identified that many stingless bees collapsed due to weather, temperature and environment. It is critical to understand the relationship between the production of stingless bee honey and environmental conditions to improve honey production. Thus, this paper presents a review on stingless bee's honey production and prediction modeling. About 54 previous research has been analyzed and compared in identifying the research gaps. A framework on modeling the prediction of stingless bee honey is derived. The result presents the comparison and analysis on the internet of things (IoT) monitoring systems, honey production estimation, convolution neural networks (CNNs), and automatic identification methods on bee species. It is identified based on image detection method the top best three efficiency presents CNN is at 98.67%, densely connected convolutional networks with YOLO v3 is 97.7%, and DenseNet201 convolutional networks 99.81%. This study is significant to assist the researcher in developing a model for predicting stingless honey produced by bee's output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using a compact contiki/cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...ssuser7dcef0
Power plants release a large amount of water vapor into the
atmosphere through the stack. The flue gas can be a potential
source for obtaining much needed cooling water for a power
plant. If a power plant could recover and reuse a portion of this
moisture, it could reduce its total cooling water intake
requirement. One of the most practical way to recover water
from flue gas is to use a condensing heat exchanger. The power
plant could also recover latent heat due to condensation as well
as sensible heat due to lowering the flue gas exit temperature.
Additionally, harmful acids released from the stack can be
reduced in a condensing heat exchanger by acid condensation. reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated
phenomenon since heat and mass transfer of water vapor and
various acids simultaneously occur in the presence of noncondensable
gases such as nitrogen and oxygen. Design of a
condenser depends on the knowledge and understanding of the
heat and mass transfer processes. A computer program for
numerical simulations of water (H2O) and sulfuric acid (H2SO4)
condensation in a flue gas condensing heat exchanger was
developed using MATLAB. Governing equations based on
mass and energy balances for the system were derived to
predict variables such as flue gas exit temperature, cooling
water outlet temperature, mole fraction and condensation rates
of water and sulfuric acid vapors. The equations were solved
using an iterative solution technique with calculations of heat
and mass transfer coefficients and physical properties.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Recycled Concrete Aggregate in Construction Part III
Multi-objective tasks scheduling using bee colony algorithm in cloud computing
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 5, October 2022, pp. 5657~5666
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i5.pp5657-5666
Journal homepage: http://ijece.iaescore.com
Multi-objective tasks scheduling using bee colony algorithm in cloud computing
Mehdi Salehi Babadi¹, Mohammad Ebrahim Shiri¹, Mohammad Reza Moazami Goudarzi², Hamid Haj Seyyed Javadi³
¹ Department of Computer Engineering, Borujerd Branch, Islamic Azad University, Borujerd, Iran
² Department of Mathematics, Borujerd Branch, Islamic Azad University, Borujerd, Iran
³ Department of Mathematics and Computer Science, Shahed University, Tehran, Iran
Article history: Received Feb 20, 2021; Revised Mar 10, 2022; Accepted Apr 8, 2022

ABSTRACT
Due to the development of communication device technology and the need for up-to-date infrastructure ready to respond quickly and in a timely manner to computational needs, competition for the use of processing resources is increasing. Task scheduling in the cloud computing environment has remained a challenge in the search for a quick and efficient solution. In this paper, the aim is to present a new tactic for allocating the available processing resources, based on the artificial bee colony (ABC) algorithm and cellular automata, to solve the task scheduling problem in the cloud computing network. The results show that the performance of the proposed method is better than that of its counterparts.
Keywords: Artificial bee colony algorithm; Cellular automata; Cloud computing; Resource allocation
This is an open access article under the CC BY-SA license.
Corresponding Author:
Mohammad Ebrahim Shiri
Department of Computer Engineering, Borujerd Branch, Islamic Azad University
Borujerd, Iran
Email: shiri@aut.ac.ir
1. INTRODUCTION
With the dramatic increase in the variety of information technology (IT) equipment and services, the management of services offered in this area has also faced many challenges. Managing problems and requests, managing equipment and resources related to technical support services and allocating them to users, as well as monitoring, controlling, and scheduling, are among the causes that force IT managers to provide useful and efficient tools [1]. There is also a need for people to perform heavy computing work through services, without owning expensive hardware and software [2]. Cloud computing has been the latest technological response to these needs. The National Institute of Standards and Technology (NIST) defines cloud computing as "a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction". Sharing "intangible and consumable" computing power among several tenants can improve utilization rates: in this arrangement servers are not left idle and computers are used more fully, because cloud computing clients do not need to calculate and provision for their own maximum load [3].
A cellular automaton is a mathematical model for representing systems in which objects called cells collectively model system behavior; it can be defined in one or more dimensions [4]. The homogeneous and parallel structure of cellular automata makes them suitable for modeling different types of physical systems. To model a physical system optimally, the simple structure of cellular automata is used, with finite and local interactions between cells. The simplest structure is a one-dimensional cellular automaton in which each cell has two states (0, 1) and a uniform three-cell neighborhood (itself and its right and left neighbors). The most important feature of a cellular automata structure is its modularity. According to the interactions studied in [4], one-dimensional three-neighbor cellular automata provide the best efficiency for modeling physical systems.
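As an illustration of this structure, the following minimal Python sketch (not from the paper; the rule number and lattice size are arbitrary choices of ours) steps a one-dimensional, two-state, three-neighbor automaton:

```python
# Minimal one-dimensional, two-state, three-neighbor cellular automaton.
# The rule table and lattice size are illustrative choices, not the paper's.

def step(cells, rule):
    """Apply an elementary CA rule (0-255) to every cell in parallel."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # 3-neighbor pattern -> 0..7
        nxt.append((rule >> index) & 1)
    return nxt

cells = [0] * 11
cells[5] = 1                      # single seed cell
for _ in range(5):
    print("".join(str(c) for c in cells))
    cells = step(cells, rule=90)  # rule 90: XOR of the two outer neighbors
```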
In this paper, a new task scheduling problem in a cloud environment, a highly distributed computing platform, is considered. A time-aware scheduling algorithm based on the artificial bee colony (ABC) is proposed to solve this problem. The primary objective of the proposed algorithm is to achieve a good trade-off between response time, schedule length ratio, and efficiency in completing the tasks in the cloud system. Our approach was experimentally evaluated on many datasets of various sizes. The results show that the proposed algorithm achieved better and more optimal results.
In [5], a new optimization algorithm that hybridizes the ant lion optimizer with elite-based evolution is proposed to schedule tasks in the cloud computing environment; it is called the multi-objective ant lion optimizer (MOALO). The MOALO algorithm performed better than other algorithms. In [6], a new load balancing scheme was proposed by combining a cuckoo search algorithm with a bee colony, called the self-adaptive artificial bee colony (SABC). It increases resource utilization and reduces execution time. The best task-to-virtual-machine mapping is computed by the proposed algorithm and is affected by the processing speed of the virtual machines (VMs) and the length of the submitted workload.
The work in [7] proposes a deep reinforcement learning model based on learning quality of service (QoS) features to optimize data center resource planning. In the deep learning phase, a QoS feature learning method based on enhanced denoising autoencoders is proposed to extract stronger QoS feature information. The study [8] provides a technical analysis of cloud service deployment approaches in internet of things (IoT) systems. The key point of this analysis is to identify the basic studies in service placement approaches that need more attention in order to develop more efficient and effective strategies for IoT deployments.
In [9], a new cloud computing task-scheduling algorithm was proposed that uses min-min and max-min algorithms to generate the initial population, selects task completion time and load balancing as a double fitness function, and improves the quality of the initial population, the algorithm's search ability, and its convergence speed. In [10], particle swarm optimization (PSO), the firefly algorithm (FA), the bat algorithm (BA), and the grasshopper optimization algorithm (GOA), which are swarm-based algorithms, are used for task scheduling in the cloud. The experimental results indicate that the improved GOA can optimize task scheduling problems through effective utilization of available resources. The work in [11] provides a new approach for improving the task scheduling problem in a cloud-fog environment in terms of execution time (makespan) and operating costs for bag-of-tasks applications. A task scheduling evolutionary algorithm has been proposed, with a custom representation of the problem and a uniform crossover built for it. In [12], a task scheduler based on a genetic algorithm for the cloud computing system was developed that used the genetic algorithm [13] to minimize the time to complete tasks in the cloud computing environment.
In [14], an ant algorithm for balanced scheduling of tasks on cloud computing was presented. The goal is to allocate the optimal resource to each task according to the resource and task characteristics. Each task is regarded as an ant, and the weight of a resource is regarded as a pheromone; in other words, the higher the resource weight, the higher the pheromone. The results show optimal completion time and balance. The work in [15] aims to enhance the performance of cloud computing and reduce the delay time of jobs waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hinder the development of cloud computing applications. The experimental results of the proposed job scheduling algorithm show that the proposed schemes achieve outstanding improvement rates, with a reduction in the waiting time of jobs in the queue list.
Paper [16] applies the recent whale optimization algorithm (WOA) to schedule cloud workloads with a multi-objective optimization model, with the aim of improving the performance of a cloud system with given computing resources. Accordingly, an improved WOA-based approach for cloud task scheduling (IWC) is proposed to further improve the ability to search for the optimal solution. In [17], a modified Henry gas solubility optimization (HGSO) is presented, which is based on the WOA and comprehensive opposition-based learning (COBL) for optimal task scheduling. The proposed method is named Henry gas solubility whale cloud (HGSWC). HGSWC is validated on a set of thirty-six optimization benchmark functions and is contrasted with conventional HGSO and WOA.
The work in [18] proposes three main contributions to solve the load balancing problem. First, it proposes a heterogeneous initialized load balancing (HILB) algorithm to perform a good task scheduling process that improves the makespan in the case of homogeneous or heterogeneous resources and provides a direction toward optimal load deviation. Second, it proposes a hybrid load balancing based on a genetic algorithm (HLBGA) as a combination of HILB and a genetic algorithm (GA). Third, a newly formulated fitness function that minimizes the load deviation is used for the GA. The work in [19] proposes an efficient traffic-aware adaptive server load balancing (TAASLB) algorithm to balance the flows to the servers in a data center network. It works based on two parameters, residual bandwidth and server capacity. It detects elephant flows and forwards them toward the optimal server, where they can be processed quickly.
In [20], a hybrid electro search with a genetic algorithm (HESGA) was proposed to improve task scheduling behavior by considering parameters such as makespan, load balancing, resource utilization, and multi-cloud cost. The proposed method combines the advantages of a genetic algorithm and an electro search algorithm: the genetic algorithm provides the best local optimal solutions, whereas the electro search algorithm provides the best global optimal solutions. The proposed algorithm outperforms existing scheduling algorithms such as the hybrid particle swarm optimization genetic algorithm (HPSOGA), GA, evolution strategy (ES), and the ant colony optimization (ACO) algorithm. In [21], an efficient hybridized scheduling algorithm that replicates the parasitic behavior of the cuckoo and the food-gathering habit of the crow, named the cuckoo crow search algorithm (CCSA), was presented for improving the task scheduling process. The crow always watches its neighbors, looking for a better food source than the one it currently possesses.
In [22], a resource scheduling optimization model based on service level agreements, using a stochastic programming approach in cloud computing, is presented. In [23], the flexible task scheduling problem in a cloud computing system is studied and solved by a hybrid discrete artificial bee colony (ABC) algorithm, where the considered problem is first modeled as a hybrid flow shop scheduling (HFS) problem; both a single objective and multiple objectives are considered. In [24], a cloud computing multi-objective task scheduling optimization based on a fuzzy self-defense algorithm is proposed. It selects the shortest time, the degree of resource load balance, and the cost of multi-objective task completion as the goals of cloud computing multi-objective task scheduling, establishes a mathematical model to measure the effect of multi-objective task scheduling, and constructs the corresponding objective function.
2. METHOD
2.1. Problem statement
The cloud management program manages all cloud resources using various cloud modules such as the network module, operating system image module, cost module, and endorsement module. The tasks are distributed among the different data centers (DCs) available through the cloud infrastructure. Each data center divides the user tasks into several sub-tasks and makes them available to processing elements (PEs) [25].
2.2. Proposed schema
The proposed scheduling module has the task of assigning the right job to the right resource at the right time within the cloud framework. In Figure 1, DC represents a data center and PE represents the processing elements. In this model, the cloud is regarded as a set of user tasks whose complex computation is performed using cloud resources. Suppose UserJob = (𝑈1, 𝑈2, …, 𝑈𝑁) is the set of user programs that enter (request execution) at a given time. Each UserJob (Ui) is represented by a pair <ai, di>, where ai represents the entry time and di represents the time limit of the UserJob. If a task is not completed within its time limit, it is designated as a failed task and must be re-entered into a new scheduling queue. In the scheduling process, user tasks are assigned to the available data centers (𝐷1, 𝐷2, …, 𝐷𝑀), where M <= N, meaning that the number of data centers may be less than the number of requested tasks. Each data center (Di) is represented by a pair <Ci, mi>, where Ci is the cost of executing tasks in the data center per unit time, and mi is the number of PEs available for executing user tasks. Each data center has a number of processing elements {𝑃𝐸1, 𝑃𝐸2, …, 𝑃𝐸𝑘} to execute users' work. Each processing element is also characterized by its processing speed [26].
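As a sketch of this model (the class and field names are our own illustration; the paper defines only the tuples <ai, di> and <Ci, mi>), the entities can be represented as:

```python
# Illustrative data model for the scheduling entities described above.
# Field names are our own; the paper specifies only the tuples <ai, di>
# for user jobs and <Ci, mi> for data centers.
from dataclasses import dataclass, field

@dataclass
class UserJob:
    arrival: float    # ai: entry time of the job
    deadline: float   # di: time limit; missing it marks the job as failed
    subtasks: list    # required processing volumes Pk of its sub-tasks

@dataclass
class ProcessingElement:
    speed: float      # PEj: processing speed (work units per period)

@dataclass
class DataCenter:
    cost: float                              # Ci: execution cost per unit time
    pes: list = field(default_factory=list)  # the mi processing elements

dc = DataCenter(cost=2.0, pes=[ProcessingElement(speed=s) for s in (4.0, 7.0, 11.0)])
job = UserJob(arrival=0.0, deadline=50.0, subtasks=[120.0, 80.0, 60.0])
```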
2.3. Problem parameters
2.3.1. Response-time problem analysis
The tasks are placed in one of the existing data centers, called D, which has a certain number of PE processing units, and the tasks must be distributed among these processing units. Each U has a required processing volume P and an allowed time T. Each PE also has a processing speed PEj and a processing cost. If the execution time of Tk on 𝑃𝐸𝑗 is denoted by τk, then the completion time can be stated as follows: the processing time is proportional to the processing volume Pk required on 𝑃𝐸𝑗, as in (1).
$P_k^{\text{remaining}} = P_k^{\text{initial}} - \tau_k \cdot PE_j$, which reaches zero when $\tau_k = P_k / PE_j$  (1)
If 𝑃𝑘 > 𝑃𝐸𝑗, then the processing delivered during the time τk spent on task 𝑈𝑘 is deducted from the amount of processing required, and some processing remains. Naturally, we want this processing time to be less than the time allowed for performing the full user task, i.e., T. In general, assuming parallel distribution of the 𝑈𝑘 tasks among the 𝑃𝐸𝑗 processors, the completion time of task U equals the completion time of the longest sub-task, as in (2).
$\text{Makespan} = \max\{\text{Finish}(\tau_k)\}$  (2)
Figure 1. Management structure of a cloud processor [26]
2.3.2. Energy consumption problem analysis
If the unit processing cost per 𝑃𝑒𝑗 is equal to 𝑃𝐶𝑗, then the energy consumed to perform the whole
task U is equal to (3).
$E_U = \sum_{k=1}^{n} (\tau_k \cdot PC_j)$  (3)
This is equal to the cost spent for the UserJob in the D datacenter.
2.3.3. The efficiency formulation
Here, a new relation must be established to determine the efficiency of the task distribution algorithm. This efficiency can be obtained by comparing the amount of processing that data center D could have performed over the whole time spent on U (the UserJob) with the amount of processing actually performed, as in (4).
$\text{eff} = \left( \dfrac{\sum_{k=1}^{n} (\tau_k \cdot PE_{j_k})}{\text{Makespan} \cdot \sum_{j=1}^{J} PE_j} \right)^{-1}$  (4)
This equation is the inverse of the total time each processor unit was occupied by a particular task, divided by the total time of task U multiplied by the total available processing power.
Therefore, the lower the total execution time of the task and the more evenly the processing power is distributed among the tasks, the higher the efficiency of the algorithm. The schedule length ratio (SLR) parameter can also be obtained from this, as in (5).
$SLR = \dfrac{\max\{\text{Finish}(\tau_k)\}}{\min\{\text{Finish}(\tau_k)\}}$  (5)
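As a worked illustration of metrics (2) through (5), consider the following minimal sketch (our own; the variable names and the simplifying assumption that every sub-task starts at time zero, so that Finish(τk) = τk, are not from the paper):

```python
# Illustrative computation of metrics (2)-(5) for one completed allocation.
# exec_times[k] = τk; assigned_speeds[k] = speed PEj of the PE that ran
# sub-task k; pe_speeds = all PEj; unit_cost = PCj. All names are assumed.

def schedule_metrics(exec_times, assigned_speeds, pe_speeds, unit_cost):
    makespan = max(exec_times)                       # (2) max Finish(τk)
    slr = max(exec_times) / min(exec_times)          # (5) schedule length ratio
    energy = sum(t * unit_cost for t in exec_times)  # (3) E_U = Σ τk · PCj
    work = sum(t * s for t, s in zip(exec_times, assigned_speeds))
    eff = (work / (makespan * sum(pe_speeds))) ** -1 # (4) inverse utilization
    return makespan, slr, energy, eff

print(schedule_metrics(exec_times=[4.0, 6.0, 3.0],
                       assigned_speeds=[7.0, 7.0, 11.0],
                       pe_speeds=[7.0, 7.0, 11.0],
                       unit_cost=2.0))
```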
Now suppose each task U has a linear cellular automaton corresponding to the order of its T sub-tasks, and each cell of the automaton has a bee with greedy selection. At each period m of the processing resource allocation algorithm, there are a number of residual 𝑃𝑘 from different 𝑈𝑘, each of which has a bee with greedy selection. To continue the task, a selection power is defined for each 𝑃𝑘; this is the power of the bee assigned to it, and it equals the amount of processing operations remaining from task 𝑈𝑘. In short, in each period, any remaining or new task chooses the most powerful of the available processing resources, according to the selection power it has, and this process continues until all tasks are completed. If the allocation periods are denoted by m, each bee's selection power is given by (6).
$POW_k = P_k^{\text{initial}} - \sum_{m=1}^{M} \tau_k(m) \cdot PE_{S_j}(m)$  (6)
That is, the selection power of each bee for its own task equals the initial amount of processing required for that task minus the processing power allocated to it over the previous periods. Accordingly, at each iteration of the resource allocation algorithm, the task 𝑈𝑘 with the highest selection power obtains the fastest 𝑃𝐸𝑗. The energy consumption equation is then rewritten as (7).
$E_U = \sum_{m=1}^{M} \sum_{k=1}^{n} (\tau_{km} \cdot PC_{jm})$  (7)
And the efficiency of the algorithm is also equal to (8).
$\text{eff} = \left( \dfrac{\sum_{m=1}^{M} \sum_{k=1}^{n} (\tau_k \cdot PE_{jm})}{\text{Makespan} \cdot \sum_{j=1}^{J} PE_j} \right)^{-1}$  (8)
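The period-by-period greedy selection, with the bee power of (6) driving the choice, can be sketched as follows (our own reading of the scheme; the period length dt and the tie-breaking are assumptions):

```python
# Sketch of greedy bee allocation over periods (our reading of the scheme).
# remaining[k] plays the role of POW_k in (6): the initial volume minus the
# work delivered so far. Each period, the task with the most residual work
# (the strongest bee) takes the fastest processing element.

def greedy_allocate(volumes, pe_speeds, dt=1.0):
    remaining = list(volumes)
    periods = 0
    while any(r > 0 for r in remaining):
        order = sorted(range(len(remaining)), key=lambda k: -remaining[k])
        fastest_first = sorted(pe_speeds, reverse=True)
        for k, speed in zip(order, fastest_first):
            if remaining[k] > 0:
                remaining[k] = max(0.0, remaining[k] - speed * dt)
        periods += 1
    return periods

print(greedy_allocate(volumes=[120.0, 80.0, 60.0], pe_speeds=[7.0, 11.0]))
```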
The inputs of the problem, which are not known in advance, are the number of tasks and the processing volume of each one, together with the number of processors and the processing power and processing cost of each. The outputs of the problem are the total processing time, total energy consumed, and algorithm efficiency, which are compared in two modes: distribution with greedy selection and constant random distribution. In the constant random distribution, each task is assigned a processing resource at the start of the task, and this allocation never changes. One remaining point is that, given the abundance of servers and processing resources in cloud systems, the probability that the number of assigned tasks is greater than the number of processors is very low; usually 𝐽 ≥ 𝐾.
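For comparison, the constant random distribution baseline described above can be sketched in the same style (again our own illustration; contention between tasks bound to the same PE is ignored for simplicity):

```python
# Sketch of the constant random distribution baseline: every sub-task is
# bound to one randomly chosen PE at the start and never migrates.
# Contention between tasks sharing a PE is ignored in this simplification.
import random

def random_allocate(volumes, pe_speeds, dt=1.0):
    assigned = [random.choice(pe_speeds) for _ in volumes]  # fixed assignment
    remaining = list(volumes)
    periods = 0
    while any(r > 0 for r in remaining):
        for k, speed in enumerate(assigned):
            if remaining[k] > 0:
                remaining[k] = max(0.0, remaining[k] - speed * dt)
        periods += 1
    return periods
```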
3. RESULTS AND DISCUSSION
3.1. Test data of the problem
After introducing the overview, examining the technical terms, and reviewing similar work, we were able to present a new approach for distributing cloud computing resources among existing tasks. It is now time to look at the results and judge the validity and effectiveness of the proposed technique. First, we run the simulation with a 12-sub-task process on a cloud processor with 20 hardware resources. Naturally, a virtual machine is defined for each task at the same time, but how each virtual machine accesses the hardware is determined by the proposed algorithm.
The first step is to determine the number of different hardware resources with different processing powers. The reason for the random selection is that, because cloud servers draw on discrete processing resources spread over a very large area of the Internet, with many of these resources connecting and disconnecting, the existing processing power cannot be guaranteed. The processing volumes required by each sub-task are also unknown and are randomly determined within a reasonable range.
Also, the processing cost depends not only on the processing power but also on other parameters such as the distance and position of the processor, so it is likewise determined randomly. After the tasks and hardware resources are specified, an automaton and a bee are assigned to each task; their behavior is such that the location of each sub-task may change during program execution, depending on the selection power of its bee, as shown in Table 1 (a data-generation sketch is given below). The whole task execution using cellular automata to divide the processing resources and tasks in this example took 48 time periods, i.e., the 12 parallel automata changed 48 times.
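As an illustration of this setup (entirely our own sketch; the numeric ranges are arbitrary assumptions, since the paper only states that speeds, volumes, and costs are randomized):

```python
# Illustrative random test-data generation for the first simulation:
# 12 sub-tasks distributed over 20 hardware resources. The ranges below
# are our assumptions; the paper does not specify them.
import random

random.seed(42)                    # fixed seed for a reproducible example
n_subtasks, n_resources = 12, 20

volumes   = [random.uniform(50, 500) for _ in range(n_subtasks)]   # Pk
pe_speeds = [random.uniform(1, 20)   for _ in range(n_resources)]  # PEj
# cost depends on more than speed (e.g., distance), so draw it independently
pe_costs  = [random.uniform(1, 100)  for _ in range(n_resources)]  # PCj
```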
Table 1. Change of the 12-cell automata in the first 15 iterations of the algorithm
1 2 3 4 5 6 7 8 9 10 11 12
1 2 3 4 5 6 7 8 9 10 11 12
7 14 11 6 2 4 12 8 16 10 20 12
7 14 11 6 2 4 12 8 16 10 20 12
7 14 11 6 2 4 12 8 16 10 20 12
7 14 11 6 2 4 12 20 8 10 16 12
7 14 11 6 2 4 12 16 20 10 8 12
7 14 11 6 2 4 12 16 20 10 8 12
7 14 11 12 2 4 6 16 20 10 8 12
7 14 11 6 2 12 4 16 20 10 8 12
7 14 11 4 2 12 6 16 20 10 8 12
7 14 12 4 2 6 11 16 20 10 8 12
7 14 6 4 2 11 12 16 20 10 8 12
7 14 11 12 2 4 6 16 20 10 8 12
7 14 4 12 2 6 11 16 20 10 8 12
7 14 12 6 2 11 4 16 20 10 8 12
3.1.1. The response times
As stated, the whole task set is processed by the available cloud resources once using the variable cellular automata and once using random allocation of processing resources. We now compare the output parameters between these two modes. As shown in Figure 2, the processing time of the whole task set with the variable automata (left column) equals 48 seconds, while with random allocation of resources (right column) it equals 220 seconds; the difference is very significant. As can be seen, the proposed algorithm reduces the processing time by more than four times.
Figure 2. Processing time of whole tasks with variable automata
3.1.2. Energy consumption criterion
Figure 3 shows the energy consumption criterion. The energy consumption was about 450,000 joules with the cellular automata method and about 3.6 million joules with the random allocation method; the proposed method thus reduces energy consumption by eight times compared to random allocation. Table 1 shows the change of the 12-cell automata in the first 15 iterations of the algorithm; after these 15 iterations, because the remaining processing of most cells had reached zero, the other automata elements did not change until the last remaining processing was finished.
3.1.3. The efficiency criterion
Figure 4 compares the processing efficiency of the random distribution mode and the cellular automata mode, where the difference is enormous. It is observed that the improvement in all system performance parameters when using cellular automata with the greedy bee selector is far more significant than with random allocation of resources.
3.1.4. The SLR criterion
Figure 5 shows a comparison of the SLR parameter for the two methods. It can be seen that the ratio of the longest process to the shortest one was 45 with the cellular automata and 220 with the random distribution method. The improvement of all system performance parameters when using cellular automata with a greedy bee selector, compared to random allocation of resources, is very significant. It seems that the proposed method can have a significant impact on improving cloud computing services.
In order to investigate the performance of the algorithm under different conditions, we repeated the simulation with 45 sub-tasks with a maximum load of 4,000 processing units, distributed among 65 hardware resources with a processing capability of 1,000 units and a processing cost of 10,000 units. The results, shown in Figure 6, indicate that the time taken to perform all tasks with the presented method was only eight periods, whereas with the random allocation method the tasks took over 1,400 periods; the difference is very significant. The cost of processing with the proposed method was nearly two thousand times less than with the random allocation method, as shown in Figure 7.
Figure 8 shows that the processing efficiency appears to have decreased here, unlike in the previous simulation, and is less than that of the random distribution of resources. In the random distribution case, some processing resources appear to have spent much more time than others, and this influenced the graph. Figure 9 shows the SLR parameter: the difference in working time among the cloud processors is much smaller with the presented method than with the random distribution method, indicating the balanced distribution of tasks in the current algorithm.
Figure 3. Energy consumption criterion
Figure 4. The efficiency criterion
Figure 5. Comparison of SLR parameter
Figure 6. Total processing time with random processor distribution
Figure 7. The cost of processing in the proposed method was nearly two thousand times less than the random allocation method
Figure 8. Processing efficiency with the greedy bees algorithm
Figure 9. SLR with the greedy bees algorithm
4. CONCLUSION AND FUTURE WORKS
In the two simulations above, it was assumed that the power of the available hardware resources was less than the processing volume required by each task. To finish the discussion, we ran the simulation differently: this time, 65 tasks with a maximum processing volume of 5,000 units were distributed among one hundred cloud processors with a maximum processing power of 7,000 units and a processing cost of 10,000 units. The results showed that the current method is still superior to random allocation of resources, but the margin of superiority was not as significant as in the previous cases. The proposed algorithm for actively distributing processing resources among tasks therefore appears to be most effective when the available hardware processing power is less than the processing power required by the tasks. The remaining point is the allowed processing time: given the processors' speed limits and their number, this time constraint cannot always be honored. Although a long completion time is undesirable from the users' point of view, the user cannot expect the cloud system to complete an arbitrarily large task instantly. The allowed processing time is an independent parameter that can take any value, and one cannot expect the process distribution system to complete a very large task on small processors in a short time.
There are several suggestions for future research: i) applying input tasks in specific cases, since infinitely many different orderings can be considered for sequencing and processing tasks, and examining system performance in those cases; ii) considering two parallel sets of user tasks so as to engage processors that become idle before all tasks complete; iii) comparing techniques other than random distribution of tasks against the proposed method; and iv) considering unforeseen factors such as the unavailability of some processing resources. The current method has many advantages over the random distribution method, but to complement the discussion, researchers can implement other tactics for distributing processing resources alongside the method proposed in this study and compare their performance across both simulations.
BIOGRAPHIES OF AUTHORS
Mehdi Salehi Babadi received his M.Sc. degree in Software Engineering from the Department of Computer Engineering, Borujerd Branch, Islamic Azad University, Borujerd, Iran, in 2012. He has been a Ph.D. student in the Department of Computer Engineering at Borujerd Branch, Islamic Azad University, Borujerd, Iran, since 2015, advised by Professor Mohammad Ebrahim Shiri. His research interests include social networks, cellular automata, IoT, cloud computing, cryptography, and security. He can be contacted at email: msalehib@hotmail.com.
Mohammad Ebrahim Shiri received his Ph.D. degree in Computer Science from the Department of Computer Science, University of Montreal, Montreal, Canada, in 2000. Currently, he is an Assistant Professor in the Department of Computer Science at Amirkabir University of Technology in Tehran, Iran. His research interests include machine learning, e-learning, databases, network security, and cloud computing. He can be contacted at email: shiri@aut.ac.ir.
Mohammad Reza Moazami Goudarzi received his Ph.D. degree from the Department of Mathematics, Islamic Azad University, Tehran, Iran, in 2011. Currently, he is an Assistant Professor in the Department of Mathematics at Borujerd Branch, Islamic Azad University, Borujerd, Iran. His research areas include operations research and data envelopment analysis. He can be contacted at email: mrmoazamig@gmail.com.
Hamid Haj Seyyed Javadi received his M.Sc. and Ph.D. degrees from Amirkabir University of Technology, Tehran, Iran, in 1996 and 2003, respectively. He has been working as a full-time faculty member and Professor in the Department of Mathematics and Computer Science at Shahed University, Tehran, Iran. His research interests are IoT, computer algebra, cryptography, and security. He can be contacted at email: h.s.javadi@shahed.ac.ir.