This document reviews scheduling techniques in cloud computing, covering key concepts such as virtualization and a range of scheduling algorithms for tasks, workflows, real-time applications, and energy optimization. It analyzes algorithms for load balancing, fault tolerance, and resource utilization that improve performance metrics such as makespan, cost, and energy consumption, and concludes that effective scheduling is essential in cloud computing for providing on-demand services and completing tasks accurately and on time.
IRJET- Time and Resource Efficient Task Scheduling in Cloud Computing Environ...IRJET Journal
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
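The min-completion-time heuristic described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the task lengths and VM speeds are invented, and sorting longest-first is an assumption (the summary only says tasks are sorted by length).

```python
# Hypothetical sketch of a TBA-style scheduler: build an ETC matrix,
# sort tasks by length, and assign each to the VM that finishes it earliest.

def tba_schedule(task_lengths, vm_mips):
    # ETC[i][j]: expected time of task i on VM j (length / speed).
    etc = [[length / mips for mips in vm_mips] for length in task_lengths]
    ready = [0.0] * len(vm_mips)          # current wait time of each VM
    plan = {}
    # Longest tasks first (an assumption), so big jobs claim fast slots early.
    order = sorted(range(len(task_lengths)),
                   key=lambda i: task_lengths[i], reverse=True)
    for i in order:
        # Completion time on VM j = its current wait time + ETC entry.
        j = min(range(len(vm_mips)), key=lambda j: ready[j] + etc[i][j])
        ready[j] += etc[i][j]             # update the chosen VM's wait time
        plan[i] = j
    makespan = max(ready)
    return plan, makespan

plan, makespan = tba_schedule([4000, 1000, 3000, 2000], [1000, 500])
```

With these toy numbers the two longest tasks are split across the fast and slow VMs and the makespan comes out lower than a first-come, first-served assignment would give.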
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract— Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational architecture. To improve the overall performance of the cloud environment under deadline constraints, a task scheduling model is established to reduce system power consumption and execution time while improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the selection of Pareto solutions. The model is implemented on the open-source cloud simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains better solutions, balancing the load across multiple objectives.
A hybrid approach for scheduling applications in cloud computing environment IJECEIAES
Cloud computing plays an important role in our daily life. It has a direct and positive impact on sharing and updating data, knowledge, storage, and scientific resources across regions. Cloud computing performance depends heavily on the job scheduling algorithms used to manage queue waiting in modern scientific applications; researchers consider cloud computing a popular platform for new applications. Scheduling algorithms help design efficient queues in the cloud and play a vital role in reducing waiting and processing time. This paper proposes a novel job scheduling algorithm to enhance the performance of cloud computing and reduce the delay jobs experience while waiting in the queue. The proposed algorithm addresses significant challenges that hinder the development of cloud applications. Experimental results show that the proposed scheme achieves substantial improvements, with a reduction in job waiting time in the queue.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing popularity of the cloud computing paradigm and the emerging concept of federated cloud computing have spurred research efforts towards intelligent cloud service selection, aimed at developing techniques that enable cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems essentially provide access to large pools of resources, and hide a great deal of the underlying services from the user through virtualization. In this paper, the cloud data center is modelled as a queuing system with single task arrivals and a task request buffer of infinite capacity.
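For an M/G/1 queue like the one this abstract describes, the mean number of tasks in the system has a standard closed form, the Pollaczek-Khinchine formula. A minimal sketch (the arrival rate and service moments below are illustrative, not from the paper):

```python
# Mean number of tasks in an M/G/1 queue via the Pollaczek-Khinchine formula.
# lam: Poisson arrival rate; es: mean service time E[S]; var_s: Var(S).

def mg1_mean_tasks(lam, es, var_s):
    rho = lam * es                        # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    # Mean queue length: Lq = (lam^2 * Var(S) + rho^2) / (2 * (1 - rho))
    lq = (lam**2 * var_s + rho**2) / (2 * (1 - rho))
    return rho + lq                       # tasks in queue + task in service

# Exponential service (Var(S) = E[S]^2) reduces to M/M/1: L = rho / (1 - rho).
L = mg1_mean_tasks(lam=0.5, es=1.0, var_s=1.0)
```

The formula makes the role of service-time variance explicit: a deterministic server (`var_s=0`) holds fewer tasks on average than an exponential one at the same utilization.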
Multi-objective load balancing in cloud infrastructure through fuzzy based de...IAESIJAI
Cloud computing became a popular technology that influences not only product development but also makes technology businesses easier to run. Services such as infrastructure, platform, and software reduce the complexity of the technology requirements of any ecosystem. As the number of users of cloud-based services increases, the complexity of the back-end technologies also increases. The heterogeneous requirements of users, in terms of various configurations, create load-unbalancing issues. Hence effective load balancing in a cloud system, with respect to time and space, becomes crucial, as an unbalanced load adversely affects system performance. Since user requirements and expected performance are multi-objective, decision-making tools such as fuzzy logic yield good results, because fuzzy logic incorporates human procedural knowledge into decision making. The overall system performance can be further improved by dynamic resource scheduling using an optimization technique such as a genetic algorithm.
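The fuzzy decision-making idea can be illustrated with a tiny dispatcher. The abstract gives no concrete membership functions, so everything below (the triangular membership, the single CPU-load input, the selection rule) is an invented minimal example of the technique, not the paper's design:

```python
# Illustrative fuzzy scoring of VM load: a triangular membership function
# grades each VM's CPU load, and the dispatcher prefers the VM with the
# highest "low load" membership degree.

def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low_load_degree(cpu):                  # cpu utilization in [0, 1]
    return tri(cpu, -0.5, 0.0, 0.5)        # full membership at 0% load

def pick_vm(cpu_loads):
    return max(range(len(cpu_loads)),
               key=lambda i: low_load_degree(cpu_loads[i]))

vm = pick_vm([0.8, 0.3, 0.6])              # picks the least-loaded VM
```

A real multi-objective controller would aggregate several such degrees (CPU, memory, response time) through fuzzy rules before defuzzifying to a decision.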
Energy-Efficient Task Scheduling in Cloud EnvironmentIRJET Journal
1. The document discusses developing an energy-efficient task scheduling approach for cloud data centers using deep reinforcement learning.
2. It aims to minimize computational costs and cooling costs by optimizing task assignment to servers based on factors like temperature, CPU, and memory.
3. The proposed approach uses a greedy algorithm to schedule tasks to servers maintaining the lowest temperature, thus reducing energy consumption and improving data center performance.
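The greedy rule in point 3 can be sketched directly. The linear heat model below (each task adds a fixed temperature increment) is an illustrative simplification; the paper's approach also weighs CPU and memory, which this sketch omits:

```python
# Sketch of the greedy thermal-aware placement described above: each task
# goes to the server whose projected temperature after placement is lowest.

def greedy_thermal_schedule(tasks_heat, server_temps):
    temps = list(server_temps)            # current temperature of each server
    assignment = []
    for heat in tasks_heat:
        # Choose the server that stays coolest once this task is added.
        s = min(range(len(temps)), key=lambda i: temps[i] + heat)
        temps[s] += heat
        assignment.append(s)
    return assignment, temps

assignment, temps = greedy_thermal_schedule([2.0, 2.0, 1.0], [30.0, 34.0])
```

Keeping the hottest server cool is what ties task placement to cooling cost: the data center's cooling power is driven by its peak temperature, which the greedy rule suppresses.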
IRJET- An Energy-Saving Task Scheduling Strategy based on Vacation Queuing & ...IRJET Journal
This document summarizes a research paper that proposes an energy-saving task scheduling strategy for cloud computing based on vacation queuing and optimization of resources. The proposed approach aims to minimize energy consumption, reduce processing time, and increase the number of sleeping nodes to make the system more efficient. It introduces a task scheduling algorithm that assigns tasks to computing nodes based on their properties using a load balancer. Simulation results show the proposed algorithm reduces energy consumption while meeting task performance compared to the vacation queuing algorithm. The document discusses related work on energy optimization techniques, presents the proposed approach, and analyzes results showing improvements in energy usage, time, and idle nodes.
Optimization of energy consumption in cloud computing datacenters IJECEIAES
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenters using energy-efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models have saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance when compared to SAT-based ILP solvers, especially as the number of tasks and resources grow in size.
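The objective behind the ILP formulation (active cores draw full power, idle cores draw less, minimize the total) can be shown with a toy exhaustive search. This is not the paper's model or a real solver, just a brute-force illustration with invented power figures and capacities:

```python
# Toy version of the energy-minimizing assignment behind the ILP formulation:
# exhaustively try every task-to-core assignment and keep the one whose total
# power (active cores at p_active, idle cores at p_idle) is lowest.
from itertools import product

def min_energy_assignment(task_loads, core_caps, p_active, p_idle):
    best, best_power = None, float("inf")
    for assign in product(range(len(core_caps)), repeat=len(task_loads)):
        used = [0.0] * len(core_caps)
        for load, core in zip(task_loads, assign):
            used[core] += load
        if any(u > cap for u, cap in zip(used, core_caps)):
            continue                      # capacity constraint violated
        power = sum(p_active if u > 0 else p_idle for u in used)
        if power < best_power:
            best, best_power = assign, power
    return best, best_power

# Packing both tasks on one core leaves the other idle, which is cheaper here.
assign, power = min_energy_assignment([0.4, 0.5], [1.0, 1.0],
                                      p_active=10.0, p_idle=2.0)
```

Real instances use an ILP or SAT solver instead of enumeration, since the search space grows as cores^tasks, which is exactly why the paper compares solver performance as the problem size grows.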
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives closed-form solutions for the load fractions to be assigned to each machine, considering both computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit from the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
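The core DLT idea, that all machines should finish their fractions at the same time, admits a very simple closed form in the computation-only case. This sketch deliberately ignores the communication and energy terms of the paper's full model; the machine speeds are invented:

```python
# Simplified DLT-style split (computation only): with simultaneous load
# distribution, equal finish times mean alpha_i * w_i is the same for every
# machine, so each machine's fraction is proportional to its speed 1 / w_i.

def dlt_fractions(w):
    """w[i]: time for machine i to process one unit of load."""
    inv = [1.0 / wi for wi in w]
    total = sum(inv)
    return [x / total for x in inv]       # fractions sum to 1

fracs = dlt_fractions([1.0, 2.0, 4.0])    # faster machines take more load
```

Since `fracs[i] * w[i]` is constant across machines, every machine finishes at the same instant, which is the optimality condition the full closed-form solutions generalize to computation-plus-energy costs.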
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied developing models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based task allocation strategy can minimize the overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption, and execution time with respect to other existing approaches.
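Why DVFS saves energy follows from the CMOS dynamic power model: power scales roughly as C·V²·f, and since the required voltage drops with frequency, running slower is super-linearly cheaper per cycle. A sketch with illustrative constants (not from the paper):

```python
# Classic CMOS dynamic power model and per-task energy under DVFS.

def dynamic_power(c_eff, volt, freq):
    return c_eff * volt**2 * freq          # P = C * V^2 * f

def energy_for_task(cycles, c_eff, volt, freq):
    runtime = cycles / freq                # slower frequency -> longer runtime
    return dynamic_power(c_eff, volt, freq) * runtime   # = C * V^2 * cycles

# Halving frequency lets voltage drop (say 1.2 V -> 0.9 V): the task takes
# the same number of cycles but consumes less total energy.
e_fast = energy_for_task(1e9, c_eff=1e-9, volt=1.2, freq=2e9)
e_slow = energy_for_task(1e9, c_eff=1e-9, volt=0.9, freq=1e9)
```

The catch, which motivates the QoS constraint above, is the longer runtime: DVFS schedulers slow tasks only as far as deadlines allow.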
Cloud computing offers users worldwide low-cost, on-demand services according to their requirements. In recent years, the rapid growth and service quality of cloud computing have made it an attractive technology for tech companies. However, with the growing number of data center resources, high levels of energy are being consumed, with more carbon emitted into the air. For instance, the estimated electric power consumption of a Google data center is equivalent to the energy requirement of a small city. Also, even if the virtualization of resources in cloud computing datacenters reduces the number of physical machines and the cost of hardware equipment, it is still constrained by the energy consumption issue. Energy efficiency has become a major concern for today's cloud datacenter researchers, who aim to simultaneously improve cloud service quality and reduce operating cost. This paper analyses and discusses the literature on energy efficiency enhancement in cloud computing datacenters. The main objective is the best management of the physical machines that host the virtual ones in cloud datacenters.
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...IJECEIAES
Energy consumption in cloud computing occurs partly because of the unreasonable way in which tasks are scheduled, so energy-aware task scheduling is a major concern: wasteful scheduling causes significant energy loss, reduces profit margins, and produces high carbon emissions, which is not environmentally sustainable. Hence, energy-efficient task scheduling solutions are required to attain variable resource management, live migration, minimal virtual machine design, overall system efficiency, reduced operating costs, increased system reliability, and environmental protection, all with minimal performance overhead. This paper provides a comprehensive overview of energy-efficient techniques and approaches and proposes an energy-aware resource utilization framework to control traffic and overloads in cloud networks.
IRJET- Load Balancing and Crash Management in IoT EnvironmentIRJET Journal
This document proposes a system to provide load balancing and crash management in an Internet of Things (IoT) environment. It introduces an Application Delivery Controller (ADC) that sits between devices and data centers. The ADC monitors the load and availability of data centers using a performance counter algorithm. It routes traffic to less busy data centers using the MQTT protocol if load increases or a data center crashes. This provides uninterrupted connectivity and prevents the whole system from going down during network failures or crashes. The system was implemented with clients that can request services or publish information to servers, which acknowledge tasks using a unique machine ID for future connections.
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...IAEME Publication
The document proposes an energy efficient virtual machine (VM) assignment algorithm for cloud networks. The algorithm aims to minimize energy consumption by considering both the energy used by VMs and balancing resource utilization across host machines. It first measures VM and host energy usage, then classifies VMs as CPU-type or memory-type based on their resource usage. The algorithm schedules VMs onto hosts in a way that balances CPU and memory utilization while selecting hosts that minimize increased energy usage. The algorithm is evaluated in CloudSim and shown to significantly reduce energy consumption compared to other techniques.
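The classify-then-balance idea in that summary can be sketched as follows. The threshold (dominant demand) and the imbalance metric are illustrative guesses, not the paper's actual policy, and the energy term of the real algorithm is omitted:

```python
# Sketch of the classification-and-balance idea: label each VM CPU-type or
# memory-type by its dominant demand, then place it on the host where it most
# evens out the gap between CPU and memory utilization.

def classify(vm):                          # vm = (cpu_demand, mem_demand)
    return "cpu" if vm[0] >= vm[1] else "mem"

def place(vm, hosts):
    """hosts: list of [cpu_used, mem_used]; pick host minimizing imbalance."""
    def imbalance_after(h):
        return abs((h[0] + vm[0]) - (h[1] + vm[1]))
    i = min(range(len(hosts)), key=lambda k: imbalance_after(hosts[k]))
    hosts[i][0] += vm[0]
    hosts[i][1] += vm[1]
    return i

hosts = [[0.6, 0.2], [0.2, 0.6]]           # host 0 is CPU-heavy, host 1 memory-heavy
chosen = place((0.1, 0.4), hosts)          # a mem-type VM best balances host 0
```

Pairing CPU-type VMs with memory-heavy hosts (and vice versa) keeps both resources busy on fewer machines, which is what lets the real algorithm consolidate load and cut energy.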
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
An energy optimization with improved QOS approach for adaptive cloud resources IJECEIAES
In recent times, the utilization of cloud computing VMs has greatly increased in our day-to-day life due to the widespread use of digital applications, network appliances, portable gadgets, and information devices. On these cloud computing VMs, numerous schemes can be implemented, such as multimedia signal processing methods, so efficient performance of the VMs becomes an obligatory constraint, particularly for those methods. However, high energy consumption and reduced VM efficiency are key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption and performs operations in much less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its various costs, as well as energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption, and average power required compared with state-of-the-art techniques.
A survey paper on an improved scheduling algorithm for task offloading on cloudAditya Tornekar
This document summarizes an article that discusses task offloading from mobile devices to the cloud. It begins by explaining how mobile devices have limitations like battery life and processing power. To address this, the concept of offloading tasks to the cloud is proposed to reduce energy consumption on devices and improve their computation capabilities. The document then discusses how scheduling algorithms can help organize tasks migrated to the cloud efficiently. It reviews several scheduling algorithms and their effectiveness in reducing response time and power consumption. Finally, it discusses how virtual machine management techniques on the cloud can help efficiently utilize resources and further reduce energy usage.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
This document summarizes an article from the International Journal of Computer Networks & Communications that proposes an Auto Resource Management (ARM) scheme to improve reliability and reduce energy consumption in heterogeneous cloud computing environments. The ARM scheme includes three components: 1) static and dynamic thresholds to detect host over/underutilization, 2) a virtual machine selection policy, and 3) a method to select placement hosts for migrated VMs. It also proposes a Short Prediction Resource Utilization method to improve decision making by considering predicted future utilization along with current utilization. The scheme is tested on a cloud simulator using real workload trace data, and results show it can enhance decision making, reduce energy consumption and SLA violations.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It begins with background on cloud computing and queuing theory. It then models a cloud data center as an [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. Key performance factors analyzed include mean number of tasks in the system. Analytical results are obtained by solving the model to estimate response time distribution and other metrics. The modeling approach allows determining the relationship between performance and number of servers/buffer size.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...ijgca
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It presents an analytical model of a cloud data center as a [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity. The model is solved to obtain important performance metrics like mean number of tasks in the system. Prior work on modeling cloud systems and queuing theory concepts are also reviewed. Key assumptions of the proposed model include tasks following a Poisson arrival process and service times having a general probability distribution.
IMPROVING REAL TIME TASK AND HARNESSING ENERGY USING CSBTS IN VIRTUALIZED CLOUDijcax
Cloud computing lets business customers scale their resource usage up and down based on need, a capability made possible by virtualization technology. The scheduling objectives are to improve the system's schedulability for real-time tasks and to save energy. To achieve these objectives, we employed virtualization and rolling-horizon optimization with a vertical scheduling operation. The project considers the Cluster Scoring Based Task Scheduling (CSBTS) algorithm, which aims to decrease task completion time; policies for VM creation, migration and cancellation dynamically adjust the scale of the cloud while meeting real-time requirements and saving energy.
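The abstract does not give the CSBTS score formula, so the following is only a hypothetical sketch of cluster scoring and greedy subtask assignment: the weights `w_t`, `w_c`, the field names, and the load-discount step are all illustrative assumptions, not the paper's method.

```python
def cluster_score(tx_power, cpu_power, w_t=0.5, w_c=0.5):
    # Hypothetical weighted score over available transmission and
    # computing power; the paper's exact formula is not given here.
    return w_t * tx_power + w_c * cpu_power

def assign_subtasks(subtasks, clusters):
    """Greedily place each subtask on the best-scoring cluster, then
    discount that cluster's computing power to reflect its new load."""
    placement = {}
    for task in subtasks:
        best = max(clusters, key=lambda c: cluster_score(c["tx"], c["cpu"]))
        placement[task["name"]] = best["name"]
        best["cpu"] = max(0.0, best["cpu"] - task["demand"])  # updated score input
    return placement
```

Updating the score after each placement is what makes the assignment spread load instead of piling every subtask onto the initially best cluster.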
The document presents a cluster scoring based task scheduling (CSBTS) algorithm for improving real-time task completion and harnessing energy in virtualized cloud environments. The CSBTS aims to decrease task completion time by calculating cluster score values based on available transmission power and computing power. It also establishes policies for dynamically creating, migrating, and cancelling virtual machines to reduce energy consumption while meeting real-time requirements. The algorithm considers data and computation intensive jobs and divides them into subtasks to assign to clusters based on their updated scores. The goal is to optimize scheduling of real-time tasks and energy savings.
Users Approach on Providing Feedback for Smart Home Devices – Phase IIijujournal
Smart Home technology has accomplished extraordinary success in making individuals' lives more straightforward and relaxing. Technology has recently brought about numerous savvy and refined frameworks that advance intelligent living. In this paper, we investigate the behavioral intention behind users' approach to providing feedback for smart home devices. We conduct an online survey of a sample of three to five students selected by simple random sampling to study users' motives for giving feedback on smart home devices and their expectations. We observed that most users are ready to actively share their input on smart home devices to improve the product's service and quality, fulfill users' needs, and make their lives easier.
Similar to A Review on Scheduling in Cloud Computing
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: An important challenge in cloud computing environments is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives a closed-form solution for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit from the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
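Under the simplest DLT setting — communication and energy costs neglected — equating the machines' finish times yields load fractions proportional to each machine's speed. A minimal sketch of that baseline follows; the paper's actual closed form additionally accounts for energy cost.

```python
def load_fractions(unit_times):
    """Fractions alpha_i of a divisible load that equalize finish times
    when communication cost is negligible: alpha_i proportional to 1/w_i,
    where w_i is the time machine i needs per unit of load."""
    inverse_speeds = [1.0 / w for w in unit_times]
    total = sum(inverse_speeds)
    return [x / total for x in inverse_speeds]

fracs = load_fractions([1.0, 2.0, 4.0])   # fastest machine gets the most load
```

Every machine then finishes at the same time (here alpha_i * w_i = 4/7 for all three), which is the makespan-optimality condition DLT builds on.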
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied developing models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based task-allocation strategy can minimize overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption and execution time with respect to other existing approaches.
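The DVFS idea these abstracts rely on can be illustrated with a toy model: with dynamic power growing roughly as f³ and runtime as cycles/f, energy per task scales as f², so the slowest deadline-feasible frequency minimizes energy. The constant `kappa` and the frequency list below are illustrative assumptions, not values from the paper.

```python
def pick_frequency(cycles, deadline, freqs, kappa=1.0):
    """Lowest available frequency that still meets the deadline.
    With P ~ kappa * f**3 and runtime t = cycles / f, energy per task
    ~ kappa * cycles * f**2, so slower (but feasible) is always cheaper."""
    feasible = [f for f in sorted(freqs) if cycles / f <= deadline]
    if not feasible:
        raise ValueError("no frequency meets the deadline")
    f = feasible[0]
    return f, kappa * cycles * f ** 2

f, energy = pick_frequency(1e9, 1.0, [0.8e9, 1.2e9, 2.0e9])  # picks 1.2e9
```

The QoS constraint appears only through the deadline filter; everything above the minimal feasible frequency just burns extra energy.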
Cloud computing offers users worldwide low-cost on-demand services according to their requirements. In recent years, the rapid growth and service quality of cloud computing have made it an attractive technology for many tech companies. However, with the growing number of data-center resources, high levels of energy are being consumed, with more carbon emissions into the air. For instance, the estimated electric power consumption of a Google data center is equivalent to the energy requirement of a small city. Also, even though virtualization of resources in cloud computing datacenters may reduce the number of physical machines and the cost of hardware equipment, it is still constrained by the energy consumption issue. Energy efficiency has become a major concern for today's cloud datacenter researchers, together with simultaneously improving cloud service quality and reducing operating cost. This paper analyses and discusses the literature on works related to energy-efficiency enhancement in cloud computing datacenters. The main objective is the best possible management of the physical machines which host the virtual ones in cloud datacenters.
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne...IJECEIAES
Energy consumption in cloud computing occurs due to the unreasonable way in which tasks are scheduled, so energy-aware task scheduling is a major concern: wasted energy reduces the profit margin and produces high carbon emissions, which is not environmentally sustainable. Hence, energy-efficient task-scheduling solutions are required to attain variable resource management, live migration, minimal virtual machine design, overall system efficiency, reduced operating costs, increased system reliability, and environmental protection with minimal performance overhead. This paper provides a comprehensive overview of energy-efficient techniques and approaches and proposes an energy-aware resource utilization framework to control traffic and overloads in cloud networks.
IRJET- Load Balancing and Crash Management in IoT EnvironmentIRJET Journal
This document proposes a system to provide load balancing and crash management in an Internet of Things (IoT) environment. It introduces an Application Delivery Controller (ADC) that sits between devices and data centers. The ADC monitors the load and availability of data centers using a performance counter algorithm. It routes traffic to less busy data centers using the MQTT protocol if load increases or a data center crashes. This provides uninterrupted connectivity and prevents the whole system from going down during network failures or crashes. The system was implemented with clients that can request services or publish information to servers, which acknowledge tasks using a unique machine ID for future connections.
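A hypothetical sketch of the ADC's routing decision as described — skip crashed data centers and send the request to the least-loaded one; the field names and load representation are assumptions for illustration only:

```python
def route(datacenters):
    """Hypothetical ADC decision: ignore crashed data centers and pick
    the one whose performance counter reports the lowest load."""
    alive = [dc for dc in datacenters if dc["alive"]]
    if not alive:
        raise RuntimeError("no data center available")
    return min(alive, key=lambda dc: dc["load"])["name"]
```

Re-running this per request is what gives the described behavior: traffic drains away from a busy or crashed data center automatically, with no client-side change.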
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...IAEME Publication
The document proposes an energy efficient virtual machine (VM) assignment algorithm for cloud networks. The algorithm aims to minimize energy consumption by considering both the energy used by VMs and balancing resource utilization across host machines. It first measures VM and host energy usage, then classifies VMs as CPU-type or memory-type based on their resource usage. The algorithm schedules VMs onto hosts in a way that balances CPU and memory utilization while selecting hosts that minimize increased energy usage. The algorithm is evaluated in CloudSim and shown to significantly reduce energy consumption compared to other techniques.
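A rough sketch of the host-selection step described above: with a convex power curve (the exponent and wattage numbers are illustrative, not the paper's model), picking the host whose total power rises least naturally balances utilization across machines.

```python
def power(util, p_idle=100.0, p_max=250.0):
    # Convex server power curve (illustrative numbers): busy hosts pay
    # more watts per extra unit of utilization than idle ones.
    return p_idle + (p_max - p_idle) * util ** 2

def best_host(vm_util, hosts):
    """Place the VM on the feasible host whose total power rises least."""
    candidates = [h for h in hosts if h["util"] + vm_util <= 1.0]
    return min(candidates,
               key=lambda h: power(h["util"] + vm_util) - power(h["util"]))
```

The paper additionally distinguishes CPU-type from memory-type VMs; the same delta-power criterion would then be evaluated per resource dimension.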
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
An energy optimization with improved QOS approach for adaptive cloud resources IJECEIAES
In recent times, the utilization of cloud computing VMs has been greatly enhanced in our day-to-day life due to the ample use of digital applications, network appliances, portable gadgets, information devices, etc. On these cloud computing VMs numerous different schemes can be implemented, such as multimedia signal-processing methods, so efficient performance of these VMs becomes an obligatory constraint, precisely for such methods. However, large energy consumption and reduced efficiency of these cloud computing VMs are the key issues faced by cloud computing organizations. Therefore, here we have introduced a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption as well as performing operations in very little time. We have demonstrated an efficient resource allocation and utilization technique that optimizes the model by reducing its different costs, and efficient energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of our proposed model ACRR in terms of average run time, power consumption and average power required compared to other state-of-the-art techniques.
A survey paper on an improved scheduling algorithm for task offloading on cloudAditya Tornekar
This document summarizes an article that discusses task offloading from mobile devices to the cloud. It begins by explaining how mobile devices have limitations like battery life and processing power. To address this, the concept of offloading tasks to the cloud is proposed to reduce energy consumption on devices and improve their computation capabilities. The document then discusses how scheduling algorithms can help organize tasks migrated to the cloud efficiently. It reviews several scheduling algorithms and their effectiveness in reducing response time and power consumption. Finally, it discusses how virtual machine management techniques on the cloud can help efficiently utilize resources and further reduce energy usage.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET...IJCNCJournal
October 2023-Top Cited Articles in IJU.pdfijujournal
International Journal of Ubiquitous Computing (IJU) is a quarterly open-access peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology and applications of ubiquitous computing. The current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous computing presents rather arduous requirements of robustness, reliability and availability to the end user, and has received significant and sustained research interest in designing and deploying large-scale, high-performance computational applications in real life. The aim of the journal is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
ACCELERATION DETECTION OF LARGE (PROBABLY) PRIME NUMBERSijujournal
This document discusses methods for efficiently generating large prime numbers for use in RSA cryptography. It presents experimental results measuring the time taken to generate prime numbers when trial dividing the starting number by different numbers of initial primes before applying the Miller-Rabin primality test. The optimal number of trial divisions can be estimated as B=E/D, where E is the time for Miller-Rabin test and D is the maximum usefulness of trial division. Experimental results on different sized numbers support dividing by around 20 initial primes as optimal.
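The combination the document measures — trial division by a small set of initial primes followed by the Miller–Rabin test — can be sketched as follows; the list of about 20 trial primes matches the reported optimum, while the round count is an arbitrary illustrative choice:

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
                53, 59, 61, 67, 71]   # ~20 primes, near the reported optimum

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in SMALL_PRIMES:             # cheap trial division first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                    # write n-1 = d * 2**s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):            # Miller-Rabin with random witnesses
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a is a witness: n is composite
    return True
```

Trial division rejects most random candidates for the cost of a few modulo operations, so each saved Miller–Rabin invocation (a full modular exponentiation) is where the speedup comes from.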
A novel integrated approach for handling anomalies in RFID dataijujournal
Radio Frequency Identification (RFID) is a convenient technology employed in various applications. The success of these RFID applications depends heavily on the quality of the data stream generated by RFID readers, but the various anomalies found in RFID data limit the widespread adoption of this technology. Our work eliminates the anomalies present in RFID data in an effective manner so that the data can be used by high-end applications. Our approach is a hybrid of middleware and deferred cleaning, because it is not always possible to remove all anomalies and redundancies in the middleware: the processing of the remaining anomalies is deferred until query time, where they are cleaned by business rules. Experimental results show that the proposed approach performs the cleaning more effectively than existing approaches.
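As one concrete example of middleware-level cleaning, duplicate reads of the same tag by the same reader within a short time window can be suppressed; this sketch is only illustrative of that idea, not the paper's actual rule set:

```python
def dedup_reads(reads, window=2.0):
    """Drop repeated (tag, reader) sightings arriving within `window`
    seconds of the previous one; each sighting refreshes the timer."""
    last_seen, cleaned = {}, []
    for t, tag, reader in sorted(reads):
        key = (tag, reader)
        if key not in last_seen or t - last_seen[key] > window:
            cleaned.append((t, tag, reader))
        last_seen[key] = t               # refresh even for dropped reads
    return cleaned
```

Anomalies that cannot be resolved this early (e.g. cross-reader conflicts) are exactly the ones the hybrid approach defers to query-time business rules.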
UBIQUITOUS HEALTHCARE MONITORING SYSTEM USING INTEGRATED TRIAXIAL ACCELEROMET...ijujournal
Ubiquitous healthcare has become one of the prominent areas of research in order to address the challenges encountered in the healthcare environment. Contributing to this area, this study developed a system prototype that recommends diagnostic services based on physiological data collected in real time from a distant patient. The prototype uses WBAN body sensors worn by the individual and an Android smartphone as a personal server. Physiological data is collected and uploaded to a Medical Health Server (MHS) via GPRS/internet to be analysed. Our implemented prototype monitors the activity, location and physiological data such as SpO2 and Heart Rate (HR) of the elderly and patients in rehabilitation. The uploaded information can be accessed in real time by medical practitioners through a web application.
ENHANCING INDEPENDENT SENIOR LIVING THROUGH SMART HOME TECHNOLOGIESijujournal
The population of elderly folks is ballooning worldwide as people live longer. But getting older often
means declining health and trouble living solo. Smart home tech could keep an eye on old folks and get
help quickly when needed so they can stay independent. This paper looks at a system combining wireless
sensors, video watches, automation, resident monitoring, emergency detection, and remote access. Sensors
track health signs, activities, appliance use. Video analytics spot odd stuff like falls. Sensor fusion and
machine learning find normal patterns so wonks can see unhealthy changes and send alerts. Multi-channel
alerts reach caregivers and emergency responders. A LabVIEW-based system can integrate the devices, enable local and remote oversight, and control and handle emergency responses. Benefits seem to be early illness clues, quick help, less burden on caregivers, and optimized home settings. But will older adults actually use all this tech? Can we prove it really helps people live longer and better? More research on maximizing reliability and evaluating real-world impacts is needed. But designed thoughtfully, smart homes could profoundly improve the aging experience.
HMR LOG ANALYZER: ANALYZE WEB APPLICATION LOGS OVER HADOOP MAPREDUCEijujournal
In today’s Internet world, log file analysis is becoming a necessary task for analyzing customer behavior in order to improve advertising and sales; likewise, for datasets in domains such as environmental, medical and banking systems, it is important to analyze the log data to extract the required knowledge. Web mining is the
process of discovering the knowledge from the web data. Log files are getting generated very fast at the
rate of 1-10 Mb/s per machine, a single data center can generate tens of terabytes of log data in a day.
These datasets are huge. In order to analyze such large datasets we need parallel processing system and
reliable data storage mechanism. Virtual database system is an effective solution for integrating the data
but it becomes inefficient for large datasets. The Hadoop framework provides reliable data storage by
Hadoop Distributed File System and MapReduce programming model which is a parallel processing
system for large datasets. The Hadoop Distributed File System breaks up input data and sends fractions of
the original data to several machines in the Hadoop cluster to hold blocks of data. This mechanism helps
process log data in parallel using all the machines in the cluster and computes results efficiently.
The dominant approach provided by Hadoop, “store first, query later”, loads the data into the Hadoop
Distributed File System and then executes queries written in Pig Latin. This approach reduces the response
time as well as the load on to the end system. This paper proposes a log analysis system using Hadoop
MapReduce which will provide accurate results in minimum response time.
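The map-shuffle-reduce flow the abstract describes can be sketched in plain Python as a single-machine stand-in for Hadoop; the access-log line format (`ip timestamp url status`) and the hit-counting job are illustrative assumptions, not the paper's actual schema:

```python
from collections import defaultdict

def map_phase(log_lines):
    """Map: emit a (url, 1) pair per request. Assumes an illustrative
    'ip timestamp url status' line format, not a real Hadoop log schema."""
    for line in log_lines:
        _ip, _ts, url, _status = line.split()
        yield url, 1

def shuffle(pairs):
    """Shuffle: group all values under their key, as Hadoop does
    between the map and reduce stages."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped counts per URL."""
    return {key: sum(values) for key, values in groups.items()}

logs = [
    "10.0.0.1 2016-07-01T10:00 /index.html 200",
    "10.0.0.2 2016-07-01T10:01 /index.html 200",
    "10.0.0.1 2016-07-01T10:02 /about.html 404",
]
hits = reduce_phase(shuffle(map_phase(logs)))
```

On a real cluster, HDFS distributes the input blocks and each mapper runs where its block is stored; the shuffle step is what the framework performs over the network between nodes.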
SERVICE DISCOVERY – A SURVEY AND COMPARISONijujournal
The document summarizes and compares several major service discovery approaches. It provides an overview of service discovery objectives and techniques, then surveys prominent protocols including SLP, Jini, and UPnP. Each approach is analyzed based on features like service description, discovery architecture, announcement/query mechanisms, and how they handle service usage and dynamic network changes. The comparison aims to identify strengths and limitations to guide future research in improving service discovery.
SIX DEGREES OF SEPARATION TO IMPROVE ROUTING IN OPPORTUNISTIC NETWORKSijujournal
Opportunistic Networks are able to exploit social behavior to create connectivity opportunities. This
paradigm uses pair-wise contacts for routing messages between nodes. In this context we investigated if the
“six degrees of separation” conjecture of small-world networks can be used as a basis to route messages in
Opportunistic Networks. We propose a simple approach for routing that outperforms some popular
protocols in simulations that are carried out with real world traces using ONE simulator. We conclude that
static graph models are not suitable for underlay routing approaches in highly dynamic networks like
Opportunistic Networks without taking account of temporal factors such as time, duration and frequency of
previous encounters.
International Journal of Ubiquitous Computing (IJU)ijujournal
International Journal of Ubiquitous Computing (IJU) is a quarterly open access peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology and applications of ubiquitous computing. The current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous computing presents rather arduous requirements of robustness, reliability and availability to the end user, and has received significant and sustained research interest in terms of designing and deploying large-scale, high-performance computational applications in real life. The aim of the journal is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
PERVASIVE COMPUTING APPLIED TO THE CARE OF PATIENTS WITH DEMENTIA IN HOMECARE...ijujournal
The aging population and the consequent increase in the incidence of dementias pose many challenges
to health systems, mainly related to infrastructure, low service quality and high costs. One solution is
to provide care at the patient's home through home care services. However, this is not a trivial task,
since a patient with dementia requires constant care and monitoring from a caregiver, who suffers
physical and emotional overload. In this context, this work presents a model for the development of
pervasive systems aimed at helping the care of these patients, in order to lessen the burden on the
caregiver while the patient continues to receive the necessary care.
A proposed Novel Approach for Sentiment Analysis and Opinion Miningijujournal
As people become increasingly dependent on the Internet, the need for analysis of user opinions is
growing exponentially. Customers post their experiences and opinions about products, policies and
services, but because of the massive volume of reviews, customers cannot read them all. To solve this
problem, a great deal of research is being carried out in Opinion Mining. Through Opinion Mining, we can
learn the content of whole bodies of product reviews. Blogs are websites that allow one or more
individuals to write about things they want to share with others. The valuable data contained in posts
from a large number of users across geographic, demographic and cultural boundaries provide a rich data
source not only for commercial exploitation but also for psychological and sociopolitical research. This
paper tries to demonstrate the plausibility of the idea through a clustering and classifying opinion
mining experiment on blog posts about recent product, policy and service reviews. We propose a novel
approach for analyzing reviews for customer opinion.
USABILITY ENGINEERING OF GAMES: A COMPARATIVE ANALYSIS OF MEASURING EXCITEMEN...ijujournal
Usability engineering and usability testing are concepts that continue to evolve. Interesting research studies
and new ideas come up every now and then. This paper tests the hypothesis of using EDA-based
physiological measurement as a usability testing tool by considering three measures: observers'
opinions, self-reported data and EDA-based physiological sensor data. These data were analyzed
comparatively and statistically. It concludes by discussing the findings obtained from those
subjective and objective measures, which partially support the hypothesis.
SECURED SMART SYSTEM DESIGN IN PERVASIVE COMPUTING ENVIRONMENT USING VCSijujournal
Ubiquitous computing uses mobile phones or tiny devices with embedded sensors for application
development. Collecting and storing the information generated by these devices is a major task, and
onward transmission of the data to its intended destination is delay tolerant. In this paper, we
propose a new security algorithm for a Pervasive Computing Environment (PCE) system using a Public-key
Encryption (PKE) algorithm, a Biometric Security (BS) algorithm and a Visual Cryptography Scheme (VCS)
algorithm. The proposed PCE monitoring system automates various home appliances using VCS and also
provides security against intrusion using a Zigbee IEEE 802.15.4-based sensor network; GSM and Wi-Fi
networks are integrated through a standard home gateway.
PERFORMANCE COMPARISON OF ROUTING PROTOCOLS IN MOBILE AD HOC NETWORKSijujournal
Routing protocols play an important role in any Mobile Ad Hoc Network (MANET). Researchers have
developed several routing protocols with different performance levels. In this paper we give a
performance evaluation of the AODV, DSR, DSDV, OLSR and DYMO routing protocols in Mobile Ad Hoc
Networks (MANETs) to determine the best in different scenarios. We analyse these MANET routing
protocols using the NS-2 simulator and examine how the number of nodes influences their
performance. In this study, performance is calculated in terms of Packet Delivery Ratio, Average End to
End Delay, Normalised Routing Load and Average Throughput.
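The four metrics named above are simple ratios over counters collected from the simulator; a minimal sketch (the numbers below are illustrative, not NS-2 output):

```python
def packet_delivery_ratio(received, sent):
    """PDR: fraction of sent data packets delivered to destinations."""
    return received / sent

def average_end_to_end_delay(delays_ms):
    """Mean latency (ms) of successfully delivered packets."""
    return sum(delays_ms) / len(delays_ms)

def normalized_routing_load(routing_packets, data_packets_received):
    """Routing overhead: control packets per delivered data packet."""
    return routing_packets / data_packets_received

# Illustrative counter values, not simulation results:
pdr = packet_delivery_ratio(received=950, sent=1000)
delay = average_end_to_end_delay([12.0, 18.0, 30.0])
nrl = normalized_routing_load(routing_packets=1900, data_packets_received=950)
```

Average throughput is computed analogously, as total bytes delivered divided by the simulation time.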
The document compares the performance of various optical character recognition (OCR) tools. It analyzes eight OCR tools - Online OCR, Free Online OCR, OCR Convert, Convert image to text.net, Free OCR, i2OCR, Free OCR to Word Convert, and Google Docs. The document provides sample outputs of each tool processing the same input image. It then evaluates the tools based on character accuracy, character error rate, special symbol accuracy, and special symbol error rate to determine which tools most accurately convert images to editable text.
Optical Character Recognition (OCR) is a technique used to convert a scanned image into an editable
text format. Many different types of OCR tools are commercially available today, and OCR is a useful
and popular method for many types of applications. The accuracy of OCR results depends on the text
pre-processing and segmentation algorithms, and image quality is one of the most important factors
affecting recognition quality in OCR tools. Images can be processed independently (.png, .jpg, and
.gif files) or in multi-page PDF documents (.pdf). The primary objective of this work is to provide an
overview of various OCR tools and an analysis of their performance using two factors of OCR tool
performance, i.e. accuracy and error rate.
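Character accuracy and its complement, the error rate, can be sketched as follows; real OCR evaluations usually align strings by edit distance, so the positional comparison below is a simplifying assumption:

```python
def character_accuracy(reference, recognized):
    """Fraction of reference characters reproduced at the same position.
    Positional matching is a simplification; evaluations commonly use
    edit distance to handle insertions and deletions."""
    matches = sum(1 for r, o in zip(reference, recognized) if r == o)
    return matches / len(reference)

# Hypothetical ground truth and OCR output ('l' misread as '1' twice):
ref = "Hello, World!"
out = "He1lo, Wor1d!"
acc = character_accuracy(ref, out)
err = 1.0 - acc
```

Special-symbol accuracy, as used in the comparison, is the same ratio restricted to non-alphanumeric characters.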
DETERMINING THE NETWORK THROUGHPUT AND FLOW RATE USING GSR AND AAL2Rijujournal
In multi-radio wireless mesh networks, a node can transmit packets over multiple channels to different
destination nodes simultaneously. This feature gives the network high throughput and increases the
opportunity for multi-path routing, because the availability of multiple channels for transmission
reduces the probability of the well-known interference problem, in both its inter-flow and intra-flow
forms. To avoid interference and to maintain or increase network performance, a WMN needs to consider
packet aggregation and packet forwarding. Packet aggregation is the process of collecting several
packets ready for transmission and sending them to the intended recipient through the channel, while
packet forwarding handles hop-by-hop routing. In both cases, choosing the correct path among the
available multiple paths is the most important factor for a routing algorithm. Hence the most
challenging task is to determine a forwarding strategy that provides a transmission schedule for each
node within the channel. In this work we implement two forwarding strategies for multi-path,
multi-radio WMNs as approximate solutions to this problem: Global State Routing (GSR), which considers
the packet forwarding concept, and Aggregation Aware Layer 2 Routing (AAL2R), which considers both
packet forwarding and packet aggregation. After successful implementation, the network performance is
measured by means of a simulation study.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confronts design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in the case of automobiles. In this paper, the authors have non-exhaustively reviewed research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and Long Short-Term Memory (LSTM) networks. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
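The four reported metrics all derive from a binary confusion matrix; a minimal sketch with hypothetical counts (not the paper's DNP3 data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard detection metrics from a binary confusion matrix:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for an attack/benign classifier:
acc, prec, rec, f1 = classification_metrics(tp=990, fp=5, fn=5, tn=1000)
```

F1 is the harmonic mean of precision and recall, so it only approaches the accuracy figure when the two are balanced, as in this example.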
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, it examines China's
role in Central Asia. The study adheres to the empirical epistemological method and takes care to
remain objective. It critically analyzes primary and secondary research documents to elaborate the role
of China's geo-economic outreach in Central Asian countries and its future prospects. According to this
study, China is seeing significant success in trade, pipeline politics, and gaining influence over
other governments, success attributable to the effective use of key instruments such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
A Review on Scheduling in Cloud Computing
International Journal of UbiComp (IJU), Vol.7, No.3, July 2016
DOI:10.5121/iju.2016.7302
A Review on Scheduling in Cloud Computing
Sujitha.A1, Gunasekar.K2
1M.E. Scholar, Department of Computer Science & Engineering, Nandha Engineering College, Erode-638052, Tamil Nadu, India
2Associate Professor, Department of Computer Science & Engineering, Nandha Engineering College, Erode-638052, Tamil Nadu, India
ABSTRACT
Cloud computing provides software, infrastructure and platform as services to clients on a pay-per-use
basis. The main goal of scheduling is to achieve accuracy and correctness in task completion.
Scheduling in the cloud environment enables the various cloud services to support framework
implementation. This paper presents a wide-ranging survey of the different types of scheduling
algorithms in the cloud computing environment, including workflow scheduling and grid scheduling. The
survey gives an elaborate idea of grid, cloud and workflow scheduling aimed at minimizing energy cost
and improving the efficiency and throughput of the system.
KEYWORDS
Cloud Computing, Scheduling, Virtualization
1. INTRODUCTION
Cloud computing, also referred to as on-demand computing, is a kind of Internet-based computing that
provides shared resources and data to computers and other devices on demand. It relies on sharing of
resources to achieve coherence and economies of scale, similar to a utility. Cloud computing has become
a highly demanded service due to the benefits of high computing power, low monetary cost, high
performance, availability, scalability and accessibility. Several cloud vendors are experiencing growth
rates of 50% per year, but cloud computing is still in a stage of inception and carries risks that have
to be addressed to make cloud computing services more reliable and user friendly. Figure 1 shows the
cloud computing services.
Figure 1 Cloud Computing Services
Virtualized resources with a low utilization rate consume an unsatisfactory amount of energy compared
to the energy consumption of a fully utilized cloud. According to [13], the energy consumption of an
idle resource is about 60% of peak power. In cloud computing there is therefore a close connection
between energy consumption and resource utilization. Task consolidation is an effective technique,
greatly enabled by virtualization technologies, which aids the concurrent execution of several tasks,
raises resource utilization and in turn reduces energy consumption [1].
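Task consolidation as described can be sketched as a first-fit packing of task CPU demands onto as few hosts as possible, so that unused hosts can be powered down; the capacities and demands below are illustrative assumptions, not a technique from [1]:

```python
def consolidate(tasks, host_capacity):
    """First-fit consolidation: pack task CPU demands (in CPU units)
    onto as few hosts as possible so idle hosts can be powered down."""
    free = []        # remaining capacity of each active host
    placement = []   # placement[i] = host index chosen for task i
    for demand in tasks:
        for i, cap in enumerate(free):
            if demand <= cap:       # reuse an already-active host
                free[i] -= demand
                placement.append(i)
                break
        else:                       # no active host fits: power one on
            free.append(host_capacity - demand)
            placement.append(len(free) - 1)
    return placement, len(free)

# Five tasks with CPU demands out of 100 units per host:
placement, active_hosts = consolidate([50, 30, 40, 20, 60], host_capacity=100)
```

Here five tasks fit on two active hosts instead of five, which is exactly the utilization-versus-energy trade the section describes.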
2. OVERVIEW
2.1 Scheduling
The effectiveness of cloud computing depends on scheduling. A task is scheduled based on various
parameters such as arrival time, system load, execution time and deadline. Scheduling makes tasks
finish on time, which guarantees clients improved flexibility and reliability of systems in the cloud.
Since tasks are uncertain, scheduling is deployed to overcome this uncertainty [13].
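One standard way to schedule against the deadline parameter mentioned above is earliest-deadline-first; a minimal single-machine sketch, assuming all tasks arrive at time zero (the task tuples are illustrative):

```python
def earliest_deadline_first(tasks):
    """Order ready tasks by deadline and report which finish on time.
    Each task is (name, execution_time, deadline); single machine,
    all tasks assumed to arrive at time zero."""
    schedule = sorted(tasks, key=lambda t: t[2])  # earliest deadline first
    clock, met = 0, {}
    for name, exec_time, deadline in schedule:
        clock += exec_time
        met[name] = clock <= deadline  # did the task meet its deadline?
    return [t[0] for t in schedule], met

order, met = earliest_deadline_first([("A", 3, 9), ("B", 2, 4), ("C", 1, 3)])
```

Cloud schedulers add further parameters (arrival time, system load, multiple machines) on top of this basic deadline ordering.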
2.2 Virtualization
Virtualization technology is responsible for the creation, migration and cancellation of virtual
machines [8]. When a task needs excess space, fluctuation in resource utilization leads to migration.
Virtualization carries out load balancing, consolidation and hot-spot mitigation [11]. It allocates
data center resources dynamically based on user demands, and the number of servers is reduced to
promote green computing. Figure 2 describes the work of virtualization technology.
Figure 2 The Work of Virtualization Technology
3. LITERATURE SURVEY
A Cloud Gaming System Based on User-Level Virtualization
The authors focus on cloud gaming. Cloud gaming renders an interactive gaming application in the cloud
and streams the scenes as a video sequence to the player over the Internet. The authors propose GCloud,
a GPU/CPU hybrid cluster for cloud gaming based on user-level virtualization technology. They deploy a
performance model to analyze server capacity and games' resource consumption, which sorts games into
two types: CPU-critical and memory-io-critical.
Simulation tests show that both the First-Fit-like and Best-Fit-like strategies outperform the others.
Test results indicate that GCloud is efficient.
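First-Fit-like and Best-Fit-like placement differ only in which feasible server they choose; a minimal sketch of the two policies (the server capacities and request size are illustrative, not GCloud's performance model):

```python
def first_fit(request, servers):
    """Place a session on the first server with enough free capacity."""
    for i, free in enumerate(servers):
        if free >= request:
            return i
    return None  # no server can host the request

def best_fit(request, servers):
    """Place on the feasible server leaving the least leftover capacity."""
    candidates = [(free - request, i)
                  for i, free in enumerate(servers) if free >= request]
    return min(candidates)[1] if candidates else None

servers = [4, 2, 3]   # free capacity units per server
ff = first_fit(2, servers)
bf = best_fit(2, servers)
```

With these capacities, first-fit picks server 0 (first feasible) while best-fit picks server 1 (tightest fit), illustrating how the two strategies pack load differently.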
ANGEL: Agent-Based Scheduling for Real-Time Tasks
The authors devise a novel agent-based scheduling mechanism [7] in the cloud computing environment to
assign real-time tasks and dynamically provision resources. A bidirectional announcement-bidding
mechanism is employed, and the collaborative process consists of three phases: a basic matching phase,
a forward announcement-bidding phase and a backward announcement-bidding phase. For the forward and
backward announcement-bidding phases, the authors design a calculation rule for the bidding values and
two heuristics for selecting contractors. The bidirectional announcement-bidding mechanism is used to
propose an agent-based dynamic scheduling algorithm named ANGEL for real-time, independent and
aperiodic tasks in clouds. Extensive experiments are conducted on the CloudSim platform.
An Energy-Saving Task Scheduling Strategy Based on Vacation Queuing Theory
High energy consumption [15] is one of the major issues of cloud computing systems. Requested jobs in
cloud computing environments are changeable in nature, and compute nodes have to be powered on all the
time to await requested tasks, which results in an incredible waste of energy. An energy-saving task
scheduling algorithm based on the vacation queuing model for cloud computing systems is proposed here.
The number of compute nodes, total idle energy, the number of tasks arriving at the system, heuristic
task scheduling algorithms, meta-heuristic task scheduling algorithms and queuing-theory-based
algorithms are used here. Simulation results show that the proposed algorithm can ensure task
performance while effectively reducing the energy cost of a cloud computing system.
Exploring Blind Online Scheduling for Mobile cloud
Mobile cloud is a technology through which enabled users can enjoy abundant multimedia applications in
a computing environment. An important issue for the mobile cloud is the scheduling of massive
multimedia flows with heterogeneous QoS guarantees. Multimedia servers schedule based on the slot
information of the users' requests from the last time slot, and route all the multimedia flows
according to the first-come first-served rule. Schedule operating time, user waiting time and
performance improvement time are used to measure the performance of the system. A blind online
scheduling algorithm (BOSA) [2] is used here, with Mobile Cloud Multimedia Services (MCMS) as the
environment. Simulation results show that the proposed scheme can efficiently schedule heterogeneous
multimedia flows to satisfy dynamic QoS requirements in a practical mobile cloud.
Temporal Load Balancing with Energy Cost Optimization
Cloud computing services play a vital role in people's daily lives. These services are supported by
infrastructure known as Internet data centers (IDCs) [1]. As demand for cloud computing services soars,
the energy consumed by IDCs surges with it. Workload intensity, queuing delay and energy cost are used
to measure performance. The Eco-IDC algorithm is used here to minimize energy cost, with energy cost
reduction and service delay avoidance as results of this technique.
A Hyper-Heuristic Scheduling Algorithm for Cloud
Rule-based scheduling algorithms have been used in vast cloud computing systems because they are simple
and easy to implement. There is considerable room to improve the performance of these algorithms,
notably by using heuristic scheduling. A novel heuristic scheduling algorithm called the
hyper-heuristic scheduling algorithm (HHSA) [5] is used to find better scheduling solutions for cloud
computing systems. Results show that it reduces the makespan of task scheduling compared with the other
scheduling algorithms.
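Makespan, the quantity HHSA minimizes, is simply the completion time of the most loaded machine under a given assignment; a minimal sketch (the assignments and execution times are illustrative):

```python
def makespan(assignment, exec_times):
    """Makespan: completion time of the most loaded VM.
    assignment[i] is the VM index chosen for task i."""
    loads = {}
    for task, vm in enumerate(assignment):
        loads[vm] = loads.get(vm, 0) + exec_times[task]
    return max(loads.values())

times = [4, 2, 3, 1]                 # per-task execution times
span_a = makespan([0, 0, 1, 1], times)  # VM loads {0: 6, 1: 4}
span_b = makespan([0, 1, 1, 0], times)  # VM loads {0: 5, 1: 5}
```

The second assignment balances the two VMs and so achieves the smaller makespan; heuristic schedulers search the space of such assignments for exactly this kind of improvement.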
FESTAL: Fault-Tolerant Elastic Scheduling Algorithm for Real-Time Tasks in Virtualized
Clouds
Fault tolerance in clouds receives attention in both industry and academia, mainly for real-time
applications due to their safety-critical nature. Research on fault-tolerant scheduling [16] studies
virtualization and elasticity, the two key features of clouds. To address this issue, the authors
present a fault-tolerant mechanism which extends the primary-backup model to incorporate the features
of clouds. Host, time, task count and interval time are used to measure the performance of the system.
An efficient fault-tolerant elastic scheduling algorithm, FESTAL, is compared with
Non-Migration-FESTAL (NMFESTAL), Non-Overlapping-FESTAL (NOFESTAL) and Elastic First Fit (EFF). FESTAL
is able to achieve both fault tolerance and high performance in terms of resource utilization.
Virtual Machine Scheduling for Improving Energy Efficiency in IAAS Cloud
The authors leverage a VM scheduling scheme under resource constraints, such as physical server size
(CPU, memory, storage, bandwidth, etc.) and network link capacity, to minimize both the number of
active PMs and the number of network elements, and so ultimately reduce energy consumption. The number
of VMs, total energy consumption and changing traffic between VMs are used to measure the performance
of the system. VM-Mig algorithms [19] are used here.
Evolutionary Multi-Objective Workflow Scheduling in Cloud
Workflow scheduling algorithms established in classic distributed or heterogeneous computing
environments raise issues when directly applied to cloud environments. To solve the workflow scheduling
problem on an infrastructure-as-a-service (IaaS) platform, the authors use an evolutionary
multi-objective optimization (EMO)-based algorithm [6]. Time, cost and runtime ratio are used to
measure the performance of the system. The result of this paper solves the multi-objective cloud
scheduling problem, minimizing both makespan and cost simultaneously.
Scheduling in Compute Cloud with Multiple Data Banks Using Divisible Load Paradigm
The authors observe that designing a scheduling strategy for heterogeneous computing resources
with shared data banks is challenging [9]. The compute cloud environment is used to reduce the
total processing time. The number of processors, processing time and number of worker roles are
used to measure the performance. Following divisible load theory, the scheduling challenge is
formulated as recursive equations and constraints, derived from the continuity of processing time
when data are retrieved from multiple data banks. The scheduling problem in a compute cloud is
then cast as a linear programming problem. The goals are twofold: to make the best possible use
of the available computing resources, and to solve complex problems by splitting them into
solvable parts.
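The central optimality condition in divisible load theory is that the load is split so that all workers finish at the same instant. Ignoring communication and data-bank retrieval delays (which [9] does model via its recursive equations), the split has a closed form proportional to worker speed. A sketch under those simplifying assumptions, with per-unit processing times as hypothetical inputs:

```python
def split_load(total, unit_times):
    """Split a divisible load of size `total` across workers so that every
    worker finishes simultaneously (communication delays ignored).
    unit_times[i] = seconds a worker needs per unit of load."""
    inv = [1.0 / t for t in unit_times]     # worker speeds
    s = sum(inv)
    shares = [total * v / s for v in inv]   # share proportional to speed
    finish = shares[0] * unit_times[0]      # identical for every worker
    return shares, finish
```

Equal finishing times are optimal here because shifting any load from one worker to another would delay the receiver past the common finish time.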
International Journal of UbiComp (IJU), Vol.7, No.3, July 2016
4. COMPARISON OF DIFFERENT SCHEDULING TECHNIQUES

Title: A Cloud Gaming System Based on User-Level Virtualization
Algorithm: GCloud, a GPU/CPU hybrid cluster
Parameters: Server number; total game requests
Conclusion: Balances gaming responsiveness and costs.

Title: Temporal Load Balancing with Energy Cost Optimization
Algorithm: Eco-IDC
Parameters: Workload intensity; queuing delay; energy cost
Conclusion: Reduces the energy cost of the IDC and alleviates workload drop.

Title: ANGEL: Agent-Based Scheduling for Real-Time Tasks
Algorithm: Dynamic scheduling algorithm ANGEL
Parameters: Task count; task guarantee ratio
Conclusion: Addresses schedulability, priority, scalability and real-time requirements in a
virtualized cloud environment.

Title: An Energy-Saving Task Scheduling Strategy Based on Vacation Queuing Theory
Algorithm: Heuristic task scheduling; meta-heuristic task scheduling; queuing-theory algorithms
Parameters: Number of compute nodes; total idle energy; number of tasks arriving at the system
Conclusion: Ensures task performance while effectively reducing the energy cost of a cloud
computing system.

Title: Scheduling in Compute Cloud with Multiple Data Banks Using Divisible Load Paradigm
Algorithm: Scheduling algorithm based on the divisible load paradigm
Parameters: Number of processors; processing time; number of worker roles
Conclusion: Solves complex problems by breaking them into solvable parts.

Title: Exploring Blind Online Scheduling for Mobile Cloud
Algorithm: Blind online scheduling algorithm (BOSA)
Parameters: Operating time of schedule; user waiting time; performance improvement time
Conclusion: Reduces delay and energy among the servers.

Title: A Hyper-Heuristic Scheduling Algorithm for Cloud
Algorithm: Hyper-heuristic scheduling algorithm (HHSA)
Parameters: Interaction; best-so-far makespan
Conclusion: Reduces the makespan of task scheduling.

Title: Evolutionary Multi-Objective Workflow Scheduling in Cloud
Algorithm: Evolutionary multi-objective optimization (EMO)-based algorithm
Parameters: Time; cost; runtime ratio
Conclusion: Minimizes makespan and cost simultaneously.

Title: FESTAL: Fault-Tolerant Elastic Scheduling Algorithm for Real-Time Tasks in Virtualized
Clouds
Algorithm: FESTAL; Non-Migration-FESTAL; Non-Overlapping-FESTAL
Parameters: Host; time; task count; interval time (tool: CloudSim)
Conclusion: Achieves both fault tolerance and high performance in terms of resource utilization.

Title: Virtual Machine Scheduling for Improving Energy Efficiency in IaaS Cloud
Algorithm: VM-Mig algorithm
Parameters: Number of VMs; total energy consumption; changing traffic between VMs
Conclusion: Reduces the quantity of physical resources to save energy.
5. CONCLUSION
In cloud computing, numerous services are provided on demand, and on-demand service
provisioning is the main feature of IaaS. The purpose of scheduling is to deliver the service and
reach the end user on time. This paper discusses various techniques for scheduling tasks
efficiently; our focus is on how to schedule tasks effectively so that they finish accurately and
correctly. We have also discussed the scheduling process and its algorithms, surveyed various
problems and presented their solutions.
REFERENCES
[1] Jianying Luo, Lei Rao, and Xue Liu, "Temporal Load Balancing with Service Delay Guarantees for
Data Center Energy Cost Optimization", IEEE Transactions on Parallel and Distributed Systems,
Vol. 25, No. 3, March 2014.
[2] Liang Zhou and Zhen Yang, “Exploring blind online scheduling for mobile cloud multimedia
services”, IEEE Wireless Communications, June 2013.
[3] Xiaomin Zhu, Laurence T. Yang, Huangke Chen, Ji Wang, Shu Yin and Xiaocheng Liu,” Real-Time
Tasks Oriented Energy-Aware Scheduling In Virtualized Clouds”, IEEE Transactions On Cloud
Computing, Vol. 2/April-June 2014.
[4] Rui Zhang and Kui Wu, "Online Resource Scheduling Under Concave Pricing for Cloud Computing",
IEEE Transactions on Parallel and Distributed Systems, Vol. 27, No. 4, April 2016.
[5] Chun-Wei Tsai, Wei-Cheng Huang “A Hyper-Heuristic Scheduling Algorithm for Cloud”, IEEE
Transactions On cloud Computing, Vol. 2, No. 2, April-June 2014.
[6] Zhaomeng Zhu, Gongxuan Zhang “Evolutionary Multi-Objective Workflow Scheduling in Cloud”,
IEEE Transactions On Parallel And Distributed Systems, Vol. 27, No. 5, May 2016.
[7] Xiaomin Zhu, "ANGEL: Agent-Based Scheduling for Real-Time Tasks in Virtualized
Clouds", IEEE Transactions on Computers, Vol. 64, No. 12, December 2015.
[8] Chao Zhang “VGASA: Adaptive Scheduling Algorithm of Virtualized GPU Resource in Cloud
Gaming”, IEEE Transactions On Parallel And Distributed Systems, Vol. 25, No. 11, November 2014.
[9] S. Suresh and Hao Huang, "Scheduling in Compute Cloud with Multiple Data Banks Using Divisible
Load Paradigm", IEEE Transactions on Aerospace and Electronic Systems, Vol. 51, No. 2,
April 2015.
[10] Zhaomeng Zhu, Gongxuan Zhang, Miqing Li, and Xiaohui Liu “Evolutionary Multi-Objective
Workflow Scheduling in Cloud”, IEEE Transactions On Parallel And Distributed Systems, Vol. 27,
No. 5, May 2016.
[11] Xue Lin, Yanzhi Wang, Qing Xie,” Task Scheduling with Dynamic Voltage and Frequency Scaling for
Energy Minimization in the Mobile Cloud Computing Environment”, IEEE Transactions On Services
Computing, Vol. 8, No. 2, March/April 2015.
[12] Xingquan Zuo, Guoxiang Zhang, and Wei Tan, "Self-Adaptive Learning PSO-Based Deadline
Constrained Task Scheduling for Hybrid IaaS Cloud", IEEE Transactions on Automation Science
and Engineering, Vol. 11, No. 2, April 2014.
[13] Chunsheng Zhu, Victor C. M. Leung, Laurence T. Yang, and Lei Shu “Collaborative Location-Based
Sleep Scheduling for Wireless Sensor Networks Integrated With Mobile Cloud Computing”, IEEE
Transactions On Computers, Vol. 64, No. 7, July 2015.
[14] Maria Alejandra Rodriguez and Rajkumar Buyya “Deadline Based Resource Provisioning and
Scheduling Algorithm for Scientific Workflows on Clouds”, IEEE Transactions On Cloud Computing,
Vol. 2, No. 2, April-June 2014.
[15] Xiang Deng, Di Wu, Junfeng Shen, and Jian He, “Eco-Aware Online Power Management and Load
Scheduling for Green Cloud Datacenters”, IEEE Systems Journal, Vol. 10, No. 1, March 2016.
[16] Ji Wang, Weidong Bao, Xiaomin Zhu, Laurence T. Yang, and Yang Xiang, “FESTAL: Fault-Tolerant
Elastic Scheduling Algorithm for Real-Time Tasks in Virtualized Clouds”, IEEE Systems Journal,
Vol. 10, No. 1, March 2016.
[17] Chun-Wei Tsai and Joel J. P. C. Rodrigues, “Metaheuristic Scheduling for Cloud: A Survey”, IEEE
Systems Journal, Vol. 8, No. 1, March 2014.
[18] Marco Polverini, Antonio Cianfrani, Shaolei Ren and Athanasios V. Vasilakos “Thermal-Aware
Scheduling of Batch Jobs in Geographically Distributed Data Centers” ,IEEE Transactions On Cloud
Computing, Vol. 2, No. 1, January-March 2014.
[19] Dong Jiankang, Wang Hongbo, Li Yangyang, and Cheng Shiduan, "Virtual Machine Scheduling for
Improving Energy Efficiency in IaaS Cloud", China Communications, March 2014.
[20] Carlo Mastroianni, Michela Meo, and Giuseppe Papuzzo, "Probabilistic Consolidation of Virtual
Machines in Self-Organizing Cloud Data Centers", IEEE Transactions on Cloud Computing, Vol. 1,
No. 2, July-December 2013.