This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. It presents an analytical model of a cloud data center as an [(M/G/1) : (∞/GD)] queuing system with single task arrivals and infinite task buffer capacity. The model is solved to obtain important performance metrics such as the mean number of tasks in the system. Prior work on modeling cloud systems and queuing theory concepts is also reviewed. Key assumptions of the proposed model include tasks following a Poisson arrival process and service times having a general probability distribution.
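The headline metric of an M/G/1 queue, the mean number of tasks in the system, has a well-known closed form, the Pollaczek-Khinchine formula. The following sketch evaluates it; the arrival rate and service moments are example values, not figures from the paper.

```python
def mg1_mean_tasks(lam, es, es2):
    """Mean number of tasks L in an M/G/1 queue (Pollaczek-Khinchine).

    lam : Poisson arrival rate (tasks/sec)
    es  : mean service time E[S]
    es2 : second moment of service time E[S^2]
    """
    rho = lam * es                                 # server utilization
    if rho >= 1.0:
        raise ValueError("unstable: utilization must be < 1")
    lq = lam * lam * es2 / (2.0 * (1.0 - rho))     # mean queue length Lq
    return rho + lq                                # L = rho + Lq

# For exponential service, E[S^2] = 2*E[S]^2, and the formula reduces
# to the M/M/1 result L = rho / (1 - rho).
L = mg1_mean_tasks(lam=8.0, es=0.1, es2=0.02)      # rho = 0.8 -> L = 4.0
```

With a general service distribution only the first two moments enter the formula, which is exactly why the [(M/G/1) : (∞/GD)] model is tractable.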
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing prominence of the cloud computing paradigm and the budding concept of federated cloud computing have motivated research efforts towards intelligent cloud service selection, aimed at developing techniques that enable cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems essentially provide access to large pools of resources, and they hide a great deal of the underlying services from the user through virtualization. In this paper, the cloud data center is modelled as a queuing system with single task arrivals and a task request buffer of infinite capacity.
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co...IRJET Journal
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
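The PERT-on-a-DAG idea in this summary can be sketched in a few lines: each task gets a PERT expected duration from three time estimates, and the critical path is the longest path through the task graph. The task graph and time estimates below are invented for illustration.

```python
from collections import defaultdict

def pert_time(o, m, p):
    """PERT expected duration from optimistic, most likely, pessimistic."""
    return (o + 4.0 * m + p) / 6.0

def critical_path_length(tasks, edges):
    """Length of the longest path through a task DAG.

    tasks : dict name -> expected duration
    edges : list of (predecessor, successor) pairs
    """
    succ = defaultdict(list)
    indeg = {t: 0 for t in tasks}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    start = {t: 0.0 for t in tasks}        # earliest start times
    ready = [t for t in tasks if indeg[t] == 0]
    longest = 0.0
    while ready:                           # Kahn's topological order
        u = ready.pop()
        fin = start[u] + tasks[u]
        longest = max(longest, fin)
        for v in succ[u]:
            start[v] = max(start[v], fin)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return longest

# Three hypothetical tasks: A precedes both B and C.
durations = {"A": pert_time(1, 2, 3), "B": pert_time(2, 3, 4), "C": pert_time(3, 4, 5)}
cp = critical_path_length(durations, [("A", "B"), ("A", "C")])   # A -> C is critical
```

Tasks on the critical path are the ones a scheduler must prioritize; the others have slack and can yield resources to higher-priority work without delaying completion.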
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands sophisticated algorithms to solve the task-allotment problem. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
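One simple way to combine the two algorithms the abstract names is to order jobs by burst time (SJF) and then serve them with a fixed time quantum (RR), so short jobs finish early while long jobs still make progress. This is a hedged sketch of that idea, not the paper's exact algorithm; burst times and the quantum are example values.

```python
from collections import deque

def rr_sjf(bursts, quantum):
    """Serve jobs in SJF order under a Round Robin quantum.

    bursts : burst time of each job, indexed as submitted
    Returns a dict job_index -> completion time.
    """
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # SJF order
    remaining = list(bursts)
    queue = deque(order)
    clock = 0.0
    done = {}
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # RR time slice
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # unfinished: requeue (no starvation)
        else:
            done[i] = clock                # record completion time
    return done

done = rr_sjf([6, 3, 8], quantum=4)        # job 1 (burst 3) finishes first
```

The shortest job completes in one slice, lowering average waiting time, while requeueing keeps every job progressing, the starvation-avoidance property of Round Robin.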
Congestion Control through Load Balancing Technique for Mobile Networks: A Cl...IDES Editor
An Optimal Routing Path (ORP) scheme for mobile cellular networks is proposed in this paper, introducing a cluster-based approach. An improved dynamic selection procedure is used to elect the cluster head, which alone is responsible for computing the least congested path. Delay is thereby reduced, along with a significant reduction in the number of backtrackings.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
THRESHOLD BASED VM PLACEMENT TECHNIQUE FOR LOAD BALANCED RESOURCE PROVISIONIN...IJCNCJournal
Load imbalance is a multi-variable, multi-constraint problem that degrades the performance and efficiency of computing resources. Load-balancing techniques address its two undesirable extremes: overloading and underloading. Cloud computing relies on scheduling and load balancing for resource sharing in a virtualized infrastructure, and both must be handled well to achieve optimal resource sharing. Efficient resource reservation is therefore required to keep load optimized in the cloud. This work presents an integrated resource-reservation and load-balancing algorithm for effective cloud provisioning. The strategy develops a Priority-based Resource Scheduling Model that combines resource reservation with threshold-based load balancing to improve the efficiency of the cloud framework. Virtual machine utilization is then improved through suitable workload adjustment, dynamically selecting a job from the submitted jobs using the Priority-based Resource Scheduling Model. Experimental evaluations show that the proposed scheme reduces execution time with minimum resource cost and improved resource utilization under dynamic resource-provisioning conditions.
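The threshold-based placement idea can be sketched minimally: a host accepts a VM only if the resulting utilization stays under an upper threshold, otherwise the scheduler looks elsewhere. The threshold values and host list below are assumptions for the example, not the paper's parameters.

```python
UPPER = 0.80   # assumed upper utilization threshold (overload boundary)

def place_vm(hosts, vm_load):
    """hosts: dict name -> current utilization in [0, 1].
    Prefer the least-loaded host whose utilization stays under UPPER
    after placement; return None if every host would overload."""
    candidates = [(u, h) for h, u in hosts.items() if u + vm_load <= UPPER]
    if not candidates:
        return None                       # all overloaded: defer or migrate
    return min(candidates)[1]             # least-loaded feasible host

hosts = {"h1": 0.70, "h2": 0.30, "h3": 0.55}
target = place_vm(hosts, vm_load=0.20)    # h1 would exceed 0.80; h2 wins
```

Picking the least-loaded feasible host is one of several reasonable tie-breaking policies; a priority-based scheduler like the one described would additionally weight jobs by priority before placement.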
Multiple Downlink Fair Packet Scheduling Scheme in Wi-MaxEditor IJCATR
IEEE 802.16 is a standard for broadband wireless access in metropolitan area networks (MANs). The IEEE 802.16 (WiMAX) standard defines concrete quality of service (QoS) requirements, and an efficient packet scheduling scheme is necessary to meet them. In this paper, a novel waiting-queue-based downlink bandwidth allocation architecture over a number of rtPS schedules is proposed to improve the performance of nrtPS services without any impact on other services. The paper proposes an efficient QoS scheduling scheme that satisfies both throughput and delay guarantees for various real-time and non-real-time applications, corresponding to different scheduling schemes for k = 1, 2, 3, 4. Simulation results show that the proposed scheme provides a tight QoS guarantee in terms of delay for all traffic types defined in the WiMAX standards. This maintains fairness of allocation and helps eliminate starvation of lower-priority service classes. The authors propose new, efficient, generalized scheduling schemes for IEEE 802.16 broadband wireless access systems that reflect the delay requirements.
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in Science and Technology and to motivate researchers in challenging areas of Science and Technology.
RESOURCE ALLOCATION METHOD FOR CLOUD COMPUTING ENVIRONMENTS WITH DIFFERENT SE...IJCNCJournal
In a cloud computing environment with multiple data centers over a wide area, it is highly likely that each data center provides different service quality to users at different locations. It is also necessary to consider the nodes at the edge of the network (the local cloud), which support applications such as IoT that require low latency and location awareness. The authors previously proposed a joint multiple resource allocation method for a cloud computing environment that consists of multiple data centers, each with a different network delay. However, the existing method does not account for cases where requests that require a short network delay occur more often than expected. Moreover, it does not account for service processing time in data centers, and therefore cannot provide optimal resource allocation when the total processing time (both network delay and service processing time in a data center) must be taken into consideration.
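The selection criterion this abstract argues for, total processing time rather than network delay alone, can be illustrated in a few lines. The data center names and timings are invented for the example.

```python
def pick_data_center(centers):
    """Choose the data center minimizing total processing time,
    i.e. network delay plus in-center service time.

    centers: list of (name, network_delay_ms, service_time_ms)
    """
    return min(centers, key=lambda c: c[1] + c[2])[0]

centers = [
    ("edge-local", 5.0, 40.0),   # low latency, but slower local servers
    ("regional",  20.0, 15.0),   # higher delay, faster processing
]
best = pick_data_center(centers)  # regional: total 35 ms beats 45 ms
```

A delay-only policy would pick the edge node here; accounting for service time flips the decision, which is exactly the gap in the existing method that the paper targets.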
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade off cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore the state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and attains sufficient area and power gains for complex benchmarks when timing constraints are relaxed by a sufficient amount. Experimental results show that the algorithm, operating without any user constraint (area/power), is able to solve the problem for most available benchmarks, and that the use of a power or area budget constraint leads to significant performance gain.
GENERATIVE SCHEDULING OF EFFECTIVE MULTITASKING WORKLOADS FOR BIG-DATA ANALYT...IAEME Publication
This document proposes an evolutionary ordinal optimization (eOO) approach for scheduling dynamic and multitasking workloads for big data analytics in cloud computing environments. The eOO approach iteratively applies ordinal optimization to obtain suboptimal schedules faster than exhaustive searching, while adapting to workload fluctuations over time. Experimental results show the eOO approach achieves up to 30% higher task throughput compared to existing Monte Carlo and blind pick scheduling methods.
A BAYE'S THEOREM BASED NODE SELECTION FOR LOAD BALANCING IN CLOUD ENVIRONMENThiij
Cloud computing is a popular computing model as it serves a large number of user requests on the fly, which has led to a proliferation of cloud users. This in turn has led to overloaded nodes in the cloud environment, along with the problem of load imbalance among the cloud servers, and thereby impacts performance. Hence, in this paper a heuristic Bayes' theorem approach is combined with clustering to identify the optimal node for load balancing. Experiments using the proposed approach are carried out on the CloudSim simulator and compared with an existing approach. Results demonstrate that task deployment using this approach improves performance in terms of utilization and throughput compared to the existing approaches.
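In the spirit of this abstract, applying Bayes' theorem to node selection means updating a prior belief about which node is optimal with the likelihood of the observed load. This is an illustrative sketch only; the priors and likelihoods are invented numbers, not the paper's model.

```python
def posterior(prior, likelihood):
    """P(node_i is optimal | observed load) via Bayes' theorem,
    normalized across candidate nodes.

    prior[i]      : P(node_i optimal) before observing load
    likelihood[i] : P(observed load | node_i optimal)
    """
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

# Three candidate nodes with a uniform prior; the lightly loaded node
# gets the highest likelihood of handling the task well.
post = posterior(prior=[1 / 3, 1 / 3, 1 / 3], likelihood=[0.9, 0.5, 0.2])
best = max(range(len(post)), key=post.__getitem__)   # node 0 is selected
```

Clustering, as the paper combines it, would shrink the candidate set before this posterior computation so that only one cluster's nodes are scored.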
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
A Computational Grid (CG) creates a large heterogeneous, distributed paradigm for managing and executing computationally intensive applications. In grid scheduling, tasks are assigned to appropriate processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA-II) has been estimated through a Matlab implementation. The simulation results evaluate the performance of the proposed algorithm, and its results are compared with the existing Min-Min and Max-Min algorithms, demonstrating the effectiveness of the model.
The document discusses principles of parallel algorithm design, including decomposing problems into tasks, mapping tasks to processes, and characteristics that affect parallel performance such as granularity, degree of concurrency, critical path length, and task interaction graphs.
A NOVEL SLOTTED ALLOCATION MECHANISM TO PROVIDE QOS FOR EDCF PROTOCOLIAEME Publication
The IEEE 802.11e EDCF mechanism cannot guarantee the QoS of high-priority traffic as the bandwidth consumption of low-priority traffic increases; moreover, the presence of high-priority traffic dampens the link utilization of low-priority traffic. To overcome these problems, we propose a novel mechanism that extends IEEE 802.11e EDCF by introducing a Super Slot and Virtual Collision. Compared to EDCF, our proposed approach has two advantages: (a) higher-priority traffic achieves quality of service regardless of the amount of low-priority traffic, and (b) low-priority traffic obtains a higher throughput in the presence of the same amount of high-priority traffic.
Load balancing in public cloud combining the concepts of data mining and netw...eSAT Publishing House
1) The document discusses load balancing techniques in public clouds by combining concepts from data mining, networking, and cloud computing.
2) It proposes using a VDBSCAN clustering algorithm to partition the public cloud into sub-areas called cloud partitions for simpler load balancing.
3) A job assignment strategy is presented that uses round robin or game theory techniques to allocate jobs to partitions and nodes based on their load status.
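The round-robin half of the job-assignment strategy in point 3 can be sketched as follows: jobs rotate among cloud partitions, skipping any partition whose load status is "overloaded". The status labels and partition names are assumptions for the example.

```python
from itertools import cycle

def make_assigner(partitions, status):
    """Round-robin job assignment that skips overloaded partitions.

    partitions : partition names, in rotation order
    status     : dict partition -> "idle" | "normal" | "overloaded"
    """
    rr = cycle(partitions)
    def assign(job):
        for _ in range(len(partitions)):     # at most one full rotation
            p = next(rr)
            if status[p] != "overloaded":
                return p
        return None                          # everything overloaded: defer
    return assign

status = {"P1": "idle", "P2": "overloaded", "P3": "normal"}
assign = make_assigner(["P1", "P2", "P3"], status)
targets = [assign(j) for j in range(3)]      # P2 is skipped each round
```

In the paper's full design, a game-theoretic strategy could replace the skip rule when several partitions are eligible; the rotation shown here is the simpler of the two techniques named.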
A Review - Synchronization Approaches to Digital systemsIJERA Editor
Synchronization is a prime requirement in digital systems. As new devices emerge to provide higher service levels, advanced distributed systems are being integrated onto a single platform for higher service provision. With the integration of large processing units, however, distributed processing needs high-level synchronization with minimum processing overhead. The synchronization issue has been addressed by various approaches. This paper outlines a brief review of developments in synchronization approaches for digital systems operating in distributed mode.
Energy efficiency in virtual machines allocation for cloud data centers with ...IJECEIAES
Energy usage of data centers is a challenging and complex issue because computing applications and data are growing so quickly that increasingly larger servers and disks are needed to process them fast enough within the required time period. In the past few years, many approaches to virtual machine placement have been proposed. This study proposes a new approach for allocating virtual machines to physical hosts that both minimizes the number of active physical hosts and avoids SLA violations. The proposed method achieves better results than the other algorithms in comparison.
Wireless data broadcast is an efficient way of disseminating data to users in mobile computing environments. From the server's point of view, how to place the data items on channels is a crucial issue, with the objective of minimizing the average access time and tuning time. Similarly, how to schedule the data retrieval process for a given request at the client side, so that all the requested items can be downloaded in a short time, is also an important problem. In this paper, we investigate multi-item data retrieval scheduling in push-based multichannel broadcast environments. The most important issues in mobile computing are energy efficiency and query response efficiency; however, in data broadcast the objectives of reducing access latency and energy cost can be contradictory. Consequently, we define two new problems, the Minimum Cost Data Retrieval (MCDR) problem and the Large Number Data Retrieval (LNDR) problem, and develop a heuristic algorithm to download a large number of items efficiently. When there is no replicated item in a broadcast cycle, we show that an optimal retrieval schedule can be obtained in polynomial time.
A DENIAL OF SERVICE STRATEGY TO ORCHESTRATE STEALTHY ATTACK PATTERNS IN CLOUD...IAEME Publication
The success of the cloud computing paradigm is owing to its on-demand, self-service, pay-by-use nature. The effects of Denial of Service (DoS) attacks involve not only the quality of the delivered service but also the service maintenance costs in terms of resource utilization; specifically, the longer the detection delay, the higher the costs incurred. Consequently, particular attention must be paid to stealthy DoS attacks. These sophisticated attacks aim at minimizing their visibility and are adapted to degrade the worst-case performance of the target system through specific periodic, pulsing, and low-rate traffic patterns. A strategy is proposed to orchestrate stealthy attack patterns that exhibit a slowly increasing intensity designed to impose the maximum financial cost on the cloud customer, while respecting the job size and service arrival rate expected by the detection mechanisms. Both how to apply the proposed strategy and its effects on a target system deployed in the cloud are described.
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers, and various other network devices. Thus there is a need for techniques to achieve green networking in computer architecture, which can lead to energy saving. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits: saving energy leads to cost-cutting and lower emission of greenhouse gases, which are among the major threats to the environment. 'Greening', as the name suggests, is the process of constructing network architecture so as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review paper: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
This document discusses green computing and proposes a simulator for evaluating green scheduling algorithms. It begins with background on green computing and why it is important. It then outlines the key components of the simulator, including: a computation model using DAGs, an energy consumption model based on CPU throttling levels, and an abstraction for energy-aware schedulers. The document describes classes for modeling cores, throttling levels, and the overall simulation framework, which is designed to be extensible to different scheduling algorithms, core types, and energy models. The goal is to simulate and evaluate scheduling heuristics to minimize energy consumption while meeting performance targets.
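The simulator's energy abstraction, as summarized above, pairs each CPU throttling level with a speed/power trade-off. This is a hedged sketch of that model; the level table and task length are invented values, not the simulator's actual classes.

```python
# (speed multiplier, power in watts) per throttling level -- assumed values
LEVELS = {0: (1.00, 95.0), 1: (0.75, 60.0), 2: (0.50, 35.0)}

def run_task(work_units, level):
    """Return (elapsed_time, energy) for a task of `work_units`
    executed on one core at the given throttling level."""
    speed, power = LEVELS[level]
    t = work_units / speed         # lower level -> slower -> longer runtime
    return t, t * power            # energy = power * time

fast = run_task(100.0, 0)          # full speed: shortest runtime
slow = run_task(100.0, 2)          # half speed: slower, but less energy
```

The interesting scheduling question is visible even in this toy: the throttled run takes twice as long yet consumes less energy, so an energy-aware scheduler throttles whichever tasks have slack against the performance target.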
A REVIEW OF MEMORY UTILISATION AND MANAGEMENT RELATED ISSUES IN WIRELESS SENS...IAEME Publication
In WSN applications, conditions do arise where it is mandatory to update the existing firmware residing in each node, with the aim of remotely effecting essential software improvements. Classically, programming the nodes involves a wired communication interface; however, using such links remotely is not feasible. In addition, it is economically impracticable to reprogram each node in the field, especially when the nodes cannot be reached and are present in large numbers. Hence, the only viable option is a reliable over-the-air update process.
The document discusses principles of parallel algorithm design. It introduces parallel algorithms, decomposition techniques, and characteristics of tasks and interactions. Recursive, data, exploratory, and hybrid decomposition techniques are covered. Mapping tasks to processes aims to minimize execution time by balancing load, minimizing interaction between processes, and assigning independent tasks to different processes. Granularity, degree of concurrency, and critical path length are used to analyze decompositions and their performance.
Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation; furthermore, active tomography consumes explicit probing, resulting in limited scalability. To address the first issue we propose a novel Delay Correlation Estimation methodology, named DCE, with no need for synchronization or special cooperation. For the second issue we develop a passive realization mechanism that merely uses regular data flow, without explicit bandwidth consumption. Extensive simulations in OMNeT++ are performed to evaluate its accuracy, and we show that the DCE measurement closely matches the true value. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic and packet sizes.
This document provides the solutions to selected problems from the textbook "Introduction to Parallel Computing". The solutions are supplemented with figures where needed. Figure and equation numbers are represented in roman numerals to differentiate them from the textbook. The document contains solutions to problems from 13 chapters of the textbook covering topics in parallel computing models, algorithms, and applications.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy to handle jobs and process them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives a closed-form solution for the load fractions to be assigned to each machine, considering computation and energy costs. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit from the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
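The core DLT idea this abstract builds on is that a divisible job should be split so every machine finishes at the same instant. The sketch below shows the simplest such closed form, with communication and energy costs omitted for brevity; the machine speeds are example values.

```python
def dlt_fractions(w):
    """Closed-form load fractions for a divisible job.

    w[i] = time for machine i to process one unit of load.
    Equal-finish-time condition: alpha_i * w_i is the same for all i,
    hence alpha_i is proportional to 1/w_i (and the alphas sum to 1).
    """
    inv = [1.0 / wi for wi in w]
    s = sum(inv)
    return [x / s for x in inv]

w = [1.0, 2.0, 4.0]                      # fastest, medium, slowest machine
alphas = dlt_fractions(w)                # fastest machine gets the most load
finish = [a * wi for a, wi in zip(alphas, w)]   # identical finish times
```

The paper's model extends this: each machine's cost term also includes energy, so the fractions are rebalanced to minimize computation plus energy cost rather than makespan alone.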
A Review on Scheduling in Cloud Computingijujournal
Cloud computing provides software, infrastructure, and platform as a service on a pay-per-use basis, driven by client requirements. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey therefore covers a wide range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling. It gives an elaborate view of grid, cloud, and workflow scheduling aimed at minimizing energy cost and improving the efficiency and throughput of the system.
A Review on Scheduling in Cloud Computingijujournal
This document reviews scheduling techniques in cloud computing. It discusses key concepts like virtualization and different scheduling algorithms. The review surveys various scheduling algorithms for tasks, workflows, real-time applications and energy optimization. It analyzes algorithms for load balancing, fault tolerance and resource utilization to improve performance metrics like makespan, cost and energy consumption. The document concludes that effective scheduling is important in cloud computing to provide on-demand services and complete tasks accurately and on time.
RESOURCE ALLOCATION METHOD FOR CLOUD COMPUTING ENVIRONMENTS WITH DIFFERENT SE...IJCNCJournal
In a cloud computing environment with multiple data centers over a wide area, it is highly likely that each data center would provide the different service quality to users at different locations. It is also required to consider the nodes at the edge of the network (local cloud) which support applications such as IoTs that require low latency and location awareness. The authors proposed the joint multiple resource allocation method in a cloud computing environment that consists of multiple data centers and each data center provides the different network delay. However, the existing method does not take account of cases where requests that require a short network delay occur more than expected. Moreover, the existing method does not take account of service processing time in data centers and therefore cannot provide the optimal resource allocation when it is necessary to take the total processing time (both network delay and service processing time in a data center) into consideration in resource allocation.
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budgets of fine-grained power-managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore the state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and attains substantial area and power gains on complex benchmarks when timing constraints are sufficiently relaxed. Experimental results show that the algorithm solves the problem for most available benchmarks without any user constraint (area/power), and that the use of a power or area budget constraint leads to significant performance gains.
GENERATIVE SCHEDULING OF EFFECTIVE MULTITASKING WORKLOADS FOR BIG-DATA ANALYT...IAEME Publication
This document proposes an evolutionary ordinal optimization (eOO) approach for scheduling dynamic and multitasking workloads for big data analytics in cloud computing environments. The eOO approach iteratively applies ordinal optimization to obtain suboptimal schedules faster than exhaustive searching, while adapting to workload fluctuations over time. Experimental results show the eOO approach achieves up to 30% higher task throughput compared to existing Monte Carlo and blind pick scheduling methods.
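The core of ordinal optimization, as described above, is to rank many candidate schedules with a cheap rough model and spend exact evaluation only on a small elite subset. A minimal sketch under assumed cost models (the function names and the toy cost functions are illustrative, not the paper's actual eOO, which adds an evolutionary outer loop and workload adaptation):

```python
def ordinal_optimization(candidates, rough_eval, exact_eval, s=3):
    """Rank all candidates with a cheap rough model, then apply the
    expensive exact model only to the top-s shortlist (OO sketch)."""
    shortlist = sorted(candidates, key=rough_eval)[:s]
    return min(shortlist, key=exact_eval)

# Hypothetical cost models: the rough model is an imperfect proxy for
# the exact one, so the true best schedule still lands in the shortlist.
schedules = list(range(10))
best = ordinal_optimization(schedules,
                            rough_eval=lambda x: (x - 4) ** 2,
                            exact_eval=lambda x: (x - 5) ** 2)
```

The payoff is that `exact_eval` runs only `s` times instead of once per candidate, which is where the reported speedup over exhaustive search comes from.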
A BAYE'S THEOREM BASED NODE SELECTION FOR LOAD BALANCING IN CLOUD ENVIRONMENThiij
Cloud computing is a popular computing model as it serves a large number of user requests on the fly, which has led to a proliferation of cloud users. This in turn has led to overloaded nodes in the cloud environment and to load imbalance among the cloud servers, degrading performance. Hence, this paper considers a heuristic Bayes' theorem approach, combined with clustering, to identify the optimal node for load balancing. Experiments using the proposed approach are carried out on the CloudSim simulator and compared with an existing approach. Results demonstrate that task deployment performed using this approach improves utilization and throughput compared to existing approaches.
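The Bayes'-theorem step in the node-selection idea above can be sketched as computing, for each node, the posterior probability that it is underloaded given an observed load signal, and deploying to the node with the highest posterior. All priors, likelihoods, and field names below are illustrative assumptions, not the paper's actual model:

```python
def posterior_underloaded(prior, p_obs_given_under, p_obs_given_over):
    """P(underloaded | observation) via Bayes' theorem:
    posterior = likelihood * prior / evidence."""
    evidence = p_obs_given_under * prior + p_obs_given_over * (1 - prior)
    return (p_obs_given_under * prior) / evidence

def select_node(nodes):
    """Pick the node most likely to be underloaded given its observation."""
    return max(nodes, key=lambda n: posterior_underloaded(
        n["prior"], n["p_obs_under"], n["p_obs_over"]))

# Hypothetical per-node statistics (e.g. derived from cluster history).
nodes = [
    {"id": "n1", "prior": 0.5, "p_obs_under": 0.2, "p_obs_over": 0.7},
    {"id": "n2", "prior": 0.6, "p_obs_under": 0.8, "p_obs_over": 0.3},
]
best = select_node(nodes)
```

In the paper's setting, clustering would first group nodes by load status so that the Bayes step only ranks candidates within the promising cluster.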
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
A Computational Grid (CG) creates a large, heterogeneous, distributed paradigm for managing and executing computationally intensive applications. In grid scheduling, tasks are assigned to suitable processors in the grid system for execution, taking into account the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and jointly optimized. Since grid scheduling is NP-hard, meta-heuristic evolutionary techniques are often used to find a solution; we propose an NSGA-II for this purpose. The performance of the proposed Fault-Tolerance-Aware NSGA-II (FTNSGA II) has been estimated in Matlab. The simulation results evaluate the performance of the proposed algorithm, and comparison with the existing Min-Min and Max-Min algorithms demonstrates the effectiveness of the model.
The document discusses principles of parallel algorithm design, including decomposing problems into tasks, mapping tasks to processes, and characteristics that affect parallel performance such as granularity, degree of concurrency, critical path length, and task interaction graphs.
A NOVEL SLOTTED ALLOCATION MECHANISM TO PROVIDE QOS FOR EDCF PROTOCOLIAEME Publication
The IEEE 802.11e EDCF mechanism cannot guarantee the QoS of high-priority traffic as the bandwidth consumption of low-priority traffic increases; moreover, the presence of high-priority traffic dampens the link utilization of low-priority traffic. To overcome these problems, we propose a novel mechanism that extends IEEE 802.11e EDCF by introducing a Super Slot and Virtual Collision. Compared to EDCF, our proposed approach has two advantages: (a) higher-priority traffic achieves its quality of service regardless of the amount of low-priority traffic, and (b) low-priority traffic obtains higher throughput in the presence of the same amount of high-priority traffic.
Load balancing in public cloud combining the concepts of data mining and netw...eSAT Publishing House
1) The document discusses load balancing techniques in public clouds by combining concepts from data mining, networking, and cloud computing.
2) It proposes using a VDBSCAN clustering algorithm to partition the public cloud into sub-areas called cloud partitions for simpler load balancing.
3) A job assignment strategy is presented that uses round robin or game theory techniques to allocate jobs to partitions and nodes based on their load status.
A Review - Synchronization Approaches to Digital systemsIJERA Editor
Synchronization is a prime requirement in digital systems. As new devices emerge to provide higher service levels, advanced distributed systems are being integrated onto a single platform for higher service provision. With the integration of large processing units, however, distributed processing needs a high degree of synchronization with minimum processing overhead. The synchronization issue has been tackled by various approaches. This paper outlines a brief review of developments in synchronization approaches for digital systems operating in distributed mode.
Energy efficiency in virtual machines allocation for cloud data centers with ...IJECEIAES
Energy usage of data centers is a challenging and complex issue because computing applications and data are growing so quickly that increasingly larger servers and disks are needed to process them fast enough within the required time period. In the past few years, many approaches to virtual machine placement have been proposed. This study proposes a new approach for allocating virtual machines to physical hosts that both minimizes the number of active physical hosts and avoids SLA violations. In comparison with other algorithms, the proposed method achieves better results.
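Minimizing the number of active hosts, as this abstract describes, is essentially a bin-packing problem. A classic first-fit-decreasing heuristic makes the idea concrete; this is a simplified stand-in under assumed capacities, not the authors' actual allocation method:

```python
def first_fit_decreasing(vm_demands_mb, host_capacity_mb):
    """Place VMs on as few hosts as possible: sort demands descending,
    put each VM on the first host with room, else power on a new host.
    Returns the number of physical hosts switched on."""
    free = []  # remaining capacity of each active host
    for demand in sorted(vm_demands_mb, reverse=True):
        for i, cap in enumerate(free):
            if cap >= demand:
                free[i] = cap - demand
                break
        else:
            free.append(host_capacity_mb - demand)  # power on a new host
    return len(free)

# Hypothetical memory demands (MB) against 1024 MB hosts.
hosts_used = first_fit_decreasing([512, 700, 300, 400, 100], 1024)
```

An SLA-aware variant would additionally reject placements that push a host's utilization above a safety threshold, trading a few extra hosts for fewer violations.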
Wireless data broadcast is an efficient way of disseminating data to users in mobile computing environments. From the server's point of view, how to place data items on channels is a crucial issue, with the objective of minimizing the average access time and tuning time. Similarly, how to schedule the data retrieval process for a given request at the client side, so that all the requested items can be downloaded in a short time, is also an important problem. In this paper, we investigate multi-item data retrieval scheduling in push-based multichannel broadcast environments. The most important issues in mobile computing are energy efficiency and query response efficiency; in data broadcast, however, the objectives of reducing access latency and energy cost can be contradictory. Consequently, we define two new problems, the Minimum Cost Data Retrieval (MCDR) problem and the Large Number Data Retrieval (LNDR) problem, and develop a heuristic algorithm to download a large number of items efficiently. When there is no replicated item in a broadcast cycle, we show that an optimal retrieval schedule can be obtained in polynomial time.
A DENIAL OF SERVICE STRATEGY TO ORCHESTRATE STEALTHY ATTACK PATTERNS IN CLOUD...IAEME Publication
The success of the cloud computing paradigm is due to its on-demand, self-service, pay-by-use nature. The effects of Denial of Service (DoS) attacks involve not only the quality of the delivered service but also the service maintenance costs in terms of resource utilization. Specifically, the longer the detection delay, the higher the costs incurred. Consequently, particular attention has to be paid to stealthy DoS attacks. These sophisticated attacks aim to minimize their visibility and are tailored to degrade the worst-case performance of the target system through specific periodic, pulsing, low-rate traffic patterns. A strategy is proposed to orchestrate stealthy attack patterns that exhibit a slowly increasing intensity, designed to inflict the maximum financial cost on the cloud customer, while respecting the job size and the service arrival rate expected by the detection mechanisms. Both how to apply the proposed strategy and its effects on a target system deployed in the cloud are described.
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers, and various other network devices. There is thus a need for techniques to achieve green networking in computer architecture that can lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits: saving energy leads to cost cutting and lower emission of greenhouse gases, which are among the major threats to the environment. 'Greening', as the name suggests, is the process of constructing a network architecture so as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review paper: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
This document discusses green computing and proposes a simulator for evaluating green scheduling algorithms. It begins with background on green computing and why it is important. It then outlines the key components of the simulator, including: a computation model using DAGs, an energy consumption model based on CPU throttling levels, and an abstraction for energy-aware schedulers. The document describes classes for modeling cores, throttling levels, and the overall simulation framework, which is designed to be extensible to different scheduling algorithms, core types, and energy models. The goal is to simulate and evaluate scheduling heuristics to minimize energy consumption while meeting performance targets.
A REVIEW OF MEMORY UTILISATION AND MANAGEMENT RELATED ISSUES IN WIRELESS SENS...IAEME Publication
In WSN applications, conditions arise where it is mandatory to update the existing firmware residing in each node in order to remotely apply essential software improvements. Classically, programming the nodes involves a wired communication interface, but using such links remotely is not feasible. In addition, it is economically impracticable to reprogram each node in the field, especially when nodes cannot be reached and are deployed in large numbers. Hence, the only viable option is to employ a reliable over-the-air update process.
The document discusses principles of parallel algorithm design. It introduces parallel algorithms, decomposition techniques, and characteristics of tasks and interactions. Recursive, data, exploratory, and hybrid decomposition techniques are covered. Mapping tasks to processes aims to minimize execution time by balancing load, minimizing interaction between processes, and assigning independent tasks to different processes. Granularity, degree of concurrency, and critical path length are used to analyze decompositions and their performance.
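Of the decomposition metrics listed above, critical path length is the easiest to make concrete: it is the longest dependency chain through the task graph and lower-bounds parallel execution time regardless of the number of processes. A small sketch over a hypothetical task DAG (the costs and dependencies are illustrative):

```python
def critical_path_length(tasks):
    """Length of the longest (critical) path through a task DAG given as
    {task: (cost, [dependencies])}. Memoized recursion computes each
    task's earliest finish time as its cost plus the latest dependency."""
    memo = {}
    def finish(t):
        if t not in memo:
            cost, deps = tasks[t]
            memo[t] = cost + max((finish(d) for d in deps), default=0)
        return memo[t]
    return max(finish(t) for t in tasks)

# Illustrative DAG: d waits for b and c, which both wait for a.
dag = {"a": (2, []), "b": (3, ["a"]), "c": (1, ["a"]), "d": (4, ["b", "c"])}
cp = critical_path_length(dag)  # critical path is a -> b -> d
```

Dividing total work by `cp` gives the average degree of concurrency, one of the decomposition-quality measures the document discusses.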
Tomography is important for network design and routing optimization. Prior approaches require either precise time synchronization or complex cooperation; furthermore, active tomography consumes explicit probing, resulting in limited scalability. To address the first issue we propose a novel delay correlation estimation methodology named DCE, which needs no synchronization or special cooperation. For the second issue we develop a passive realization mechanism that merely uses regular data flow, without explicit bandwidth consumption. Extensive simulations in OMNeT++ are carried out to evaluate its accuracy, showing that DCE measurements closely match the true values. The test results also show that the passive realization mechanism achieves both regular data transmission and the purpose of tomography, with excellent robustness across different background traffic levels and packet sizes.
This document provides the solutions to selected problems from the textbook "Introduction to Parallel Computing". The solutions are supplemented with figures where needed. Figure and equation numbers are represented in roman numerals to differentiate them from the textbook. The document contains solutions to problems from 13 chapters of the textbook covering topics in parallel computing models, algorithms, and applications.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy to handle jobs and process them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives closed-form solutions for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit from its service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
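The DLT idea of closed-form load fractions can be illustrated in its simplest setting: with negligible communication and energy terms, all machines finish simultaneously when each fraction is inversely proportional to its per-unit processing cost. This sketch covers only that special case, not the paper's full cost model:

```python
def load_fractions(unit_costs):
    """Split a divisible job so every machine finishes at the same time:
    alpha_i * w_i is constant for all i  =>  alpha_i proportional to 1/w_i.
    Simplified DLT sketch ignoring the communication and energy terms
    handled by the paper's closed-form solution."""
    inv = [1.0 / w for w in unit_costs]
    total = sum(inv)
    return [x / total for x in inv]

# Machine i takes unit_costs[i] seconds per unit of load (illustrative).
fracs = load_fractions([1.0, 2.0, 4.0])  # fastest machine gets most load
```

With these numbers the fractions are 4/7, 2/7, and 1/7, and every machine's finish time α·w equals 4/7, which is the equal-finish-time condition DLT exploits.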
A Review on Scheduling in Cloud Computingijujournal
Cloud computing provides software, infrastructure, and platform as services on a pay-per-use basis, driven by client requirements. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables the various cloud services to support framework implementation. This paper presents a far-reaching survey of the different types of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling. The survey gives an elaborate picture of grid, cloud, and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied developing models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency: an optimal Dynamic Voltage and Frequency Scaling (DVFS) based task allocation strategy can minimize the overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in terms of implementation, scalability, power consumption, and execution time with respect to other existing approaches.
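The DVFS trade-off behind such schedulers can be sketched with a textbook energy model: dynamic power scales roughly with f·V², and since voltage scales with frequency, energy per task grows roughly with f², so the energy-optimal choice is the lowest frequency that still meets the deadline. The model constant and frequency set below are illustrative assumptions, not RTAES itself:

```python
def pick_frequency(cycles, deadline_s, freqs_hz):
    """Choose the lowest DVFS frequency that meets the deadline.
    Under the simplified model E ~ k * f^2 * cycles (P ~ f * V^2, with
    V scaling with f), the lowest feasible frequency minimizes energy."""
    feasible = [f for f in freqs_hz if cycles / f <= deadline_s]
    if not feasible:
        return None  # no operating point meets the deadline
    return min(feasible)

# A task of 1e9 cycles with a 1 s deadline on three operating points.
f = pick_frequency(cycles=1e9, deadline_s=1.0,
                   freqs_hz=[0.5e9, 1.0e9, 2.0e9])
```

The performance caveat the abstract raises is visible here too: frequently switching between operating points has its own cost, which is what RTAES's reconfiguration-aware scheduling accounts for.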
AN OPEN JACKSON NETWORK MODEL FOR HETEROGENEOUS INFRASTRUCTURE AS A SERVICE O...IJCNCJournal
Cloud computing is an environment which provides services such as software, platform, and infrastructure on user demand. Applications deployed on cloud computing have become more varied and complex in order to adapt to growing numbers of end users and fluctuating workloads. One prominent characteristic of cloud computing is the heterogeneity of networks, hosts, and virtual machines (VMs). There have been many studies on cloud computing modeling based on queuing theory, but most have focused on the homogeneous case. In this study, we propose a cloud computing model based on an open Jackson network for multi-tier application systems deployed on heterogeneous VMs of IaaS cloud computing. Important metrics are analyzed in our experiments, such as mean waiting time, mean number of requests, and system throughput. In addition, the model's metrics are used to adjust the number of VMs allocated to applications. The experimental results show that the open queuing network provides high efficiency.
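An open Jackson network of the kind described reduces to two steps: solve the traffic equations λᵢ = γᵢ + Σⱼ λⱼ·P[j][i] for the effective arrival rate at each tier, then read off per-node M/M/1 metrics. A minimal sketch with illustrative rates and routing (the two-tier topology is an assumption, not the paper's experiment):

```python
def traffic_rates(gamma, P, iters=200):
    """Solve the open Jackson network traffic equations
    lambda_i = gamma_i + sum_j lambda_j * P[j][i] by fixed-point iteration."""
    n = len(gamma)
    lam = list(gamma)
    for _ in range(iters):
        lam = [gamma[i] + sum(lam[j] * P[j][i] for j in range(n))
               for i in range(n)]
    return lam

def mm1_metrics(lam, mu):
    """Per-node M/M/1 metrics: mean number in system L = rho/(1-rho)
    and, via Little's law, mean response time W = L / lambda."""
    rho = lam / mu
    L = rho / (1 - rho)
    return L, L / lam

# Two-tier example: external requests arrive only at tier 0 at rate 1;
# half of tier 0's output visits tier 1 (all rates are illustrative).
gamma = [1.0, 0.0]
P = [[0.0, 0.5],
     [0.0, 0.0]]
lam = traffic_rates(gamma, P)          # effective arrival rate per tier
L1, W1 = mm1_metrics(lam[1], mu=1.0)   # tier 1 runs at rho = 0.5
```

Heterogeneous VMs enter through per-node service rates μᵢ; adjusting the VM count for a tier effectively scales that μᵢ, which is how the model's metrics feed back into allocation.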
Intelligent Workload Management in Virtualized Cloud EnvironmentIJTET Journal
Abstract— Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational architecture. To improve the overall performance of cloud computing under deadline constraints, a task scheduling model is established for reducing the system's power consumption and execution time while improving the profit of service providers. For this scheduling model, a solution technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the sorting of Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains a better solution, balancing the load across multiple objectives.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
A hybrid approach for scheduling applications in cloud computing environment IJECEIAES
Cloud computing plays an important role in our daily life. It has a direct, positive impact on sharing and updating data, knowledge, storage, and scientific resources between various regions. Cloud computing performance is heavily based on the job scheduling algorithms used for queue waiting in modern scientific applications, and researchers consider cloud computing a popular platform for new applications. These scheduling algorithms help in designing efficient queue lists in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance cloud computing performance and reduce the time jobs spend waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hinder the development of cloud computing applications; a smart scheduling technique is proposed to improve processing performance in cloud applications. Our experimental results show that the proposed job scheduling algorithm achieves outstanding improvement rates, reducing the waiting time for jobs in the queue list.
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...ijassn
Macro-programming is the new generation advanced method of using Wireless Sensor Network (WSNs), where application developers can extract data from sensor nodes through a high level abstraction of the system. Instead of developing the entire application, task graph representation of the WSN model presents simplified approach of data collection. However, mapping of tasks onto sensor nodes highlights several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid approach of task mapping for WSN – Hybrid Genetic Algorithm, considering multiple objectives of optimization – energy consumption, routing delay and soft real time requirement. We also present a method to configure the algorithm as per user's need by changing the heuristics used for optimization. The trade-off analysis between energy consumption and delivery delay was performed and simulation results are presented. The algorithm is applicable during macro-programming enabling developers to choose a better mapping according to their application requirements.
A Prolific Scheme for Load Balancing Relying on Task Completion Time IJECEIAES
In networks with a lot of computation, load balancing gains increasing significance. The ultimate aim is to facilitate the sharing of services, resources, and applications on the network over the Internet, and a key issue to be addressed in such networks is load balancing. Load is the number of tasks 't' performed by a computation system, and can be categorized as network load and CPU load. An efficient load balancing strategy should assign load between the nodes so as to enhance resource utilization and minimize computation time, which can be accomplished by distributing the load uniformly across the nodes. A load balancing method should guarantee that each node in a network performs an almost equal amount of work, relative to its capacity and resource availability. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning load to the nodes in the network.
An optimized cost-based data allocation model for heterogeneous distributed ...IJECEIAES
The document presents an optimized cost-based data allocation model for heterogeneous distributed computing systems. It aims to reduce the total system cost by optimizing how data is partitioned and allocated across different processors. The proposed approach uses an artificial bee colony algorithm to determine the allocation that minimizes the total cost, which is calculated by summing the costs of communication, computation, and network usage. Simulation results show the technique is able to efficiently lower the total system cost compared to existing methods and optimize the partitioned data allocation in heterogeneous distributed computing systems.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIMEijccsa
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, exploration of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive solutions for the distribution and load balancing of resources with better efficiency, which is one of the most promising research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the coming response time, giving a better way of resolving resource allocation against a threshold value. The experimental results are promising and valuable for load balancing with predicted response time, showing that prediction is a strong direction for load balancing.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJERD, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, reserach and review articles, IJERD Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathemetics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer reviw journal, indexed journal, reserach and review articles, engineering journal, www.ijerd.com, research journals,
yahoo journals, bing journals, International Journal of Engineering Research and Development, google journals, hard copy of journal
Similar to PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMODEL)] QUEUING SYSTEMS (20)
11th International Conference on Computer Science, Engineering and Informati...ijgca
11th International Conference on Computer Science, Engineering and Information
Technology (CSEIT 2024) will provide an excellent international forum for sharing knowledge
and results in theory, methodology and applications of Computer Science, Engineering and
Information Technology. The Conference looks for significant contributions to all major fields of
the Computer Science and Information Technology in theoretical and practical aspects. The aim
of the conference is to provide a platform to the researchers and practitioners from both academia
as well as industry to meet and share cutting-edge development in the field.
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM...ijgca
Cloud computing is a concept of providing user and application oriented services in a virtual environment.
Users can use the various cloud services as per their requirements dynamically. Different users have
different requirements in terms of application reliability, performance and fault tolerance. Static and rigid
fault tolerance policies provide a consistent degree of fault tolerance as well as overhead. In this research
work we have proposed a method to implement dynamic fault tolerance considering customer
requirements. The cloud users have been classified in to sub classes as per the fault tolerance requirements.
Their jobs have also been classified into compute intensive and data intensive categories. The varying
degree of fault tolerance has been applied consisting of replication and input buffer. From the simulation
based experiments we have found that the proposed dynamic method performs better than the existing
methods.
SERVICE LEVEL AGREEMENT BASED FAULT TOLERANT WORKLOAD SCHEDULING IN CLOUD COM...ijgca
Cloud computing is a concept of providing user and application oriented services in a virtual environment.
Users can use the various cloud services as per their requirements dynamically. Different users have
different requirements in terms of application reliability, performance and fault tolerance. Static and rigid
fault tolerance policies provide a consistent degree of fault tolerance as well as overhead. In this research
work we have proposed a method to implement dynamic fault tolerance considering customer
requirements. The cloud users have been classified in to sub classes as per the fault tolerance requirements.
Their jobs have also been classified into compute intensive and data intensive categories. The varying
degree of fault tolerance has been applied consisting of replication and input buffer. From the simulation
based experiments we have found that the proposed dynamic method performs better than the existing
methods.
11th International Conference on Computer Science, Engineering and Informatio...ijgca
11th International Conference on Computer Science, Engineering and Information Technology (CSEIT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science, Engineering and Information Technology. The Conference looks for significant contributions to all major fields of the Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Load balancing functionalities are crucial for best Grid performance and utilization. Accordingly,this
paper presents a new meta-scheduling method called TunSys. It is inspired from the natural phenomenon of
heat propagation and thermal equilibrium. TunSys is based on a Grid polyhedron model with a spherical
like structure used to ensure load balancing through a local neighborhood propagation strategy.
Furthermore, experimental results compared to FCFS, DGA and HGA show encouraging results in terms
of system performance and scalability and in terms of load balancing efficiency.
11th International Conference on Computer Science and Information Technology ...ijgca
11th International Conference on Computer Science and Information Technology (CSIT 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computer Science and Information Technology. The Conference looks for significant contributions to all major fields of the Computer Science and Information Technology in theoretical and practical aspects. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
AN INTELLIGENT SYSTEM FOR THE ENHANCEMENT OF VISUALLY IMPAIRED NAVIGATION AND...ijgca
Technological advancement has brought the masses unprecedented convenience, but unnoticed by many, a
population neglected through the age of technology has been the visually impaired population. The visually
impaired population has grown through ages with as much desire as everyone else to adventure but lack
the confidence and support to do so. Time has transported society to a new phase condensed in big data,
but to the visually impaired population, this quick-pace living lifestyle, along with the unpredictable nature
of natural disaster and COVID-19 pandemic, has dropped them deeper into a feeling of disconnection from
the society. Our application uses the global positioning system to support the visually impaired in
independent navigation, alerts them in face of natural disasters, and reminds them to sanitize their devices
during the COVID-19 pandemic
13th International Conference on Data Mining & Knowledge Management Process (...ijgca
13th International Conference on Data Mining & Knowledge Management Process (CDKP 2024) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to these topics only.
Call for Papers - 15th International Conference on Wireless & Mobile Networks...ijgca
15th International Conference on Wireless & Mobile Networks (WiMoNe 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Wireless & Mobile computing Environment. Current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Wireless, Mobile Networks & its applications had received a significant and sustained research interest in terms of designing and deploying large scale and high performance computational applications in real life. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Call for Papers - 4th International Conference on Big Data (CBDA 2023)ijgca
4th International Conference on Big Data (CBDA 2023) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Big Data. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Big Data.
Call for Papers - 15th International Conference on Computer Networks & Commun...ijgca
15th International Conference on Computer Networks & Communications (CoNeCo 2023) looks for significant contributions to the Computer Networks & Communications for Wired and Wireless Networks in theoretical and practical aspects. Original papers are invited on Computer Networks, Network Protocols and Wireless Networks, Data Communication Technologies, and Network Security. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Call for Papers - 15th International Conference on Computer Networks & Commun...ijgca
15th International Conference on Computer Networks & Communications (CoNeCo 2023) looks for significant contributions to the Computer Networks & Communications for Wired and Wireless Networks in theoretical and practical aspects. Original papers are invited on Computer Networks, Network Protocols and Wireless Networks, Data Communication Technologies, and Network Security. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Call for Papers - 9th International Conference on Cryptography and Informatio...ijgca
9th International Conference on Cryptography and Information Security (CRIS 2023) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum. It aims to bring together scientists, researchers and students to exchange novel ideas and results in all aspects of cryptography, coding and Information security.
Call for Papers - 9th International Conference on Cryptography and Informatio...ijgca
9th International Conference on Cryptography and Information Security (CRIS 2023) provides a forum for researchers who address this issue and to present their work in a peer-reviewed forum. It aims to bring together scientists, researchers and students to exchange novel ideas and results in all aspects of cryptography, coding and Information security.
Call for Papers - 4th International Conference on Machine learning and Cloud ...ijgca
4th International Conference on Machine learning and Cloud Computing (MLCL 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Cloud computing. The aim of the conference is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
Call for Papers - 11th International Conference on Data Mining & Knowledge Ma...ijgca
11th International Conference on Data Mining & Knowledge Management Process (DKMP 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Data Mining and knowledge management process. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern data mining concepts and establishing new collaborations in these areas.
Call for Papers - 4th International Conference on Blockchain and Internet of ...ijgca
4th International Conference on Blockchain and Internet of Things (BIoT 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Blockchain and Internet of Things. The Conference looks for significant contributions to all major fields of the Blockchain and Internet of Things in theoretical and practical aspects.
Call for Papers - International Conference IOT, Blockchain and Cryptography (...ijgca
The 4th International Conference on Cloud, Big Data and Web Services (CBW 2023) will take place from March 25-26, 2023 in Sydney, Australia. The conference aims to facilitate the exchange of innovative ideas and research related to cloud computing, big data, and web services. Authors are invited to submit papers by February 18, 2023 on topics including cloud platforms, big data analytics, and web service models and architectures. Selected papers will be published in related journals.
Call for Paper - 4th International Conference on Cloud, Big Data and Web Serv...ijgca
4th International Conference on Cloud, Big Data and Web Services (CBW 2023) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Cloud, Big Data and Web services. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Cloud, Big Data and web services.
Call for Papers - International Journal of Database Management Systems (IJDMS)ijgca
The International Journal of Database Management Systems (IJDMS) is a bi monthly open access peer-reviewed journal that publishes articles which contributenew results in all areas of the database management systems & its applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Modern developments in this filed and establishing new collaborations in these areas.
Height and depth gauge linear metrology.pdfq30122000
Height gauges may also be used to measure the height of an object by using the underside of the scriber as the datum. The datum may be permanently fixed or the height gauge may have provision to adjust the scale, this is done by sliding the scale vertically along the body of the height gauge by turning a fine feed screw at the top of the gauge; then with the scriber set to the same level as the base, the scale can be matched to it. This adjustment allows different scribers or probes to be used, as well as adjusting for any errors in a damaged or resharpened probe.
We have designed & manufacture the Lubi Valves LBF series type of Butterfly Valves for General Utility Water applications as well as for HVAC applications.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Sri Guru Hargobind Ji - Bandi Chor Guru.pdfBalvir Singh
Sri Guru Hargobind Ji (19 June 1595 - 3 March 1644) is revered as the Sixth Nanak.
• On 25 May 1606 Guru Arjan nominated his son Sri Hargobind Ji as his successor. Shortly
afterwards, Guru Arjan was arrested, tortured and killed by order of the Mogul Emperor
Jahangir.
• Guru Hargobind's succession ceremony took place on 24 June 1606. He was barely
eleven years old when he became 6th Guru.
• As ordered by Guru Arjan Dev Ji, he put on two swords, one indicated his spiritual
authority (PIRI) and the other, his temporal authority (MIRI). He thus for the first time
initiated military tradition in the Sikh faith to resist religious persecution, protect
people’s freedom and independence to practice religion by choice. He transformed
Sikhs to be Saints and Soldier.
• He had a long tenure as Guru, lasting 37 years, 9 months and 3 days
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GD MODEL)] QUEUING SYSTEMS
International Journal of Grid Computing & Applications (IJGCA) Vol.4, No.1, March 2013
DOI: 10.5121/ijgca.2013.4101
N. Ani Brown Mary¹ and K. Saravanan²
¹ Department of Computer Science and Engineering, Regional Centre of Anna University, Tirunelveli
anibrownvimal@gmail.com
² Assistant Professor, Department of Computer Science and Engineering, Regional Centre of Anna University, Tirunelveli
Saravanan.krishnann@gmail.com
ABSTRACT
The ever-increasing popularity of the cloud computing paradigm and the emerging concept of federated cloud computing have stimulated research efforts towards intelligent cloud service selection, aimed at developing techniques that enable cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems essentially provide access to large pools of resources, and they hide a great deal of the underlying services from the user through virtualization. In this paper, the cloud data center is modelled as an [(M/G/1) : (∞/GD MODEL)] queuing system with single task arrivals and a task request buffer of infinite capacity.
KEYWORDS
Cloud computing, performance analysis, response time, queuing theory, Markov chain process
1. INTRODUCTION
Cloud computing denotes the Internet-based development and use of computing technology. It has become an IT buzzword over the past few years and has often been used alongside related terms such as software as a service (SaaS), grid computing, cluster computing, autonomic computing, and utility computing [1]. SaaS is a software delivery model in which software and related data are centrally hosted on the cloud; it is typically accessed by users through a thin client via a web browser. Grid computing and cluster computing are two underlying computing technologies from which cloud computing developed. Autonomic computing refers to computing systems capable of self-management, and utility computing is the packaging of computing resources such as computational and storage devices [2,3]. Cloud centers differ from conventional queuing systems in a number of important aspects. A cloud center can have a very large number of server (facility) nodes, typically of the order of hundreds or thousands [4]; conventional queuing analysis rarely considers systems of this size. Task service times must be modeled by a general, rather than the more convenient exponential, probability distribution, and the coefficient of variation of the task service time may rise above the value of one. Due to the dynamic nature of cloud environments, the diversity of user requests, and
time dependency of load, cloud centers must provide expected quality of service at widely
varying loads [5,6].
The flourishing development of the cloud computing paradigm necessitates accurate performance evaluation of cloud data centers. As exact modeling of cloud centers is not feasible owing to their nature and the diversity of user requests, we describe a novel approximate analytical model for performance evaluation of cloud server farms and solve it to obtain an accurate estimation of the complete probability distribution of the request response time and other important performance indicators. The model allows cloud operators to determine the relationship between the number of servers and the input buffer size, on one side, and performance indicators such as the mean number of tasks in the system, the blocking probability, and the probability that a task will obtain immediate service, on the other. The key benefit of having numerous servers in cloud computing is that system performance improves by reducing the mean queue length and waiting time compared to the conventional approach of a single server, so that consumers need not wait for long periods of time and the queue length need not grow large.

In this paper, we model the cloud center as an [(M/G/1) : (∞/GD MODEL)] queuing system with single task arrivals and a task request buffer of infinite capacity. We evaluate the performance of the queuing system using an analytical model and solve it to obtain important performance factors such as the mean number of tasks in the system. The remainder of the paper is organized as follows. Section 2 describes the related work. Section 3 gives a brief overview of an assortment of queuing models and assumptions. Section 4 discusses our analytical model in detail. We present and discuss analytical results in Section 5. Our findings are summarized in Section 6, where we also outline directions for future work.
2. RELATED WORK
Cloud computing provides the user with a complete software environment, supplying resources such as computing power, bandwidth and storage capacity. It has attracted considerable research attention, but only a small portion of the work done so far has addressed performance issues, and a rigorous analytical approach has been adopted by only a handful among these. The response time is a major constraint in a queuing system; its distribution was obtained for a cloud center modelled as an M/M/m/m + r queuing system, where both interarrival and service times were assumed to be exponentially distributed and the system had a finite buffer of size m + r. The response time was broken down into waiting, service, and execution periods, assuming that all three periods are independent, which is unrealistic according to the authors' own argument in [7]. The case where the inter-arrival time and/or the service time is not exponential is more complex. Most theoretical analyses have relied on extensive research in performance evaluation of M/G/m queueing systems. However, the probability distributions of response time and queue length in M/G/m cannot be obtained in closed form, which necessitated the search for a suitable approximation in [8]. The average response time of a service request was measured in [14], but measurement techniques are hard to use for computer service performance prediction. In order to compute a percentile of the response time, one first has to find the probability distribution of the response time. This is not an easy task in a complex computing environment involving many computing nodes.
A state-dependent M/G/1 queue has been considered in [9], where the interarrival and service time distributions depend on the amount of unfinished work in the system; perturbation methods were applied to derive approximations for several measures pertaining to the unfinished work and the mean busy period in such a queue. The important observation in [11] is that the arrival of a consumer (the size of which depends on the state of the underlying Markov chain) can be viewed as the arrival of a super-consumer whose service time is distributed as the sum of the service requests of the consumers in the system. A comparison has been performed among all these approaches, mainly focusing on obtaining reliable estimates of the performance of the various architectures depending on the workflows; however, no economic cost comparisons between the different platforms are shown. Some works have compared the performance achieved by means of the cloud with other approaches based on desktop workstations, local clusters, and HPC shared resources with reference to sample scientific workloads in [16].
2.1. Power Consumption Models

Power dissipation and circuit delay in digital CMOS circuits can be accurately modeled by simple equations, even for complex microprocessor circuits. CMOS circuits have dynamic, static, and short-circuit power dissipation; however, the dominant component in a well-designed circuit is the dynamic power consumption P (i.e., the switching component of power), which is approximately

P = bDW²f,

where b is an activity factor, D is the loading capacitance, W is the supply voltage, and f is the clock frequency [12].

In the ideal case, the supply voltage and the clock frequency are related in such a way that W ∝ f^φ for some constant φ > 0 [15]. The processor execution speed s is usually linearly proportional to the clock frequency, namely s ∝ f. For ease of discussion, we will assume that W = cf^φ and s = ef, where c and e are some constants. Hence, the power consumption is

P = bDW²f = bDc²f^(2φ+1) = (bDc²/e^(2φ+1)) s^(2φ+1) = ξs^α,

where ξ = bDc²/e^(2φ+1) and α = 2φ + 1. For illustration, by setting c = 1.17, bD = 8.0, e = 2.0, and φ = 0.6, the value of P calculated from P = ξs^α is reasonably close to that reported in [13] for the Intel Pentium M processor.
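The speed-power relationship above can be sketched numerically; the parameter values below are the illustrative constants quoted in the text, not measurements of any particular processor.

```python
# Minimal sketch of the CMOS dynamic power model P = xi * s**alpha,
# with the paper's illustrative parameters (b*D, c, e, phi) as defaults.

def power(s, bD=8.0, c=1.17, e=2.0, phi=0.6):
    """Dynamic power at execution speed s, using P = xi * s**alpha."""
    alpha = 2 * phi + 1           # exponent alpha = 2*phi + 1
    xi = bD * c**2 / e**alpha     # xi = b*D*c^2 / e^(2*phi+1)
    return xi * s**alpha

# Power grows superlinearly with speed: doubling s multiplies P by 2**alpha.
```

Because α > 1, running a processor faster is disproportionately expensive in power, which is the basis of speed-scaling arguments in the cited energy models.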
2.2. Models and Assumptions

We define the ergodicity of a Markov chain as follows: a Markov chain is called ergodic if it is irreducible, recurrent non-null, and aperiodic. We define communicability as follows: state i communicates with state j, written i → j, if the chain may ever visit state j with positive probability starting from i; that is, i → j if p_ij(n) > 0 for some n ≥ 0. We say that i and j intercommunicate if i → j and j → i, in which case we write i ↔ j. It can be seen that ↔ is an equivalence relation, hence the state space S can be partitioned into the equivalence classes of ↔; within each equivalence class all states are of the same type.

A set C of states is called
(a) closed, if p_ij = 0 for all i ∈ C, j ∉ C;
(b) irreducible, if i ↔ j for all i, j ∈ C.
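For a finite chain these definitions can be checked mechanically; the helper below is an illustrative sketch (not part of the paper's model) that tests whether every pair of states intercommunicates, i.e. whether the chain forms a single irreducible class.

```python
# Check irreducibility of a finite Markov chain given its transition
# matrix P (a list of rows): every state must reach every other state.

def reachable(P, i):
    """Set of states j with i -> j, i.e. p_ij(n) > 0 for some n >= 0."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def irreducible(P):
    """True if i <-> j for all states i, j (one communicating class)."""
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))
```

A two-state chain that flips state with probability 0.5 is irreducible, while adding an absorbing state (a closed singleton class) breaks irreducibility.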
4. International Journal of Grid Computing & Applications (IJGCA) Vol.4, No.1, March 2013
4
Kendall's classification of queuing systems exists in several modifications. Queuing models are generally constructed to represent the steady state of a queuing system, that is, the typical, long-run or average state of the system. As a consequence, these are stochastic models that represent the probability that a queuing system will be found in a particular configuration or state. A general procedure for constructing and analysing such queuing models is:

1. Identify the parameters of the system, such as the arrival rate, service time and queue capacity, and perhaps draw a diagram of the system.
2. Identify the system states. (A state will generally represent the integer number of customers, people, jobs, calls, messages, etc. in the system and may or may not be limited.)
3. Draw a state transition diagram that represents the possible system states and identify the rates to enter and leave each state. This diagram is a representation of a Markov chain.
4. Because the state transition diagram represents the steady-state situation, there is a balanced flow between states, so the probabilities of being in adjacent states can be related mathematically in terms of the arrival and service rates and state probabilities.
5. Express all the state probabilities in terms of the empty-state probability, using the inter-state transition relationships.
6. Determine the empty-state probability by using the fact that all state probabilities always sum to 1.
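For a simple birth-death system such as M/M/1, steps 4-6 can be carried out directly: balance between adjacent states gives p_n = ρ·p_{n-1}, and normalization fixes p_0. A minimal sketch, truncating the infinite state space for illustration:

```python
# Steps 4-6 for an M/M/1 queue: balance between adjacent states gives
# p_n proportional to rho**n; normalization then yields p_0 = 1 - rho.

def mm1_state_probs(lam, mu, n_states=200):
    """Approximate steady-state probabilities p_0 .. p_{n_states-1}."""
    rho = lam / mu                                # traffic intensity, < 1
    unnorm = [rho**n for n in range(n_states)]    # step 5: p_n = rho^n * p_0
    total = sum(unnorm)                           # step 6: sum of probs = 1
    return [p / total for p in unnorm]

probs = mm1_state_probs(lam=2.0, mu=5.0)
# probs[0] approaches 1 - rho = 0.6 as the truncation grows.
```

The truncation at `n_states` is an assumption made for the sketch; the tail ρ^n is negligible here because ρ < 1.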
M/M/1 represents a single server that has unlimited queue capacity and an infinite calling population; both arrivals and service are Poisson (or random) processes, meaning that the statistical distributions of both the inter-arrival times and the service times follow the exponential distribution. Because of the mathematical nature of the exponential distribution, a number of quite simple relationships can be derived for several performance measures based on knowing the arrival rate and the service rate. M/G/1 represents a single server that has unlimited queue capacity and an infinite calling population; the arrival process is still Poisson, meaning that the distribution of the inter-arrival times still follows the exponential distribution, but the distribution of the service time does not. The service time may follow any general statistical distribution, not just the exponential one. Relationships can still be derived for a (limited) number of performance measures if one knows the arrival rate and the mean and variance of the service time. However, the derivations are generally more complex and difficult. As most of these results rely on some approximation(s) to obtain a closed-form solution, they are not universally applicable:
1. Approximations are reasonably accurate only when the number of servers is
comparatively small, typically below 10 or so, which makes them unsuitable for
performance analysis of cloud computing data centers.
2. Approximations are very sensitive to the probability distribution of task service times,
and they become increasingly inaccurate as the coefficient of variation (CoV) of the
service time increases toward and beyond one.
3. Finally, approximation errors are particularly pronounced when the traffic intensity ρ is
small, and/or when both the number of servers m and the CoV of the service time are
large. As a result, the results mentioned above are not directly applicable to performance
analysis of cloud computing server farms, where one or more of the following holds: the
number of servers is huge; the distribution of service times is unknown and does not, in
general, follow any of the well-behaved probability distributions such as the exponential
distribution; and the traffic intensity can vary over an extremely wide range.
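To make the sensitivity to the service-time distribution concrete, the exact single-server M/G/1 mean-value (Pollaczek-Khinchine) formula can be evaluated for several CoV values. The following sketch is illustrative only; the function name and the parameter values are our assumptions, not part of the original analysis.

```python
# Pollaczek-Khinchine mean-value formula for a single-server M/G/1 queue:
#   L = rho + lam^2 * E[T^2] / (2 * (1 - rho)),  with rho = lam * E[T].
# Sketch: how the mean number of tasks in the system grows with the
# coefficient of variation (CoV) of the service time at fixed utilisation.

def mg1_mean_in_system(lam, mean_service, cov):
    """Mean number of tasks in an M/G/1 system (Pollaczek-Khinchine)."""
    rho = lam * mean_service
    assert rho < 1, "unstable queue: rho must be < 1"
    # E[T^2] = Var(T) + E[T]^2 = (cov^2 + 1) * E[T]^2
    second_moment = (cov ** 2 + 1) * mean_service ** 2
    return rho + lam ** 2 * second_moment / (2 * (1 - rho))

for cov in (0.5, 1.0, 2.0):   # cov = 1.0 corresponds to exponential (M/M/1)
    print(cov, round(mg1_mean_in_system(0.8, 1.0, cov), 2))
```

At utilisation ρ = 0.8 the mean system size roughly triples as the CoV goes from 0.5 to 2.0, which is why approximations tuned to one distribution degrade quickly for another.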
2.3. The Mean Number of Consumers in the System
In the queuing models considered so far, the inter-arrival and service times were assumed to
follow exponential distributions with parameters λ and µ. When arrivals and departures do not
follow a Poisson process, the analysis becomes more difficult, but we can still derive the formulas
of a particular non-Markovian model, [(M/G/1) : (∞/GD)], where M indicates that the number of
arrivals in time t follows a Poisson process, G indicates a general service-time distribution, ∞
indicates that the waiting-space capacity is infinite, and GD indicates a general service discipline.
International Journal of Grid Computing & Applications (IJGCA) Vol.4, No.1, March 2013
Let us assume that arrivals follow a Poisson process with arrival rate λ. We also assume
that service times are independently and identically distributed random variables with an
arbitrary probability distribution. Let b(t) be the probability density function of the service time T
between two successive departures. Let N(t) be the number of consumers in the system at time
t ≥ 0, let tn be the time instant at which the nth consumer completes service and departs, and let
Xn represent the number of consumers in the system when the nth consumer departs. The
sequence of random variables {Xn : n = 1, 2, 3, ...} is then a Markov chain. Hence we have

Xn+1 = Xn - 1 + A, if Xn > 0 i.e. Xn ≥ 1
     = A,          if Xn = 0

where A is the number of consumers arriving during the service time T of the (n+1)th consumer.
If U(Xn) denotes the unit step function, then we can write

U(Xn) = 1, if Xn > 0 i.e. Xn ≥ 1
      = 0, if Xn = 0

Therefore Xn+1 can be written as

Xn+1 = Xn - U(Xn) + A ...............(1)
Suppose the system is in steady state; then the probability distribution of the number of
consumers in the system is independent of time, and in particular the mean system size at
departure epochs is constant:
E(Xn+1) = E(Xn)
Taking expectation on both sides of (1), we get
E(Xn+1) = E(Xn - U(Xn) + A)
E(Xn+1) = E(Xn) - E(U(Xn)) + E(A) .............(2)
Since E(Xn+1) = E(Xn), we get
E(Xn) = E(Xn) - E(U(Xn)) + E(A)
E(U(Xn)) = E(A) ................(3)
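As an illustrative check (not from the paper), recursion (1) and identity (3) can be verified by Monte Carlo simulation of the embedded chain. The exponential service-time choice and all parameter values below are our assumptions; any service distribution with λ E(T) < 1 would do.

```python
import random

# Simulate the embedded chain X_{n+1} = X_n - U(X_n) + A over departure
# epochs and check equation (3): E(U(X_n)) = E(A) (= rho in steady state).

def simulate(lam=0.6, mu=1.0, steps=200_000, seed=1):
    random.seed(seed)
    x = 0
    u_sum = a_sum = 0
    for _ in range(steps):
        t = random.expovariate(mu)          # service time T ~ Exp(mu)
        # count Poisson(lam) arrivals during T by accumulating exponential gaps
        a, elapsed = 0, random.expovariate(lam)
        while elapsed < t:
            a += 1
            elapsed += random.expovariate(lam)
        u = 1 if x > 0 else 0               # unit step U(X_n)
        x = x - u + a                       # recursion (1)
        u_sum += u
        a_sum += a
    return u_sum / steps, a_sum / steps

mean_u, mean_a = simulate()
print(mean_u, mean_a)   # both close to rho = lam / mu = 0.6
```

Both sample averages converge to ρ = λ/µ = 0.6, as equation (3) predicts.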
Squaring equation (1), we have

X²n+1 = (Xn - U(Xn) + A)²
      = X²n + U²(Xn) + A² - 2 Xn U(Xn) + 2 A Xn - 2 A U(Xn) ..............(4)

But

U²(Xn) = 1, if Xn > 0
       = 0, if Xn = 0

since Xn denotes a number of consumers and hence cannot be negative. Because U(Xn) takes only
the values 1 or 0,

U²(Xn) = U(Xn)

Also,

Xn U(Xn) = Xn

Hence (4) becomes

X²n+1 = X²n + U(Xn) + A² - 2 Xn + 2 A Xn - 2 A U(Xn)

i.e.,

2 Xn - 2 A Xn = X²n - X²n+1 + U(Xn) + A² - 2 A U(Xn)
2 Xn (1 - A) = X²n - X²n+1 + U(Xn) + A² - 2 A U(Xn)

Taking expectations on both sides, and using the fact that A is independent of Xn (and hence of
U(Xn)), we get

2 [E(Xn) - E(A) E(Xn)] = E(X²n) - E(X²n+1) + E(U(Xn)) + E(A²) - 2 E(A) E(U(Xn))

In the steady state E(X²n+1) = E(X²n), and by equation (3), E(U(Xn)) = E(A). Therefore

2 E(Xn) [1 - E(A)] = E(A) + E(A²) - 2 [E(A)]²

E(Xn) = (E(A) + E(A²) - 2 [E(A)]²) / (2 (1 - E(A))) .............(5)
Since the number of arrivals A during the service time T follows a Poisson process with rate λ,

E(A | T) = λT
E(A² | T) = λ²T² + λT ...............(6)

These follow from the mean and variance of the Poisson process, i.e.,

E[X(t)] = λt
E[X²(t)] = λ²t² + λt
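These two Poisson moments can be verified numerically. The sketch below (the rate, horizon, and sample-size values are illustrative assumptions, not from the paper) estimates E[X(t)] and E[X²(t)] by simulating the arrival process directly.

```python
import random

# Estimate the first and second moments of a Poisson process X(t) and
# compare with E[X(t)] = lam*t and E[X^2(t)] = lam^2*t^2 + lam*t.

random.seed(0)

def poisson_count(rate, horizon):
    """Number of events of a rate-`rate` Poisson process in [0, horizon)."""
    k, s = 0, random.expovariate(rate)
    while s < horizon:
        k += 1
        s += random.expovariate(rate)
    return k

lam, t, n = 2.0, 3.0, 200_000
xs = [poisson_count(lam, t) for _ in range(n)]
m1 = sum(xs) / n
m2 = sum(x * x for x in xs) / n
print(m1, m2)   # approx. lam*t = 6 and lam^2*t^2 + lam*t = 42
```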
Also,

E(A) = E(E(A | T)) = E(λT) = λ E(T) ........................(7)

Similarly,

E(A²) = E(E(A² | T)) = E(λ²T² + λT) = λ² E(T²) + λ E(T) ........................(8)
Now equation (5) becomes

E(Xn) = (λ² E(T²) + λ E(T) + λ E(T) - 2 [λ E(T)]²) / (2 (1 - λ E(T)))
      = (λ² E(T²) + 2 λ E(T) - 2 λ² [E(T)]²) / (2 (1 - λ E(T)))
      = (2 λ E(T) [1 - λ E(T)] + λ² E(T²)) / (2 (1 - λ E(T)))

E(Xn) = λ E(T) + λ² E(T²) / (2 (1 - λ E(T))) ..........(9)

This is the Pollaczek-Khinchine formula for the mean number of consumers in an M/G/1 system.
The mean number of consumers in the system is obtained from equation (9). Notice that a
multi-server system with multiple identical servers can be configured to serve requests from a
given application domain; here, however, we focus only on the mean number of consumers in
the system and do not consider other sources of delay, such as resource allocation and
provisioning, virtual machine instantiation and deployment, and other overheads in a complex
cloud computing environment.
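Equation (9) can be cross-checked against a direct simulation of the embedded chain of equation (1). The sketch below is illustrative only; the uniform service-time distribution and all parameter values are our assumptions.

```python
import random

# Compare the long-run mean of X_n over departure epochs with equation (9):
#   E(Xn) = lam*E(T) + lam^2*E(T^2) / (2*(1 - lam*E(T)))
# Assumed setup: service times T ~ Uniform(0, 1), Poisson arrivals.

def poisson_arrivals(lam, horizon):
    """Number of events of a rate-lam Poisson process in [0, horizon)."""
    count, s = 0, random.expovariate(lam)
    while s < horizon:
        count += 1
        s += random.expovariate(lam)
    return count

def mean_system_size(lam=1.2, steps=300_000, seed=7):
    random.seed(seed)
    x, total = 0, 0
    for _ in range(steps):
        t = random.uniform(0.0, 1.0)     # service time T ~ Uniform(0, 1)
        a = poisson_arrivals(lam, t)     # arrivals A during the service
        x = max(x - 1, 0) + a            # X_{n+1} = X_n - U(X_n) + A
        total += x
    return total / steps

lam = 1.2
et, et2 = 0.5, 1.0 / 3.0                 # E(T), E(T^2) for Uniform(0, 1)
formula = lam * et + lam ** 2 * et2 / (2 * (1 - lam * et))
print(round(mean_system_size(lam), 2), round(formula, 2))
```

With ρ = λ E(T) = 0.6, both the simulated mean and the formula give about 1.2 consumers in the system.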
2.4. Waiting Time Distribution
The waiting time of a consumer in the system is obtained from the mean number of consumers in
the system already calculated in equation (9):
E(Xn) = 2 λ E(T) [1 - λ E(T)] / (2 (1 - λ E(T))) + λ² E(T²) / (2 (1 - λ E(T)))

Dividing by the arrival rate λ (Little's law) gives the mean time a consumer spends in the system:

E(Xnⁱ) = 2 E(T) [1 - λ E(T)] / (2 (1 - λ E(T))) + λ E(T²) / (2 (1 - λ E(T))) .............(10)

Subtracting the mean number of consumers in service, ρ = λ E(T) = λ/µ, from E(Xn) gives the
mean number of consumers waiting in the queue:

E(Xnⁱⁱ) = 2 λ E(T) [1 - λ E(T)] / (2 (1 - λ E(T))) + λ² E(T²) / (2 (1 - λ E(T))) - ρ ...........(11)

Subtracting the mean service time 1/µ = E(T) from (10) gives the mean waiting time in the queue:

E(Xnⁱⁱⁱ) = 2 E(T) [1 - λ E(T)] / (2 (1 - λ E(T))) + λ E(T²) / (2 (1 - λ E(T))) - (1/µ) ...........(12)
With the help of the waiting-time distribution, the delay and queue-length values experienced by
consumers waiting for resources to be provided by the providers are obtained.
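The four measures in equations (9) to (12) can be computed together and checked for internal consistency (Little's law ties them together). The sketch below uses assumed example values; with exponential service it reduces to the familiar M/M/1 closed forms.

```python
# Performance measures of an M/G/1 queue, following equations (9)-(12).

def mg1_metrics(lam, et, et2):
    """(L, W, Lq, Wq) for arrival rate lam, service moments et = E(T), et2 = E(T^2)."""
    rho = lam * et
    assert rho < 1, "unstable queue: rho must be < 1"
    L = rho + lam ** 2 * et2 / (2 * (1 - rho))   # eq. (9): mean number in system
    W = L / lam                                  # eq. (10): mean time in system
    Lq = L - rho                                 # eq. (11): mean number in queue
    Wq = W - et                                  # eq. (12): mean waiting time in queue
    return L, W, Lq, Wq

# Exponential service with mean 1 (so E(T^2) = 2) reduces to M/M/1 with mu = 1:
L, W, Lq, Wq = mg1_metrics(lam=0.5, et=1.0, et2=2.0)
print(L, W, Lq, Wq)   # 1.0 2.0 0.5 1.0, matching the M/M/1 closed forms
```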
2.5. Figures and Tables

Table 1. Utility and Delay

M/GD     Utility      Queue   Delay
1000     5.62037      0       0.00634
5000     21.16239     0       0.61171
10000    47.63793     1       5.96816
15000    86.85273     2       26.43108
20000    87.68589     2       29.94669
25000    88.572       2       48.10515
30000    89.55422     2       58.40535
35000    90.09729     3       51.01807
40000    113.27625    3       46.98262
45000    135.6864     4       44.77912
50000    157.6482     4       44.00284
Depending on the file size allotted in bytes, the response time of the user, the number of users
waiting in the queue, and their waiting time are calculated. It can be seen clearly that the response
time is larger than the waiting time.
Figure 1. Utility and Delay
3. CONCLUSIONS
Performance assessment of server farms is an important aspect of cloud computing that is of
critical interest to both cloud providers and cloud customers. In this paper we have proposed an
analytical model for performance evaluation of a cloud computing data centre. In future work, the
results can be analysed using simulation; in addition to the mean, the standard deviation, the
blocking probability, and the probability of immediate service can be computed.
This methodology can also be used to improve the profit of service providers with the help of the
spot pricing technique, which improves provider profit while also taking consumer satisfaction
into account.
ACKNOWLEDGEMENTS
None of this work would have been possible without the selfless assistance of a great number of
people. I would like to gratefully thank all those members for their valued guidance, time, helpful
discussion and contribution to this work.