- The document proposes a double resource renting scheme for cloud service providers that combines short-term and long-term server renting. This aims to guarantee quality of service for all requests while reducing resource waste.
- A profit maximization problem is formulated to determine the optimal configuration of servers. Solutions are obtained for ideal and actual scenarios to maximize profit compared to a single renting scheme.
- Comparisons show the double renting scheme can guarantee complete service quality and obtain more profit than a single renting scheme that does not ensure quality of service.
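The double renting idea summarized above can be sketched as a small simulation. All names, rates, and the FCFS approximation below are illustrative assumptions, not the paper's notation: `m` long-term servers serve an M/M/m-style stream, and any request whose wait would exceed the deadline is handed to a temporary server, so every request meets the service guarantee.

```python
import random

def simulate_profit(lam, mu, m, deadline, revenue, long_rate, short_rate,
                    horizon=10_000.0, seed=0):
    """Monte-Carlo sketch of double renting: m long-term servers serve
    arrivals FCFS; a request whose queueing delay would exceed `deadline`
    is assigned a temporary (short-term) server instead, so every request
    meets the service-level guarantee."""
    rng = random.Random(seed)
    free_at = [0.0] * m           # time each long-term server next frees up
    t = 0.0
    short_time = 0.0              # total time rented on temporary servers
    served = 0
    while t < horizon:
        t += rng.expovariate(lam)             # next Poisson arrival
        service = rng.expovariate(mu)         # exponential service demand
        soonest = min(range(m), key=free_at.__getitem__)
        wait = max(0.0, free_at[soonest] - t)
        if wait <= deadline:                  # long-term server is fast enough
            free_at[soonest] = t + wait + service
        else:                                 # rent a temporary server instead
            short_time += service
        served += 1
    income = revenue * served
    cost = long_rate * m * horizon + short_rate * short_time
    return income - cost
```

Sweeping `m` in such a simulation reproduces the optimal-configuration question the summaries describe: too few long-term servers inflate the short-term bill, too many inflate the fixed rent.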
A profit maximization scheme with guaranteed quality of service in cloud comp... (syeda yasmeen)
The document proposes a double resource renting scheme for cloud service providers to maximize profits while guaranteeing quality of service. It involves combining short-term and long-term server renting to adapt to varying demand and reduce waste. The system is modeled as an M/M/m+D queuing model. An optimization problem is formulated to determine the optimal server configuration. Comparisons show the double renting scheme achieves higher profits compared to single renting while guaranteeing quality of service.
A profit maximization scheme with guaranteed quality of service in cloud comp... (Shakas Technologies)
A HYBRID CLOUD APPROACH FOR SECURE AUTHORIZED DEDUPLICATION
ABSTRACT:
Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeated data, and it has been widely used in cloud storage to reduce storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing.
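Convergent encryption derives the key from the content itself, so identical plaintexts encrypt to identical ciphertexts and can still be deduplicated. A minimal illustration follows; the SHA-256 counter-mode keystream is a stand-in for a real cipher such as AES, and all function names are ours:

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as an illustrative keystream; a real
    # deployment would use a vetted cipher such as AES.
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def convergent_encrypt(message: bytes):
    key = hashlib.sha256(message).digest()        # key derived from content
    cipher = bytes(a ^ b for a, b in zip(message, _stream(key, len(message))))
    tag = hashlib.sha256(cipher).hexdigest()      # duplicate-detection tag
    return key, cipher, tag

def convergent_decrypt(key: bytes, cipher: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(cipher, _stream(key, len(cipher))))
```

Because the key is deterministic in the plaintext, two users uploading the same file produce the same tag, and the cloud store keeps a single ciphertext copy.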
A PROFIT MAXIMIZATION SCHEME WITH GUARANTEED QUALITY OF SERVICE IN CLOUD COMP... (Nexgen Technology)
Nexgen Technology Address:
Nexgen Technology
No :66,4th cross,Venkata nagar,
Near SBI ATM,
Puducherry.
Email Id: praveen@nexgenproject.com.
www.nexgenproject.com
Mobile: 9751442511,9791938249
Telephone: 0413-2211159.
NEXGEN TECHNOLOGY is an efficient software training center located at Pondicherry, offering IT training on IEEE projects in Android, IEEE IT B.Tech student projects, and Android projects training with placements, as well as final-year IEEE, MCA, B.Tech, and BCA projects in bulk. So far we have reached almost all engineering colleges located in Pondicherry and within around 90 km.
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Comp... (1crore projects)
IEEE PROJECTS 2015
1 Crore Projects is a leading guide for IEEE projects and a provider of real-time project work.
It has provided guidance to thousands of students and helped them benefit across all of its technology training.
Dot Net
DOTNET Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
Java Project Domain list 2015
1. IEEE based on datamining and knowledge engineering
2. IEEE based on mobile computing
3. IEEE based on networking
4. IEEE based on Image processing
5. IEEE based on Multimedia
6. IEEE based on Network security
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2015
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US
1 CRORE PROJECTS
Door No: 214/215,2nd Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 72999 51536
A PROFIT MAXIMIZATION SCHEME WITH GUARANTEED QUALITY OF SERVICE IN CLOUD COMP... (Shakas Technologies)
The document proposes a double resource renting scheme for cloud computing platforms that combines short-term and long-term resource renting. This is designed to maximize profits while guaranteeing quality of service. An M/M/m+D queuing model is used to analyze the performance of the system. The scheme formulates a profit maximization problem to determine the optimal configuration of resources. Calculations show this double renting scheme achieves higher profits compared to single renting schemes, while guaranteeing quality of service for all requests.
This document proposes a double resource renting scheme for cloud service providers that combines short-term and long-term renting. It defines a profit maximization problem to determine the optimal configuration of servers and queue capacity. An M/M/m+D queuing model is used to analyze factors like average charge and temporary server usage. The double renting scheme is shown to guarantee quality of service for all requests while reducing waste and generating more profit compared to single renting schemes.
IEEE 2015-2016 A Profit Maximization Scheme with Guaranteed Quality of Servic... (1crore projects)
1 CRORE PROJECTS offers M.E, BE, M. Tech, B. Tech, PhD, MCA, BCA, MSc & MBA projects based on IEEE (2014-2015) papers, and also real-time application projects.
Final Year Projects for BE, B. Tech - ECE, EEE, CSE, IT, MCA, ME, M. Tech, M SC (IT), BCA, BSC and MBA.
Fast Distribution of Replicated Content to Multi-Homed Clients (IDES Editor)
Clients can potentially have access to more than one communication network nowadays due to the availability of a wide variety of access technologies. On the other hand, service replication has become a common approach in overlay networks to provide high availability of data and better QoS. In this paper, we consider such a multi-homed client seeking a replicated service in an overlay network (e.g., CDN, peer-to-peer). Our aim is to improve content distribution by proposing a new model applied at the application level and in a fully distributed way. Basically, our model determines the best mirror server that can be reached through each of the client's network interfaces, based on an application utility function. It then downloads the requested content from the determined best servers simultaneously through their associated interfaces. Each best server delivers a specific estimated range of bytes (i.e., a content chunk) to an independent TCP socket opened at the client side, and the chunks are finally aggregated at the application level. Our real experiments show that our model considerably improves the QoS (e.g., content transfer time) perceived by the client compared to traditional content distribution techniques.
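The chunked, multi-interface download the abstract describes can be sketched as follows. The mirrors here are stand-in byte strings rather than real servers reached over distinct interfaces, and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(size: int, parts: int):
    """Byte ranges [lo, hi) covering `size` bytes, one per mirror."""
    step, bounds = size // parts, []
    for i in range(parts):
        lo = i * step
        hi = size if i == parts - 1 else lo + step
        bounds.append((lo, hi))
    return bounds

def download(mirrors):
    """Fetch one chunk from each 'best' mirror in parallel (one per
    interface), then reassemble at the application level."""
    size = len(mirrors[0])
    ranges = split_ranges(size, len(mirrors))
    def fetch(args):
        mirror, (lo, hi) = args
        return mirror[lo:hi]          # stand-in for an HTTP range request
    with ThreadPoolExecutor(max_workers=len(mirrors)) as pool:
        chunks = pool.map(fetch, zip(mirrors, ranges))
    return b"".join(chunks)
```

Each worker thread plays the role of one TCP socket bound to one interface; joining the chunks in range order is the application-level aggregation step.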
Migration Control in Cloud Computing to Reduce the SLA Violation (rahulmonikasharma)
Demand for cloud-based services is growing because of the enormous benefits of the cloud, such as pay-as-you-use flexibility, scalability, and low upfront cost. Day by day, due to the growing number of cloud consumers, the load on datacenters is also increasing. Various load distribution and dynamic load balancing approaches are followed in datacenters to optimize resource utilization so that performance is maintained under increased load. Virtual machine (VM) migration is primarily used to implement dynamic load balancing in datacenters, but poorly designed dynamic VM migration policies may negate its benefits: VM migration overheads result in violations of the service level agreement (SLA) in the cloud environment. In this paper, an extended VM migration control model is proposed to minimize SLA violations while controlling the energy consumption of the datacenter during VM migration. An execution boundary threshold parameter is used to extend an existing VM migration control model. The proposed model is tested through extensive simulations using the CloudSim toolkit by executing real-world workloads. Results are obtained in terms of the number of SLA violations while controlling energy consumption in the datacenter, and they show that the proposed model achieves better performance than the existing model.
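A threshold-based migration-control decision of the general kind discussed above can be sketched as follows; the threshold value, data layout, and smallest-VM-first rule are our assumptions, not the paper's exact model:

```python
def plan_migrations(hosts, upper=0.8):
    """hosts maps host name -> list of (vm name, cpu share in [0, 1]).
    For each host above the `upper` utilisation threshold, mark the
    smallest VMs for migration until the host drops below it; migrating
    small VMs first keeps migration overhead (and hence SLA risk) low."""
    moves = []
    for host, vms in hosts.items():
        load = sum(share for _, share in vms)
        for vm, share in sorted(vms, key=lambda v: v[1]):
            if load <= upper:
                break
            moves.append((vm, host))
            load -= share
    return moves
```

A migration-control extension would add a second test here, e.g. skipping VMs whose load history is steady, so that only genuinely overloaded hosts trigger moves.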
Camera ready-nash equilibrium-ngct2015-format (aminnezarat)
- The document proposes an auction-based method for efficient Nash equilibrium resource allocation in cloud computing using game theory.
- In the proposed method, users submit price bids over multiple rounds until reaching a Nash equilibrium where no user wants to change their bid. This equilibrium bid also maximizes utility for the resource provider.
- The method is simulated in CloudSim and results show it converges faster than previous methods, with lower SLA violations and higher provider utility.
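The round-based bidding described above can be illustrated with best-response dynamics in a proportional-share (Kelly) auction; this textbook mechanism and all names below are stand-ins, not necessarily the paper's exact auction:

```python
import math

def nash_bids(values, capacity=1.0, rounds=200):
    """Best-response dynamics for a proportional-share auction: user i
    bids b_i, receives capacity * b_i / sum(b), and has utility
    v_i * allocation - b_i. Each round, every user replaces its bid with
    the closed-form best response to the others' bids; when no user
    wants to change, the profile is a Nash equilibrium."""
    bids = [0.1] * len(values)
    for _ in range(rounds):
        for i, v in enumerate(values):
            others = sum(bids) - bids[i]
            if others <= 0:
                continue
            best = math.sqrt(v * capacity * others) - others
            bids[i] = max(best, 0.0)
    return bids
```

For two users with equal valuations v = 1 and unit capacity, the dynamics settle at b = 1/4 each, the symmetric equilibrium of this auction.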
Intelligent Workload Management in Virtualized Cloud Environment (IJTET Journal)
Abstract — Cloud computing is a rising high-performance computing environment built on a huge-scale, heterogeneous collection of autonomous systems with an elastic computational design. To improve the overall performance of cloud computing under a deadline constraint, a task scheduling model is established for reducing the system execution time of cloud computing and improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, selection operators, and the scheme for sorting Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains a better solution, balancing the load across multiple objectives.
Pricing Models for Cloud Computing Services, a Survey (Editor IJCATR)
Recently, citizens and companies can access utility computing services by using Cloud Computing. These services, such as infrastructures, platforms, and applications, can be accessed on demand whenever needed. In Cloud Computing, different types of resources are required to provide services, but the demands (such as request rates and users' requirements) and the cost of the required resources vary continuously. Therefore, Service Level Agreements are needed to guarantee service prices and the offered Quality of Service, which are interdependent and must together guarantee revenue maximization for cloud providers as well as improve customer satisfaction. Cloud consumers always search for a provider who offers good service at the lowest price, so a cloud provider should use advanced technologies and frameworks to increase QoS and decrease cost. This paper provides a survey of cloud pricing models and analyzes recent and relevant research in this field.
Content-aware resource allocation model for IPTV delivery networks (IJECEIAES)
This document proposes a content-aware resource allocation model (CARAM) for IPTV delivery networks. CARAM considers content characteristics like size, popularity, and interactivity to estimate content load and decide required network resources. It integrates resource allocation and replica placement to efficiently allocate storage, bandwidth, and CPU based on replication needs. The paper formulates the content-aware resource allocation problem as a constrained binary optimization problem and uses a hybrid genetic algorithm to evaluate the effect of content status on resource allocation under different popularity distributions.
SLA based optimization of power and migration cost in cloud computing (Nikhil Venugopal)
This document summarizes a research paper about optimizing power consumption and migration costs in cloud computing systems while meeting service level agreements (SLAs). The paper presents an algorithm to solve the resource allocation problem of minimizing total energy costs while probabilistically meeting client SLAs specifying upper bounds on service times. The algorithm uses convex optimization and dynamic programming. Simulation results show the algorithm is more effective than previous approaches at optimizing power and migration costs under SLAs.
Service performance and analysis in cloud computing extended 2 (Abdullaziz Tagawy)
This is a study to the research paper (Service Performance and Analysis in Cloud Computing) by Kaiqi Xiong and Harry Perros in the class related to the course of EC636 Stochastic and Random Process in Tripoli University-Engineering faculty-Computer Engineering Department.
You can find this paper in (https://ieeexplore.ieee.org/document/5190711)
Web Service QoS Prediction Approach in Mobile Internet Environments (jins0618)
Many existing Web service QoS prediction approaches are very accurate in Internet environments; however, they cannot provide accurate prediction values in Mobile Internet environments, since the QoS values of Web services there have great volatility. In this paper, we propose an accurate Web service QoS prediction approach that weakens the volatility of QoS data from Web services in Mobile Internet environments. This approach contains three processes: QoS preprocessing, user similarity computing, and QoS predicting. We have implemented our proposed approach and experimented with real-world and synthetic datasets. The results show that our approach outperforms other approaches in Mobile Internet environments.
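The user-similarity and prediction steps can be sketched with standard collaborative filtering (Pearson similarity over co-invoked services); the volatility-weakening preprocessing is omitted here, and all names are illustrative:

```python
def pearson(u, v):
    """Pearson similarity between two users over co-invoked services;
    u and v map service name -> observed QoS value (e.g. response time)."""
    common = sorted(set(u) & set(v))
    if len(common) < 2:
        return 0.0
    mu = sum(u[s] for s in common) / len(common)
    mv = sum(v[s] for s in common) / len(common)
    num = sum((u[s] - mu) * (v[s] - mv) for s in common)
    du = sum((u[s] - mu) ** 2 for s in common) ** 0.5
    dv = sum((v[s] - mv) ** 2 for s in common) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def predict(target, others, service):
    """Similarity-weighted estimate of `service`'s QoS for `target`,
    using only positively correlated neighbours."""
    num = den = 0.0
    for other in others:
        if service in other:
            w = pearson(target, other)
            if w > 0:
                num += w * other[service]
                den += w
    return num / den if den else None
```

A user whose QoS observations move in step with the target's gets weight near 1, so its value for the unseen service dominates the estimate.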
This document discusses a novel receiver-based traffic redundancy elimination (TRE) technique called Prediction-based Cloud Bandwidth and Cost Reduction (PACK) for cloud computing environments. PACK aims to reduce bandwidth costs and server load by having receivers detect redundant data and send predictions to senders about future chunks. This avoids sending redundant data and allows receivers to locally store chunks instead of downloading them. The document evaluates PACK using video traces and estimates its potential for cost savings compared to traditional sender-based TRE and no TRE.
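The receiver-driven prediction loop can be sketched as follows: the receiver remembers which chunk followed which, predicts the next chunk, and a matching prediction means the sender ships a short acknowledgement instead of the data. This is a toy reduction of the PACK idea; the names and the single-successor chain are our simplifications:

```python
import hashlib

def sig(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

class Receiver:
    def __init__(self):
        self.next_after = {}   # chunk signature -> chunk seen right after it
        self.last = None
    def receive(self, chunk: bytes):
        if self.last is not None:
            self.next_after[sig(self.last)] = chunk
        self.last = chunk
    def predict(self):
        """Predict the chunk expected to follow the last one, if any."""
        if self.last is None:
            return None
        return self.next_after.get(sig(self.last))

def send(stream, receiver):
    """Sender side: when the receiver's prediction matches the chunk
    about to be sent, no data goes on the wire (the receiver reads it
    from its local chunk store); return the bytes saved."""
    saved = 0
    for chunk in stream:
        if receiver.predict() == chunk:
            saved += len(chunk)
        receiver.receive(chunk)    # receiver stores/advances either way
    return saved
```

On a stream that repeats, every repeated chunk after the first pass is predicted and therefore not retransmitted, which is the source of the bandwidth savings.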
Web service composition is a concept based on the built of an abstract process, by combining multiple existing class instances, where during the execution, each service class is replaced by a concrete service, selected from several web service candidates. This approach has as an advantage generating flexible and low coupling applications, based on its conception on many elementary modules available on the web. The process of service selection during the composition is based on several axes, one of these axes is the QoS-based web service selection. The Qos or Quality of Service represent a set of parameters that characterize the non-functional web service aspect (execution time, cost, etc...). The composition of web services based on Qos, is the process which allows the selection of the web services that fulfill the user need, based on its qualities. Selected services should optimize the global QoS of the composed process, while satisfying all the constraints specified by the client in all QoS parameters. In this paper, we propose an approach based on the concept of agent system and Skyline approach to effectively select services for composition, and reducing the number of candidate services to be generated and considered in treatment. To evaluate our approach experimentally, we use a several random datasets of services with random values of qualities.
A New QoS Renegotiation Mechanism for Multimedia Applications (ABDELAAL)
The document proposes an adaptive QoS architecture that uses call rejection notifications as feedback to capture network characteristics and allow multimedia applications to renegotiate QoS requirements dynamically. It aims to improve on existing static and dynamic QoS approaches by using flow-based traffic monitoring and a feedback mechanism with low overhead. Simulation results show the approach admits more calls, improves QoS parameters, and decreases call processing time compared to other methods.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIME (ijccsa)
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, exploration of unused resources, corresponding resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive solutions for resource distribution and load balancing with better efficiency; this is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the coming response time, giving a better way of effectively resolving resource allocation with a threshold value. The experimental results are promising and valuable for load balancing with predicted response time, showing that prediction is a strong direction for load balancing.
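The predict-then-route step can be illustrated with exponential smoothing as a lightweight stand-in for a full ARIMA model, plus the threshold test described above (the names and the smoothing factor are assumptions):

```python
def predict_next(history, alpha=0.5):
    """Exponentially smoothed forecast of the next response time;
    a lightweight stand-in for the ARIMA predictor APRA describes."""
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def choose_node(histories, threshold):
    """Route the request to the node with the lowest predicted response
    time, skipping nodes predicted to exceed the threshold; if all
    exceed it, fall back to the overall best prediction."""
    preds = {n: predict_next(h) for n, h in histories.items()}
    ok = {n: p for n, p in preds.items() if p <= threshold}
    pool = ok or preds
    return min(pool, key=pool.get)
```

The threshold plays the same role as in the abstract: nodes whose forecast response time is already above it are excluded from allocation before the load balancer picks the minimum.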
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to solve the problem of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
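Combining the two policies as described might look like this sketch: order the ready queue by burst time (SJF), then serve it with Round-Robin time slices. The data layout and quantum are illustrative:

```python
from collections import deque

def hybrid_schedule(bursts, quantum):
    """bursts: {task: burst time}. Tasks enter a Round-Robin queue in
    Shortest-Job-First order; each gets at most `quantum` per turn.
    Returns completion time per task. SJF ordering trims the average
    wait, while RR slicing prevents starvation of long tasks."""
    queue = deque(sorted(bursts, key=bursts.get))
    remaining = dict(bursts)
    clock, done = 0, {}
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])
        clock += run
        remaining[task] -= run
        if remaining[task] == 0:
            done[task] = clock       # task finished in this slice
        else:
            queue.append(task)       # re-queue for another quantum
    return done
```

With bursts {t1: 3, t2: 1, t3: 2} and a quantum of 2, the short tasks finish first (t2 at time 1, t3 at 3) and the long task is sliced but never starved (t1 at 6).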
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
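The threshold-and-spill placement described in points 1) and 2) can be sketched as follows; the names, data layout, and last-resort rule are our assumptions:

```python
def assign(tasks, centers, threshold):
    """tasks: list of task sizes; centers: ordered list of
    (name, capacity, current load). Bind each task to the first
    datacenter whose load would stay below threshold * capacity,
    otherwise spill to the next datacenter; if all are saturated,
    bind to the last one as a last resort."""
    loads = {name: load for name, _, load in centers}
    caps = {name: cap for name, cap, _ in centers}
    order = [name for name, _, _ in centers]
    placement = {}
    for i, size in enumerate(tasks):
        for name in order:
            if loads[name] + size <= threshold * caps[name]:
                loads[name] += size
                placement[i] = name
                break
        else:
            placement[i] = order[-1]
            loads[order[-1]] += size
    return placement
```

Once the first datacenter reaches its saturation threshold, subsequent tasks migrate to the next one, which is the behaviour the experiments above credit for the lower completion times.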
Quality of service parameter centric resource allocation for lte advanced (eSAT Journals)
Abstract: An efficient radio resource allocation algorithm is proposed in this paper. The proportional fair (PF) algorithm performs favorably compared to the two classical algorithms, max-C/I and RR, but it does not consider QoS parameters while allocating resources to users; moreover, with low transmission power utilization, fairness disappears in the PF algorithm. A PFQoS-Centric algorithm is proposed in this paper and gives better performance: data rate fairness and allocation fairness increase as the time jitter threshold or the number of active users increases, so both become higher than in the proportional fair algorithm, while throughput is maintained. QoS parameters such as data rate fairness, allocation fairness, and delay are most important for the next generation of mobile communication, and the proposed PFQoS-Centric algorithm guarantees that the QoS parameters are satisfied. Keywords: Quality of Service (QoS), packet scheduling, proportional fair (PF), radio resource allocation (RRA), data rate fairness, allocation fairness, long term evolution-advanced (LTE-A).
ENERGY-AWARE LOAD BALANCING AND APPLICATION SCALING FOR THE CLOUD ECOSYSTEM (Shakas Technologies)
This document proposes a double resource renting scheme for cloud service providers to maximize profit while guaranteeing quality of service. It models the cloud system as an M/M/m+D queuing model. The scheme combines long-term and short-term server rentals to provide the necessary computing capacity over time. An optimization problem is formulated to determine the optimal server configuration that maximizes profit by balancing rental costs against increased revenue from meeting quality guarantees. Comparisons show the double renting scheme achieves higher profit than single renting while guaranteeing all requests are served on time.
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Com... (nexgentechnology)
1) A double resource renting scheme is proposed that combines short-term and long-term server renting to guarantee quality of service while maximizing cloud provider profits.
2) An M/M/m+D queuing model is used to represent the cloud system and analyze key performance indicators.
3) An optimal configuration problem is formulated and solved to determine the profit-maximizing number of long-term servers, balancing rental costs against meeting service-level agreements.
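Point 3's search for the profit-maximizing number of long-term servers can be sketched with the Erlang C formula for the M/M/m waiting probability, used here as a proxy for the share of work offloaded to temporary servers; the cost model and names are illustrative assumptions, not the paper's exact formulation:

```python
from math import factorial

def erlang_c(m, a):
    """Probability an arrival must wait in an M/M/m queue with offered
    load a = lambda/mu (requires a < m for stability)."""
    top = a**m / factorial(m) * (m / (m - a))
    total = sum(a**k / factorial(k) for k in range(m)) + top
    return top / total

def best_m(lam, mu, revenue, long_rate, short_rate, m_max=50):
    """Grid-search the number of long-term servers m. Profit per unit
    time = revenue*lam - long_rate*m - short_rate*offload, where the
    offloaded work is approximated as (lam/mu) * P(wait)."""
    a = lam / mu
    best, best_profit = None, float("-inf")
    for m in range(int(a) + 1, m_max + 1):   # start above a for stability
        offload = a * erlang_c(m, a)
        profit = revenue * lam - long_rate * m - short_rate * offload
        if profit > best_profit:
            best, best_profit = m, profit
    return best, best_profit
```

The profit curve is unimodal in this sketch: small m pays heavily for temporary servers, large m pays idle long-term rent, and the grid search finds the balance point between the two.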
A Study On Service Level Agreement Management Techniques In Cloud (Tracy Drey)
This document discusses techniques for managing service level agreements (SLAs) in cloud computing. It begins with an introduction to cloud computing and SLAs. SLAs establish performance standards and consequences for violations between cloud service providers and their customers. The document then reviews several proposed techniques for SLA management from other researchers: (1) allowing multiple SLAs with different performance metrics, (2) using third-party auditing to verify SLA fulfillment, and (3) a layered approach to prevent SLA violations through self-management of cloud resources. The techniques aim to help cloud providers better satisfy SLAs and retain customers.
Migration Control in Cloud Computing to Reduce the SLA Violationrahulmonikasharma
The requisition of cloud-based services is prominent because of the enormous benefits of cloud such as pay-as-you-use flexibility, scalability, and low upfront cost. Day by day, due to the growing number of cloud consumers, the load on datacenters is also increasing. Various load distribution and dynamic load balancing approaches are followed in datacenters to optimize resource utilization so that performance is maintained under increased load. Virtual machine (VM) migration is primarily used to implement dynamic load balancing in datacenters, but poorly designed dynamic VM migration policies may negate its benefits: VM migration overheads result in violations of service level agreements (SLAs) in the cloud environment. In this paper, an extended VM migration control model is proposed to minimize SLA violations while controlling the energy consumption of the datacenter during VM migration. An execution boundary threshold parameter is used to extend an existing VM migration control model. The proposed model is tested through extensive simulations using the CloudSim toolkit by executing real-world workloads. Results are obtained in terms of the number of SLA violations while controlling datacenter energy consumption, and show that the proposed model achieves better performance than the existing model.
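The execution-boundary idea the abstract describes can be illustrated with a toy predicate: migrate a VM off an overloaded host only when the VM still has enough remaining execution time to amortize the migration overhead. The parameter names `upper` and `boundary` and their values are invented for illustration, not taken from the paper.

```python
def should_migrate(host_util, vm_remaining_time, upper=0.85, boundary=30.0):
    """Migration control with an execution boundary threshold: a VM on an
    overloaded host (utilization above `upper`) is migrated only if its
    remaining execution time exceeds `boundary` seconds, so that near-finished
    VMs are not migrated just to incur overhead and risk SLA violations."""
    return host_util > upper and vm_remaining_time > boundary
```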
Camera ready-nash equilibrium-ngct2015-format (aminnezarat)
- The document proposes an auction-based method for efficient Nash equilibrium resource allocation in cloud computing using game theory.
- In the proposed method, users submit price bids over multiple rounds until reaching a Nash equilibrium where no user wants to change their bid. This equilibrium bid also maximizes utility for the resource provider.
- The method is simulated in CloudSim and results show it converges faster than previous methods, with lower SLA violations and higher provider utility.
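The round-based bidding described above can be sketched with a proportional-share (Kelly) auction, a common game-theoretic model for resource allocation; this is an illustrative stand-in under assumed utilities, not the paper's exact mechanism.

```python
from math import sqrt

def nash_bids(values, rounds=200, tol=1e-9):
    """Iterated best-response bidding for a proportional-share auction:
    user i receives share b_i / sum(b) of the resource and pays b_i, so
    utility is v_i * b_i / sum(b) - b_i. The best response to the others'
    total bid B is b_i = sqrt(v_i * B) - B (clipped at 0). Rounds repeat
    until no user wants to change their bid: a Nash equilibrium."""
    bids = [v / 2 for v in values]  # arbitrary positive starting bids
    for _ in range(rounds):
        changed = False
        for i, v in enumerate(values):
            other = sum(bids) - bids[i]
            new = max(0.0, sqrt(v * other) - other)
            if abs(new - bids[i]) > tol:
                bids[i] = new
                changed = True
        if not changed:  # equilibrium reached: stop early
            break
    return bids
```

With two users of equal valuation 4.0, the rounds converge to the symmetric equilibrium bid of 1.0 each, at which neither user gains by deviating.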
Intelligent Workload Management in Virtualized Cloud Environment (IJTET Journal)
Abstract: Cloud computing is a rising high-performance computing environment with a huge-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of cloud computing under a deadline constraint, a task scheduling model is established for reducing system power consumption and execution time while improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, selection operators, and the scheme of sorting Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm obtains an enhanced solution, balancing the load across multiple objectives.
Pricing Models for Cloud Computing Services, a Survey (Editor IJCATR)
Recently, citizens and companies can access utility computing services by using cloud computing. These services, such as infrastructures, platforms, and applications, can be accessed on demand whenever needed. In cloud computing, different types of resources are required to provide services, but the demands (such as request rates and users' requirements) and the cost of the required resources vary continuously. Therefore, service level agreements are needed to guarantee service prices and the offered quality of service, which are interdependent and must together guarantee revenue maximization for cloud providers as well as improve customer satisfaction. Cloud consumers are always searching for a provider who offers good service at the lowest price, so cloud providers should use advanced technologies and frameworks to increase QoS and decrease cost. This paper provides a survey of cloud pricing models and analyzes recent and relevant research in this field.
Content-aware resource allocation model for IPTV delivery networks (IJECEIAES)
This document proposes a content-aware resource allocation model (CARAM) for IPTV delivery networks. CARAM considers content characteristics like size, popularity, and interactivity to estimate content load and decide required network resources. It integrates resource allocation and replica placement to efficiently allocate storage, bandwidth, and CPU based on replication needs. The paper formulates the content-aware resource allocation problem as a constrained binary optimization problem and uses a hybrid genetic algorithm to evaluate the effect of content status on resource allocation under different popularity distributions.
SLA based optimization of power and migration cost in cloud computing (Nikhil Venugopal)
This document summarizes a research paper about optimizing power consumption and migration costs in cloud computing systems while meeting service level agreements (SLAs). The paper presents an algorithm to solve the resource allocation problem of minimizing total energy costs while probabilistically meeting client SLAs specifying upper bounds on service times. The algorithm uses convex optimization and dynamic programming. Simulation results show the algorithm is more effective than previous approaches at optimizing power and migration costs under SLAs.
Service performance and analysis in cloud computing extended 2 (Abdullaziz Tagawy)
This is a study of the research paper "Service Performance and Analysis in Cloud Computing" by Kaiqi Xiong and Harry Perros, prepared for the course EC636 Stochastic and Random Processes at Tripoli University, Faculty of Engineering, Computer Engineering Department.
The paper is available at https://ieeexplore.ieee.org/document/5190711
Web Service QoS Prediction Approach in Mobile Internet Environments (jins0618)
Many existing Web service QoS prediction approaches are very accurate in Internet environments; however, they cannot provide accurate predictions in Mobile Internet environments, where the QoS values of Web services are highly volatile. In this paper, we propose an accurate Web service QoS prediction approach that weakens the volatility of QoS data from Web services in Mobile Internet environments. The approach comprises three processes: QoS preprocessing, user similarity computing, and QoS predicting. We have implemented and evaluated the proposed approach with experiments on real-world and synthetic datasets. The results show that our approach outperforms other approaches in Mobile Internet environments.
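The three-step pipeline above (preprocessing, user similarity, prediction) resembles memory-based collaborative filtering. A minimal sketch of the latter two steps, assuming a user-by-service QoS matrix with `None` marking unobserved values (the matrix layout is an illustrative assumption, not the paper's data format):

```python
def pearson(u, v):
    """Pearson similarity over services both users have invoked (None = missing)."""
    pairs = [(a, b) for a, b in zip(u, v) if a is not None and b is not None]
    if len(pairs) < 2:
        return 0.0
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def predict(matrix, user, svc):
    """Predict the QoS of service `svc` for `user` as the user's mean plus a
    similarity-weighted average of positively similar users' deviations."""
    known = [q for q in matrix[user] if q is not None]
    base = sum(known) / len(known)
    num = den = 0.0
    for v, row in enumerate(matrix):
        if v == user or row[svc] is None:
            continue
        s = pearson(matrix[user], row)
        if s <= 0:
            continue
        vk = [q for q in row if q is not None]
        num += s * (row[svc] - sum(vk) / len(vk))
        den += s
    return base + num / den if den else base
```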
This document discusses a novel receiver-based traffic redundancy elimination (TRE) technique called Prediction-based Cloud Bandwidth and Cost Reduction (PACK) for cloud computing environments. PACK aims to reduce bandwidth costs and server load by having receivers detect redundant data and send predictions to senders about future chunks. This avoids sending redundant data and allows receivers to locally store chunks instead of downloading them. The document evaluates PACK using video traces and estimates its potential for cost savings compared to traditional sender-based TRE and no TRE.
Web service composition is a concept based on the built of an abstract process, by combining multiple existing class instances, where during the execution, each service class is replaced by a concrete service, selected from several web service candidates. This approach has as an advantage generating flexible and low coupling applications, based on its conception on many elementary modules available on the web. The process of service selection during the composition is based on several axes, one of these axes is the QoS-based web service selection. The Qos or Quality of Service represent a set of parameters that characterize the non-functional web service aspect (execution time, cost, etc...). The composition of web services based on Qos, is the process which allows the selection of the web services that fulfill the user need, based on its qualities. Selected services should optimize the global QoS of the composed process, while satisfying all the constraints specified by the client in all QoS parameters. In this paper, we propose an approach based on the concept of agent system and Skyline approach to effectively select services for composition, and reducing the number of candidate services to be generated and considered in treatment. To evaluate our approach experimentally, we use a several random datasets of services with random values of qualities.
A New QoS Renegotiation Mechanism for Multimedia Applications (ABDELAAL)
The document proposes an adaptive QoS architecture that uses call rejection notifications as feedback to capture network characteristics and allow multimedia applications to renegotiate QoS requirements dynamically. It aims to improve on existing static and dynamic QoS approaches by using flow-based traffic monitoring and a feedback mechanism with low overhead. Simulation results show the approach admits more calls, improves QoS parameters, and decreases call processing time compared to other methods.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIME (ijccsa)
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, exploration of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily measure the response time of their requests, so it has become one of the important QoS metrics; explored further, response time can drive more efficient resource distribution and load balancing, which is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the coming response time, giving a better way of resolving resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response times and show that prediction is a valuable direction for load balancing.
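A full ARIMA fit needs a statistics library, but the predict-then-route idea behind APRA can be sketched with a plain AR(1) forecast, an assumed simplification rather than the paper's model; the function names and the threshold check are illustrative.

```python
def ar1_forecast(series):
    """One-step forecast with an AR(1) fit (a minimal stand-in for full ARIMA):
    r[t] - mean ~ phi * (r[t-1] - mean), with phi fit by least squares."""
    m = sum(series) / len(series)
    d = [x - m for x in series]
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(x * x for x in d[:-1])
    phi = num / den if den else 0.0
    return m + phi * d[-1]

def pick_vm(histories, threshold):
    """Route the next request to the VM with the lowest predicted response
    time, and report whether even the best prediction breaches the threshold
    (in which case the balancer might provision a new resource)."""
    preds = [ar1_forecast(h) for h in histories]
    best = min(range(len(preds)), key=lambda i: preds[i])
    return best, preds[best] > threshold
```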
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNCJournal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to solve the task-allotment problem. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms, and the dynamic nature of clouds makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
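One way the Round Robin and Shortest Job First ideas can be combined, sketched below under assumptions of our own (single processor, known burst times, a fixed quantum), is to order the ready queue by burst time and then serve it round-robin:

```python
from collections import deque

def hybrid_schedule(bursts, quantum):
    """SJF-ordered ready queue served round-robin with a fixed quantum:
    SJF ordering lowers average waiting time, while the quantum guarantees
    every task gets CPU time regularly, preventing starvation.
    Returns (completion_times, makespan) for a single processor."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # SJF order
    remaining = list(bursts)
    queue = deque(order)
    t = 0
    done = [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run for at most one quantum
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)  # unfinished: back of the queue (Round Robin)
        else:
            done[i] = t
    return done, t
```

For bursts [3, 1, 2] with quantum 2, the shortest tasks finish at times 1 and 3, and the longest at 6, so the makespan equals the total work.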
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
Quality of service parameter centric resource allocation for LTE Advanced (eSAT Journals)
Abstract: An efficient radio resource allocation algorithm is proposed in this paper. The proportional fair (PF) algorithm gives favorable performance compared to two classical algorithms, max-C/I and round robin (RR), but it does not consider QoS parameters while allocating resources to users, and with lower transmission power utilization its fairness disappears. A PFQoS-Centric algorithm is therefore proposed, which gives better performance: data-rate fairness and allocation fairness increase as the time-jitter threshold or the number of active users increases, exceeding those of the proportional fair algorithm, while throughput is maintained. QoS parameters such as data-rate fairness, allocation fairness, and delay are most important for the next generation of mobile communication, and the proposed PFQoS-Centric algorithm guarantees that these QoS parameters are satisfied. Keywords: Quality of Service (QoS), packet scheduling, proportional fair (PF), radio resource allocation (RRA), data rate fairness, allocation fairness, Long Term Evolution-Advanced (LTE-A).
ENERGY-AWARE LOAD BALANCING AND APPLICATION SCALING FOR THE CLOUD ECOSYSTEM (Shakas Technologies)
This document proposes a double resource renting scheme for cloud service providers to maximize profit while guaranteeing quality of service. It models the cloud system as an M/M/m+D queuing model. The scheme combines long-term and short-term server rentals to provide the necessary computing capacity over time. An optimization problem is formulated to determine the optimal server configuration that maximizes profit by balancing rental costs against increased revenue from meeting quality guarantees. Comparisons show the double renting scheme achieves higher profit than single renting while guaranteeing all requests are served on time.
A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing (nexgentechnology)
bulk ieee projects in pondicherry,ieee projects in pondicherry,final year ieee projects in pondicherry
A profit maximization scheme with guaranteed quality of service in cloud computing (Shakas Technologies)
As an effective and efficient way to provide computing resources and services to customers on demand, cloud computing has become more and more popular. From cloud service providers’ perspective, profit is one of the most important considerations, and it is mainly determined by the configuration of a cloud service platform under given market demand.
ADAPTIVE MULTI-TENANCY POLICY FOR ENHANCING SERVICE LEVEL AGREEMENT THROUGH R... (IJCNCJournal)
The appearance of seemingly infinite computing resources, available on demand and fast enough to adapt to load surges, makes cloud computing a favourable service infrastructure in the IT market. A core feature of cloud service infrastructures is the Service Level Agreement (SLA), which delivers seamless, high-quality service to clients. One of the challenges in the cloud is providing heterogeneous computing services to clients: with the increasing number of clients/tenants, unsatisfied agreements are becoming a critical factor. In this paper, we present an adaptive resource allocation policy that attempts to improve accountability in cloud SLAs while enhancing system performance. Specifically, our allocation incorporates dynamic matching of SLA rules to deal with diverse processing requirements from tenants, reducing processing overheads while achieving better service agreement. Simulation experiments demonstrate the efficacy of our allocation policy in satisfying tenants and improving reliable computing.
Revenue Maximization with Good Quality of Service in Cloud Computing (INFOGAIN PUBLICATION)
Cloud computing enables people to use resources and services without implementing them on their own systems. Profit and quality of service are the most important factors for service providers, and they are mainly determined by the configuration of a cloud service platform under given market demand. A single long-term renting scheme is usually adopted to design a cloud platform, which leads to resource waste and higher renting charges. The novel double renting scheme, a combination of short-term and long-term renting, addresses this issue: it effectively and efficiently guarantees the quality of service of all requests and significantly reduces resource waste, while providing services at lower cost than a pure short-term renting scheme. It uses an optimal queuing model to maximize profit, allowing users to access services simultaneously. The main objective of the proposed system is to maximize the service provider's profit by providing efficient and effective services to users.
For further details contact:
N. RAJASEKARAN, B.E., M.S. 9841091117, 9840103301.
IMPULSE TECHNOLOGIES,
Old No 251, New No 304,
2nd Floor,
Arcot Road,
Vadapalani,
Chennai-26.
This document proposes a framework called SLA-Based resource allocation to reduce infrastructure costs and minimize violations of service level agreements (SLAs) for Software as a Service (SaaS) providers. The framework maps customer requests to infrastructure parameters, handles dynamic changes in customer demands, and allocates virtual machines based on SLAs while considering heterogeneity of resources. The goal is to maximize the SaaS provider's profit by optimizing resource usage and reducing penalties from SLA violations when customers dynamically change resource sharing.
SLA Basics describes service level agreements (SLAs) which define non-functional requirements for cloud services. SLAs consist of service level objectives (SLOs) evaluated using key performance indicators (KPIs) with thresholds. Automated SLA protection uses policy rules to evaluate KPIs periodically and trigger actions if conditions are met. SLAs are important in cloud computing to ensure customers receive the expected quality of service, as cloud providers may overcommit resources leading to variable performance without proper SLAs.
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Computing (Editor IJCATR)
Cloud computing has recently emerged as one of the buzzwords in the IT industry. Several IT vendors are promising to offer computation, data/storage, and application hosting services, with Service Level Agreement (SLA)-backed performance and uptime promises. While these "clouds" are the natural evolution of traditional clusters and data centers, they are distinguished by a pricing model in which customers are charged based on their utilization of computational resources, storage, and transfer of data. They offer subscription-based access to infrastructure, platforms, and applications, popularly termed IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). In order to improve the profit of service providers, we implement a technique called hybrid pricing, in which the hybrid pricing model pools fixed and spot pricing techniques.
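One simple reading of pooling fixed and spot pricing is to bill reserved capacity at a flat rate and overflow demand at the prevailing spot rate. The sketch below uses invented numbers and is an assumed interpretation, not the paper's model:

```python
def hybrid_revenue(demand, reserved, fixed_price, spot_price):
    """Revenue over a demand trace under hybrid pricing: units within the
    reserved capacity are billed at the fixed subscription price, and any
    overflow is billed at that period's (variable) spot price."""
    total = 0.0
    for d, spot in zip(demand, spot_price):
        fixed_part = min(d, reserved) * fixed_price   # subscription portion
        spot_part = max(0, d - reserved) * spot        # overflow portion
        total += fixed_part + spot_part
    return total
```

For demand [5, 10] with 8 reserved units at a fixed price of 1.0 and a spot price of 2.0, the first period earns 5.0 and the second 8.0 + 4.0, i.e. 17.0 in total.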
www.impulse.net.in
Email: ieeeprojects@yahoo.com/ imbpulse@gmail.com
The document discusses optimal configuration of multi-server systems in cloud computing environments to maximize profit. It presents a pricing model that considers factors like workload, system configuration, service level agreements, consumer satisfaction, service quality, penalties for low quality, rental costs, energy costs, and margins/profits. The optimization problem formulates the multi-server system as an M/M/m queueing model to analytically determine the optimal configuration under two server speed/power models.
This document describes a Cloud Price Estimation Tool (CPET) that was developed to help users estimate the optimal pricing plan for running their applications on Amazon EC2 cloud servers. CPET collects resource usage data from applications running on a user's private server, analyzes the requirements, and recommends the best EC2 instance type and pricing plan based on the application's estimated usage levels. The tool is meant to help users confidently migrate their applications to the cloud by reducing uncertainty about required resources and pricing. It was evaluated using workloads run on the University of Waterloo's compute clusters to verify its modules worked as intended.
A FRAMEWORK FOR SOFTWARE-AS-A-SERVICE SELECTION AND PROVISIONING (IJCNCJournal)
As cloud computing is increasingly transforming the information technology landscape, organizations and businesses are exhibiting strong interest in Software-as-a-Service (SaaS) offerings that can help them increase business agility and reduce their operational costs. They increasingly demand services that can meet their functional and non-functional requirements. Given the plethora and variety of SaaS offerings, we propose, in this paper, a framework for SaaS provisioning which relies on brokered Service Level Agreements (SLAs) between service consumers and SaaS providers. The Cloud Service Broker (CSB) helps service consumers find the right SaaS providers that can fulfil their functional and non-functional requirements. The proposed selection algorithm ranks potential SaaS providers by matching their offerings against the requirements of the service consumer using an aggregate utility function. Furthermore, the CSB is in charge of conducting SLA negotiation with selected SaaS providers, on behalf of service consumers, and performing SLA compliance monitoring.
This document proposes a cloud service selection model called CloudEval that evaluates non-functional properties and selects an optimal service based on both user-specified quality of service levels and goals. CloudEval uses grey relational analysis, a multi-attribute decision making technique, in the selection process. It considers attributes like availability, response time, price, reputation, performance, and financial credit from service level agreements. The model aims to address limitations of prior works that either focus on specific service types or require user involvement in evaluation.
A Cloud Service Selection Model Based on User-Specified Quality of Service Level (csandit)
Recently, many cloud services have emerged in the cloud service market. After candidate services are initially chosen by satisfying both the behavioral and functional criteria of a target cloud service, service consumers need a selection model to further evaluate the non-functional QoS properties of the candidates. Some prior works have focused on objective, quantitative benchmark-testing of QoS by tools or trusted third-party brokers, as well as reputation from customers. Service levels have been offered and designated by cloud service providers in their Service Level Agreements (SLAs). Conversely, in order to meet their requirements, it is important for users to discover their own optimal parameter portfolio for a service level. However, some prior works focus only on specific kinds of cloud services, or require users to be involved in the evaluating process; in general, they cannot both evaluate the non-functional properties and select the optimal service that best satisfies a user-specified service level and goals. Therefore, the aim of this study is to propose a cloud service selection model, CloudEval, to evaluate the non-functional properties and select the optimal service that best satisfies both the user-specified service level and goals. CloudEval applies a well-known multi-attribute decision-making technique, Grey Relational Analysis, to the selection process. Finally, we conduct some experiments; the results show that CloudEval is effective, especially when the quantity of candidate cloud services is much larger than human raters can handle.
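Grey Relational Analysis, which CloudEval applies, is a standard multi-attribute decision-making technique. A minimal sketch with the customary distinguishing coefficient rho = 0.5; the attribute layout and example data are invented, not CloudEval's actual inputs:

```python
def gra_rank(services, benefit, rho=0.5):
    """Grey Relational Analysis: min-max normalize each attribute, compare
    every candidate to the ideal reference series (all 1s), and rank by grey
    relational grade. `benefit[j]` is True for larger-is-better attributes
    (e.g. availability) and False for smaller-is-better ones (e.g. price)."""
    n_attr = len(services[0])
    cols = list(zip(*services))
    norm = []
    for row in services:
        r = []
        for j, x in enumerate(row):
            lo, hi = min(cols[j]), max(cols[j])
            if hi == lo:
                r.append(1.0)
            elif benefit[j]:
                r.append((x - lo) / (hi - lo))
            else:
                r.append((hi - x) / (hi - lo))
        norm.append(r)
    # deviations from the ideal reference series (1.0 in every attribute)
    deltas = [[abs(1.0 - v) for v in r] for r in norm]
    dmax = max(max(r) for r in deltas)
    dmin = min(min(r) for r in deltas)
    grades = [
        sum((dmin + rho * dmax) / (d + rho * dmax) for d in r) / n_attr
        for r in deltas
    ]
    ranking = sorted(range(len(services)), key=lambda i: -grades[i])
    return ranking, grades
```

A candidate that dominates on every attribute coincides with the reference series and receives the maximum grade of 1.0.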
Similar to A profit maximization scheme with guaranteed quality of service in cloud computing (20)
This document proposes a reduced latency list decoding algorithm and high throughput decoder architecture for polar codes. The reduced latency list decoding algorithm visits fewer nodes in the decoding tree and considers fewer possibilities of information bits than existing successive cancellation list decoding algorithms, significantly reducing decoding latency and improving throughput with little performance degradation. An implementation of the proposed decoder architecture in a 90nm CMOS technology demonstrates significant latency reduction and area efficiency improvement compared to other list polar decoders.
A researcher developed a radix-8 divider for binary64 division units to improve energy efficiency at high clock rates. Simulation results showed the radix-8 divider requires less energy per division than radix-4 or radix-16 approaches. The researcher used Xilinx 10.1 and ModelSim 6.4b tools and VHDL/Verilog languages to develop and test the minimally redundant radix-8 divider.
Hybrid FPGA architectures containing a mixture of lookup tables (LUTs) and hardened multiplexers are evaluated to improve logic density and reduce chip area. Simulation results show that non-fracturable hybrid architectures naturally save up to 8% area after placement and routing without optimizing the technology mapper, and additional area is saved with architecture-aware mapping optimizations. Fracturable hybrid architectures only provide marginal area gains of up to 2% after placement and routing. For the most area-efficient hybrid architectures, timing performance is minimally impacted.
Input-Based Dynamic Reconfiguration of Approximate Arithmetic Units for Video... (Pvrtechnologies Nellore)
A profit maximization scheme with guaranteed quality of service in cloud computing
0018-9340 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TC.2015.2401021, IEEE Transactions on Computers
TRANSACTIONS ON COMPUTERS, VOL. *, NO. *, * 2015
A Profit Maximization Scheme with Guaranteed
Quality of Service in Cloud Computing
Jing Mei, Kenli Li, Member, IEEE, Aijia Ouyang and Keqin Li, Fellow, IEEE
Abstract—As an effective and efficient way to provide computing resources and services to customers on demand, cloud computing
has become more and more popular. From cloud service providers’ perspective, profit is one of the most important considerations, and
it is mainly determined by the configuration of a cloud service platform under given market demand. However, a single long-term
renting scheme is usually adopted to configure a cloud platform, which cannot guarantee the service quality but leads to serious
resource waste. In this paper, a double resource renting scheme combining short-term renting and long-term renting is first designed to address these issues. This double renting scheme can effectively guarantee the quality of service of all requests
and reduce the resource waste greatly. Secondly, a service system is considered as an M/M/m+D queuing model and the performance
indicators that affect the profit of our double renting scheme are analyzed, e.g., the average charge, the ratio of requests that need
temporary servers, and so forth. Thirdly, a profit maximization problem is formulated for the double renting scheme and the optimized
configuration of a cloud platform is obtained by solving the profit maximization problem. Finally, a series of calculations are conducted
to compare the profit of our proposed scheme with that of the single renting scheme. The results show that our scheme can not only
guarantee the service quality of all requests, but also obtain more profit than the latter.
Index Terms—Cloud computing, guaranteed service quality, multiserver system, profit maximization, queuing model, service-level
agreement, waiting time.
1 INTRODUCTION
AS an effective and efficient way to consolidate computing resources and computing services, cloud computing has become more and more popular [1]. Cloud computing centralizes the management of resources and services,
and delivers hosted services over the Internet. The hard-
ware, software, databases, information, and all resources are
concentrated and provided to consumers on-demand [2].
Cloud computing turns information technology into ordinary commodities and utilities through the pay-per-use pricing model [3, 4, 5]. In a cloud computing environment, there
are always three tiers, i.e., infrastructure providers, service
providers, and customers (see Fig. 1 and its elaboration in
Section 3.1). An infrastructure provider maintains the basic
hardware and software facilities. A service provider rents
resources from the infrastructure providers and provides
services to customers. A customer submits its request to a
service provider and pays for it based on the amount and
the quality of the provided service [6]. In this paper, we
aim at researching the multiserver configuration of a service
provider such that its profit is maximized.
Like all businesses, the profit of a service provider in cloud
computing is related to two parts, which are the cost and
the revenue. For a service provider, the cost is the renting
• The authors are with the College of Information Science and Engineering,
Hunan University, and National Supercomputing Center in Changsha,
Hunan, China, 410082.
E-mail: jingmei1988@163.com, lkl@hnu.edu.cn, oyaj@hnu.edu.cn,
lik@newpaltz.edu.
• Keqin Li is also with the Department of Computer Science, State Univer-
sity of New York, New Paltz, New York 12561, USA.
• Kenli Li is the author for correspondence.
Manuscript received ****, 2015; revised ****, 2015.
cost paid to the infrastructure providers plus the electricity
cost caused by energy consumption, and the revenue is the
service charge to customers. In general, a service provider
rents a certain number of servers from the infrastructure
providers and builds different multiserver systems for dif-
ferent application domains. Each multiserver system is to
execute a special type of service requests and applications.
Hence, the renting cost is proportional to the number of
servers in a multiserver system [2]. The power consumption
of a multiserver system is linearly proportional to the num-
ber of servers and the server utilization, and to the square of
execution speed [7, 8]. The revenue of a service provider is
related to the amount of service and the quality of service.
To summarize, the profit of a service provider is mainly
determined by the configuration of its service platform.
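As a rough illustration of the cost structure just described, the following sketch computes the per-unit-time cost of an m-server platform, assuming a renting cost linear in the number of servers and the power model cited above (linear in server count and utilization, quadratic in execution speed). All parameter names here are illustrative, not taken from the paper.

```python
def platform_cost(m, u, s, beta, gamma, alpha=2.0, p_static=0.0):
    """Hypothetical per-unit-time cost of an m-server platform.

    Renting cost grows linearly with the number of servers m;
    power grows linearly with m and utilization u, and (per the
    text) with the square of the execution speed s (alpha = 2).
    beta: rent per server, gamma: electricity price per unit energy,
    p_static: static power per server. Illustrative names only.
    """
    renting = beta * m
    power = m * (u * s**alpha + p_static)
    return renting + gamma * power
```

Scaling the execution speed up, for instance, raises the energy term quadratically, which is why the paper notes that simply speeding servers up may erase the gain from penalty reduction.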
To configure a cloud service platform, a service provider
usually adopts a single renting scheme. That is to say, the servers in the service system are all long-term rented. Because of the limited number of servers, some of the incoming service requests cannot be processed immediately, so they are first inserted into a queue until they can be handled by an available server. However, the waiting time of the
service requests cannot be too long. In order to satisfy
quality-of-service requirements, the waiting time of each
incoming service request should be limited within a certain
range, which is determined by a service-level agreement
(SLA). If the quality of service is guaranteed, the service
is fully charged, otherwise, the service provider serves the
request for free as a penalty of low quality. To obtain higher
revenue, a service provider should rent more servers from
the infrastructure providers or scale up the server execution
speed to ensure that more service requests are processed
with high service quality. However, doing this would lead to
a sharp increase in the renting cost or the electricity cost. Such increased cost may outweigh the gain from penalty reduction. In conclusion, the single renting scheme is not a
good scheme for service providers. In this paper, we propose
a novel renting scheme for service providers, which not
only can satisfy quality-of-service requirements, but also can
obtain more profit. Our contributions in this paper can be
summarized as follows.
• A novel double renting scheme is proposed for
service providers. It combines long-term renting
with short-term renting, which can not only satisfy
quality-of-service requirements under the varying
system workload, but also reduce the resource waste
greatly.
• The multiserver system adopted in our paper is modeled as an M/M/m+D queuing model, and performance indicators such as the average service charge and the ratio of requests that need short-term servers are analyzed.
• The optimal configuration problem of service
providers for profit maximization is formulated and
two kinds of optimal solutions, i.e., the ideal solu-
tions and the actual solutions, are obtained respec-
tively.
• A series of comparisons are given to verify the per-
formance of our scheme. The results show that the
proposed Double-Quality-Guaranteed (DQG) rent-
ing scheme can achieve more profit than the com-
pared Single-Quality-Unguaranteed (SQU) renting
scheme on the premise of completely guaranteeing the service quality.
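The paper analyzes an M/M/m+D model; as a rough, standard-textbook proxy (not the paper's own derivation), the fraction of requests whose waiting time exceeds a deadline D in a plain M/M/m queue, i.e., the kind of "ratio of requests that need short-term servers" mentioned above, can be computed from the Erlang C formula:

```python
from math import exp, factorial

def erlang_c(m, rho):
    """Erlang C: probability an arriving request must wait in an
    M/M/m queue with m servers and per-server utilization rho."""
    a = m * rho                                  # offered load in Erlangs
    tail = a**m / (factorial(m) * (1 - rho))
    series = sum(a**k / factorial(k) for k in range(m))
    return tail / (series + tail)

def prob_wait_exceeds(m, lam, mu, D):
    """P(W > D) for M/M/m: the fraction of requests whose waiting
    time exceeds the deadline D (arrival rate lam, service rate mu).
    In the double renting scheme these would be the requests
    dispatched to temporary (short-term rented) servers."""
    rho = lam / (m * mu)
    assert rho < 1, "system must be stable"
    return erlang_c(m, rho) * exp(-(m * mu - lam) * D)
```

With m = 1 and rho = 0.5 this reduces to the familiar M/M/1 result that half of all arrivals must wait at all.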
The rest of the paper is organized as follows. Section 2
reviews the related work on profit aware problem in cloud
computing. Section 3 presents the used models, including
the three-tier cloud computing model, the multiserver sys-
tem model, the revenue and cost models. Section 4 pro-
poses our DQG renting scheme and formulates the profit
optimization problem. Section 5 introduces the methods of
finding the optimal solutions for the profit optimization
problem in two scenarios. Section 6 demonstrates the perfor-
mance of the proposed scheme through comparison with the
traditional SQU renting scheme. Finally, Section 7 concludes
the work.
2 RELATED WORK
In this section, we review recent works relevant to the profit
of cloud service providers. The profit of service providers is related to many factors such as the price, the market
demand, the system configuration, the customer satisfaction
and so forth. Service providers naturally wish to set a higher
price to get a higher profit margin; but doing so would
decrease the customer satisfaction, which leads to a risk of
discouraging demand in the future. Hence, selecting a rea-
sonable pricing strategy is important for service providers.
The pricing strategies are divided into two categories,
i.e., static pricing and dynamic pricing. Static pricing means
that the price of a service request is fixed and known
in advance, and it does not change with the conditions.
With dynamic pricing a service provider delays the pricing
decision until after the customer demand is revealed, so that
the service provider can adjust prices accordingly [9]. Static
pricing is the dominant strategy which is widely used in
real world and in research [2, 10, 11]. Ghamkhari et al. [11]
adopted a flat-rate pricing strategy and set a fixed price for
all requests, but Odlyzko in [12] argued that the predomi-
nant flat-rate pricing encourages waste and is incompatible
with service differentiation. Another kind of static pricing strategy is usage-based pricing. For example, the price
of a service request is proportional to the service time and
task execution requirement (measured by the number of
instructions to be executed) in [10] and [2], respectively.
Usage-based pricing encourages users to use resources more efficiently [13, 14].
Dynamic pricing emerges as an attractive alternative
to better cope with unpredictable customer demand [15].
Macías et al. [16] used a genetic algorithm to iteratively
optimize the pricing policy. Amazon EC2 [17, 18] has introduced a "spot pricing" feature, where the spot price for a virtual instance is dynamically updated to match supply and demand. However, consumers dislike price changes, especially if they perceive the changes to be "unfair" [19, 20].
After comparison, we select the usage-based pricing strategy in this paper since it best agrees with the concept of cloud computing.
The second factor affecting the profit of service providers
is customer satisfaction which is determined by the quality
of service and the charge. In order to improve the customer
satisfaction level, there is a service-level agreement (SLA)
between a service provider and the customers. The SLA
adopts a price compensation mechanism for the customers
with low service quality. The mechanism is to guarantee
the service quality and the customer satisfaction so that
more customers are attracted. In previous research, different
SLAs are adopted. Ghamkhari et al. [11] adopted a stepwise
charge function with two stages. If a service request is
handled before its deadline, it is normally charged; but
if a service request is not handled before its deadline, it
is dropped and the provider pays for it due to penalty.
In [2, 10, 21], charge is decreased continuously with the
increasing waiting time until the charge is free. In this
paper, we use a two-step charge function, where the service
requests served with high quality are normally charged,
otherwise, are served for free.
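The two-step charge function just described can be sketched as follows, assuming usage-based pricing where a request's charge is proportional to its execution requirement; the parameter names are illustrative, not the paper's notation.

```python
def charge(price_per_unit, demand, waiting_time, deadline):
    """Two-step charge function: a request served within the SLA
    deadline is charged in full (proportional to its execution
    requirement `demand`); a request that waits longer than the
    deadline is served for free as the penalty for low quality."""
    if waiting_time <= deadline:
        return price_per_unit * demand
    return 0.0
```

This contrasts with the continuously decreasing charge functions of [2, 10, 21], where revenue degrades gradually with waiting time instead of dropping to zero at the deadline.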
Since profit is an important concern to cloud service
providers, many works have been done on how to boost
their profit. A large body of works have recently focused
on reducing the energy cost to increase profit of service
providers [22, 23, 24, 25], where strategies such as turning off idle servers and dynamic CPU clock frequency scaling are adopted to reduce the energy cost. However, reducing the energy cost alone cannot achieve profit maximization. Many researchers
investigated the trade-off between minimizing cost and
maximizing revenue to optimize profit. Both [11] and [26]
adjusted the number of switched on servers periodically
using different strategies and different profit maximization
models were built to get the number of switched on servers.
However, these works did not consider the cost of resource
configuration.
Chiang and Ouyang [27] considered a cloud server
system as an M/M/R/K queuing system where all service
requests that exceed its maximum capacity are rejected. A
profit maximization function is defined to find an optimal
combination of the server size R and the queue capacity K
such that the profit is maximized. However, this strategy
has further implications other than just losing the revenue
from some services, because it also implies loss of reputation
and therefore loss of future customers [3]. In [2], Cao et
al. treated a cloud service platform as an M/M/m model,
and the problem of optimal multiserver configuration for
profit maximization was formulated and solved. This work
is the most relevant work to ours, but it adopts a single
renting scheme to configure a multiserver system, which
cannot adapt to the varying market demand and leads to
low service quality and great resource waste. To overcome
this weakness, another resource management strategy is
used in [28, 29, 30, 31], which is cloud federation. Us-
ing federation, different providers running services that
have complementary resource requirements over time can
mutually collaborate to share their respective resources in
order to fulfill each one’s demand [30]. However, providers
should make an intelligent decision about utilization of
the federation (either as a contributor or as a consumer
of resources) depending on different conditions that they
might face, which is a complicated problem.
In this paper, to overcome the shortcomings mentioned
above, a double renting scheme is designed to configure
a cloud service platform, which can guarantee the service
quality of all requests and reduce the resource waste greatly.
Moreover, a profit maximization problem is formulated and
solved to get the optimal multiserver configuration which
can produce more profit than the optimal configuration
in [2].
3 THE MODELS
In this section, we first describe the three-tier cloud comput-
ing structure. Then, we introduce the related models used in
this paper, including a multiserver system model, a revenue
model, and a cost model.
3.1 A Cloud System Model
The cloud structure (see Fig. 1) consists of three typical
parties, i.e., infrastructure providers, service providers, and
customers. This three-tier structure is commonly used in the
existing literature [2, 6, 10].
Fig. 1: The three-tier cloud structure.
In the three-tier structure, an infrastructure provider supplies the
basic hardware and software facilities. A service provider
rents resources from infrastructure providers and prepares
a set of services in the form of virtual machines (VMs). Infrastructure
providers offer two kinds of resource renting
schemes, i.e., long-term renting and short-term renting. In
general, the rental price of long-term renting is much cheaper
than that of short-term renting. A customer submits a
service request to a service provider which delivers services
on demand. The customer receives the desired result from
the service provider with certain service-level agreement,
and pays for the service based on the amount of the service
and the service quality. Service providers pay infrastructure
providers for renting their physical resources, and charge
customers for processing their service requests, which gen-
erates cost and revenue, respectively. The profit is generated
from the gap between the revenue and the cost.
3.2 A Multiserver Model
In this paper, we consider the cloud service platform as a
multiserver system with a service request queue. Fig. 2 gives
the schematic diagram of cloud computing [32].
Fig. 2: The schematic diagram of cloud computing.
In an actual cloud computing platform such as Amazon
EC2, IBM Blue Cloud, and private clouds, there are many
work nodes managed by cloud managers such as Eucalyptus,
OpenNebula, and Nimbus. The clouds provide
resources for jobs in the form of virtual machines (VMs). In
addition, users submit their jobs to the cloud, in which
a job queuing system such as SGE, PBS, or Condor is used.
All jobs are scheduled by the job scheduler and assigned
to different VMs in a centralized way. Hence, we can consider
it as a service request queue. For example, Condor is
a specialized workload management system for compute-intensive
jobs; it provides a job queueing mechanism,
scheduling policy, priority scheme, resource monitoring,
and resource management. Users submit their jobs to Condor,
Condor places them into a queue, and it chooses when
and where to run them based upon a policy [33, 34]. Hence,
it is reasonable to abstract a cloud service platform as a multiserver
model with a service request queue, and this model
is widely adopted in the existing literature [2, 11, 35, 36, 37].
In the three-tier structure, a cloud service provider serves
customers’ service requests by using a multiserver system
0018-9340 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI
10.1109/TC.2015.2401021, IEEE Transactions on Computers
TRANSACTIONS ON COMPUTERS, VOL. *, NO. *, * 2015
Fig. 3: The multiserver system model, where service
requests are first placed in a queue before they are
processed by any servers.
which is rented from an infrastructure provider. Assume
that the multiserver system consists of m long-term rented
identical servers, and it can be scaled up by temporarily
renting short-term servers from infrastructure providers.
The servers in the system have identical execution speed s
(Unit: billion instructions per second). In this paper, a multiserver
system excluding the short-term servers is modeled
as an M/M/m queuing system as follows (see Fig. 3). There
is a Poisson stream of service requests with arrival rate λ,
i.e., the interarrival times are independent and identically
distributed (i.i.d.) exponential random variables with mean
1/λ. The multiserver system maintains a queue with infinite
capacity. When incoming service requests cannot be processed
immediately upon arrival, they are first placed
in the queue until they can be handled by an available
server. The first-come-first-served (FCFS) queuing discipline
is adopted. The task execution requirements (measured by
the number of instructions) are i.i.d.
exponential random variables r with mean r̄
(Unit: billion instructions). Therefore, the execution times of
tasks on the multiserver system are also i.i.d. exponential
random variables x = r/s with mean x̄ = r̄/s (Unit:
second). The average service rate of each server is calculated
as µ = 1/x̄ = s/r̄, and the system utilization is defined as
ρ = λ/(mµ) = (λ/m)·(r̄/s).
Because the fixed computing capacity of the service
system is limited, some requests would wait for a long time
before they are served. According to the queuing theory, we
have the following theorem about the waiting time in an
M/M/m queuing system.
Theorem 3.1. The cumulative distribution function (cdf) of
the waiting time W of a service request is

FW(t) = 1 − (πm/(1 − ρ)) e^(−mµ(1−ρ)t),   (1)

where

πm = ((mρ)^m / m!) · [ Σ_{k=0}^{m−1} (mρ)^k/k! + (mρ)^m/(m!(1 − ρ)) ]^(−1).
Proof 3.1. It is known that the probability density function
(pdf) of the waiting time W of a service request is

fW(t) = (1 − Pq)u(t) + mµπm e^(−(1−ρ)mµt),

where Pq = πm/(1 − ρ) and u(t) is a unit impulse
function [2, 38]. Then, FW(t) can be obtained by straightforward
calculation.
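As a numerical illustration of Theorem 3.1 (our own sketch, not part of the paper; the function names are ours), πm and FW(t) can be evaluated directly from their definitions:

```python
import math

def pi_m(m, rho):
    # Steady-state probability that exactly m requests are in the M/M/m system;
    # Pq = pi_m / (1 - rho) is then the (Erlang-C) probability of waiting.
    mr = m * rho
    denom = sum(mr**k / math.factorial(k) for k in range(m))
    denom += mr**m / (math.factorial(m) * (1 - rho))
    return (mr**m / math.factorial(m)) / denom

def waiting_time_cdf(t, m, mu, rho):
    # Eq. (1): F_W(t) = 1 - pi_m / (1 - rho) * exp(-m * mu * (1 - rho) * t)
    return 1.0 - pi_m(m, rho) / (1.0 - rho) * math.exp(-m * mu * (1.0 - rho) * t)
```

Here FW(0) = 1 − Pq is the probability of zero waiting, and FW(t) approaches 1 as t grows, consistent with the exponential tail used later in Eq. (3).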
3.3 Revenue Modeling
The revenue model is determined by the pricing strategy
and the service-level agreement (SLA). In this paper, the
usage-based pricing strategy is adopted, since cloud computing
provides services to customers and charges them on
demand. The SLA is a negotiation between service providers
and customers on the service quality and the price. Because
of the limited servers, the service requests that cannot be
handled immediately after entering the system must wait in
the queue until any server is available. However, to satisfy
the quality-of-service requirements, the waiting time of each
service request should be limited within a certain range
which is determined by the SLA. The SLA is widely used by
many types of businesses, and it adopts a price compensation
mechanism to guarantee service quality and customer
satisfaction. For example, China Post gives a service time
commitment for domestic express mail: it promises that
if a domestic express mail does not arrive within a deadline,
the mailing charge will be refunded. The SLA is also
adopted by many real-world cloud service providers such
as Rackspace [39], Joyent [40], Microsoft Azure [41], and so
on. Taking Joyent as an example, customers order Smart
Machines, Smart Appliances, and/or Virtual Machines from
Joyent, and if the availability of a customer's services is
less than 100%, Joyent credits the customer 5% of the
monthly fee for each 30 minutes of downtime, up to 100% of
the customer's monthly fee for the affected server. The only
difference is that Joyent's performance metric is availability,
whereas ours is waiting time.
In this paper, the service level is reflected by the waiting
time of requests. Hence, we define D as the maximum
waiting time that the service requests can tolerate; in
other words, D is their deadline. The service charge of each
task is related to the amount of service and the service-level
agreement. We define the service charge function for
a service request with execution requirement r and waiting
time W in Eq. (2):

R(r, W) = a·r if 0 ≤ W ≤ D, and R(r, W) = 0 if W > D,   (2)
where a is a constant, which indicates the price per one
billion instructions (Unit: cents per one billion instructions).
When a service request starts its execution before waiting
a fixed time D (Unit: second), a service provider considers
that the service request is processed with high quality-of-
service and charges a customer ar. If the waiting time of a
service request exceeds deadline D, a service provider must
serve it for free. Similar revenue models have been used in
many existing research such as [2, 11, 42].
According to Theorem 3.1, it is easy to see that the
probability that the waiting time of a service request exceeds
its deadline D is

P(W ≥ D) = 1 − FW(D) = (πm/(1 − ρ)) e^(−mµ(1−ρ)D).   (3)
3.4 Cost Modeling
The cost of a service provider consists of two major parts,
i.e., the rental cost of physical resources and the utility
cost of energy consumption. Much existing research, such
as [11, 43, 44], considers only the power consumption cost.
As a major difference between their models and ours, the
resource rental cost is considered in this paper as well, since
it is a major part which affects the profit of service providers.
A similar cost model is adopted in [2]. The resources can
be rented in two ways, long-term renting and short-term
renting, and the rental price of long-term renting is much
cheaper than that of short-term renting. This is reasonable
and common in real life. In this paper, we assume that
the long-term rental price of one server for unit of time is
β (Unit: cents per second) and the short-term rental price
of one server for unit of time is γ (Unit: cents per second),
where β < γ.
The cost of energy consumption is determined by the
electricity price and the amount of energy consumption. In
this paper, we adopt the following dynamic power model,
which is adopted in the literature such as [2, 7, 45, 46]:
Pd = Nsw·CL·V²·f,   (4)

where Nsw is the average gate switching factor at each clock
cycle, CL is the loading capacitance, V is the supply voltage,
and f is the clock frequency [45]. In the ideal case, the
relationship between the clock frequency f and the supply
voltage V is V ∝ f^φ for some constant φ > 0 [46]. The
server execution speed s is linearly proportional to the clock
frequency f, namely, s ∝ f. Hence, the power consumption
is Pd ∝ Nsw·CL·s^(2φ+1). For ease of discussion, we assume
that Pd = b·Nsw·CL·s^(2φ+1) = ξs^α, where ξ = b·Nsw·CL and
α = 2φ + 1. In this paper, we set Nsw·CL = 7.0, b = 1.3456,
and φ = 0.5. Hence, α = 2.0 and ξ = 9.4192. The value of
power consumption calculated by Pd = ξs^α is close to the
measured value for the Intel Pentium M processor [47]. It is
reasonable to assume that a server still consumes some amount of
static power [8], denoted P* (Unit: Watt), when it is idle. For a busy server,
the average amount of energy consumption per unit of time
is P = ξs^α + P* (Unit: Watt). Assume that the price of
energy is δ (Unit: cents per Watt).
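As a small worked check of the power model (our own sketch; the constant names are ours, and P* = 3 is the value used in the paper's later examples), a busy server at speed s draws ξs^α + P* Watts:

```python
XI, ALPHA = 9.4192, 2.0   # xi = b * Nsw * CL = 1.3456 * 7.0, alpha = 2*phi + 1 with phi = 0.5
P_STATIC = 3.0            # static power P* (Watt), the value used in the paper's examples

def busy_power(s):
    # Average power (Watt) drawn per unit time by a busy server running at speed s
    return XI * s**ALPHA + P_STATIC

def idle_power():
    # An idle server still draws the static power P*
    return P_STATIC
```

For instance, at s = 1 a busy server draws ξ + P* = 12.4192 Watts, which feeds directly into the cost terms of Section 4.2.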
4 A QUALITY-GUARANTEED SCHEME
The traditional single resource renting scheme cannot guarantee
the quality of all requests and wastes a great amount
of resources due to the uncertainty of the system workload.
To overcome this weakness, we propose a double renting
scheme as follows, which not only guarantees the quality
of service completely but also reduces resource waste
greatly.
4.1 The Proposed Scheme
In this section, we first propose the Double-Quality-Guaranteed
(DQG) resource renting scheme, which combines
long-term renting with short-term renting. The main
computing capacity is provided by the long-term rented
servers due to their low price. The short-term rented servers
provide the extra capacity in peak periods. The details of the
scheme are shown in Algorithm 1.
The proposed DQG scheme adopts the traditional FCFS
queueing discipline. For each service request entering the
system, the system records its waiting time. The requests are
assigned and executed on the long-term rented servers in
the order of arrival times. Once the waiting time of a request
reaches D, a temporary server is rented from infrastructure
Algorithm 1 Double-Quality-Guaranteed (DQG) Scheme
1: A multiserver system with m servers is running and waiting for the following events
2: A queue Q is initialized as empty
3: Event – A service request arrives
4: Search if any server is available
5: if true then
6: Assign the service request to one available server
7: else
8: Put it at the end of queue Q and record its waiting time
9: end if
10: End Event
11: Event – A server becomes idle
12: Search if the queue Q is empty
13: if true then
14: Wait for a new service request
15: else
16: Take the first service request from queue Q and assign it to the idle server
17: end if
18: End Event
19: Event – The deadline of a request is reached
20: Rent a temporary server to execute the request and release the temporary server when the request is completed
21: End Event
providers to process the request. We consider the novel service
model as an M/M/m+D queuing model [48, 49, 50]. The
M/M/m+D model is a special M/M/m queuing model with
impatient customers. In an M/M/m+D model, the requests
are impatient and they have a maximal tolerable waiting
time. If the waiting time exceeds the tolerable waiting time,
they lose patience and leave the system. In our scheme, the
impatient requests do not leave the system but are assigned
to temporary rented servers.
Since the requests with waiting time D are all assigned
to temporary servers, it is apparent that all service requests
meet their deadlines and are charged based on their
workload according to the SLA. Hence, the revenue of the
service provider increases. However, the cost increases as
well due to the temporarily rented servers. Moreover, the
amount spent on renting temporary servers is determined
by the computing capacity of the long-term rented
multiserver system. Since the revenue has been maximized
using our scheme, minimizing the cost is the key issue for
profit maximization. Next, the tradeoff between the long-term
rental cost and the short-term rental cost is considered,
and an optimization problem is formulated in the following to
obtain the optimal long-term configuration such that the profit
is maximized.
4.2 The Profit Optimization Problem
Assume that a cloud service platform consists of m long-term
rented servers. It is known that some requests need
temporary servers to serve them, so that their quality can be
guaranteed. Denote by pext(D) the steady-state probability
that a request is assigned to a temporary server, or, put
differently, pext(D) is the long-run fraction of requests whose
waiting times exceed the deadline D. pext(D) is different
from FW(D). In calculating FW(D), all service requests,
whether or not they exceed the deadline, wait in the queue.
However, in calculating pext(D), the requests whose waiting
times are equal to the deadline are assigned to the
temporary servers, which reduces the waiting times of
the following requests. In general, pext(D) is much less than
FW(D). Referring to [50], we know that pext(D) is:

pext(D) = (1 − ρ)(1 − FW(D)) / (1 − ρ(1 − FW(D))).   (5)
Fig. 4: The probability pext of the waiting time exceeding the deadline D (r̄ = 1).
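To make Eq. (5) concrete, the following sketch (our own code, not the authors'; the function names are ours) computes pext(D) from the M/M/m waiting-time tail. It reproduces the qualitative shape of Fig. 4: pext decreases in D and never exceeds 1 − FW(D):

```python
import math

def waiting_tail(D, lam, m, mu):
    # P(W > D) in the plain M/M/m queue: (pi_m / (1 - rho)) * exp(-m*mu*(1-rho)*D)
    rho = lam / (m * mu)
    mr = m * rho
    denom = sum(mr**k / math.factorial(k) for k in range(m))
    denom += mr**m / (math.factorial(m) * (1 - rho))
    pi_m = (mr**m / math.factorial(m)) / denom
    return pi_m / (1 - rho) * math.exp(-m * mu * (1 - rho) * D)

def p_ext(D, lam, m, mu):
    # Eq. (5): long-run fraction of requests handed off to temporary servers
    rho = lam / (m * mu)
    tail = waiting_tail(D, lam, m, mu)   # this is 1 - F_W(D)
    return (1 - rho) * tail / (1 - rho * tail)
```

With λ = 5.99, m = 6, s = 1, and r̄ = 1 (so µ = 1), p_ext falls quickly as the deadline D grows, matching the trend in Fig. 4.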
That is to say, there are about λpext(D) service requests
per unit of time which need short-term rented servers.
Fig. 4 gives this probability for different deadlines, where
λ = 5.99, r̄ = 1, m = 6, and s = 1. Hence, the cost of
short-term rented servers in one unit of time is calculated
as:

Cshort = λpext(D)·(r̄/s)·(γ + δP),   (6)

where r̄/s is the average execution time of each request.
Among the requests entering the service system, about
a fraction pext(D) are not executed by the m long-term
rented servers. Hence, the system utilization of the
m servers is ρ(1 − pext(D)). Since the power for speed
s is ξs^α, the average amount of energy consumed by a
long-term rented server in one unit of time is Plong =
ρ(1 − pext(D))ξs^α + P*. Hence, the cost of the long-term
rented servers in one unit of time is calculated as:

Clong = m(β + δPlong).   (7)
The following theorem gives the expected charge to a
service request.

Theorem 4.1. The expected charge to a service request is ar̄.

Proof 4.1. Because the waiting time W of each request is
less than or equal to D, the charge to a service
request with execution requirement r is ar according to
the SLA. Since r is a random variable, ar is also a random
variable. It is known that r is an exponential random
variable with mean r̄, so its probability density
function is fr(z) = (1/r̄)e^(−z/r̄). The expected charge to a
service request is

∫₀^∞ fr(z)·az dz = (a/r̄) ∫₀^∞ z e^(−z/r̄) dz
  = −a ∫₀^∞ z d(e^(−z/r̄))
  = −a [ z e^(−z/r̄) |₀^∞ − ∫₀^∞ e^(−z/r̄) dz ]
  = −a [ z e^(−z/r̄) |₀^∞ + r̄ e^(−z/r̄) |₀^∞ ]
  = ar̄.   (8)

The theorem is proven.
The profit of a service provider in one unit of time is
obtained as

Profit = Revenue − Clong − Cshort,   (9)

where Revenue = λar̄,

Clong = m(β + δ(ρ(1 − pext(D))ξs^α + P*)),

and

Cshort = λpext(D)·(r̄/s)·(γ + δ(ξs^α + P*)).

We aim to choose the optimal number of fixed servers m
and the optimal execution speed s to maximize the profit:

Profit(m, s) = λar̄ − λpext(D)·(r̄/s)·(γ + δ(ξs^α + P*)) − m(β + δ(ρ(1 − pext(D))ξs^α + P*)).   (10)

Fig. 5 gives the graph of the function Profit(m, s), where λ =
5.99, r̄ = 1, D = 5, a = 15, P* = 3, α = 2.0, ξ = 9.4192,
β = 1.5, γ = 3, and δ = 0.3.
Fig. 5: The function Profit(m, s) versus the server size and the server speed.
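The profit surface of Fig. 5 can be sketched as follows (our own implementation of Eq. (10), using the exact M/M/m expression for pext rather than the closed-form approximation of Section 5.1; it therefore requires an integer m with ρ < 1, and the constant names are ours):

```python
import math

# Parameter values quoted for Fig. 5
LAM, RBAR, D, A = 5.99, 1.0, 5.0, 15.0
P_STAR, ALPHA, XI = 3.0, 2.0, 9.4192
BETA, GAMMA, DELTA = 1.5, 3.0, 0.3

def p_ext(m, s):
    # Eq. (5) with mu = s / rbar; m must be a positive integer with rho < 1 here
    mu = s / RBAR
    rho = LAM / (m * mu)
    mr = m * rho
    denom = sum(mr**k / math.factorial(k) for k in range(m))
    denom += mr**m / (math.factorial(m) * (1 - rho))
    pi_m = (mr**m / math.factorial(m)) / denom
    tail = pi_m / (1 - rho) * math.exp(-m * mu * (1 - rho) * D)
    return (1 - rho) * tail / (1 - rho * tail)

def profit(m, s):
    # Eq. (10): revenue minus long-term and short-term rental/energy costs
    mu = s / RBAR
    rho = LAM / (m * mu)
    p = p_ext(m, s)
    revenue = LAM * A * RBAR
    c_short = LAM * p * (RBAR / s) * (GAMMA + DELTA * (XI * s**ALPHA + P_STAR))
    c_long = m * (BETA + DELTA * (rho * (1 - p) * XI * s**ALPHA + P_STAR))
    return revenue - c_long - c_short
```

For example, profit(6, 1.0) lands close to the maximal profit reported in Section 5.1.3, while over-provisioning (a large m) drags the profit down through the fixed rental and energy costs.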
From the figure, we can see that the profit of a service
provider varies with the server size and the execution
speed. Therefore, we face the problem of selecting
the optimal server size and/or server speed so that the
profit is maximized. In the following section, solutions
to this problem are proposed.
5 OPTIMAL SOLUTION
In this section, we first develop an analytical method
to solve our optimization problem. Using the analytical
method, the ideal optimal solutions are obtained. Because
the server size and the server speed are limited and discrete,
we give an algorithmic method to get the actual solutions
based on the ideal optimal ones.
5.1 An Analytical Method for Ideal Solutions
We first solve our optimization problem analytically, assuming
that m and s are continuous variables. To this
end, a closed-form expression of pext(D) is needed. In this
paper, we use the same closed-form approximation as [2],
namely Σ_{k=0}^{m−1} (mρ)^k/k! ≈ e^(mρ). This approximation is very accurate
when m is not too small and ρ is not too large [2]. Since
Stirling's approximation of m! is √(2πm)·(m/e)^m, one closed-form
expression of πm is

πm ≈ (1 − ρ) / [√(2πm)·(1 − ρ)·(e^ρ/(eρ))^m + 1],
and

pext(D) ≈ (1 − ρ)e^(−mµ(1−ρ)D) / [1 + √(2πm)·(1 − ρ)·(e^ρ/(eρ))^m − ρe^(−mµ(1−ρ)D)].

For convenience, we rewrite pext(D) ≈ (1 − ρ)K1/(K2 − ρK1), where
K1 = e^(−mµ(1−ρ)D) and K2 = 1 + √(2πm)·(1 − ρ)·Φ, with
Φ = (e^ρ/(eρ))^m.
In the following, we solve our optimization problems
based on the above closed-form expression of pext(D).
5.1.1 Optimal Size
Given λ, r̄, a, P*, α, β, γ, δ, ξ, D, and s, our objective is to
find m such that Profit is maximized. To maximize Profit, m
must be found such that

∂Profit/∂m = −∂Clong/∂m − ∂Cshort/∂m = 0,

where

∂Clong/∂m = β + δP* − δλr̄ξs^(α−1) ∂pext(D)/∂m,

and

∂Cshort/∂m = λ(γ + δP*)(r̄/s) ∂pext(D)/∂m + λr̄δξs^(α−1) ∂pext(D)/∂m.

Since

ln Φ = m ln(e^ρ/(eρ)) = m(ρ − ln ρ − 1),

and

∂ρ/∂m = −λr̄/(m²s) = −ρ/m,

we have

(1/Φ) ∂Φ/∂m = (ρ − ln ρ − 1) + m(1 − 1/ρ) ∂ρ/∂m = −ln ρ,

and hence

∂Φ/∂m = −Φ ln ρ.

Then, we get

∂K1/∂m = −µDK1,

and

∂K2/∂m = √(2πm)·Φ·[ (1 + ρ)/(2m) − (1 − ρ) ln ρ ].

Furthermore, we have

∂pext(D)/∂m = (1/(K2 − ρK1)²) [ (ρ/m)K1(K2 − K1) + (ρ − 1)µDK1K2 − ((1 + ρ)K1/(2m))(K2 − 1) + (1 − ρ)K1(ln ρ)(K2 − 1) ].
We cannot obtain a closed-form solution for m, but we can
obtain a numerical solution. Since ∂Profit/∂m is not
monotonic in m, we first find the decreasing region of the
profit curve and then use the standard bisection
method. If there is more than one local maximum, they
are compared and the global maximum is selected. When using
the bisection method to find the extreme point, the iteration
accuracy is set to the unified value 10^(−10).
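The bisection step can be sketched generically (our own illustration using a toy concave profit curve, since the paper's analytical derivative is lengthy): once a bracket [lo, hi] is found on which ∂Profit/∂m changes sign from positive to negative, bisection at accuracy 10^(−10) locates the stationary point:

```python
def bisect_stationary(dprofit_dm, lo, hi, tol=1e-10):
    # Assumes dprofit_dm(lo) > 0 and dprofit_dm(hi) < 0, i.e., a bracketed maximum
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if dprofit_dm(mid) > 0:
            lo = mid   # still on the increasing side of the profit curve
        else:
            hi = mid   # past the maximum
    return (lo + hi) / 2.0

# Toy derivative of a concave profit with a known maximum at m = 4.8582
d_toy = lambda m: 2.0 * (4.8582 - m)
```

bisect_stationary(d_toy, 1.0, 20.0) converges to 4.8582 within the set accuracy; the same routine applies with the analytical derivative of ∂Profit/∂m plugged in.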
In Fig. 6, we demonstrate the net profit in one unit of
time as a function of m and λ, where s = 1, r̄ = 1, and the
other parameters are the same as those in Fig. 5. We notice
that there is an optimal choice of m such that the net profit is
maximized. Using the analytical method, the optimal value
of m such that ∂Profit/∂m = 0 is 4.8582, 5.8587, 6.8590,
Fig. 6: Net profit versus m for λ = 4.99, 5.99, 6.99, 7.99.
Fig. 7: (a) Optimal size versus s and λ. (b) Maximal profit versus s and λ.
7.8592 for λ = 4.99, 5.99, 6.99, 7.99, respectively. When the
number of servers m is less than the optimal value, the
service provider needs to rent more temporary servers to
execute the requests whose waiting times reach the
deadline; hence, the extra cost increases and may even surpass
the gained revenue. As m increases beyond the optimal value,
the waiting times are significantly reduced, but the cost of the
fixed servers increases greatly and likewise surpasses the gained
revenue. Hence, there is an optimal choice of m which maximizes the profit.
In Fig. 7, we demonstrate the optimal size and maximal
profit in one unit of time as functions of s and λ. That is,
for each combination of s and λ, we find the optimal number
of servers and the maximal profit. The parameters are the same
as those in Fig. 6. From the figures we can see that a higher
speed leads to fewer servers being needed for each λ,
and different λ values have different optimal combinations
of speed and size. In addition, the greater λ is, the more
maximal profit can be obtained.
5.1.2 Optimal Speed
Given λ, r̄, a, P*, α, β, γ, δ, ξ, D, and m, our objective is
to find s such that Profit is maximized. To maximize Profit, s
must be found such that

∂Profit/∂s = −∂Clong/∂s − ∂Cshort/∂s = 0,

where

∂Clong/∂s = δξλr̄s^(α−2) [ (α − 1)(1 − pext(D)) − s ∂pext(D)/∂s ],

and

∂Cshort/∂s = (r̄λ(γ + δP*)/s²)(s ∂pext(D)/∂s − pext(D)) + λr̄δξs^(α−2) [ s ∂pext(D)/∂s + (α − 1)pext(D) ].

Since

∂ρ/∂s = −λr̄/(ms²) = −ρ/s,

and

(1/Φ) ∂Φ/∂s = m(1 − 1/ρ) ∂ρ/∂s,

we have

∂Φ/∂s = (m/s)(1 − ρ)Φ.

Now, we get

∂K1/∂s = −(mD/r̄)K1,

and

∂K2/∂s = √(2πm)·(ρ + m(1 − ρ)²)·Φ/s.

Furthermore, we have

∂pext(D)/∂s = (1/(K2 − ρK1)²) [ (ρ/s)K1(K2 − K1) + (ρ − 1)K1√(2πm)·(ρ + m(1 − ρ)²)·Φ/s + (ρ − 1)(mD/r̄)K1K2 ].
Similarly, we cannot obtain a closed-form expression for
s, so we use the same method to find the numerical
solution for s. In Fig. 8, we demonstrate the net profit in
one unit of time as a function of s and λ, where m = 6.
The remaining parameters are the same as those in Figs. 6 and 7.
We notice that there is an optimal choice of s such that
the net profit is maximized, which the analytical method locates
by solving ∂Profit/∂s = 0 for each λ. When the
servers run at a slower speed than the optimal one, the
waiting times of service requests become long and exceed
the deadline, so the revenue is small and the profit is not
maximal. When s increases beyond the optimum, the energy consumption
as well as the electricity cost increases, and the increased revenue
is much less than the increased cost. As a result, the profit is
reduced. Therefore, there is an optimal choice of s such that
the net profit is maximized.
In Fig. 9, we demonstrate the optimal speed and maxi-
mal profit in one unit of time as a function of m and λ. The
Fig. 8: Net profit versus s for λ = 4.99, 5.99, 6.99, 7.99.
Fig. 9: (a) Optimal speed versus m and λ. (b) Maximal profit versus m and λ.
parameters are the same as those in Figs. 6–8. From the figures
we can see that if the number of fixed servers is large, the
servers must run at a lower speed to reach the
optimal profit. In addition, the optimal speed of the servers is
never faster than 1.2; that is because, beyond that speed, the
increased electricity cost surpasses the savings on renting extra
servers. The figure also shows us that different λ values have
different optimal combinations of speed and size.
5.1.3 Optimal Size and Speed
Given λ, r̄, a, P*, α, β, γ, δ, ξ, and D, our third problem is to find
m and s such that Profit is maximized. Hence, we need to
find m and s such that ∂Profit/∂m = 0 and ∂Profit/∂s = 0,
where ∂Profit/∂m and ∂Profit/∂s have been derived in the
Fig. 10: Net profit versus s for m = 3, 4, 5, 6.
Fig. 11: Maximal profit versus λ and r̄.
last two sections. The two equations are solved by using
the same method as [2]. In Fig. 10, we demonstrate the net
profit in one unit of time as a function of m and s, where λ =
5.99 and r̄ = 1. The optimal values are m = 6.2418 and
s = 0.9386, which result in the maximal profit 58.0150. In
Fig. 11, we demonstrate the maximal profit in one unit of
time for different combinations of λ and r̄. The figure shows
that service providers can obtain more profit when the
service requests have greater λ and r̄.
5.2 An Algorithmic Method for Actual Solutions
In the above subsection, the optimal solutions found using the
analytical method are ideal solutions. Since the number of
rented servers must be an integer and the server speed levels
are discrete and limited in real systems, we need to find the
optimal solutions for the discrete scenarios. Assume that
S = {s_i | 1 ≤ i ≤ n} is a discrete set of n speed levels in
increasing order. Next, different situations are discussed and
the corresponding methods are given as follows.
5.2.1 Optimal Size
Assume that all servers run at a given execution speed s.
Given λ, r̄, a, P*, α, β, γ, δ, ξ, and D, the first problem is to
find the number of long-term rented servers m such that the
profit is maximized. The method is shown in Algorithm 2.
5.2.2 Optimal Speed
Assume that the service provider rents m servers. Given λ,
r̄, a, P*, α, β, γ, δ, ξ, and D, the second problem is to find the
Algorithm 2 Finding the optimal size
Input: s, λ, r̄, a, P*, α, β, γ, δ, ξ, and D
Output: the optimal number Opt_size of fixed servers
1: Profit_max ← 0
2: find the server size m using the analytical method in Section 5.1.1
3: m*_l ← ⌊m⌋, m*_u ← ⌈m⌉
4: Profit_l ← Profit(m*_l, s), Profit_u ← Profit(m*_u, s)
5: if Profit_l > Profit_u then
6: Profit_max ← Profit_l
7: Opt_size ← m*_l
8: else
9: Profit_max ← Profit_u
10: Opt_size ← m*_u
11: end if
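Algorithm 2 amounts to rounding the ideal size both ways and keeping the more profitable one. A compact sketch (our own code; the profit function is passed in as a parameter):

```python
import math

def optimal_size(profit, m_ideal, s):
    # Algorithm 2: compare Profit at the floor and ceiling of the ideal (continuous) m
    m_lo, m_hi = math.floor(m_ideal), math.ceil(m_ideal)
    if profit(m_lo, s) > profit(m_hi, s):
        return m_lo
    return m_hi
```

For instance, with the ideal size 6.2418 found in Section 5.1.3, the algorithm evaluates Profit at m = 6 and m = 7 and returns whichever yields more.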
optimal execution speed of all servers such that the profit is
maximized. The method is shown in Algorithm 3.
Algorithm 3 Finding the optimal speed
Input: m, λ, r̄, a, P*, α, β, γ, δ, ξ, and D
Output: the optimal server speed Opt_speed
1: Profit_max ← 0
2: find the server speed s using the analytical method in Section 5.1.2
3: s*_l ← s_i, s*_u ← s_{i+1} if s_i < s ≤ s_{i+1}
4: Profit_l ← Profit(m, s*_l), Profit_u ← Profit(m, s*_u)
5: if Profit_l > Profit_u then
6: Profit_max ← Profit_l
7: Opt_speed ← s*_l
8: else
9: Profit_max ← Profit_u
10: Opt_speed ← s*_u
11: end if
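Algorithm 3 brackets the ideal speed between the two adjacent discrete levels of S and keeps the more profitable one. A sketch using the speed set quoted in Section 5.3 (our own code; the profit function is passed in):

```python
import bisect

S = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0]  # speed levels from Section 5.3

def optimal_speed(profit, m, s_ideal):
    # Algorithm 3: find s_i < s_ideal <= s_{i+1} in S and compare Profit at both levels
    i = bisect.bisect_left(S, s_ideal)
    s_lo = S[max(i - 1, 0)]
    s_hi = S[min(i, len(S) - 1)]
    return s_lo if profit(m, s_lo) > profit(m, s_hi) else s_hi
```

With the ideal speed 0.9386 from Section 5.1.3, the bracket is (0.8, 1.0], so Profit is compared at those two levels.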
5.2.3 Optimal Size and Speed
In this subsection, we solve the third problem, which is to
find the optimal combination of m and s such that the profit
is maximized. Given λ, r̄, a, P*, α, β, γ, δ, ξ, and D, the
method is shown in Algorithm 4.
Algorithm 4 Finding the optimal size and speed
Input: λ, r̄, a, P*, α, β, γ, δ, ξ, and D
Output: the optimal number Opt_size of fixed servers and the optimal execution speed Opt_speed of the servers
1: Profit_max ← 0
2: find the server size m and speed s using the analytical method in Section 5.1.3
3: m*_l ← ⌊m⌋, m*_u ← ⌈m⌉
4: find the optimal speeds s*_l and s*_u using Algorithm 3 with server size m*_l and m*_u, respectively
5: Profit_l ← Profit(m*_l, s*_l), Profit_u ← Profit(m*_u, s*_u)
6: if Profit_l ≤ Profit_u then
7: Profit_max ← Profit_u
8: Opt_size ← m*_u, Opt_speed ← s*_u
9: else
10: Profit_max ← Profit_l
11: Opt_size ← m*_l, Opt_speed ← s*_l
12: end if
5.3 Comparison of Two Kinds of Solutions
In Tables 1, 2, and 3, the ideal optimal solutions and the
actual optimal solutions are compared for three different
cases. Table 1 compares the ideal optimal size and the actual
optimal size under the given server speed. Table 2 compares
the ideal optimal speed and the actual optimal speed under
the given server size. In Table 3, two kinds of solutions are
compared for different combinations of λ and r. Here, m
can be any positive integer, and the available speed levels
are S = {0.2, 0.4, · · · , 2.0}. According to the comparisons
we can see that the ideal maximal profit is greater than
the actual maximal profit. In the tables, we also list the
relative difference (RD) between the ideal optimal profit and
the actual optimal profit, which is calculated as
RD = (Ide_p − Act_p) / Act_p,

where Ide_p and Act_p are the maximal profits in the ideal and actual scenarios, respectively. From the results we know that the relative difference is always small, except in some cases in Table 2. That is because a small difference in speed can lead to a big difference in profit when the server size is large.
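For instance, the RD entry in the first column of Table 1 (given speed 0.2) can be recomputed from the rounded profit entries; the result agrees with the table's RD row up to rounding:

```python
# Rounded profit entries from the first column of Table 1 (given speed 0.2)
ide_p = 11.5546    # ideal maximal profit
act_p = 11.5268    # actual maximal profit
rd = (ide_p - act_p) / act_p    # RD = (Ide_p - Act_p) / Act_p, about 0.24%
```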
6 PERFORMANCE COMPARISON
Using our resource renting scheme, temporary servers are rented for all requests whose waiting times reach the deadline, which guarantees that all requests are served with high service quality. Hence, our scheme is superior to the traditional resource renting scheme in terms of service quality. Next, we conduct a series of calculations to compare the profit of our renting scheme and the renting scheme in [2]. To distinguish the proposed scheme from the compared scheme, the proposed scheme is called the Double-Quality-Guaranteed (DQG) renting scheme and the compared scheme is called the Single-Quality-Unguaranteed (SQU) renting scheme in this paper.
6.1 The Compared Scheme
First, the expected charge to a service request under the SQU renting scheme is analyzed.

Theorem 6.1. The expected charge to a service request using the SQU renting scheme is ar̄(1 − P_q e^{−(1−ρ)mμD}).
Proof 6.1. Recall that the probability density function of the waiting time W of a service request is

f_W(t) = (1 − P_q)u(t) + mμπ_m e^{−(1−ρ)mμt}.

Since W is a random variable, R(r, W) is also a random variable. The expected charge to a service request with execution requirement r is

R(r) = E[R(r, W)]
     = ∫_0^∞ f_W(t)R(r, t) dt
     = ∫_0^D [(1 − P_q)u(t) + mμπ_m e^{−(1−ρ)mμt}] ar dt
     = (1 − P_q)ar + mμπ_m ar · (1 − e^{−(1−ρ)mμD}) / ((1 − ρ)mμ)
     = ar(1 − P_q e^{−(1−ρ)mμD}).

Therefore, the expected charge to a service request is the expected value of R(r) over the exponentially distributed requirement r with mean r̄:

E[R(r)] = ∫_0^∞ f_r(z)R(z) dz
        = ∫_0^∞ (1/r̄) e^{−z/r̄} · az(1 − P_q e^{−(1−ρ)mμD}) dz
        = (a/r̄)(1 − P_q e^{−(1−ρ)mμD}) ∫_0^∞ z e^{−z/r̄} dz
        = ar̄(1 − P_q e^{−(1−ρ)mμD}).

The theorem is proven.
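The closed form can be checked numerically. The sketch below uses illustrative (hypothetical) parameter values and the M/M/m relation π_m = (1 − ρ)P_q, which is what the last step of the derivation relies on; the impulse (1 − P_q)u(t) at t = 0 contributes (1 − P_q)ar directly.

```python
import math

# Illustrative (hypothetical) parameter values, not taken from the paper
m, mu, rho, D, a, r_req = 5, 1.0, 0.8, 5.0, 1.0, 1.0
P_q = 0.4                      # assumed queueing probability
pi_m = (1 - rho) * P_q         # M/M/m relation pi_m = (1 - rho) * P_q

# midpoint-rule integration of the continuous part of f_W over [0, D]
n = 200_000
h = D / n
cont = sum(m * mu * pi_m * math.exp(-(1 - rho) * m * mu * (i + 0.5) * h) * h
           for i in range(n))

# impulse part (1 - P_q) u(t) plus the continuous tail, each scaled by a*r
numeric = (1 - P_q) * a * r_req + cont * a * r_req
closed_form = a * r_req * (1 - P_q * math.exp(-(1 - rho) * m * mu * D))
assert abs(numeric - closed_form) < 1e-6
```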
By the above theorem, the profit in one unit of time using the SQU renting scheme is calculated as

λar̄(1 − P_q e^{−(1−ρ)mμD}) − m(β + δ(ρξs^α + P∗)).   (11)
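Formula (11) can be evaluated directly once P_q is known. A sketch, assuming the standard M/M/m quantities μ = s/r̄, ρ = λ/(mμ), and the Erlang C formula for P_q; the parameter names mirror the paper, but the implementation is an illustration rather than the authors' code:

```python
import math

def erlang_c(m, rho):
    """Queueing probability P_q of an M/M/m queue with per-server
    utilization rho < 1 (Erlang C formula)."""
    load = m * rho                                  # offered load
    head = sum(load**k / math.factorial(k) for k in range(m))
    tail = load**m / (math.factorial(m) * (1 - rho))
    return tail / (head + tail)

def squ_profit(lam, r_bar, a, m, s, D, alpha, beta, delta, xi, P_star):
    """Profit per unit of time under the SQU scheme, Eq. (11)."""
    mu = s / r_bar                                  # service rate of one server
    rho = lam / (m * mu)                            # requires rho < 1
    P_q = erlang_c(m, rho)
    revenue = lam * a * r_bar * (1 - P_q * math.exp(-(1 - rho) * m * mu * D))
    cost = m * (beta + delta * (rho * xi * s**alpha + P_star))
    return revenue - cost
```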
Using the SQU renting scheme, a service provider must rent more servers or scale up the server speed to maintain a high quality-guaranteed ratio. Assume that the required quality-guaranteed ratio of a service provider is ψ and the deadline of service requests is D. By solving the inequality

F_W(D) = 1 − (π_m / (1 − ρ)) e^{−mμ(1−ρ)D} ≥ ψ

with given m or s, we can get the corresponding s or m such that the required quality-guaranteed ratio is achieved.
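For a given speed, the smallest feasible m can be found by a linear search. The sketch below again assumes Erlang C for P_q and the identity π_m/(1 − ρ) = P_q, so that F_W(D) = 1 − P_q e^{−mμ(1−ρ)D}:

```python
import math

def erlang_c(m, rho):
    """P_q of an M/M/m queue (Erlang C formula), rho < 1."""
    load = m * rho
    head = sum(load**k / math.factorial(k) for k in range(m))
    tail = load**m / (math.factorial(m) * (1 - rho))
    return tail / (head + tail)

def min_servers(lam, mu, D, psi, m_max=1000):
    """Smallest m with F_W(D) = 1 - P_q * e^{-m*mu*(1-rho)*D} >= psi,
    where rho = lam / (m * mu) and P_q is the Erlang C probability."""
    m = max(1, math.ceil(lam / mu))
    if m * mu <= lam:                   # ensure rho < 1 (stability)
        m += 1
    while m <= m_max:
        rho = lam / (m * mu)
        F_D = 1 - erlang_c(m, rho) * math.exp(-m * mu * (1 - rho) * D)
        if F_D >= psi:
            return m
        m += 1
    raise ValueError("no feasible m up to m_max")
```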
6.2 Profit Comparison under Different Quality-Guaranteed Ratios
Let λ be 5.99 and the other parameters be the same as those in Section 5. In the first example, for a given number of servers, we compare the profit using the SQU renting scheme with quality-guaranteed ratios of 100%, 99%, 92%, and 85% against the optimal profit using our DQG renting scheme. Because a quality-guaranteed ratio of 100% cannot be achieved using the SQU renting scheme, we use 99.999999% ≈ 100% instead. The results are shown in Fig. 12. From the figure, we can see that the profit obtained using the proposed scheme is always greater than that using the SQU renting scheme, and the five curves reach their peaks at different sizes. In addition, the profit obtained by a service provider increases when the quality-guaranteed ratio increases from 85% to 99%, but decreases when the ratio is greater than 99%. That is because more service requests are charged as the ratio increases from 85% to 99%; but once the ratio exceeds 99%, the cost of expanding the server size is greater than the revenue obtained from the extra quality-guaranteed requests, hence the total profit is reduced.
In the second example, we compare the profit of the above five scenarios under a given server speed. The results are given in Fig. 13. The figure shows the trend of profit as the server speed increases from 0.1 to 2.9. From the figure, we can see that the curves first increase, reach a peak at a certain speed, and then decrease on the whole as the speed keeps increasing. The figure verifies that our proposed scheme can obtain more profit than the SQU renting scheme. Notice that the changing trends of the curves of the SQU renting scheme with 100%, 99%, 92%, and 85% quality-guaranteed ratios are interesting: they repeatedly show an increasing trend followed by a decrease within a small range of speeds. The reason is
TABLE 1: Comparison of the two methods for finding the optimal size

Given Speed            0.2      0.4      0.6      0.8      1.0      1.2      1.4      1.6      1.8      2.0
Ideal: Optimal Size    29.1996  14.6300  9.7599   7.3222   5.8587   4.8827   4.1854   3.6624   3.2555   2.9300
Ideal: Maximal Profit  11.5546  45.5262  54.6278  57.5070  57.8645  56.9842  55.3996  53.3498  51.0143  48.4578
Actual: Optimal Size   29       15       10       7        6        5        4        4        3        3
Actual: Maximal Profit 11.5268  45.4824  54.6014  57.3751  57.8503  56.9727  55.3259  53.0521  50.8526  48.4513
Relative Difference    0.2411%  0.0964%  0.0483%  0.2299%  0.0246%  0.0202%  0.1332%  0.5612%  0.3180%  0.01325%
TABLE 2: Comparison of the two methods for finding the optimal speed

Given Size             5        7        9        11       13       15       17       19       21       23
Ideal: Optimal Speed   1.1051   0.8528   0.6840   0.5705   0.4895   0.4288   0.3817   0.3440   0.3132   0.2875
Ideal: Maximal Profit  57.3742  57.7613  56.0783  53.3337  49.9896  46.2754  42.3167  38.1881  33.9366  29.5933
Actual: Optimal Speed  1.0      0.8      0.8      0.6      0.6      0.4      0.4      0.4      0.4      0.4
Actual: Maximal Profit 57.0479  57.3751  54.7031  53.1753  48.4939  45.4824  42.2165  37.4785  32.6795  27.8795
Relative Difference    0.5721%  0.6732%  2.5140%  0.2979%  3.0843%  1.7435%  0.2373%  1.8934%  3.8470%  6.1474%
TABLE 3: Comparison of the two methods for finding the optimal size and the optimal speed

r                      0.50     0.75     1.00     1.25     1.50     1.75     2.00

λ = 4.99
Ideal: Optimal Size    2.5763   3.8680   5.1608   6.4542   7.7480   9.0420   10.3362
Ideal: Optimal Speed   0.9432   0.9422   0.9413   0.9406   0.9399   0.9394   0.9388
Ideal: Maximal Profit  24.0605  36.0947  48.1539  60.1926  72.2317  84.3121  96.3528
Actual: Optimal Size   3        4        5        6        7        9        10
Actual: Optimal Speed  1.0      1.0      1.0      1.0      1.0      1.0      1.0
Actual: Maximal Profit 23.8770  35.7921  48.0850  60.1452  72.0928  83.9968  96.2230
Relative Difference    0.7695%  0.8454%  0.14355% 0.0789%  0.1927%  0.3754%  0.1349%

λ = 5.99
Ideal: Optimal Size    3.1166   4.6787   6.2418   7.8056   9.3600   10.9346  12.4995
Ideal: Optimal Speed   0.9401   0.9393   0.9386   0.9380   0.9375   0.9370   0.9366
Ideal: Maximal Profit  28.9587  43.4364  57.9339  72.4121  86.9180  101.3958 115.9086
Actual: Optimal Size   3        4        6        7        9        10       12
Actual: Optimal Speed  1.0      1.0      1.0      1.0      1.0      1.0      1.0
Actual: Maximal Profit 28.9158  43.1208  57.8503  72.2208  86.7961  101.2557 115.7505
Relative Difference    0.1484%  0.7317%  0.1445%  0.2649%  0.1405%  0.1384%  0.1365%
[Fig. 12: Profit versus m and different quality-guaranteed ratios. Profit is plotted against the number of servers (1 to 25) for DQG and for SQU with 100%, 99%, 92%, and 85% ratios.]
analyzed as follows. When the server speed changes within a small range, the number of servers rented by a service provider to satisfy the required deadline-guaranteed ratio stays unchanged. At the beginning, the added revenue exceeds the added cost, so the profit increases. However, when the speed becomes greater, the energy consumption increases, and the total increased cost surpasses the increased revenue; hence, the profit decreases.
In the third example, we explore the changing trend of
the profit with different D, and the results are shown as
[Fig. 13: Profit versus s and different quality-guaranteed ratios. Profit is plotted against the server speed (0.1 to 2.9) for DQG and for SQU with 100%, 99%, 92%, and 85% ratios.]
Fig. 14. Fig. 14(a) gives the numerical results when the server
speed is fixed at 0.7, and Fig. 14(b) shows the numerical
results when the number of servers is fixed at 5. We analyze
the results as follows.
From Fig. 14(a), we can see that the profit obtained using the SQU renting scheme increases slightly with the increment of D. That is because the service charge stays constant but the extra cost is reduced when D is greater. As a consequence, the profit increases. The second phenomenon in the figure is that the curves of SQU 92% and SQU 85% drop sharply at some points and then ascend gradually
[Fig. 14: Profit versus D and different quality-guaranteed ratios, for DQG and for SQU with 100%, 99%, 92%, and 85% ratios. (a) Fixed server speed s = 0.7. (b) Fixed server size m = 5. Profit is plotted against the deadline D (5 to 25).]
and smoothly. The reasons are explained as follows. When the server speed is fixed, enough servers are needed to satisfy the given quality-guaranteed ratio. By calculation, we know that the number of required servers is the same for all D values within a certain interval. For example, [5,7] and [8,25] are two intervals of D for the curve of SQU 92%, and the required numbers of servers are 10 and 9, respectively. For all D within the same interval, the costs are identical, whereas the actual quality-guaranteed ratios differ and grow with increasing D. Hence, within the same interval, the revenue grows, and so does the profit. However, if the deadline increases and enters a different interval, the quality-guaranteed ratio sharply drops due to the reduced number of servers, and the lost revenue surpasses the reduced cost; hence, the profit sharply drops as well. Moreover, we can also see that the profit of SQU 100% is much less than in the other scenarios. That is because when the quality-guaranteed ratio is already very high, gaining a small amount of extra revenue incurs a much higher cost.
From Fig. 14(b), we can see that the curves of SQU 92% and SQU 85% descend and ascend repeatedly. The reasons are the same as those for Fig. 14(a). The deadlines within the same interval share the same minimal speed, hence the cost stays constant. At the same time, the revenue increases due to the increasing quality-guaranteed ratio. As a consequence, the profit increases. At each break point, the minimal speed satisfying the required quality-guaranteed ratio gets smaller, which leads to a sharp drop of the actual quality-guaranteed ratio. Hence, the revenue, as well as the profit, drops.
6.3 Comparison of Optimal Profit
In order to further verify the superiority of our proposed scheme in terms of profit, we compare the optimal profit achieved by our DQG renting scheme with that of the SQU renting scheme in [2]. In this group of comparisons, λ is set to 6.99, D is 5, r varies from 0.75 to 2.00 in steps of 0.25, and the other parameters are the same as in Section 5. In Fig. 15, the optimal profit and the corresponding configurations of the two renting schemes are presented. From Fig. 15(a) we can see that the optimal profit obtained using our scheme is always greater than that using the SQU renting scheme. According to the calculations, our scheme obtains 4.17 percent more profit on average than the SQU renting scheme. This shows that our scheme outperforms the SQU renting scheme in terms of both quality of service and profit. Figs. 15(b) and 15(c) compare the server sizes and speeds of the two schemes. The figures show that, using our renting scheme, the capacity provided by the long-term rented servers is much less than the capacity using the SQU renting scheme. That is because many requests are assigned to temporary servers under our scheme, so fewer servers and slower server speeds are configured to reduce the waste of resources in idle periods. In conclusion, our scheme can not only guarantee the service quality of all requests, but also achieve more profit than the compared one.
7 CONCLUSIONS
In order to guarantee the quality of service requests and
maximize the profit of service providers, this paper has
proposed a novel Double-Quality-Guaranteed (DQG) rent-
ing scheme for service providers. This scheme combines
short-term renting with long-term renting, which can reduce
the resource waste greatly and adapt to the dynamical
demand of computing capacity. An M/M/m+D queueing
model is build for our multiserver system with varying
system size. And then, an optimal configuration problem
of profit maximization is formulated in which many factors
are taken into considerations, such as the market demand,
the workload of requests, the server-level agreement, the
rental cost of servers, the cost of energy consumption, and
so forth. The optimal solutions are solved for two different
situations, which are the ideal optimal solutions and the
actual optimal solutions. In addition, a series of calcula-
tions are conducted to compare the profit obtained by the
DQG renting scheme with the Single-Quality-Unguaranteed
(SQU) renting scheme. The results show that our scheme
outperforms the SQU scheme in terms of both of service
quality and profit.
In this paper, we only consider the profit maximization
problem in a homogeneous cloud environment, because the
analysis of a heterogenous environment is much more com-
plicated than that of a homogenous environment. However,
we will extend our study to a heterogenous environment in
the future.
[Fig. 15: Comparison between our scheme and that in [2], plotted against the average r (0.75 to 2). (a) Comparison of profit. (b) Comparison of server size. (c) Comparison of server speed.]
ACKNOWLEDGMENTS
The authors thank the anonymous reviewers for their valu-
able comments and suggestions. The research was partially
funded by the Key Program of National Natural Science
Foundation of China (Grant Nos. 61133005, 61432005), the
National Natural Science Foundation of China (Grant Nos.
61370095, 61472124, 61173013, 61202109, and 61472126).
REFERENCES
[1] K. Hwang, J. Dongarra, and G. C. Fox, Distributed and
Cloud Computing. Elsevier/Morgan Kaufmann, 2012.
[2] J. Cao, K. Hwang, K. Li, and A. Y. Zomaya, “Optimal
multiserver configuration for profit maximization in cloud
computing,” IEEE Trans. Parallel Distrib. Syst., vol. 24, no. 6,
pp. 1087–1096, 2013.
[3] A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski,
G. Lee, D. Patterson, A. Rabkin, and I. Stoica, “Above
the clouds: A berkeley view of cloud computing,” Dept.
Electrical Eng. and Comput. Sciences, vol. 28, 2009.
[4] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and
I. Brandic, “Cloud computing and emerging it platforms:
Vision, hype, and reality for delivering computing as the
5th utility,” Future Gener. Comp. Sy., vol. 25, no. 6, pp. 599–
616, 2009.
[5] P. Mell and T. Grance, “The NIST definition of cloud computing,” National Institute of Standards and Technology, Information Technology Laboratory, 2009.
[6] J. Chen, C. Wang, B. B. Zhou, L. Sun, Y. C. Lee, and
A. Y. Zomaya, “Tradeoffs between profit and customer
satisfaction for service provisioning in the cloud,” in Proc.
20th Int’l Symp. High Performance Distributed Computing.
ACM, 2011, pp. 229–238.
[7] J. Mei, K. Li, J. Hu, S. Yin, and E. H.-M. Sha, “Energy-aware preemptive scheduling algorithm for sporadic tasks on DVS platform,” Microprocessors and Microsystems, vol. 37, no. 1, pp. 99–112, 2013.
[8] P. de Langen and B. Juurlink, “Leakage-aware multipro-
cessor scheduling,” J. Signal Process. Sys., vol. 57, no. 1, pp.
73–88, 2009.
[9] G. P. Cachon and P. Feldman, “Dynamic versus static
pricing in the presence of strategic consumers,” Tech. Rep.,
2010.
[10] Y. C. Lee, C. Wang, A. Y. Zomaya, and B. B. Zhou, “Profit-
driven scheduling for cloud services with data access
awareness,” J. Parallel Distr. Com., vol. 72, no. 4, pp. 591–
602, 2012.
[11] M. Ghamkhari and H. Mohsenian-Rad, “Energy and per-
formance management of green data centers: a profit max-
imization approach,” IEEE Trans. Smart Grid, vol. 4, no. 2,
pp. 1017–1025, 2013.
[12] A. Odlyzko, “Should flat-rate internet pricing continue,”
IT Professional, vol. 2, no. 5, pp. 48–51, 2000.
[13] G. Kesidis, A. Das, and G. de Veciana, “On flat-rate
and usage-based pricing for tiered commodity internet
services,” in 42nd Annual Conf. Information Sciences and
Systems. IEEE, 2008, pp. 304–308.
[14] S. Shakkottai, R. Srikant, A. Ozdaglar, and D. Acemoglu,
“The price of simplicity,” IEEE J. Selected Areas in Commu-
nications, vol. 26, no. 7, pp. 1269–1276, 2008.
[15] H. Xu and B. Li, “Dynamic cloud pricing for revenue
maximization,” IEEE Trans. Cloud Computing, vol. 1, no. 2,
pp. 158–171, July 2013.
[16] M. Macías and J. Guitart, “A genetic model for pricing in cloud computing markets,” in Proc. 2011 ACM Symp. Applied Computing, 2011, pp. 113–118.
[17] “Amazon EC2,” http://aws.amazon.com, 2014.
[18] “Amazon EC2 spot instances,” http://aws.amazon.com/
ec2/spot-instances, 2014.
[19] R. L. Hall and C. J. Hitch, “Price theory and business
behaviour,” Oxford economic papers, no. 2, pp. 12–45, 1939.
[20] D. Kahneman, J. L. Knetsch, and R. Thaler, “Fairness as a
constraint on profit seeking: Entitlements in the market,”
The American economic review, pp. 728–741, 1986.
[21] D. E. Irwin, L. E. Grit, and J. S. Chase, “Balancing risk
and reward in a market-based task service,” in 13th IEEE
Int’l Symp. High performance Distributed Computing, 2004,
pp. 160–169.
[22] J. Heo, D. Henriksson, X. Liu, and T. Abdelzaher, “In-
tegrating adaptive components: An emerging challenge
in performance-adaptive systems and a server farm case-
study,” in RTSS 2007, Dec 2007, pp. 227–238.
[23] E. Pinheiro, R. Bianchini, E. V. Carrera, and T. Heath,
“Dynamic cluster reconfiguration for power and perfor-
mance,” in Compilers and operating systems for low power.
Springer, 2003, pp. 75–93.
[24] X. Fan, W.-D. Weber, and L. A. Barroso, “Power provision-
ing for a warehouse-sized computer,” in ACM SIGARCH
Computer Architecture News, vol. 35, no. 2. ACM, 2007, pp.
13–23.
[25] J. S. Chase, D. C. Anderson, P. N. Thakar, A. M. Vahdat,
and R. P. Doyle, “Managing energy and server resources
in hosting centers,” in ACM SIGOPS Operating Systems
Review, vol. 35, no. 5. ACM, 2001, pp. 103–116.
[26] M. Mazzucco and D. Dyachuk, “Optimizing cloud
providers revenues via energy efficient server allocation,”
Sustainable Computing: Informatics and Systems, vol. 2, no. 1,
pp. 1–12, 2012.
[27] Y.-J. Chiang and Y.-C. Ouyang, “Profit optimization in
sla-aware cloud services with a finite capacity queuing
model,” Math. Probl. Eng., 2014.
[28] B. Rochwerger, D. Breitgand, E. Levy, A. Galis, K. Na-
gin, I. M. Llorente, R. Montero, Y. Wolfsthal, E. Elmroth,
J. Caceres et al., “The reservoir model and architecture
for open federated cloud computing,” IBM J. Research and
Development, vol. 53, no. 4, 2009.
[29] R. Buyya, R. Ranjan, and R. N. Calheiros, “Intercloud:
Utility-oriented federation of cloud computing environ-
ments for scaling of application services,” in Algorithms
and architectures for parallel processing. Springer, 2010, pp.
13–31.
[30] I. Goiri, J. Guitart, and J. Torres, “Characterizing cloud
federation for enhancing providers’ profit,” in 3rd Int’l
Conf. Cloud Computing. IEEE, 2010, pp. 123–130.
[31] A. N. Toosi, R. N. Calheiros, R. K. Thulasiram, and
R. Buyya, “Resource provisioning policies to increase iaas
provider’s profit in a federated cloud environment,” in
13th Int’l Conf. High Performance Computing and Commu-
nications. IEEE, 2011, pp. 279–287.
[32] “Cloud Scheduler,” http://cloudscheduler.org/, 2014.
[33] “Condor,” http://www.cs.wisc.edu/condor, 2014.
[34] T. Tannenbaum, D. Wright, K. Miller, and M. Livny, “Be-
owulf cluster computing with linux.” Cambridge, MA,
USA: MIT Press, 2002, ch. Condor: A Distributed Job
Scheduler, pp. 307–350.
[35] B. Yang, F. Tan, Y.-S. Dai, and S. Guo, “Performance evalu-
ation of cloud service considering fault recovery,” in Cloud
computing. Springer, 2009, pp. 571–576.
[36] J. Cao, K. Li, and I. Stojmenovic, “Optimal power allo-
cation and load distribution for multiple heterogeneous
multicore server processors across clouds and data cen-
ters,” IEEE Trans. Computers, vol. 63, no. 1, pp. 45–58, Jan
2014.
[37] H. Khazaei, J. Misic, and V. B. Misic, “Performance analysis of cloud computing centers using M/G/m/m+r queuing systems,” IEEE Trans. Parallel and Distributed Systems, vol. 23, no. 5, pp. 936–943, 2012.
[38] L. Kleinrock, Queueing Systems, Volume 1: Theory. Wiley-Interscience, 1975.
[39] “Rackspace,” http://www.rackspace.com/information/
legal/cloud/sla, 2014.
[40] “Joyent,” http://www.joyent.com/company/policies/
cloud-hosting-service-level-agreement, 2014.
[41] “Microsoft Azure,” http://azure.microsoft.com/en-us/
support/legal/sla/, 2014.
[42] Z. Liu, S. Wang, Q. Sun, H. Zou, and F. Yang, “Cost-aware
cloud service request scheduling for saas providers,” The
Computer Journal, p. bxt009, 2013.
[43] M. Ghamkhari and H. Mohsenian-Rad, “Profit maximiza-
tion and power management of green data centers sup-
porting multiple slas,” in 2013 Int’l Conf. Computing, Net-
working and Communications. IEEE, 2013, pp. 465–469.
[44] S. Liu, S. Ren, G. Quan, M. Zhao, and S. Ren, “Profit aware
load balancing for distributed cloud data centers,” in IEEE
27th Int’l Symp. Parallel & Distributed Processing. IEEE,
2013, pp. 611–622.
[45] A. P. Chandrakasan, S. Sheng, and R. W. Brodersen,
“Low-power cmos digital design,” IEICE Trans. Electronics,
vol. 75, no. 4, pp. 371–382, 1992.
[46] B. Zhai, D. Blaauw, D. Sylvester, and K. Flautner, “Theo-
retical and practical limits of dynamic voltage scaling,” in
Proc. 41st annual Design Automation Conf. ACM, 2004, pp.
868–873.
[47] “Enhanced intel speedstep technology for the intel pen-
tium m processor,” White Paper, Intel, Mar 2004.
[48] O. Boxma and P. R. Waal, Multiserver queues with impatient
customers. Centrum voor Wiskunde en Informatica, De-
partment of Operations Research, Statistics, and System
Theory, 1993.
[49] D. Barrer, “Queuing with impatient customers and or-
dered service,” Operations Research, vol. 5, no. 5, pp. 650–
656, 1957.
[50] N. K. Boots and H. Tijms, “An M/M/c queue with impa-
tient customers,” Top, vol. 7, no. 2, pp. 213–220, 1999.
[51] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz,
A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica
et al., “A view of cloud computing,” Communications of the
ACM, vol. 53, no. 4, pp. 50–58, 2010.
Jing Mei received her BSc in Information and
Computer Science from Hunan Normal Univer-
sity in 2009. She is currently a PhD candidate in
Hunan University, China. Her research interests include processor allocation and resource management, energy-efficient scheduling for parallel and distributed computing systems, and profit optimization in cloud computing.
Kenli Li received the PhD degree in computer
science from Huazhong University of Science
and Technology, China, in 2003. He was a vis-
iting scholar at University of Illinois at Urbana
Champaign from 2004 to 2005. He is currently a
full professor of computer science and technol-
ogy at Hunan University and deputy director of
National Supercomputing Center in Changsha.
His major research includes parallel computing,
cloud computing, and DNA computing. He has
published more than 110 papers in international
conferences and journals, such as IEEE TC, IEEE TPDS, IEEE TSP, JPDC, ICPP, CCGrid, and FGCS. He is currently serving or has served on the editorial boards of IEEE Transactions on Computers and the International Journal of Pattern Recognition and Artificial Intelligence. He is an outstanding member of CCF.
Aijia Ouyang received the M.E. degree in the
department of computer science and technology
from Guangxi University for Nationalities, China,
in 2010. He is currently a Ph.D. candidate in the
College of Information Science and Engineering
at Hunan University, China. His research interests include parallel computing, cloud computing, and artificial intelligence. He has published research articles on intelligent algorithms and parallel computing in international conferences and journals.
Keqin Li is a SUNY Distinguished Professor
of computer science. His current research in-
terests include parallel computing and high-
performance computing, distributed computing,
energy-efficient computing and communication,
heterogeneous computing systems, cloud com-
puting, big data computing, CPU-GPU hybrid
and cooperative computing, multicore comput-
ing, storage and file systems, wireless communi-
cation networks, sensor networks, peer-to-peer
file sharing systems, mobile computing, service
computing, Internet of things and cyber-physical systems. He has pub-
lished over 320 journal articles, book chapters, and refereed conference
papers, and has received several best paper awards. He is currently
or has served on the editorial boards of IEEE Transactions on Paral-
lel and Distributed Systems, IEEE Transactions on Computers, IEEE
Transactions on Cloud Computing, Journal of Parallel and Distributed
Computing. He is an IEEE Fellow.