Resource Provisioning for Video on Demand in SaaS


International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367 (Print), ISSN 0976-6375 (Online), Volume 4, Issue 3, May-June (2013), pp. 01-09, © IAEME

RESOURCE PROVISIONING FOR VIDEO ON DEMAND IN SAAS

Praveen Reshmalal 1, Dr. S. H. Patil 2
1 Research Scholar, Bharati Vidyapeeth Deemed University College of Engineering
2 Guide, Bharati Vidyapeeth Deemed University College of Engineering

ABSTRACT

A cloud-based video-on-demand (VoD) system is proposed to monitor a camera that is accessible to clients on demand. The camera is attached to a server computer that is controlled by a cloud controller, and scheduling algorithms are used to handle multiple concurrent requests. The software provides the functionality to access the camera remotely through the cloud architecture. All of the above actions are performed by the cloud controller as a background process, transparently to the user.

Keywords: VoD; CloudSim

I. INTRODUCTION

The term cloud computing implies access to remote computing services offered by third parties via a TCP/IP connection to the public Internet [1]. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. It offers reliable services delivered through data centers that are built on compute and storage virtualization technologies [5]. It is therefore a technology that aims to deliver on-demand IT resources on a pay-per-use basis; the cloud uses the stateless protocol HTTP to communicate with client computers. The cloud computing architectural model is shown in Figure 1.
Fig. 1 The cloud computing architectural model

II. WHAT IS CLOUD COMPUTING?

Cloud computing evolved from distributed computing, parallel processing, and grid computing. It automatically splits a large computation into smaller subroutines, distributes them across an extensive system of multiple servers, and returns the results to the user after calculation and analysis. Through cloud computing, network service providers can process tens of millions or even billions of items of information in seconds, achieving network services as powerful as a "supercomputer".

Cloud computing relies on virtualization: applications are delivered over an Internet connection rather than installed on every office computer. Using virtualization, users can access servers or storage without knowing the specific server or storage details; the virtualization layer executes user requests for computing resources by accessing the appropriate resources. Virtualization can be applied to many types of computer resources: infrastructure such as storage, network, and compute (CPU, memory, etc.); platforms (such as a Linux or Windows OS); and software as a service.

Cloud computing in research and industry today has the potential to realize the idea of "computing as a utility" in the near future. The Internet is often represented as a cloud, hence the term "cloud computing". Cloud computing is the dynamic provisioning of IT capabilities/IT services (hardware, software, or services) from third parties over a network [1][2][9]. These IT services are delivered on demand, and they are delivered elastically, in terms of being able to scale out and scale in. The sections below briefly detail different types of cloud computing and how virtual machines (VMs) can be provided as cloud Infrastructure as a Service (IaaS).

III. MODELING THE VM ALLOCATION [5][6]

Cloud computing infrastructure is a massive deployment of virtualization tools and techniques: it has an extra layer, the virtualization layer, that acts as the creation, execution, management, and hosting environment for application services.
The VMs modeled in the above virtual environment are contextually isolated, but they still need to share computing resources: processing cores, the system bus, and so on. Hence, the amount of hardware resources available to each VM is constrained by the total processing power (CPU), memory, and system bandwidth available within the host. Choosing a virtual machine means selecting a configuration of CPU, memory, storage, bandwidth, etc. that is optimal for an application.

CloudSim supports VM provisioning at two levels:
- At the host level, it is possible to specify how much of the overall processing power of each core is assigned to each VM. This is known as the VM allocation policy.
- At the VM level, the VM assigns a fixed amount of its available processing power to the individual application services (task units) hosted within its execution engine. This is known as VM scheduling.

Note that at each level CloudSim implements both time-shared and space-shared provisioning policies. In this paper, we propose a VM load balancing algorithm at the VM level (time-shared VM scheduling), where individual application services are assigned varying amounts of the available processing power of the VMs. This reflects the real world: the VMs in a data center do not all have a fixed amount of processing power, which varies across computing nodes. The tasks/requests (application services) are then assigned to the most powerful VM first, then to the next most powerful, and so on, according to their priority weights. Hence, performance parameters such as overall response time and data processing time are optimized.
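The weighted, time-shared allocation just described can be sketched in a few lines. This is an illustrative Python model, not CloudSim's actual Java API; the function name, MIPS figures, and weights are hypothetical, and the sketch simplifies by assuming each task keeps its share for its whole lifetime (a real time-shared scheduler would redistribute capacity freed by finished tasks).

```python
# Sketch (not CloudSim's API): time-shared VM scheduling where each task
# receives a share of the VM's MIPS proportional to its priority weight.

def time_shared_finish_times(vm_mips, tasks):
    """tasks: list of (length_in_MI, priority_weight) pairs.
    Each task is granted vm_mips * weight / total_weight of the VM."""
    total_weight = sum(w for _, w in tasks)
    finish = []
    for length, weight in tasks:
        share = vm_mips * weight / total_weight   # MIPS granted to this task
        finish.append(length / share)             # seconds to completion
    return finish

# Two tasks of equal length: the higher-priority task finishes sooner.
times = time_shared_finish_times(vm_mips=1000.0, tasks=[(5000.0, 3), (5000.0, 1)])
assert times[0] < times[1]
```

In the same spirit, ranking VMs by MIPS and filling the most powerful first gives the priority-ordered assignment described above.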
IV. LOAD BALANCING IN CLOUD COMPUTING

Load balancing is the process of distributing load among the various resources in a system. Load therefore needs to be distributed over the resources in a cloud-based architecture so that each resource does approximately the same amount of work at any point in time. The basic need is to provide techniques that balance requests so the application's results are delivered faster.

Cloud vendors offer automatic load balancing services, which allow clients to increase the number of CPUs or the amount of memory for their resources to scale with increased demand. This service is optional and depends on the client's business needs. Load balancing thus serves two important needs: primarily to promote availability of cloud resources, and secondarily to promote performance [2,4].

In order to balance requests across resources, it is important to recognize a few major goals of load balancing algorithms:
a) Cost effectiveness: the primary aim is to achieve an overall improvement in system performance at a reasonable cost.
b) Scalability and flexibility: the distributed system in which the algorithm is implemented may change in size or topology, so the algorithm must be scalable and flexible enough to handle such changes easily.
c) Priority: prioritization of resources or jobs must be done beforehand through the algorithm itself, so that important or high-priority jobs receive better service rather than all jobs being served equally regardless of origin.
Brief reviews of a few existing load balancing algorithms follow:

I. Token Routing: The main objective of this algorithm [2,4] is to minimize system cost by moving tokens around the system. In a scalable cloud system, however, agents cannot have enough information to distribute the workload, due to communication bottlenecks, so the workload distribution among agents is not fixed. This drawback of token routing can be removed with a heuristic, token-based approach to load balancing, which provides fast and efficient routing decisions. In this algorithm an agent does not need complete knowledge of the global state or of its neighbours' workloads; to decide where to pass the token, agents build their own knowledge base, derived from previously received tokens. This approach therefore generates no communication overhead.

II. Round Robin: In this algorithm [2,5], the processes are divided among all processors, each process being assigned to a processor in round robin order. The process allocation order is maintained locally, independent of allocations at remote processors. Although the workload is distributed equally among processors, processing times differ between jobs, so at any point in time some nodes may be heavily loaded while others remain idle. This algorithm is mostly used in web servers, where HTTP requests are of a similar nature and can be distributed equally.

III. Randomized: The randomized algorithm is static in nature. In this algorithm [2,5], a process is handled by a particular node n with probability p. The process allocation order is maintained for each processor, independent of allocations at remote processors. This algorithm works well when the processes are equally loaded; problems arise when loads have different computational complexities. The randomized algorithm does not follow a deterministic approach. It works well when the Round Robin algorithm generates overhead for the process queue.

IV. Central Queuing: This algorithm [1,3] works on the principle of dynamic distribution. Each new activity arriving at the queue manager is inserted into the queue. When a request for an activity is received by the queue manager, it removes the first activity from the queue and sends it to the requester. If no ready activity is present in the queue, the request is buffered until a new activity is available; if a new activity arrives while there are unanswered requests in the queue, the first such request is removed from the queue and the new activity is assigned to it. When a processor's load falls below the threshold, the local load manager sends a request for a new activity to the central load manager, which answers the request if a ready activity is found and otherwise queues the request until a new activity arrives.

V. Connection Mechanism: A load balancing algorithm [6] can also be based on a least-connection mechanism, which is a form of dynamic scheduling. It counts the number of connections for each server dynamically to estimate its load: the load balancer records the connection count of each server, incrementing the count when a new connection is dispatched to a server and decrementing it when a connection finishes or times out.
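The least-connection mechanism in (V) can be sketched directly from the description above. This is a minimal illustrative Python sketch, not code from the cited work; the class and server names are hypothetical, and a production balancer would add health checks and concurrency control.

```python
class LeastConnectionBalancer:
    """Sketch of the least-connection mechanism: dispatch each new request
    to the server with the fewest active connections, and decrement the
    count when a connection finishes or times out."""

    def __init__(self, servers):
        self.connections = {s: 0 for s in servers}

    def dispatch(self):
        # Pick the server with the minimum recorded connection count.
        server = min(self.connections, key=self.connections.get)
        self.connections[server] += 1
        return server

    def finish(self, server):
        # Called when a connection completes or times out.
        self.connections[server] -= 1

lb = LeastConnectionBalancer(["s1", "s2"])
a = lb.dispatch()   # goes to one idle server
b = lb.dispatch()   # goes to the other idle server
lb.finish(a)        # a's server drains a connection...
c = lb.dispatch()   # ...so it is the least loaded again
assert a != b and c == a
```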
Table 1 presents a comparative study of the above-mentioned load balancing algorithms:

Table 1. Comparison of load balancing algorithms

Algorithm         Nature    Environment    Process Migration  Steadiness  Resource Utilization
Token Routing     Dynamic   Decentralized  Possible           Unstable    More
Round Robin       Static    Decentralized  Difficult          Stable      Less
Randomized        Static    Decentralized  Difficult          Stable      Less
Central Queuing   Dynamic   —              Difficult          Unstable    Less
Least Connection  Dynamic   —              Difficult          Stable      Less

V. SYSTEM ARCHITECTURE

The following structure shows the architecture of the cloud-based VoD. The node controller controls the camera. The user makes requests through the cloud controller and receives the live video feed from the camera, again through the cloud controller.

On-Demand Cloud Architecture for Video Applications:

On-demand videos can be delivered to subscribers through different network structures, i.e. depending on the video server location and the network between the video servers and the subscriber. In many cases a proxy server, located closer to the subscribers, is used to decrease network traffic and delays through a high-speed and robust connection. But a proxy server has finite storage and distribution capacity, so a popularity scheme is needed to assist in the selection of videos during caching. Video servers, on the other hand, have finite capacity and can only service a limited number of requests at one time. For a large content library and unforeseen spikes in the number of active subscribers, telcos are looking for ways to keep service call rejections to an absolute minimum. Figure 2 shows the system architecture of the on-demand cloud for IPTV. Videos can be streamed from any of the virtual servers, irrespective of capacity; the servers are realigned continuously, notably to handle peak loads, to avoid overload, and to achieve continuously high utilization levels while meeting Service Level Agreements (SLAs). In most cases performance is not affected, as each virtual server behaves like a dedicated server; however, when too many virtual servers reside on a single physical machine, services may be delivered more slowly [8].

Fig. 2 System architecture
VI. SIMULATION

Simulation is a technique in which a program models the behaviour of a system (CPU, network, etc.) by calculating the interactions between its different entities using mathematical formulas, or by capturing and playing back observations from a production system. The simulation tools available for cloud computing today include SimJava, GridSim, and CloudSim.

6.1 CloudSim [1][3][6]

CloudSim is a framework developed by the GRIDS Laboratory of the University of Melbourne which enables seamless modeling, simulation, and experimentation in designing cloud computing infrastructures. CloudSim is a self-contained platform which can be used to model video on demand, hosts, service brokers, and the scheduling and allocation policies of a large-scale cloud platform. The CloudSim framework is built on top of the GridSim framework, also developed by the GRIDS Laboratory. Hence, the researchers used CloudSim to model video-on-demand hosts and VMs for experiments in a simulated cloud environment.

A virtual machine enables the abstraction of an OS, and the applications running on it, from the hardware. The internal hardware infrastructure services related to the cloud are modelled in the CloudSim simulator by a video-on-demand element that handles service requests. These requests are application elements sandboxed within VMs, which need to be allocated a share of processing power on the video-on-demand host components. The video-on-demand object manages data management activities such as VM creation and destruction, and routes user requests.

6.2 Results

In this section, we present the evaluation of the performance of the cloud-based load balancer. The main quantities investigated in this paper are summarized as follows:
♦ λm: effective arrival rate to the main server
♦ λc: effective arrival rate to the cloud server
♦ W: average waiting time inside the system
♦ D: average delay in the buffer
♦ S: average service time in each server
♦ L: average number of requests in the system
♦ Q: average number of requests in the buffer
♦ X: average number of requests per server (server utilization)
♦ Pr: probability that a request is rejected
♦ Pd: probability that a request is serviced without being buffered
♦ Pb: probability that a request is serviced after being buffered

The simulator was validated by comparing its results with those of the proven formulas of the M/M/c/k queuing system. Results of the formulas for the average waiting times in the system and in the buffer were compared with their counterparts from the simulator, which assumes exponentially distributed random variables for both the inter-arrival time and
service time. Additionally, the average numbers of requests in the system and in the buffer from the theoretically proven formulas and from the simulation were compared. The theoretical and simulation results were almost identical. Figure 1 shows the comparison of the average numbers in the system. The average waiting times were too small to be presented.

Figure 1. Simulation validation. E[L]: expected number of requests in the system; E[Q]: expected number of requests in the queue.

CloudSim was used to calculate the average number of requests in the system. Little's law states that during the steady state of a system, the average number of requests is equal to their average arrival rate multiplied by their average time spent in the system. Little's law was used to derive the average number of requests in the system, in the buffer, and per server. To further validate the results of the simulation, the following equations were used and tested to hold true:

W = D + S
L = Q + X
Pr + Pd + Pb = 1

Figure 2 shows the breakdown of the average time spent in the system. The time spent in the servers is quite constant and represents the average service time. The request response time is dominated by the service time, as the buffering time represents only a small portion of the total response time.
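The validation above can be reproduced analytically. The following is an illustrative Python computation of the M/M/c/k steady-state formulas (not the authors' simulator), checking numerically that the identities W = D + S and L = Q + X hold; the rates lam = 8, mu = 1 and the sizes c = 10, k = 20 are arbitrary example values, and X is taken as the expected number of busy servers.

```python
from math import factorial

def mmck_metrics(lam, mu, c, k):
    """Analytic steady-state metrics for an M/M/c/k queue.
    lam: arrival rate, mu: per-server service rate,
    c: number of servers, k: system capacity (in service + buffer)."""
    a = lam / mu
    # Unnormalized state probabilities p_n for n = 0..k
    weights = [a**n / factorial(n) if n <= c
               else a**n / (factorial(c) * c**(n - c))
               for n in range(k + 1)]
    p0 = 1.0 / sum(weights)
    p = [w * p0 for w in weights]

    Pr = p[k]                       # rejection probability (system full)
    lam_eff = lam * (1 - Pr)        # effective (accepted) arrival rate
    L = sum(n * pn for n, pn in enumerate(p))             # in system
    Q = sum((n - c) * p[n] for n in range(c + 1, k + 1))  # in buffer
    X = lam_eff / mu                # expected busy servers
    W = L / lam_eff                 # time in system (Little's law)
    D = Q / lam_eff                 # delay in buffer
    S = 1.0 / mu                    # average service time
    return dict(L=L, Q=Q, X=X, W=W, D=D, S=S, Pr=Pr)

m = mmck_metrics(lam=8.0, mu=1.0, c=10, k=20)
# Consistency checks used in the paper: W = D + S and L = Q + X
assert abs(m["W"] - (m["D"] + m["S"])) < 1e-9
assert abs(m["L"] - (m["Q"] + m["X"])) < 1e-9
```

Both identities hold exactly here because L − Q equals the expected number of busy servers, which by flow balance is λeff/μ.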
The average number of requests is shown in Figure 3. The figure shows that the number of requests in the main server is 30% more than that in the cloud server. It also shows that the requests' occupancy of the buffer is noticeable only at higher offered loads (ρ ≥ 70%). This indicates that, for the model under consideration, the buffer plays a remarkable role only when the system is under stress.

Figure 4 shows that the main server is utilized at least 30% more than the cloud-based server. This result is helpful in sizing the hardware of a load-balanced main/cloud server system under a given workload.

VII. CONCLUSION

We evaluated the performance of a load-balanced cloud server system under different offered loads. The results show that the buffer of the load balancer plays a marginal role except at very high loads. They also show that the main server handles at least 30% as many requests as the cloud-based server. It would be very informative to pursue the study of optimizing the buffer size so as to minimize the rejection probability. Future work is to compare the performance of systems under different combinations of service time and inter-arrival time distributions.
VIII. REFERENCES

[1] R. Buyya, R. Ranjan, and R. Calheiros, "Modeling and simulation of scalable Cloud computing environments and the CloudSim toolkit: Challenges and opportunities", Proceedings of the Conference on High Performance Computing and Simulation (HPCS 2009), June 2009.
[2] J. Cao, G. Bennett, and K. Zhang, "Direct execution simulation of load balancing algorithms with real workload distributed", Journal of Systems and Software, vol. 54, no. 3, pp. 227-237, November 2000.
[3] Y. Cheng, K. Wang, R. Jan, C. Chen, and C. Huang, "Efficient failover and Load Balancing for dependable SIP proxy servers", IEEE Symposium on Computers and Communications, pp. 1153-1158, 2008.
[4] A. Downey, "Evidence for long-tailed distributions in the internet", Proceedings of the 1st ACM SIGCOMM Workshop on Internet Measurement, pp. 229-241, 2001.
[5] A. Downey, "Lognormal and Pareto distributions in the Internet", Computer Communications, vol. 28, no. 7, pp. 790-801, 2005.
[6] D. Ersoz, M. S. Yousif, and C. Das, "Characterizing network traffic in a cluster-based, multi-tier data center", Proceedings of the 27th International Conference on Distributed Computing Systems (ICDCS 2007), pp. 59-68, 2007.
[7] Bhathiya Wickremasinghe, Rodrigo N. Calheiros, and Rajkumar Buyya, "CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications", 20-23 April 2010, pp. 446-452.
[8] Cloud computing insights from 110 implementation projects, IBM Academy of Technology Thought Leadership White Paper, October 2010.
[9] Ioannis Psoroulas, Ioannis Anagnostopoulos, Vassili Loumos, and Eleftherios Kayafas, "A Study of the Parameters Concerning Load Balancing Algorithms", IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 4, 2007, pp. 202-214.
[10] Sandeep Sharma, Sarabjit Singh, and Meenakshi Sharma, "Performance Analysis of Load Balancing Algorithms", World Academy of Science, Engineering and Technology, 38, 2008, pp. 269-272.
[11] D. Asir, Shamila Ebenezer, and Daniel D., "Adaptive Load Balancing Techniques in Global Scale Grid Environment", International Journal of Computer Engineering & Technology (IJCET), Volume 1, Issue 2, 2010, pp. 85-96.
[12] Abhishek Pandey, R. M. Tugnayat, and A. K. Tiwari, "Data Security Framework for Cloud Computing Networks", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 178-181.
[13] Gurudatt Kulkarni, Jayant Gambhir, and Amruta Dongare, "Security in Cloud Computing", International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2013, pp. 258-265.