This document discusses how virtualization and cloud computing can improve disaster recovery management. It begins by describing traditional disaster recovery approaches like dedicated and shared models that require tradeoffs between cost and speed of recovery. It then explains how cloud computing provides virtualized disaster recovery mechanisms that can offer lower costs, faster recovery times through replication of virtual servers, and improved scalability and flexibility. The document concludes that cloud computing is well-suited for disaster recovery as it allows organizations to scale resources as needed and achieve more reliable continuity of operations at lower costs than traditional approaches.
The embedded technology industry is experiencing two major trends. On one hand, computation is moving away from traditional desktop and department-level computer centers. On the other hand, an increasing majority of clients consist of a growing variety of embedded devices, such as smartphones, tablet computers and television set-top boxes (STBs), whose capabilities continue to improve while also providing the data locality associated with data-intensive application processing.
Load balancing in public cloud by division of cloud based on the geographical... - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A survey of various scheduling algorithm in cloud computing environment - eSAT Publishing House
Cloud computing is a progressive innovation that has reached new heights in the field of Information Technology. It provides a source of data and application-software storage in the form of massive data centers called 'clouds', which can be accessed with the help of a network connection. These clouds boost the capabilities of enterprises without additional set-up, personnel or licensing costs. Clouds are generally deployed using Public, Private or Hybrid models, depending on the needs of the client. In this paper, we explore the cloud computing architecture, concentrating on the features of the Public, Private and Hybrid cloud models. There is a pressing need to examine the performance of a cloud environment on several metrics and to enhance its usability and capability. This paper aims at highlighting important contributions of various researchers in domains such as computational power, performance provisioning, load balancing and SLAs.
AUTO RESOURCE MANAGEMENT TO ENHANCE RELIABILITY AND ENERGY CONSUMPTION IN HET... - IJCNCJournal
Classic information processing has been replaced by cloud computing in many studies, as cloud computing has become more popular and is growing faster than other computing models. Cloud computing provides on-demand services for users. Reliability and energy consumption are two pressing challenges and trade-off problems in the cloud computing environment that require careful attention and research. This paper proposes an Auto Resource Management (ARM) scheme to enhance reliability by reducing Service Level Agreement (SLA) violations and to reduce the energy consumed by cloud computing servers. In this context, ARM consists of three components: a static/dynamic threshold, a virtual machine selection policy, and a short-prediction resource utilization method. The Minimum Utilization Non-Negative (MUN) virtual machine selection policy and the Rate of Change (RoC) dynamic threshold are presented in this paper, along with a method for choosing a value for the static threshold. To improve ARM performance, the paper proposes Short Prediction Resource Utilization (SPRU), which aims to improve decision making by including resource utilization at both the current time and a future time. The results show that SPRU enhanced the decision-making process for managing cloud computing resources and reduced energy consumption and SLA violations. The proposed scheme was tested on real workload data using the CloudSim simulator.
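The MUN policy is only named in the abstract, but a "minimum utilization, non-negative" VM choice can be sketched roughly as follows; the data layout and function name are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a minimum-utilization, non-negative VM selection
# policy: from an overloaded host, pick the VM with the smallest positive
# CPU utilization (idle VMs are excluded as migration candidates).

def select_vm(vms):
    """Return the VM dict with the smallest positive CPU utilization,
    or None if no VM on the host has positive utilization."""
    candidates = [vm for vm in vms if vm["cpu_util"] > 0.0]
    if not candidates:
        return None
    return min(candidates, key=lambda vm: vm["cpu_util"])

vms = [
    {"id": "vm1", "cpu_util": 0.42},
    {"id": "vm2", "cpu_util": 0.07},
    {"id": "vm3", "cpu_util": 0.0},   # idle VM, excluded by the policy
]
print(select_vm(vms)["id"])  # vm2
```

Migrating the least-utilized active VM keeps migration cost low while still relieving pressure on the overloaded server.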
A Prolific Scheme for Load Balancing Relying on Task Completion Time - IJECEIAES
In networks with a lot of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be addressed in networks with a large amount of computation is load balancing. Load is the number of tasks 't' performed by a computation system, and it can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning load between the nodes should enhance resource utilization and minimize computation time. This can be accomplished by a uniform distribution of load across all the nodes. A load balancing method should guarantee that each node in the network performs an almost equal amount of work relative to its capacity and availability of resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction). The algorithm selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning load to the nodes in the network.
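The abstract's goal of each node doing work proportional to its capacity can be illustrated with a simple greedy assignment; this is only a sketch of the general idea, not the authors' E-TS algorithm, whose details the abstract does not give:

```python
# Illustrative capacity-aware load assignment: each task goes to the node
# with the lowest load-to-capacity ratio, so total work per node stays
# roughly proportional to its capacity.

def assign_tasks(tasks, nodes):
    """tasks: {name: cost}; nodes: {name: relative capacity}.
    Returns (placement, final load per node)."""
    load = {n: 0 for n in nodes}
    placement = {}
    for t in tasks:
        target = min(nodes, key=lambda n: load[n] / nodes[n])
        load[target] += tasks[t]
        placement[t] = target
    return placement, load

nodes = {"n1": 4.0, "n2": 2.0}   # n1 has twice n2's capacity
tasks = {f"t{i}": 1 for i in range(1, 7)}  # six unit-cost tasks
placement, load = assign_tasks(tasks, nodes)
print(load)  # {'n1': 4, 'n2': 2}
```

With six unit tasks, the 2:1 capacity ratio yields a 4:2 split, matching the "equal work relative to capacity" guarantee described above.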
Cloud computing is an emerging technology. It processes huge amounts of data, so the scheduling mechanism plays a vital role in cloud computing. The protocol presented here is designed to minimize switching time, improve resource utilization, and improve server performance and throughput. The method is based on scheduling jobs in the cloud so as to overcome the drawbacks of existing protocols. A priority is assigned to each job, which yields better performance and minimizes waiting time and switching time, improving the efficiency and throughput of the server.
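Priority-based job scheduling of the kind the abstract describes can be sketched with a heap; the job names and priority values below are made up for illustration, and this is not the paper's actual protocol:

```python
# Minimal priority scheduling sketch: jobs are popped in priority order
# (lower number = higher priority), so urgent jobs wait less.
import heapq

def run_by_priority(jobs):
    """jobs: {name: priority}. Returns the execution order."""
    heap = [(prio, name) for name, prio in jobs.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = {"backup": 3, "user-request": 1, "report": 2}
print(run_by_priority(jobs))  # ['user-request', 'report', 'backup']
```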
A survey on dynamic energy management at virtualization level in cloud data c... - csandit
Data centers have become indispensable infrastructure for data storage and for facilitating the development of the diversified network services and applications offered by the cloud. Rapid development of these applications and services imposes various resource demands that result in increased energy consumption. This necessitates the development of efficient energy management techniques in the data center, not only for operational cost but also to reduce the amount of heat released from storage devices. Virtualization is a powerful tool for energy management that achieves efficient utilization of data center resources. Though energy management at data centers can be static or dynamic, virtualization-level energy management techniques contribute more energy conservation than hardware-level ones. This paper surveys various issues related to dynamic energy management at the virtualization level in cloud data centers.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
ABSTRACT
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost and high scalability, availability and accessibility. Serving tasks from multiple users, where the tasks have different characteristics and varying requirements for computing power, may cause under- or over-utilization of resources. Therefore, maintaining such a mega-scale datacenter requires an efficient resource management procedure to increase resource utilization. However, while maintaining efficiency in service provisioning, it is also necessary to ensure the maximization of profit for the cloud providers. Most current research aims at how providers can offer efficient service provisioning to the user and improve system performance; there are comparatively few works on resource management that also address the economic side, considering profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of the resources. The joint resource management model combines user assignment, task scheduling and load balancing on the basis of CPU power endorsement. We propose four algorithms, respectively for user assignment, task scheduling, load balancing and pricing, that work on group-based resource offering, yielding reductions in task execution time (56.3%), activated physical machines (41.44%) and provisioning cost (23%). The cost is calculated over a time interval from the number of customers served and the amount of resources used within that time.
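The final sentence describes interval-based pricing that depends on customers served and resources used; a minimal sketch of that idea follows. The rates and the linear formula are assumptions for illustration only, not the paper's pricing algorithm:

```python
# Illustrative interval-based pricing: the charge for one billing interval
# is a function of customers served and resource (CPU-hour) consumption.

def interval_cost(customers_served, cpu_hours,
                  rate_per_customer=0.05, rate_per_cpu_hour=0.12):
    """Cost over one billing interval, with hypothetical unit rates."""
    return customers_served * rate_per_customer + cpu_hours * rate_per_cpu_hour

# 40 customers and 10 CPU-hours in the interval:
print(round(interval_cost(customers_served=40, cpu_hours=10.0), 2))  # 3.2
```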
Allocation Strategies of Virtual Resources in Cloud-Computing Networks - IJERA Editor
Within distributed computing, cloud computing facilitates a pay-per-use model driven by user demand and requirements. A collection of virtual machines, including both computational and storage resources, forms the cloud. In cloud computing, the main objective is to provide efficient access to remote and geographically distributed resources. The cloud faces many challenges; one of them is the scheduling/allocation problem. Scheduling refers to a set of policies to control the order of work to be performed by a computer system. A good scheduler adapts its allocation strategy according to the changing environment and the type of task. In this paper we examine FCFS and Round Robin scheduling, in addition to Linear Integer Programming as an approach to resource allocation.
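The two schedulers named above behave quite differently on the same job set, which the following sketch contrasts; the job names and lengths are arbitrary examples, not from the paper:

```python
# FCFS runs each job to completion in arrival order; Round Robin rotates
# through jobs in fixed time quanta, so short jobs finish earlier.
from collections import deque

def fcfs_completion(jobs):
    """Completion time of each job when run strictly in arrival order."""
    t, done = 0, {}
    for name, length in jobs:
        t += length
        done[name] = t
    return done

def round_robin_completion(jobs, quantum):
    """Completion time of each job under a fixed time quantum."""
    queue = deque(jobs)
    t, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        t += slice_
        if remaining > slice_:
            queue.append((name, remaining - slice_))
        else:
            done[name] = t
    return done

jobs = [("A", 5), ("B", 2), ("C", 3)]
print(fcfs_completion(jobs))               # {'A': 5, 'B': 7, 'C': 10}
print(round_robin_completion(jobs, 2))     # {'B': 4, 'C': 9, 'A': 10}
```

Note how the short job B finishes at t=4 under Round Robin instead of waiting behind A until t=7 under FCFS; this is the trade-off a good scheduler weighs against the extra switching.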
Welcome to International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Dynamic Cloud Partitioning and Load Balancing in Cloud - Shyam Hajare
Cloud computing is an emerging and transformational paradigm in the field of information technology. It focuses on providing various services on demand; resource allocation and secure data storage are among them. Storing huge amounts of data and accessing data from such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers static as well as dynamic parameters can improve the performance of the cloud service provider and improve user satisfaction. Implementing the model can provide a dynamic way of selecting resources, depending on the state of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. The model can provide an effective load balancing algorithm over the cloud environment, better refresh-time methods and better load-status evaluation methods.
Instructional design for the production of online courses and e-learning - Montserg93
In this file you will find brief information on the following topics:
-Distance education and media
-The convergence of modalities in teaching and learning environments
-Online courses and e-learning: a question of instances and scope
-New ways of conceiving and organizing teaching and learning environments, new models for instructional design
-Guidelines for the production of online courses and e-learning
The crankshaft sensor is a crucial part of a car's engine in modern automotive engineering. It monitors the engine's various components via the vehicle's computerized engine management system. This sensor helps oversee overall engine functions, tracks the speed of crankshaft rotation, and checks the position of the engine valves with respect to the pistons.
Disaster Recovery in Business Continuity Management - ijtsrd
Cloud computing is an Internet-based computing technique in which systems are interconnected and share resources with one another. At present, every organization generates a huge volume of data in digital format that requires secure storage services. Data backup and Disaster Recovery / Business Continuity issues are becoming fundamental in networks, since the importance and social value of digital data is continuously increasing. An organization requires a Business Continuity Plan (BCP) or Disaster Recovery Plan (DRP) and data backup that falls within its cost constraints while achieving the target recovery requirements in terms of recovery time objective (RTO) and recovery point objective (RPO). Site Recovery contributes to a business continuity and disaster recovery (BCDR) strategy by orchestrating and automating replication of Azure VMs between regions, of on-premises virtual machines and physical servers to Azure, and of on-premises machines to a secondary datacenter. The proposed system provides extensive disaster recovery management using the Microsoft Azure Recovery Vault service. A backup is processed on a daily basis, which helps Small and Medium-Sized Enterprises (SMEs) cut down their costs on expensive IT infrastructure and reduce the burden on the IT environment. Jay S Patel | Keerthana V, "Disaster Recovery in Business Continuity Management", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23607.pdf
Paper URL: https://www.ijtsrd.com/computer-science/real-time-computing/23607/disaster-recovery-in-business-continuity-management/jay-s-patel
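The RTO/RPO constraint described in the abstract can be expressed as a simple feasibility check: downtime must stay within the RTO, and the worst-case data-loss window of a periodic backup (one backup interval) must stay within the RPO. The function and numbers below are illustrative assumptions, not from the paper:

```python
# Illustrative RTO/RPO feasibility check for a disaster recovery plan.
# RTO bounds how long recovery may take; RPO bounds how much data
# (measured in time) may be lost, which for periodic backups is at
# worst one backup interval.

def plan_meets_objectives(est_recovery_hours, backup_interval_hours,
                          rto_hours, rpo_hours):
    """True if the plan satisfies both the RTO and the RPO."""
    return (est_recovery_hours <= rto_hours
            and backup_interval_hours <= rpo_hours)

# Daily backups with a 4-hour estimated restore, against RTO=8h, RPO=24h:
print(plan_meets_objectives(4, 24, rto_hours=8, rpo_hours=24))   # True
# The same backups fail if restores take 12 hours against an 8-hour RTO:
print(plan_meets_objectives(12, 24, rto_hours=8, rpo_hours=24))  # False
```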
Cloud computing is a realized wonder. It delights its users by providing applications, platforms and infrastructure without any initial investment, and the "pay as you use" strategy comforts them. Usage can be increased by adding infrastructure, tools or applications to the existing application. The realistic beauty of cloud computing is that no sophisticated tool is needed for access; a web browser or even a smartphone will do. Cloud computing is a windfall for small organizations holding less sensitive information, but for large organizations the risks related to security may be daunting. Necessary steps have to be taken to manage issues like confidentiality, integrity, privacy, availability and so on. In this paper, availability is studied from a multi-dimensional perspective: it is taken as a key issue, and the mechanisms that enable its enhancement are analyzed.
Addressing the cloud computing security menace - eSAT Journals
Abstract: Cloud computing is fast gaining popularity today with its scalable, flexible and on-demand service provision. It brings cost savings and agility to organizations with a pay-as-you-go approach. Abundant resources are available and the user has a huge range to select from. The cloud facilitates virtualization, simplification, automation and accelerated delivery of applications and services for a sustainable business advantage. Alongside its technological benefits, cloud computing also has security issues, and security in cloud computing is essential for providing quality of service. In this paper we address security issues which concern the cloud computing environment today, analyzing cloud computing and the security menace it faces due to different threats. Index Terms: Cloud Computing, Cloud Service Provider (CSP), Cloud Security, Cloud User, SaaS, PaaS, IaaS, StaaS
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ... - IJTET Journal
Cloud computing provides the facility to access shared resources and common support, which contribute services on demand over the network to perform operations that meet changing business needs. A cloud storage system, consisting of a collection of storage servers, affords long-term storage services over the Internet. Storing data in a third-party cloud system causes serious concern over data confidentiality; without considering local infrastructure limitations, the cloud services allow the user to enjoy the cloud applications. As different users may be working in a collaborative relationship, data sharing becomes significant for achieving productive benefit during data access. The existing security system focuses only on authentication, ensuring that users' private data cannot be accessed by fake users. To address the above cloud storage privacy issue, a shared authority based privacy-preserving authentication (SAPA) protocol is used. In SAPA, shared access authority is achieved through anonymous access requests and privacy consideration, and attribute-based access control allows users to access their own data fields. To provide data sharing among multiple users, a proxy re-encryption scheme is applied by the cloud server. Privacy-preserving data access authority sharing is attractive for multi-user collaborative cloud applications.
A Survey on Virtualization Data Centers For Green Cloud ComputingIJTET Journal
Abstract: Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. The cloud is an atypical model for computing resources, which aims to move the computing framework to the network in order to cut down the costs of software and hardware resources. Nowadays, power is one of the big issues of IDCs (Internet Data Centers) and has huge impacts on society. Researchers are seeking solutions to make IDCs reduce power consumption. These IDCs consume large amounts of energy to process cloud services, incur high operational cost, and shorten the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with a finite number of energy resources and rising demand. The Virtual Machine (VM) mechanism has been broadly applied in data centers, offering flexibility, reliability, and manageability. This research survey presents virtualized IDCs in the green cloud and covers key features of the green cloud, cloud computing, data centers, virtualization, data centers with virtualization, power-aware, thermal-aware, network-aware and resource-aware techniques, and migration techniques. In this paper the several methods that are utilized to achieve virtualization in IDCs in green cloud computing are discussed.
Ant colony Optimization: A Solution of Load balancing in Cloud dannyijwest
Cloud computing is a new style of computing over the internet. It has many advantages along with some crucial issues that must be resolved in order to improve the reliability of the cloud environment. These issues are related to load management, fault tolerance and various security concerns in the cloud environment. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among the various nodes of a distributed system to improve both resource utilization and job response time while also avoiding a situation where some of the nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time. Many methods have come into existence to resolve this problem, such as Particle Swarm Optimization, the hash method, genetic algorithms and several scheduling-based algorithms. In this paper we propose a method based on Ant Colony Optimization to resolve the problem of load balancing in the cloud environment.
Latest development of cloud computing technology, characteristics, challenge,...sushil Choudhary
Cloud computing is a network-based environment that focuses on sharing computation. Cloud computing networks provide access to a shared pool of configurable networks, servers, storage, services, applications and other important computing resources. In the modern era of Information Technology, it gives access to all information about the important activities of the related fields. This paper discusses the advantages, disadvantages, characteristics, challenges, deployment models, cloud service models, cloud service providers and various application areas of cloud computing, such as small and large scale industry (manufacturing, automation, television, broadcast, construction), Geographical Information Systems (GIS), military intelligence fusion, business management, banking, education, healthcare, the agriculture sector, e-governance, project planning, cloud computing in the family, etc. Keywords: Cloud computing, community model, hybrid model, public model, private model
Efficient architectural framework of cloud computing Souvik Pal
Cloud computing enables adaptive, favorable and on-demand network access to a collective pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision endeavor or service provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have brought about the platform that has introduced virtualization technology into the era of high performance computing. However, clouds are an Internet-based concept and try to disguise complexity overhead for end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, which are enabled through network infrastructure, especially the internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
Cloud computing Review over various scheduling algorithmsIJEEE
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computing resources to end users and customers. Due to the complex computing environment, the use of high-level logic and task scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible and applicable in the cloud computing environment. In this paper, we review research work recently proposed by researchers on the basis of energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
Virtual Machine Migration and Allocation in Cloud Computing: A Reviewijtsrd
Cloud computing is an emerging computing technology that maintains computational resources on large data centers accessed through the internet, rather than on local computers. VM migration provides the capability to balance load, perform system maintenance, etc. Virtualization technology gives power to cloud computing. Virtual machine migration techniques can be divided into two categories: the pre-copy and post-copy approaches. The process of moving running applications or VMs from one physical machine to another is known as VM migration. In the migration process the processor state, storage, memory and network connection are moved from one host to another. Two important performance metrics are downtime and total migration time, which users care about most, because these metrics deal with service degradation and the time during which the service is unavailable. This paper focuses on the analysis of live VM migration techniques in cloud computing. Khushbu Singh Chandel | Dr. Avinash Sharma, "Virtual Machine Migration and Allocation in Cloud Computing: A Review", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-1, December 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29556.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-network/29556/virtual-machine-migration-and-allocation-in-cloud-computing-a-review/khushbu-singh-chandel
International Journal of Research in Advent Technology, Vol.2, No.5, May 2014
E-ISSN: 2321-9637
Virtualizing Disaster Recovery Management Based On Cloud Computing
Ms. Shital V. Bahale 1, Prof. Dr. Sunil Gupta 2
M.E. (II Year), Department of Computer Science and Engg., P.R.M.I.T.R, Badnera-Amravati 1
Assistant Professor, Department of Computer Science and Engg., P.R.M.I.T.R, Badnera-Amravati 2
Email: shitalbahale@rediffmail.com 1, sunilguptacse@gmail.com 2
Abstract - Almost from the beginning of the widespread adoption of computers, organizations realized that disaster recovery was a necessary component of their information technology plans. Business data had to be backed up, and key processes like order entry, billing, payroll and procurement needed to continue even if an organization's data center was disabled due to a disaster. Growing reliance on crucial computer systems means that even short periods of downtime can result in significant financial loss, or in some cases even put human lives at risk. Many business and government services utilize Disaster Recovery (DR) systems to minimize the downtime incurred by catastrophic system failures.
Cloud computing provides the third leg of a disaster recovery plan that is essential for business continuity. Cloud-based storage services take advantage of Internet access to deliver reliable, low-cost online storage, helping you to bounce back from a full-scale data center disaster for less than the cost of a dedicated online storage solution.
Virtualization is the means of ushering in a new, productive era of cloud computing, driven by this need for cost management and increased agility. Virtualization can also provide the basic building blocks for your cloud environment to enhance agility and flexibility. This paper delineates how virtualization and cloud computing can be used to address these concerns, resulting in improved computer infrastructure that can easily be restored following a natural disaster, reduced expenses, improved scalability, better performance, and easier management.
Index Terms - Disaster Recovery requirements; Dedicated and shared DR models; Virtualization; Cloud based DR mechanisms
1. INTRODUCTION
A key challenge in providing DR services is to support Business Continuity (BC), allowing applications to rapidly come back online after a failure occurs. By minimizing the recovery time and the data lost due to a disaster, a DR service can also provide BC, but typically at high cost. Cloud computing platforms are well suited for offering DR as a service due to their pay-as-you-go pricing model, which can lower costs, and their use of automated virtual platforms, which can minimize the recovery time after a failure [1,2].
A typical DR service works by replicating application state between two data centers. If the primary data center becomes unavailable, the backup site can take over and will activate a new copy of the application using the most recently replicated data [8].
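As a concrete illustration, the replicate-then-fail-over cycle just described can be sketched as a toy Python model (class and field names here are invented for illustration; this is not code from any DR product):

```python
import copy

class ReplicatedApp:
    """Toy model of a DR service: the primary site's state is shipped
    to a backup site on each replication cycle; after a failure the
    backup activates the most recently replicated copy."""

    def __init__(self):
        self.primary_state = {}
        self.backup_state = {}  # last snapshot received at the backup site

    def write(self, key, value):
        # application writes land at the primary data center
        self.primary_state[key] = value

    def replicate(self):
        # asynchronous replication: ship the current state to the backup
        self.backup_state = copy.deepcopy(self.primary_state)

    def failover(self):
        # primary lost: the backup takes over with the last replicated state
        return copy.deepcopy(self.backup_state)

app = ReplicatedApp()
app.write("order_count", 1)
app.replicate()                 # snapshot reaches the backup site
app.write("order_count", 2)     # written after the last replication cycle
recovered = app.failover()      # the post-snapshot write is lost
```

Writes made after the last replication cycle are lost on failover; bounding that loss window is exactly what the requirements in the next section formalize.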
Virtualization is the foundation for an agile, scalable cloud and the first practical step in building cloud infrastructure [11]. Virtualization abstracts and isolates the underlying hardware as virtual machines (VMs), each in its own runtime environment, with multiple VMs sharing the computing, storage, and networking resources of a single hosting environment. These virtualized resources are critical for managing data, moving it into and out of the cloud, and running applications with high utilization and high availability [14].
Virtualization is managed by a host server running a hypervisor: software, firmware, or hardware that creates and runs VMs [17]. The VMs are referred to as guest machines [6,9].
Virtualization also provides several key capabilities
for cloud computing, including resource sharing, VM
isolation, and load balancing. In a cloud environment,
these capabilities enable scalability, high utilization
of pooled resources, rapid provisioning, workload
isolation, and increased uptime.
In this paper we explore how virtualized cloud platforms can be used to provide low-cost DR solutions that are better at enabling Business Continuity. The first section of this paper discusses data recovery requirements; the second section explores traditional approaches to disaster recovery; the third section describes DR mechanisms; and the fourth section describes cloud computing mechanisms for data recovery. Lastly, the paper concludes with how organizations can use cloud computing to help plan for both mundane interruptions to service (cut power lines, server hardware failures and security breaches) as well as more infrequent disasters.
2. DATA RECOVERY REQUIREMENTS
This section discusses the key requirements for an
effective DR service. Some of these requirements
may be based on business decisions such as the
monetary cost of system downtime or data loss, while
others are directly tied to application performance
and correctness.
2.1 Recovery point objective (RPO)
The RPO of a DR system represents the point in time of the most recent backup prior to any failure; it thus bounds how much recent data can be lost when a disaster strikes.
2.2 Recovery time objective (RTO)
The RTO is an orthogonal business decision that specifies a bound on how long it can take for an application to come back online after a failure occurs. This includes the time to detect the failure, prepare any required servers in the backup site (virtual or physical), initialize the failed application, and perform the network reconfiguration required to reroute requests from the original site to the backup site so the application can be used [7]. Having a very low RTO can enable business continuity, allowing an application to seamlessly continue operating despite a site-wide disaster.
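The two objectives can be made concrete with a small calculation (the timestamps below are invented for illustration): the achieved RPO of an incident is the gap between the last successful backup and the failure, and the achieved RTO is the downtime until the application serves requests again.

```python
from datetime import datetime, timedelta

def achieved_rpo_rto(last_backup, failure, back_online):
    """Compute the achieved recovery point and recovery time for one
    incident from its timestamps (illustrative helper, not a standard API)."""
    rpo = failure - last_backup     # window of lost updates
    rto = back_online - failure     # downtime seen by users
    return rpo, rto

rpo, rto = achieved_rpo_rto(
    last_backup=datetime(2014, 5, 1, 11, 45),
    failure=datetime(2014, 5, 1, 12, 0),
    back_online=datetime(2014, 5, 1, 13, 30),
)
# 15 minutes of updates lost; 90 minutes of downtime
```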
2.3 Performance
For a DR service to be useful it must have minimal impact, under failure-free operation, on the performance of each application being protected. DR can impact performance either directly, as in the synchronous replication case where an application write will not return until it is committed remotely, or indirectly, by simply consuming disk and network bandwidth that the application could otherwise use.
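The direct cost of synchronous replication can be modeled very simply (the millisecond figures below are assumptions, not measurements): a synchronous write must wait for the remote commit, so it pays the WAN round trip on every write, while an asynchronous write returns after the local commit.

```python
def write_latency_ms(local_commit_ms, wan_rtt_ms, synchronous):
    # synchronous: the write returns only after the remote site commits,
    # so it pays the WAN round-trip time on top of the local commit;
    # asynchronous: the write returns as soon as the local commit finishes
    return local_commit_ms + wan_rtt_ms if synchronous else local_commit_ms

sync_write = write_latency_ms(1.0, 40.0, synchronous=True)    # 41.0 ms
async_write = write_latency_ms(1.0, 40.0, synchronous=False)  # 1.0 ms
```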
2.4 Consistency
The DR service must ensure that after a failure occurs
the application can be restored to a consistent state.
2.5 Geographic Separation
It is important that the primary and backup sites are
geographically separated in order to ensure that a
single disaster will not impact both sites. This
geographic separation adds its own challenges since
increased distance leads to higher WAN bandwidth
costs and will incur greater network latency.
Increased round trip latency directly impacts
application response time. Asynchronous techniques
can improve performance over longer distances, but
can lead to greater data loss during a disaster.
Distance can especially be a challenge in cloud based
DR services as a business might have only coarse
control over where resources will be physically
located.
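A back-of-the-envelope bound shows why distance matters for synchronous schemes: light in fiber travels at roughly 200,000 km/s, so separation alone puts a floor under the round-trip latency added to every synchronous write (real network paths are longer and slower than this ideal).

```python
def min_rtt_ms(separation_km):
    """Lower bound on WAN round-trip time from the speed of light in
    fiber (~200 km per millisecond), ignoring routing and queuing."""
    fiber_km_per_ms = 200.0
    return 2 * separation_km / fiber_km_per_ms

# 1000 km of primary/backup separation adds at least ~10 ms per
# synchronous write; 100 km adds only ~1 ms but weakens disaster isolation
rtt_1000km = min_rtt_ms(1000)
rtt_100km = min_rtt_ms(100)
```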
3. TRADITIONAL DISASTER RECOVERY
APPROACHES
In traditional disaster recovery models—dedicated
and shared— organizations are forced to make the
trade-off between cost and speed to recovery.
3.1 Dedicated disaster recovery model
In a dedicated model, the infrastructure is dedicated
to a single organization. This type of disaster
recovery can offer a faster time to recovery compared
to other traditional models because the IT
infrastructure is duplicated at the disaster recovery
site and is ready to be called upon in the event of a
disaster. Although this model can reduce RTO
because the hardware and software are preconfigured,
it does not eliminate all delays. The process is still
dependent on receiving a current data image, which
involves transporting physical tapes and a data
restoration process. This approach is also costly because the hardware sits idle when not being used for disaster recovery. As illustrated in Figure 1, data restoration can take up to 72 hours including the tape retrieval, travel and loading process [3].
[Figure 1. Time To Recovery using a Dedicated Infrastructure: Interrupt → Declare → H/W Setup → S/W Setup → Data Restore; declaration and setup take 6 hrs or less, data restore 4-72 hrs.]
3.2 Shared disaster recovery model
In a shared disaster recovery model, the infrastructure
is shared among multiple organizations. Shared
disaster recovery is designed to be more cost
effective because the off-site backup infrastructure is
shared among multiple organizations. After a disaster
is declared, the hardware, operating system and
application software at the disaster site must be
configured from the ground up to match the IT site
that has declared a disaster, and this process can take
hours or even days. In addition, the data restoration
process must be completed as shown in Figure 2, resulting in an average of 48 to 72 hours to recovery [3].
[Figure 2. Time To Recovery using a Shared Infrastructure: Interruption → Declare → H/W Setup → S/W Setup → Data Restore; recovery averages 48-72 hrs.]
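Since time to recovery in both models is the sum of sequential phases, the difference between them can be sketched numerically (the hour values are illustrative picks within the ranges shown in Figures 1 and 2, not measurements):

```python
# Phase durations in hours; a dedicated site has hardware and software
# preconfigured, while a shared site must set both up from scratch.
DEDICATED = {"declare": 6, "hw_setup": 0, "sw_setup": 0, "data_restore": 48}
SHARED    = {"declare": 6, "hw_setup": 12, "sw_setup": 6, "data_restore": 48}

def time_to_recovery(phases):
    # phases run one after another, so total recovery time is their sum
    return sum(phases.values())
```

With these picks the dedicated model recovers in 54 hours versus 72 for the shared model; the data-restore phase dominates both, which is the step the cloud-based approach attacks.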
With dedicated and shared disaster recovery models,
organizations have traditionally been forced to make
tradeoffs between cost and speed. As the pressure to
achieve continuous availability and reduce costs
continues to increase, organizations can no longer
accept tradeoffs. Any downtime reflects directly on
their brand image, and customers view any
interruption of key applications such as e-commerce,
online banking and customer self-service as being
unacceptable. As a result, the cost of a minute of
downtime may be thousands of dollars.
4. DR MECHANISMS
Disaster Recovery is primarily a form of long
distance state replication combined with the ability to
start up applications at the backup site after a failure
is detected. Backup mechanisms operating at the file
system or disk layer replicate all or a portion of the
file system tree to the remote site without requiring
specific application knowledge [6].
The use of virtualization makes it possible to transparently replicate not only the complete disk but also the memory context of a virtual machine, allowing it to seamlessly resume operation after a failure. However, such techniques are typically designed only for LAN environments due to their significant bandwidth and latency requirements [4,9].
DR services fall under one of the following
categories:
4.1 Hot Backup Site
A hot backup site typically provides a set of mirrored
stand-by servers that are always available to run the
application once a disaster occurs, providing minimal
RTO and RPO. Hot standbys typically use
synchronous replication to prevent any data loss due
to a disaster. This form of backup is the most
expensive since fully powered servers must be
available at all times to run the application, plus extra
licensing fees may apply for some applications.
4.2 Warm Backup Site
A warm backup site may keep state up to date with
either synchronous or asynchronous replication
schemes depending on the necessary RPO. Standby
servers to run the application after failure are
available, but are only kept in a “warm” state where it
may take minutes to bring them online. This slows
recovery, but also reduces cost.
4.3 Cold Backup Site
In a cold backup site, data is often only replicated on
a periodic basis, leading to an RPO of hours or days.
In addition, servers to run the application after failure
are not readily available, and there may be a delay of
hours or days as hardware is brought out of storage or
repurposed from test and development systems,
resulting in a high RTO. It can be difficult to support
business continuity with cold backup sites, but they
are a very low cost option for applications that do not
require strong protection or availability guarantees.
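The three categories trade cost against recovery guarantees, which can be summarized in a small table-like structure (the orders of magnitude are indicative of the discussion above, and the selection helper is a hypothetical illustration, not a prescribed procedure):

```python
# Indicative properties of the three DR site categories
DR_SITES = {
    "hot":  {"replication": "synchronous",   "rto": "seconds-minutes",
             "rpo": "near zero",  "relative_cost": 3},
    "warm": {"replication": "sync or async", "rto": "minutes",
             "rpo": "minutes",    "relative_cost": 2},
    "cold": {"replication": "periodic",      "rto": "hours-days",
             "rpo": "hours-days", "relative_cost": 1},
}

def pick_site(needs_business_continuity, budget_constrained):
    """Hypothetical selection rule: business continuity forces a hot
    site; otherwise a tight budget points to cold, and warm is the
    middle ground."""
    if needs_business_continuity:
        return "hot"
    return "cold" if budget_constrained else "warm"
```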
5. MECHANISMS FOR CLOUD DISASTER
RECOVERY
While cloud computing platforms already contain
many useful features for supporting disaster recovery,
there are additional requirements they must meet
before they can provide DR as a cloud service.
5.1 Network Reconfiguration
For a cloud DR service to provide true business continuity, it must facilitate reconfiguring the network setup for an application after it is brought online in the backup site [10]. Public Internet-facing applications would require additional forms of network reconfiguration through either modifying
DNS or updating routes to redirect traffic to the
failover site.
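The DNS-based variant of this reconfiguration can be sketched with a toy in-memory record table (an illustration of the idea only, not a real DNS API; the names and addresses are invented):

```python
class ToyDns:
    """Minimal stand-in for a DNS zone: failover repoints the
    application's name from the primary site to the backup site."""

    def __init__(self):
        self.a_records = {}

    def set_a_record(self, name, address, ttl_seconds=60):
        # a short TTL limits how long clients cache the old (dead) address
        self.a_records[name] = {"address": address, "ttl": ttl_seconds}

    def resolve(self, name):
        return self.a_records[name]["address"]

dns = ToyDns()
dns.set_a_record("app.example.com", "198.51.100.10")  # primary site
# disaster declared: repoint the name at the backup site in the cloud
dns.set_a_record("app.example.com", "203.0.113.20")
```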
5.2 Security & Isolation
The public nature of cloud computing platforms remains a concern for some businesses. In order for an enterprise to be willing to fail over from its private data center to a cloud during a disaster, it will require strong guarantees about the privacy of the storage, network, and virtual machine resources it uses [12,13].
5.3 VM Migration & Cloning
Current cloud computing platforms do not support VM migration into or out of the cloud. VM migration or cloning would simplify the failback procedure for moving an application back to its original site after a disaster has been dealt with. It would also be a useful mechanism for facilitating planned maintenance downtime [4,16].
Cloud computing offers an attractive alternative to traditional disaster recovery. "The Cloud" is inherently a shared infrastructure: a pooled set of resources with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes the cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time [5].
Because the server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour, and in many cases to minutes or even seconds. However, the costs are more consistent with shared recovery.
Figure 3. Cloud based approach to disaster recovery
Cloud computing, based on virtualization, takes a very different approach to disaster recovery. With virtualization, the entire server, including the operating system, applications, patches and data, is encapsulated into a single software bundle or virtual server [15]. This entire virtual server can be copied or backed up to an offsite data center and spun up on a virtual host in a matter of minutes.
Since the virtual server is hardware independent, the
operating system, applications, patches and data can
be safely and accurately transferred from one data
center to a second data center without the burden of
reloading each component of the server. This can
dramatically reduce recovery times compared to
conventional (non-virtualized) disaster recovery
approaches where servers need to be loaded with the
OS and application software and patched to the last
configuration used in production before the data can
be restored.
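The contrast between the two recovery paths can be sketched as step lists (the per-step durations are invented placeholders; only the shape of the comparison matters):

```python
# Conventional recovery must rebuild the server before data can be restored
CONVENTIONAL_STEPS = [("load OS", 60), ("install applications", 90),
                      ("apply patches", 60), ("restore data", 120)]
# A virtualized server is one hardware-independent bundle: copy and boot
VIRTUALIZED_STEPS = [("copy VM image to host", 10), ("boot VM", 5),
                     ("restore data", 120)]

def recovery_minutes(steps):
    # steps run sequentially, so total recovery time is their sum
    return sum(minutes for _, minutes in steps)
```

Everything before the data restore collapses into copying and booting the virtual server, which is where the dramatic RTO reduction comes from.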
The cloud shifts the disaster recovery trade-off curve
to the left, as shown below. With cloud computing
(as represented by the red arrow), disaster recovery
becomes much more cost-effective with significantly
faster recovery times.
Figure 4. Cloud Disaster Recovery Trade-offs
The cloud makes cold site disaster recovery
antiquated. With cloud computing, warm site disaster
recovery becomes a very cost-effective option where
backups of critical servers can be spun up in minutes
on a shared or private cloud host platform.
One of the most exciting capabilities of disaster
recovery in the cloud is the ability to deliver multi-site
availability. SAN replication not only provides
rapid failover to the disaster recovery site, but also
the capability to return to the production site when
the DR test or disaster event is over.
One of the added benefits of disaster recovery with cloud computing is the ability to finely tune the costs and performance of the DR platform. Applications and servers that are deemed less critical in a disaster can be tuned down with fewer resources, while ensuring that the most critical applications get the resources they need to keep the business running through the disaster.
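One way to picture this tuning is a proportional allocation of the DR platform's capacity by application criticality (a hypothetical helper; the application names and weights are invented for illustration):

```python
def allocate_dr_capacity(apps, total_units):
    """apps: list of (name, criticality_weight) pairs. More critical
    applications receive proportionally more DR capacity."""
    total_weight = sum(weight for _, weight in apps)
    return {name: round(total_units * weight / total_weight, 1)
            for name, weight in apps}

shares = allocate_dr_capacity(
    [("orders", 5), ("billing", 3), ("reporting", 1)], total_units=90)
# the order-entry system gets five times the capacity of reporting
```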
6. CONCLUSION
With pay-as-you-go pricing and the ability to scale up as conditions change, cloud computing can help organizations meet the expectations of today's frenetic, fast-paced environment where IT demands continue to increase but budgets do not. Virtualization also eliminates hardware dependencies, potentially lowering hardware requirements at the backup site.
By coordinating disaster recovery and data backup, data loss can be reduced and the reliability of data integrity improved. Future work focuses on using fault-tolerant server hardware within virtualized cloud environments to reduce management complexity and sustain high service levels.
Virtualized disaster recovery start-up can also be automated to lower recovery times after a disaster.
REFERENCES
[1] Rajkumar Buyya, Rajiv Ranjan, and Rodrigo N. Calheiros. InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services. In The 10th International Conference on Algorithms and Architectures for Parallel Processing, Busan, Korea, 2010.
[2] Emmanuel Cecchet, Anupam Chanda, Sameh Elnikety, Julie Marguerite, and Willy Zwaenepoel. Performance Comparison of Middleware Architectures for Generating Dynamic Web Content. In 4th ACM/IFIP/USENIX International Middleware Conference, June 2003.
[3] Virtualizing Disaster Recovery Using Cloud Computing. IBM Global Technology Services Thought Leadership White Paper, January 2013.
[4] Brendan Cully, Geoffrey Lefebvre, Dutch Meyer, Mike Feeley, Norm Hutchinson, and Andrew Warfield. Remus: High Availability via Asynchronous Virtual Machine Replication. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation, 2008.
[5] Albert Greenberg, James Hamilton, David A. Maltz, and Parveen Patel. The Cost of a Cloud: Research Problems in Data Center Networks. ACM SIGCOMM Computer Communication Review, Feb 2009.
[6] Kimberly Keeton, Cipriano Santos, Dirk Beyer, Jeffrey Chase, and John Wilkes. Designing for Disasters. Conference on File and Storage Technologies, 2004.
[7] Kimberly Keeton, Dirk Beyer, Ernesto Brau, Arif Merchant, Cipriano Santos, and Alex Zhang. On the Road to Recovery: Restoring Data after Disasters. European Conference on Computer Systems, 40(4), 2006.
[8] Tirthankar Lahiri, Amit Ganesh, Ron Weiss, and Ashok Joshi. Fast-Start: Quick Fault Recovery in Oracle. ACM SIGMOD Record, 30(2), 2001.
[9] VMware High Availability. http://www.vmware.com/products/high-availability/.
[10] T. Wood, A. Gerber, K. Ramakrishnan, J. Van der Merwe, and P. Shenoy. The Case for Enterprise-Ready Virtual Private Clouds. In Proceedings of the USENIX Workshop on Hot Topics in Cloud Computing (HotCloud), San Diego, CA, June 2009.
[11] Marston S., Li Z., Bandyopadhyay S., Zhang J., Ghalsasi A. (2010) 'Cloud Computing - The Business Perspective'. Decision Support Systems [online] 51 (2011) 176-189.
[12] Golden, B. (2009), 'Capex vs. Opex: Most People Miss the Point About Cloud Economics'.
[13] Fellows, W. (2009), 'The State of Play: Grid, Utility, Cloud', available at http://old.ogfeurope.eu/uploads/Industry%20Expert%20Group/FELLOWS_CloudscapeJan09-WF.pdf
[14] Marc Malizia. White Paper on Virtualization + Cloud Equals Perfect Storm for Disaster Recovery Services, Version 1.1, 20 March 2013.
[15] White Paper on Server Virtualization and Cloud Computing. Stratus Technologies, November 2011.
[16] Timothy Wood, Emmanuel Cecchet, K. K. Ramakrishnan, Prashant Shenoy, Jacobus van der Merwe, and Arun Venkataramani. Disaster Recovery as a Cloud Service: Economic Benefits & Deployment Challenges.
[17] Buyya R., Broberg J., Goscinski A.M. (2011) Cloud Computing: Principles and Paradigms. New York: John Wiley & Sons.