This document summarizes a research paper on dynamic consolidation of virtual machines in cloud data centers to manage overloaded hosts while maintaining quality of service constraints. It proposes using a Markov chain model and control algorithm to optimally detect host overloads by maximizing the average time between VM migrations, while meeting a specified QoS goal. The algorithm handles unknown workloads using a multisize sliding window approach. Evaluation shows the algorithm efficiently solves the problem of host overload detection as part of dynamic VM consolidation in cloud computing systems.
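The multisize sliding window approach can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the window sizes, the 80% overload threshold, and the rule for selecting among windows are all assumptions made for the example.

```python
from collections import deque

class MultisizeSlidingWindow:
    """Keep CPU utilization history in windows of several sizes and
    estimate the probability that a host is overloaded (a sketch)."""

    def __init__(self, sizes=(30, 60, 120), threshold=0.8):
        self.windows = {s: deque(maxlen=s) for s in sizes}
        self.threshold = threshold  # utilization treated as "overload"

    def observe(self, cpu_util):
        # Every window sees every sample; older samples age out
        # automatically once a deque reaches its maxlen.
        for w in self.windows.values():
            w.append(cpu_util)

    def overload_probability(self):
        # Prefer the largest window that is already full; otherwise fall
        # back to the most-filled one (this selection rule is an assumption).
        full = [w for w in self.windows.values() if len(w) == w.maxlen]
        window = max(full, key=len) if full else max(self.windows.values(), key=len)
        if not window:
            return 0.0
        return sum(u >= self.threshold for u in window) / len(window)

detector = MultisizeSlidingWindow(sizes=(5, 10), threshold=0.8)
for u in [0.5, 0.9, 0.95, 0.7, 0.85]:
    detector.observe(u)
print(detector.overload_probability())  # 3 of 5 samples >= 0.8 -> 0.6
```

The multiple window sizes let the estimator trade adaptivity (short windows react quickly to workload change) against statistical stability (long windows smooth out noise).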
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Design & Development of a Trustworthy and Secure Billing System for Cloud Com... (iosrjce)
Cloud computing is an important transition that is reshaping service-oriented computing technology. Cloud service providers (CSPs) follow a pay-as-you-go pricing approach: the consumer uses as many resources as needed and is billed by the provider based on the resources consumed. The CSP guarantees quality of service in the form of a service level agreement (SLA). For transparent billing, each billing transaction should be protected against forgery and false modification. Although CSPs provide service billing records, they cannot guarantee their trustworthiness, because either the user or the CSP can modify the records; in that case, even a third party cannot confirm whether the user's record or the CSP's record is correct. To overcome these limitations, we introduce a secure billing system called THEMIS. THEMIS introduces the concept of a cloud notary authority (CNA), which generates mutually verifiable binding information that can be used to resolve future disputes between the user and the CSP. This project produces secure billing by monitoring the service level agreement (SLA) with the SMon module. The CNA obtains service logs from SMon and stores them in a local repository for future reference. Even the administrator of the cloud system cannot modify or falsify the data.
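The idea of mutually verifiable binding information can be illustrated with a small sketch: each party authenticates the same transaction payload, and a notary chains the pair into a tamper-evident log. This is a generic hash-chain/HMAC construction, not THEMIS's actual protocol; the field names and keys are invented for the example.

```python
import hashlib
import hmac
import json

def binding_record(tx, user_key, csp_key, prev_digest):
    """Bind one billing transaction so neither party can later alter it
    unilaterally: each side authenticates the payload, and the notary
    chains the pair into a tamper-evident log (illustrative sketch)."""
    payload = json.dumps(tx, sort_keys=True).encode()
    user_tag = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    csp_tag = hmac.new(csp_key, payload, hashlib.sha256).hexdigest()
    # Hash chaining: each record commits to the previous digest, so
    # deleting, reordering, or rewriting records is detectable.
    digest = hashlib.sha256(
        (prev_digest + user_tag + csp_tag).encode() + payload
    ).hexdigest()
    return {"tx": tx, "user_tag": user_tag, "csp_tag": csp_tag, "digest": digest}

rec = binding_record(
    {"user": "alice", "cpu_hours": 12, "cost": 3.4},
    user_key=b"user-secret", csp_key=b"csp-secret", prev_digest="0" * 64,
)
print(rec["digest"][:16])
```

Because the digest covers both parties' tags and the previous record, any later change to a billing entry by the user, the CSP, or even an administrator produces a digest mismatch a third party can detect.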
A Virtualization Model for Cloud Computing (Souvik Pal)
Cloud computing is now an emerging field in the IT industry as well as in research. Its advancement has come about due to the fast-growing usage of the internet. Cloud computing is basically on-demand network access to a collection of physical resources that can be provisioned according to the needs of the cloud user under the supervision of the cloud service provider. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have produced a platform that has carried virtualization technology into the era of high-performance computing. Virtualization technology is widely applied in modern data centers for cloud computing; virtualization uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance. The model may help to combine self-service capabilities and ready-to-use facilities for computing resources.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
A detailed study of cloud computing is presented. Starting from the basics, its characteristics and different modalities are dwelt upon. The pros and cons of cloud computing are also highlighted, and its service models are lucidly explained.
Today, cloud computing is used in a wide range of domains. Using cloud computing, a user can utilize services and a pool of resources through the internet. The cloud computing platform guarantees subscribers that it will live up to the service level agreement (SLA) in providing resources as a service and as per need. However, it is essential that the provider be able to manage the resources effectively. One of the important roles of the cloud computing platform is to balance the load among different servers, in order to avoid overloading any host and to improve resource utilization.
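The load-balancing role described above can be sketched with the simplest placement policy, greedy least-loaded selection. This is an illustrative sketch, not a scheme from any of the papers summarized here; the host fields are invented for the example.

```python
def pick_host(hosts):
    """Return the host with the lowest current utilization ratio
    (greedy least-loaded placement; an illustrative sketch)."""
    return min(hosts, key=lambda h: h["used"] / h["capacity"])

hosts = [
    {"name": "h1", "used": 70, "capacity": 100},
    {"name": "h2", "used": 20, "capacity": 100},
    {"name": "h3", "used": 55, "capacity": 100},
]
print(pick_host(hosts)["name"])  # h2
```

Real placement policies also account for memory, network, and SLA constraints, but the core idea is the same: direct new load toward the least-utilized host to avoid overloading any single one.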
Cloud computing is defined as a distributed system containing a collection of computing and communication resources located in distributed data centers that are shared by several end users. It has been widely adopted by industry, though many open issues remain, such as load balancing, virtual machine migration, server consolidation, and energy management.
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ... (IJTET Journal)
Cloud computing provides the facility to access shared resources and common infrastructure, offering services on demand over the network to perform operations that meet changing business needs. A cloud storage system, consisting of a collection of storage servers, affords long-term storage services over the internet. Storing data in a third-party cloud system causes serious concern over data confidentiality; freed from local infrastructure limitations, cloud services allow users to enjoy cloud applications. As different users may work in a collaborative relationship, data sharing becomes significant for achieving productive benefits during data access. The existing security system focuses only on authentication, ensuring that a user's private data cannot be accessed by fake users. To address this cloud storage privacy issue, a shared-authority-based privacy-preserving authentication protocol (SAPA) is used. In SAPA, shared access authority is achieved by anonymous access requests and privacy consideration, and attribute-based access control allows users to access their own data fields. To provide data sharing among multiple users, a proxy re-encryption scheme is applied by the cloud server. Privacy-preserving data access authority sharing is attractive for multi-user collaborative cloud applications.
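Proxy re-encryption lets the cloud server transform a ciphertext encrypted for one user into one decryptable by another, without ever seeing the plaintext. The toy sketch below uses the classic ElGamal-based construction (BBS98 style) over a tiny safe-prime group; it is not the scheme from this paper, and the parameters are far too small for real use.

```python
from secrets import randbelow

# Toy parameters: safe prime p = 2q + 1; g generates the order-q subgroup.
# Real deployments use large groups or elliptic curves.
p, q, g = 23, 11, 2

def keygen():
    sk = 1 + randbelow(q - 1)            # secret exponent in [1, q-1]
    return sk, pow(g, sk, p)             # (sk, pk = g^sk)

def encrypt(pk, m):
    r = 1 + randbelow(q - 1)
    return pow(pk, r, p), (m * pow(g, r, p)) % p   # (g^{a r}, m * g^r)

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, q)) % q           # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return pow(c1, rk, p), c2                      # g^{a r} -> g^{b r}

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c1, pow(sk, -1, q), p)                # recover g^r
    return (c2 * pow(gr, -1, p)) % p

a, pk_a = keygen()                   # data owner (Alice)
b, pk_b = keygen()                   # collaborator (Bob)
m = pow(g, 7, p)                     # message encoded as a group element
ct_a = encrypt(pk_a, m)              # stored encrypted for Alice
ct_b = reencrypt(rekey(a, b), ct_a)  # proxy transforms it for Bob
print(decrypt(b, ct_b) == m)         # True
```

The proxy holds only the re-encryption key rk = b/a mod q, from which it can recover neither secret key nor plaintext, which is exactly what makes the scheme suitable for an untrusted cloud server mediating multi-user sharing.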
Cloud computing challenges with emphasis on Amazon EC2 and Windows Azure (IJCNCJournal)
Cloud computing has received much attention from the IT and business world. Compared to common computing platforms, cloud computing is more flexible in supporting real-time computation and is considered a more powerful model for hosting and delivering services over the Internet. However, since cloud computing is still in its infancy, it faces many challenges that stand against its growth and spread. This article discusses some challenges facing cloud computing growth and conducts a comparison study between Amazon EC2 and Windows Azure in dealing with such challenges. It concludes that Amazon EC2 generally offers better solutions than Windows Azure; nevertheless, the selection between them depends on the needs of customers.
Cloud computing has been an attractive research area for the last few years, and there has been tremendous growth in the number of educational institutions all over the world that have either adopted or are considering migrating to cloud computing. However, there are many concerns and reservations about adopting conventional or public cloud-based solutions. A new paradigm, the private cloud-based solution, has been proposed and has become an attractive choice for educational institutions. This paper presents the adaptation and implementation of a private cloud-based solution for a multi-campus educational institution, Al-Balqa Applied University (BAU) in Jordan.
“The upcoming sections cover introductory topic areas pertaining to the fundamental models used to categorize and define clouds and their most common service offerings, along with definitions of organizational roles and the specific set of characteristics that collectively distinguish a cloud.”
A Study of A Method To Provide Minimized Bandwidth Consumption Using Regenera... (IJERA Editor)
Cloud storage systems protect data from corruption by storing redundant data to tolerate storage failures, and lost data should be repaired when a storage node fails. Regenerating codes provide fault tolerance by striping data across multiple servers while using less repair traffic than traditional erasure codes during failure recovery. Previous research implemented a practical Data Integrity Protection (DIP) scheme for regenerating-coding-based cloud storage, building on Functional Minimum-Storage Regenerating (FMSR) codes to construct FMSR-DIP codes, which allow clients to remotely verify the integrity of random subsets of long-term archival data in a multi-server setting. The problem is to optimize bandwidth consumption when repairing multiple failures; cooperative repair of multiple failures can further save bandwidth when those failures are repaired together.
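The bandwidth saving that regenerating codes offer can be quantified with the textbook minimum-storage regenerating (MSR) repair-bandwidth formula (this formula is standard in the regenerating-codes literature, not taken from this abstract, and the numbers below are an invented example):

```python
def msr_repair_traffic(M, k, d):
    """Repair bandwidth of an MSR code for one failed node: each of
    d helper nodes sends M / (k * (d - k + 1)) units of data."""
    return d * M / (k * (d - k + 1))

M, k, n = 64.0, 4, 8     # 64 GB object stored with an (n=8, k=4) code
d = n - 1                # repair reads from all 7 surviving nodes
print(msr_repair_traffic(M, k, d))  # 28.0 GB
```

For comparison, naive repair with a conventional erasure code downloads the whole object (64 GB here) to rebuild one 16 GB node, so the regenerating code cuts repair traffic from 64 GB to 28 GB in this example.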
Efficient architectural framework of cloud computing (Souvik Pal)
Cloud computing enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible management effort or service provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have produced a platform that has carried virtualization technology into the era of high-performance computing. Clouds are an Internet-based concept and try to hide complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
“The chapter is organized into two primary sections that explore cloud delivery model issues pertaining to cloud providers and cloud consumers respectively.”
A study of security issues & challenges in cloud computing (ijsrd.com)
"Cloud computing" is a term that involves virtualization, distributed computing, networking, and web services. It is a way of offering services to users by allowing them to tap into a massive pool of shared computing resources such as servers, storage, and networks. A user can consume services by simply plugging into the cloud and paying only for what is used. All these features make cloud computing very advantageous and in demand. But data privacy is a key security problem in cloud computing, comprising data integrity, data confidentiality, and user-privacy concerns. Many people prefer not to store their data in the cloud, fearing the loss of privacy of their confidential data. This paper introduces some cloud computing data security problems and strategies to solve them that also reassure users about their data security.
The recent surge in cloud computing arises from its ability to provide software, infrastructure, and platform services without requiring large investments or expenses to manage and operate them. Clouds typically involve service providers, infrastructure/resource providers, and service users (or clients). They include applications delivered as services, as well as the hardware and software systems providing those services. Our proposed framework for generic cloud collaboration allows clients and cloud applications to simultaneously use services from, and route data among, multiple clouds. The framework supports universal and dynamic collaboration in a multicloud system: it lets clients simultaneously use services from multiple clouds without prior business agreements among cloud service providers (CSPs) and without adopting common standards and specifications.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVR (IJERD Editor)
Power quality has become an increasingly pivotal issue for industrial electricity consumers in recent times. Modern industries employ sensitive power electronic equipment, control devices, and non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage disturbances are the most common power quality problem, owing to the increased use of large numbers of sophisticated and sensitive electronic devices in industrial systems. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic distortion experienced by sensitive loads. Power quality problems appear as non-standard voltage, current, and frequency; voltage sag, swell, flicker, and harmonics are some of the problems that affect sensitive electronic loads in a power system. The compensation capability of a DVR depends primarily on its maximum voltage injection ability and the amount of stored energy available within the restorer. The device is connected in series with the distribution feeder at medium voltage. A fuzzy logic controller is used to produce the gate pulses for the DVR control circuit, and the circuit is simulated using MATLAB/SIMULINK software.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha... (IJERD Editor)
This paper presents a component within the flexible AC transmission system (FACTS) family, called the distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC) with the common dc link eliminated. The DPFC has the same control capabilities as the UPFC, comprising adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange between the shunt and series converters, which passes through the common dc link in the UPFC, now passes through the transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase converters, which reduces equipment cost, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented, along with the corresponding simulation results carried out on a scaled prototype.
Router 1X3 – RTL Design and Verification (IJERD Editor)
Routing is the process of moving a packet of data from source to destination, enabling messages to pass from one computer to another until they reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines within a single network). This paper mainly emphasizes the study of the router device, its top-level architecture, and how the various sub-modules of the router, i.e. the register, FIFO, FSM, and synchronizer, are synthesized, simulated, and finally connected to its top module.
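The role the FIFO sub-module plays between the router's input and output ports can be sketched behaviorally. This is an illustrative Python model of a bounded FIFO with full/empty status flags, not the paper's RTL; the depth and packet bytes are invented for the example.

```python
class Fifo:
    """Bounded FIFO with full/empty status flags, modeling the buffering
    a router FIFO provides between input and output ports (a sketch)."""

    def __init__(self, depth):
        self.depth = depth
        self.buf = []

    @property
    def full(self):
        return len(self.buf) == self.depth

    @property
    def empty(self):
        return not self.buf

    def write(self, byte):
        # In RTL, writing while full would be gated off by the full flag.
        if self.full:
            raise OverflowError("write while full")
        self.buf.append(byte)

    def read(self):
        if self.empty:
            raise IndexError("read while empty")
        return self.buf.pop(0)   # first-in, first-out

f = Fifo(depth=4)
for b in (0x01, 0xAB, 0xCD):
    f.write(b)
print(f.read(), f.empty)  # 1 False
```

In the actual design the full and empty flags drive handshaking between clock domains via the synchronizer, which is what makes the FIFO the natural buffering element between the router's sub-modules.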
Reducing Corrosion Rate by Welding Design (IJERD Editor)
The paper addresses the importance of welding design in preventing corrosion in steel. Welding is used to join pipes, profiles on bridges, spindles, and many other parts of engineering constructions. Problems associated with welding, especially corrosion, are common issues in these fields. Corrosion can be reduced by many methods, including painting, humidity control, and good welding design. The research found that reducing residual stress in the weld helps to reduce the corrosion rate.
Preheating at 500 °C and 600 °C gives better corrosion-rate reduction than preheating at 400 °C. Across all welding groove types, material preheated to 500 °C or 600 °C lost 0.5%-0.69% of its mass after a 14-day corrosion test, while material preheated to 400 °C lost 0.57%-0.76%.
The welding groove also influences the corrosion rate: X and V type grooves give better corrosion-rate reduction than 1/2V and 1/2X grooves. After the 14-day corrosion test, samples with an X groove lost 0.5%-0.57%, samples with a V groove lost 0.51%-0.59%, and samples with 1/2V and 1/2X grooves lost 0.58%-0.71%.
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ...IJTET Journal
Cloud computing provides the facility to access shared resources and common support which contributes services on
demand over the network to perform operations that meet changing business needs. A cloud storage system, consisting of a collection
of storage servers, affords long-term storage services over the internet. Storing the data in a third party cloud system cause serious
concern over data confidentiality, without considering the local infrastructure limitations, the cloud services allow the user to enjoy the
cloud applications. As the different users may be working in the collaborative relationship, the data sharing becomes significant to
achieve productive benefit during the data accessing. The existing security system only focuses on the authentication; it shows that
user’s private data cannot be accessed by the fake users. To address the above cloud storage privacy issue shared authority based
privacy-preserving authentication protocol is used. In the SAPA, the shared access authority is achieved by anonymous access request
and privacy consideration, attribute based access control allows the user to access their own data fields. To provide the data sharing
among the multiple users proxy re-encryption scheme is applied by the cloud server. The privacy-preserving data access authority
sharing is attractive for multi-user collaborative cloud applications.
Cloud computing challenges with emphasis on amazon ec2 and windows azureIJCNCJournal
Cloud Computing has received much attention by the IT-Business world. As compared to the common
computing platforms, cloud computing is more flexible in supporting real-time computation and is
considered a more powerful model for hosting and delivering services over the Internet. However, since
cloud computing is still at its infancy, it faces many challenges that stand against its growth and spread.
This article discusses some challenges facing cloud computing growth and conducts a comparison study
between Amazon EC2 and Windows Azure in dealing with such challenges. It concludes that Amazon EC2
generally offers better solutions than Windows Azure. Nevertheless, the selection between them depends on
the needs of customers.
Cloud Computing is an attractive research area for the last few years; and there have been a tremendous grows in the number of educational institutions all over the world who have either adopted or are considering migrating to cloud computing. However, there are many concerns and reservations about adopting conventional or public cloud based solutions. A new paradigm of cloud based solution has been proposed, namely, the private cloud based solutions, which becomes an attractive choice to educational Institutions. This paper presents the adjustment and implementation of private-based cloud solution for multi-campus educational institution, namely, Al-Balqa Applied University (BAU) in Jordan.
A Secure Cloud Storage System with Data Forwarding using Proxy Re-encryption ...IJTET Journal
Cloud computing provides the facility to access shared resources and common support which contributes services on demand over the network to perform operations that meet changing business needs. A cloud storage system, consisting of a collection of storage servers, affords long-term storage services over the internet. Storing the data in a third party cloud system cause serious concern over data confidentiality, without considering the local infrastructure limitations, the cloud services allow the user to enjoy the cloud applications. As the different users may be working in the collaborative relationship, the data sharing becomes significant to achieve productive benefit during the data accessing. The existing security system only focuses on the authentication; it shows that user’s private data cannot be accessed by the fake users. To address the above cloud storage privacy issue shared authority based privacy-preserving authentication protocol is used. In the SAPA, the shared access authority is achieved by anonymous access request and privacy consideration, attribute based access control allows the user to access their own data fields. To provide the data sharing among the multiple users proxy re-encryption scheme is applied by the cloud server. The privacy-preserving data access authority sharing is attractive for multi-user collaborative cloud applications.
“The upcoming sections cover introductory topic areas pertaining to the fundamental models used to categorize and define clouds and their most common service offerings, along with definitions of organizational roles and the specific set of characteristics that collectively distinguish a cloud.”
A Study of A Method To Provide Minimized Bandwidth Consumption Using Regenera...IJERA Editor
Cloud storage systems protect data from corruption by storing it redundantly to tolerate storage failures, and lost data should be repaired when a storage server fails. Regenerating codes provide fault tolerance by striping data across multiple servers while using less repair traffic than traditional erasure codes during failure recovery. Previous research implemented a practical data integrity protection (DIP) scheme for regenerating-code-based cloud storage: on top of functional minimum-storage regenerating (FMSR) codes, it constructs FMSR-DIP codes, which allow clients to remotely verify the integrity of random subsets of long-term archival data in a multi-server setting. The problem addressed here is optimizing bandwidth consumption when repairing multiple failures: cooperative repair of multiple failures can further reduce bandwidth consumption.
Efficient architectural framework of cloud computing Souvik Pal
Cloud computing enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible management effort or service-provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have brought virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to hide this complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
“The chapter is organized into two primary sections that explore cloud delivery model issues pertaining to cloud providers and cloud consumers respectively.”
A study of security issues & challenges in cloud computingijsrd.com
"Cloud computing" is a term that involves virtualization, distributed computing, networking, and web services. It is a way of offering services to users by allowing them to tap into a massive pool of shared computing resources such as servers, storage, and networks. Users can consume services by simply plugging into the cloud and paying only for what they use. All these features make cloud computing very advantageous and in demand. But data privacy is a key security problem in cloud computing, comprising data integrity, data confidentiality, and user-privacy-specific concerns. Many people prefer not to store their data in the cloud for fear of losing the privacy of their confidential data. This paper introduces some cloud computing data security problems and strategies to solve them that also satisfy users regarding their data security.
Best cloud computing training institute in noidataramandal
TECHAVERA is offering best In Class, Corporate and Online cloud computing Training in Noida. TECHAVERA Delivers best cloud Live Project visit us - http://www.techaveranoida.in/best-cloud-computing-training-in-noida.php
The recent surge in cloud computing arises from its ability to provide software, infrastructure, and platform services without requiring large investments or expenses to manage and operate them. Clouds typically involve service providers, infrastructure/resource providers, and service users (or clients). They include applications delivered as services, as well as the hardware and software systems providing these services. Our proposed framework for generic cloud collaboration allows clients and cloud applications to simultaneously use services from, and route data among, multiple clouds. This framework supports universal and dynamic collaboration in a multicloud system. It lets clients simultaneously use services from multiple clouds without prior business agreements among cloud service providers (CSPs), and without adopting common standards and specifications.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVRIJERD Editor
Power quality has become an increasingly pivotal issue from the industrial electricity consumers' point of view in recent times. Modern industries employ sensitive power-electronic equipment, control devices, and non-linear loads as part of automated processes to increase energy efficiency and productivity. Voltage disturbances are the most common power quality problem, and the growing use of sophisticated, sensitive electronic equipment in industrial systems has made them more consequential. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic distortion seen by sensitive loads. Power quality problems arise from non-standard voltage, current, and frequency; voltage sag, swell, flicker, and harmonics are among the disturbances affecting sensitive loads. The compensation capability of a DVR depends primarily on its maximum voltage-injection ability and the amount of stored energy available within the restorer. The device is connected in series with the distribution feeder at medium voltage. A fuzzy logic controller produces the gate pulses for the DVR control circuit, and the circuit is simulated using MATLAB/SIMULINK.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha...IJERD Editor
This paper presents a component within the flexible ac-transmission system (FACTS) family, called
distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC)
with an eliminated common dc link. The DPFC has the same control capabilities as the UPFC, which comprise
the adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange
between the shunt and series converters, which is through the common dc link in the UPFC, is now through the
transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase converters, which reduces equipment cost, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the corresponding
simulation results that are carried out on a scaled prototype are also shown.
Router 1X3 – RTL Design and VerificationIJERD Editor
Routing is the process of moving a packet of data from source to destination and enables messages
to pass from one computer to another and eventually reach the target machine. A router is a networking device
that forwards data packets between computer networks. It is connected to two or more data lines from different
networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device, its top-level architecture, and how the various sub-modules of the router, i.e., register, FIFO, FSM, and synchronizer, are synthesized, simulated, and finally connected to the top module.
Reducing Corrosion Rate by Welding DesignIJERD Editor
The paper addresses the importance of welding design in preventing corrosion of steel. Welding is used to join pipes, bridge profiles, spindles, and many other parts of engineering construction. Problems associated with welding, especially corrosion, are common issues in these fields. Corrosion can be reduced by many methods: painting, controlling humidity, and also good welding design. The research found that reducing residual stress in the weld helps reduce the corrosion rate.
Preheating at 500 °C and 600 °C gives a better corrosion-rate reduction than preheating at 400 °C. Across all welding groove types, material preheated at 500 °C or 600 °C lost 0.5%-0.69% after a 14-day corrosion test, while material preheated at 400 °C lost 0.57%-0.76%.
The welding groove also influences the corrosion rate. X and V groove types reduce the corrosion rate better than 1/2V and 1/2X grooves. After the 14-day corrosion test, samples with an X welding groove lost 0.5%-0.57%, samples with a V groove lost 0.51%-0.59%, and samples with 1/2V and 1/2X grooves lost 0.58%-0.71%.
Gold prospecting using Remote Sensing ‘A case study of Sudan’IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first
place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian
crystalline rocks on the flanks of the Red Sea. The crystalline rocks are mostly Neoproterozoic in age. ANS
spans parts of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. The Arabian-Nubian Shield consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs were welded together along ophiolite-decorated sutures. Primary Au mineralization probably developed in association with the growth of intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills Region is a vast span of rugged, harsh and inhospitable sector of the Earth with
inimical moon-like terrain, nevertheless since ancient times it is famed to be an abode of gold and was a major
source of wealth for the Pharaohs of ancient Egypt. The Pharaohs old workings have been periodically
rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the
discovery of a score of occurrences with gold and massive sulphide mineralizations. In the nineties of the
previous century the Geological Research Authority of Sudan (GRAS) in cooperation with BRGM utilized
satellite data of Landsat TM using spectral ratio technique to map possible mineralized zones in the Red Sea
Hills of Sudan. The outcome of the study mapped a gossan type gold mineralization. Band ratio technique was
applied to the Arbaat area and a signature of an alteration zone was detected. Alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold
mineralization that was discovered using remote sensing is the gold associated with metachert in the Atmur
Desert.
Influence of tensile behaviour of slab on the structural Behaviour of shear c...IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors
like studs installed on the top flange of the steel beam to form a structure behaving monolithically. This study
analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection like slip
stiffness and maximum shear force in composite beams subjected to hogging moment. The results show that the
shear studs located in the crack-concentration zones due to large hogging moments sustain significantly smaller
shear force and slip stiffness than the other zones. Moreover, the reduction of the slip stiffness in the shear
connection appears also to be closely related to the change in the tensile strain of rebar according to the increase
of the load. Further experimental and analytical studies shall be conducted considering variables such as the
reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection
in composite beams subjected to hogging moment.
A Novel Method for Prevention of Bandwidth Distributed Denial of Service AttacksIJERD Editor
Distributed denial of service (DDoS) attacks have become a massive threat to the Internet, whose traditional architecture is vulnerable to them. An attacker first acquires an army of zombies, then instructs that army when to start an attack and whom to attack. In this paper, the techniques used to perform DDoS attacks, the tools used to mount them, and countermeasures for detecting attackers and eliminating bandwidth distributed denial of service (B-DDoS) attacks are reviewed. DDoS attacks are carried out using various flooding techniques.
The main purpose of this paper is to design an architecture that can mitigate bandwidth distributed denial of service attacks and keep the victim site or server available to normal users by eliminating the zombie machines. Our primary focus is to discuss how normal machines turn into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment with Cisco switches, routers, a firewall, several virtual machines, and attack tools to display a real DDoS attack. Using time scheduling, resource limiting, system logs, access control lists, and the Modular Policy Framework, we stopped the attack and identified the attacker (bot) machines.
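Of the countermeasures listed above, resource limiting is the most mechanical. A minimal token-bucket sketch (with hypothetical rates, not the paper's Cisco configuration) shows how a per-source limit drops flood traffic while leaving normal clients largely unaffected:

```python
class TokenBucket:
    """Per-source rate limiter: allows `rate` packets/sec with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill tokens according to elapsed time, then spend one if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)  # hypothetical: 10 pkt/s, burst of 5
# A zombie flooding 100 packets within one second gets most of them dropped:
accepted = sum(bucket.allow(now=i / 100) for i in range(100))
```

In practice this logic lives in the firewall or router (e.g. via rate-limiting ACLs) rather than in application code; the sketch only illustrates why a flood saturates the bucket while a well-behaved source never notices it.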
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electromechanical-systems (MEMS) audio transducers and ultra-low-power scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Virtual Machine Migration and Allocation in Cloud Computing: A Reviewijtsrd
Cloud computing is an emerging computing technology that maintains computational resources in large data centers accessed through the Internet, rather than on local computers. VM migration provides the capability to balance load, perform system maintenance, and more; virtualization technology gives cloud computing its power. Virtual machine migration techniques can be divided into two categories: the pre-copy and post-copy approaches. The process of moving running applications or VMs from one physical machine to another is known as VM migration; during migration, the processor state, storage, memory, and network connections are moved from one host to another. Two important performance metrics are downtime and total migration time, which users care about most because they determine service degradation and the time during which the service is unavailable. This paper focuses on the analysis of live VM migration techniques in cloud computing. Khushbu Singh Chandel | Dr. Avinash Sharma "Virtual Machine Migration and Allocation in Cloud Computing: A Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-1, December 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29556.pdf Paper URL: https://www.ijtsrd.com/computer-science/computer-network/29556/virtual-machine-migration-and-allocation-in-cloud-computing-a-review/khushbu-singh-chandel
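The two metrics the review highlights, downtime and total migration time, can be illustrated with a toy pre-copy model. All numbers below are illustrative assumptions, not measurements from the surveyed work: each round re-sends the pages dirtied during the previous round, and the VM pauses only for the final stop-and-copy.

```python
def precopy_migrate(mem_pages, dirty_rate, bandwidth, stop_threshold):
    """Toy pre-copy model returning (total_migration_time, downtime) in seconds.

    mem_pages: VM memory size in pages; bandwidth: pages/s transferred;
    dirty_rate: pages dirtied per second while the VM keeps running.
    (Post-copy would instead resume the VM first and fetch pages on demand,
    trading near-zero downtime for remote page faults.)
    """
    total_time = 0.0
    to_send = mem_pages
    while to_send > stop_threshold:
        round_time = to_send / bandwidth
        total_time += round_time
        # Pages dirtied while this round was being copied must be re-sent.
        to_send = min(mem_pages, dirty_rate * round_time)
        if dirty_rate >= bandwidth:   # never converges: give up on pre-copy
            break
    downtime = to_send / bandwidth     # VM paused for the final stop-and-copy
    return total_time + downtime, downtime

# Hypothetical VM: 1M pages, 100k pages/s link, 10k pages/s dirtying:
total, down = precopy_migrate(mem_pages=1_000_000, dirty_rate=10_000,
                              bandwidth=100_000, stop_threshold=5_000)
```

The model makes the abstract's point concrete: pre-copy shrinks downtime geometrically as long as the dirty rate stays below the link bandwidth, at the price of a longer total migration time.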
Short Economic EssayPlease answer MINIMUM 400 word I need this.docxbudabrooks46239
Short Economic Essay
Please answer MINIMUM 400 word
I need this within a maximum of 2.5 hours because I am currently taking the online final exam and the clock is ticking.
Question:
What is the purpose of the term sheet and why is it important? Be sure to write a detailed long essay to this question. Think about who the term sheet is written for, why it is written, and what does it need to convey.
Cloud Computing: Virtualization and Resiliency for
Data Center Computing
Valentina Salapura
IBM T. J. Watson Research Center
Yorktown Heights, NY, USA
[email protected]
Index Terms — Cloud computing, data center management,
data center optimization, virtualization, Infrastructure as a
service (IaaS), Platform as a service (PaaS), Software as a service
(SaaS), high availability, disaster recovery, virtual appliance.
INTRODUCTION
Cloud computing is being rapidly adopted across the IT
industry, driven by the need to reduce the total cost of
ownership of increasingly more demanding workloads. Within
companies, private clouds are offering a more efficient way to
manage and use private data centers. In the broader
marketplace, public clouds offer the promise of buying
computing capabilities based on a utility model. This utility
model enables IT consumers to purchase compute resources on
demand to fit current business needs and scale expenses
associated with computing resources. Thus, cloud computing
allows IT to be treated as an ongoing variable operating expense
billed by usage rather than requiring capital expenditures that
must be planned years in advance. Advantageously, operating
expenses can be charged against the revenue generated by these
expenses directly. In contrast, capital expenses incurred by the
purchase of a system need to be paid at the time of purchase,
but can only be depreciated to reduce the taxable income over
the lifetime of the system.
THE MAIN ATTRIBUTES OF CLOUD COMPUTING
The main attributes of cloud computing are scalable,
shared, on-demand computing resources delivered over the
network, and pay-per-use pricing. This offers flexibility in
using as few or as many IT resources as needed at any point in
time. Thus, users do not need to predict future resources they
might need, and to commit to capital investment in hardware.
This is especially advantageous for start-ups, and small and
medium businesses which might otherwise not be able to afford
the IT infrastructure they need to support their growing
business. At the same time, redirecting capital investment from
IT infrastructure to the core business is attractive even for large
and financially strong businesses.
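The pay-per-use attribute described above reduces to simple arithmetic; the demand profile and the $0.10 per core-hour rate below are illustrative assumptions, not any provider's actual pricing.

```python
# Hedged sketch: usage-based billing vs provisioning for peak demand.
hourly_cores = [2, 2, 3, 8, 20, 6, 3, 2]  # hypothetical cores used each hour
rate = 0.10                                # assumed $ per core-hour, on demand

# Pay-per-use: billed only for what was consumed (46 core-hours here).
pay_per_use = sum(h * rate for h in hourly_cores)

# Owning hardware means paying for peak capacity around the clock
# (20 cores for all 8 hours = 160 core-hours).
peak_provisioned = max(hourly_cores) * rate * len(hourly_cores)
```

The gap between the two totals is exactly the flexibility the text describes: the burst to 20 cores is paid for only during the hour it happens, instead of being capacity that sits idle the rest of the day.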
From a technical perspective, cloud computing brings the
benefits of virtualization and multi-tenancy to scale-out
systems. Virtualization techniques allow multiple system
images to share the same hardware resources: CPU
virtualization techniques create multiple virtual hardware
systems, while network virtualization ...
Dynamic Resource Provisioning with Authentication in Distributed DatabaseEditor IJCATR
Data centers have the largest energy consumption in shared power environments. Public cloud workloads carry different priorities and performance requirements across applications [4]. Cloud data centers are capable of sensing opportunities to present different programs. The proposed construction addresses security levels and privacy leakage in distributed cloud systems; exploiting their persistent characteristics yields substantial increases in information that can be used to augment profit, reduce overhead, or both. Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. Three empirical algorithms have been proposed; the estimated assignment ratios are dissected theoretically and compared using real Internet latency data across the tested methods.
Hybrid Based Resource Provisioning in CloudEditor IJCATR
The data centres and the energy-consumption characteristics of their various machines are often noted to have different capacities. When analysing public cloud workloads of different priorities and the performance requirements of various applications, we noted some invariant reports about the cloud. Cloud data centres become capable of sensing an opportunity to present a different program. In our proposed work, we use a hybrid method for resource provisioning in data centres. This method is used to allocate resources under working conditions and to account for the energy drawn in power consumption. The proposed method is used to allocate the processes behind the cloud storage.
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu...Editor IJCATR
Cloud computing has recently emerged as one of the buzzwords in the IT industry. Several IT vendors are promising to offer computation, data/storage, and application hosting services, offering Service-Level Agreement (SLA)-backed performance and uptime promises for their services. While these 'clouds' are the natural evolution of traditional clusters and data centers, they are distinguished by a pricing model where customers are charged based on their utilization of computational resources, storage, and transfer of data. They offer subscription-based access to infrastructure, platforms, and applications that are popularly termed IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). In order to improve the profit of service providers, we implement a technique called hybrid pricing, which pools fixed and spot pricing techniques.
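The abstract does not spell out how the fixed and spot components are pooled; a minimal sketch, assuming a reserved quota billed at a fixed price with any overflow billed at the current spot price, looks like this (all prices hypothetical):

```python
def hybrid_charge(demand, reserved, fixed_price, spot_price):
    """Hybrid pricing sketch: up to `reserved` units are billed at the
    fixed price; any excess is billed at the (fluctuating) spot price.
    This pooling rule is an assumption, not the paper's exact model."""
    base = min(demand, reserved) * fixed_price
    overflow = max(0, demand - reserved) * spot_price
    return base + overflow

# 12 units demanded against a 10-unit reservation, spot currently cheap:
cost = hybrid_charge(demand=12, reserved=10, fixed_price=1.0, spot_price=0.4)
```

The appeal of the hybrid model is visible in the two branches: the fixed component gives the provider predictable revenue, while the spot component lets customers absorb demand spikes at market rates.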
GROUP BASED RESOURCE MANAGEMENT AND PRICING MODEL IN CLOUD COMPUTINGijcsit
Cloud computing utilizes large-scale computing infrastructure that has been radically changing the IT landscape, enabling remote access to computing resources with low service cost and high scalability, availability, and accessibility. Serving tasks from multiple users, where the tasks differ in their computing-power requirements, may cause under- or over-utilization of resources. Maintaining such a mega-scale datacenter therefore requires efficient resource management to increase resource utilization. However, while maintaining efficiency in service provisioning, it is necessary to ensure profit maximization for the cloud providers. Most current research addresses how providers can offer efficient service provisioning and improve system performance; comparatively few works on resource management also address the economics of profit maximization for the provider. In this paper we present a model that deals with both efficient resource utilization and pricing of the resources. The joint resource management model combines user assignment, task scheduling, and load balancing based on CPU power endorsement. We propose four algorithms, respectively for user assignment, task scheduling, load balancing, and pricing, that work on group-based resources, offering reductions in task execution time (56.3%), activated physical machines (41.44%), and provisioning cost (23%). The cost is calculated over a time interval involving the number of customers served and the amount of resources used within that time.
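The abstract does not specify the four algorithms, but the load-balancing step it names can be sketched with a generic greedy least-loaded placement over CPU demand. This is a textbook baseline, not the paper's method:

```python
import heapq

def balance(tasks, machines):
    """Greedy load balancing sketch: assign each task (by CPU demand) to the
    currently least-loaded physical machine, largest tasks first."""
    heap = [(0.0, m) for m in range(machines)]      # (current load, machine id)
    heapq.heapify(heap)
    placement = {}
    for task_id, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, m = heapq.heappop(heap)               # least-loaded machine
        placement[task_id] = m
        heapq.heappush(heap, (load + demand, m))
    return placement

tasks = {"t1": 4.0, "t2": 3.0, "t3": 3.0, "t4": 2.0}  # hypothetical CPU demands
plan = balance(tasks, machines=2)
```

Spreading load this way keeps machines evenly utilized, which is the utilization goal the abstract ties to reduced provisioning cost; activating fewer machines (bin-packing rather than spreading) would be the complementary consolidation objective.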
An Efficient MDC based Set Partitioned Embedded Block Image CodingDr. Amarjeet Singh
In this paper, fast, efficient, simple and widely used
Set Partitioned Embedded bloCK based coding is done on
Multiple Descriptions of transformed image. The maximum
potential of this type of coding can be exploited with discrete
wavelet transform (DWT) of images. Two correlated
descriptions are generated from a wavelet transformed image
to ensure meaningful transmission of the image over noise
prone wireless channels. These correlated descriptions are
encoded by set partitioning technique through SPECK coders
and transmitted over wireless channels. Quality of
reconstructed image at the decoder side depends upon the
number of descriptions received. More the number of
descriptions received at output side, more enhance the quality
of reconstructed image. However, if any of the multiple
description is lost, the receive can estimate it exploiting the
correlation between the descriptions. The simulations
performed on an image on MATLAB gives decent
performance and results even after half of the descriptions is
lost in transmission.
The swiftly increasing demand for computation in business processes, file transfer under various protocols, and data centers drives the development of an emerging technology catering to computational needs and to highly manageable, secure storage. Cloud computing is the best answer to these technological desires, introducing various kinds of service platforms in high-performance computing environments. Cloud computing is the most recent paradigm promising to turn the vision of "computing utilities" into reality; the term "cloud computing" is relatively new, and there is no universal agreement on its definition. In this paper, we survey different areas of research and novelty in the cloud computing domain and its usefulness in the genre of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with comparatively high cost for both private and public clouds. It is a way of congregating masses of information and resources stored in personal computers and other gadgets and putting them on the public cloud to serve users. Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era. It authorizes users to transfer their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged as a systematically and strategically distributed platform. The idea of cloud computing has not only revitalized the field of distributed systems but also fundamentally changed how business utilizes computing today. Resource management in cloud computing is a genuinely hard problem, owing to the scale of modern data centers, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.
Understanding the cloud computing stackSatish Chavan
Understanding the cloud computing stack
Introduction
Key characteristics
At a Glance
Standardization, Migration &Adaptation
Service models
Deployment models
Network as a Service
Software as a Service (SaaS).
Platform as a Service (PaaS).
Infrastructure as a Service (IaaS).
Communications as a Service (CaaS)
Data as a Service - DaaS
Benefits & Challenges
Security Risks & Challenges
Cloud Vendors
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Adaptive offloading in mobile cloud computing, through automatic partitioning of tasks, augments execution by migrating heavy computation from mobile devices to resourceful cloud servers and receiving the results back over wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it relieves them of intensive processing and improves the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers, and better exploitation of contextual information. Parameters describing method transitions, response times, cost, and energy consumption are dynamically re-estimated at runtime during application execution.
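A minimal sketch of such a runtime offloading decision, assuming hypothetical cost estimates (none of the names or numbers come from the paper): a task is offloaded when the estimated remote execution time plus transfer time undercuts the estimated local execution time.

```python
# Offloading decision sketch: compare estimated local cost against estimated
# remote cost (execution + data transfer). All inputs are illustrative and
# would be re-estimated at runtime in a real system.

def should_offload(local_time_s, remote_time_s, payload_bytes, bandwidth_bps):
    """Return True if offloading is estimated to be faster than local execution."""
    transfer_time_s = payload_bytes * 8 / bandwidth_bps
    return remote_time_s + transfer_time_s < local_time_s

# Example: 2 s locally vs 0.5 s remotely plus sending 1 MB over 8 Mbit/s (1 s).
print(should_offload(2.0, 0.5, 1_000_000, 8_000_000))  # True: 1.5 s < 2 s
```

The same comparison generalizes to energy by swapping time estimates for energy estimates.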
Study on the Fused Deposition Modelling in Additive Manufacturing (IJERD Editor)
Additive manufacturing, popularly known as 3-D printing, is a process in which a product is created in a succession of layers. It is based on a materials-incremental manufacturing philosophy: unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the final shape of a product, 3-D printing builds the product from scratch, obviating the need to cut away material and thus preventing wastage of raw materials. Commonly used raw materials for the process are ABS plastic, PLA, and nylon; recently the use of gold, bronze, and wood has also been implemented. The process imposes essentially no complexity constraint, in that an object of any shape and size can be manufactured.
Spyware triggering system by particular string value (IJERD Editor)
This computer program can be used for good or bad purposes, in hacking or for general use, and can be seen as a step beyond techniques such as keyloggers and spyware. In this system, once a user or attacker stores a particular string as input, the software continually compares the user's typing activity against the stored string and, on a match, launches a spyware program.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... (IJERD Editor)
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic schemes, i.e., Jsteg, F5, Outguess, and DWT-based. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of the characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-D array. A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme gives improved results in attacking these schemes.
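The cross-validated classification step can be illustrated with a toy k-fold procedure, sketched here in pure Python with a nearest-centroid classifier standing in for the SVM; the features, labels, and values are invented, not the paper's.

```python
# K-fold cross-validation sketch: split the data into k folds, train on k-1
# folds, score on the held-out fold, and average the accuracies.

def nearest_centroid_predict(train, x):
    """Classify a 1-D feature x by the closest class mean (stand-in for an SVM)."""
    groups = {}
    for f, label in train:
        groups.setdefault(label, []).append(f)
    centroids = {label: sum(fs) / len(fs) for label, fs in groups.items()}
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def cross_val_accuracy(data, k=5):
    folds = [data[i::k] for i in range(k)]
    scores = []
    for held_out in range(k):
        train = [d for j, fold in enumerate(folds) if j != held_out for d in fold]
        test_fold = folds[held_out]
        correct = sum(nearest_centroid_predict(train, x) == y for x, y in test_fold)
        scores.append(correct / len(test_fold))
    return sum(scores) / len(scores)

# Two well-separated classes: "cover" features near 0, "stego" features near 1.
data = [(0.1 * i, "cover") for i in range(5)] + [(1.0 + 0.1 * i, "stego") for i in range(5)]
print(cross_val_accuracy(data))  # 1.0 on this separable toy data
```

With real steganalysis features one would use an actual SVM implementation, but the fold/train/score loop is the same.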
Secure Image Transmission for Cloud Storage System Using Hybrid Scheme (IJERD Editor)
Data in the cloud is transmitted between servers and users. The privacy of that data is very important, as it includes personal information; if it is intercepted by a hacker, it can be used to defame a person. Delays also occur during data transmission, e.g., in mobile communication where bandwidth is low. Hence compression algorithms are proposed for fast and efficient transmission, encryption is used for security, and blurring provides an additional layer of security. These algorithms are hybridized to achieve robust, efficient security and transmission over a cloud storage system.
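The compress-then-encrypt ordering (compression must come first, since ciphertext does not compress) can be sketched as follows. The XOR keystream below is purely illustrative of the encryption stage; it is not the scheme proposed in the paper and is not production-grade cryptography.

```python
# Compress-then-encrypt pipeline sketch. zlib handles compression; the
# SHA-256 counter keystream is an ILLUSTRATION ONLY, not a secure cipher.
import hashlib
import zlib

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def protect(data: bytes, key: bytes) -> bytes:
    compressed = zlib.compress(data)          # shrink first...
    ks = keystream(key, len(compressed))      # ...then mask the bytes
    return bytes(a ^ b for a, b in zip(compressed, ks))

def recover(blob: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(blob))
    compressed = bytes(a ^ b for a, b in zip(blob, ks))
    return zlib.decompress(compressed)

payload = b"patient-record " * 100
secured = protect(payload, b"shared-key")
print(len(secured) < len(payload))  # True: repetitive data compresses well
```

Round-tripping `recover(protect(x, k), k)` returns the original bytes, and the transmitted blob is both smaller and masked.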
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i... (IJERD Editor)
A thorough review of the existing literature indicates that the Buckley-Leverett equation only analyzes waterflood practices directly, without any adjustment for real reservoir scenarios, which introduces quite a number of errors into the analyses. Also, for most waterflood scenarios a radial investigation is more appropriate than a simplified linear system. This study investigates adapting the Buckley-Leverett equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also adapted to a microbial flood, and a comparative analysis is conducted for both waterflooding and microbial flooding. The results not only record a success in determining the radial distance of the leading edge of water during the flooding process, but also give a clearer understanding of the applicability of microbes to enhance oil production through in-situ production of bio-products such as biosurfactants, biogenic gases, and bio-acids.
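Under the standard volume-balance view of radial frontal advance (not necessarily the exact adaptation used in the paper), the front radius follows from pi * r^2 * h * phi = W_i * f_w'(S_wf). A small sketch with illustrative inputs, consistent units assumed:

```python
# Radial frontal-advance sketch: invert the volume balance
#   pi * r^2 * h * phi = W_i * f_w'(S_wf)
# for the flood-front radius r. Inputs are illustrative, in consistent units.
import math

def flood_front_radius(w_inj, dfw_dsw_front, h, phi):
    """Water-front radius for cumulative injection w_inj, front-slope
    f_w'(S_wf), pay thickness h, and porosity phi."""
    return math.sqrt(w_inj * dfw_dsw_front / (math.pi * h * phi))

# 5000 m^3 injected, f_w' = 2.0 at the front, 10 m pay, 25% porosity
print(round(flood_front_radius(5000.0, 2.0, 10.0, 0.25), 1))  # 35.7 (m)
```

The slope f_w'(S_wf) is taken from the Welge tangent construction on the fractional-flow curve, exactly as in the linear Buckley-Leverett analysis.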
Gesture Gaming on the World Wide Web Using an Ordinary Web Camera (IJERD Editor)
Gesture gaming is a method by which users with a laptop/PC/Xbox play games using natural or bodily gestures. This paper presents a way of playing free Flash games on the internet using an ordinary webcam with the help of open-source technologies. In human activity recognition, emphasis is placed on pose estimation and on the consistency of the player's pose; these are estimated with an ordinary web camera at resolutions from VGA to 20 MP. Our work involved showing the user a 10-second documentary on how to play a particular game using gestures and what kinds of gestures can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short documentary finishes. The system then opens the game on popular Flash game sites such as Miniclip, Games Arcade, or GameStop, loads it by clicking at the appropriate places, and brings it to a state where the user only has to perform gestures to start playing. At any point the user can call off the game by hitting the Esc key, and the program will release all controls and return to the desktop. The results obtained with an ordinary webcam matched those of the Kinect, and users could relive the gaming experience of free Flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And... (IJERD Editor)
The LLC resonant frequency converter is essentially a combination of series and parallel resonant circuits. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one lies in the ZCS region [5], so for this application the converter cannot be designed to operate at that frequency. The LLC resonant converter has existed for a very long time, but because its characteristics were not well understood it was used as a series resonant converter with an essentially passive (resistive) load. Here it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where it behaves much like a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range under light load [6]. The control circuit plays a very important role: the 555 timer used here provides a clean square wave, since the control circuit introduces no slew rate. The dead-band circuit provides a dead band of a few microseconds to avoid simultaneous firing of the two IGBT pairs as one pair switches off and the other switches on. The isolator circuit is associated with every stage, acting as a driver, and each IGBT is isolated through a dedicated transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filter capacitor is used to obtain a clean DC waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements forming the tank circuits.
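The two resonant frequencies of the LLC tank can be computed directly; the component values below are illustrative, not taken from the paper.

```python
# LLC tank resonances: the series tank Lr-Cr sets the higher resonance; the
# magnetizing inductance Lm joining the tank sets the lower one (ZCS region).
import math

def series_resonance(lr, cr):
    """Higher resonant frequency of the Lr-Cr series tank, in Hz."""
    return 1.0 / (2 * math.pi * math.sqrt(lr * cr))

def parallel_resonance(lr, lm, cr):
    """Lower resonant frequency, with Lm added to the tank, in Hz."""
    return 1.0 / (2 * math.pi * math.sqrt((lr + lm) * cr))

lr, lm, cr = 58e-6, 232e-6, 12e-9  # H, H, F (illustrative values)
print(f"series:   {series_resonance(lr, cr) / 1e3:.1f} kHz")
print(f"parallel: {parallel_resonance(lr, lm, cr) / 1e3:.1f} kHz")
```

Operating the switching frequency above the series resonance, as described in the abstract, keeps the converter out of the lower (ZCS) resonance.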
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu... (IJERD Editor)
The supporting simulation is performed with the PSIM 6.0 software tool.
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio waves. Wireless communication in which the Moon is used as a natural satellite is called Moon-bounce or EME (Earth-Moon-Earth). Long-distance communication (DXing) with VHF-operated amateur HAM radio used to be difficult, but even with a modest setup comprising a good transceiver, a power amplifier, and a high-gain antenna with high directivity, VHF DXing is possible. Generally a 2x11 Yagi antenna, along with a rotor to set the horizontal and vertical angles, is used. Moon-tracking software gives the exact location and visibility of the Moon at both stations, plus other vital data, to acquire the Moon's real-time position.
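The round-trip delay an EME signal experiences can be estimated from the mean Earth-Moon distance (the actual delay varies as the Moon's distance changes over its orbit):

```python
# Echo delay of an Earth-Moon-Earth contact at the mean lunar distance.
MEAN_DISTANCE_KM = 384_400
SPEED_OF_LIGHT_KM_S = 299_792.458

delay_s = 2 * MEAN_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
print(f"{delay_s:.2f} s")  # about 2.56 s between transmission and echo
```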
MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chrom... (IJERD Editor)
Simple Sequence Repeats (SSRs), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of polymorphic forms of the same gene should be 99.9% identical, so extracting microsatellites from the gene is crucial: when the repeat counts of microsatellites are compared and differ largely, a disorder may be indicated. The Y chromosome likely contains 50 to 60 genes that provide instructions for making proteins; because only males have the Y chromosome, these genes tend to be involved in male sex determination and development. Several microsatellite extractors exist, but they fail to extract microsatellites from large data sets of gigabytes or terabytes in size. The proposed tool, "MS-Extractor: An Innovative Approach to Extract Microsatellites on the 'Y' Chromosome", can extract both perfect and imperfect microsatellites from large data sets of the human 'Y' genome. The proposed system uses string matching with a sliding-window approach to locate and extract microsatellites.
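The sliding-window string matching described above can be sketched for perfect repeats; the motif-length range, repeat threshold, and test sequence below are invented for illustration.

```python
# Sliding-window sketch of perfect-microsatellite detection: at each position,
# try motifs of 1-6 bases and count how many times each repeats back-to-back.

def find_microsatellites(seq, min_repeats=3):
    """Return (start, motif, repeat_count) for each perfect repeat tract."""
    hits, i = [], 0
    while i < len(seq):
        best = None                      # longest repeat tract starting at i
        for motif_len in range(1, 7):
            motif = seq[i:i + motif_len]
            if len(motif) < motif_len:
                break                    # ran off the end of the sequence
            repeats = 1
            while seq[i + repeats * motif_len:i + (repeats + 1) * motif_len] == motif:
                repeats += 1
            span = repeats * motif_len
            if repeats >= min_repeats and (best is None or span > len(best[1]) * best[2]):
                best = (i, motif, repeats)
        if best:
            hits.append(best)
            i = best[0] + len(best[1]) * best[2]   # slide past the tract
        else:
            i += 1
    return hits

print(find_microsatellites("GGTATATATATACCCAGAGAGAGT"))
# [(2, 'TA', 5), (12, 'C', 3), (15, 'AG', 4)]
```

Imperfect-repeat detection, as in the paper's tool, additionally tolerates a bounded number of mismatches inside the tract.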
Importance of Measurements in Smart Grid (IJERD Editor)
Driven by the need for reliable supply, independence from fossil fuels, and the capability to provide clean energy at a fixed, lower cost, the existing power grid structure is transforming into the Smart Grid, and the development of a smart energy distribution grid is a current goal of many nations. A Smart Grid should have new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era of the smart future grid will lead to major changes in existing technologies at the generation, transmission, and distribution levels. Incorporating renewable energy resources and distributed generators into the existing grid will increase the system's complexity, optimization problems, and instability. This will lead to a paradigm shift in the instrumentation and control requirements of Smart Grids for a high-quality, stable, and reliable electricity supply. Monitoring the grid's state and stability relies on the availability of reliable measurement data. This paper discusses the measurement areas that highlight new measurement challenges, the development of smart meters, and the critical parameters of electric energy to be monitored to improve the reliability of power systems.
Study of Macro-level Properties of SCC using GGBS and Limestone Powder (IJERD Editor)
One of the major environmental concerns is the disposal of waste materials and the utilization of industrial by-products. Limestone quarries produce millions of tons of waste dust powder every year; having a considerably higher degree of fineness than cement, this material may be utilized as a partial replacement for cement. For this purpose an experiment was conducted to investigate the possibility of using limestone powder, combined with GGBS, in the production of SCC, and how it affects the fresh and mechanical properties of SCC. First, SCC was made by replacing cement with GGBS at 10, 20, 30, 40, and 50 percent; then, taking the optimum GGBS mix, limestone powder was blended in at 5, 10, 15, and 20 percent as a partial replacement for cement. Test results show that the SCC mix combining 30% GGBS and 15% limestone powder gives the maximum compressive strength, with fresh properties also within the limits prescribed by EFNARC.
Seismic Drift Consideration in Soft-Storeyed RCC Buildings: A Critical Review (IJERD Editor)
Reinforced concrete frame buildings are becoming increasingly common in urban India. Many such buildings constructed in recent times have a special feature: the ground storey is left open for parking, i.e., the ground-floor columns have no partition walls (of either masonry or reinforced concrete) between them. Such buildings are often called open-ground-storey buildings. The relative horizontal displacement in the ground storey is much larger than in the storeys above it, while the total horizontal earthquake force it can carry is significantly smaller. A soft or weak storey may exist at any storey level, not only the ground storey; the presence of walls in the upper storeys makes them much stiffer than the open ground storey. Multi-storey reinforced concrete buildings with open ground storeys nevertheless continue to be built in India. It is imperative to know the behaviour of soft-storey buildings under seismic load in order to design retrofit strategies; hence it is important to study and understand the response of such buildings and, based on that study, make them earthquake resistant so as to prevent collapse and save lives and property.
Post-processing of SLM Ti-6Al-4V Alloy in accordance with AMS 4928 standards (IJERD Editor)
This research was done to find the impact of AMS 4928 standard heat treatment on Selective Laser Melted (SLM) Ti-6Al-4V Grade 23 alloy. Ti-6Al-4V Grade 23 is an Extra Low Interstitial version of the Ti alloy with lower impurities and is an α+β type alloy at room temperature. SLM is an additive manufacturing method based on a powder-bed system: each powder layer of a few microns is coated, a laser beam is scanned to melt the metal powder according to the part specification, and the bed subsequently moves downwards layer by layer. The test coupons were first heat treated according to the above-mentioned standard; tensile testing and microstructural analysis were then done to compare the results with those given in AMS 4928. The yield stress and percentage elongation achieved in the test coupons are better than the minimum requirements of the AMS 4928 standard. Coarse lamellar grain structures were obtained, with no continuous network of alpha at prior beta grain boundaries.
Treatment of Waste Water from Organic Fraction Incineration of Municipal Soli... (IJERD Editor)
Evaporation is one treatment alternative for waste water from the condensation of vapour in flue gas, or from the flue gas scrubber system, of an incinerator. The waste water contains tar and heavy metals, which are toxic and must be separated before the water is discharged to the environment or recycled. Due to the relatively low efficiency of the evaporation process, a combined evaporation-absorption process was developed to increase the efficiency. The aim of this research is to study the efficiency of separating tar from the tar-water mixture produced by incinerating the organic fraction of garbage using the evaporation-absorption process, and to compare it with the plain evaporation process. The evaporation process evaporated the waste water directly, while in the evaporation-absorption process the waste water was mixed with palm oil as an absorbent before being evaporated. The results showed that the heavy-tar separation efficiency of the evaporation process was 73.27%, compared to 98.82% for the combined evaporation-absorption process; for light tar, the efficiencies of both process types were almost the same. This system can be integrated with the incinerator for the treatment of the flue gases and waste water generated by burning the organic fraction of MSW.
Content Based Video Retrieval Using Integrated Feature Extraction and Persona... (IJERD Editor)
Traditional video retrieval methods fail to meet the technical challenges posed by the large and rapid growth of multimedia data, demanding effective retrieval systems. In the last decade, Content Based Video Retrieval (CBVR) has become more and more popular. The amount of lecture video data on the World Wide Web (WWW) is growing rapidly; therefore, a more efficient method for video retrieval on the WWW, or within large lecture video archives, is urgently needed. This paper presents an implementation of automated video indexing and video search in a large video database. First, we apply automatic video segmentation and key-frame detection to extract frames from the video. Next, we extract textual keywords by applying Optical Character Recognition (OCR) technology to the key-frames and Automatic Speech Recognition (ASR) to the video's audio track. We also extract colour, texture, and edge-detector features using different methods. Finally, we integrate all the keywords and features extracted by the above techniques for search purposes: a similarity measure is applied, and the best-matching videos from the database are presented as output. Additionally, we provide re-ranking of the results according to the user's interest in the original results.
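The final similarity-search step can be sketched as cosine similarity over merged feature vectors; the video IDs and feature values below are made up for illustration.

```python
# CBVR ranking sketch: OCR/ASR keywords and visual features are merged into
# one vector per video, then videos are ranked by cosine similarity to the query.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_videos(query_vec, database):
    """database: {video_id: merged feature vector}; most similar first."""
    return sorted(database, key=lambda vid: cosine(query_vec, database[vid]), reverse=True)

db = {"lecture_A": [1.0, 0.0, 0.8], "lecture_B": [0.1, 0.9, 0.0]}
print(rank_videos([0.9, 0.1, 0.7], db))  # lecture_A ranks first
```

User-interest re-ranking then reorders this list using feedback signals on the initial results.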
Planar Internal Antenna Design for Cellular Applications & SAR Analysis (IJERD Editor)
This paper presents a new design of a direct-fed, multi-band printed Planar Internal Antenna (PIA) for cellular applications. The PIA is composed of a ground plane, a meander radiating strip, and two parasitic strips printed on a common substrate. The designed antenna has been simulated in the CST environment, and the simulated results for resonant frequency, return loss, radiation pattern, and gain are presented and discussed. The bandwidths of the three resonances were determined on the basis of -6 dB return loss; these bandwidths can be utilized for GSM 900, GSM 1800, GSM 1900, LTE 2300, and Bluetooth/WLAN, making the antenna an acceptable reference for mobile phone applications. Further, the antenna was placed in proximity to the SAR head model in the CST environment, and the simulated SAR results, which fall within the acceptable range, are presented in this paper.
Intelligent learning management system starters (IJERD Editor)
A Learning Management System (LMS) is increasingly gaining popularity in the academic community as a means of delivering e-learning content. Simply placing lecture notes, videos, and other content on LMSs does not train students particularly well. This situation could be improved by integrating Intelligent Tutoring Systems (ITSs) into a preferred LMS to make it more adaptive and effective, through enhanced student participation and learning. This work therefore aims to create a starter model and a model Java ITS integrated into a preferred LMS. The ITS-integrated LMS starter model was proposed through augmentation and a fluid iterative cycle of awareness, suggestion, development, evaluation, and conclusion. Known open/inexpensive, tried-and-tested popular LMSs were evaluated at the CMS Matrix site and complemented. A Java ITS integrated into Moodle (the preferred LMS), employing an architectural framework for ITS-integrated LMSs, was created following the spiral model of software development.
Joint State and Parameter Estimation by Extended Kalman Filter (EKF) technique (IJERD Editor)
In order to increase power system stability and reliability during and after disturbances, global and local power grid controllers must be developed. SCADA systems provide steady but low-sampling-density measurements; to remove this limitation, PMUs are being rapidly adopted worldwide. The dynamic states of a power system can be estimated using an EKF, but this requires the field excitation as an input, which may not be available. As a result, an EKF with unknown inputs is proposed for identifying and estimating the states and the unknown inputs of the synchronous machine.
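A minimal joint state-and-parameter EKF can be sketched for a toy scalar model x[k+1] = a*x[k] with the unknown parameter a appended to the state. This is an illustration of the augmented-state technique only, not the synchronous-machine model of the paper; all numeric values are invented.

```python
# Joint state/parameter EKF sketch: the augmented state is [x, a]; the
# Jacobian of [a*x, a] linearizes the nonlinear coupling between x and a.

def ekf_estimate(measurements, q=1e-4, r=0.04):
    x, a = measurements[0], 0.5        # initial state and parameter guess
    P = [[1.0, 0.0], [0.0, 1.0]]       # covariance of the augmented state
    for z in measurements[1:]:
        # Predict: propagate the state and linearize around the estimate
        x_pred = a * x
        F = [[a, x], [0.0, 1.0]]       # Jacobian of [a*x, a] w.r.t. [x, a]
        FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)] for i in range(2)]
        P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] for j in range(2)] for i in range(2)]
        P[0][0] += q
        P[1][1] += q
        # Update with the measurement z = x + noise (H = [1, 0])
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x_pred
        x, a = x_pred + k0 * innov, a + k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, a

data = [10.0 * 0.9 ** k for k in range(40)]   # noise-free trace with a = 0.9
x_est, a_est = ekf_estimate(data)
print(round(a_est, 3))                        # the estimate approaches 0.9
```

Treating an unknown input the same way, by augmenting it into the state vector, is the essence of the unknown-input EKF described above.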
Experimental Study of Material Removal Efficiency in EDM Using Various Types ... (IJERD Editor)
The machining process in electrical discharge machining (EDM) consists of a melting process and a removal process. A region of the workpiece surface heated by the discharge plasma is melted, and a portion of the melted region is removed from the workpiece body; the rest of the melted region remains on the workpiece surface and re-solidifies as a white layer. In previous research, the ratio of the removed volume to the melted volume was defined as the material removal efficiency, to evaluate the removal ability.
In this study, the material removal efficiency was investigated to develop an understanding of the machining mechanism in EDM. The experiments show that the material removal efficiencies take almost the same value, whereas the removal volume varies with the type of dielectric oil and the discharge duration. To advance the study of the machining mechanism in EDM, simulation of the workpiece temperature distribution, considering the effect of the dielectric oil type and the discharge duration, should be conducted further.
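The efficiency definition can be sketched in a few lines; the volumes below are invented, in arbitrary units, purely to illustrate the paper's observation that the ratio stays similar while the volumes differ.

```python
# Material removal efficiency in EDM: removed volume / melted volume.
# Trial volumes are made-up numbers in arbitrary units.

def removal_efficiency(removed_volume, melted_volume):
    return removed_volume / melted_volume

trials = {"oil_A": (3.1, 10.0), "oil_B": (5.9, 19.5)}  # (removed, melted)
for oil, (removed, melted) in trials.items():
    print(oil, round(removal_efficiency(removed, melted), 2))
```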
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of Information Retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure-operations point of view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Through practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
PHP Frameworks: I want to break free, IPC Berlin 2024 (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
F1034047
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 10, Issue 3 (March 2014), PP.40-47
Dynamic Consolidation of Virtual Machines In Cloud Data
Centers For Managing Overloaded Hosts Under
Quality of Service Constraints
P.D. Saronrex¹, R.K. Maheswari²
¹Assistant Professor, Department of IT, Trichy Engineering College, Somu Nagar, Konalai, Trichy, Tamilnadu, India.
²PG Scholar, Department of CSE, Trichy Engineering College, Somu Nagar, Trichy, Tamilnadu, India.
Abstract:- Cloud computing has emerged as the default paradigm in a variety of fields, especially considering resource and infrastructure consumption under distributed access. Host overload is an aspect of dynamic VM consolidation that directly influences the resource utilization and Quality of Service (QoS) delivered by the system: server overloads cause resource shortages and performance degradation of applications. Current solutions to the problem of host overload detection are generally heuristic-based or rely on statistical analysis of historical data. The limitation of these approaches is that they lead to sub-optimal results and do not allow explicit specification of a QoS goal. We propose a novel approach that, for any known stationary workload and a given state configuration, optimally solves the problem of host overload detection by maximizing the mean inter-migration time under a specified QoS goal, based on a Markov chain model, and we propose a control algorithm for host overload detection as part of dynamic VM consolidation. The model allows a system administrator to explicitly set a QoS goal in terms of the OTF parameter, a workload-independent QoS metric. For a known stationary workload and a given state configuration, the control policy obtained from the Markov model optimally solves the host overload detection problem in the online setting by maximizing the mean inter-migration time while meeting the QoS goal. Using the Multisize Sliding Window workload estimation approach, we extend the model to handle unknown non-stationary workloads, and we propose an optimal offline algorithm for host overload detection against which to evaluate the efficiency of the MHOD algorithm.
Keywords:- Distributed systems, Cloud computing, virtualization, dynamic consolidation, energy efficiency,
host overload detection
I. INTRODUCTION
Cloud computing [1] is a promising computing paradigm that has recently drawn extensive attention from both academia and industry. By combining a set of existing and new techniques from research areas such as Service-Oriented Architectures (SOA) and virtualization, cloud computing provides the resources of a computing infrastructure as services over the Internet. Along with this new paradigm, various business models have been developed, which can be described by the terminology "X as a Service (XaaS)", where X can be software, hardware, data storage, and so on. Successful examples are Amazon's EC2 and S3 [1], Google App Engine [3], and Microsoft Azure [2], which provide users with scalable resources in a pay-as-you-go fashion at relatively low prices. For example, Amazon's S3 data storage service charges only $0.12 to $0.15 per gigabyte-month. Compared to building their own infrastructures, users can save significantly on their investments by migrating businesses into the cloud. With the continuing development of cloud computing technologies, it is not hard to imagine that in the near future more and more businesses will be moved into the cloud. As promising as it is, cloud computing also faces many challenges that, if not well resolved, may impede its fast growth.
A private cloud uses groups of public or private server pools from an internal corporate data center. It is managed by the enterprise and allows fine-grained access to resources. Private clouds are generally the solution considered by enterprises that do not want to outsource any part of their IT infrastructure and services because of security concerns. A private cloud built on VMware can amplify a datacenter's efficiency and agility while enhancing security and control: datacenters are consolidated and workloads are deployed on infrastructure with built-in security and role-based access control; workloads can be migrated between pools of infrastructure, and institutions can integrate existing
management systems using customer extensions, APIs, and open cross-cloud standards, and deliver cloud infrastructure on demand so end users can consume virtual resources with maximum agility.
Cloud computing [1] has grown in popularity over the past couple of years. With cloud computing services, the shared infrastructure works like a utility: we only pay for what we need, upgrades are automatic, and scaling up or down is easy. Particular resource management techniques such as VM multiplexing or VM live migration, even if transparent to end users, have to be considered in the design of performance models in order to accurately understand system behavior. The cloud model promotes availability and is composed of five essential characteristics (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service), three service models, and four deployment models.
II. RELATED WORK
Prefetching is a widely used computer system technology that hides system overheads at every layer, including transmission overheads. The user model [1] is mainly used to collect user information, user status information, and user behavioural habit information, and then to organize and manage this information. The cloud services model [2] organizes and manages service registration information, service status information, and other service-relevant information. The cloud status model focuses on describing and maintaining current resource status information, periodic performance regulation information, and current task execution information. The data acquisition mechanism prefetches and analyzes user data; given the factors affecting the prefetching hit ratio from a global perspective, it carries out cooperative prefetching analysis and enhances service quality.
Cloud computing is increasingly gaining inroads among a variety of organizational users. As clouds
are introduced for use by enterprises, service providers, and governmental and educational entities, new
challenges related to the interconnection between such clouds emerge. Cloud administrators seek to maintain
acceptable levels of autonomy and control over their cloud infrastructure, while ensuring the integrity of the
cloud services. At the same time, they are expected to enable cross-cloud services, including mobility of
workloads between clouds.
We present the design and implementation of a technology that enables live mobility of virtual
machines between clouds, while enforcing the cloud insularity requirements of autonomy, privacy, and security.
We also provide an empirical evaluation of our solution, demonstrating its viability and compliance with
requirements. Dynamic resource management [3, 10] is an active area of research. One line of work employs prediction techniques and queueing-theory results to allocate resources efficiently within a single server serving a web workload. A static allocation approach has also been used, in which the authors propose a simple heuristic for the vector bin-packing problem and apply it to minimize the number of servers required to host a given web traffic. Control theory has been applied to design a system for performance control of a web server: the arrival rate of requests to the server is throttled based on a feedback system. Other authors propose an optimization algorithm that allocates resources (i.e., web servers) depending on the expected financial gain for the hosting center.
III. PROPOSED MODEL
VMware [13] is the global leader in virtualization and cloud infrastructure. VMware offers a unique,
evolutionary path to cloud computing that reduces IT complexity, significantly lowers costs and enables more
flexible, agile service delivery. VMware vSphere leverages the power of virtualization to transform datacenters
into simplified cloud computing infrastructures and enables educational institutions to deliver flexible and
reliable IT services. VMware vSphere virtualizes and aggregates the underlying physical hardware resources
across multiple systems and provides pools of virtual resources to the datacenter. As a cloud operating system,
VMware vSphere manages large collections of infrastructure (such as CPUs, storage, and networking) as a
seamless and dynamic operating environment, and also manages the complexity of a datacenter. The following
component layers make up VMware vSphere.
Cloud-based systems are inherently large scale, distributed, almost always virtualized, and operate in automated shared environments. Performance and availability of such systems are affected by a large number of parameters, including characteristics of the physical infrastructure (e.g., number of servers, number of cores per server, amount of RAM and local storage per server, configuration of physical servers, network configuration, persistent storage configuration), characteristics of the virtualization infrastructure (e.g., VM placement and VM resource allocation, deployment and runtime overheads), failure characteristics (e.g., failure rates, repair rates, modes of recovery), and characteristics of the automation tools used to manage the cloud system. In this context, we propose a novel approach that, for any known stationary workload and a given state configuration, optimally solves the problem of host overload detection by maximizing the mean inter-migration time, based on a Markov chain model, together with a control algorithm for host overload detection as a part of dynamic VM consolidation that optimally solves the problem in the online setting.
Fig 3.0: End User Cloud
Infrastructure Services: Infrastructure Services are the set of services provided to abstract, aggregate, and
allocate hardware or infrastructure resources. Infrastructure Services are categorized into several types.
VMware vCompute, which includes the VMware capabilities that abstract away from underlying
disparate server resources. vCompute services aggregate these resources across many discrete servers and assign
them to applications.
VMware vStorage, which is the set of technologies that enables the most efficient use and management
of storage in virtual environments.
VMware vNetwork, which is the set of technologies that simplify and enhance networking in virtual
environments.
Application Services: Application Services are the set of services provided to ensure availability, security, and
scalability for applications. Examples include High Availability and Fault Tolerance.
VMware vCenter Server: Provides a single point of control of the datacenter. It provides essential datacenter
services such as access control, performance monitoring, and configuration.
Clients: Users can access the VMware vSphere datacenter through clients such as the vSphere Client or Web
Access through a Web browser.
Fig 3.0: Overall architecture
3.1 Dynamic Resource Allocation
Fig 3.1: Resource allocations
Resource Allocation (RA) is the process of assigning available resources to the needed cloud
applications over the internet. Resource allocation starves services if the allocation is not managed accurately.
Resource provisioning solves that problem by allowing the service providers to manage the resources for each
individual module.
3.2 Cloud Service Provider
Fig 3.2: Service provider
The cloud service provider is responsible for maintaining an agreed-on level of service and provisions resources accordingly. A CSP, who has significant resources and expertise in building and managing distributed cloud storage servers, owns and operates live cloud computing systems and is the central entity of the cloud. Cloud provider activities include utilizing and allocating scarce resources within the limits of the cloud environment so as to meet the needs of cloud applications. This requires knowing the type and amount of resources needed by each application in order to complete a user job. The order and time of allocation of resources are also inputs for an optimal resource allocation.
3.3 Cloud Consumer
A cloud consumer represents a person or organization that maintains a business relationship with, and uses the service from, a cloud provider. Consumers store data in the cloud and rely on the cloud for data computation; they include both individual users and organizations. Cloud consumers use Service-Level Agreements (SLAs) to specify the technical performance requirements to be fulfilled by a cloud provider. A cloud provider may also list in the SLAs a set of restrictions, limitations, and obligations that cloud consumers must accept.
3.4 Performance Evaluation
In the cloud paradigm, an effective resource allocation strategy is required for achieving user satisfaction and maximizing the profit for cloud service providers. Some of the strategies discussed above mainly focus on CPU and memory resources; secured, optimal resource allocation algorithms and frameworks strengthen the cloud computing paradigm.
IV. METHODOLOGY
VMware [13] has successfully implemented dozens of private cloud infrastructures. To leverage the experience and best practices accumulated from these deployments, we have developed the vCloud and vSphere architecture of our institute: a set of documents we can use to better understand both the principles upon which VMware's cloud strategy is executed and the mechanics for implementing our own cloud infrastructure.
4.1 Algorithm
4.1.1 An Optimal Offline Algorithm
Input: A system state history
Input: M, the maximum allowed OTF
Output: A VM migration time
1: while history is not empty do
2: if OTF of history ≤ M then
3: return the time of the last history state
4: else
5: drop the last state from history
6: end if
7: end while
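The loop above can be sketched in Python. The representation of the system state history as a list of (timestamp, is_overloaded) pairs and the otf() helper computing the Overload Time Fraction are assumptions for illustration; the paper does not fix a concrete data structure.

```python
def otf(history):
    """Overload Time Fraction: share of elapsed time spent overloaded.

    `history` is a list of (timestamp, is_overloaded) pairs in time order.
    """
    if len(history) < 2:
        return 0.0
    total = history[-1][0] - history[0][0]
    overloaded = sum(
        history[i + 1][0] - history[i][0]
        for i in range(len(history) - 1)
        if history[i][1]
    )
    return overloaded / total if total > 0 else 0.0

def optimal_offline(history, max_otf):
    """Latest migration time that keeps the OTF within max_otf (M)."""
    while history:
        if otf(history) <= max_otf:
            return history[-1][0]  # time of the last remaining state
        history = history[:-1]     # drop the last state and retry
    return None                    # empty history: no migration time
```

Because the algorithm only trims states from the end of the history, it performs at most n iterations over n recorded states; with the naive otf() above that is quadratic overall, and an incremental OTF update would make it linear.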
4.1.2 MHOD Algorithm
Input: A CPU utilization history
Output: A decision on whether to migrate a VM
1: if the CPU utilization history size > Tl then
2: Convert the last CPU utilization value to a state
3: Invoke the Multisize Sliding Window estimation
to obtain the estimates of transition probabilities
4: Invoke the MHOD-OPT algorithm
5: return the decision returned by MHOD-OPT
6: end if
7: return false
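Steps 2 and 3 of the MHOD algorithm (mapping utilization values to Markov states and estimating transition probabilities) can be sketched as follows. The two-state split at a fixed overload threshold and the plain frequency-count estimator are simplifying assumptions: the paper's MHOD uses a configurable state set and the Multisize Sliding Window estimation rather than a single fixed window.

```python
def to_states(utilization, threshold=0.8):
    """Map each CPU utilization sample to state 0 (normal) or 1 (overloaded)."""
    return [1 if u >= threshold else 0 for u in utilization]

def estimate_transitions(states, n_states=2):
    """Maximum-likelihood transition probabilities from a state sequence."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):  # count observed transitions a -> b
        counts[a][b] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs
```

The resulting row-stochastic matrix is what a control policy such as MHOD-OPT would consume to decide whether migrating a VM now maximizes the expected inter-migration time.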
4.1.3 Window Size Selection Algorithm
Input: J, D, NJ , t, i, j
Output: The selected window size
1: lw ← J
2: for k = 0 to NJ − 1 do
3: if S(i, j, t, k) ≤ Vac(p̂ij(t, k), k) then
4: lw ← J + kD
5: else
6: break loop
7: end if
8: end for
9: return lw
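A hedged Python sketch of the selection loop above, reading S(i, j, t, k) as the sample variance of the transition-probability estimate over the k-th window and Vac(p, k) as an acceptable-variance bound for an estimate p over that window. The Bernoulli-proportion bound used for vac() and the sample_var/p_hat parameters are illustrative stand-ins, not the paper's exact definitions.

```python
def select_window_size(J, D, NJ, sample_var, p_hat):
    """Grow the window from size J in steps of D while the estimate's
    sample variance stays within the acceptable bound; return the size."""
    def vac(p, k):
        # acceptable variance: variance of a Bernoulli proportion
        # estimate over k samples (illustrative choice, an assumption)
        return p * (1.0 - p) / k if k > 0 else float("inf")

    lw = J
    for k in range(NJ):
        size = J + k * D
        if sample_var(size) <= vac(p_hat, size):
            lw = size          # this window size is still acceptable
        else:
            break              # variance too high: stop growing
    return lw
```

Larger windows reduce estimator variance for stationary workloads but react slowly to non-stationary ones; the loop picks the largest window whose variance is still acceptable.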
Fig.4.1: Institute vsphere config
Fig.4.2: Institute data storage
Fig 4.3: Institute hype
Fig 4.4: Institute virtual servers
Comprehensive Solutions:
Enable the management of all project and institutional documents.
Provides comprehensive, secured storage and backup facility of personal and work data.
Facilitates the exchange of information within across educational institutions.
Flexibility adapting to the way we work:
Offers the flexibility of uploading unrestricted format types securely without much hassle.
Multi-level access control – adapt to an educational institution’s hierarchy.
Allows institution entities to host the entire application on our own servers.
Sharing information:
Allows sharing of files internally.
Able to control the permission to allow third party to view and download.
Manage files online:
Able to organize files in different folders as desired and being able to do it online.
Single storage and backup repository of all important files.
Ease of use and user friendly interface.
Security and Accessibility:
Able to access the file anywhere, anytime within the institution.
Data transmission remains secure while the number of users accessing the data scales.
V. CONCLUSION
Virtualization in computing is the creation of a virtual, rather than an actual, version of a storage device or network resource. Using appropriate interfaces, we can access data in the cloud; this paper discusses a cloud data management interface based on a storage virtualization mechanism. The Open Cloud Computing Interface is an emerging standard for interoperable interface management in the cloud. Cloud computing can solve complex sets of tasks in a shorter time through proper resource utilization, and to make the cloud work efficiently, the best resource allocation strategies have to be employed. Since utilization of resources is one of the most important tasks in a cloud computing environment, the various strategies have been studied and classified, along with the distinguishing features of the corresponding algorithms. This work can be extended to models that represent PaaS and SaaS cloud systems, and to integrate the mechanisms needed to capture VM migration and data center consolidation, which play a crucial role in energy-saving policies.
As part of future work, we plan to implement the MHOD algorithm as an extension of the VM manager within the OpenStack Cloud platform, to evaluate the algorithm in a real system as a part of energy-efficient dynamic VM consolidation. The purpose of this paper is to develop a private cloud for an educational firm that would like to automate its operations without spending a lot on infrastructure, forming an enterprise cloud where every request reaches a set of servers (IBM Cloud Servers) for computation and resource access, hence providing service over the cloud. This means there is no need to have 100 computers for 32 minutes of work; instead, one cloud can serve 300 different tasks. In such a case, since everything is maintained by the institution, security issues are of less concern, because the private cloud infrastructure is formed internally by the firm. Using the VMware ESXi container, we can improve a simple network framework into a private cloud for different institutes.
REFERENCES
[1] Amazon Web Services (AWS), Online at http://aws.amazon.com.
[2] Microsoft Azure, http://www.microsoft.com/azure/.
[3] Google App Engine, Online at http://code.google.com/appengine/.
[4] M. Brock and A. Goscinski, "Grids vs. Clouds," in Proc. of the 5th IEEE International Conference on Future Information Technology (FutureTech), 2010, pp. 1-6.
[5] Daryl C. Plummer, Thomas J. Bittman, Tom Austin, David W. Cearley, David Mitchell Smith “Cloud
Computing: Defining and Describing an Emerging Phenomenon”.
[6] High-Performance Cloud Computing: A View of ScientificApplications by Christian Vecchiola, Suraj
Pandey, and Rajkumar Buyya.
[7] Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as
Computing Utilities Buyya, R. Chee Shin Yeo Venugopal, S.
[8] Performance Evaluation of Cloud Computing Offerings Vladimir Stantchev, SOA and Public Services
Research Group, TU Berlin.
[9] Schaper, J Cloud Services Digital Ecosystems and Technologies (DEST), 2010 4th IEEE International
Conference 2010 pp 91-92.
[10] K.G. Srinivasa, G.M. Siddesh, and S. Cherian, "Fault-Tolerant Middleware for Grid Computing," in Proc. of the 12th IEEE International Conference on High Performance Computing and Communications (HPCC), 2010, pp. 635-640.
[11] Xingchen Chu Nadiminti, K. Chao Jin Venugopal, S.Buyya, R, Univ. of Melbourne, Melbourne
“Aneka: Next- Generation Enterprise Grid Platform for e-Science and e- Business Applications”.
[12] Zhao, Peng; Huang, Ting-lei; Liu, Cai-xia; Wang, Xin; Research of P2P architecture based on cloud
computing IEEE International Conference.
[13] http://www.vmware.com/files/pdf/techpaper/VMW-TWP-vSPHR-SECRTY-HRDNG-USLET-101-
WEB-1.pdf.
[14] http://www.vmware.com/files/pdf/vc_dbviews_40.pdf.
[15] R. Buyya et al., "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Gener. Comput. Syst., vol. 25, pp. 599-616, June 2009.
[16] A. Iosup et al., "On the Performance Variability of Production Cloud Services," 2010.
[17] N. Bobroff, A. Kochut, and K. Beaty, “Dynamic placement of virtual machines for managing SLA
violations,” in Proc. of the 10th IFIP/IEEE Intl. Symp. on Integrated Network Management (IM),
2007, pp. 119–128.
[18] A. Beloglazov and R. Buyya, “Optimal online deterministic algorithms and adaptive heuristics for
energy and performance efficient dynamic consolidation of virtual machines in cloud data centers,”
Concurrency and Computation: Practice and Experience (CCPE), 2012, DOI: 10.1002/cpe.1867, (in
press).
[19] C. Strack, "Performance and Power Management for Cloud Infrastructures," 2012.
[20] C. Tang, "A Scalable Application Placement Controller for Enterprise Data Centers," 2007.
[21] D. Gmach, J. Rolia, L. Cherkasova, G. Belrose, T. Turicchi, and A. Kemper, "An integrated approach to resource pool management: Policies, efficiency and quality metrics," in Proc. of the 38th IEEE Intl. Conf. on Dependable Systems and Networks (DSN), 2008, pp. 326-335.
[22] D. Gmach, J. Rolia, L. Cherkasova, and A. Kemper, “Resource pool management: Reactive versus
proactive or lets be friends,” Computer Networks, vol. 53, no. 17, pp. 2905–2922, 2009.