Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architectural Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agricultural Engineering,
Aerospace Engineering.
Emerging cloud computing paradigm vision, research challenges and development... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind, peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Advancements in computing facilities date back to the 1960s, with the introduction of mainframes. Each computing paradigm has had one issue or another, and cloud computing was introduced with this in mind. Cloud computing has its roots in older technologies such as hardware virtualization, distributed computing, internet technologies, and autonomic computing. Cloud computing can be described with two models: the service model and the deployment model. While providing several services, cloud management's primary role is resource provisioning. While there are several such benefits of cloud computing, there are challenges in adopting public clouds because of the dependency on infrastructure that is shared by many enterprises. In this paper, we present core knowledge of cloud computing, highlighting its key concepts, deployment models, service models, and benefits, as well as security issues related to cloud data. The aim of this paper is to provide a better understanding of cloud computing and to identify important research directions in this field.
A Virtualization Model for Cloud Computing (Souvik Pal)
Cloud Computing is now an emerging field in the IT industry as well as in research. The advancement of Cloud Computing came about due to the fast-growing usage of the internet. Cloud Computing is basically on-demand network access to a collection of physical resources which can be provisioned according to the needs of the cloud user under the supervision of, and through interaction with, the Cloud Service Provider. From a business perspective, the viable achievements of Cloud Computing and recent developments in Grid computing have brought about the platform that has introduced virtualization technology into the era of high-performance computing. Virtualization technology is widely applied in modern data centers for cloud computing. Virtualization uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance. This model may help to combine self-service capabilities and ready-to-use facilities for computing resources.
A Comparison of Cloud Execution Mechanisms: Fog, Edge, and Clone Cloud Computing (IJECE, IAES)
Cloud computing is a technology that was developed a decade ago to provide uninterrupted, scalable services to users and organizations. Cloud computing has also become an attractive feature for mobile users due to the limited features of mobile devices. The combination of cloud technologies with mobile technologies resulted in a new area of computing called mobile cloud computing. This combined technology is used to augment the resources existing in smart devices. In recent times, Fog computing, Edge computing, and Clone Cloud computing techniques have become the latest trends after mobile cloud computing, all of which have been developed to address the limitations of cloud computing. This paper reviews these recent technologies in detail and provides a comparative study of them. It also addresses the differences between these technologies and how each of them is effective for organizations and developers.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Implementing K-Out-Of-N Computing For Fault Tolerant Processing In Mobile and... (IJERA Editor)
Despite the advances in hardware for hand-held mobile devices, resource-intensive applications (e.g., video and image storage and processing, or map-reduce type) still remain off bounds since they require large computation and storage capabilities. Recent research has attempted to address these issues by employing remote servers, such as clouds and peer mobile devices. For mobile devices deployed in dynamic networks (i.e., with frequent topology changes because of node failure/unavailability and mobility, as in a mobile cloud), however, challenges of reliability and energy efficiency remain largely unaddressed. To the best of our knowledge, we are the first to address these challenges in an integrated manner for both data storage and processing in a mobile cloud, an approach we call k-out-of-n computing. In our solution, mobile devices successfully retrieve or process data, in the most energy-efficient way, as long as k out of n remote servers are accessible. Through a real system implementation we prove the feasibility of our approach. Extensive simulations demonstrate the fault tolerance and energy efficiency performance of our framework in larger-scale networks.
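The core idea of the abstract above, proceeding as long as any k of n remote servers are reachable and preferring the cheapest ones in energy terms, can be sketched as follows. This is a minimal illustration of the selection step only, not the paper's actual protocol; the data layout, server names, and energy costs are hypothetical.

```python
# Sketch of k-out-of-n server selection (illustrative, not the paper's system):
# pick the k cheapest reachable servers out of n; the operation succeeds as
# long as at least k servers are up.
def select_k_of_n(servers, k):
    """servers: list of (name, reachable, energy_cost) tuples.
    Returns the names of the k reachable servers with the lowest energy
    cost, or None if fewer than k are reachable (operation is postponed)."""
    up = sorted((s for s in servers if s[1]), key=lambda s: s[2])
    if len(up) < k:
        return None
    return [s[0] for s in up[:k]]

# Hypothetical node list: one node ("b") is currently unreachable.
nodes = [("a", True, 5.0), ("b", False, 1.0), ("c", True, 2.0), ("d", True, 9.0)]
print(select_k_of_n(nodes, 2))   # the two cheapest reachable nodes
```

With three of four nodes reachable, any k up to 3 succeeds; asking for k=4 returns None, mirroring the fault-tolerance guarantee the abstract describes.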
This is a description of Grid Computing.
It gives a detailed idea of the grid: what grid computing is, why we need it, and why it matters. The history and architecture of grid computing are also covered, along with its advantages, disadvantages, and a conclusion.
Efficient and reliable hybrid cloud architecture for big database (ijccsa)
The objective of our paper is to propose a cloud computing framework which is feasible and necessary for handling huge data. In our prototype system we considered the national ID database structure of Bangladesh, which is prepared by the Election Commission of Bangladesh. Using this database we propose an interactive graphical user interface for Bangladeshi People Search (BDPS) that uses a hybrid structure of cloud computing handled by Apache Hadoop, where the database is implemented in HiveQL. The infrastructure is divided into two parts: a locally hosted cloud based on Eucalyptus, and a remote cloud implemented on the well-known Amazon Web Services (AWS). Some common problems in the Bangladesh context, including data traffic congestion, server timeouts, and server-down issues, are also discussed.
The Grid means the infrastructure for the Advanced Web: for computing, collaboration and communication.
The goal is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources.
"Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, a high-performance orientation.
We present the Grid concept in analogy with that of an electrical power grid, along with the Grid vision.
Agent-based Aggregation of Cloud Services: A Research Agenda (idescitation)
Cloud computing has come to the forefront as it overcomes some issues in computing, such as storage space and processing power. It enables ubiquitous access to and processing of information without the need for excessive computing facilities. In this work, we outline some of the issues in aggregating cloud services, discuss the discovery of futuristic cloud service requests, develop a repository of the same, and propose an agent-based Quality of Service (QoS) provisioning system for cloud clients.
The swiftly increasing demand for computational calculations in business processes, for file transfer under certain protocols, and from data centers forces the development of an emerging technology catering to services for computational needs and for highly manageable, secure storage. To fulfil these technological desires, cloud computing is the best answer, introducing various sorts of service platforms in a high-computation environment. Cloud computing is the most recent paradigm promising to turn the vision of "computing utilities" into reality. The term "cloud computing" is relatively new, and there is no universal agreement on its definition. In this paper, we go through different areas of research expertise and novelty in the cloud computing domain and its usefulness in the genre of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with comparatively high cost for both private and public clouds. It is a way of congregating masses of information and resources stored in personal computers and other gadgets, and putting them on the public cloud to serve users. Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era. It allows users to transfer their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged to be distributed systematically and strategically on a full basis. The idea of cloud computing has not only revitalized the field of distributed systems but also fundamentally changed how business utilizes computing today. Resource management in cloud computing is a hard problem, due to the scale of modern data centers, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.
Grid computing, or network computing, was developed to make computing power available in the same way electric power is available from the grid: we just plug in, and whoever needs power may use it. In grid computing, if a system needs more power than is available, it can share the computation with other machines connected in the grid. In this way we can use the power of a supercomputer without a huge cost, and CPU cycles that were previously wasted can be utilized. To perform grid computation on computers joined through the internet, software that supports grid computation must be installed on each computer inside the VO (virtual organization). The software handles information queries, storage management, processing scheduling, authentication, and data encryption to ensure information security.
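The work-sharing idea described above can be illustrated with a toy scheduler. This is our own sketch, not any particular grid middleware: machine names and slot counts are hypothetical, and real grid software would also handle authentication, data movement, and failure recovery.

```python
# Toy sketch of grid-style work sharing (illustrative only): a scheduler
# that farms out work units to whichever machines currently have spare
# CPU slots, so otherwise-wasted cycles get used.
def schedule(tasks, machines):
    """tasks: list of task ids; machines: dict name -> free slots.
    Returns a mapping task -> machine, greedily filling spare capacity."""
    assignment = {}
    pool = [(m, slots) for m, slots in machines.items()]
    for task in tasks:
        for i, (m, slots) in enumerate(pool):
            if slots > 0:
                assignment[task] = m      # place task on first machine with room
                pool[i] = (m, slots - 1)  # consume one free slot
                break
    return assignment

# Hypothetical VO with two idle desktops contributing 1 and 2 slots.
print(schedule(["t1", "t2", "t3"], {"pc1": 1, "pc2": 2}))
```

A task left unassigned (no free slots anywhere) simply does not appear in the result, mirroring how a grid job waits until capacity frees up.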
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
International Journal of Engineering Research and Development is a premier international peer-reviewed open-access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
Introduction to Cloud Computing and Cloud Infrastructure (SANTHOSHKUMARKL1)
Introduction, Cloud Infrastructure: Cloud computing, Cloud computing delivery models and services, Ethical issues, Cloud vulnerabilities, Cloud computing at Amazon, Cloud computing: the Google perspective, Microsoft Windows Azure and online services, Open-source software platforms for private clouds.
Cloud computing is Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing is a hot topic all over the world nowadays; through it, customers can access information and computing power via a web browser. As the adoption and deployment of cloud computing increase, it is critical to evaluate the performance of cloud environments. Currently, modeling and simulation technology has become a useful and powerful tool in the cloud computing research community to deal with these issues. Cloud simulators are required for cloud system testing to decrease complexity and separate quality concerns. Cloud computing means saving and accessing data over the internet instead of local storage. In this paper, we provide a short review of the types, models and architecture of the cloud environment.
Nowadays, work is done by hiring space and resources from cloud providers in order to work effectively and at less cost. This paper describes the cloud, its challenges, evolution, and attacks, along with the approaches required to handle data on the cloud. Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. The aim of this review paper is to raise awareness of this emerging technology, which saves users cost.
Security & privacy issues of cloud & grid computing networks (ijcsa)
Cloud computing is a new field in Internet computing that provides novel perspectives on internetworking technologies. Cloud computing has become a significant technology in the field of information technology. Security of confidential data is a very important area of concern, as it can lead to very big problems if unauthorized users gain access to it. Cloud computing should have proper techniques whereby data is segregated properly for data security and confidentiality. This paper strives to compare and contrast cloud computing with grid computing, along with tools, simulation environments, and tips to store data and files safely in the cloud.
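The data-segregation idea mentioned above can be sketched minimally. This is our own illustration under stated assumptions (the field names and the confidential set are hypothetical), not a technique from the paper: sensitive fields are separated out before anything is handed to a cloud provider, and in practice the local part would additionally be encrypted client-side.

```python
# Illustrative data segregation before cloud upload (not from the paper):
# confidential fields stay local, and only non-sensitive fields would be
# handed to the cloud provider.
CONFIDENTIAL = {"ssn", "password"}   # hypothetical sensitive field names

def segregate(record):
    """Split a record dict into (cloud_safe, local_only) parts."""
    cloud = {k: v for k, v in record.items() if k not in CONFIDENTIAL}
    local = {k: v for k, v in record.items() if k in CONFIDENTIAL}
    return cloud, local

cloud, local = segregate({"name": "alice", "ssn": "123-45-6789", "city": "NY"})
print(cloud)   # non-sensitive fields, safe to store remotely
print(local)   # sensitive fields, kept (or encrypted) on the client
```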
An Efficient MDC based Set Partitioned Embedded Block Image Coding (Dr. Amarjeet Singh)
In this paper, fast, efficient, simple and widely used Set Partitioned Embedded bloCK (SPECK) based coding is performed on multiple descriptions of a transformed image. The maximum potential of this type of coding can be exploited with the discrete wavelet transform (DWT) of images. Two correlated descriptions are generated from a wavelet-transformed image to ensure meaningful transmission of the image over noise-prone wireless channels. These correlated descriptions are encoded by the set partitioning technique through SPECK coders and transmitted over wireless channels. The quality of the reconstructed image at the decoder side depends upon the number of descriptions received: the more descriptions received at the output side, the better the quality of the reconstructed image. However, if any of the multiple descriptions is lost, the receiver can estimate it by exploiting the correlation between the descriptions. The simulations performed on an image in MATLAB give decent performance and results even after half of the descriptions are lost in transmission.
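The multiple-description principle above, that a lost description can be estimated from the correlation with a surviving one, can be shown on a 1-D toy signal. This is a deliberately simplified sketch (even/odd sample splitting with linear interpolation), not the paper's DWT/SPECK pipeline.

```python
# Toy multiple-description coding sketch (assumption: simple even/odd split,
# not the paper's SPECK coder): a signal is split into two correlated
# descriptions; if one is lost, its samples are estimated from the other.
def make_descriptions(signal):
    """Description 1 = even-indexed samples, description 2 = odd-indexed."""
    return signal[0::2], signal[1::2]

def reconstruct(even, odd=None):
    """If the odd description is lost, estimate each odd sample as the
    average of its even-indexed neighbours (exploiting correlation)."""
    if odd is None:
        odd = [(even[i] + even[min(i + 1, len(even) - 1)]) / 2
               for i in range(len(even))]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])          # re-interleave the two descriptions
    return out

sig = [10, 12, 14, 16, 18, 20]
d1, d2 = make_descriptions(sig)
print(reconstruct(d1, d2))   # both received: perfect reconstruction
print(reconstruct(d1))       # description 2 lost: interpolated estimate
```

With both descriptions the signal is recovered exactly; with one lost, the estimate is close for smooth signals, which is exactly why the descriptions are generated to be correlated.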
Efficient Architectural Framework of Cloud Computing (Souvik Pal)
Cloud computing is a model that enables adaptive, convenient, on-demand network access to a collective pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision effort or service provider interaction. From a business perspective, the viable achievements of Cloud Computing and recent developments in Grid computing have brought about the platform that has introduced virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to disguise complexity overhead for end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Novel Method for Prevention of Bandwidth Distributed Denial of Service AttacksIJERD Editor
Distributed Denial of Service (DDoS) attacks have become a massive threat to the Internet, whose traditional architecture is vulnerable to them. An attacker first acquires an army of zombies, and then instructs that army when to start an attack and whom to target. In this paper, the techniques used to perform DDoS attacks, the tools used to mount them, and the countermeasures for detecting attackers and eliminating Bandwidth Distributed Denial of Service (B-DDoS) attacks are reviewed, with a focus on the various flooding techniques employed.
The main purpose of this paper is to design an architecture that can reduce Bandwidth Distributed Denial of Service attacks and keep the victim site or server available to normal users by eliminating the zombie machines. Our primary focus is to discuss how normal machines are turned into zombies (bots), how an attack is initiated, the DDoS attack procedure, and how an organization can save its server from becoming a DDoS victim. To demonstrate this, we implemented a simulated environment with Cisco switches, routers, a firewall, several virtual machines, and some attack tools to reproduce a real DDoS attack. Using time scheduling, resource limiting, system logs, access control lists, and a modular policy framework, we stopped the attack and identified the attacker (bot) machines.
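The "resource limiting" countermeasure mentioned above can be illustrated with a token-bucket limiter: each source gets a budget of requests per second, and flood traffic beyond the burst allowance is dropped. This is a generic sketch, not the Cisco / Modular Policy Framework configuration the paper used; all rates and names are assumed for illustration.

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: tokens refill at `rate` per second
    up to a capacity of `burst`; each allowed request spends one token."""
    def __init__(self, rate, burst):
        self.rate = rate            # tokens added per second
        self.burst = burst          # bucket capacity
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# a 20-request flood against a 5 req/s limiter with a burst of 10:
# the burst is absorbed, the rest of the flood is dropped
bucket = TokenBucket(rate=5, burst=10)
results = [bucket.allow() for _ in range(20)]
```

In a real deployment the same logic sits per-source-IP in the firewall or router policy, which is what lets legitimate users through while zombies exhaust their budget.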
Hearing loss is one of the most common human impairments. It is estimated that by the year 2015 more than 700 million people will suffer mild deafness. Most can be helped by hearing-aid devices, depending on the severity of their hearing loss. This paper describes the implementation and characterization details of a dual-channel transmitter front end (TFE) for digital hearing aid (DHA) applications that uses novel micro-electromechanical-systems (MEMS) audio transducers and ultra-low-power, power-scalable analog-to-digital converters (ADCs), which enable a very low form factor, energy-efficient implementation for next-generation DHAs. The contribution of the design is the implementation of the dual-channel MEMS microphones and the power-scalable ADC system.
Influence of tensile behaviour of slab on the structural Behaviour of shear c...IJERD Editor
A composite beam is composed of a steel beam and a slab connected by means of shear connectors
like studs installed on the top flange of the steel beam to form a structure behaving monolithically. This study
analyzes the effects of the tensile behavior of the slab on the structural behavior of the shear connection like slip
stiffness and maximum shear force in composite beams subjected to hogging moment. The results show that the
shear studs located in the crack-concentration zones due to large hogging moments sustain significantly smaller
shear force and slip stiffness than the other zones. Moreover, the reduction of the slip stiffness in the shear
connection appears also to be closely related to the change in the tensile strain of rebar according to the increase
of the load. Further experimental and analytical studies shall be conducted considering variables such as the
reinforcement ratio and the arrangement of shear connectors to achieve efficient design of the shear connection
in composite beams subjected to hogging moment.
Gold prospecting using Remote Sensing ‘A case study of Sudan’IJERD Editor
Gold has been extracted from northeast Africa for more than 5000 years, and this may be the first
place where the metal was extracted. The Arabian-Nubian Shield (ANS) is an exposure of Precambrian
crystalline rocks on the flanks of the Red Sea. The crystalline rocks are mostly Neoproterozoic in age. The ANS includes the nations of Israel, Jordan, Egypt, Saudi Arabia, Sudan, Eritrea, Ethiopia, Yemen, and Somalia. The Arabian-Nubian Shield consists of juvenile continental crust that formed between 900 and 550 Ma, when intra-oceanic arcs were welded together along ophiolite-decorated sutures. Primary Au mineralization probably developed in association with the growth of the intra-oceanic arcs and the evolution of back-arcs. Multiple episodes of deformation have obscured the primary metallogenic setting, but at least some of the deposits preserve evidence that they originated as sea-floor massive sulphide deposits.
The Red Sea Hills region is a vast span of rugged, harsh and inhospitable terrain with an inimical, moon-like landscape; nevertheless, since ancient times it has been famed as an abode of gold and was a major source of wealth for the Pharaohs of ancient Egypt. The Pharaohs' old workings have been periodically rediscovered through time. Recent endeavours by the Geological Research Authority of Sudan led to the discovery of a score of occurrences of gold and massive sulphide mineralization. In the 1990s the Geological Research Authority of Sudan (GRAS), in cooperation with BRGM, utilized Landsat TM satellite data and the spectral-ratio technique to map possible mineralized zones in the Red Sea Hills of Sudan. The study mapped a gossan-type gold mineralization. The band-ratio technique was applied to the Arbaat area and the signature of an alteration zone was detected; such alteration zones are commonly associated with mineralization. A field check confirmed the existence of a stockwork of gold-bearing quartz in the alteration zone. Another type of gold mineralization discovered using remote sensing is the gold associated with metachert in the Atmur Desert.
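The band-ratio technique used in such studies can be sketched in a few lines of numpy. The TM5/TM7 ratio chosen here is the conventional one for highlighting hydroxyl-bearing alteration minerals, and the threshold and synthetic pixel values are illustrative choices, not values taken from the GRAS/BRGM work.

```python
import numpy as np

def band_ratio(numer, denom, eps=1e-6):
    """Pixelwise band ratio used to enhance spectral contrasts.
    For Landsat TM, the TM5/TM7 ratio is commonly used to highlight
    hydroxyl-bearing (clay) alteration minerals."""
    return numer.astype(float) / (denom.astype(float) + eps)

# synthetic 2x2 scene: the bottom-right pixel mimics an altered zone
# (high TM5 reflectance, strong TM7 absorption)
tm5 = np.array([[80, 82], [79, 150]])
tm7 = np.array([[80, 81], [80, 50]])

ratio = band_ratio(tm5, tm7)
anomaly = ratio > 2.0        # threshold flags candidate alteration pixels
```

On real imagery the thresholded ratio image is then field-checked, exactly as the abstract describes for the Arbaat area.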
Reducing Corrosion Rate by Welding DesignIJERD Editor
The paper addresses the importance of welding design in preventing corrosion of steel. Welding is used to join pipes, bridge profiles, spindles, and many other parts of engineering construction. The problems associated with welding are common issues in these fields, especially corrosion. Corrosion can be reduced by many methods, among them painting, controlling humidity, and good welding design. This research found that reducing residual stress in the weld helps to reduce the corrosion rate.
Preheating at 500°C and 600°C gives better protection against corrosion than preheating at 400°C. For all welding groove types, material preheated at 500°C or 600°C lost 0.5%-0.69% of its mass after the 14-day corrosion test, while material preheated at 400°C lost 0.57%-0.76%.
The welding groove also influences the corrosion rate. X- and V-type welding grooves give better corrosion resistance than 1/2V and 1/2X grooves. After the 14-day corrosion test, the samples with the X groove lost 0.5%-0.57%, the samples with the V groove lost 0.51%-0.59%, and the samples with the 1/2V and 1/2X grooves lost 0.58%-0.71%.
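The percent-loss figures above come from mass-loss measurements, which can also be converted into a corrosion rate. This sketch uses the standard ASTM G1-style conversion; the specimen area and the typical carbon-steel density are assumed for illustration, not taken from the paper.

```python
def percent_mass_loss(m_initial, m_final):
    """Percent mass loss over the test period - the metric the paper
    reports (e.g. 0.5%-0.69% after the 14-day test)."""
    return (m_initial - m_final) / m_initial * 100.0

def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=7.85):
    """ASTM G1-style conversion of mass loss to a corrosion rate in mm/year.
    K = 8.76e4 for W in grams, A in cm^2, T in hours, density in g/cm^3.
    7.85 g/cm^3 is a typical carbon-steel density; the specimen geometry
    below is an assumed example."""
    K = 8.76e4
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

loss_pct = percent_mass_loss(100.0, 99.43)             # a 0.57% mass loss
rate = corrosion_rate_mm_per_year(0.57, 25.0, 14 * 24)  # 14-day exposure
```

Expressing the result as mm/year makes specimens of different size and exposure time directly comparable, which the raw percent loss does not.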
Router 1X3 – RTL Design and VerificationIJERD Editor
Routing is the process of moving a packet of data from source to destination and enables messages
to pass from one computer to another and eventually reach the target machine. A router is a networking device
that forwards data packets between computer networks. It is connected to two or more data lines from different
networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and shows how the various sub-modules of the router, i.e. the register, FIFO, FSM and synchronizer, are synthesized, simulated and finally connected to the top module.
Active Power Exchange in Distributed Power-Flow Controller (DPFC) At Third Ha...IJERD Editor
This paper presents a component within the flexible ac-transmission system (FACTS) family, called
distributed power-flow controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC)
with an eliminated common dc link. The DPFC has the same control capabilities as the UPFC, which comprise
the adjustment of the line impedance, the transmission angle, and the bus voltage. The active power exchange
between the shunt and series converters, which is through the common dc link in the UPFC, is now through the
transmission lines at the third-harmonic frequency. The DPFC employs multiple small-size single-phase converters, which reduces the cost of equipment, requires no voltage isolation between phases, and increases redundancy and thereby reliability. The principle and analysis of the DPFC are presented in this paper, and the corresponding simulation results, carried out on a scaled prototype, are also shown.
Mitigation of Voltage Sag/Swell with Fuzzy Control Reduced Rating DVRIJERD Editor
Power quality has become an increasingly pivotal issue for industrial electricity consumers in recent times. Modern industries employ sensitive power-electronic equipment, control devices and non-linear loads as part of automated processes to increase energy efficiency and productivity; the resulting growth in sophisticated and sensitive electronic equipment makes voltage disturbances the most common power quality problem in industrial systems. This paper discusses the design and simulation of a dynamic voltage restorer (DVR) for improving power quality and reducing the harmonic distortion seen by sensitive loads. Power quality problems arise from non-standard voltage, current and frequency, and voltage sag, swell, flicker and harmonics all affect sensitive loads. The compensation capability of a DVR depends primarily on its maximum voltage-injection ability and the amount of stored energy available within the restorer. The device is connected in series with the distribution feeder at medium voltage. A fuzzy logic controller is used to produce the gate pulses for the control circuit of the DVR, and the circuit is simulated using MATLAB/SIMULINK software.
Study on the Fused Deposition Modelling In Additive ManufacturingIJERD Editor
The additive manufacturing process, also popularly known as 3-D printing, is a process where a product is created in a succession of layers. It is based on a novel materials-incremental manufacturing philosophy. Unlike conventional manufacturing processes, where material is removed from a given workpiece to derive the final shape of a product, 3-D printing builds the product from scratch, obviating the need to cut away material and thus preventing wastage of raw materials. Commonly used raw materials for the process are ABS plastic, PLA and nylon; recently the use of gold, bronze and wood has also been implemented. Geometric complexity adds essentially no cost to the process, as an object of any shape and size can be manufactured.
Spyware triggering system by particular string valueIJERD Editor
This computer program can be used for good or bad purposes, in hacking or for general use. It can be seen as the next step beyond hacking techniques such as keyloggers and spyware. In this system, once the user or hacker stores a particular string as input, the software continually compares the user's typing activity with that stored string and, if it matches, launches the spyware program.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic schemes, i.e. Jsteg, F5, Outguess and DWT-based methods. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of the characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-array. A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme gives improved outcomes in attacking these schemes.
Secure Image Transmission for Cloud Storage System Using Hybrid SchemeIJERD Editor
Data in the cloud is transferred between servers and users. The privacy of that data is very important, as it includes personal information; if the data is hacked, it can be used to defame a person. Delays also occur during data transmission, e.g. in mobile communication where bandwidth is low. Hence, compression algorithms are proposed for fast and efficient transmission, encryption is used for security, and blurring provides an additional layer of security. These algorithms are hybridized to achieve robust, efficient security and transmission over a cloud storage system.
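The compress-then-encrypt ordering at the heart of such a hybrid scheme can be sketched as follows. The SHA-256-chained keystream is a deliberately simple illustration and is NOT a secure cipher (a real system would use an authenticated scheme such as AES-GCM); none of the function names or parameters come from the paper.

```python
import zlib, hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key by chained SHA-256.
    Illustrative only - not a vetted cipher."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def protect(data: bytes, key: bytes) -> bytes:
    """Compress-then-encrypt, the ordering a hybrid scheme relies on:
    compression must come first, since ciphertext is incompressible."""
    compressed = zlib.compress(data)
    ks = keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))

def recover(blob: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))

# highly redundant stand-in for image data: compresses well, then encrypts
image = b"\x10\x10\x10" * 4000
blob = protect(image, b"secret-key")
```

Because the redundant "image" shrinks dramatically before encryption, the transmitted blob is far smaller than the original, which is exactly the bandwidth saving the abstract targets for mobile links.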
Application of Buckley-Leverett Equation in Modeling the Radius of Invasion i...IJERD Editor
A thorough review of the existing literature indicates that the Buckley-Leverett equation only analyzes waterflood practices directly, without any adjustment for real reservoir scenarios; this introduces quite a number of errors into the analyses. Also, for most waterflood scenarios, a radial investigation is more appropriate than a simplified linear system. This study investigates the adaptation of the Buckley-Leverett equation to estimate the radius of invasion of the displacing fluid during waterflooding. The model is also adapted for a microbial flood, and a comparative analysis is conducted for waterflooding and microbial flooding. The results not only record success in determining the radial distance of the leading edge of water during the flooding process, but also give a clearer understanding of how microbes can enhance oil production through in-situ production of bio-products such as biosurfactants, biogenic gases and bio-acids.
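The core of Buckley-Leverett analysis is the water fractional-flow curve, f_w = 1 / (1 + (k_ro/k_rw)(μ_w/μ_o)), which can be sketched with assumed Corey-type relative permeabilities. The exponents, endpoints and viscosities below are illustrative, and the paper's radial adaptation is not reproduced here.

```python
def fractional_flow(sw, mu_w=1.0, mu_o=5.0, swc=0.2, sor=0.2):
    """Water fractional flow f_w = 1 / (1 + (k_ro/k_rw) * (mu_w/mu_o)),
    the quantity at the heart of Buckley-Leverett displacement theory.
    Corey-type relative permeability curves with example endpoints and
    exponents are assumed (not taken from the paper)."""
    s = (sw - swc) / (1.0 - swc - sor)        # normalized water saturation
    s = min(max(s, 1e-9), 1 - 1e-9)           # clip to the mobile range
    krw = 0.4 * s ** 2                        # water relative permeability
    kro = 0.9 * (1.0 - s) ** 2                # oil relative permeability
    return 1.0 / (1.0 + (kro / krw) * (mu_w / mu_o))

# f_w rises monotonically from ~0 to ~1 between the endpoint saturations
curve = [fractional_flow(0.2 + 0.06 * i) for i in range(11)]
```

The frontal-advance step of the method then locates the shock front from the tangent to this curve, and the radial adaptation in the paper maps that frontal position to a radius of invasion.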
Gesture Gaming on the World Wide Web Using an Ordinary Web CameraIJERD Editor
Gesture gaming is a method by which users with a laptop/PC/Xbox play games using natural or bodily gestures. This paper presents a way of playing free flash games on the Internet using an ordinary webcam with the help of open-source technologies. Emphasis in human-activity recognition is placed on pose estimation and the consistency of the player's pose, estimated with the help of an ordinary web camera at resolutions from VGA to 20 MP. Our work involved showing the user a 10-second documentary on how to play a particular game using gestures and on the various kinds of gestures that can be performed in front of the system. The initial RGB values for the gesture component are obtained by instructing the user to place the component in a red box for about 10 seconds after the short documentary finishes. The system then opens the relevant game on popular flash-game sites such as Miniclip, Games Arcade or GameStop, loads it by clicking at the appropriate places, and brings it to the point where the user needs only to perform gestures to start playing. At any time the user can call off the game by hitting the Esc key, whereupon the program releases all controls and returns to the desktop. The results obtained using an ordinary webcam were noted to match those of the Kinect, and users could relive the gaming experience of free flash games on the net. Effective in-game advertising could therefore also be achieved, resulting in disruptive growth for advertising firms.
Hardware Analysis of Resonant Frequency Converter Using Isolated Circuits And...IJERD Editor
The LLC resonant frequency converter is essentially a combination of series and parallel resonant circuits. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one lies in the ZCS region [5], so for this application the converter cannot be designed to work at that resonant frequency. The LLC resonant converter has existed for a long time, but because its characteristics were poorly understood it was used as a series resonant converter with an essentially passive (resistive) load. Here it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where the converter behaves very much like a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control circuit plays a very important role: the 555 timer used here provides a near-ideal square wave, since the control circuit introduces no slew, keeping the edges sharp. The dead-band circuit provides a dead band of a few microseconds to avoid simultaneous firing of the two pairs of IGBTs when one pair switches off and the other on within a very short period of time. An isolator circuit is associated with every stage: it acts as a driver, and each IGBT is isolated through its own transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean dc waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements forming the tank circuits.
Simulated Analysis of Resonant Frequency Converter Using Different Tank Circu...IJERD Editor
The LLC resonant frequency converter is essentially a combination of series and parallel resonant circuits. The LCC resonant converter has the disadvantage that, although it has two resonant frequencies, the lower one lies in the ZCS region [5], so for this application the converter cannot be designed to work at that resonant frequency. The LLC resonant converter has existed for a long time, but because its characteristics were poorly understood it was used as a series resonant converter with an essentially passive (resistive) load. Here it is designed to operate at a switching frequency higher than the resonant frequency of the series resonant tank of Lr and Cr, where the converter behaves very much like a series resonant converter. The benefit of the LLC resonant converter is its narrow switching-frequency range at light load [6]. The control circuit plays a very important role: the 555 timer used here provides a near-ideal square wave, since the control circuit introduces no slew, keeping the edges sharp. The dead-band circuit provides a dead band of a few microseconds to avoid simultaneous firing of the two pairs of IGBTs when one pair switches off and the other on within a very short period of time. An isolator circuit is associated with every stage: it acts as a driver, and each IGBT is isolated through its own transformer supply [3]. The IGBTs are fired with the appropriate signals from the preceding boards, and finally a high-frequency rectifier circuit with a filtering capacitor is used to obtain a clean dc waveform. The basic goal of this analysis is to observe the waveforms and characteristics of converters with differently positioned passive elements forming the tank circuits. The supporting simulation is carried out with the PSIM 6.0 software tool.
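The dead-band logic described in both abstracts can be sketched as a pair of complementary gate waveforms that are never high simultaneously: each signal is held low for a short interval after the other's falling edge. The period and dead-time values below are illustrative, not taken from the hardware.

```python
def gate_signals(period_us=50, dead_us=2, step_us=1):
    """Complementary gate waveforms for one IGBT leg with an explicit
    dead band: both signals stay low for dead_us after each transition,
    so the two devices can never conduct at the same time."""
    high, low = [], []
    half = period_us // 2
    for t in range(0, period_us, step_us):
        phase = t % period_us
        # upper device: on after the dead band, off at the half period
        high.append(1 if dead_us <= phase < half else 0)
        # lower device: on a dead band after the half period, off at wrap
        low.append(1 if half + dead_us <= phase < period_us else 0)
    return high, low

hi, lo = gate_signals()
overlap = sum(h & l for h, l in zip(hi, lo))   # simultaneous conduction samples
```

The `overlap` count being zero is precisely the shoot-through protection the dead-band circuit provides in hardware.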
An amateur radio operator, also known as a HAM, communicates with other HAMs through radio waves. Wireless communication in which the Moon is used as a natural satellite is called Moon-bounce or EME (Earth-Moon-Earth). Long-distance communication (DXing) using Very High Frequency (VHF) amateur HAM radio used to be difficult, but even with a modest setup comprising a good transceiver, a power amplifier and a high-gain, highly directive antenna, VHF DXing is possible. Generally a 2x11 Yagi antenna, together with a rotor to set the horizontal and vertical angles, is used. Moon-tracking software gives the exact location and visibility of the Moon at both stations, and other vital data for acquiring the real-time position of the Moon.
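The difficulty of EME work is dominated by the enormous round-trip path loss, which the radar equation lets us estimate. The lunar distance, radius and ~6.5% radio reflectivity used here are typical textbook values, not figures from the article; the actual loss varies with lunar distance and libration.

```python
import math

def eme_path_loss_db(freq_mhz, moon_dist_m=3.844e8,
                     moon_radius_m=1.7374e6, albedo=0.065):
    """Round-trip Earth-Moon-Earth path loss from the radar equation,
    treating the Moon as a disk with ~6.5% radio reflectivity:
    L = 10*log10( (4*pi)^3 * d^4 / (sigma * lambda^2) )."""
    lam = 299.792458 / freq_mhz                       # wavelength in metres
    sigma = albedo * math.pi * moon_radius_m ** 2     # effective radar cross-section
    ratio = ((4 * math.pi) ** 3 * moon_dist_m ** 4) / (sigma * lam ** 2)
    return 10 * math.log10(ratio)

loss_2m = eme_path_loss_db(144.0)   # 2 m amateur band
```

A loss on the order of 250 dB at 144 MHz is why EME demands the high-gain antennas, power amplifiers and precise Moon tracking the article describes.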
"MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chrom...IJERD Editor
Simple Sequence Repeats (SSRs), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. The nucleotide sequences of polymorphic forms of the same gene should be 99.9% identical, so extracting microsatellites from a gene is crucial: when microsatellite repeat counts are compared between individuals and differ greatly, a disorder may be indicated. The Y chromosome likely contains 50 to 60 genes that provide instructions for making proteins. Because only males have the Y chromosome, the genes on this chromosome tend to be involved in male sex determination and development. Several microsatellite extractors exist, but they fail to extract microsatellites from large data sets gigabytes or terabytes in size. The proposed tool, "MS-Extractor: An Innovative Approach to Extract Microsatellites on 'Y' Chromosome", can extract both perfect and imperfect microsatellites from large data sets of the human 'Y' genome. The proposed system uses string matching with a sliding-window approach to locate microsatellites and extract them.
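The extraction of perfect microsatellites can be sketched compactly with a backreference regex, a stand-in for the tool's sliding-window string matcher. The motif-length and repeat-count bounds, and the toy sequence, are illustrative assumptions rather than MS-Extractor's actual parameters.

```python
import re

def find_ssrs(seq, min_unit=1, max_unit=6, min_repeats=4):
    """Locate perfect microsatellites (SSRs): a motif of min_unit..max_unit
    bases tandemly repeated at least min_repeats times. The lookahead lets
    candidate repeats be examined at every position; overlapping hits are
    then filtered so each repeat tract is reported once."""
    pattern = re.compile(
        r"(?=(([ACGT]{%d,%d})\2{%d,}))" % (min_unit, max_unit, min_repeats - 1))
    hits, last_end = [], 0
    for m in pattern.finditer(seq):
        if m.start() >= last_end:                 # keep non-overlapping tracts
            motif, full = m.group(2), m.group(1)
            hits.append((m.start(), motif, len(full) // len(motif)))
            last_end = m.start() + len(full)
    return hits

# toy sequence with one (CA)6 microsatellite embedded at position 4
seq = "GGAT" + "CA" * 6 + "TTCGA"
hits = find_ssrs(seq)
```

For genome-scale input a production tool would stream the sequence in chunks rather than regex-scan it whole, which is where the sliding-window design matters.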
Importance of Measurements in Smart GridIJERD Editor
Driven by the need for reliable supply, independence from fossil fuels, and the capability to provide clean energy at a fixed, lower cost, the existing power grid is transforming into the Smart Grid, whose development is a current goal of many nations. A Smart Grid should have new capabilities such as self-healing, high reliability, energy management, and real-time pricing. This new era of the smart future grid will lead to major changes in existing technologies at the generation, transmission and distribution levels. Incorporating renewable energy resources and distributed generators into the existing grid will increase the complexity, optimization problems and instability of the system, leading to a paradigm shift in the instrumentation and control requirements for Smart Grids to deliver a high-quality, stable and reliable electricity supply. Monitoring the state and stability of the grid relies on the availability of reliable measurement data. This paper discusses the measurement areas that pose new measurement challenges, the development of smart meters, and the critical parameters of electric energy to be monitored for improving the reliability of power systems.
Study of Macro level Properties of SCC using GGBS and Lime stone powderIJERD Editor
One of the major environmental concerns is the disposal of waste materials and the utilization of industrial by-products. Limestone quarries produce millions of tons of waste dust powder every year. Having a considerably higher degree of fineness than cement, this material may be utilized as a partial replacement for cement. For this purpose, an experiment was conducted to investigate the possibility of using limestone powder, in combination with GGBS, in the production of SCC, and how it affects the fresh and mechanical properties of SCC. First, SCC was made by replacing cement with GGBS at 10, 20, 30, 40 and 50 percent; then, taking the optimum GGBS mix, limestone powder was blended in at 5, 10, 15 and 20 percent as a partial replacement for cement. Test results show that the SCC mix combining 30% GGBS and 15% limestone powder gives the maximum compressive strength, with fresh properties also within the limits prescribed by EFNARC.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with the industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of Advanced Process Control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of Digital Transformation in the Water Industry.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. The theory provides a method to calculate the ultimate bearing capacity of soil: the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
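The ultimate bearing capacity of a strip footing, q_ult = c·Nc + γ·D·Nq + 0.5·γ·B·Nγ, can be sketched numerically. The factor expressions below are the widely tabulated Reissner/Prandtl/Vesic forms, close to but not identical with Terzaghi's original factors, and the example inputs are illustrative.

```python
import math

def bearing_capacity_factors(phi_deg):
    """Bearing-capacity factors N_c, N_q, N_gamma as functions of the
    soil friction angle phi (degrees). Reissner's N_q, Prandtl's N_c,
    and Vesic's N_gamma approximation are used."""
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45 + phi_deg / 2)) ** 2
    nc = (math.pi + 2) if phi_deg == 0 else (nq - 1) / math.tan(phi)
    ng = 2 * (nq + 1) * math.tan(phi)
    return nc, nq, ng

def ultimate_bearing_capacity(c, gamma, depth, width, phi_deg):
    """Strip-footing capacity q_ult = c*Nc + gamma*D*Nq + 0.5*gamma*B*Ng
    (c: cohesion kPa, gamma: unit weight kN/m^3, depth/width in m)."""
    nc, nq, ng = bearing_capacity_factors(phi_deg)
    return c * nc + gamma * depth * nq + 0.5 * gamma * width * ng

nc30, nq30, ng30 = bearing_capacity_factors(30.0)
# illustrative example: c = 10 kPa, gamma = 18 kN/m^3, D = 1 m, B = 2 m
q_ult = ultimate_bearing_capacity(10.0, 18.0, 1.0, 2.0, 30.0)
```

For phi = 30° the factors come out near the familiar tabulated values (Nq ≈ 18.4, Nc ≈ 30.1, Nγ ≈ 22.4); a design value would then apply a safety factor to q_ult.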
Student information management system project report ii.pdfKamal Acharya
Our project is about student management, and mainly covers the various actions related to student details. It simplifies adding, editing and deleting student details, and provides a less time-consuming process for viewing, adding, editing and deleting the students' marks.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNNs), to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), it significantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 10, Issue 4 (April 2014), PP.58-66
A Comparative Study of Grid Computing and Cloud Computing
Anushila Dey, Aditi Dutt, Sumeet Kumar Jain, Vaishali Pandurangan
Department of Computer Engineering, NMIMS Mukesh Patel School of Technology Management & Engineering, Mumbai, India
Abstract: - The present competitive world is characterized by individuals and businesses constantly trying to
adapt themselves to progressive technological innovations for competitive advantage to stay ahead of the race.
Cloud Computing is one such innovation which has made considerable impact in the market. It is the practice
of delivering services over the Internet. These services can be broadly categorized into Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). For instance, more and more organizations and individuals are migrating data to servers on the cloud, thus reaping the benefits of IaaS, which eliminates the need for their own data centres and lowers maintenance costs. Cloud Computing is the amalgamation of existing technologies like Grid Computing, Utility Computing and several other models.
Grid Computing is a form of distributed and parallel computing. In simple terms, it is a set of distributed
resources working together to achieve a mutual goal thereby enhancing computational power by sharing of
resources. In this paper, we try to compare and contrast the models of Grid and Cloud Computing. We also
discuss their essential characteristics by reviewing a handful of research papers based on the two
technologies.
Keywords: - Cloud Computing; Grid Computing; Grids vs. Clouds
I. OVERVIEW: GRID COMPUTING
Grid computing is an enhanced form of distributed computing. It differs from conventional distributed
computing in that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. It
focuses on large-scale resource sharing and applies these resources to a single problem, which may
require access to huge amounts of data or an inordinate amount of CPU power at the same time in a network.
Grid computing is similar to the electric power grid, a concept in which a user can obtain electric power
from any power station in the grid regardless of its geographical location. Whenever users need additional
power, they simply plug into the power grid and receive power on demand. Similarly, in a computational
power grid, users connect to the Grid to gain access to additional computing power on demand.
The term 'grid computing' originated in the early 1990s, with the goal of making computational power
easily accessible to people on demand.
The Grid Computing paradigm became widely recognized when Ian Foster and Carl Kesselman published
their book, "The Grid: Blueprint for a New Computing Infrastructure", in 1998.
When Ian Foster formulated the idea of the 'Grid', he gave a three-point checklist to help define what a grid is [1]:
1. Coordinates resources that are not subject to centralized control,
2. Uses standard, open, general-purpose protocols and interfaces, and
3. Delivers non-trivial qualities of service.
Ian Foster, Carl Kesselman and Steve Tuecke are widely regarded as the "fathers of the grid", as they
developed the idea of the grid from various existing technologies like distributed computing, object-oriented
programming, etc.
They created the Globus Toolkit, which includes computation management, storage management, security
provisioning, data movement, monitoring, etc. Many other such tools have since been built that provide the
services needed to create an enterprise or global grid.
According to Foster, grid computing is a hardware and software infrastructure that offers dependable,
consistent, pervasive and inexpensive access to high-end computational capabilities [1].
II. OVERVIEW: CLOUD COMPUTING
During the last several decades, advancements in technology have helped the human race create,
operate and share growing amounts of information in many new ways. New computing applications, in
turn, lead to demands for even more powerful computing infrastructure. Cloud Computing is thus a
computing paradigm in which data from different locations and datacentres is stored, and which provides
dynamically scalable infrastructure for application, data and file storage.
Thus, Cloud Computing can be described as a model that delivers information technology services in which
resources are retrieved from the web via web-based tools and applications. Cloud information and
services can be accessed whenever an electronic device has access to the web, which allows employees
to work from remote places.
The guiding principle of Cloud Computing is the "reusability of IT resources". Cloud Computing is often
compared with traditional concepts such as Grid Computing, distributed computing, Utility Computing and
autonomic computing, all of which aim to broaden computing horizons across organizational boundaries.
Cloud Computing is a specialized distributed paradigm. In contrast to traditional ones,
it is scalable and can be encapsulated as an abstract entity that delivers different levels of services to
customers outside the Cloud, driven by economies of scale. The services thus provided can be delivered on
demand and dynamically configured.
Several factors have contributed to the growth of Cloud Computing:
1. Decreasing hardware costs and increasing computing power and storage capacity.
2. The rapidly growing size of data in internet publishing and archiving.
3. The adoption of Services Computing and Web 2.0 applications.
Many aspects of Cloud Computing overlap with existing technologies such as Grid Computing,
Utility Computing and distributed computing in general. In fact, Grid Computing is regarded as the backbone of
Cloud Computing and provides its infrastructure support; Cloud Computing evolved out of Grid
Computing. The driver of this evolution was a shift in focus from an infrastructure that delivers storage and
compute resources (as in Grids) to one that is economy based, aiming to deliver more abstract
resources and services (as in Clouds). Utility Computing, in turn, is a business model in which
computing resources, such as computation and storage, are packaged as metered services similar to a physical
public utility, such as electricity or the public switched telephone network.
III. GRID CHARACTERISTICS
The collaborative nature of Grids led to the emergence of multiple organizations that function as one
unit, pooling their competencies and resources in pursuit of one or more shared goals.
Administration of resources in Grids is therefore handled through the concept of a virtual organization: a
dynamic set of individuals and/or institutions aligned with a set of resource-sharing rules and conditions to
pursue a specific (research) goal. Organizations and individuals belonging to a specific virtual organization
may share resources for a specific time frame to achieve that goal.
Some characteristics of a Grid are:
Large scale: a Grid must be able to deal with a number of resources ranging from just a few to
millions. This raises the problem of performance degradation as the Grid size increases.
Heterogeneity: a Grid hosts software and hardware resources that vary widely, ranging
from data, files, software components or programs to sensors, scientific instruments, display devices,
personal digital organizers, computers, super-computers and networks [3].
Geographical distribution: a Grid's resources may be located at distant places [3].
Resource sharing: resources in a Grid belong to many different organizations that allow other
organizations (i.e. users) to access them. Non-local resources can thus be used by applications,
promoting efficiency and reducing costs.
Resource coordination: resources in a Grid must be coordinated in order to provide combined
computing capabilities.
Multiple administrations: each organization may establish different security and administrative
policies under which its resources can be accessed and used [3].
Consistent access: a Grid must be built with standard services, protocols and interfaces, thus hiding
the heterogeneity of the resources while allowing scalability. Without such standards,
application development and pervasive use would not be possible.
Pervasive access: the Grid must grant access to available resources by adapting to a dynamic
environment in which resource failure is commonplace [3].
Transparent access: a Grid should be seen as a single virtual computer.
Dependable access: a Grid must assure the delivery of services under established Quality of Service
(QoS) requirements.
IV. CLOUD CHARACTERISTICS
Cloud Computing does not represent a new technology. Rather, it is a novel way of using existing
technologies to achieve efficient resource pooling and resource management. Cloud Computing thus relies
on existing technologies such as Grid Computing, service-oriented computing, virtualization and Web 2.0,
augmenting them to achieve Cloud-related goals like elasticity and energy efficiency.
Virtualization lies at the heart of Cloud Computing: it decouples computation from the hardware
layer and facilitates on-demand provisioning of computational resources for arbitrary users.
The five essential characteristics of Cloud Computing are as follows:
On-demand self-service: computing services such as email, applications, network or server services can
be provisioned without requiring human interaction with each service provider (e.g., Amazon Web
Services (AWS), Microsoft, Google, and IBM).
Broad network access: Cloud capabilities are available over the network and can be accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms such as mobile
phones, laptops and PDAs.
Resource pooling: the service provider's computing resources are pooled to serve multiple
customers using a multi-tenant model. Resources are dynamically assigned to consumers on
demand.
Rapid elasticity: Cloud services can be rapidly and elastically provisioned to quickly scale out, and
rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often
appear to be unlimited and can be purchased in any quantity at any time.
Measured service: Cloud resource usage can be measured, controlled, and reported,
providing transparency for both the provider and the consumer.
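As a toy illustration of the last two characteristics, the sketch below models rapid elasticity (scaling instances out and in on demand) and measured service (metering instance-hours and billing only for measured usage). The class, the rate and all the numbers are invented for illustration; they do not correspond to any real provider's API or prices.

```python
class ElasticPool:
    """Toy model of rapid elasticity and measured service (illustrative only)."""
    def __init__(self, rate_per_instance_hour):
        self.rate = rate_per_instance_hour
        self.instances = 0
        self.billed_hours = 0.0

    def scale_out(self, n):
        # Rapid elasticity: provision n more instances on demand.
        self.instances += n

    def scale_in(self, n):
        # Release instances when load drops.
        self.instances = max(0, self.instances - n)

    def run_for(self, hours):
        # Measured service: usage is metered per instance-hour.
        self.billed_hours += self.instances * hours

    def bill(self):
        # Pay per use: charge only for the metered usage.
        return self.billed_hours * self.rate

pool = ElasticPool(rate_per_instance_hour=0.10)
pool.scale_out(4)             # scale out for peak load
pool.run_for(2)               # 4 instances x 2 h = 8 instance-hours
pool.scale_in(3)              # scale in once the peak passes
pool.run_for(10)              # 1 instance x 10 h = 10 instance-hours
print(round(pool.bill(), 2))  # 18 instance-hours at $0.10 each
```

To the consumer the pool appears unlimited; the provider simply meters whatever is actually consumed.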
V. GRIDS VS. CLOUDS
A. Business Model
The traditional business model for software is a one-time payment for unlimited use.
The main characteristic of a Cloud-based model, by contrast, is "measured service", i.e. pay per use: in a
cloud-based business model, a customer pays the provider on a consumption basis, much as utility
companies charge for basic utilities.
Amazon essentially provides a centralized Cloud consisting of the Compute Cloud EC2 and the Data Cloud S3.
EC2 charges are based on consumption for each instance type; S3 is charged per GB-month of storage
used, and data transfer is charged per TB per month, depending on the source and target of the transfer.
To use these services, a user only needs a credit card to get on-demand access to processors in data
centres distributed throughout the world [2].
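To make the consumption-based model concrete, the short calculation below sketches how a monthly bill could be composed from per-instance-hour, per-GB-month and per-TB-transfer rates. All rates and quantities are made up for illustration; they are not Amazon's actual prices.

```python
# Hypothetical pay-per-use rates (illustrative only, not real AWS prices).
INSTANCE_HOUR = 0.08      # $ per instance-hour of compute
STORAGE_GB_MONTH = 0.02   # $ per GB-month of storage
TRANSFER_TB = 90.0        # $ per TB of data transferred

def monthly_bill(instance_hours, stored_gb, transferred_tb):
    """Consumption-based billing: each metered unit is charged separately."""
    return (instance_hours * INSTANCE_HOUR
            + stored_gb * STORAGE_GB_MONTH
            + transferred_tb * TRANSFER_TB)

# One instance running a full 720-hour month, 500 GB stored, 0.2 TB transferred:
total = monthly_bill(720, 500, 0.2)  # 57.6 + 10.0 + 18.0 dollars
print(round(total, 2))
```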
The business model for Grids is project-oriented: users or communities have a certain number of service
units (i.e. CPU hours) they can spend [2].
For example, the TeraGrid uses this model and requires increasingly complex proposals to be written
for increasing amounts of computational power [4]. The TeraGrid comprises a large number of Grid sites
hosted at various institutions around the country.
This model has worked well for many Grids around the globe, giving institutions incentives to join
Grids such as the TeraGrid for access to additional resources for all of their users.
B. Architecture
1) Architecture of Grid Computing
Grids use a network of resource-sharing commodity machines to deliver computation power that, in the
mid-90s, was affordable only with supercomputers and large dedicated clusters. Grids focus on integrating
existing resources together with their operating systems, hardware, and security infrastructure, and they
define and provide a set of standard protocols, middleware, toolkits and services built on top of those
protocols.
The architecture of Grid Computing is described in terms of "layers", each with a specific function. The
upper layers are generally user-centric, whereas the lower layers are more hardware-centric, focusing on
computers and networks.
The Grid comprises five layers:
1. Fabric Layer: this layer provides standardized access to local resource-specific operations. Grids
help in discovering computers (OS version, hardware configuration, usage load), storage systems, network
resources, etc. They usually rely on existing fabric components, for instance local resource managers,
together with general-purpose components such as GARA (General Architecture for Advanced Reservation) [8].
2. Connectivity Layer: this layer ensures secure connectivity to resources. It defines core
communication and authentication protocols for easy and secure network transactions, and uses the Public
Key Infrastructure (PKI): every user is recognized by a Certificate Authority (CA) within the grid. The
single sign-on method allows users to authenticate only once, creating proxy credentials that allow
services/agents to act on a user's behalf. The GSI (Grid Security Infrastructure) [9] protocol underlies every
Grid transaction.
3. Resource Layer: this layer provides access to the resources and the protocols needed for the publication,
discovery, negotiation, monitoring, accounting and payment of sharing operations on individual resources.
GRAM (Grid Resource Allocation and Management) [10] acts as a job manager and reporter: it allocates
the various computational resources and monitors the computation on those resources. GridFTP
[11] is another protocol, providing fast data access with integrated security.
4. Collective Layer: this layer coordinates the sharing of resources through services such as directory
services, and monitors and diagnoses those services. Directory services such as MDS (Monitoring and
Discovery Service) [12] allow for the monitoring and discovery of resources; Condor-G [13] and Nimrod-G
[14] are examples of co-allocation, scheduling and brokering services; MPICH [15] is a Grid-enabled
programming system; and CAS (Community Authorization Service) [16] enforces global resource policies.
5. Application Layer: the application layer is the highest layer of the Grid structure and the one that
Grid users "see" and interact with. It comprises the user applications built on the protocols and APIs,
which operate in a Virtual Organization (VO) environment. The application layer also performs general
management functions, such as tracking the users of the grid services and the service providers. Grid
workflow systems and Grid portals are good examples of user applications in this layer.
2) Architecture of Cloud Computing
Clouds are a large pool of computing or storage resources that can be accessed through standard
protocols or through an abstract interface. Clouds can also be implemented over existing grid
technologies, thereby leveraging prior efforts in security, resource management, and virtualization
support.
In comparison with Grid Computing, Cloud Computing can be described with a four-layer architecture:
1. Fabric Layer: the lowest layer in the Cloud Computing structure. It comprises the raw hardware-level
resources, such as compute, storage and network resources.
2. Unified Resource Layer: this layer comprises the encapsulated/abstracted resources that can be
exposed to the upper layer and to end users as integrated resources, for example a virtual
computer/cluster, a database system, etc.
3. Platform Layer: this layer adds a collection of specialized tools, middleware and services on top of
the unified resources to provide a development platform, for instance a Web hosting environment.
4. Application Layer: this layer contains the applications that run in the cloud for the users.
In general, the Cloud provides its users with services according to their needs, so users can request the
amount of service they require. Clouds therefore provide services at three different levels:
1. Infrastructure as a Service (IaaS) [17]: this cloud service provides users with hardware, software,
and equipment (mostly at the unified resource layer) to deliver software application environments with a
resource-usage-based pricing model. Based on application resource needs, the infrastructure can scale up
and down dynamically. For instance, through Amazon's EC2 (Elastic Compute Cloud) [18] and S3 (Simple
Storage Service) [19], the public can access both compute and storage infrastructures with a utility pricing model.
2. Platform as a Service (PaaS) [17]: this provides Cloud users with a high-level integrated environment to
build, test and deploy their applications. Developers accept certain restrictions on the type of software
they can write in exchange for built-in application scalability. For example, Google's App Engine [20]
lets users build web applications on the same scalable systems that power Google applications.
3. Software as a Service (SaaS) [17]: this delivers special-purpose software that consumers can access
remotely through the Internet with a usage-based pricing model, for example Salesforce.
C. Resource Management
Resource management is found both in Grids and Clouds and includes topics such as the compute
model, data model, virtualization and monitoring.
These topics are significant and help in understanding and resolving the main challenges involved in
Grid as well as Cloud Computing.
1) Compute Model
Most Grid systems use a batch-scheduled compute model, in which a local resource manager (LRM) manages
the compute resources for a Grid site and users submit batch jobs to request and access resources for a
specific period of time.
Most Grids have rules in place that require batch jobs to identify users, track their credentials, and state
the number of resources needed and the duration of the allocation.
Cloud Computing, by contrast, supports multi-tenancy: in the Cloud compute model, all resources are
shared by all users at the same time.
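The batch-scheduled model can be sketched as a toy local resource manager: jobs request whole nodes for a fixed walltime and wait in a queue until capacity is free. The FIFO policy, node counts, walltimes and user names below are all invented simplifications for illustration.

```python
import heapq

class BatchScheduler:
    """Toy Grid-style LRM: batch jobs wait until enough whole nodes are free."""
    def __init__(self, total_nodes):
        self.free = total_nodes
        self.queue = []      # FIFO of pending (user, nodes, walltime) jobs
        self.releases = []   # min-heap of (finish_time, nodes) for running jobs

    def submit(self, user, nodes, walltime):
        self.queue.append((user, nodes, walltime))

    def step(self, now):
        # Reclaim nodes from jobs that have finished by `now`...
        while self.releases and self.releases[0][0] <= now:
            _, n = heapq.heappop(self.releases)
            self.free += n
        # ...then start queued jobs in order while capacity remains.
        started = []
        while self.queue and self.queue[0][1] <= self.free:
            user, nodes, walltime = self.queue.pop(0)
            self.free -= nodes
            heapq.heappush(self.releases, (now + walltime, nodes))
            started.append(user)
        return started

lrm = BatchScheduler(total_nodes=8)
lrm.submit("alice", 6, walltime=2)
lrm.submit("bob", 4, walltime=1)   # must wait: only 2 nodes free after alice
print(lrm.step(now=0))             # ['alice']
print(lrm.step(now=2))             # ['bob'], once alice's nodes are released
```

In the Cloud model there is no such queue: multi-tenancy means every user's workload draws on the shared pool simultaneously.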
2) Data Model
Data is an extremely vital part of both cloud and grid computing. Data Grids have been designed specifically
to handle data-intensive applications in Grid environments. Here, virtual data plays a crucial role: it
captures the relationships between data, programs and computations, and proposes various abstractions that
a data grid can provide, namely location transparency (data can be accessed without specifying its
location), materialization transparency (data can be either recomputed on the fly or transferred upon
request, depending on availability and the cost to re-compute), and representation transparency (data can
be consumed and produced regardless of their actual physical formats and storage; data are mapped into
some abstract structural representation and manipulated in that way) [2].
3) Data Locality
Data should be located close to the CPUs to achieve good scalability, and must be distributed over various
computers to minimize communication costs; data processing therefore depends on data storage. When a file
needs to be processed, the job scheduler first consults a storage metadata service to obtain the host node
for each chunk of data, and then maps the chunks to the processors that require them.
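This chunk-to-host lookup can be sketched in a few lines. The metadata service is reduced to a plain dictionary, and the chunk and node names are invented for illustration.

```python
# Hypothetical storage metadata: chunk id -> host node that stores it.
chunk_locations = {
    "part-0": "node-a",
    "part-1": "node-b",
    "part-2": "node-a",
}

def schedule(chunks, metadata):
    """Locality-aware placement: run each task on the node holding its chunk,
    so computation moves to the data rather than data moving over the network."""
    plan = {}
    for chunk in chunks:
        plan.setdefault(metadata[chunk], []).append(chunk)
    return plan

print(schedule(["part-0", "part-1", "part-2"], chunk_locations))
# {'node-a': ['part-0', 'part-2'], 'node-b': ['part-1']}
```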
4) Virtualization
Cloud computing is built on the concept of virtualization. Clouds often run multiple user applications
simultaneously, and these applications must be able to use all available resources concurrently without any
interruption of service. Virtualization provides the necessary abstraction: underlying hardware resources
such as storage, networks and servers are unified into a resource pool that is made available to all user
applications. Some reasons why Clouds adopt virtualization:
1. Efficient utilization of resources as multiple applications run on the same server.
2. Dynamic Configurability for different kinds of applications having different kinds of needs, as the
resource requirements for various applications could differ significantly.
3. Quick migration of virtual environments from one system to another without service interruption, thus
providing business continuity.
4. Automation of resource provisioning, monitoring and maintenance, as well as caching and reuse of
resources, which improves overall responsiveness.
Grids rely on virtualization less than Clouds do, owing to certain policies and to the fact that each
individual organization maintains full control of its resources by not virtualizing them.
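The pooling idea behind virtualization can be sketched as a toy allocator: physical capacity is unified into one pool and carved into virtual machines for different tenants on demand, with freed capacity returning to the pool. Capacities and tenant names are invented for illustration.

```python
class ResourcePool:
    """Toy unified resource pool carved into per-tenant virtual machines."""
    def __init__(self, total_cpus, total_gb):
        self.cpus, self.gb = total_cpus, total_gb
        self.vms = {}   # tenant -> (cpus, gb) of its virtual machine

    def provision(self, tenant, cpus, gb):
        # Dynamically assign resources from the shared pool, if any remain.
        if cpus > self.cpus or gb > self.gb:
            return False
        self.cpus -= cpus
        self.gb -= gb
        self.vms[tenant] = (cpus, gb)
        return True

    def release(self, tenant):
        # Freed capacity returns to the pool for reuse by other tenants.
        cpus, gb = self.vms.pop(tenant)
        self.cpus += cpus
        self.gb += gb

vp = ResourcePool(total_cpus=16, total_gb=64)
print(vp.provision("tenant-a", 8, 32))   # True
print(vp.provision("tenant-b", 12, 16))  # False: only 8 CPUs left
vp.release("tenant-a")
print(vp.provision("tenant-b", 12, 16))  # True after the pool is replenished
```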
5) Monitoring
Monitoring resources is a challenge that virtualization brings to Cloud Computing. Grid monitoring is more
straightforward because Grids have a different trust model, in which users, via their identity, can access
and browse resources, and Grid resources are not as highly abstracted and virtualized as in Clouds. Even
so, many Grids such as TeraGrid enforce restrictions on the kinds of sensors or long-running services a
user can launch.
In a Cloud, different levels of service can be offered to an end user, who is exposed only to a predefined
API; the lower-level resources are opaque to the user (especially at the PaaS and SaaS levels). Users do
not have the liberty to deploy their own monitoring infrastructure, and the limited information returned to
them may not provide the level of detail needed to determine the status of a resource.
It can be argued that monitoring is less important in Clouds, as users interact with a more abstract and
potentially more sophisticated layer.
D. Application Model
1) Grid Computing
Grids host different types of applications, ranging from high performance computing (HPC)
to high throughput computing (HTC). For inter-process communication, these applications mostly use a
message passing interface. HPC applications consist of concurrent programs designed for multi-threaded as
well as multi-process models, using various parallel constructs such as threads, local processes and
distributed processes, with varying degrees of parallelism. An HPC application most easily executes
tightly coupled parallel jobs within a particular machine with low-latency interconnects, rather than across
a wide area network. Grids have successfully executed large numbers of loosely coupled applications,
which tend to be managed and executed through workflow systems or other sophisticated systems, as well
as complex applications. The
HTC applications are loosely coupled in nature. These loosely coupled applications are composed of both
independent and dependent tasks. Tasks can be small or large, compute-intensive or data-intensive, uniprocessor
or multiprocessor. The set of tasks can be static or dynamic, homogeneous or heterogeneous, loosely or tightly
coupled.
2) Cloud Computing
Cloud Computing supports a similar set of applications. The one thing that is difficult to achieve in Cloud
Computing (but that has seen much success in Grids) is HPC applications, which require fast, low-latency
network interconnects to scale efficiently to large numbers of processors.
Science gateways are a class of Grid applications that form front-ends to a variety of loosely-coupled
and tightly-coupled applications. A science gateway is a community-developed set of tools,
applications, and data integrated via a portal or a suite of applications, usually in a graphical user
interface, that is further customized to meet the needs of a specific community. Gateways enable entire
communities of users associated with a common discipline to use national resources through a common
interface that is configured for optimal use [21]. Scientific gateways are adopting Web 2.0 technologies,
although developments in Grids and Web 2.0 have so far proceeded with very little communication and
collaboration between them. While scientific gateways have until now emerged only in Grids, Clouds have
adopted gateways exclusively for end-user interaction. The browser and Web 2.0 technologies will play a
major role in how users interact with Grids and Clouds in the future.
E. Security Model
1) Grid Computing
Security has been designed and built into the fundamental Grid infrastructure. The Grid Security
Infrastructure (GSI) is the term for secure, secret communication between the different software components
in grid computing. The key security issues in Grids are: single sign-on, whereby Grid users log in only
once and gain permission for the required page or for multiple Grid sites, which also eases accounting and
auditing; delegation, whereby only authorized users can access a particular program or application to
obtain the required resources, and can in turn delegate to other programs; and privacy, integrity and
segregation, whereby a user's resources cannot be accessed or interfered with by any unauthorized user
during transfer, while still supporting resource allocation, reservation and sharing under both global and
local resource usage policies. The public-key-based GSI (Grid Security Infrastructure) protocol is used
for authentication, communication protection, and authorization. For advanced resource authorization
within and across communities, CAS (Community Authorization Service) was designed. Grid security is
more time consuming, but it adds an extra level of protection against unauthorized access.
2) Cloud Computing
The security model in Cloud Computing is simpler and less secure than the Grid Computing
security model. Cloud infrastructure typically depends on Web forms (over SSL) to create and
manage account information for end users and to let users reset or change their passwords, with new
passwords often delivered by email over unsafe, unencrypted channels.
Security is one of the largest concerns for the adoption of Cloud Computing. There are seven risks a
Cloud user should raise with the vendor before committing:
1. Privileged user access: sensitive data processed outside the enterprise needs the assurance that it
is accessible to, and propagated among, privileged users only;
2. Regulatory compliance: a Cloud provider should have external audits and security certifications and
the customer needs to verify this, and also if their infrastructure complies with some regulatory security
requirements;
3. Data location: it is important that the Cloud provider should commit to store and process data in
specific jurisdictions and to obey local privacy requirements on behalf of the customer, as a customer will not
know where his/her data will be stored;
4. Data segregation: one needs to ensure that one customer‟s data is fully segregated from another
customer‟s data;
5. Recovery: it is important that the Cloud provider should have efficient replication and recovery
mechanism to restore data if a disaster occurs;
6. Investigative support: Cloud services are especially difficult to investigate, if this is important for a
customer, then such support needs to be ensured with a predetermined commitment;
7. Long-term viability: if the Cloud provider is acquired by another company, the cloud users' data
should remain available and usable.
VI. GRID COMPUTING VS. CLOUD COMPUTING
Table I: GC VS.CC
Parameter Grid Computing Cloud Computing
When? The concept of grids was proposed in 1995.
The Open science grid (OSG) started in 1995
The EDG (European Data Grid) project
began in 2001.
In the late 1990`s Oracle and EMC
offered early private cloud solutions.
However the term cloud computing didn't
gain prominence until 2007.
What? Grids enable access to shared computing
power and storage capacity from your
desktop
Clouds enable access to leased
computing power and storage capacity
from your desktop
Why use them? You don`t need to buy or maintain
your own large computer centre
You can complete more work more
quickly and tackle more difficult problems.
You can share data with your
distributed team in a secure way.
You don`t need to buy or
maintain your own personal computer
center
You can quickly access extra
resources during peak work periods
Workflow
Management
Physical node
EC2 instance
Where are the
computing
resources?
In computing centres distributed across
different sites, countries and continents.
The cloud provider‟s private data centers
which are often centralized in a few
locations with excellent network
connections and cheap electrical power.
Who uses the
service?
Research collaborations, called "Virtual
Organizations", which bring together
researchers around the world working in the
same field.
Small to medium commercial businesses
or researchers with generic IT needs
Who pays for the
service?
Governments - providers and users are
usually publicly funded research
organisations, for example through National
Grid Initiatives.
The cloud provider pays for the
computing resources; the user pays to use
them
What are they
useful for?
Grids were designed to handle large sets of
limited duration jobs that produce or use
large quantities of data
Clouds best support long term services
and longer running jobs (E.g.
facebook.com)
Benefits Collaboration: grid offers a
federated platform for distributed and
collective work.
Ownership : resource providers
maintain ownership of the resources they
contribute to the grid
Transparency: the technologies
used are open source, encouraging trust and
transparency.
Resilience: grids are located at
multiple sites, reducing the risk in case of a
failure at one site that removes significant
resources from the infrastructure.
Flexibility: users can quickly
outsource peaks of activity without long
term commitment
Reliability: provider has
financial incentive to guarantee service
availability (Amazon, for example, can
provide user rebates if availability drops
below 99.9%)
Ease of use: relatively quick
and easy for non-expert users to get
started but setting up sophisticated virtual
machines to support complex
applications is more difficult.
8. A Comparative Study of Grid Computing and Cloud Computing
65
Drawbacks Reliability: grids rely on distributed
services maintained by distributed staff, often
resulting in inconsistency in reliability across
individual sites, although the service itself is
always available.
Complexity: grids are complicated
to build and use, and currently users require
some level of expertise.
Commercial: grids are generally
only available for not-for-profit work, and for
proof of concept in the commercial sphere
Generality: clouds do not offer
many of the specific high-level services
currently provided by grid technology.
Security: users with sensitive
data may be reluctant to entrust it to
external providers or to providers outside
their borders.
Opacity: the technologies used
to guarantee reliability and safety of
cloud operations are not made public.
Rigidity: the cloud is generally
located at a single site, which increases
risk of complete cloud failure.
Provider lock-in: there‟s a risk
of being locked in to services provided
by a very small group of suppliers.
Principle: Grid needs processing from you; Cloud does the processing for you.
Goal: Grid aims at pervasive, uniform, and reliable access to data, storage capacity and computation power; Cloud delivers services such as servers, storage and applications to an organization's computers and devices through the Internet.
Functioning: Grid computing separates everything into different parts; Cloud computing arranges everything into one place.
Transparency: Grid low; Cloud high.
Ownership: Grid multiple; Cloud single.
Multitasking: supported by both Grid and Cloud.
Types of service: Grid offers network, memory, CPU, bandwidth, device, storage, etc.; Cloud offers IaaS, PaaS, SaaS, and Everything as a Service.
Real-world examples: Grid has SETI, BOINC, Folding@home, GIMPS; Cloud has Amazon Web Services (AWS) and Google Apps.
Number of users: Grid few; Cloud unlimited.
Operating system: Grid runs on any OS; Cloud runs on high-performance (virtual) machines on which multiple OSs can run.
Resource management: Grid distributed (separated); Cloud both centralized and distributed.
Scheduling: Grid decentralized; Cloud both centralized and decentralized.
Infrastructure: Grid low level; Cloud high level.
Scalability: Grid normal; Cloud high.
Abstraction: Grid low; Cloud high.
Bandwidth: Grid low; Cloud high.
Future: the future of Grid is cloud computing; the future of Cloud is the next generation of the Internet.
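The Principle and Functioning rows above can be sketched in code: in a grid, the user partitions the work across nodes and aggregates the partial results, whereas a cloud service hides that partitioning behind a single call. The following Python sketch simulates both styles with a local thread pool standing in for remote nodes; the function names and the thread-pool stand-in are illustrative assumptions, not part of any real grid or cloud API.

```python
from concurrent.futures import ThreadPoolExecutor

# Grid style: "grid needs processing from you" -- the caller splits the
# job into parts, dispatches each part to a node, and merges the results.
def grid_sum(numbers, n_nodes=4):
    chunks = [numbers[i::n_nodes] for i in range(n_nodes)]   # user-side partitioning
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:    # stand-in for grid nodes
        partials = pool.map(sum, chunks)
    return sum(partials)                                     # user-side aggregation

# Cloud style: "cloud does the processing for you" -- the caller hands the
# whole job to one managed endpoint; partitioning happens out of sight.
def cloud_sum(numbers):
    return grid_sum(numbers)  # the provider, not the user, runs the grid-like machinery

print(grid_sum(range(100)), cloud_sum(range(100)))  # 4950 4950
```

Both calls return the same answer; the difference the table captures is only in who performs the partitioning and aggregation.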
VII. CONCLUSION
Cloud Computing is in great demand nowadays, especially in the IT industry. There is a wide range of Cloud service providers, such as Google, Amazon, Salesforce and Dropbox, which offer a variety of services, each with flexible pricing options and scalability.
Even though the Cloud provides a plethora of features and flexibility, security remains a concern in Cloud Computing and can be regarded as one of its drawbacks. A lot of work is still to be done on the security aspects of Cloud Computing, which would make it more accessible and appealing to customers through increased reliability and security.
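One common customer-side safeguard for the data-security concern raised above is to record an integrity fingerprint before handing data to a provider and to verify it on retrieval. The sketch below uses SHA-256 from Python's standard library; it is an illustrative example of client-side integrity checking, not a technique proposed in this paper.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Record this digest locally before uploading the object to a provider.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    # On download, recompute the digest and compare with the local record.
    return fingerprint(data) == expected

digest = fingerprint(b"quarterly report")
print(verify(b"quarterly report", digest))   # True
print(verify(b"tampered report", digest))    # False
```

A check like this detects corruption or tampering on the provider's side, though it does not by itself keep the data confidential.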
Grid Computing provides cost-effective utilization of resources: it helps solve problems with enhanced processing power by collecting resources from many computers and using them in aggregation.
Cloud Computing is strongly related to Grid Computing, and as a result Grids and Clouds share similarities, such as the sharing of resources (computational power, storage, applications, equipment, etc.), but also differ in various ways.
In this paper, we presented the basic ideas of both Clouds and Grids and the mechanisms by which they operate. We also reviewed the main similarities and differences between them, which in turn helps to clearly understand the distinction between these two analogous technologies.
REFERENCES
[1]. I. Foster, "What is the Grid? A Three Point Checklist", July 2002.
[2]. I. Foster, Y. Zhao, I. Raicu, S. Lu, "Cloud Computing and Grid Computing 360-Degree Compared".
[3]. S. M. Hashemi, A. K. Bardsiri, "Cloud Computing Vs. Grid Computing", ARPN Journal of Systems and Software, Vol. 2, No. 5, May 2012.
[4]. M. K. Vachhani, K. H. Atkotiya, "Similarities and Contrast between Grid Computing and Cloud Computing", Indian Journal of Applied Research, Vol. 3, Issue 3, March 2013.
[5]. B. Saxena, "Grid Computing: Virtualization of Distributed Computing and Data Resources", The Journal of Computer Science and Information Technology, Vol. 4, No. 1, 2006.
[6]. "What is cloud computing?"
[7]. http://searchcloudcomputing.techtarget.com/sDefinition/0,,sid201_gci1287881,00.html.
[8]. Microsoft, "Windows Azure", http://www.microsoft.com/windowsazure.
[9]. I. Foster, C. Kesselman, C. Lee, R. Lindell, K. Nahrstedt, A. Roy, "A Distributed Resource Management Architecture that Supports Advance Reservations and Co-Allocation", Intl. Workshop on Quality of Service, 1999.
[10]. The Globus Security Team, "Globus Toolkit Version 4 Grid Security Infrastructure: A Standards Perspective", Technical Report, Argonne National Laboratory, MCS, 2005.
[11]. I. Foster, C. Kesselman, "Globus: A Metacomputing Infrastructure Toolkit", Intl. J. Supercomputer Applications, 11(2):115-128, 1997.
[12]. B. Allcock, J. Bester, J. Bresnahan, A. L. Chervenak, I. Foster, C. Kesselman, S. Meder, V. Nefedova, D. Quesnal, S. Tuecke, "Data Management and Transfer in High Performance Computational Grid Environments", Parallel Computing Journal, Vol. 28 (5), May 2002, pp. 749-771.
[13]. J. M. Schopf, I. Raicu, L. Pearlman, N. Miller, C. Kesselman, I. Foster, M. D'Arcy, "Monitoring and Discovery in a Web Services Framework: Functionality and Performance of Globus Toolkit MDS4", Technical Report, Argonne National Laboratory, MCS Preprint #ANL/MCS-P1315-0106, 2006.
[14]. J. Frey, T. Tannenbaum, I. Foster, M. Livny, S. Tuecke, "Condor-G: A Computation Management Agent for Multi-Institutional Grids", Cluster Computing, 5(3):237-246, 2002.
[15]. R. Buyya, D. Abramson, J. Giddy, "Nimrod/G: An Architecture for a Resource Management and Scheduling System in a Global Computational Grid", IEEE Intl. Conf. on High Performance Computing in Asia-Pacific Region (HPC ASIA), 2000.
[16]. N. Karonis, B. Toonen, I. Foster, "MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface", Journal of Parallel and Distributed Computing, 2003.
[17]. I. Foster, C. Kesselman, L. Pearlman, S. Tuecke, V. Welch, "The Community Authorization Service: Status and Future", in Proc. of Computing in High Energy Physics (CHEP), 2003.
[18]. "What is Cloud Computing?", Whatis.com, http://searchsoa.techtarget.com/sDefinition/0,,sid26_gci1287881,00.html, 2008.
[19]. Amazon Elastic Compute Cloud (Amazon EC2), http://aws.amazon.com/ec2, 2008.
[20]. Amazon Simple Storage Service (Amazon S3), http://aws.amazon.com/s3, 2008.
[21]. Google App Engine, http://code.google.com/appengine/, 2008.
[22]. "XSEDE, Extreme Science and Engineering Discovery Environment", https://www.xsede.org/gateways-overview.