A Literature Survey on Resource Management Techniques, Issues and Challenges ... (TELKOMNIKA JOURNAL)
Cloud computing is large-scale distributed computing that provides on-demand services to clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and from anywhere over the network. As more companies shift their data to the cloud, and as more people become aware of the advantages of storing data there, the number of cloud computing infrastructures and the volume of data keep growing, which complicates management for cloud providers. We surveyed state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing. We then put forward the major issues in deploying cloud infrastructure so that poor service delivery in cloud computing can be avoided.
A Comparative Study: Taxonomy of High Performance Computing (HPC) (IJECEIAES)
Computer technologies have developed rapidly in both software and hardware. Software complexity is increasing with market demand as manual systems are automated, while the cost of hardware keeps falling. High Performance Computing (HPC) is a demanding technology and an attractive area of computing because of the huge volumes of data processed in many computing applications. The paper focuses on different applications of HPC and on its types, such as cluster computing, grid computing and cloud computing, and studies the classifications and applications of each. All of these are active areas of computer science research. The paper also presents a comparative study of grid, cloud and cluster computing in terms of benefits, drawbacks, key research areas, characteristics, issues and challenges.
Efficient architectural framework of cloud computing (Souvik Pal)
Cloud computing is a model that enables adaptive, convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible management effort or service provider interaction. From a business perspective, the commercial success of cloud computing and recent developments in grid computing have produced a platform that brings virtualization technology into the era of high performance computing. Clouds are an Internet-based concept and try to hide their complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
Grid computing, or network computing, was developed to make computing power available in the same way that electric power is available from the power grid: we just plug in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected to the grid. In this way we can use the power of a supercomputer without the huge cost, and CPU cycles that were previously wasted can be utilized. To perform grid computation on computers joined through the Internet, software that supports grid computation must be installed on each computer inside the VO (virtual organization). This software handles information queries, storage management, processing scheduling, authentication and data encryption to ensure information security.
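The cycle-scavenging idea described above can be sketched in a few lines of Python using only the standard library. This is purely illustrative (the node names and functions are invented for the example, and a real grid middleware would dispatch over the network with authentication and encryption rather than run threads locally):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: each "node" donates spare CPU cycles to the grid.
NODES = ["node-a", "node-b", "node-c"]

def run_on_node(node, work_unit):
    # A real grid middleware would authenticate the node and encrypt
    # the data in transit; here we just compute locally in a thread.
    return node, sum(i * i for i in range(work_unit))

def grid_compute(work_units):
    # Round-robin the work units over the available nodes.
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        futures = [pool.submit(run_on_node, NODES[i % len(NODES)], w)
                   for i, w in enumerate(work_units)]
        return [f.result() for f in futures]

results = grid_compute([10, 20, 30])  # three work units spread over three nodes
```

The point of the sketch is only the shape of the system: work units are farmed out to whichever machines are joined to the grid, and the results are gathered back at the submitting node.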
The Grid is the infrastructure for the advanced Web: for computing, collaboration and communication.
The goal is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources.
"Grid" computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, its high-performance orientation.
We presented the Grid concept in analogy with an electrical power grid, together with the Grid vision.
Implementing K-Out-Of-N Computing For Fault Tolerant Processing In Mobile and... (IJERA Editor)
Despite advances in hardware for hand-held mobile devices, resource-intensive applications (e.g., video and image storage and processing, or map-reduce-type workloads) still remain out of reach because they require large computation and storage capabilities. Recent research has attempted to address these issues by employing remote servers, such as clouds and peer mobile devices. For mobile devices deployed in dynamic networks (i.e., with frequent topology changes caused by node failure/unavailability and mobility, as in a mobile cloud), however, the challenges of reliability and energy efficiency remain largely unaddressed. To the best of our knowledge, we are the first to address these challenges in an integrated manner for both data storage and processing in the mobile cloud, an approach we call k-out-of-n computing. In our solution, mobile devices successfully retrieve or process data in the most energy-efficient way as long as k out of n remote servers are accessible. Through a real system implementation we demonstrate the feasibility of our approach. Extensive simulations demonstrate the fault tolerance and energy efficiency of our framework in larger-scale networks.
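To see why k-out-of-n redundancy helps reliability, consider the probability that a request succeeds when any k of n servers suffice. Assuming (as a simplification, not the paper's model) that each server is independently reachable with probability p, this is a binomial tail sum:

```python
from math import comb

def k_out_of_n_availability(n, k, p):
    """Probability that at least k of n servers are reachable,
    assuming each is independently up with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With 5 replicas, any 3 of which suffice, and 90%-available nodes:
a = k_out_of_n_availability(5, 3, 0.9)   # roughly 0.9914
```

So spreading data over 5 nodes and needing only 3 turns 90%-available individual nodes into better than 99% availability for the request as a whole; this is the quantitative intuition behind the k-out-of-n design.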
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Privacy preserving public auditing for secured cloud storage (dbpublications)
As cloud computing technology has developed over the last decade, outsourcing data to cloud storage services has become an attractive trend, sparing users the effort of heavy data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, security concerns arise over how to achieve data deduplication in the cloud while also achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication of cloud data. Specifically, aiming to achieve both data integrity and deduplication in the cloud, we propose two secure systems, SecCloud and SecCloud+. SecCloud introduces an auditing entity that maintains a MapReduce cloud and helps clients generate data tags before uploading, as well as audit the integrity of data already stored in the cloud. Compared with previous work, the computation performed by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is motivated by the fact that customers always want to encrypt their data before uploading, and it enables integrity auditing and secure deduplication of encrypted data.
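The tag-then-audit workflow can be illustrated with a deliberately simplified sketch. Note the simplification: SecCloud's actual tags are homomorphic authenticators that support probabilistic auditing without retrieving the data; here we substitute plain SHA-256 digests just to show the protocol shape (tag generation before upload, then spot-checking a challenged subset):

```python
import hashlib

def make_tags(blocks):
    # Client computes a tag per block before uploading.
    # (Stand-in: SHA-256 instead of homomorphic authenticators.)
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(stored_blocks, tags, challenge):
    # Auditor spot-checks a random subset of block indices.
    return all(hashlib.sha256(stored_blocks[i]).hexdigest() == tags[i]
               for i in challenge)

blocks = [b"block-0", b"block-1", b"block-2"]
tags = make_tags(blocks)

ok = audit(blocks, tags, challenge=[0, 2])
bad = audit([b"tampered", b"block-1", b"block-2"], tags, challenge=[0])
```

Any tampering with a challenged block changes its digest and fails the audit; the scheme's efficiency comes from challenging only a sample of blocks rather than the whole file.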
Abstract:
This paper is based on a study of grid computing and cloud computing technology. The two technologies are related through geographically defined network standards. The main aim of this paper is an in-depth look at the latest technologies and trends in the field of networking.
Keywords: Technology, Cloud Computing, Grid Computing
Cloud computing is the Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing is a hot topic all over the world nowadays; through it, customers can access information and computing power via a web browser. As the adoption and deployment of cloud computing increase, it is critical to evaluate the performance of cloud environments. Modeling and simulation have become useful and powerful tools in the cloud computing research community for dealing with these issues, and cloud simulators are needed for cloud system testing to reduce complexity and separate quality concerns. Cloud computing means saving and accessing data over the Internet instead of on local storage. In this paper, we provide a short review of the types, models and architecture of the cloud environment.
A Study of A Method To Provide Minimized Bandwidth Consumption Using Regenera... (IJERA Editor)
Cloud storage systems protect data from corruption by storing redundant data to tolerate storage failures; lost data must be repaired when a storage node fails. Regenerating codes provide fault tolerance by striping data across multiple servers while using less repair traffic than traditional erasure codes during failure recovery. Previous research implemented a practical Data Integrity Protection (DIP) scheme for regenerating-code-based cloud storage: Functional Minimum-Storage Regenerating (FMSR) codes were used to construct FMSR-DIP codes, which allow clients to remotely verify the integrity of random subsets of long-term archival data in a multi-server setting. The problem addressed here is to optimize bandwidth consumption when repairing multiple failures: cooperative repair of multiple failures can further reduce bandwidth consumption when several failures are repaired together.
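The idea of repairing a lost chunk from surviving servers can be shown with the simplest possible erasure code: a single XOR parity chunk (a (k+1, k) code, far weaker than the FMSR codes studied in the paper, but enough to illustrate striping and repair):

```python
def xor_parity(chunks):
    """XOR equal-length byte chunks together.
    Used both to build the parity chunk and to rebuild a lost chunk."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

data = [b"AAAA", b"BBBB", b"CCCC"]   # k = 3 data chunks, one per server
parity = xor_parity(data)            # stored on a 4th server

# Repair: if the server holding data[1] fails, its chunk is rebuilt
# by XOR-ing the surviving data chunks with the parity chunk.
rebuilt = xor_parity([data[0], data[2], parity])
```

This code tolerates only one failure and downloads all surviving chunks to repair it; regenerating codes exist precisely to cut that repair traffic, and cooperative repair extends the saving to multi-failure scenarios.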
A Virtualization Model for Cloud Computing (Souvik Pal)
Cloud computing is an emerging field in the IT industry as well as in research. Its advancement came about through the fast-growing use of the Internet. Cloud computing is basically on-demand network access to a collection of physical resources that can be provisioned according to the needs of the cloud user through interaction with the cloud service provider. From a business perspective, the commercial success of cloud computing and recent developments in grid computing have produced a platform that brings virtualization technology into the era of high performance computing. Virtualization technology is widely applied in modern data centers for cloud computing: virtualization uses computer resources to imitate other computer resources or whole computers. This paper provides a virtualization model for cloud computing that may lead to faster access and better performance, and that may help to combine self-service capabilities with ready-to-use facilities for computing resources.
CONTAINERIZED SERVICES ORCHESTRATION FOR EDGE COMPUTING IN SOFTWARE-DEFINED W... (IJCNC Journal)
As SD-WAN disrupts legacy WAN technologies and becomes the preferred WAN technology adopted by corporations, and Kubernetes becomes the de facto container orchestration tool, the opportunities for deploying edge-computing containerized applications over SD-WAN are vast. Service orchestration in SD-WAN has not received enough attention, resulting in a lack of research focused on service discovery in these scenarios. In this article, an in-house service discovery solution that works alongside Kubernetes' master node is developed, allowing improved traffic handling and a better user experience when running micro-services. The solution was conceived following a design science research approach. Our research includes the implementation of a proof-of-concept SD-WAN topology alongside a Kubernetes cluster, which allows us to deploy custom services and delimit the necessary characteristics of our in-house solution. The implementation's performance is also tested, based on the time required to update the discovery solution in response to service updates. Finally, conclusions and modifications are pointed out based on the results, and possible enhancements are discussed.
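At its core, a service discovery component maintains a mapping from service names to current endpoints and refreshes it when the orchestrator reports changes. The toy registry below is purely illustrative (class and method names are invented; the paper's solution watches Kubernetes' master node for endpoint changes rather than taking manual updates):

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry: name -> (endpoints, last update time)."""

    def __init__(self):
        self._endpoints = {}

    def update(self, service, endpoints):
        # Called whenever a service's pod endpoints change.
        self._endpoints[service] = (list(endpoints), time.time())

    def resolve(self, service):
        # Clients ask for the current endpoint list by service name.
        endpoints, _updated_at = self._endpoints[service]
        return endpoints

reg = ServiceRegistry()
reg.update("web", ["10.0.0.5:8080", "10.0.0.6:8080"])
addrs = reg.resolve("web")
```

The performance question the article measures maps onto the `update` path of such a component: how quickly a service change propagates into the registry that clients resolve against.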
Cloud computing: Review over various scheduling algorithms (IJEEE)
Cloud computing has taken an important position in research as well as in government organisations. It uses virtual network technology to provide computing resources to end users and customers. Because of the complex computing environment, the use of elaborate logic and task-scheduling algorithms is increasing, which makes operating a cloud network costly. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable to the cloud computing environment. In this paper, we review recently proposed research on energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
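As a point of reference for the schedulers surveyed, here is a minimal sketch of one classic greedy heuristic, longest-processing-time-first (LPT): sort jobs by runtime and always place the next job on the least-loaded machine. It is a baseline, not any of the energy-aware algorithms reviewed above:

```python
def greedy_schedule(jobs, n_machines):
    """LPT list scheduling: assign each job (given by its runtime) to the
    currently least-loaded machine, longest jobs first.
    Returns (placement, makespan)."""
    loads = [0.0] * n_machines
    placement = []
    for runtime in sorted(jobs, reverse=True):
        m = min(range(n_machines), key=lambda i: loads[i])
        loads[m] += runtime
        placement.append((runtime, m))
    return placement, max(loads)   # makespan = latest finish time

plan, makespan = greedy_schedule([4, 3, 3, 2, 2, 2], n_machines=2)
```

Here the 16 units of work split evenly across the two machines, giving a makespan of 8. Cloud schedulers extend this basic shape with extra objectives such as energy consumption, cost and deadlines.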
1. Define Grid computing.
A Grid is a collection of connected computing resources that appears to
users as a single large system, providing a single point of access to distributed resources.
Grid Computing can be defined as follows:
Grid Computing is a model of distributed computing that uses
geographically and administratively disparate resources.
Grid Computing can be defined as applying resources from many
computers in a network to a single problem, usually one that requires a
large number of processing cycles or access to large amounts of data.
Grid Computing is the combination of computer resources from multiple
administrative domains applied to a common task, usually to a scientific,
technical or business problem that requires a great number of computer
processing cycles or the need to process large amounts of data.
Grid Computing is distributed, large-scale cluster computing, as well as
a form of network-distributed parallel processing.
There are many other definitions of Grid Computing:
Plaszczak/Wellner defines Grid technology as, "the technology that
enables resource virtualization, on-demand provisioning, and service
(resource) sharing between organizations."
IBM defines Grid Computing as "the ability, using a set of open
standards and protocols, to gain access to applications and data,
processing power, storage capacity and a vast array of other computing
resources over the Internet. A Grid is a type of parallel and distributed
system that enables the sharing, selection, and aggregation of
resources distributed across multiple administrative domains based on
their (resources') availability, capacity, performance, cost and users'
quality-of-service requirements."
Buyya/Venugopal define Grid as "a type of parallel and distributed
system that enables the sharing, selection, and aggregation of
geographically distributed autonomous resources dynamically at runtime
depending on their availability, capability, performance, cost, and users'
quality-of-service requirements".
CERN, one of the largest users of Grid technology, defines the Grid as,
“a service for sharing computer power and data storage capacity over
the Internet.”
In the Grid world, computing resources mean more than just data or
processing power. It might mean disk space, memory, tape backup, or even
software licences. Grid Computing can be a cost effective way to resolve IT
issues in the areas of data, computing and collaboration. One of the
keywords that sum up the motivation behind evolution of the Grid systems is
virtualization, which refers to seamless integration of geographically
distributed heterogeneous systems.
Grid technology allows organizations to use numerous computers to solve
problems by sharing computing resources. Grid Computing is a distributed
computing technology and uses geographically distributed computers
collectively to achieve higher performance computing and resource sharing.
Organizations with both large and small networks have been adopting Grid
techniques in order to reduce execution time and enable resource sharing.
This technology has been applied to computationally intensive scientific,
mathematical, and academic problems. It is used in commercial enterprises
for such diverse applications as drug discovery, economic forecasting,
seismic analysis, and back-office data processing in support of e-commerce
and Web services.
Grid Computing uses middleware to coordinate disparate IT resources
across a network, allowing them to function as a virtual whole. The goal of a
computing Grid is to provide users with access to the resources they need,
when they need them. Broadband networks play a fundamental enabling
role in making Grid Computing possible and this is the motivation for looking
at this technology from the perspective of communication.
One of the main strategies of Grid Computing is using software to divide and
apportion pieces of a program among several computers, sometimes up to
many thousands.
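The divide-and-apportion strategy above can be sketched in a few lines. This is a minimal, single-machine illustration (not a real grid scheduler): a large job is split into pieces, the pieces are handed to a pool of workers standing in for grid nodes, and the partial results are combined. The function names and the use of a thread pool are assumptions made for this example.

```python
# A minimal sketch of divide-and-apportion: split one large job into
# pieces and farm them out to several workers, then combine the results.
from concurrent.futures import ThreadPoolExecutor

def process_piece(piece):
    """Hypothetical per-node work: here, just sum a slice of the data."""
    return sum(piece)

def run_on_grid(data, n_workers=4):
    # Divide the problem into roughly equal pieces, one per worker.
    size = max(1, len(data) // n_workers)
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    # Apportion the pieces among the workers and combine partial results.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(process_piece, pieces))
```

For example, `run_on_grid(list(range(100)))` splits the range into four slices of 25 elements and returns their combined sum, 4950; on a real grid each slice would travel to a different machine.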
Grid Computing provides highly scalable, highly secure, and extremely high-
performance mechanisms for accessing remote computing resources in a
seamless manner. Thus it is possible for us to share computing resources,
on an unprecedented scale, among a virtually unlimited number of
geographically distributed groups.
The size of a Grid may vary from small, confined to a network of computer
workstations within a corporation, to large, public collaborations across
many companies and networks.
Since Grid Computing is a form of distributed computing, the use of
disparate resources such as compute nodes, storage, applications and data,
often spread across different physical locations and administrative domains,
is optimized through virtualization and collective management.
2. What is the definition for Grid concept given by Plaszczak/Wellner?
Plaszczak/Wellner defines Grid technology as, "the technology that
enables resource virtualization, on-demand provisioning, and service
(resource) sharing between organizations."
3. What are the core functional computational requirements for grid applications?
The core functional computational requirements for grid applications are:
The ability to allow for independent management of computing
resources.
The ability to provide mechanisms that can intelligently and
transparently select computing resources capable of running a user's
job.
The understanding of the current and predicted loads on grid resources,
resource availability, dynamic resource configuration, and provisioning.
Failure detection and failover mechanisms.
The provision of appropriate security mechanisms for secure resource
management, access, and integrity.
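The requirement to "intelligently and transparently select computing resources capable of running a user's job" can be sketched as a simple matchmaker. This is an illustrative toy, not a real grid broker; the record fields (`cpus`, `mem_gb`, `load`) are assumptions chosen for the example.

```python
# Illustrative sketch of transparent resource selection: pick a resource
# capable of running the job, preferring the least-loaded candidate.

def select_resource(resources, job):
    """Return the least-loaded resource that satisfies the job's needs,
    or None if no resource on the grid is capable (failure detection)."""
    capable = [r for r in resources
               if r["cpus"] >= job["cpus"] and r["mem_gb"] >= job["mem_gb"]]
    if not capable:
        return None
    return min(capable, key=lambda r: r["load"])
```

A real broker would also weigh predicted load, cost, and data locality, but the shape of the decision, filter by capability, then rank by current state, is the same.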
4. What are the characteristics that users/applications in Grid Computing environments must
be able to perform?
Users/applications typically found in Grid Computing environments must be
able to perform the following:
The clear and unambiguous identification of the problem(s) that need to
be solved.
The identification and mapping of the resources required to solve the
problem.
The ability to sustain the required levels of QoS, while adhering to the
anticipated and necessary SLAs.
The capability to collect feedback regarding resource status, including
updates for the environment's respective applications.
Grid Computing activities were initially focused in the areas of computing
power, data access, and storage resources. The definition of Grid
Computing resource sharing has since changed based upon experiences,
with more focus now being applied to a sophisticated form of coordinated
resource sharing distributed throughout the participants in a virtual
organization. This application concept of coordinated resource sharing
includes any resources available within a virtual organization, including
computing power, data, hardware, software and applications, networking
services, and any other forms of computing resource attainment. This
concept of coordinated resource sharing is shown in Figure
The following discussion introduces a number of requirements needed for
such Grid Computing architectures utilized by virtual organizations. We shall
classify these architecture requirements into three categories. These
resource categories must be capable of providing facilities for the following
scenarios:
The need for dynamic discovery of computing resources, based on their
capabilities and functions.
The immediate allocation and provisioning of these resources, based on
their availability and the user demands or requirements.
The management of these resources to meet the required service level
agreements (SLAs).
The provisioning of multiple autonomic features for the resources, such
as self-diagnosis, self-healing, self-configuring, and self-management.
The provisioning of secure access methods to the resources, and
bindings with the local security mechanisms based upon the autonomic
control policies.
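The SLA-management scenario in the list above can be made concrete with a small sketch: measured service levels are compared against the agreed targets, and violations are flagged so the infrastructure (or an autonomic self-management loop) can react. The SLA term names used here are illustrative assumptions, not part of any standard.

```python
# Sketch of SLA-driven management: compare measured service levels
# against the agreed targets and report which terms are violated.

def check_sla(sla, measured):
    """Return the list of SLA terms the measured values violate."""
    violations = []
    if measured["availability"] < sla["min_availability"]:
        violations.append("availability")
    if measured["response_ms"] > sla["max_response_ms"]:
        violations.append("response_ms")
    return violations
```

An empty list means the resource is meeting its agreement; a non-empty list is the trigger for re-provisioning or self-healing actions.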
5. List and explain the three main issues that characterize computational grids.
The steps necessary to realize a computational grid include:
• The integration of individual software and hardware components into a
combined networked resource.
• The implementation of middleware to provide a transparent view of the
resources available.
• The development of tools that allow management and control of grid
applications and infrastructure.
• The development and optimization of distributed applications to take
advantage of the resources.
There are three main issues that characterize computational grids:
1) Heterogeneity:
A grid involves a multiplicity of resources that are
heterogeneous in nature, and might span numerous administrative
domains across wide geographical distances.
2) Scalability: A grid might grow from a few resources to millions. This
raises the problem of potential performance degradation as a Grid's size
increases. Consequently, applications that require a large number of
geographically distributed resources must be designed to be extremely
latency tolerant.
3) Dynamicity or Adaptability:
In a grid, a resource failure is the rule, not
the exception. In fact, with so many resources in a Grid, the probability
of some resource failing is naturally high. The resource managers or
applications must tailor their behaviour dynamically so as to extract the
maximum performance from the available resources and services.
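Because resource failure is the rule rather than the exception, a grid client must adapt at runtime. A minimal sketch of that behaviour, under the assumption of a generic `submit(resource, task)` call standing in for a real job-submission interface, is a failover loop that tries alternative resources until one succeeds:

```python
# Sketch of the adaptability requirement: retry a task on alternative
# resources until one succeeds, treating failure as expected behaviour.

def run_with_failover(task, resources, submit):
    """Try the task on each resource in turn; return the first success.

    `submit` is a hypothetical stand-in for a real job-submission call;
    it is expected to raise RuntimeError when a resource fails."""
    last_error = None
    for resource in resources:
        try:
            return submit(resource, task)
        except RuntimeError as err:   # failure is the rule, not the exception
            last_error = err          # adapt: move on to the next resource
    raise RuntimeError(f"all resources failed: {last_error}")
```

Real resource managers add timeouts, health probes, and checkpoint restart, but the core dynamic behaviour is this loop: detect the failure, then re-dispatch elsewhere.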
6. What are the components that are necessary to form a grid?
Figure 3.1 shows the components that are necessary to form a grid and
they are briefly discussed below:
Grid Fabric: It comprises all the resources geographically distributed
across the globe and accessible from anywhere on the Internet. They
could be computers (such as PCs or Workstations running operating
systems such as UNIX or NT), clusters (running cluster operating
systems or resource management systems such as LSF, Condor or
PBS), storage devices, databases, and special scientific instruments
such as a radio telescope.
Grid Middleware: It offers core services such as remote process
management, co-allocation of resources, storage access, information
(registry), security, authentication, and Quality of Service (QoS) such as
resource reservation and trading.
• Grid Development Environments and Tools: These offer high-level
services that allow programmers to develop applications and brokers
that act as user agents that can manage or schedule computations
across global resources.
• Grid Applications and Portals: They are developed using grid-enabled
languages such as HPC++, and message-passing systems such as
Message Passing Interface (MPI). Applications, such as parameter
simulations and grand-challenge problems often require considerable
computational power, require access to remote data sets, and may need
to interact with scientific instruments. Grid portals offer web-enabled
application services i.e., users can submit and collect results for their
jobs on remote resources through a web interface.
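The message-passing style mentioned above (MPI) follows a scatter/compute/reduce pattern. The toy below illustrates that pattern in a single process, purely to show the data flow; a real grid application would use an MPI library such as mpi4py, and the function names here are illustrative, not MPI's actual API.

```python
# A toy, single-process illustration of the MPI-style scatter/reduce
# pattern grid applications are built on (real code would use MPI).

def scatter(data, n_ranks):
    """Root splits the data into one chunk per rank."""
    size = (len(data) + n_ranks - 1) // n_ranks
    return [data[i * size:(i + 1) * size] for i in range(n_ranks)]

def reduce_sum(partials):
    """Root combines the partial results sent back by each rank."""
    return sum(partials)

def parallel_sum(data, n_ranks=4):
    chunks = scatter(data, n_ranks)        # root -> ranks
    partials = [sum(c) for c in chunks]    # each rank computes locally
    return reduce_sum(partials)            # ranks -> root
```

In actual MPI the list comprehension in `parallel_sum` would run concurrently on separate machines, with the scatter and reduce performed as network message exchanges.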
7. What are the areas that a Grid Computing infrastructure component must address in many
stages of the implementation?
The development of grid infrastructure, both hardware and software has
become the focus of a large community of researchers and developers in
both academics and industry. The grid infrastructure is a complex
combination of a number of capabilities and resources identified for the
specific problem and environment being addressed. In the early
development stages of grid applications, middleware and solutions
approaches were developed to solve fairly narrow and limited Grid
Computing problems, such as middleware to deal with numerical analysis,
customized data access grids, and other narrow problems.
Today, with the emergence and convergence of grid service-oriented
technologies, including interoperable XML-based solutions that are becoming
ever more present, and with industry providers offering a number of
reusable grid middleware solutions facilitating the following requirement
areas, it is becoming simpler to quickly deploy valuable solutions. Figure
shows this topology of middleware topics.
A Grid Computing infrastructure component must address several
potentially complicated areas in many stages of the implementation. These
areas are:
Security
Resource management
Information services
Data management.
Security: The heterogeneous nature of resources and their differing
security policies complicate the security schemes of a Grid Computing
environment. These computing resources are hosted in
differing security domains and heterogeneous platforms. Our middleware
solutions must address local security integration, secure identity mapping,
secure access/authentication, secure federation, and trust management.
The other security requirements are often centered on the topics of data
integrity, confidentiality, and information privacy. The Grid Computing data
exchange must be protected using secure communication channels,
including Secure Socket Layer (SSL)/Transport Layer Security (TLS) and
oftentimes in combination with secure message exchange mechanisms
such as WS-Security. The most notable security infrastructure used for
securing grid is the Grid Security Infrastructure (GSI). The GSI provides
capabilities for single sign-on, heterogeneous platform integration and
secure resource access/authentication. The latest and most notable security
solution is the use of WS-Security standards. This mechanism provides
message-level, end-to-end security needed for complex and interoperable
secure solutions.
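The secure-channel requirement (SSL/TLS) above can be sketched with Python's standard `ssl` module. This is a minimal client-side example, not GSI or WS-Security themselves: it creates a TLS context that verifies the peer's certificate and identity before any grid data is exchanged.

```python
# Sketch of the secure-channel requirement: build a TLS client context
# with certificate and hostname verification enabled, as middleware
# would before exchanging data with a remote grid resource.
import ssl

def make_secure_context():
    """Return a TLS context that verifies the server's certificate."""
    ctx = ssl.create_default_context()            # loads system CA bundle
    ctx.check_hostname = True                     # reject identity mismatches
    ctx.verify_mode = ssl.CERT_REQUIRED           # require a valid certificate
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return ctx
```

GSI layers X.509 proxy certificates and single sign-on on top of exactly this kind of mutually authenticated TLS channel.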
Resource Management: The tremendously large number and the
heterogeneous potential of Grid Computing resources cause the resource
management challenge to be a significant effort topic in Grid Computing
environments. These resource management scenarios often include
resource discovery, resource inventories, fault isolation, resource
provisioning, resource monitoring, a variety of autonomic capabilities, and
service-level management activities. Selection of the correct resource from
the grid resource pool is the most interesting aspect of the resource
management area.
Look at the example of a job management system. Here, the resource
management feature identifies the job, allocates the suitable resources for
the execution of the job, partitions the job if necessary, and provides
feedback to the user on job status. This job scheduling process includes
moving the data needed for various computations to the appropriate Grid
Computing resources, and mechanisms for dispatching the job results. It is
important to understand multiple service providers can host Grid Computing
resources across many domains, such as security, management,
networking services, and application functionalities. Also, note that the
operational and application resources may also be hosted on different
hardware and software platforms. In addition to this, Grid Computing
middleware must provide efficient monitoring of resources to collect the
required metrics on utilization, availability, and other information.
One causal impact of this fact concerns security and the ability of a grid
service provider to reach out and probe into other service providers'
domains in order to obtain and reason about key operational information.
This oftentimes becomes complicated across several dimensions, and has to
be resolved by a meeting of the minds between all service providers, such
as agreeing on sending necessary information to all providers when and
where it is required.
Another valuable and very critical feature across the Grid Computing
infrastructure is found in the area of provisioning; that is, to provide
autonomic capabilities for self-management, self-diagnosis, self-healing,
and self-configuring. The most notable resource management middleware
solution is the Grid Resource Allocation Manager (GRAM). This resource
provides a robust job management service for users, which includes job
allocation, status management, data distribution, and start/stop jobs.
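The GRAM-style job management service described above (allocation, status management, start/stop) can be sketched as a small state machine. To be clear, this is NOT the real GRAM API; the class and method names are assumptions made purely to illustrate the service's responsibilities.

```python
# A minimal sketch of GRAM-style job management: allocate a job on a
# resource, track its status, and start/stop it on the user's behalf.

class JobManager:
    def __init__(self):
        self.jobs = {}
        self.next_id = 0

    def allocate(self, resource):
        """Register a job on a resource; it begins in the PENDING state."""
        self.next_id += 1
        self.jobs[self.next_id] = {"resource": resource, "status": "PENDING"}
        return self.next_id

    def start(self, job_id):
        self.jobs[job_id]["status"] = "ACTIVE"

    def stop(self, job_id):
        self.jobs[job_id]["status"] = "DONE"

    def status(self, job_id):
        """Status-management feedback reported back to the user."""
        return self.jobs[job_id]["status"]
```

The real service additionally stages data in and out and partitions jobs, but its externally visible lifecycle is essentially this PENDING/ACTIVE/DONE progression.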
Information Services: Information services are fundamentally concentrated
on providing valuable information respective to the Grid Computing
infrastructure resources. These services leverage and entirely depend on
the providers of information such as resource availability, capacity, and
utilization, just to name a few. These information services enable service
providers to most efficiently allocate resources for the variety of very specific
tasks related to the Grid Computing infrastructure solution.
In addition, developers and providers can also construct grid solutions to
reflect portals, and utilize meta-schedulers and meta-resource managers.
These metrics are helpful in service-level agreement (SLA) management in conjunction
with the resource policies. This information is resource specific and is
provided based on the schema pertaining to that resource. We may need
higher level indexing services or data aggregators and transformers to
convert these resource-specific data into valuable information sources for
the end user.
For example, a resource may provide operating system information, while
yet another resource might provide information on hardware configuration,
and we can then group this resource information, reason with it, and then
suggest a "best" price combination for running the operating system on
certain hardware. This combinatorial approach to reasoning is very
straightforward in a Grid Computing infrastructure, simply due to the fact
that all key resources are shared, as is the information correlated respective
to the resources.
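The aggregation example above, one source publishing operating-system facts, another hardware and price facts, can be sketched as a tiny indexing service. The record keys (`os`, `price`) and function names are illustrative assumptions for this example only.

```python
# Sketch of an information-service aggregator: merge per-resource facts
# published by different providers into one view, then reason over the
# combined records to pick the cheapest match.

def aggregate(*sources):
    """Merge resource records from several providers, keyed by name."""
    merged = {}
    for source in sources:
        for name, info in source.items():
            merged.setdefault(name, {}).update(info)
    return merged

def best_price(merged, os_name):
    """Cheapest resource that runs the requested operating system."""
    matches = [(info["price"], name) for name, info in merged.items()
               if info.get("os") == os_name and "price" in info]
    return min(matches)[1] if matches else None
```

Neither provider alone can answer the "best price for this OS" question; only the aggregated view can, which is the point of higher-level indexing services.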
Data Management: Data forms the single most important asset in a Grid
Computing system. Data may be input into a resource, and results flow
from the resource on the execution of a specific task. If the
infrastructure is not designed properly, data movement in a
geographically distributed system can quickly cause scalability problems;
the data must therefore be kept near the computation where it is used.
Also, this data movement in any Grid Computing environment requires
absolutely secure data transfers, both to and from the respective
resources. The current advances surrounding data
management are tightly focusing on virtualized data storage mechanisms,
such as storage area networks (SAN), network file systems, dedicated
storage servers, and virtual databases. These virtualization mechanisms in
data storage solutions and common access mechanisms (e.g., relational
SQLs, Web services, etc.) help developers and providers to design data
management concepts into the Grid Computing infrastructure with much
more flexibility than traditional approaches.
Some of the considerations developers and providers must factor into
decisions are related to selecting the most appropriate data management
mechanism for Grid Computing infrastructures. This includes the size of the
data repositories, resource geographical distribution, security requirements,
schemes for replication and caching facilities, and the underlying
technologies utilized for storage and data access.
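The replication-and-caching consideration above can be sketched as a toy replica catalog: the catalog knows which sites hold a copy of a file and returns the replica closest to the compute site, so bulk data movement across the grid is minimized. The class, method names, and the pluggable `distance` function are illustrative assumptions.

```python
# Sketch of the "keep data near the computation" rule: a toy replica
# catalog that selects the replica closest to the requesting compute site.

class ReplicaCatalog:
    def __init__(self):
        self.replicas = {}   # file name -> list of sites holding a copy

    def register(self, filename, site):
        """Record that `site` holds a replica of `filename`."""
        self.replicas.setdefault(filename, []).append(site)

    def nearest(self, filename, compute_site, distance):
        """Pick the registered replica with minimal distance to the job,
        where `distance(a, b)` is a caller-supplied cost metric."""
        sites = self.replicas.get(filename, [])
        if not sites:
            return None
        return min(sites, key=lambda s: distance(compute_site, s))
```

Production data grids use measured bandwidth or transfer cost as the metric, but the scheduling decision, move the job to the data or fetch the cheapest replica, rests on this lookup.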
The most important activity noted today is the Open Grid Service
Architecture (OGSA) and its surrounding standard initiatives. The OGSA
provides a common interface solution to grid services, and all the
information has been conveniently encoded using XML as the standard.
This provides a common approach to information services and resource
management for Grid Computing infrastructures.
8. What is the grid problem? Briefly explain.
Grid computing has evolved into an important discipline within the computer
industry by differentiating itself from distributed computing through an
increased focus on resource sharing, co-ordination, manageability, and high
performance. The focus on resource sharing is called the grid problem,
which can be defined as the set of problems associated with resource
sharing among a set of individuals or groups. This sharing of resources,
ranging from simple file transfers to complex and collaborative problem
solving, is accomplished under controlled and well-defined conditions and
policies. In this context, the critical problems are resource discovery, event
correlation, authentication, authorization, and access mechanisms.
Resource sharing is further complicated when a grid is introduced as a
solution for utility computing, where commercial applications and resources
become available as shareable and on-demand resources. This concept of
commercial on-demand utility grid services adds new, more difficult
challenges to the already complicated grid problem including service level
features, accounting, usage metering, flexible pricing, federated security,
scalability, and open-ended integration.
9. Explain the need for grid protocols.
Our Grid architecture establishes requirements for the protocols and APIs
that enable sharing of resources, services, and code. It does not otherwise
constrain the technologies that might be used to implement these protocols
and APIs. In fact, it is quite feasible to define multiple instantiations of key
Grid architecture elements. For example, we can construct both Kerberos
and PKI-based protocols at the Connectivity layer – and access these
security mechanisms via the same API, thanks to GSS-API. However, Grids
constructed with these different protocols are not interoperable and cannot
share essential services – at least not without gateways.
For this reason, the long-term success of Grid computing requires that we
select and achieve widespread deployment of one set of protocols at the
connectivity and resource layers and, to a lesser extent, at the Collective
layer. Much as the core Internet protocols enable different computer
networks to interoperate and exchange information, these Intergrid
protocols enable different organizations to interoperate and exchange or
share resources. Resources that speak these protocols can be said to be
"on the Grid." Standard APIs are also highly useful if Grid code is to be
shared.
10. What are the characteristics of Services?
Services generally have the following characteristics:
• They may be individually useful, or they can be integrated – composed –
to provide higher-level services.
• They communicate with their clients by exchanging messages.
• They can participate in a workflow, where the order in which messages
are sent and received affects the outcome of the operations performed
by a service.
• They may be completely self-contained, or they may depend on the
availability of other services, or on the existence of a resource such as a
database.
• They advertise details such as their capabilities, interfaces, policies, and
supported communications protocols. Implementation details such as
programming language and hosting platform are of no concern to
clients, and are not revealed.
Figure gives the service oriented architecture model that illustrates a
simple service interaction cycle, which begins with a service advertising
itself through a well-known registry service (1). A potential client, which may
or may not be another service, queries the registry (2) to search for a
service that meets its needs. The registry returns a (possibly empty) list of
suitable services, and the client selects one and passes a request message
to it, using any mutually recognized protocol (3). In this example, the service
responds (4) either with the result of the requested operation or with a fault
message.
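The four-step cycle just described, advertise (1), query (2), request (3), respond or fault (4), can be sketched directly. Everything here is illustrative: the class, the record fields, and the handler convention are assumptions for the example, not any particular registry standard.

```python
# Sketch of the service interaction cycle: a service advertises itself
# in a registry (1), a client queries the registry (2), sends a request
# to a selected service (3), and receives a result or a fault (4).

class Registry:
    def __init__(self):
        self.entries = []

    def advertise(self, name, capability, handler):
        """Step 1: a service publishes its name and capability."""
        self.entries.append({"name": name, "capability": capability,
                             "handler": handler})

    def query(self, capability):
        """Step 2: return matching services (possibly an empty list)."""
        return [e for e in self.entries if e["capability"] == capability]

def invoke(entry, message):
    """Steps 3 and 4: send a request; get back a result or a fault."""
    try:
        return {"result": entry["handler"](message)}
    except Exception as err:
        return {"fault": str(err)}     # fault message instead of a result
```

The client is insulated from implementation details exactly as the text describes: it sees only the advertised capability and the message exchange, never the handler behind it.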
The illustration shows the simplest case, but the process may be
significantly more complex in a real-world setting such as a commercial
application. For example, a given service may support only the HTTPS
protocol, be restricted to authorized users, require authentication, offer
different levels of performance to different users, or require payment for use.
Services can provide such details in a variety of ways, and the client can
use this information to make its selection. The above illustration shows a
simple synchronous, bi-directional message exchange pattern, but a variety
of other patterns is also possible. For example, an interaction may be
one-way, or the response may come from some other service that completed
the transaction, rather than from the service to which the client sent the request.