Grid computing is a distributed computing model that enables transparent sharing and aggregation of computing, storage, and network resources across dynamic and geographically dispersed organizations. Key characteristics include distributing computational resources among multiple and widely separated sources and users, providing a means for using distributed resources to solve large problems, and making resources appear as a single virtual machine with powerful capabilities. Example applications discussed include scientific computing, business applications, and volunteer computing projects.
Slides by Raquel Salcedo Gomes
For the English for Specific Purposes class at the IT technical course at IFSUL, campus Sapucaia do Sul. September 2017.
I am Tapas Kumar Palei, a B.Tech CSE student at the Ajay Binay Institute of Technology. Grid computing is the topic of my seminar presentation, and I have tried to gather everything about grid computing in it.
2. Grid Computing
Definition: grid computing is distributed computing performed transparently across multiple administrative domains.
Distributed high-performance computing.
Large, geographically distributed networks of computers.
Provides a means for using distributed resources to solve large problems.
"What the Web did for communication, grids endeavor to do for computation."
3. Grid Computing
Very general computing applications:
Database searches and queries.
Scientific applications.
Weather prediction.
Cryptography.
Business applications.
Transparency: distributing computational resources among multiple and widely separated sources and users is a difficult algorithmic problem.
4. Grid Computing
Grid computing is a distributed computing model.
Grid computing technology integrates distributed servers, storage systems, and networks into an integrated system that provides users with powerful computing and storage capacity.
To grid end users or applications, the grid looks like a single virtual machine with powerful capabilities.
The essence of grid computing is to manage heterogeneous, loosely coupled resources efficiently in this distributed system, and to coordinate those resources through a task scheduler so they can complete specific cooperative computing tasks.
5. Grid Computing
A grid computing network mainly consists of three types of machines:
Control node.
Provider.
User.
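The split above can be sketched as a toy scheduler: a control node that only coordinates, providers that do the work, and users that submit tasks. This is a minimal illustrative sketch, not any real grid toolkit's API; the class names (`ControlNode`, `Provider`) and the least-loaded assignment policy are assumptions.

```python
# Hypothetical sketch of the three grid node roles; not a real grid API.

class Provider:
    """A machine contributing resources to the grid."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # concurrent tasks this node can run
        self.running = []

    def free_slots(self):
        return self.capacity - len(self.running)

class ControlNode:
    """Coordinates providers; does no processing itself."""
    def __init__(self, providers):
        self.providers = providers

    def submit(self, task):
        # Assign the task to the provider with the most free capacity.
        best = max(self.providers, key=lambda p: p.free_slots())
        if best.free_slots() <= 0:
            return None               # the whole grid is saturated
        best.running.append(task)
        return best.name

# A "user" submits four tasks to a grid with three total slots.
providers = [Provider("siteA", 2), Provider("siteB", 1)]
ctrl = ControlNode(providers)
assignments = [ctrl.submit(f"task{i}") for i in range(4)]
# Three tasks land on providers; the fourth is rejected for lack of capacity.
```

A real scheduler would also weigh data locality, queue lengths, and policy constraints, but the division of labor between the node types is the same.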
7. Characteristics of Grids
Grids coordinate resources that are not subject to centralized control.
Grids use standard, open, general-purpose protocols and interfaces.
Grids deliver high qualities of service.
http://devresource.hp.com/drc/technical_papers/grid_soa/04.png
8. Grid vs. Parallel Computing
[Images: SHARCNet at the University of Western Ontario; a Beowulf cluster (compneuro.uwaterloo.ca/beowulf.html).]
9. Grid vs. Parallel Computing
Grid computing is distinguished from parallel computing on one or more multiprocessors:
Parallel computing: locally "clustered" machines or large supercomputers.
Grid computing: computation across different administrative domains.
www.chemistry.msu.edu/Facilities/Supercomputer/
10. Two Tenets of Grid Computing
Virtualization: individual resources (such as computers, disks, information sources, and applications) are pooled together and made available through abstractions. This overcomes "hard-coded" connections between providers and consumers of resources.
Provisioning: when a request for a resource is made, a specific resource is identified to fulfill the request. The system determines how to meet the need and optimizes system performance.
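The two tenets can be illustrated in a few lines: resources sit behind a pool abstraction (virtualization), and a concrete resource is only chosen when a request arrives (provisioning). All names here (`ResourcePool`, the best-fit policy) are invented for illustration, not from any grid standard.

```python
# Illustrative sketch of virtualization + provisioning; names are hypothetical.

class ResourcePool:
    """Consumers see only the pool, never a hard-coded provider."""
    def __init__(self):
        self.resources = {}          # kind -> list of available resources

    def register(self, kind, rid, size):
        self.resources.setdefault(kind, []).append({"id": rid, "size": size})

    def provision(self, kind, size_needed):
        # Provisioning: pick a specific resource only now, at request time.
        candidates = [r for r in self.resources.get(kind, [])
                      if r["size"] >= size_needed]
        if not candidates:
            return None
        # Best fit: the smallest resource that still satisfies the request.
        chosen = min(candidates, key=lambda r: r["size"])
        self.resources[kind].remove(chosen)   # resource is now allocated
        return chosen["id"]

pool = ResourcePool()
pool.register("disk", "d1", 500)
pool.register("disk", "d2", 100)
rid = pool.provision("disk", 80)    # best fit selects the smaller disk
```

The caller never named a disk; the system decided how to meet the need, which is the point of the abstraction.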
11. Characteristics of Grid Applications
Data are acquired by scientific instruments.
Data are stored in archives on separate, perhaps geographically separated, sites.
Data are managed by teams belonging to different organizations.
Large quantities of data (tera- or petabytes) are collected.
Software is used to analyze and summarize the raw data.
12. The Importance of Standardization
Without standardization, grid computing practitioners would need to acquire accounts at many different computer centers, managed by different organizations.
Different security and authentication protocols and accounting practices would have to be applied.
The software environment would be very heterogeneous.
13. The Importance of Middleware
Middleware eases grid users' experience and provides them with levels of abstraction.
Middleware extends the Web's information and database management capabilities, allowing remote deployment of computational resources.
14. Globus Toolkit
The most widely used middleware for grids.
An open-source toolkit for building computing grids.
Provides a standard platform upon which other services build.
Provides directory services, security, and resource management.
www.globus.org
15. CPU Scavenging
Unused PC resources worldwide are harnessed; also known as shared computing.
CPU-scavenging systems gain and lose machines at unpredictable times as users interact with their computers or as network connections fail.
CPU scavengers can migrate jobs to allow smooth operation.
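The migration idea above can be sketched as a coordinator that re-queues a job whenever a volunteer machine disappears, so no work is lost. The API here (`Scavenger`, `machine_lost`) is invented for illustration and is not how any particular scavenging system names things.

```python
# Hedged sketch of CPU scavenging: jobs migrate back to the queue when a
# volunteer PC is reclaimed by its owner or drops off the network.
from collections import deque

class Scavenger:
    def __init__(self):
        self.queue = deque()         # jobs waiting for an idle machine
        self.assigned = {}           # machine -> job currently running there

    def add_job(self, job):
        self.queue.append(job)

    def machine_available(self, machine):
        # An idle PC appears: hand it the next waiting job, if any.
        if self.queue and machine not in self.assigned:
            self.assigned[machine] = self.queue.popleft()

    def machine_lost(self, machine):
        # The owner reclaimed the PC or the connection failed:
        # put its job back at the front of the queue (migration).
        job = self.assigned.pop(machine, None)
        if job is not None:
            self.queue.appendleft(job)

s = Scavenger()
s.add_job("wu-42")
s.machine_available("pc1")   # pc1 starts working on wu-42
s.machine_lost("pc1")        # wu-42 migrates back; nothing is lost
```

Real systems additionally checkpoint partial results so a migrated job does not restart from zero, but the queue discipline is the core of the idea.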
16. SETI@home
Search for Extraterrestrial Intelligence.
Goal: to analyze vast amounts of data from the Arecibo radio telescope.
Initiated by the Space Sciences Laboratory at the University of California, Berkeley.
www.ras.ucalgary.ca/svlbiimages/arecibo.jpg, www.artscouncil.org.uk/spaceart
17. SETI@home
Uses a free screen saver, available to the public.
When activated, the screensaver program downloads time sequences of radio telescope data and searches them for radio sources.
SETI@home has more than 5 million participants.
It has inspired other scientific applications in need of large computing resources.
18. SETI@home
Main purpose: a program downloads and analyzes radio telescope data.
Data are recorded at the Arecibo Observatory in Puerto Rico.
The data are sent to Berkeley, where they are processed into work units of 107 seconds of data each.
These work units are sent from the SETI@home server over the Internet to participating computers around the world for analysis.
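The work-unit step above amounts to chopping a long recording into fixed-length chunks that volunteers can process independently. A minimal sketch, in the spirit of SETI@home's 107-second units (the function name and the handling of the final short chunk are my assumptions, not SETI@home's actual code):

```python
# Illustrative splitter for recorded signal data into distributable work units.

def split_into_work_units(total_seconds, unit_seconds=107):
    """Return (start, end) second ranges; the last unit may be shorter."""
    units = []
    start = 0
    while start < total_seconds:
        end = min(start + unit_seconds, total_seconds)
        units.append((start, end))
        start = end
    return units

# A 300-second recording becomes three independent work units.
units = split_into_work_units(300)
```

Because each unit is self-contained, the server can hand them to different volunteer machines in any order and re-issue any unit whose result never comes back.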
19. SETI@home
The analysis software can search for signals with about one-tenth the strength of those sought in previous surveys, because it makes use of a very computationally intensive algorithm.
Data are merged into a database using SETI@home computers in Berkeley; various pattern-detection algorithms are applied to search for the most interesting signals.
21. BOINC
Berkeley Open Infrastructure for Network Computing.
Funded by the National Science Foundation.
Used in the SETI project.
Client-server architecture:
Client – used by the computer supplying resources for one or more BOINC projects; performs the computations.
Server – system software, such as database services and the project's web site.
22. Remote Procedure Calls
The mechanism by which the server communicates with the client in BOINC.
Like a regular function call or method invocation, but one computer executes the function on another computer.
23. Remote Procedure Calls – Examples
Return screensaver mode: get_screensaver_mode(int& status)
Get a list of results for jobs in progress: get_results(RESULTS&)
Get a list of file transfers in progress: get_file_transfers(FILE_TRANSFERS&)
Get the client's current state: get_state(CC_STATE&)
24. Human Proteome Folding Project (HPFP)
Goal: to predict the structure of human proteins.
Devised at the Institute for Systems Biology and the University of Washington.
Produces the likely structures for each of the proteins using a set of predefined rules.
Improved knowledge of human proteins is important in developing new therapies.
Officially completed on July 18, 2006; a second stage is now underway.
25. Human Proteome Folding Project
[Images: a typical desktop screensaver setup for HPFP; the WCG desktop console, where users monitor progress on the protein-folding project.]
http://msnbcmedia.msn.com/j/msnbc/Components/Photos/041116/folding2.hmedium.jpg
http://ndg.gunzclan.org/Charlotte/graphics/2/images/IMG0153_JPG.jpg
26. Business Applications
Business application grid (BAG).
The major focus is using existing grid computing technologies to unite all of an organization's desktops, workstations, servers, printers, peripherals, etc., to perform useful work during idle time.
Usually focused on well-defined problems:
Calculating performance averages for a mutual fund.
Reducing processing time in wealth management systems.
Database applications.
27. Business Applications
A large financial services company uses specialized grid software for new corporate banking applications.
Oracle Corporation offers a grid database system.
28. Business Grid Middleware
Provides an IT-level infrastructure to support business applications.
Middleware provides services for composing, submitting, and managing business applications.
Business functions (e.g., credit card authorization and shipping-and-handling services) are not provided.
Globus Toolkit 4 makes it easier to build an application that taps into existing distributed computing resources (e.g., servers, storage, databases).
29. Conclusions
Grid computing is an "enabling technology" that is rapidly gaining popularity in:
Science.
Medicine.
Engineering.
Business and financial applications.
Many software vendors offer grid computing toolkits and middleware.
In 2004, 20% of companies were seeking grid computing solutions (Evans Data Corp.).
30. Benefits of Grid Computing
Collaboration.
Increased productivity.
Efficient use of resources and storage.
Cost-effectiveness.
Heterogeneous environments.
Failure tolerance.
Transparency.
31. Advantages of Grid Computing
It is not centralized: no servers are required except the control node, which is used only for controlling, not for processing.
Multiple heterogeneous machines, i.e., machines with different operating systems, can use a single grid computing network.
Tasks can be performed in parallel across various physical locations, and the users don't have to pay for them (with money).
32. Challenges
Lack of control over resources and administration.
Security.
Middleware.
Network failures.
Cultural issues.
33. Disadvantages of Grid Computing
Grid software is still at an early stage of evolution.
A super-fast interconnect between computing resources is needed.
Licensing across many servers may make grid computing prohibitive for some applications.
Many groups are reluctant to share resources.
34. Open Grid Services Architecture
OGSA – a standard for grid-based applications.
A framework for meeting grid requirements.
[Layer diagram: application-specific grid services (e.g., astronomy, biomedical informatics, high-energy physics) build on OGSA services (directory, management, security), which build on OGSI services (naming, service data/metadata, service creation and deletion, fault model, service groups) and standard grid service interfaces such as GridService and Factory, all resting on web services – the Open Grid Services Infrastructure.]
35. Globus Toolkit
Low-level functions (http://gdp.globus.org/gt3-tutorial/multiplehtml/ch01s04.html):
Security – restricts access to grid services so that only authorized clients can use them; provides another layer of security on top of firewalls.
Job management – checking the status of jobs, pausing and stopping them if necessary.
Index services – helping to locate grid resources to meet specific needs.
Reliable file transfer service (RFT) – performs large file transfers from a client to a grid service.
Replica management – keeps track of subsets of large data sets that are being worked on.
Other, non-GT3 services can run on top of the GT3 architecture.
36. Other Grid Tools
Resource management: Grid Resource Allocation and Management Protocol (GRAM).
Information services: Monitoring and Discovery Service (MDS).
Security services: Grid Security Infrastructure (GSI).
Data movement and management: Global Access to Secondary Storage (GASS) and GridFTP.
37. World-Wide Telescope (2002)
Goal: deployment of data resources shared by astronomers.
Data:
Archives of observations over a particular period of time, part of the EM spectrum, and area of the sky.
Observations collected at different sites around the world.
Data on the same celestial objects are combined over different periods of time and different parts of the EM spectrum.
38. World-Wide Telescope (2)
Data archives (terabytes) are managed locally by the teams that collect the data.
As data are acquired, they are analyzed and stored as transformed data so that they can be used by remote astronomy sites.
Scientists take on a librarian role.
Metadata are required to describe:
The time the data were collected.
The part of the sky observed.
The instruments used.
39. WCG Ongoing Projects
FightAIDS@Home:
Launched by WCG in 2005.
Each computer processes one potential drug molecule and tests how well it would dock with HIV protease, inhibiting viral reproduction.
Human Proteome Folding Phase 2:
Released in 2006.
An extension of HPF1, focusing on human-secreted proteins.
Better protein models, but more computationally intensive.
40. World Community Grid (WCG)
Goal: to create the world's largest public computing grid for humanitarian concerns.
Administered and funded by IBM.
Platforms: Windows, Linux, and Mac OS X.
Uses the idle time of Internet-connected desktop computers.
The agent works as a screen saver (like SETI@home), only using a computer's resources when it would otherwise be idle and returning resources to the user when requested.
Projects are approved by an advisory board: representatives of major research institutions, universities, the UN, and the WHO.
41. WCG – Smallpox Research
A completed project.
WCG largely began due to the success of this project in shaving years off research time.
Analysis of therapeutic candidates to fight the smallpox virus.
About 35 million potential drug molecules were screened against several smallpox proteins, resulting in 44 new potential treatments.
42. WCG Ongoing Projects
Help Defeat Cancer (2006):
Processes large numbers of tissue samples using tissue microarrays.
Genome Comparison (2006):
Compares gene sequences of different organisms to find similarities.
Goal: determining the purpose of specific gene sequences by comparing them with similar sequences of known function in other organisms.
43. Other Grid Projects
Description of the project – Reference:
1. Aircraft engine maintenance using fault histories and sensors for predictive diagnostics – www.cs.york.ac.uk/dame
2. Telepresence for predicting the effects of earthquakes on buildings, using simulations and test sites – www.neesgrid.org
3. Bio-medical informatics network providing researchers with access to experiments and visualizations of results – nbcr.sdsc.edu
4. Analysis of data from the CMS high-energy particle detector at CERN by physicists world-wide over 15 years – www.uscms.org
5. Testing the effects of candidate drug molecules on the activity of a protein, by performing parallel computations using idle desktop computers – [Taufer et al. 2003], [Chien 2004]
6. Use of the Sun Grid Engine to enhance aerial photographs by using spare capacity on a cluster of web servers – www.globexplorer.com
7. The Butterfly Grid supports multiplayer games for very large numbers of players on the Internet over the Globus toolkit – www.butterfly.net
8. The Access Grid supports the needs of small-group collaboration, for example by providing shared workspaces – www.accessgrid.org
44. Requirements of Grid Systems
Remote access to resources, specifically to archived data.
Data processing at the site where the data are managed.
Remote requests (queries) result in a visualization or in results from a small quantity of data.
The resource manager of a data archive creates instances of services when they are needed, similar to the distributed object model, where servant objects are created when needed.
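The on-demand service creation described above (the same idea as OGSA's Factory interface) can be sketched as a resource manager that instantiates a query service only when the first request arrives and reuses it afterwards. The class names and the lazy-initialization policy here are illustrative assumptions, not a real toolkit's API.

```python
# Hypothetical sketch of factory-style, on-demand service creation.

class QueryService:
    """A servant object answering queries against one data archive."""
    def __init__(self, archive):
        self.archive = archive

    def query(self, key):
        return self.archive.get(key)

class ResourceManager:
    """Manages an archive; creates service instances only when needed."""
    def __init__(self, archive):
        self.archive = archive
        self._service = None          # no instance exists before first use

    def get_service(self):
        if self._service is None:     # lazily create, like a factory
            self._service = QueryService(self.archive)
        return self._service

mgr = ResourceManager({"sky-patch-7": "observation data"})
svc = mgr.get_service()               # first request: service is created now
result = svc.query("sky-patch-7")
```

Deferring creation this way means an archive that nobody queries consumes no service resources, which matters when one manager fronts many large data sets.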
45. Requirements of Grid Systems (2)
Metadata to describe characteristics of
archived data.
Directory services based on the metadata.
Software for:
Query management.
Data transfer.
Resource reservation.