Presented by Ashok J., 3rd BCA, AVVM Sri Pushpam College, Poondi, Thanjavur
Slide 2: GRID COMPUTING. A Conceptual View of Grid Computing
Slide 3: What Is Grid Computing? Grid computing is the collection of computer resources from multiple locations to reach a common goal.
Slide 4: How Grid Computing Works
Slide 5: Types of Grid: Computational Grid, Data Grid, Collaboration Grid, Network Grid, Utility Grid
Slide 6: Grid topologies
Slide 7: Intragrids. A typical intragrid topology exists within a single organization, providing a basic set of grid services.
Slide 8: Extragrids. An extragrid typically involves more than one security provider, and the level of management complexity increases.
Slide 9: Intergrids. An intergrid requires the dynamic integration of applications, resources, and services with partners; customers access it via WAN/Internet.
Slide 10: A Simple Grid
Slide 11: A Complex Intergrid
Slide 12: Grid Scheduling. An application is one or more jobs that are scheduled to run on a grid.
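The slide's definition can be sketched in code. The following is a minimal, hypothetical dispatcher, not any real grid middleware: an "application" is a list of jobs, and each job is assigned to the currently least-loaded node. The function name, node names, and job costs are all illustrative assumptions.

```python
# Minimal sketch of grid-style job scheduling (illustrative only; real grid
# schedulers are far more elaborate). An application is a list of jobs, and
# each job is dispatched to the least-loaded node.

def schedule(jobs, nodes):
    """Assign each job of an application to the least-loaded node.

    jobs  -- list of (job_name, cost) tuples
    nodes -- list of node names
    Returns (placement, load): node -> jobs, and node -> total cost.
    """
    load = {n: 0 for n in nodes}
    placement = {n: [] for n in nodes}
    for name, cost in jobs:
        target = min(nodes, key=lambda n: load[n])  # least-loaded node wins
        placement[target].append(name)
        load[target] += cost
    return placement, load

jobs = [("j1", 4), ("j2", 2), ("j3", 3), ("j4", 1)]
placement, load = schedule(jobs, ["nodeA", "nodeB"])
```

With these sample jobs the work ends up balanced: each node carries a total cost of 5.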
Slide 13: Advantages: can solve larger, more complex problems in a shorter time; easier to collaborate with other organizations; makes better use of existing hardware.
Slide 14: Disadvantages: grid software and standards are still evolving; there is a learning curve to get started; job submission is non-interactive.
Slide 15: Benefits of Grid Computing: exploiting underutilized resources; parallel CPU capacity; virtual organizations for collaboration and virtual resources; access to additional resources; resource balancing; reliability; management.
Presented By Ashok.J ashokmannai0005@gmail.com
The term “Cloud Computing” is a recent buzzword in the IT world. Behind this fancy poetic phrase lies a true picture of the future of computing, from both a technical perspective and a social perspective. Although the term “Cloud Computing” is recent, the idea of centralizing computation and storage in distributed data centers maintained by third-party companies is not new; it dates back to the 1990s, alongside distributed computing approaches like grid computing. Cloud computing aims at providing IT as a service to cloud users on an on-demand basis, with greater flexibility, availability, reliability, and scalability, under a utility computing model. This new paradigm of computing has immense potential for use in the field of e-governance and in rural development in a developing country like India.
This is a description of grid computing. It gives a deep idea of the grid: what grid computing is, why we need it, and how it works. The history and architecture of grid computing are also covered, along with advantages, disadvantages, and a conclusion.
The encryption mechanism is a digital coding system dedicated to preserving the confidentiality and integrity of data. It encodes plaintext into a protected, unreadable format.
4.1 Introduction
- Potential Threats and Attacks on Computer System
- Confinement Problems
- Design Issues in Building Secure Distributed Systems
4.2 Cryptography
- Symmetric Cryptosystem Algorithm: DES
- Asymmetric Cryptosystem
4.3 Secure Channels
- Authentication
- Message Integrity and Confidentiality
- Secure Group Communication
4.4 Access Control
- General Issues
- Firewalls
- Secure Mobile Code
4.5 Security Management
- Key Management
- Issues in Key Distribution
- Secure Group Management
- Authorization Management
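As a minimal illustration of the symmetric model listed under 4.2, where one shared key both encrypts and decrypts, here is a toy XOR cipher sketch. It is emphatically not DES and offers no real security; the function name, key, and message are illustrative assumptions only.

```python
# Toy symmetric cipher: the SAME shared key both encrypts and decrypts,
# which is the defining property of a symmetric cryptosystem (section 4.2).
# XOR is used purely for illustration -- this is NOT DES and is insecure.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying it twice restores data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext                    # protected, unreadable form
assert xor_cipher(ciphertext, key) == plaintext   # same key decrypts
```

A real symmetric cipher such as DES (or, today, AES) replaces the XOR step with many rounds of keyed substitution and permutation, but the key-sharing model is the same.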
Cloud computing is using the internet to access someone else's software running on someone else's hardware in someone else's data center.
OUTLINE:
Definitions of Cloud computing
Architecture of Cloud computing
Benefits of Cloud computing
Opportunities of Cloud Computing
Cloud computing – Google Apps
Grid computing vs Cloud computing
I am Tapas Kumar Palei. I am studying B.Tech CSE at Ajay Binay Institute of Technology. Grid computing is my seminar presentation topic, and I have tried to gather everything about grid computing in this presentation.
The title of this talk is a crass attempt to be catchy and topical, by referring to the recent victory of Watson in Jeopardy.
My point (perhaps confusingly) is not that new computer capabilities are a bad thing. On the contrary, these capabilities represent a tremendous opportunity for science. The challenge that I speak to is how we leverage these capabilities without computers and computation overwhelming the research community in terms of both human and financial resources. The solution, I suggest, is to get computation out of the lab—to outsource it to third party providers.
Abstract follows:
We have made much progress over the past decade toward effective distributed cyberinfrastructure. In big-science fields such as high energy physics, astronomy, and climate, thousands benefit daily from tools that enable the distributed management and analysis of vast quantities of data. But we now face a far greater challenge. Exploding data volumes and new research methodologies mean that many more--ultimately most?--researchers will soon require similar capabilities. How can we possibly supply information technology (IT) at this scale, given constrained budgets? Must every lab become filled with computers, and every researcher an IT specialist?
I propose that the answer is to take a leaf from industry, which is slashing both the costs and complexity of consumer and business IT by moving it out of homes and offices to so-called cloud providers. I suggest that by similarly moving research IT out of the lab, we can realize comparable economies of scale and reductions in complexity, empowering investigators with new capabilities and freeing them to focus on their research.
I describe work we are doing to realize this approach, focusing initially on research data lifecycle management. I present promising results obtained to date, and suggest a path towards large-scale delivery of these capabilities. I also suggest that these developments are part of a larger "revolution in scientific affairs," as profound in its implications as the much-discussed "revolution in military affairs" resulting from more capable, low-cost IT. I conclude with some thoughts on how researchers, educators, and institutions may want to prepare for this revolution.
A talk at the RPI-NSF Workshop on Multiscale Modeling of Complex Data, September 12, 2011, Troy NY, USA.
We have made much progress over the past decade toward effectively harnessing the collective power of IT resources distributed across the globe. In fields such as high-energy physics, astronomy, and climate, thousands benefit daily from tools that manage and analyze large quantities of data produced and consumed by large collaborative teams. But we now face a far greater challenge: Exploding data volumes and powerful simulation tools mean that far more--ultimately most?--researchers will soon require capabilities not so different from those used by these big-science teams. How is the general population of researchers and institutions to meet these needs? Must every lab be filled with computers loaded with sophisticated software, and every researcher become an information technology (IT) specialist? Can we possibly afford to equip our labs in this way, and where would we find the experts to operate them?

Consumers and businesses face similar challenges, and industry has responded by moving IT out of homes and offices to so-called cloud providers (e.g., GMail, Google Docs, Salesforce), slashing costs and complexity. I suggest that by similarly moving research IT out of the lab, we can realize comparable economies of scale and reductions in complexity. More importantly, we can free researchers from the burden of managing IT, giving them back their time to focus on research and empowering them to go beyond the scope of what was previously possible.

I describe work we are doing at the Computation Institute to realize this approach, focusing initially on research data lifecycle management. I present promising results obtained to date and suggest a path towards large-scale delivery of these capabilities.
Artificial Bee Colony (ABC) is a swarm optimization technique, generally used to solve nonlinear and complex problems. ABC is one of the simplest and most up-to-date population-based probabilistic strategies for global optimization. Like other population-based algorithms, ABC also has drawbacks: it is computationally expensive due to the sluggish nature of its search procedure. The solution search equation of ABC is notably driven by a random quantity, which facilitates exploration at the cost of exploitation of the search space. Due to the large step size in the solution search equation of ABC, the chances of skipping the true solution are higher. For that reason, this paper introduces a new search strategy in order to balance the diversity and convergence capability of ABC. Both the employed bee phase and the onlooker bee phase are improved with the help of a local search strategy inspired by memetic algorithms. This paper also proposes new strategies for fitness calculation and probability calculation. The proposed algorithm is named Improved Memetic Search in ABC (IMeABC). It is tested over 13 unbiased benchmark functions of different complexities, and two real-world problems are also considered, to prove the proposed algorithm's superiority over the original ABC algorithm and its recent variants.
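The "solution search equation" the abstract refers to is, in the standard ABC formulation, v_ij = x_ij + phi * (x_ij - x_kj), where phi is drawn uniformly from [-1, 1] and k indexes another randomly chosen food source. A minimal sketch (the function and variable names are illustrative assumptions) shows how that random factor controls step size, which is exactly what the memetic local-search variants try to rein in:

```python
import random

# Sketch of ABC's standard solution-search step:
#   v_ij = x_ij + phi * (x_ij - x_kj),  phi ~ Uniform[-1, 1]
# The random factor phi is the "haphazard quantity" the abstract mentions:
# large |phi| gives big exploratory steps, small |phi| gives fine-grained
# exploitation near the current solution.

def abc_candidate(x_i, x_k, j, rng=random):
    """Produce a candidate by perturbing dimension j of solution x_i
    toward/away from a randomly chosen neighbour solution x_k."""
    phi = rng.uniform(-1.0, 1.0)
    v = list(x_i)                              # all other dimensions unchanged
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j])    # perturb one dimension only
    return v
```

The step size is bounded by |x_ij - x_kj|, so when solutions are far apart the candidate can jump past the optimum; shrinking this effective step near good solutions is the motivation for the memetic local search described above.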
Spider Monkey Optimization (SMO) is the newest addition to the class of swarm intelligence algorithms. SMO is a population-based stochastic meta-heuristic, motivated by the intelligent foraging behaviour of fission-fusion structured social creatures. SMO is a very good option for complex optimization problems. This paper proposes a modified strategy in order to enhance the performance of the original SMO: it introduces a position update strategy in SMO and modifies both the local leader and global leader phases. The proposed strategy is named Modified Position Update in Spider Monkey Optimization (MPU-SMO). The proposed algorithm was tested over benchmark problems, and results show that it gives better results for the considered unbiased problems.
The Artificial Bee Colony (ABC) algorithm is a Nature Inspired Algorithm (NIA) based on the intelligent food foraging behaviour of the honey bee swarm. ABC has outperformed other NIAs and other local search heuristics when tested on benchmark functions as well as real-world problems, but it occasionally shows premature convergence and stagnation due to a lack of balance between exploration and exploitation. This paper establishes a local search mechanism that enhances the exploration capability of ABC and avoids the dilemma of stagnation. With the help of the newly introduced local search strategy, it tries to balance intensification and diversification of the search space. The proposed algorithm is named Enhanced Local Search in ABC (EnABC) and is tested over eleven benchmark functions. Results are evidence of its dominance over other competitive algorithms.
The Artificial Bee Colony (ABC) optimization algorithm is one of the recent population-based probabilistic approaches developed for global optimization. ABC is simple and has shown significant improvement over other Nature Inspired Algorithms (NIAs) when tested over standard benchmark functions and some complex real-world optimization problems. Memetic Algorithms have also become one of the key methodologies for solving very large and complex real-world optimization problems. The solution search equation of Memetic ABC is based on Golden Section Search and an arbitrary value that tries to balance exploration and exploitation of the search space. But there are still chances of skipping the exact solution due to its step size. In order to balance the diversification and intensification capability of Memetic ABC, the step size in Memetic ABC is randomized. The proposed algorithm is named Randomized Memetic ABC (RMABC). In RMABC, new solutions are generated near the best-so-far solution, which helps to increase the exploitation capability of Memetic ABC. Experiments on test problems of different complexities and one well-known engineering optimization application show that the proposed algorithm outperforms Memetic ABC (MeABC) and some other variants of the ABC algorithm (such as Gbest-guided ABC (GABC), Hooke-Jeeves ABC (HJABC), Best-So-Far ABC (BSFABC), and Modified ABC (MABC)) in almost all the problems.
Differential Evolution (DE) is a renowned optimization strategy that can easily solve nonlinear and comprehensive problems. DE is a well-known and uncomplicated population-based probabilistic approach for global optimization. It has outperformed a number of Evolutionary Algorithms and other search heuristics, such as Particle Swarm Optimization, when tested over both benchmark and real-world problems. Nevertheless, DE, like other probabilistic optimization algorithms, sometimes exhibits premature convergence and stagnates at a suboptimal position. In order to avoid stagnation behaviour while maintaining a good convergence speed, an innovative search strategy is introduced, named memetic search in DE. In the proposed strategy, the position update equation is customized as per a memetic search scheme, in which a better solution participates more times in the position update procedure. The position update equation is inspired by the memetic search in the artificial bee colony algorithm. The proposed strategy is named Memetic Search in Differential Evolution (MSDE). To prove the efficiency and efficacy of MSDE, it is tested over 8 benchmark optimization problems and three real-world optimization problems. A comparative analysis has also been carried out between the proposed MSDE and the original DE. Results show that the proposed algorithm outperforms basic DE and its recent variants in most of the experiments.
Artificial Bee Colony (ABC) is a distinguished optimization strategy that can resolve nonlinear and multifaceted problems. It is a comparatively straightforward and modern population-based probabilistic approach for global optimization. Like other population-based algorithms, ABC is computationally expensive due to the slow nature of its search procedure. The solution search equation of ABC is extensively influenced by an arbitrary quantity that helps exploration at the cost of exploitation of the better search space. In the solution search equation of ABC, due to the outsized step size, the chance of skipping the true solution is high. Therefore, this paper improves the onlooker bee phase with the help of a local search strategy inspired by memetic algorithms, to balance the diversity and convergence capability of ABC. The proposed algorithm is named Improved Onlooker Bee Phase in ABC (IoABC). It is tested over 12 well-known unbiased test problems of diverse complexities and two engineering optimization problems; results show that the proposed algorithm outperforms basic ABC and its recent variants in most of the experiments.
The artificial bee colony (ABC) algorithm is a well-known and one of the latest swarm intelligence based techniques. It is a population-based meta-heuristic algorithm used for numerical optimization, based on the intelligent behavior of honey bees. The ABC algorithm is one of the most popular techniques used in optimization problems, and it has some major advantages over other heuristic methods. To utilize its good features, a number of researchers have combined the ABC algorithm with other methods, generating new hybrid methods. This paper provides a comparative analysis of the hybrid differential Artificial Bee Colony algorithm against hybrid ABC-SPSO, a Genetic Algorithm, and an independent rough set approach, based on parameters such as technique, dimension, and methodology.
The artificial bee colony (ABC) algorithm has proved its importance in solving a number of problems, including engineering optimization problems. The ABC algorithm is one of the most popular and youngest members of the family of population-based, nature-inspired meta-heuristic swarm intelligence methods. ABC has proved its superiority over some other Nature Inspired Algorithms (NIAs) when applied to both benchmark functions and real-world problems. The performance of the search process of ABC depends on a random value that tries to balance the exploration and exploitation phases. In order to increase performance, it is required to balance the exploration of the search space and the exploitation of the optimal solution in ABC. This paper outlines a new hybrid of the ABC algorithm with a Genetic Algorithm. The proposed method integrates the crossover operation from the Genetic Algorithm (GA) with the original ABC algorithm, and is named Crossover-based ABC (CbABC). CbABC strengthens the exploitation phase of ABC, as crossover enhances exploration of the search space. CbABC was tested over four standard benchmark functions and a popular continuous optimization problem.
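The crossover operation that CbABC imports from GA can be sketched as a generic single-point, real-coded crossover between two randomly selected food sources (solution vectors). The exact operator and parameter choices of the paper are not reproduced here; the function name, parent vectors, and cut point are illustrative assumptions.

```python
# Sketch of single-point crossover on real-coded solution vectors, the
# GA ingredient that a hybrid like CbABC mixes into ABC. The tails of two
# parent solutions are swapped at a cut point to produce two offspring.

def one_point_crossover(parent_a, parent_b, cut):
    # Head of one parent + tail of the other, and vice versa.
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

a = [0.1, 0.2, 0.3, 0.4]
b = [0.5, 0.6, 0.7, 0.8]
c1, c2 = one_point_crossover(a, b, 2)
```

In a hybrid scheme the cut point and the parent pair would be chosen at random each generation; the offspring then compete with (or replace) members of the bee colony's food-source population.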
Multiplication of two 3-D sparse matrices using 1-D arrays and linked lists (Dr Sandeep Kumar Poonia)
A basic algorithm of 3D sparse matrix multiplication (BASMM) is presented using one-dimensional (1D) arrays, which is then used for multiplying two 3D sparse matrices using linked lists. In this algorithm, a general concept is derived in which the non-zero elements of the first and second sparse matrices (3D) are entered, but the values are stored in 1D arrays and linked lists, so that zeros can be removed or ignored rather than stored in memory. The position of each non-zero value, i.e. its row and column position, is also stored. In this way, space complexity is decreased. There are two ways to store a sparse matrix in memory: row major order and column major order. In this algorithm, row major order is used. By multiplying the two matrices with the help of the BASMM algorithm, time complexity is also decreased. For the implementation, simple C programming and concepts of data structures are used, which are very easy for everyone to understand.
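The storage idea can be sketched as follows. This is an illustrative 2-D, coordinate-form version only, not the paper's BASMM for 3-D matrices with linked lists; all names are assumptions. Only non-zero values are kept, in parallel 1-D arrays for value, row, and column, scanned in row major order, and the product is computed directly from those arrays.

```python
# Illustrative sketch of the BASMM storage idea: keep only non-zero entries
# of a matrix in parallel 1-D arrays (row, column, value) in row-major
# order, then multiply two matrices directly from those arrays. The paper
# handles 3-D matrices with linked lists; this 2-D version shows the idea.

def to_coo(matrix):
    rows, cols, vals = [], [], []
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):          # row-major scan
            if v != 0:                       # zeros are never stored
                rows.append(i); cols.append(j); vals.append(v)
    return rows, cols, vals

def coo_multiply(a, b, n, p):
    """Multiply an n x m matrix by an m x p matrix, both in COO form."""
    ar, ac, av = a
    br, bc, bv = b
    c = [[0] * p for _ in range(n)]
    for i, k, v in zip(ar, ac, av):          # each stored A[i][k]
        for k2, j, w in zip(br, bc, bv):     # each stored B[k2][j]
            if k == k2:                      # contributes A[i][k] * B[k][j]
                c[i][j] += v * w
    return c

A = [[1, 0], [0, 2]]
B = [[0, 3], [4, 0]]
print(coo_multiply(to_coo(A), to_coo(B), 2, 2))  # [[0, 3], [8, 0]]
```

Because the loops run only over stored non-zeros, both memory use and multiplication work scale with the number of non-zero entries instead of the full matrix size.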
Smart Huffman Compression is a software appliance designed to compress a file in a better way. Functioning as a JSP, it provides a high-level abstraction of a Java Servlet. Smart Huffman Compression encodes digital information using fewer bits, reducing the size of a file without loss of data, in a single, easy-to-manage software appliance form factor. It also provides a decompression facility. Smart Huffman Compression provides an organization with effective solutions for reducing file size through lossless compression, and it expedites data security using its encoding functionality. It is necessary to analyze the relationships between the different methods and put them into a framework to better understand and better exploit the possibilities that compression provides: image compression, data compression, audio compression, video compression, and so on.
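A minimal sketch of the Huffman coding idea behind the appliance described above (this is the textbook algorithm, not the product's actual implementation): frequent symbols get shorter bit codes, so the encoded stream uses fewer bits overall and decodes back without any loss.

```python
import heapq
from collections import Counter

# Textbook Huffman coding sketch: repeatedly merge the two least frequent
# subtrees; symbols in the lighter subtree gain a '0' prefix, the heavier
# a '1', so frequent symbols end up with short prefix-free codes.

def huffman_codes(text):
    # Heap of (frequency, tiebreak, {symbol: partial_code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[ch] for ch in "aaaabbc")
# The most frequent symbol gets the shortest code; coding is lossless
# because the codes are prefix-free.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

Here "aaaabbc" encodes in 10 bits versus 14 bits for a fixed 2-bit-per-symbol code, which is the size reduction the description refers to.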
The Artificial Bee Colony (ABC) algorithm is a Nature Inspired Algorithm (NIA) based on the intelligent food foraging behaviour of the honey bee swarm. This paper introduces a local search strategy that enhances the exploration competence of ABC and avoids the problem of stagnation. The proposed strategy introduces two new local search phases into the original ABC: one just after the onlooker bee phase and one after the scout bee phase. The newly introduced phases are inspired by a modified Golden Section Search (GSS) strategy. The proposed strategy is named New Local Search Strategy in ABC (NLSSABC). The proposed NLSSABC algorithm is applied to thirteen standard benchmark functions in order to prove its efficiency.
The program slicing technique is used for decomposition of a program by analyzing that program's data and control flow. The main applications of program slicing include various software engineering activities such as program debugging, program understanding, program maintenance, testing, and complexity measurement. When a slicing technique gathers information about the data and control flow of the program from an actual, specific execution (or set of executions) of it, it is said to be dynamic slicing; otherwise it is said to be static slicing. Generally, dynamic slices are smaller than static ones, because a dynamic slice contains only the statements of the program that are affected by the slicing criterion for a particular execution. This paper reports a new approach to program slicing that is a mixed approach of static and dynamic slicing (S-D slicing), using object-oriented concepts in the C++ language, that will reduce the complexity of the program and simplify the program for various software engineering applications like program debugging.
The Artificial Bee Colony (ABC) algorithm is a population-based heuristic search technique used for optimization problems. ABC is a very effective optimization technique for continuous optimization problems. Crossover operators have a better exploration property, so crossover operators are added to ABC. This paper presents ABC with different types of real-coded crossover operators and its application to the Travelling Salesman Problem (TSP). Each crossover operator is applied to two randomly selected parents from the current swarm. Two offspring are generated from the crossover; the worst parent is replaced by the best offspring, while the other parent remains the same. ABC with real-coded crossover operators is applied to the travelling salesman problem. The experimental results show that our proposed algorithm performs better than ABC without crossover in terms of efficiency and accuracy.
Performance evaluation of diff routing protocols in wsn using difft network p... (Dr Sandeep Kumar Poonia)
In the recent past, wireless sensor networks have been introduced for use in many applications. To design such networks, the factors that need to be considered are the coverage area, mobility, power consumption, communication capabilities, etc. The challenging goal of our project is to create a simulator to support wireless sensor network simulation. The network simulator NS-2, which supports both wired and wireless networks, is used with the wireless sensor network.
1. GRID COMPUTING
Sandeep Kumar Poonia
Head Of Dept. CS/IT
B.E., M.Tech., UGC-NET
LM-IAENG, LM-IACSIT,LM-CSTA, LM-AIRCC, LM-SCIEI, AM-UACEE
2. WHY GRID COMPUTING?
40% Mainframes are idle
90% Unix servers are idle
95% PC servers are idle
0-15% Mainframes are idle in peak-hour
70% PC servers are idle in peak-hour
Source: “Grid Computing” Dr Daron G Green
3. OUTLINE
Introduction to Grid Computing
Methods of Grid computing
Grid Middleware
Grid Architecture
4. ELECTRICAL POWER GRID ANALOGY
Electrical power
grid
users (or electrical
appliances) get access to
electricity through wall
sockets with no care or
consideration for where or
how the electricity is
actually generated.
“The power grid” links
together power plants of
many different kinds
The Grid
users (or client applications) gain
access to computing resources
(processors, storage, data,
applications, and so on) as needed
with little or no knowledge of where
those resources are located or what
the underlying technologies,
hardware, operating system, and so
on are
"the Grid" links together computing
resources (PCs, workstations, servers,
storage elements) and provides the
mechanism needed to access them.
5. WHY NEED GRID COMPUTING?
Core networking technology now accelerates at a much
faster rate than advances in microprocessor speeds
Exploiting underutilized resources
Parallel CPU capacity
Virtual resources and virtual organizations for
collaboration
Access to additional resources
6. WHO NEEDS GRID COMPUTING?
Not just computer scientists…
scientists “hit the wall” when faced with situations:
The amount of data they need is huge and the data is stored in
different institutions.
The amount of similar calculations the scientist has to do is
huge.
Other areas:
Government
Business
Education
Industrial design
……
7. LIVING IN AN EXPONENTIAL WORLD
(1) COMPUTING & SENSORS
Moore's Law: transistor count doubles every 18 months
[Figures: magnetohydrodynamics simulation; star formation]
8. LIVING IN AN EXPONENTIAL WORLD:
(2) STORAGE
Storage density doubles every 12 months
Dramatic growth in online data (1 petabyte =
1000 terabytes = 1,000,000 gigabytes)
2000 ~0.5 petabyte
2005 ~10 petabytes
2010 ~100 petabytes
2015 ~1000 petabytes?
Transforming entire disciplines in physical and,
increasingly, biological sciences; humanities
next?
9. DATA INTENSIVE PHYSICAL SCIENCES
High energy & nuclear physics
Including new experiments at CERN
Gravity wave searches
LIGO, GEO, VIRGO
Time-dependent 3-D systems (simulation, data)
Earth Observation, climate modeling
Geophysics, earthquake modeling
Fluids, aerodynamic design
Pollutant dispersal scenarios
Astronomy: Digital sky surveys
10. ONGOING ASTRONOMICAL MEGA-SURVEYS
Large number of new surveys
Multi-TB in size, 100M objects or larger
In databases
Individual archives planned and under way
Multi-wavelength view of the sky
> 13 wavelength coverage within 5 years
Impressive early discoveries
Finding exotic objects by unusual colors
L,T dwarfs, high redshift quasars
Finding objects by time variability
Gravitational micro-lensing
MACHO
2MASS
SDSS
DPOSS
GSC-II
COBE
MAP
NVSS
FIRST
GALEX
ROSAT
OGLE
...
11. COMING FLOODS OF ASTRONOMY DATA
The planned Large Synoptic Survey Telescope
will produce over 10 petabytes per year by 2008!
All-sky survey every few days, so it will provide
fine-grained time series for the first time
12. DATA INTENSIVE BIOLOGY AND MEDICINE
Medical data
X-Ray, mammography data, etc. (many petabytes)
Digitizing patient records
X-ray crystallography
Molecular genomics and related disciplines
Human Genome, other genome databases
Proteomics (protein structure, activities, …)
Protein interactions, drug delivery
Virtual Population Laboratory (proposed)
Simulate likely spread of disease outbreaks
Brain scans (3-D, time dependent)
13. A BRAIN IS A LOT OF DATA! (MARK ELLISMAN, UCSD)
We need to get to one micron to know the location of every cell. We're just now starting to get to 10 microns - Grids will help get us there and further.
And comparisons must be made among many.
14. Fastest virtual supercomputers
As of April 2013, Folding@home – 11.4 x86-equivalent
(5.8 "native") PFLOPS.
As of March 2013, BOINC – processing on average 9.2
PFLOPS.
As of April 2010, MilkyWay@Home computes at over
1.6 PFLOPS, with a large amount of this work coming from
GPUs.
As of April 2010, SETI@home computes at an average
of more than 730 TFLOPS.
As of April 2010, Einstein@Home is crunching more than
210 TFLOPS.
As of June 2011, GIMPS is sustaining 61 TFLOPS.
15. HOW GRID COMPUTING WORKS
Supercomputer,
big mainframe…
Idle time
Idle CPU
Idle CPU
Idle time
Source: “The Evolving Computing Model: Grid Computing” Michael Teyssedre
16. HOW GRID COMPUTING WORKS
Virtual machine
Virtual CPU…
Idle time
Idle CPU
Idle CPU
Idle time
Source: “The Evolving Computing Model: Grid Computing” Michael Teyssedre
17. HOW GRID COMPUTING WORKS
Grid
Computing
0% idle
0% idle
0% idle
0% idle
Source: “The Evolving Computing Model: Grid Computing” Michael Teyssedre
19. WHAT IS A GRID?
Many definitions exist in the literature
Early definitions: Foster and Kesselman, 1998:
"A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational facilities"
Kleinrock, 1969:
"We will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country."
20. 3-POINT CHECKLIST (FOSTER 2002)
1. Coordinates resources not subject to
centralized control
2. Uses standard, open, general purpose protocols
and interfaces
3. Delivers nontrivial qualities of service
• e.g., response time, throughput, availability,
security
21. DEFINITION
Grid computing is…
A distributed computing system
Where a group of computers are connected
To create and work as one large virtual
computing power, storage, database, application,
and service
22. DEFINITION
Grid computing…
Allows a group of computers to share the system
securely and
Optimizes their collective resources to meet
required workloads
By using open standards
23. GRID COMPUTING
Grid computing is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
Grid computing (Foster and Kesselman, 1999) is a growing technology that facilitates the execution of large-scale, resource-intensive applications on geographically distributed computing resources.
Facilitates flexible, secure, coordinated large-scale resource sharing among dynamic collections of individuals, institutions, and resources.
Enables communities ("virtual organizations") to share geographically distributed resources as they pursue common goals.
- Ian Foster and Carl Kesselman
24. A COMPARISON
SERIAL
Fetch/Store
Compute
PARALLEL
Fetch/Store
Compute/
communicate
Cooperative game
GRID
Fetch/Store
Discovery of Resources
Interaction with remote
application
Authentication /
Authorization
Security
Compute/Communicate
Etc
25. DISTRIBUTED COMPUTING VS. GRID
Grid is an evolution of distributed computing
Dynamic
Geographically independent
Built around standards
Internet backbone
Distributed computing is an "older term"
Typically built around proprietary
software and networks
Tightly coupled systems/organizations
26. WEB VS. GRID
Web - uniform naming, access to documents
Grid - uniform, high-performance access to computational resources
[Diagram: colleges/R&D labs, software catalogs, sensor nets, all reachable via http://]
27. IS THE WORLD WIDE WEB A
GRID ?
Seamless naming? Yes
Uniform security and Authentication? No
Information Service? Yes or No
Co-Scheduling? No
Accounting & Authorization ? No
User Services? No
Event Services? No
Is the Browser a Global Shell ? No
28. WHAT DOES THE WORLD WIDE WEB BRING TO
THE GRID ?
Uniform Naming
A seamless, scalable information service
A powerful new meta-data language: XML
XML will be a standard language for
describing information in the grid
SOAP - Simple Object Access Protocol
Uses XML for encoding and HTTP for transport
SOAP may become a standard RPC
mechanism for Grid services
Portal ideas
29. THE ULTIMATE GOAL
In the future I will not know or care
where my application will be executed,
as I will acquire and pay to use these
resources as I need them
30. WHY GRIDS?
Large-scale science and engineering are done
through the interaction of people, heterogeneous
computing resources, information systems, and
instruments, all of which are geographically and
organizationally dispersed.
The overall motivation for "Grids" is to facilitate
the routine interactions of these resources in order
to support large-scale science and engineering.
31. AN EXAMPLE VIRTUAL ORGANIZATION:
CERN'S LARGE HADRON COLLIDER
1800 Physicists, 150 Institutes, 32 Countries
100 PB of data by 2010; 50,000 CPUs?
32. GRID COMMUNITIES & APPLICATIONS: DATA GRIDS FOR HIGH ENERGY PHYSICS
Tier 0 - CERN Computer Centre: fed by the Online System (~100 MBytes/sec; raw detector rate ~PBytes/sec), with an Offline Processor Farm of ~20 TIPS
Tier 1 - regional centres (FermiLab ~4 TIPS; France, Italy, Germany): linked to CERN at ~622 Mbits/sec, or air freight (deprecated)
Tier 2 - centres of ~1 TIPS each (e.g., Caltech), linked at ~622 Mbits/sec
Tier 4 - institutes (~0.25 TIPS) with physics data caches (~100 MBytes/sec) and physicist workstations (~1 MBytes/sec)
There is a "bunch crossing" every 25 nsecs; there are 100 "triggers" per second; each triggered event is ~1 MByte in size
Physicists work on analysis "channels"; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server
1 TIPS is approximately 25,000 SpecInt95 equivalents
www.griphyn.org www.ppdg.net www.eu-datagrid.org
34. THE GRID: A BRIEF HISTORY
Early 90s: gigabit testbeds, metacomputing
Mid to late 90s: early experiments (e.g., I-WAY), academic software projects (e.g., Globus, Legion), application experiments
2002: dozens of application communities & projects; major infrastructure deployments; significant technology base (esp. the Globus Toolkit); growing industrial interest; Global Grid Forum: ~500 people, 20+ countries
35. HOW IT EVOLVES
Utility computing
Service grid
Data grid
Processing grid
Virtualization
Service-oriented
Open standard
36. EARLY ADOPTERS
Academic
Big science
Life science
Nuclear engineering
Simulation…
37. MARKET POTENTIAL
Financial services:
risk management and compliance
Automotive:
acceleration of product development
Petroleum:
discovery of oils
Source: “Perspectives on grid: Grid computing - next-generation distributed computing" Matt Haynos, 01/27/04
38. Criteria for a Grid:
Coordinates resources that are not subject to
centralized control.
Uses standard, open, general-purpose protocols
and interfaces.
Delivers nontrivial qualities of service.
e.g., response time, throughput, availability, security
Benefits
Exploit Underutilized resources
Resource load Balancing
Virtualize resources across an enterprise
Data Grids, Compute Grids
Enable collaboration for virtual organizations
39. WHY DO WE NEED GRIDS?
Many large-scale problems cannot be solved by a
single computer
Globally distributed data and resources
40. GRID APPLICATIONS
Data and computationally intensive applications:
This technology has been applied to computationally-
intensive scientific, mathematical, and academic problems
like drug discovery, economic forecasting, seismic analysis,
and back-office data processing in support of e-commerce
A chemist may utilize hundreds of processors to screen
thousands of compounds per hour.
Teams of engineers worldwide pool resources to analyze
terabytes of structural data.
Meteorologists seek to visualize and analyze petabytes of
climate data with enormous computational demands.
Resource sharing
Computers, storage, sensors, networks, …
Sharing always conditional: issues of trust, policy,
negotiation, payment, …
Coordinated problem solving
distributed data analysis, computation, collaboration, …
41. GRID TOPOLOGIES
• Intragrid
– Local grid within an organisation
– Trust based on personal contracts
• Extragrid
– Resources of a consortium of organisations
connected through a (Virtual) Private Network
– Trust based on Business to Business contracts
• Intergrid
– Global sharing of resources through the
internet
– Trust based on certification
42. COMPUTATIONAL GRID
"A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities."
- "The Grid: Blueprint for a New Computing Infrastructure", Kesselman & Foster
Example: Science Grid (US Department of Energy)
43. DATA GRID
A data grid is a grid computing system that deals with
data — the controlled sharing and management of
large amounts of distributed data.
Data Grid is the storage component of a grid environment.
Scientific and engineering applications require access to
large amounts of data, and often this data is widely
distributed. A data grid provides seamless access to the
local or remote data required to complete compute
intensive calculations.
Example :
Biomedical informatics Research Network (BIRN),
the Southern California earthquake Center (SCEC).
45. CLUSTER COMPUTING
Idea: put some PCs together and get them to
communicate
Cheaper to build than a mainframe
supercomputer
Different sizes of clusters
Scalable – can grow a cluster by adding more PCs
47. PEER-TO-PEER COMPUTING
Connect to other computers
Can access files from any computer on the
network
Allows data sharing without going through
central server
Decentralized approach also useful for Grid
50. DISTRIBUTED SUPERCOMPUTING
Combining multiple high-capacity resources on
a computational grid into a single, virtual
distributed supercomputer.
Tackle problems that cannot be solved on a
single system.
Examples: climate modeling, computational
chemistry
Challenges include:
Scheduling scarce and expensive resources
Scalability of protocols and algorithms
Maintaining high levels of performance across
heterogeneous systems
51. HIGH-THROUGHPUT COMPUTING
Uses the grid to schedule large numbers of
loosely coupled or independent tasks, with the
goal of putting unused processor cycles (e.g.,
from idle workstations) to work.
Unlike distributed supercomputing, the tasks
are loosely coupled.
Examples: parameter studies, cryptographic
problems
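The pattern above - many independent tasks farmed out to otherwise-idle processors - can be sketched with a local process pool standing in for grid nodes (purely illustrative; a real high-throughput scheduler such as Condor adds matchmaking, priorities, and fault handling):

```python
from multiprocessing import Pool

def task(params):
    """One independent unit of work, e.g. a single parameter-study run."""
    x, y = params
    return x * x + y  # placeholder computation

if __name__ == "__main__":
    # A parameter sweep: tasks share no state, so each can run anywhere.
    sweep = [(x, y) for x in range(10) for y in range(3)]
    with Pool(processes=4) as pool:       # 4 workers play "idle workstations"
        results = pool.map(task, sweep)   # the pool farms the tasks out
    print(len(results))  # 30
```

Because the tasks are loosely coupled, the only coordination needed is dispatching inputs and collecting outputs - exactly what makes cycle harvesting feasible.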
52. ON-DEMAND COMPUTING
Uses grid capabilities to meet short-term
requirements for resources that cannot
conveniently be located locally.
Models real-time computing demands.
Unlike distributed supercomputing, driven by
cost-performance concerns rather than absolute
performance.
Dispatch expensive or specialized
computations to remote servers.
53. COLLABORATIVE COMPUTING
Concerned primarily with enabling and
enhancing human-to-human interactions.
Enable shared use of data archives and
simulations
Applications are often structured in terms of a
virtual shared space.
Examples:
Collaborative exploration of large geophysical data sets
Challenges:
Real-time demands of interactive applications
Rich variety of interactions
54. DATA-INTENSIVE COMPUTING
The focus is on synthesizing new information
from data that is maintained in geographically
distributed repositories, digital libraries, and
databases.
Particularly useful for distributed data mining.
Examples:
High-energy physics experiments generate terabytes of distributed data and need complex queries to detect "interesting" events
Distributed analysis of Sloan Digital Sky Survey data
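Detecting "interesting" events across distributed repositories amounts to pushing a selection predicate to each site and merging only the matches; schematically (site names, event records, and the energy threshold here are invented for illustration):

```python
# Sketch: run a selection "at each site", merge only the matching events.
# The data and the trigger condition are invented placeholders.
site_data = {
    "cern":     [{"id": 1, "energy": 120.0}, {"id": 2, "energy": 14.0}],
    "fermilab": [{"id": 3, "energy": 98.5},  {"id": 4, "energy": 250.1}],
}

def query_site(events, min_energy):
    # Runs at the site: only interesting events cross the network.
    return [e for e in events if e["energy"] >= min_energy]

interesting = []
for site, events in site_data.items():
    interesting.extend(query_site(events, min_energy=100.0))

print(sorted(e["id"] for e in interesting))  # [1, 4]
```

Moving the predicate to the data, rather than the data to the predicate, is what keeps petabyte-scale analysis tractable.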
55. LOGISTICAL NETWORKING
Logistical networks focus on exposing storage
resources inside networks by optimizing the
global scheduling of data transport and data
storage.
Contrasts with traditional networking, which
does not explicitly model storage resources in the
network.
high-level services for Grid applications
Called "logistical" because of the analogy it bears
with the systems of warehouses, depots, and
distribution channels.
56. P2P COMPUTING VS GRID COMPUTING
Differ in target communities
A Grid system deals with a more complex, more
powerful, more diverse, and more highly
interconnected set of resources than P2P.
57. A TYPICAL VIEW OF A GRID ENVIRONMENT
Components: User, Resource Broker, Grid Information Service, Grid Resources
1. A user sends a computation- or data-intensive application to the Global Grid in order to speed up its execution.
2. A Resource Broker distributes the jobs in the application to the Grid resources based on the user's QoS requirements and the details of available Grid resources.
3. The Grid Information Service collects the details of the available Grid resources and passes the information to the Resource Broker.
4. Grid Resources (clusters, PCs, supercomputers, databases, instruments, etc.) in the Global Grid execute the user's jobs, and the computation result is returned.
(Flows: grid application, computational jobs, details of Grid resources, processed jobs, computation result)
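The broker step above can be sketched as a greedy matchmaker; everything here (resource names, the MIPS field, the speed-first QoS policy) is an illustrative assumption, not the algorithm of any particular broker:

```python
# Minimal broker sketch: assign each job to the fastest resource that
# still has a free slot, using details a Grid Information Service might
# report. All names and fields are hypothetical.
resources = [
    {"name": "cluster-A", "mips": 4000, "free_slots": 2},
    {"name": "pc-B",      "mips":  800, "free_slots": 1},
    {"name": "super-C",   "mips": 9000, "free_slots": 1},
]

def broker(jobs, resources):
    plan = []
    for job in jobs:
        # User QoS here is simply "fastest available" (MIPS ranking).
        candidates = [r for r in resources if r["free_slots"] > 0]
        best = max(candidates, key=lambda r: r["mips"])
        best["free_slots"] -= 1           # reserve the slot
        plan.append((job, best["name"]))
    return plan

print(broker(["job1", "job2", "job3"], resources))
# [('job1', 'super-C'), ('job2', 'cluster-A'), ('job3', 'cluster-A')]
```

Real brokers consult the information service continuously and weigh richer QoS terms (deadline, budget, data locality), but the match-then-dispatch loop is the same shape.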
58. GRID MIDDLEWARE
Grids are typically managed by gridware -
a special type of middleware that enables sharing and
manages grid components based on user requirements
and resource attributes (e.g., capacity, performance)
Software that connects other software components or
applications to provide the following functions:
Run applications on suitable available resources
– Brokering, Scheduling
Provide uniform, high-level access to resources
– Semantic interfaces
– Web Services, Service Oriented Architectures
Address inter-domain issues of security, policy, etc.
– Federated Identities
Provide application-level status monitoring and control
59. MIDDLEWARES
Globus – Chicago Univ
Condor – Wisconsin Univ – high-throughput computing
Legion – Virginia Univ – virtual workspaces, collaborative computing
IBP – Internet Backplane Protocol – Tennessee Univ – logistical networking
NetSolve – solving scientific problems in heterogeneous environments – high-throughput & data-intensive
60. TWO KEY GRID COMPUTING GROUPS
The Globus Alliance (www.globus.org)
Composed of people from:
Argonne National Labs, University of Chicago, University of
Southern California Information Sciences Institute,
University of Edinburgh and others.
OGSA/I standards initially proposed by the Globus Group
The Global Grid Forum (www.ggf.org)
Heavy involvement of Academic Groups and Industry
(e.g. IBM Grid Computing, HP, United Devices, Oracle,
UK e-Science Programme, US DOE, US NSF, Indiana
University, and many others)
Process
Meets three times annually
Solicits involvement from industry, research groups, and
academics
61. GRID USERS
Many levels of users
Grid developers
Tool developers
Application developers
End users
System administrators
62. SOME GRID CHALLENGES
Data movement
Data replication
Resource management
Job submission
63. SOME OF THE MAJOR GRID PROJECTS
Name | URL/Sponsor | Focus
EuroGrid, Grid Interoperability (GRIP) | eurogrid.org, European Union | Create tech for remote access to supercomputing resources & simulation codes; in GRIP, integrate with Globus Toolkit
Fusion Collaboratory | fusiongrid.org, DOE Office of Science | Create a national computational collaboratory for fusion research
Globus Project | globus.org; DARPA, DOE, NSF, NASA, Microsoft | Research on Grid technologies; development and support of the Globus Toolkit; application and deployment
GridLab | gridlab.org, European Union | Grid technologies and applications
GridPP | gridpp.ac.uk, U.K. eScience | Create & apply an operational grid within the U.K. for particle physics research
Grid Research Integration Development & Support Center | grids-center.org, NSF | Integration, deployment, support of the NSF Middleware Infrastructure for research & education
64. GRID IN INDIA - GARUDA
•GARUDA is India's Grid Computing
initiative connecting 17 cities across the
country.
•The 45 participating institutes in this
nationwide project include all the IITs and
C-DAC centers and other major institutes
in India.
65. GLOBUS GRID TOOLKIT
Open source toolkit for building Grid systems and
applications
Enabling technology for the Grid
Share computing power, databases, and other tools securely
online
Facilities for:
Resource monitoring
Resource discovery
Resource management
Security
File management
66. DATA MANAGEMENT IN GLOBUS
TOOLKIT
Data movement
GridFTP
Reliable File Transfer (RFT)
Data replication
Replica Location Service (RLS)
Data Replication Service (DRS)
67. GRIDFTP
High performance, secure, reliable data transfer protocol
Optimized for wide area networks
Superset of Internet FTP protocol
Features:
Multiple data channels for parallel transfers
Partial file transfers
Third party transfers
Reusable data channels
Command pipelining
68. MORE GRIDFTP FEATURES
Auto tuning of parameters
Striping
Transfer data in parallel among multiple senders and
receivers instead of just one
Extended block mode
Send data in blocks
Know block size and offset
Data can arrive out of order
Allows multiple streams
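The (offset, data) framing of extended block mode is what allows blocks to arrive out of order over multiple parallel streams; a toy reassembly loop (not the real GridFTP wire format) shows the idea:

```python
# Toy reassembly of out-of-order blocks, each carrying its own offset,
# in the spirit of GridFTP's extended block mode (framing is illustrative).
def reassemble(blocks, total_size):
    buf = bytearray(total_size)
    for offset, data in blocks:            # arrival order does not matter
        buf[offset:offset + len(data)] = data
    return bytes(buf)

# Blocks from two parallel streams, arriving interleaved and out of order:
blocks = [(6, b"world"), (0, b"hello "), (11, b"!")]
print(reassemble(blocks, 12))  # b'hello world!'
```

Because every block is self-describing, the receiver needs no sequencing state per stream, which is what makes striping across multiple senders practical.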
70. LIMITATIONS OF GRIDFTP
Not a web service protocol (does not employ
SOAP, WSDL, etc.)
Requires client to maintain open socket
connection throughout transfer
Inconvenient for long transfers
Cannot recover from client failures
72. RELIABLE FILE TRANSFER (RFT)
Web service with "job-scheduler" functionality for data movement
User provides source and destination URLs
Service writes job description to a database and moves
files
Service methods for querying transfer status
74. REPLICA LOCATION SERVICE (RLS)
Registry to keep track of where replicas exist on physical
storage system
Users or services register files in RLS when files created
Distributed registry
May consist of multiple servers at different sites
Increase scale
Fault tolerance
75. REPLICA LOCATION SERVICE (RLS)
Logical file name – unique identifier for contents of
file
Physical file name – location of copy of file on
storage system
User can provide logical name and ask for replicas
Or query to find logical name associated with
physical file location
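The catalog's job is a two-way mapping between logical file names and physical replica locations; a single-server sketch (a real RLS distributes this index across multiple sites for scale and fault tolerance, and the names below are invented) could be:

```python
# Single-site sketch of a replica catalog: logical file names map to the
# physical locations of their copies. All URLs are illustrative.
class ReplicaCatalog:
    def __init__(self):
        self.replicas = {}   # logical name -> set of physical locations

    def register(self, lfn, pfn):
        self.replicas.setdefault(lfn, set()).add(pfn)

    def lookup(self, lfn):
        """All known physical copies of a logical file."""
        return self.replicas.get(lfn, set())

    def logical_name(self, pfn):
        """Reverse query: which logical file lives at this location?"""
        for lfn, pfns in self.replicas.items():
            if pfn in pfns:
                return lfn
        return None

rls = ReplicaCatalog()
rls.register("lfn://run42/events.dat", "gsiftp://siteA/data/events.dat")
rls.register("lfn://run42/events.dat", "gsiftp://siteB/cache/events.dat")
print(len(rls.lookup("lfn://run42/events.dat")))            # 2
print(rls.logical_name("gsiftp://siteB/cache/events.dat"))  # lfn://run42/events.dat
```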
76. DATA REPLICATION SERVICE (DRS)
Pull-based replication capability
Implemented as a web service
Higher-level data management service built on top of RFT
and RLS
Goal: ensure that a specified set of files exists on a storage
site
First, query RLS to locate the desired files
Next, create a transfer request using RFT
Finally, register the new replicas with RLS
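The three DRS steps - query the catalog, transfer what is missing, register the new copies - can be sketched as one loop; the catalog structure and URLs are stand-ins for RLS queries and RFT transfer requests, not the actual DRS API:

```python
# Pull-based replication sketch: ensure a desired set of files exists
# locally. The catalog and URLs are illustrative stand-ins.
catalog = {  # logical name -> known physical replicas (RLS stand-in)
    "f1": ["gsiftp://remote/f1"],
    "f2": ["gsiftp://remote/f2", "gsiftp://local/f2"],
}

def replicate(wanted, local_prefix="gsiftp://local/"):
    transferred = []
    for lfn in wanted:
        replicas = catalog.get(lfn, [])
        if any(p.startswith(local_prefix) for p in replicas):
            continue                  # already present locally
        src = replicas[0]             # pick any source replica
        dst = local_prefix + lfn
        # (an RFT-style service would perform the actual transfer here)
        catalog[lfn].append(dst)      # register the new replica (RLS step)
        transferred.append((src, dst))
    return transferred

print(replicate(["f1", "f2"]))  # [('gsiftp://remote/f1', 'gsiftp://local/f1')]
```

Note the idempotence: files already present locally are skipped, so the loop can be rerun safely until the desired set exists on the storage site.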
77. CONDOR
Original goal: high-throughput computing
Harvest wasted CPU power from other machines
Can also be used on a dedicated cluster
Condor-G – Condor interface to Globus resources
78. CONDOR
Provides many features of batch systems:
job queueing
scheduling policy
priority scheme
resource monitoring
resource management
Users submit their serial or parallel jobs
Condor places them into a queue
Scheduling and monitoring
Informs the user upon completion
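Condor jobs are described in a submit description file; a minimal sketch of one (the executable, file names, and job count are placeholders, not from this deck) might look like:

```
# Hypothetical submit description file: 10 queued instances of a
# serial job, each with its own input and output (names are placeholders)
universe   = vanilla
executable = analyze
arguments  = input.$(Process).dat
output     = analyze.$(Process).out
error      = analyze.$(Process).err
log        = analyze.log
queue 10
```

Handed to condor_submit, this places ten jobs in the queue; Condor then matches them to available machines, monitors them, and informs the user upon completion, as described above.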
79. NIMROD-G
Tool to manage execution of parametric studies across
distributed computers
Manages experiment
Distributing files to remote systems
Performing the remote computation
Gathering results
User submits declarative plan file
Parameters, default values, and commands necessary for
performing the work
Nimrod-G takes advantage of Globus toolkit features
82. EARTH SYSTEM GRID
Provide climate studies scientists with access to
large datasets
Data generated by computational models –
requires massive computational power
Most scientists work with subsets of the data
Requires access to local copies of data
83. ESG INFRASTRUCTURE
Archival storage systems and disk storage systems at
several sites
Storage resource managers and GridFTP servers to
provide access to storage systems
Metadata catalog services
Replica location services
Web portal user interface
86. LASER INTERFEROMETER
GRAVITATIONAL WAVE
OBSERVATORY (LIGO)
Instruments at two sites to detect gravitational waves
Each experiment run produces millions of files
Scientists at other sites want these datasets on local storage
LIGO deploys RLS servers at each site to register local
mappings and collect info about mappings at other sites
87. LARGE SCALE DATA REPLICATION
FOR LIGO
Goal: detection of gravitational waves
Three interferometers at two sites
Generate 1 TB of data daily
Need to replicate this data across 9 sites to make
it available to scientists
Scientists need to learn where data items are,
and how to access them
89. LIGO SOLUTION
Lightweight data replicator (LDR)
Uses parallel data streams, tunable TCP windows, and
tunable write/read buffers
Tracks where copies of specific files can be found
Stores descriptive information (metadata) in a
database
Can select files based on description rather than filename
90. TERAGRID
NSF high-performance computing facility
Nine distributed sites, each with different
capabilities, e.g., computation power, archiving
facilities, visualization software
Applications may require more than one site
Data sizes on the order of gigabytes or terabytes
92. TERAGRID
Solution: Use GridFTP and RFT with front end
command line tool (tgcp)
Benefits of system:
Simple user interface
High performance data transfer capability
Ability to recover from both client and server software
failures
Extensible configuration
93. TGCP DETAILS
Idea: hide low level GridFTP commands from users
Copy file smallfile.dat in a working directory to another
system:
tgcp smallfile.dat tg-login.sdsc.teragrid.org:/users/ux454332
GridFTP command:
globus-url-copy -p 8 -tcp-bs 1198372
gsiftp://tg-gridftprr.uc.teragrid.org:2811/home/navarro/smallfile.dat
gsiftp://tg-login.sdsc.teragrid.org:2811/users/ux454332/smallfile.dat
95. THE HOURGLASS MODEL
Focus on architecture issues
Propose a set of core services as basic infrastructure
Used to construct high-level, domain-specific solutions (diverse)
Design principles:
Keep participation cost low
Enable local control
Support for adaptation
"IP hourglass" model: applications and diverse global services on top, a narrow waist of core services, local OS at the base
96. LAYERED GRID ARCHITECTURE
(BY ANALOGY TO INTERNET ARCHITECTURE)
Application
Fabric
“Controlling things locally”: Access
to, & control of, resources
Connectivity
“Talking to things”: communication
(Internet protocols) & security
Resource
“Sharing single resources”:
negotiating access, controlling use
Collective
“Coordinating multiple resources”:
ubiquitous infrastructure services,
app-specific distributed services
(Analogy: the Internet protocol architecture - Link, Internet, Transport, Application)
97. EXAMPLE: DATA GRID ARCHITECTURE
App: discipline-specific data grid application
Collective (App): coherency control, replica selection, task management, virtual data catalog, virtual data code catalog, …
Collective (Generic): replica catalog, replica management, co-allocation, certificate authorities, metadata catalogs
Resource: access to data, access to computers, access to network performance data, …
Connect: communication, service discovery (DNS), authentication, authorization, delegation
Fabric: storage systems, clusters, networks, network caches, …
99. SIMULATION TOOL
GridSim is a Java-based toolkit for modeling
and simulation of distributed resource
management and scheduling for conventional
Grid environments.
GridSim is based on SimJava, a general
purpose discrete-event simulation package
implemented in Java.
All components in GridSim communicate with
each other through message passing operations
defined by SimJava.
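SimJava-style discrete-event simulation advances a virtual clock by delivering timestamped messages in time order; a minimal sketch of that engine (in Python for brevity, with invented entity names - GridSim itself is Java-based, and this is not its API) is:

```python
import heapq

# Minimal discrete-event engine in the SimJava spirit: entities exchange
# timestamped messages and the virtual clock jumps to the next event.
events = []  # priority queue of (time, destination, message)

def send(time, dest, msg):
    heapq.heappush(events, (time, dest, msg))

def run(handlers):
    clock = 0.0
    while events:
        clock, dest, msg = heapq.heappop(events)  # earliest event first
        handlers[dest](clock, msg)                # deliver to the entity
    return clock

log = []
def broker(t, msg):
    log.append((t, "broker", msg))
    send(t + 5.0, "resource", "job1")   # dispatching takes 5 time units

def resource(t, msg):
    log.append((t, "resource", msg))

send(0.0, "broker", "submit")
final = run({"broker": broker, "resource": resource})
print(final)  # 5.0
```

No real time passes between events; the clock simply jumps, which is why such simulators can model hours of Grid scheduling in milliseconds.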
100. SALIENT FEATURES OF THE GRIDSIM
It allows modeling of heterogeneous types of
resources.
Resources can be modeled operating under space-
or time-shared mode.
Resource capability can be defined in the form of
MIPS (Million Instructions Per Second) benchmark
ratings.
Resources can be located in any time zone.
Weekends and holidays can be mapped depending
on a resource's local time to model non-Grid
(local) workload.
Resources can be booked for advance reservation.
Applications with different parallel application
models can be simulated.
101. SALIENT FEATURES OF THE GRIDSIM
Application tasks can be heterogeneous and they can
be CPU or I/O intensive.
There is no limit on the number of application jobs
that can be submitted to a resource.
Multiple user entities can submit tasks for execution
simultaneously in the same resource, which may be
time-shared or space-shared. This feature helps in
building schedulers that can use different market-
driven economic models for selecting services
competitively.
Network speed between resources can be specified.
It supports simulation of both static and dynamic
schedulers.
Statistics of all or selected operations can be recorded
and they can be analyzed using GridSim statistics
analysis methods.
102. A MODULAR ARCHITECTURE FOR THE GRIDSIM PLATFORM AND COMPONENTS
Application, user, and Grid-scenario inputs and results: application config, resource config, user requirements, Grid scenario, output
Grid resource brokers or schedulers
GridSim Toolkit: application modeling, resource entities, information service, job management, resource allocation, statistics
Resource modeling and simulation: single CPU, SMPs, clusters, load, network, reservation
SimJava / Distributed SimJava: basic discrete-event simulation infrastructure
Virtual machine over distributed resources: PCs, workstations, SMPs, clusters