Grids and the Harmony and Prosperity of Civilizations
"Beijing Forum" (2004): The Harmony and Prosperity of Civilizations
http://www.beijingforum.org/english/index.htm
Geoffrey Fox
Professor of Computer Science, Informatics, Physics
Pervasive Technology Laboratories, Indiana University, Bloomington IN 47401
[email_address]
http://www.infomall.org
Moore's law predicts that electronic components will improve in performance by a factor of roughly 100 every ten years (doubling every 18 months)
Networks are increasing in performance every year much faster than this as more and better technology is deployed (Gilder’s law)
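The ten-year factor quoted above follows directly from the 18-month doubling period; a quick back-of-envelope check (a sketch, using only the figures in the bullets above):

```python
# Moore's law as stated above: performance doubles every 18 months.
# Over ten years (120 months) that compounds to roughly a factor of 100.
months = 120
doubling_period = 18
moore_factor = 2 ** (months / doubling_period)
print(f"Ten-year Moore's-law improvement: about {moore_factor:.0f}x")
```

The same compounding logic applies to Gilder's law for networks, only with a shorter doubling period, which is why network capacity outruns processor performance.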
Last-mile versus backbone performance
Latency versus bandwidth
Cable, DSL, satellite, optical fiber, and wireless are competing to provide high-speed connectivity to the citizens of the world
By 2006, GTRN (Global Terabit Research Network) aims at a 1000:1000:100:10:1 gigabit performance ratio across international backbone, national, organization, optical desktop, and copper desktop links.
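Reading the 1000:1000:100:10:1 ratio as absolute link speeds in gigabits per second gives the following targets (a sketch; the gigabit unit and link categories come from the bullet above):

```python
# GTRN 2006 targets, in gigabits per second, from the 1000:1000:100:10:1 ratio.
targets_gbps = {
    "international backbone": 1000,  # 1 terabit/s, hence "Global Terabit"
    "national": 1000,
    "organization": 100,
    "optical desktop": 10,
    "copper desktop": 1,
}
for link, gbps in targets_gbps.items():
    print(f"{link}: {gbps} Gbit/s")
```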
We view the "ordinary" Internet as providing support for the huge number of low-complexity interactions that are the dominant traffic
We superimpose multiple Grids on top of this; each Grid supports a high-value, high-complexity interaction
Grids built from Web Services communicating through an overlay network
Grids provide the special quality of service (security, performance, fault-tolerance) and customized services needed for “distributed complex enterprises”
We need to work with the Web Service community as it debates the 60 or so proposed Web Service specifications
Use the Web Services Interoperability (WS-I) profiles as "best practice"
Must add further specifications to support high performance
Database "Grid Services" for the N+N case
Streaming support for the M² case
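The full WSDL/SOAP tooling is heavyweight; as a minimal, hypothetical illustration of a Grid service interface invoked remotely, here is a sketch using Python's standard-library XML-RPC as a stand-in for a WSDL-described Web Service (the `lookup` service name and its data are invented for illustration):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Hypothetical database "Grid Service": a reference-data lookup exposed remotely.
def lookup(key):
    reference_data = {"engine-42": "healthy"}
    return reference_data.get(key, "unknown")

# Bind to an ephemeral local port and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lookup, "lookup")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client invokes the service interface over HTTP, as a Grid client would.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.lookup("engine-42")
server.shutdown()
print(result)
```

A real Grid service would add the qualities of service listed above (security, performance, fault-tolerance) on top of such an interface.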
Layered Architecture for Web Services and Grids

Service Internet (top to bottom):
- Application-Specific Grids
- Generally Useful Services and Grids
- Higher-Level Services: Workflow (WSFL/BPEL)
- Service Management ("Context etc.")
- Service Discovery (UDDI) / Information
- Service Context
- Service Internet Transport Protocol
- Service Interfaces (WSDL)
- Base Hosting Environment

Bit-level Internet (OSI stack, top to bottom):
- Application protocols: HTTP, FTP, DNS, …
- Presentation: XDR, …
- Session: SSH, …
- Transport: TCP, UDP, …
- Network: IP, …
- Data Link / Physical
Supporting human decision making with a network of at least four large computers, perhaps six or eight small computers, and a great assortment of disc files and magnetic tape units - not to mention remote consoles and teletype stations - all churning away. (Licklider 1960)
Coordinated resource sharing and problem solving in dynamic multi-institutional virtual organizations
Infrastructure that will provide us with the ability to dynamically link together resources as an ensemble to support the execution of large-scale, resource-intensive, and distributed applications.
Realizing the thirty-year dream of science fiction writers who have spun yarns featuring worldwide networks of interconnected computers that behave as a single entity.
e-Business captures an emerging view of corporations as dynamic virtual organizations linking employees, customers and stakeholders across the world.
The growing use of outsourcing is one example
e-Science is the similar vision for scientific research with international participation in large accelerators, satellites or distributed gene analyses.
The Grid integrates the best of the Web, traditional enterprise software, high-performance computing and peer-to-peer systems to provide the information technology infrastructure for e-moreorlessanything.
A deluge of data of unprecedented and inevitable size must be managed and understood.
People , computers , data and instruments must be linked.
On demand assignment of experts, computers, networks and storage resources must be supported
Analyzing data from the LHC is an "N+N Grid" of huge scale
30,000 CPUs processing LHC data simultaneously
In a few years, over 100 petabytes of data
Physics discovery is an M² Grid with perhaps M = 10
Lots of such groups working simultaneously
Note the hierarchical structure:
M = 10 in a physics analysis group
M = 2,000 in one LHC experiment
M = 10,000 physicists in particle physics
M = 100,000 total physicists
M = billions of people
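The M² name reflects the interaction count: a fully connected group of M participants has M(M-1)/2 pairwise links, which is why M² Grids suit small collaborating groups while the larger levels of the hierarchy above need different structure. A quick computation over the M values just listed:

```python
def pairwise_links(m):
    # Fully connected group of m participants: m*(m-1)/2 pairwise links.
    return m * (m - 1) // 2

hierarchy = [
    ("physics analysis group", 10),
    ("one LHC experiment", 2_000),
    ("particle physics", 10_000),
    ("all physicists", 100_000),
]
for label, m in hierarchy:
    print(f"M = {m:>7,} ({label}): {pairwise_links(m):,} pairwise links")
```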
DAME: Distributed Aircraft Maintenance Environment (Rolls Royce and the UK e-Science Program)
Several small M² Grids, one for each aircraft, back-ended by an N+N Grid of reference data for all engines
[Diagram: in-flight engine data reaches the Engine Health (Data) Center and the Airline Maintenance Centre via a ground station and a global network such as SITA, with notification by Internet, e-mail, or pager]
~1 gigabyte per aircraft per engine per transatlantic flight; ~5,000 engines
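The per-flight figures imply a substantial aggregate stream; a back-of-envelope estimate (a sketch using only the two numbers quoted on the slide, ~1 GB per engine per transatlantic flight and ~5,000 engines):

```python
# Figures from the DAME slide above.
gb_per_engine_per_flight = 1   # ~1 gigabyte per engine per transatlantic flight
engines = 5_000                # ~5,000 engines monitored

# If every engine in the fleet logs one such flight, the aggregate is:
fleet_tb = gb_per_engine_per_flight * engines / 1_000
print(f"~{fleet_tb:.0f} TB per fleet-wide flight cycle")
```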