Grids and Clouds: Tackle the Limits of ICT. Simon C. Lin ( 林誠謙 ), Academia Sinica Grid Computing, Taipei, Taiwan 11529, [email_address]. 28 April 2009, TrendMicro Clouds Seminar
Future of Computing? Computing is about the application of Information and Communication Technology. "Cambrian explosions" in ICT: rich diversity in services, middleware, commodity hardware, devices and sensors.
Water, water, everywhere, nor any drop to drink -- S. T. Coleridge, 1797
Actual Internet Map. It looks amazingly similar to a neural net in terms of connectivity. It's a Scale-Free Network. Everything over IP, Always-On, Everywhere.
The Internet Hourglass [email_address]
Application layer: Voice, Video, P2P, Email, YouTube, ...
Transport protocols: TCP, UDP, SCTP, ICMP, ...
IP -- the waist: Everything on IP (homogeneous networking abstraction); IP on Everything (IPv6, IPvX)
Link layer: Ethernet, WiFi (802.11), ATM, SONET/SDH, Frame Relay, modem, ADSL, Cable, Bluetooth, ...
Changing/updating the Internet core is difficult or impossible! (e.g. Multicast, Mobile IP, QoS, ...) Disruptive approaches need a disruptive architecture.
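As a concrete illustration of the narrow waist, the sketch below (Python; the host name is a placeholder, not a service from the talk) shows an application written purely against the TCP/IP socket API: nothing in it changes whether the packets travel over Ethernet, WiFi, ADSL or Bluetooth.

```python
# Minimal sketch of the "narrow waist": the application speaks only TCP/IP
# and is oblivious to the link layer underneath.
import socket

def fetch(host: str, port: int = 80) -> bytes:
    # create_connection resolves the name and opens a TCP stream; everything
    # below IP (link layer, physical medium) is invisible to this code.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        return s.recv(4096)

if __name__ == "__main__":
    print(fetch("example.com").decode(errors="replace"))
```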
Heterogeneity. We have to extend the waist and host more paradigms. Future networks have to scale in size AND functionality; enable network evolution; Federation instead of homogeneous abstraction. [email_address] CLEAN SLATE: we need a framework that is able to host multiple networks -- ANA PROJECT (J02): a generic framework hosting IP, Sensor, Home NW, Pub/Sub, ???
Vision: "The Internet only just works"! Explore the possible Future(s) of the Internet. The future Internet might be Polymorphic: Multiple Federated Internets (Content, Wireless, DTN, Things, ...). What is the foundation of this future? Is there a Network Science? Experimentally driven research & testing. The Disappearing Internet! Serge Fdida, ICT'08, Lyon, Nov. 08
There are Limits to Everything ...
Exponential Growth World
Gilder's Law: optical fibre bandwidth (bits per second), 32X in 4 years (doubling every ~9 months)
Storage Law: data storage density (bits per sq. inch), 16X in 4 years (doubling every ~12 months)
Moore's Law: chip capacity (# transistors), 5X in 4 years (doubling every ~18 months)
(Chart: performance per dollar spent vs. number of years. Source: "Triumph of Light", Scientific American, Gary Stix, January 2001)
Rock's Law: the cost of a leading-edge plant doubles every three years (now at 3-4 billion USD)
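The doubling times and the 4-year multipliers are linked by simple exponentiation; a quick check (Python, figures from the slide) makes the relationship explicit. Note the slide rounds: a 9-month doubling actually gives 2^(48/9) ≈ 40X, so its 32X corresponds to a ~9.6-month doubling time.

```python
# Sanity check: a quantity with doubling time d months grows 2**(48/d)-fold
# over 4 years (48 months). Doubling times are those quoted on the slide.
for name, d in [("Optical fibre (Gilder)", 9),
                ("Storage", 12),
                ("Chip capacity (Moore)", 18)]:
    print(f"{name}: 2**(48/{d}) = {2 ** (48 / d):.0f}x in 4 years")
```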
The Impact of Scaling
All positive benefits bundled together: reduced component cost, increased functionality, higher performance, reduced component size, smaller energy consumption.
The Information Revolution: rapid expansion of the use of integrated circuits in all products and services; wide penetration of digital technologies in all areas of society and economy.
(Chart: quality-adjusted price of semiconductors. Source: Aizcorbe, Oliner, Sichel (2006), "Shifting trends in semiconductor prices and the pace of technical progress", Finance and Economics Discussion Series, Federal Reserve Board, Washington, D.C. © I. Tuomi, 25 Nov. 2008)
The Data Deluge
A large novel: 1 MByte; the Bible: 5 MBytes; a Mozart symphony (compressed): 10 MBytes; a digital mammogram: 100 MBytes; the OED on CD: 500 MBytes; a digital movie (compressed): 10 GBytes.
Annual production of refereed journal literature (~20k journals, ~2M articles): 1 TByte; Library of Congress: 20 TBytes; the Internet Archive, 10B pages (from 1996 to 2002): 100 TBytes.
Annual production of information (print, film, optical & magnetic media): 1,500 to 3,000 PBytes. From 2008, annual production will be greater than the total available storage space!
All worldwide telephone communication in 2002: 19.3 ExaBytes. Global Internet traffic will reach EB order next year!
Moore's Law enables instruments and detectors to generate unprecedented amounts of data in all scientific disciplines <<< Limit and Challenge!
Moore’s Law in HPC Machines Limit Soon! >>>
100-1000x Limit! >>>
Power Consumption Challenge for More Computing Power  <<< Limit!
<<< Arch. Limit!
Tera -> Peta Bytes (Jim Gray, "Distributed Computing Economics", Microsoft Research, MSR-TR-2003-24, May 2003 -- approximately correct)
1 Terabyte: RAM time to move: 15 minutes | 1 Gb WAN move time: 10 hours ($1,000) | Disk cost: 7 disks = $5,000 (SCSI) | Disk power: 100 Watts | Disk weight: 5.6 kg | Disk footprint: inside machine
1 Petabyte: RAM time to move: 2 months | 1 Gb WAN move time: 14 months ($1 million) | Disk cost: 6,800 disks + 490 units + 32 racks = $7 million | Disk power: 100 kilowatts | Disk weight: 33 tonnes | Disk footprint: 60 m²
<<< Limit to Scaling!
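Gray's WAN rows follow from one formula. A minimal sketch, assuming an effective link utilisation chosen to reproduce his "10 hours" figure (the nominal 1 Gb/s line rate alone would give only ~2.2 hours for a terabyte):

```python
# Back-of-envelope check of Jim Gray's numbers (MSR-TR-2003-24): time to
# move 1 TB vs 1 PB over a 1 Gb/s WAN. The utilisation factor is an
# assumption tuned to match his "10 hours", not a figure from the report.
def wan_move_time_hours(n_bytes, line_rate_bps=1e9, utilisation=0.22):
    effective_bps = line_rate_bps * utilisation
    return n_bytes * 8 / effective_bps / 3600

tb, pb = 1e12, 1e15
print(f"1 TB: {wan_move_time_hours(tb):.0f} hours")            # ~10 hours
print(f"1 PB: {wan_move_time_hours(pb) / 24 / 30:.0f} months")  # ~14 months
```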
From Optimizing Architecture to Optimizing Organisation High-performance computing has moved from being a problem of  optimizing the architecture  of an individual supercomputer to one of  optimizing the organization  of large numbers of ordinary computers operating in parallel. Scott Kirkpatrick, SCIENCE  Vol. 299, 2003, p668
Simulated Time Can Get Rough <<< Proc. # Limit !
Small World Can Cure ... This is similar to the roughening of surfaces in crystalline-material growth, or of a liquid surface in a very weak gravitational field. Roughening becomes a problem when measurements must be continuously extracted from the complex simulation. Temporal roughening can be eliminated by synchronising with a SuperNode in a Small World.
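The roughening-and-cure story can be made concrete with a toy model. The sketch below is a loose adaptation of the conservative update rule studied by Korniss et al. (Science 299, 2003), not code from the talk: each processor's virtual time advances only when it is not ahead of its ring neighbours, and an occasional extra check against a fixed random partner plays the role of the small-world synchronisation. The sequential sweep is an approximation of the parallel update; parameters are illustrative.

```python
# Toy model of the virtual-time surface in conservative parallel
# discrete-event simulation: on a plain ring the surface width grows
# (KPZ-like roughening); occasional small-world checks keep it flat.
import random

def width(n=512, steps=1500, p_sw=0.0, seed=7):
    rng = random.Random(seed)
    tau = [0.0] * n                                  # local virtual times
    partner = [rng.randrange(n) for _ in range(n)]   # fixed random links
    for _ in range(steps):
        for i in range(n):
            ok = tau[i] <= tau[i - 1] and tau[i] <= tau[(i + 1) % n]
            if ok and rng.random() < p_sw:
                ok = tau[i] <= tau[partner[i]]       # small-world sync check
            if ok:
                tau[i] += rng.expovariate(1.0)       # advance local time
    mean = sum(tau) / n
    return (sum((t - mean) ** 2 for t in tau) / n) ** 0.5

print("surface width, ring only:       ", round(width(), 2))
print("surface width, with small world:", round(width(p_sw=0.1), 2))
```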
How Large Could the Computing Be?
EGEE (Enabling Grids for E-sciencE) -- Bob Jones, OGF25/User Forum, 2-6 March 2009
~280 sites, 45 countries, >80,000 CPUs, >25 PetaBytes, >14,000 users, >250,000 jobs/day
Quality: monitoring via Nagios (distributed via official releases, configured through YAIM, integrated with other tools); gradual implementation of Service Level Agreements with sites. Result: 85% of sites are now above the 75% availability threshold.
Geographical expansion: production sites all across Asia (Australia, China, India, Japan, Korea, Malaysia, Pakistan, Taiwan, Thailand); in certification: Indonesia, Philippines, Vietnam.
 
 
 
 
 
Volunteer Computing. SETI@home: 1,000,000 CPUs -- a "poor man's Grid". The BOINC (Berkeley Open Infrastructure for Network Computing) platform launched in 2003: a general-purpose open-source platform for the client-server model of distributed computing. >50 volunteer computing projects today, in a wide range of sciences; not all use BOINC.
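The client-server model named here is easy to caricature in a few lines. The sketch below is a minimal, hypothetical work-unit cycle with the redundancy-and-quorum validation that volunteer platforms rely on to tolerate faulty or malicious hosts; the class and method names are illustrative, not the real BOINC protocol.

```python
# Minimal sketch of volunteer computing: a server hands out independent
# work units, untrusted clients return results, and redundant assignment
# plus majority voting validates them.
from collections import Counter

class Server:
    def __init__(self, workunits, redundancy=3):
        # Each work unit is queued `redundancy` times for independent hosts.
        self.queue = [(wu_id, data) for wu_id, data in enumerate(workunits)
                      for _ in range(redundancy)]
        self.results = {}

    def get_work(self):
        return self.queue.pop() if self.queue else None

    def submit(self, wu_id, result):
        self.results.setdefault(wu_id, []).append(result)

    def validated(self, wu_id):
        votes = Counter(self.results.get(wu_id, []))
        best, n = votes.most_common(1)[0] if votes else (None, 0)
        return best if n >= 2 else None   # quorum: 2 agreeing replicas

def client(server, compute):
    task = server.get_work()
    if task:
        wu_id, data = task
        server.submit(wu_id, compute(data))

server = Server(workunits=[10, 20, 30])
for _ in range(9):                        # nine volunteer hosts
    client(server, compute=lambda x: x * x)
print([server.validated(i) for i in range(3)])   # [100, 400, 900]
```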
Folding@home: >1 petaflop. Sony pre-installs Folding@home on the PlayStation 3 -- the user can choose to run it in the background. 50k PS3s made the first distributed petaflop machine (Guinness World Record, Sept 2007). Often of Philanthropic Nature.
LHC@home: >3000 CPU-years, >60K Volunteers. The Fortran program SixTrack simulates LHC proton beam stability for 10^5-10^6 orbits, including real magnet parameters and beam-beam effects, to predict stable operation conditions. Citizen Cyber-science Projects.
Volunteer Thinking
Cost of 1 TFLOPS-year
Cluster: $145K (computing hardware; power/AC infrastructure; network hardware; storage; power; sysadmin)
Cloud: $1.75M
Volunteer: $1K-$10K (server hardware; sysadmin; web development)
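The cloud line can be reconstructed as rent-enough-instances arithmetic. The figures in this sketch (hourly price, sustained GFLOPS per instance) are illustrative assumptions chosen to land near the slide's $1.75M, roughly a 2009-era small instance; they are not quoted rates from the talk or any provider.

```python
# Back-of-envelope for the cloud cost of sustaining 1 TFLOPS for a year.
HOURS_PER_YEAR = 24 * 365

def tflops_year_cost(usd_per_hour=0.10, sustained_gflops=0.5):
    instances = 1000 / sustained_gflops          # instances for 1 TFLOPS
    return instances * usd_per_hour * HOURS_PER_YEAR

print(f"~${tflops_year_cost():,.0f} per TFLOPS-year")   # ~$1.75M, as on the slide
```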
Performance
Current: 500K people, 1M computers; 6.5 PetaFLOPS (3 from GPUs, 1.4 from PS3s)
Potential: 1 billion PCs today, 2 billion in 2015; GPUs approaching 1 TFLOPS
How to get 1 ExaFLOPS: 4M GPUs * 0.25 availability
How to get 1 Exabyte: 10M PC disks * 100 GB
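The two "how to get" lines are straight multiplication; spelled out, using only the slide's numbers:

```python
# The slide's aggregation arithmetic, made explicit.
eflops  = 4e6 * 1e12 * 0.25   # 4M GPUs x ~1 TFLOPS each x 25% availability
exabyte = 10e6 * 100e9        # 10M PC disks x 100 GB each
print(f"{eflops:.0e} FLOPS")  # 1e+18 -> 1 ExaFLOPS
print(f"{exabyte:.0e} bytes") # 1e+18 -> 1 Exabyte
```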
History of volunteer computing (timeline, 1995 to now)
Applications: distributed.net, GIMPS (~1995); SETI@home, Folding@home (~2000); Climateprediction.net, [email_address], IBM World Community Grid, [email_address], [email_address], ... (2005 onwards)
Middleware: academic: Bayanihan, Javelin, ...; commercial: Entropia, United Devices, ...; BOINC (2005 onwards)
ASGC Introduction: Large Hadron Collider (LHC); Avian Flu Drug Discovery; Grid Application Platform; a Worldwide Grid Infrastructure; Asia Pacific Regional Operation Centre (>280 sites, 45 countries; >80,000 CPUs, >25 PetaBytes; >14,000 users, >200 VOs; >250,000 jobs/day). Best Demo Award of EGEE'07; Lightweight Problem Solving Framework. 1. Most reliable T1: 98.83%. 2. Very highly performing and most stable site in CCRC08. Max CERN/T1-ASGC point-to-point inbound: 7.3 Gbps.
Two complete Encyclopaedia Britannica sets per second from Europe to Taiwan -- a historic record. Max CERN/T1->ASGC inbound: 7.3 Gbps, outbound: 5.9 Gbps.
 
ASGC Profile
Mission: building sustainable research and collaboration infrastructure.
Objectives: support research by e-Science, for data-intensive sciences and applications that require cross-disciplinary distributed collaboration.
Milestones: operational since the deployment of LCG0 in 2002; took on Tier-1 Centre responsibility in 2005. The federated Taiwan Tier-2 centre -- the Taiwan Analysis Facility (TAF) -- is also collocated at ASGC. Representative of the EGEE e-Science Asia Federation, having joined EGEE in 2004. Providing Asia Pacific Regional Operation Centre (APROC) services to Asia Pacific WLCG/EGEE sites since 2005. Initiated the Avian Flu Drug Discovery Project in collaboration with EGEE in 2006. Start of the EUAsiaGrid Project in April 2008.
e-Science Support @ ASGC. Strategy: collaborate by sharing goals, resources, solutions, ...; resource integration for effective utilization, enabled by e-Infrastructure services; scalable resources & flexible solutions, working with users. Internet-wide collaboration environment; high-performance networking. Service Grid + Desktop Grid + Cloud --> new technology. Target applications: HEP, Life Science, Earth Science, Climate Change, and long-term preservation.
Resource Status for WLCG (CPU in KSI2K, Disk and Tape in TB)
2007: CPU 1,700 | Disk 900 | Tape 800
2008: CPU 3,400 | Disk 1,500 | Tape 1,300
2009: CPU 5,000 | Disk 3,000 | Tape 3,000
2010: CPU 7,000 | Disk 3,500 | Tape 3,500
2011: CPU 8,000 | Disk 4,000 | Tape 4,500
2012: CPU 9,000 | Disk 4,500 | Tape 4,500
2013: CPU 10,000 | Disk 4,500 | Tape 4,500
Unlabelled slide values: 5,500 / 2,500 / 1,300
ASGC is the Asia Pacific Regional Operations Centre for EGEE and WLCG
Mission: provide deployment support facilitating Grid expansion; maximize the availability of Grid services. Supports EGEE sites in Asia Pacific since April 2005: 26 production sites in 10 countries, 12 VOs; over 5,300 CPU cores and 3 PetaBytes of disk space by end of 2008. Runs the ASGCCA Certification Authority since 2003. Middleware installation support; production resource centre certification. Operations support: monitoring, diagnosis and troubleshooting, problem tracking, security.
Networking Backbone in Asia (GLIF)
WLCG/EGEE/EUAsiaGrid Partners in Asia Pacific (network map)
Sites: Taiwan-ASGC plus TW-FTT, TW-IPAS, TW-NIU, TW-NTCU, TW-THU, TW-NCUCC, TW-NCUHEP (via TWAREN/TANET); Tokyo-LCG2, KEK1, KEK2 (JP); KNU, KISTI-HEP, KISTI-GCRT (KR); BEIJING-LCG2, BEIJING-PKU, SDU-LCG2 (CN); HK-HKU (HK); SG-NGO (SG); AU-ATLAS, AU-UNIMELB-LCG2 (AU); PK-NCP, PK-PAKGRID (PK); IN-VECC, IN-TIFR (IN); MY-MIMOS (MY); TH-NETEC, HAII (TH); PH-AdMU, ASTI (PH); VN-IAMI (VN)
Networks: SINET, CERNET, CSTNET, KREONET, WIDE, AARNET, HARNET, HKIX, I2/GN2, IP Transit; links from STM-4 to STM-64; landing points TW, JP, HK, SG, US, NL
e-Science Application Support -- Design of the collaboration model
High Energy Physics: ATLAS and CMS Tier-1 Centre and APROC services, Geant4, and other experiments.
Life Science: application support of the gLite-enabled Virtual Screening Service; improve AutoDock drug-discovery performance & scalability with DIANE2; initiate Avian Flu Data Challenge II; refine the project within EUAsiaGrid.
Disaster Mitigation: provide wave-simulation applications for earthquakes and tsunamis, for mitigation planning and evaluation; build up GEOGrid on earthquakes in cooperation with the Taiwan Earthquake Data Center; survey of related EGEE applications (e.g., DEGREE) and collaboration.
Carbon Flux Application: promote the Grid Application Platform (GAP) to local ecologists; migrate Carbon Flux data and integrate the SRM-SRB interface in the GAP system and gLite.
Grid Portal Technology: develop grid portal technology, a workflow engine and data management services.
EGEE Biomed DC II -- Large-Scale Virtual Screening for Drug Design. Initiated by ASGC, Taiwan and the WISDOM Project! 137 CPU-years in 4 weeks; 5,000 CPUs mobilised!
Biomedical goal: accelerating the discovery of novel potent inhibitors through minimizing non-productive trial-and-error approaches; improving the efficiency of high-throughput screening.
Grid goal: massive throughput -- reproducing a grid-enabled in silico process (exercised in DC I) with a shorter preparation time; interactive feedback -- evaluating an alternative lightweight grid application framework (DIANE).
(Images: N1, H5. Credit: Y-T Wu)
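DIANE's contribution here is the master-worker "task farm" shape: the master splits the compound library into docking tasks and pulls results back as workers finish, which is what enables the interactive feedback contrasted above with pure batch submission. A minimal sketch of that pattern follows; the scoring function is a placeholder and the names are illustrative, not the DIANE or AutoDock API.

```python
# Master-worker task farm: submit many independent docking tasks and
# stream results back as they complete, so partial feedback is available
# long before the whole library is screened.
from concurrent.futures import ProcessPoolExecutor, as_completed

def dock(compound_id: int) -> tuple[int, float]:
    score = -(compound_id % 97) / 10.0   # placeholder for a docking score
    return compound_id, score

def screen(compounds, workers=4, top=5):
    results = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(dock, c) for c in compounds]
        for fut in as_completed(futures):    # results arrive incrementally
            results.append(fut.result())
    return sorted(results, key=lambda r: r[1])[:top]   # best (lowest) scores

if __name__ == "__main__":
    print(screen(range(1000)))
```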
Bio-Portal and Virtual Screening Services. Best Demo Award of the EGEE'07 Conference. Total computing power used: 137 CPU-years; 8 variants against 400k compounds.
Large Hadron Collider LHC@CERN for High Energy Physics
Simulation of 2004 Earthquake in Taipei Basin, w/o Geological Structure consideration Earthquake Wave Simulation
Cloud Computing
 
 
 
 
 
 
 
 
 
 
 
Is It Cloud Computing? It is potentially a form of Grid Computing. It is often associated with collections of virtual machines, aiming at scales 100 times larger than today's data centres. Usually, clouds do not cross administrative domains. An opportunity to change from provider-focused to user-focused. Pushed by vendors.
Is It Buzzword Evolution? One distributed-computing buzzword per decade: Metacomputing (~1987, L. Smarr); Grid computing (~1997, I. Foster, C. Kesselman); Cloud computing (~2007, E. Schmidt?)
Cloud hype 1
Cloud hype 2 It’s stupidity. It’s worse than stupidity: it’s a marketing hype campaign. Somebody is saying this is inevitable — and whenever you hear somebody saying that, it’s very likely to be a set of businesses campaigning to make it true. Richard Stallman, quoted in The Guardian, September 29, 2008 The interesting thing about Cloud Computing is that we’ve redefined Cloud Computing to include everything that we already do. . . . I don’t understand what we would do differently in the light of Cloud Computing other than change the wording of some of our ads. Larry Ellison, quoted in the Wall Street Journal, September 26, 2008
Cloud hype 3 (trend chart legend: --- Cloud computing, --- Grid computing)
Cloud hype 4. Illustration by David Simonds, The Economist, April 09. The Open Cloud Manifesto: "The industry needs an objective, straightforward conversation about how this new computing paradigm will impact organizations, how it can be used with existing technologies, and the potential pitfalls of proprietary technologies that can lead to lock-in and limited choice." OpenCloudManifesto.org
Why Cloud Computing? Because it makes sense!!! (RESERVOIR project, OGF25/EGEE User Forum, Catania, Italy, 2-6 March 2009, www.reservoir-fp7.eu)
Innovative business models require a utility-like intelligent infrastructure that embraces complexity in order to succeed in the competitive, fast-paced, services-based global economy.
ECONOMICS: small up-front investment, billed by consumption. Reduction of TCO lets clients pursue operational efficiency and productivity.
RISK MANAGEMENT: a small up-front commitment lets clients try many new services quickly and choose; this reduces the risk of big failures and lets clients be innovative.
TIME TO MARKET: adopt new services quickly for pilot usage and scale rapidly to global reach.
INFORMATION SOCIETY: value-added information generated by collection and analysis of massive amounts of unstructured data.
UBIQUITOUS SOCIETY: accessible via a heterogeneous set of devices (PC, phone, telematics...).
SaaS / PaaS / IaaS
Why now? Broadband networks. Adoption of Software as a Service (Salesforce.com). Web 2.0 mindset. Fast penetration of virtualization technology for x86-based servers. Virtual appliances: general-purpose on-line virtual machines that can do anything.
Is it really new? Massive-scale resource sharing over the Internet sounds a lot like grid computing, yet...
Grid: highly specialized resources that need to be shared by thousands [of researchers]; large data sets; sharing is a goal; in many cases, providers are also consumers; interoperable by design.
Cloud: reducing CAPEX, OPEX and time to market; millions of users who share to save, not for the sake of sharing; providers want market share and customer lock-in; the need for interoperability is driven by customers.
Definitions 1 (diagram: Enterprise Grid, Clouds, Service Grid)
Definitions 2. Cloud computing is an on-demand service offering a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or software services) in a pay-per-use model. Clouds are usually commercial and use proprietary interfaces. Service grids are systems that federate, share and coordinate distributed resources from different organizations which are not subject to centralized control, using standard, open, general-purpose protocols and interfaces to deliver non-trivial qualities of service. Service grids are used by Virtual Organisations: thematic groups of users crossing administrative and geographical boundaries.
Definitions 3 Contract harvesting (Cloud) Cooperative harvesting (Service Grid)
Grids vs. Clouds 1
Grids vs. Clouds 2
Grids vs. Clouds 3 -- Cost of 1 TFLOPS-year
Cloud: $1.75M (Amazon EC2 rates)
Cluster: $145K (hardware: computing, network, storage; power; infrastructure; sysadmin)
Volunteer: $1K-$10K (server hardware; sysadmin; web development)
 
 
Conclusion. "IT is a cost center, after all, not so dissimilar from janitorial and cafeteria services, both of which have long been outsourced at most enterprises. Security concerns won't necessarily prevent companies from wholesale outsourcing of data services: businesses have long outsourced payroll and customer data to trusted providers. Much will depend on the specific company, of course, but it's unlikely that smaller enterprises will resist the economic logic of utility computing. Bigger corporations will simply take longer to make the shift." -- Nicholas Carr, The Big Switch: Rewiring the World, from Edison to Google
Opportunities
Grid & Cloud: opportunities. Many cloud offerings = competition: Amazon Elastic Compute Cloud (EC2); IBM Blue Cloud; Microsoft Azure Services Platform; Sun Open Cloud Initiative; Google App Engine; Salesforce.com Force.com Cloud; GoGrid Cloud Hosting; RackSpace Cloud Hosting; Flexiscale Utility Computing on Demand... Some academic/open-source variants emerging: Eucalyptus, Nimbus, OpenNebula, ... Experiments in Grid + Cloud: BalticCloud, StratusLab, VirtCloud, ... some to be discussed in this session.
Where would clouds come in? (Bob Jones, OGF25/User Forum, 2-6 March 2009)
The majority of EGEE's CPUs are at the sites, and the interaction with site resources is 'simple'. In: authorization, compute & storage. Out: monitoring & accounting.
Additional site requirements: VO-specific site services; VO-specific applications deployed to the worker nodes.
Provide classical 'cloud' scale-out capability, potentially with VO-specific worker nodes.
Production & Volunteer grids (Bob Jones, OGF25/User Forum, 2-6 March 2009)
 
 
Extendable Infrastructure (Bob Jones, OGF25/User Forum, 2-6 March 2009)
Conclusion
 
Grid Computing My preferred definition: Grid computing is distributed computing performed transparently across multiple administrative domains Peter Coveney (P.V.Coveney@ucl.ac.uk) Notes:  Computing  means  any  activity involving digital information -- no distinction between numeric/symbolic, or numeric/data/viz Transparency  implies minimal complexity for users of the technology
Grid -- it's really about collaboration! It's about sharing and building a vision for the future. And it's about getting connected. It's about the democratization of science. It takes advantage of Open Source! Source: Vicky White
Conclusion. Both Grids and Clouds are evolutions of virtualisation. Virtualisation will provide enormous flexibility in the management of both Grids and Clouds. Grids and Clouds are opportunities for each other! ICT is always changing; nothing is guaranteed to prevail. Taking advantage of your core competence is the most important issue!
 
 
 
Embracing a new vision for Science: Virtual Research Communities
Improved scientific process... the role of simulation; cross-disciplinarity; the data deluge -- wet-labs versus ICT infrastructures (networking, grids, instrumentation, computing, data curation...)
Technology push: the value added of distributed collaborative research (virtual organisations)
Application pull: revolution in science & engineering, research & education
What is e-Science? "e-Science is about global collaboration in key areas of big science and the next generation of infrastructure that will enable it" -- Dr John Taylor, Director General of the UK Research Councils. e-Science reflects the growing importance of international laboratories, satellites and sensors, and their integrated analysis by distributed teams.
http://www.rcuk.ac.uk/escience/ What is meant by e-Science? In the future, e-Science will refer to the large-scale science that will increasingly be carried out through distributed global collaborations enabled by the Internet. Typically, a feature of such collaborative scientific enterprises is that they will require access to very large data collections, very large-scale computing resources, and high-performance visualisation back to the individual user scientists.
Grid Concept Analogy with the electrical power grid “On-demand” access to ubiquitous distributed computing Transparent access to multi-petabyte distributed data bases Easy to plug resources into Complexity of the infrastructure is hidden “ When the network is as fast as the computer's internal links, the machine disintegrates across the net into a set of special purpose appliances ” (George Gilder)  Source: Ian Foster
Open, Integrated, Virtualised, Autonomic Open: Open Source model and Open Architecture Integrated and Virtualised: Invisible Computing, hide all system complexities Autonomic: Integrate redundant and replicated commodity components to create a highly-available system instead of expensive hardware solutions Attributes of On-demand Computing
Grid computing is NOT...
... launching isolated jobs onto medium-sized or big iron, as is the case for most work being done on TeraGrid machines and most work being done on the National Grid Service. What added value is there to "grid-enabling" NGS and TeraGrid machines?
... talking about middleware, specifications and standards. Point of information: User No. 000002 of OGSA-DAI on the NGS works in my group. What has been happening in all those "informatics" projects that has advanced grid computing?
Peter Coveney (P.V.Coveney@ucl.ac.uk)
Who invented the WWW? Tim Berners-Lee of CERN proposed the idea of the WWW in 1989, and a revised version with Robert Cailliau the next year. The WWW was meant to solve the difficulty of academic information sharing: you don't have to log in to separate accounts to get information. "The philosophy that academic information is for all, and it is our duty to make information available. WWW..." --- Tim Berners-Lee, 1991
WWW Concept, CERN 1989-1990
First WWW Browser, Dec 1990
The Rest is History. 1991: CERN distributed the first WWW system to the High Energy Physics community, followed by universities and research institutes. Dec 1991: SLAC built the first Web server in the US. 1993: NCSA developed the easier-to-use Mosaic browser. 1994: "Year of the Web" -- 10,000 servers (2,000 commercial), 10 million users.
Paradigm Shift requires New Tools and New Thinking (after Thomas Kuhn, "The Structure of Scientific Revolutions", 1962). A paradigm is a framework of thought. Physics originated from common sense and rationality: the Greeks knew the radius of the Earth; Aristotle's force and motion; Galileo's mechanics (metaphor: the clock and watch); matter and energy (the heat engine and the Industrial Revolution). If the computer is the metaphor of the 21st century, what is the physics behind it?


Editor's Notes

  • #10 Moore's Law: individual computers double in processing power every 18 months. Storage Law: disk storage capacity doubles every 12 months. Gilder's Law: network bandwidth doubles every 9 months (but is harder to install). This exponential growth profoundly changes the landscape of information technology. (High-speed) access to networked information becomes the dominant feature of future computing. For large-scale images: secure remote access eventually becomes routine.
  • #13 1. Moore's Law: transistor density doubles, thus individual computers double in processing power every 18 months. 2. This exponential growth profoundly changes the landscape of Information Technology.
  • #14 1. Moore's Law implies 101.6-times growth in 10 years, 256 times in 12 years. 2. In the last 10 years in particular, a 10,000-times performance gain, mainly due to clever use of parallel programming in some application areas. 3. How to program massively parallel computers, and their ease of use, are essential!
  • #15 1. Higher transistor integration and higher frequency alone yield too much heat for more performance. 2. One needs a holistic approach to deal with the heat problem. 3. A multicore, multithreading architecture helps to spread power dissipation over a greater area.
  • #35 Since the global e-Infrastructure is being established quickly, we believe it is an excellent opportunity to take advantage of sharing and collaboration to bridge the gap between Asia and the world, as well as to create new regional cooperation opportunities in Asia Pacific. ASGC is one of the earliest partners in Asia.
  • #38 ASGC is actually a centre for e-Science; HEP is the largest and first (power) user of ASGC, and we also learnt a lot from the HEP community about collaboration. EUAsiaGrid: 11 Asian partners & 4 European partners.
  • #39 Are we able to do e-Science? (Sharing sometimes also means complementing.)
  • #41 From site deployment, certification and services to sustainable operations. Our participation in other EGEE activities, such as application support and development, dissemination and training, is not discussed here; only the sustainable model for the AP region is raised.
  • #42 AARNet also has 2 x 10Gb to US
  • #43 [Introduce the ASGCNet backbone first] The first 10G network between Asia and Europe, with a 2.5G link to the US extended to NL as backup for the Europe link. In Asia, we upgraded to 2.5G links to both JP and HK from 2008, plus another 622Mb link to Singapore. We are keen to share all of this network with APAN, TEIN and Asian partners.
  • #44 What we are doing in Taiwan for EUAsiaGrid and the regional collaboration is to provide gLite infrastructure, and to enable e-Infrastructure services for users
  • #45 There is also an important component of EGEE-II in Business Partners and Industry Take-up such as Industry Task Force and Industry Forum. It also organises EGEE Business Associates (EBA).
  • #48 Prediction of earthquakes is still not possible at this moment; through the simulation of wave propagation and impacts with an assumed hypocenter, we can understand the possible threat to areas scattered around the epicenter. Then the strategy for mitigation and protection can be verified accordingly. Here is an example of simulating 3D amplification effects in the Taipei basin by wave-field propagation simulation, considering the complex geological structure; extraordinary hazards can be identified even when the epicenter is far from the target area.
  • #89 1. For this unprecedented scale of collaboration, the High Energy Physics community has the most experience. 2. Strategically, the worldwide funding agencies used the HEP community to build the first global production Grid. 3. The hope is that this will extend to other e-Science and HPC application areas via EGEE and OSG!