Session 50 - High Performance Computing Ecosystem in Europe



  1. High-Performance Computing Ecosystem in Europe
     July 15th, 2009
     Kimmo Koski
     CSC – The Finnish IT Center for Science
  2. Topics
     - Terminology and definitions
     - Emerging trends
     - Stakeholders
     - On-going Grid and HPC activities
     - Concluding remarks
  3. Terminology and pointers
     - HPC: High Performance Computing
     - HET: High Performance Computing in Europe Taskforce, established in June 2006 with a mandate to draft a strategy for the European HPC ecosystem
     - Petaflop/s: a performance figure of 10^15 floating point operations (calculations) per second
     - e-IRG: e-Infrastructure Reflection Group. e-IRG supports the creation of a framework (political, technological and administrative) for the easy and cost-effective shared use of distributed electronic resources across Europe, particularly for grid computing, storage and networking.
     - ESFRI: European Strategy Forum on Research Infrastructures. The role of ESFRI is to support a coherent approach to policy-making on research infrastructures in Europe, and to act as an incubator for international negotiations about concrete initiatives. In particular, ESFRI is preparing a European Roadmap for new research infrastructures of pan-European interest.
     - RI: Research Infrastructure
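As a back-of-the-envelope check of the petaflop/s figure above: theoretical peak performance is just the product of processor count, clock rate and floating point operations per cycle. The machine parameters below are hypothetical, chosen only to illustrate the arithmetic; they do not describe any real PRACE system.

```python
# Theoretical peak = nodes x cores/node x clock (Hz) x flops/cycle.
# All parameters here are hypothetical, for illustration only.
nodes = 10_000
cores_per_node = 8
clock_hz = 2.5e9          # 2.5 GHz
flops_per_cycle = 4       # e.g. 2-wide SIMD with fused multiply-add

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"theoretical peak: {peak_flops / 1e15:.1f} petaflop/s")  # 0.8 petaflop/s
```

Note that this is the "theoretical petaflop/s" of the next slides; sustained application performance is typically a small fraction of it.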
  4. Terminology and pointers (cont.)
     - PRACE: Partnership for Advanced Computing in Europe. EU FP7 project for the preparatory phase of building the European petaflop computing centers, based on the HET work
     - DEISA-2: Distributed European Infrastructure for Supercomputing Applications. DEISA is a consortium of leading national supercomputing centers that currently deploys and operates a persistent, production-quality, distributed supercomputing environment with continental scope.
     - EGEE-III: Enabling Grids for E-sciencE. The project provides researchers in academia and industry with access to a production-level Grid infrastructure, independent of their geographic location.
     - EGI_DS: an effort to establish a sustainable grid infrastructure in Europe
     - GÉANT2: seventh generation of the pan-European research and education network
  5. Computational science infrastructure
  6. Performance Pyramid (figure: three-tier pyramid)
     - TIER 0: European HPC center(s), capability computing (e-IRG, PRACE)
     - TIER 1: national/regional centers, Grid collaboration (DEISA-2, EGEE-III)
     - TIER 2: local centers, capacity computing
  7. Need to remember about petaflop/s…
     What do you mean by petaflop/s?
     1. Theoretical petaflop/s?
     2. LINPACK petaflop/s?
     3. Sustained petaflop/s for a single extremely parallel application?
     4. Sustained petaflop/s for multiple parallel applications?
     - Note that between 1 and 4 there might be several years
     - Petaflop/s hardware needs petaflop/s applications, which are not easy to program, or not even possible in many cases
     - Do we even know how to scale over 100,000 processors…?
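The scaling question on this slide can be made concrete with Amdahl's law (the law itself is standard; the parallel fractions below are arbitrary, chosen for illustration): even a tiny serial fraction caps the speedup attainable on 100,000 processors.

```python
def amdahl_speedup(parallel_fraction: float, n_procs: int) -> float:
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# A code that is 99.9% parallel reaches under 1% of the ideal 100,000x
# speedup; even at 99.99% it loses roughly an order of magnitude.
for p in (1.0, 0.9999, 0.999):
    print(f"p = {p:.4f}: speedup on 100,000 procs = "
          f"{amdahl_speedup(p, 100_000):,.0f}x")
```

This is one reason "theoretical petaflop/s" and "sustained petaflop/s for a single application" can be years apart.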
  8. Emerging trends
  9. (no text)
  10. Global Virtual Research Communities (figure)
      Three research-community stacks (Research Community 1, 2, 3), each layered as:
      human interaction / workspace / labs / scientific data / computing & Grid / network
  11. Global Virtual Research Communities (figure, cont.)
      Three virtual-community stacks, each layered as: human interaction / workspace /
      virtual labs / scientific data / Grid / network, converging on shared scientific
      data, Grid and network layers: economies of scale, efficiency gains
  12. Data and information explosion
      - Petascale computing produces exascale data
  14. HPC Ecosystem to support the top
      - The upper layers of the pyramid
        - HPC centers / services
        - European projects (HPC/Grid, networking, …)
      - Activities which enable efficient usage of the upper layers
        - Inclusion of national HPC infrastructures
        - Software development and scalability issues
        - Competence development
      - Interoperability between the layers
  15. Stakeholders
  16. Stakeholder categories in PRACE
      - Providers of HPC services
      - European HPC and grid projects
      - Networking infrastructure providers
      - Hardware vendors
      - Software vendors and the software-developing academic community
      - End users and their access through related Research Infrastructures
      - Funding bodies on a national and international level
      - Policy-setting organisations directly involved in developing the research infrastructure, and political bodies such as parliaments responsible for national and international legislation
  17. Policy and strategy work
      - HET: HPC in Europe Taskforce
      - e-IRG: e-Infrastructure Reflection Group
      - ESFRI: European Strategy Forum on Research Infrastructures
      - ERA Expert Group on Research Infrastructures
  18. Some focus areas
      - Collaboration between research and e-infrastructure providers
      - Horizontal ICT services
      - Balanced approach: more focus on data, software development and competence development
      - Inclusion of different countries, with different contribution levels
      - New emerging technologies, innovative computing initiatives
      - Global collaboration, for example the Exascale computing initiative
      - Policy work, resource exchange, sustainable services, etc.
  19. On-going Grid and HPC activities
  20. EU infrastructure projects
      - GÉANT
      - A number of data infrastructure projects
  21. (no text)
  22. Supercomputing Drives Science through Simulation
      - Environment: weather/climatology, pollution/ozone hole
      - Ageing society: medicine, biology
      - Energy: plasma physics, fuel cells
      - Materials/information technology: spintronics, nanoscience
  23. PRACE Initiative: History and First Steps (timeline 2004–2008, HPCEUR then HET)
      - Bringing scientists together; creation of the Scientific Case
      - Production of the HPC part of the ESFRI Roadmap; creation of a vision involving 15 European countries
      - Signature of the MoU
      - Submission of an FP7 project proposal
      - Approval of the project; project start
  24. HET: The Scientific Case
      - Weather, climatology, earth science: degree of warming, scenarios for our future climate; understanding and predicting ocean properties and variations; weather and flood events
      - Astrophysics, elementary particle physics, plasma physics: systems and structures which span a large range of different length and time scales; quantum field theories like QCD; ITER
      - Material science, chemistry, nanoscience: understanding complex materials, complex chemistry, nanoscience; the determination of electronic and transport properties
      - Life science: systems biology, chromatin dynamics, large-scale protein dynamics, protein association and aggregation, supramolecular systems, medicine
      - Engineering: complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant
  25. First success: HPC in the ESFRI Roadmap
      - The European Roadmap for Research Infrastructures is the first comprehensive definition at the European level
      - Research Infrastructures are one of the crucial pillars of the European Research Area
      - A European HPC service, foreseen impact:
        - strategic competitiveness
        - attractiveness for researchers
        - supporting industrial development
  27. Second success: The PRACE Initiative
      - Memorandum of Understanding signed by 15 states in Berlin, on April 16, 2007
      - France, Germany, Spain, The Netherlands and the UK committed funding for a European HPC Research Infrastructure (LoS)
  28. Third success: the PRACE project
      - Partnership for Advanced Computing in Europe (PRACE)
      - EU project of the European Commission 7th Framework Programme, construction of new infrastructures, preparatory phase (FP7-INFRASTRUCTURES-2007-1)
      - Partners are 16 legal entities from 14 European countries
      - Budget: 20 Mio €; EU funding: 10 Mio €
      - Duration: January 2008 – December 2009
      - Grant no: RI-211528
  29. PRACE Partners (map figure)
  30. PRACE Work Packages
      - WP1 Management
      - WP2 Organizational concept
      - WP3 Dissemination, outreach and training
      - WP4 Distributed computing
      - WP5 Deployment of prototype systems
      - WP6 Software enabling for prototype systems
      - WP7 Petaflop systems for 2009/2010
      - WP8 Future petaflop technologies
  31. PRACE Objectives in a Nutshell
      - Provide world-class systems for world-class science
      - Create a single European entity
      - Deploy 3–5 systems of the highest performance level (tier-0)
      - Ensure diversity of architectures
      - Provide support and training
      - PRACE will be created to stay
  32. Representative Benchmark Suite
      - Defined a set of application benchmarks to be used in the procurement process for petaflop/s systems
      - 12 core applications, plus 8 additional applications
        - Core: NAMD, VASP, QCD, CPMD, GADGET, Code_Saturne, TORB, ECHAM5, NEMO, CP2K, GROMACS, N3D
        - Additional: AVBP, HELIUM, TRIPOLI_4, PEPC, GPAW, ALYA, SIESTA, BSIT
      - Each application will be ported to an appropriate subset of prototypes
      - Synthetic benchmarks for architecture evaluation: computation, mixed-mode, I/O, bandwidth, OS, communication
      - Applications and synthetic benchmarks integrated into JuBE (Juelich Benchmark Environment)
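To illustrate what a synthetic bandwidth benchmark of the kind listed above measures, here is a minimal STREAM-style triad in pure Python. This is a sketch only: it is not part of the PRACE suite or JuBE, and real suites run compiled, vectorized kernels, so interpreter overhead dominates the figure this version reports.

```python
import time

def triad_bandwidth(n: int = 1_000_000, s: float = 3.0) -> float:
    """STREAM-style triad a[i] = b[i] + s * c[i]; returns apparent GB/s."""
    b = [1.0] * n
    c = [2.0] * n
    t0 = time.perf_counter()
    a = [bi + s * ci for bi, ci in zip(b, c)]
    t1 = time.perf_counter()
    assert a[0] == 7.0  # 1.0 + 3.0 * 2.0
    bytes_moved = 3 * n * 8  # two 8-byte reads and one write per element
    return bytes_moved / (t1 - t0) / 1e9

print(f"apparent triad bandwidth: {triad_bandwidth():.2f} GB/s")
```

The same bytes-moved accounting is what lets a single number like "GB/s" be compared across very different prototype architectures.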
  33. Mapping Applications to Architectures
      - Identified affinities and priorities, based on the application analysis, expressed in a condensed, qualitative way
      - Need for different "general purpose" systems
      - There are promising emerging architectures
      - Will be more quantitative after benchmark runs on prototypes (E = estimated)
  38. Installed prototypes
      - IBM BlueGene/P (FZJ)           01-2008
      - IBM Power6 (SARA)              07-2008
      - Cray XT5 (CSC)                 11-2008
      - IBM Cell/Power (BSC)           12-2008
      - NEC SX9, vector part (HLRS)    02-2009
      - Intel Nehalem/Xeon (CEA/FZJ)   06-2009
  39. Status June 2009
      - Summary of current prototype status (table)
  40. Web site and dissemination channels
      - The PRACE web presence with news, events, RSS feeds, etc.
      - Alpha Galileo service: 6500 journalists around the globe
      - BELIEF Digital Library
      - HPC magazines
      - PRACE partner sites, top 10 HPC users
  41. PRACE Dissemination Package
      - PRACE WP3 has created a dissemination package including templates, brochures, flyers, posters, badges, t-shirts and USB keys
      - The PRACE logo
      - "Heavy Computing 10^15": the PRACE t-shirt
      - The PRACE USB key
  42. (no text)
  43. DEISA Partners (presented by Andreas Schott, DEISA, at SC'08 Austin, 2008-11-19)
      - DEISA: May 1st, 2004 – April 30th, 2008
      - DEISA2: May 1st, 2008 – April 30th, 2011
  44. DEISA Partners
      - BSC       Barcelona Supercomputing Centre                               Spain
      - CINECA    Consorzio Interuniversitario per il Calcolo Automatico        Italy
      - CSC       Finnish Information Technology Centre for Science             Finland
      - EPCC      University of Edinburgh and CCLRC                             UK
      - ECMWF     European Centre for Medium-Range Weather Forecasts            UK (int.)
      - FZJ       Research Centre Juelich                                       Germany
      - HLRS      High Performance Computing Centre Stuttgart                   Germany
      - IDRIS     Institut du Développement et des Ressources en Informatique
                  Scientifique - CNRS                                           France
      - LRZ       Leibniz Rechenzentrum Munich                                  Germany
      - RZG       Rechenzentrum Garching of the Max Planck Society              Germany
      - SARA      Dutch National High Performance Computing                     Netherlands
      - CEA-CCRT  Centre de Calcul Recherche et Technologie, CEA                France
      - KTH       Kungliga Tekniska Högskolan                                   Sweden
      - CSCS      Swiss National Supercomputing Centre                          Switzerland
      - JSCC      Joint Supercomputer Center of the Russian Academy of Sciences Russia
  45. DEISA 2008: Operating the European HPC Infrastructure
      - >1 petaflop/s aggregated peak performance
      - Most powerful European supercomputers for the most challenging projects
      - Top-level Europe-wide application enabling
      - Grand Challenge projects performed on a regular basis
  46. DEISA Core Infrastructure and Services
      - Dedicated high-speed network
      - Common AAA: single sign-on; accounting/budgeting
      - Global data management: high-performance remote I/O and data sharing with global file systems; high-performance transfers of large data sets
      - User operational infrastructure: Distributed Common Production Environment (DCPE); job management service; common user support and help desk
      - System operational infrastructure: common monitoring and information systems; common system operation
      - Global application support
  47. DEISA dedicated high-speed network (figure)
      - NRENs involved: FUNET, SURFnet, DFN, UKERNA, GARR, RENATER, RedIris
      - Link types: 1 Gb/s GRE tunnel; 10 Gb/s wavelength; 10 Gb/s routed; 10 Gb/s switched
  48. DEISA Global File System, based on MC-GPFS (figure)
      - Connects heterogeneous systems across sites: IBM P5/P6 and BlueGene/P (AIX/Linux, LoadLeveler), Cray XT4/XT5 (UNICOS/lc, PBS Pro), NEC SX8 (Super-UX, NQS II), SGI Altix (Linux, PBS Pro), IBM PPC (Linux, Maui/Slurm), with GridFTP for transfers
  49. DEISA Software Layers (figure)
      - Presentation layer: multiple ways to access
      - Job management layer and monitoring: common production environment, single monitoring system, co-reservation and co-allocation, job rerouting, workflow management
      - Data management layer: data transfer tools, data staging tools, WAN shared filesystem
      - Network and AAA layers: network connectivity, unified AAA
      - Across the DEISA sites
  50. Supercomputer hardware performance pyramid vs. application enabling requirements pyramid (EU / national / local)
      - Capability computing will always need expert support for application enabling and optimization
      - The more resource-demanding a single problem is, the higher the requirements for application enabling generally are, including enhancing scalability
  51. DEISA Organizational Structure
      - WP1 – Management
      - WP2 – Dissemination, External Relations, Training
      - WP3 – Operations
      - WP4 – Technologies
      - WP5 – Applications Enabling
      - WP6 – User Environment and Support
      - WP7 – Extreme Computing (DECI) and Benchmark Suite
      - WP8 – Integrated DEISA Development Environment
      - WP9 – Enhancing Scalability
  52. Evolution of Supercomputing Resources
      - 2004: DEISA partners' compute resources at DEISA project start: ~30 TF aggregated peak performance
      - 2008: DEISA partners' resources at DEISA2 project start: over 1 PF aggregated peak performance on state-of-the-art supercomputers
        - Cray XT4 and XT5, Linux
        - IBM Power5, Power6, AIX / Linux
        - IBM BlueGene/P, Linux (frontend)
        - IBM PowerPC, Linux (MareNostrum)
        - SGI Altix 4700 (Itanium2 Montecito), Linux
        - NEC SX8 vector system, Super-UX
      - Systems interconnected with dedicated 10 Gb/s network links provided by GÉANT2 and the NRENs
      - A fixed fraction of resources is dedicated to DEISA usage
  53. DEISA Extreme Computing Initiative (DECI)
      - DECI launched in early 2005 to enhance DEISA's impact on science and technology
      - Identification, enabling, deployment and operation of "flagship" applications in selected areas of science and technology
      - Complex, demanding, innovative simulations requiring the exceptional capabilities of DEISA
      - Multi-national proposals especially encouraged
      - Proposals reviewed by national evaluation committees
      - Projects chosen on the basis of innovation potential, scientific excellence, relevance criteria and national priorities
      - Most powerful HPC architectures in Europe for the most challenging projects; the most appropriate supercomputer architecture selected for each project
      - Mitigates the rapid performance decay of a single national supercomputer within its short lifetime cycle of typically about 5 years, as implied by Moore's law
  54. DEISA Extreme Computing Initiative
      - Involvement in projects from the DECI calls 2005, 2006 and 2007: 157 research institutes and universities from 15 European countries (Austria, Finland, France, Germany, Hungary, Italy, Netherlands, Poland, Portugal, Romania, Russia, Spain, Sweden, Switzerland, UK), with collaborators from four other continents (North America, South America, Asia, Australia)
  55. DEISA Extreme Computing Initiative: calls for proposals for challenging supercomputing projects from all areas of science
      - DECI call 2005: 51 proposals, 12 European countries involved, co-investigators from the US; 30 Mio cpu-h requested; 29 proposals accepted, 12 Mio cpu-h awarded (normalized to IBM P4+)
      - DECI call 2006: 41 proposals, 12 European countries involved, co-investigators from North and South America and Asia (US, CA, AR, Israel); 28 Mio cpu-h requested; 23 proposals accepted, 12 Mio cpu-h awarded (normalized to IBM P4+)
      - DECI call 2007: 63 proposals, 14 European countries involved, co-investigators from North and South America, Asia and Australia (US, CA, BR, AR, Israel, AUS); 70 Mio cpu-h requested; 45 proposals accepted, ~30 Mio cpu-h awarded (normalized to IBM P4+)
      - DECI call 2008: 66 proposals, 15 European countries involved, co-investigators from North and South America, Asia and Australia; 134 Mio cpu-h requested (normalized to IBM P4+); evaluation in progress
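The cpu-hour figures above are all normalized to a reference machine (IBM P4+), since an hour on a fast core is worth more than an hour on a slow one. A sketch of how such normalization works; the conversion factors below are invented for illustration and are not DEISA's actual ones.

```python
# Hypothetical per-core performance factors relative to the IBM P4+ reference.
FACTORS = {"IBM P4+": 1.0, "Cray XT4": 1.8, "NEC SX8": 8.0}

def to_reference_cpu_hours(hours: float, machine: str) -> float:
    """Convert machine-local cpu-hours to reference (IBM P4+) cpu-hours."""
    return hours * FACTORS[machine]

# 1 Mio cpu-h on a core 1.8x faster than the reference counts as 1.8 Mio.
total = sum(to_reference_cpu_hours(h, m)
            for h, m in [(1e6, "Cray XT4"), (2e6, "IBM P4+")])
print(f"{total / 1e6:.1f} Mio normalized cpu-h")  # 3.8 Mio normalized cpu-h
```

Normalization is what makes the requested and awarded totals comparable across calls and across the heterogeneous DEISA machines.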
  56. DECI Project POLYRES
      - Cover story of Nature, May 24, 2007: curvy membranes make proteins attractive
      - For almost two decades, physicists have been on the track of membrane-mediated interactions. Simulations in DEISA have now revealed that curvy membranes make proteins attractive. Nature 447 (2007), 461–464
      - Figure: proteins (red) adhere to a membrane (blue/yellow) and locally bend it; this triggers a growing invagination; cross-section through an almost complete vesicle
      - B. J. Reynwar et al.: Aggregation and vesiculation of membrane proteins by curvature-mediated interactions, Nature 447, 24 May 2007, doi:10.1038/nature05840
  57. Achievements and Scientific Impact
      - Brochures available for download
  58. Evolution of User Categories in DEISA (timeline 2002–2011)
      - Early adopters (Joint Research Activities), from the DEISA EoI onwards
      - Single project support: DEISA Extreme Computing Initiative (from the start of FP6 DEISA)
      - Support of virtual communities and EU projects (from the start of FP7 DEISA2, after the preparatory phase)
  59. Tier-0 / Tier-1 centers: are there implications for the services?
      - Main difference between T0 and T1 centers: policy and usage models
        - T1 centers can evolve to T0 for strategic/political reasons
        - T0 machines automatically degrade to T1 level by aging
      - T0 centers: leadership-class European systems in competition with the leading systems worldwide, cyclically renewed; governance structure to be provided by a European organization (PRACE)
      - T1 centers: leading national centers, cyclically renewed, optionally surpassing the performance of older T0 machines; national governance structure
      - Services have to be the same in T0/T1
        - because the status of the systems changes over time
        - for user transparency across the different systems (only visible difference: some services could have different flavors for T0 and T1)
  60. Summary
      - Evolution of this European infrastructure towards a robust and persistent European HPC ecosystem
      - Enhancing the existing services, deploying new services including support for European virtual communities, and cooperating and collaborating with new European initiatives, especially PRACE
      - DEISA2 as the vector for the integration of Tier-0 and Tier-1 systems in Europe
      - Providing a lean and reliable turnkey operational solution for a persistent European HPC infrastructure
      - Bridging worldwide HPC projects: facilitating the support of international science communities with computational needs traversing existing political boundaries
  61. EGEE Status April 2009
      Infrastructure:
      - Number of sites connected to the EGEE infrastructure: 268
      - Number of countries connected to the EGEE infrastructure: 54
      - Number of CPUs (cores) available to users 24/7: ~139,000
      - Storage capacity available: ~25 PB disk + 38 PB tape MSS
      Users:
      - Number of Virtual Organisations using the EGEE infrastructure: >170
      - Number of registered Virtual Organisations: >112
      - Number of registered users: >13,000
      - Number of people benefiting from the existence of the EGEE infrastructure: ~20,000
      - Number of jobs: >390k jobs/day
      - Number of application domains making use of the EGEE infrastructure: more than 15
  69. Are we ready for the demand? (figure: testbeds to productive use to utility; national to European e-Infrastructure to global)
  70. 38 National Grid Initiatives (map figure)
  71. EGI Objectives (1/3)
      - Ensure the long-term sustainability of the European infrastructure
      - Coordinate the integration and interaction between National Grid Infrastructures
      - Operate the European level of the production Grid infrastructure for a wide range of scientific disciplines, to link National Grid Infrastructures
      - Provide global services and support that complement and/or coordinate national services
      - Collaborate closely with industry (as technology and service providers, as well as Grid users) to promote the rapid and successful uptake of Grid technology by European industry
  72. EGI Objectives (2/3)
      - Coordinate middleware development and standardization to enhance the infrastructure, by soliciting targeted developments from leading EU and national Grid middleware development projects
      - Advise national and European funding agencies in establishing their programmes for future software developments, based on agreed user needs and development standards
      - Integrate, test, validate and package software from leading Grid middleware development projects and make it widely available
  73. EGI Objectives (3/3)
      - Provide documentation and training material for the middleware and operations
      - Take into account developments made by national e-science projects which were aimed at supporting diverse communities
      - Link the European infrastructure with similar infrastructures elsewhere
      - Promote Grid interface standards based on practical experience gained from Grid operations and middleware integration activities, in consultation with relevant standards organizations
      - EGI Vision Paper
  74. Integration and interoperability
      - PRACE and EGI target a sustainable infrastructure; DEISA-2 and EGEE-III are project-based
      - Sometimes national stakeholders are partners in multiple initiatives
      - Users do not necessarily care where they get the service, as long as they get it
      - Integration of PRACE and DEISA, and a transition from EGEE to EGI, are possible; going further requires creative thinking
  75. A new HPC ecosystem is being built…
  76. New market for European HPC
      - The ESFRI list names 44 new research infrastructure projects, 34 of them running a preparatory phase project
        - 1–4 years; 1–7 MEUR × 2 (petaflop computing: 10 MEUR × 2)
      - Successful new research infrastructures start construction in 2009–2011
        - 10–1000 MEUR per infrastructure
        - The first ones start to deploy: ESS in Lund, etc.
      - Existing research infrastructures are also developing: CERN, EMBL, ESA, ESO, ECMWF, ITER, …
      - Results:
        - Growing RI market, considerably rising funding volume
        - Need for horizontal activities (computing, data, networks, computational methods and scalability, application development, …)
        - Real danger of building disciplinary silos instead of seeking IT synergy
        - Several BEUR for ICT
  77. Some key issues in building the ecosystem
      - Sustainability: EGEE and DEISA are projects with an end; PRACE and EGI are targeted to be sustainable, with no definitive end
      - ESFRI and e-IRG: how do the research side and the infrastructure side work together? Two-directional input is requested
      - Requirement for horizontal services: let's not create disciplinary IT silos; synergy is required for cost efficiency and excellence
      - ICT infrastructure is essential for research: the role of computational science is growing
      - Renewal and competence: will Europe run out of competent people? Will training and education programs react fast enough?
  78. Requirements of a sustainable HPC ecosystem (European centers; national/regional centers and Grid collaboration; universities and local centers)
      - How to guarantee access to the top for selected groups?
      - How to ensure there are competent users who can use the high-end resources?
      - How to involve all countries that can contribute?
      - How to develop competence at home?
      - How to boost collaboration between research and e-infrastructure providers?
      - What are the principles of resource exchange (in-kind)?
  79. Conclusions
  80. Some conclusions
      - There are far too many acronyms in this field
      - We need to collaborate in providing e-infrastructure
      - From disciplinary silos to horizontal services
      - Building trust between research and service providers
      - Moving from project-based work to sustainable research infrastructures
      - Balanced approach: focus not only on computing but also on data, software development and competence
      - Driven by user community needs: technology is a tool, not a target
      - The ESFRI list and other development plans will boost the market for ICT services in research
      - Interoperability and integration of initiatives will be seriously discussed
  81. Final words to remember
      "The problems are not solved by computers nor by any other e-infrastructure; they are solved by people"
      Kimmo Koski, today