
Session 50 - High Performance Computing Ecosystem in Europe

Transcript

  • 1. High-Performance Computing Ecosystem in Europe
    July 15th, 2009
    Kimmo Koski
    CSC – The Finnish IT Center for Science
  • 2. Topics
    Terminology and definitions
    Emerging trends
    Stakeholders
    On-going Grid and HPC activities
    Concluding remarks
  • 3. Terminology and pointers
    HPC
    High Performance Computing
    HET, http://www.hpcineuropetaskforce.eu/
    High Performance Computing in Europe Taskforce, established in June 2006 with a mandate to draft a strategy for European HPC ecosystem
    Petaflop/s
    A performance figure: 10^15 floating-point operations (calculations) per second (see the sketch at the end of this slide)
    e-IRG, http://www.eirg.eu
    e-Infrastructure Reflection Group. e-IRG is supporting the creation of a framework (political, technological and administrative) for the easy and cost-effective shared use of distributed electronic resources across Europe - particularly for grid computing, storage and networking.
    ESFRI, http://cordis.europa.eu/esfri/
    European Strategy Forum on Research Infrastructures. The role of ESFRI is to support a coherent approach to policy-making on research infrastructures in Europe, and to act as an incubator for international negotiations about concrete initiatives. In particular, ESFRI is preparing a European Roadmap for new research infrastructures of pan-European interest.
    RI
    Research Infrastructure
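    To make the 10^15 figure concrete, here is a minimal back-of-the-envelope peak-performance calculation in Python; every machine parameter below is invented for illustration, not taken from any real system.
    ```python
    # Theoretical peak = nodes x cores/node x clock x flops/cycle.
    # All numbers are hypothetical examples.
    nodes = 10_000            # assumed node count
    cores_per_node = 8        # assumed cores per node
    clock_hz = 2.5e9          # 2.5 GHz clock
    flops_per_cycle = 4       # e.g. a fused multiply-add on 2-wide SIMD

    peak = nodes * cores_per_node * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {peak / 1e15:.2f} petaflop/s")  # -> 0.80
    ```
    Even 80,000 such cores stay below one petaflop/s, which is why tier-0 systems need very high core counts, wide vector units, or both.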
  • 4. Terminology and pointers (cont.)
    PRACE, http://www.prace-project.eu/
    Partnership for Advanced Computing in Europe
    EU FP7 project for the preparatory phase of building the European petaflop computing centers, based on HET work
    DEISA-2, https://www.deisa.org/
    Distributed European Infrastructure for Supercomputing Applications. DEISA is a consortium of leading national supercomputing centers that currently deploys and operates a persistent, production quality, distributed supercomputing environment with continental scope.
    EGEE-III, http://www.eu-egee.org/
    Enabling Grids for E-sciencE. The project provides researchers in academia and industry with access to a production-level Grid infrastructure, independent of their geographic location.
    EGI_DS, http://www.eu-egi.org/
    An effort to establish a sustainable grid infrastructure in Europe
    GÉANT2, http://www.geant2.net/
    Seventh generation of pan-European research and education network
  • 5. Computational science infrastructure
  • 6. Performance Pyramid
    [Pyramid diagram, top to bottom:]
    TIER 0: European HPC center(s), capability computing (PRACE)
    TIER 1: National/regional centers, Grid collaboration (DEISA-2, EGEE-III)
    TIER 2: Local centers, capacity computing
    (e-IRG provides the policy framework across the tiers)
  • 7. Need to remember about petaflop/s…
    What do you mean by petaflop/s?
    1. Theoretical petaflop/s?
    2. LINPACK petaflop/s?
    3. Sustained petaflop/s for a single extremely parallel application?
    4. Sustained petaflop/s for multiple parallel applications?
    Note that several years may pass between reaching 1 and reaching 4
    Petaflop/s hardware needs petaflop/s applications, which are not easy to program, and in many cases not even possible
    Do we even know how to scale over 100,000 processors? (see the sketch below)
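    The gap between points 1 and 3-4 is largely a scaling problem. A minimal illustration using Amdahl's law (the standard textbook formula, not anything from the slides): even a tiny serial fraction caps the speedup long before 100,000 processors.
    ```python
    # Amdahl's law: speedup on p processors when a fraction s of the
    # work is inherently serial.
    def speedup(p: int, s: float) -> float:
        return 1.0 / (s + (1.0 - s) / p)

    for s in (0.001, 0.0001):            # 0.1% and 0.01% serial work
        for p in (1_000, 100_000):
            print(f"serial={s:.2%}  p={p:>7,}  speedup={speedup(p, s):,.0f}")
    # With only 0.1% serial work, 100,000 processors yield a speedup
    # of roughly 990, i.e. about 1% parallel efficiency.
    ```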
  • 8. Emerging trends
  • 9.
  • 10. Global Virtual Research Communities
    [Diagram: three research communities, each running its own full stack of human interaction, workspace, labs, scientific data, computing/Grid and network]
  • 11. Global Virtual Research Communities
    [Diagram: three virtual communities keep their own human interaction, workspaces and virtual labs, but share common scientific data, Grid and network layers, yielding economies of scale and efficiency gains]
  • 12. Data and information explosion
    Petascale computing produces exascale data
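    A rough feel for the claim (my own illustrative arithmetic, not numbers from the slides): even a modest sustained output rate accumulates toward the exascale.
    ```python
    # Hypothetical sustained output rate for a petascale system.
    rate_bytes_per_s = 50e9                 # assumed 50 GB/s of results
    seconds_per_year = 365 * 24 * 3600

    per_year = rate_bytes_per_s * seconds_per_year
    print(f"~{per_year / 1e18:.1f} EB per year")   # ~1.6 EB per year
    ```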
  • 13. HPC is a part of a larger ecosystem
    DISCIPLINES, USER COMMUNITIES
    COMPETENCE
    SOFTWARE DEVELOPMENT
    DATA INFRASTRUCTURES AND SERVICES
    HPC AND GRID INFRASTRUCTURES
  • 14. HPC Ecosystem to support the top
    The upper layers of the pyramid
    HPC centers / services
    European projects (HPC/Grid, networking, …)
    Activities which enable efficient usage of upper layers
    Inclusion of national HPC infrastructures
    Software development and scalability issues
    Competence development
    Interoperability between the layers
  • 15. Stakeholders
  • 16. Stakeholder categories in PRACE
    Providers of HPC services
    European HPC and grid projects
    Networking infrastructure providers
    Hardware vendors
    Software vendors and the software developing academic community
    End users and their access through related Research Infrastructures
    Funding bodies on a national and international level
    Policy-setting organisations directly involved in developing the research infrastructure, and political bodies (such as parliaments) responsible for national and international legislation
  • 17. Policy and strategy work
    HET: HPC in Europe Taskforce, http://www.hpcineuropetaskforce.eu/
    e-IRG: e-Infrastructure Reflection Group, http://www.e-irg.org/
    ESFRI: European Strategy Forum on Research Infrastructures, http://www.cordis.lu/esfri/
    ERA Expert Group on Research Infrastructures
  • 18. Some focus areas
    Collaboration between research and e-infrastructure providers
    Horizontal ICT services
    Balanced approach: more focus on data, software development and competence development
    Inclusion of different countries, different contribution levels
    New emerging technologies, innovative computing initiatives
    Global collaboration, for example Exascale computing initiative
    Policy work, resource exchange, sustainable services etc.
  • 19. On-going Grid and HPC activities
  • 20. EU infrastructure projects
    GÉANT
    A number of data infrastructure projects
  • 21.
  • 22. Supercomputing Drives Science through Simulation
    Environment: Weather/Climatology, Pollution/Ozone Hole
    Ageing Society: Medicine, Biology
    Energy: Plasma Physics, Fuel Cells
    Materials/Inf. Tech: Spintronics, Nano-science
  • 23. PRACE Initiative
    History and First Steps
    Production of the HPC part of the ESFRI Roadmap; creation of a vision involving 15 European countries
    Signature of the MoU
    Submission of an FP7 project proposal
    Bringing scientists together
    Creation of the Scientific Case
    Approval of the project
    Project start
    [Timeline 2004 to 2008, from HPCEUR to HET to PRACE]
  • 24. HET: The Scientific Case
    Weather, Climatology, Earth Science
    degree of warming, scenarios for our future climate.
    understand and predict ocean properties and variations
    weather and flood events
    Astrophysics, Elementary particle physics, Plasma physics
    systems, structures which span a large range of different length and time scales
    quantum field theories like QCD, ITER
    Material Science, Chemistry, Nanoscience
    understanding complex materials, complex chemistry, nanoscience
    the determination of electronic and transport properties
    Life Science
    system biology, chromatin dynamics, large scale protein dynamics, protein association and aggregation, supramolecular systems, medicine
    Engineering
    complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant
  • 25. First success: HPC in ESFRI Roadmap
    The European Roadmap for Research Infrastructures is the first comprehensive definition at the European level
    Research Infrastructures are one of the crucial pillars of the European Research Area
    A European HPC service – impact foreseen:
    • strategic competitiveness
    • attractiveness for researchers
    • supporting industrial development
  • 26. Second success: The PRACE Initiative
    Memorandum of Understanding signed by 15 states in Berlin, on April 16, 2007
    France, Germany, Spain, The Netherlands and the UK committed funding for a European HPC Research Infrastructure (LoS)
  • 28. Third success: the PRACE project
    Partnership for Advanced Computing in Europe
    PRACE
    EU project of the European Commission 7th Framework Programme: construction of new infrastructures, preparatory phase
    FP7-INFRASTRUCTURES-2007-1
    Partners are 16 Legal Entities from 14 European countries
    Budget: 20 million €
    EU funding: 10 million €
    Duration: January 2008 – December 2009
    Grant no: RI-211528
  • 29. PRACE Partners
  • 30. PRACE Work Packages
    WP1 Management
    WP2 Organizational concept
    WP3 Dissemination, outreach and training
    WP4 Distributed computing
    WP5 Deployment of prototype systems
    WP6 Software enabling for prototype systems
    WP7 Petaflop systems for 2009/2010
    WP8 Future petaflop technologies
  • 31. PRACE Objectives in a Nutshell
    Provide world-class systems for world-class science
    Create a single European entity
    Deploy 3 – 5 systems of the highest performance level (tier-0)
    Ensure diversity of architectures
    Provide support and training
    PRACE will be created to stay
  • 32. Representative Benchmark Suite
    Defined a set of application benchmarks
    To be used in the procurement process for Petaflop/s systems
    12 core applications, plus 8 additional applications
    Core: NAMD, VASP, QCD, CPMD, GADGET, Code_Saturne, TORB, ECHAM5, NEMO, CP2K, GROMACS, N3D
    Additional: AVBP, HELIUM, TRIPOLI_4, PEPC, GPAW, ALYA, SIESTA, BSIT
    Each application will be ported to an appropriate subset of prototypes
    Synthetic benchmarks for architecture evaluation
    Computation, mixed-mode, I/O, bandwidth, OS, communication (illustrated by the sketch after this slide)
    Applications and Synthetic benchmarks integrated into JuBE
    Juelich Benchmark Environment
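    The slides name JuBE as the integration framework but do not show its configuration, so the following is only a generic Python illustration of the kind of micro-measurements a synthetic suite makes (computation and memory bandwidth); function names and problem sizes are invented.
    ```python
    import time
    import numpy as np

    def time_dgemm(n: int = 2048) -> float:
        """Time a double-precision matrix multiply; return achieved Gflop/s."""
        a, b = np.random.rand(n, n), np.random.rand(n, n)
        t0 = time.perf_counter()
        _ = a @ b
        dt = time.perf_counter() - t0
        return 2 * n**3 / dt / 1e9        # ~2n^3 flops per n x n matmul

    def time_copy(n: int = 50_000_000) -> float:
        """Time a large array copy; return achieved GB/s (read + write)."""
        src = np.random.rand(n)
        t0 = time.perf_counter()
        _ = src.copy()
        dt = time.perf_counter() - t0
        return 2 * src.nbytes / dt / 1e9  # bytes read plus bytes written

    print(f"dgemm: {time_dgemm():.1f} Gflop/s")
    print(f"copy:  {time_copy():.1f} GB/s")
    ```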
  • 33. Mapping Applications to Architectures
    • Identified affinities and priorities
    • Based on the application analysis, expressed in a condensed, qualitative way
    • Need for different “general purpose” systems
    • There are promising emerging architectures
    • Will be more quantitative after benchmark runs on prototypes
    E = estimated (legend of the affinity table on the slide)
  • 38. Installed prototypes
    IBM BlueGene/P (FZJ) 01-2008
    IBM Power6 (SARA) 07-2008
    Cray XT5 (CSC) 11-2008
    IBM Cell/Power (BSC) 12-2008
    NEC SX9, vector part (HLRS) 02-2009
    Intel Nehalem/Xeon (CEA/FZJ) 06-2009
  • 39. Status June 2009
    Summary of current prototype status
  • 40. Web site and the dissemination channels
    The PRACE web presence with news, events, RSS feeds etc. http://www.prace-project.eu
    Alpha-Galileo service: 6500 journalists around the globe: http://www.alphagalileo.org
    Belief Digital Library
    HPC-magazines
    PRACE partner sites, top 10 HPC users
    The PRACE website, www.prace-project.eu
  • 41. PRACE Dissemination Package
    PRACE WP3 has created a dissemination package including templates, brochures, flyers, posters, badges, t-shirts, USB keys etc.
    The PRACE logo
    Heavy Computing 10^15: the PRACE t-shirt
    PRACE USB-key
  • 42.
  • 43. DEISA Partners (this and the following DEISA slides: Andreas Schott, DEISA, SC'08 Austin, 2008-11-19)
    DEISA: May 1st, 2004 – April 30th, 2008
    DEISA2: May 1st, 2008 – April 30th, 2011
  • 44. DEISA Partners
    BSC: Barcelona Supercomputing Centre, Spain
    CINECA: Consorzio Interuniversitario per il Calcolo Automatico, Italy
    CSC: Finnish Information Technology Centre for Science, Finland
    EPCC: University of Edinburgh and CCLRC, UK
    ECMWF: European Centre for Medium-Range Weather Forecasts, UK (international)
    FZJ: Research Centre Juelich, Germany
    HLRS: High Performance Computing Centre Stuttgart, Germany
    IDRIS: Institut du Développement et des Ressources en Informatique Scientifique (CNRS), France
    LRZ: Leibniz Rechenzentrum Munich, Germany
    RZG: Rechenzentrum Garching of the Max Planck Society, Germany
    SARA: Dutch National High Performance Computing, Netherlands
    CEA-CCRT: Centre de Calcul Recherche et Technologie, CEA, France
    KTH: Kungliga Tekniska Högskolan, Sweden
    CSCS: Swiss National Supercomputing Centre, Switzerland
    JSCC: Joint Supercomputer Center of the Russian Academy of Sciences, Russia
  • 45. DEISA 2008
    Operating the European HPC Infrastructure
    >1 PetaFlop/s aggregated peak performance
    Most powerful European supercomputers for the most challenging projects
    Top-level Europe-wide application enabling
    Grand Challenge projects performed on a regular basis
  • 46. DEISA Core Infrastructure and Services
    Dedicated High Speed Network
    Common AAA
    Single sign on
    Accounting/budgeting
    Global Data Management
    High performance remote I/O and data sharing with global file systems
    High performance transfers of large data sets (see the sketch after this list)
    User Operational Infrastructure
    Distributed Common Production Environment (DCPE)
    Job management service
    Common user support and help desk
    System Operational Infrastructure
    Common monitoring and information systems
    Common system operation
    Global Application Support
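    As a concrete flavor of the "high performance transfers of large data sets" service above, here is a hedged sketch that shells out to globus-url-copy, the classic GridFTP client commonly deployed at sites of this era; the endpoints are placeholders and the exact flags may differ per installation.
    ```python
    import subprocess

    # Hypothetical endpoints; replace with real GridFTP URLs.
    src = "gsiftp://source.example.org/scratch/run42/output.h5"
    dst = "gsiftp://dest.example.org/project/run42/output.h5"

    # -p 8 requests 8 parallel TCP streams (how GridFTP fills long
    # fat pipes); -vb prints throughput while the transfer runs.
    subprocess.run(["globus-url-copy", "-vb", "-p", "8", src, dst],
                   check=True)
    ```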
  • 47. DEISA dedicated high speed network
    [Diagram: national research networks (FUNET, SURFnet, DFN, UKERNA, GARR, RENATER, RedIris) interconnected by 1 Gb/s GRE tunnels and 10 Gb/s wavelength, routed and switched links]
  • 48. DEISA Global File System (based on MC-GPFS)
    [Diagram: heterogeneous sites joined by the global file system and GridFTP: IBM P6 and BlueGene/P under AIX with LoadLeveler (multi-cluster), NEC SX8 under Super-UX with NQS II, Cray XT4/XT5 under UNICOS/lc with PBS Pro, SGI ALTIX under Linux with PBS Pro, and IBM P5/P6/PPC systems under AIX or Linux with LoadLeveler or Maui/Slurm]
  • 49. DEISA Software Layers
    [Diagram, top to bottom:]
    Presentation layer: multiple ways to access
    Job management and monitoring layer: common production environment, single monitoring system, co-reservation and co-allocation, job rerouting, workflow management
    Data management layer: data transfer tools, data staging tools, WAN shared filesystem
    Network and AAA layers: unified AAA, network connectivity
    All layers span the DEISA sites
  • 50. Supercomputer Hardware Performance Pyramid vs. Application Enabling Requirements Pyramid
    [Diagram: two pyramids over the EU / national / local tiers]
    Capability computing will always need expert support for application enabling and optimization
    The more resource-demanding a single problem is, the higher in general the requirements for application enabling, including enhancing scalability
  • 51. DEISA Organizational Structure
    WP1 – Management
    WP2 – Dissemination, External Relations, Training
    WP3 – Operations
    WP4 – Technologies
    WP5 – Applications Enabling
    WP6 – User Environment and Support
    WP7 – Extreme Computing (DECI) and Benchmark Suite
    WP8 – Integrated DEISA Development Environment
    WP9 – Enhancing Scalability
  • 52. Evolution of Supercomputing Resources
    2004: DEISA partners’ compute resources at DEISA project start: ~30 TF aggregated peak performance
    2008: DEISA partners’ resources at DEISA2 project start: over 1 PF aggregated peak performance on state-of-the-art supercomputers
    Cray XT4 and XT5, Linux
    IBM Power5, Power6, AIX / Linux
    IBM BlueGene/P, Linux (frontend)
    IBM PowerPC, Linux (MareNostrum)
    SGI ALTIX 4700 (Itanium2 Montecito), Linux
    NEC SX8 vector system, Super UX
    Systems interconnected with dedicated 10 Gb/s network links provided by GEANT2 and NRENs
    Fixed fraction of resources dedicated to DEISA usage
  • 53. DEISA Extreme Computing Initiative (DECI)
    DECI launched in early 2005 to enhance DEISA’s impact on science and technology
    Identification, enabling, deploying and operation of “flagship” applications in selected areas of science and technology
    Complex, demanding, innovative simulations requiring the exceptional capabilities of DEISA
    Multi-national proposals especially encouraged
    Proposals reviewed by national evaluation committees
    Projects chosen on the basis of innovation potential, scientific excellence, relevance criteria, and national priorities
    Most powerful HPC architectures in Europe for the most challenging projects
    Most appropriate supercomputer architecture selected for each project
    Mitigation of the rapid performance decay of a single national supercomputer within its short lifetime of typically about 5 years, as implied by Moore’s law (rough estimate below)
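    To put that decay in numbers, a back-of-the-envelope estimate assuming the state of the art doubles roughly every 18 months (my assumption; the slide only invokes Moore's law).
    ```python
    # Relative standing of a machine over its ~5-year lifetime,
    # assuming the state of the art doubles every 18 months.
    doubling_time_years = 1.5
    for age in (0, 2, 5):
        factor = 2 ** (age / doubling_time_years)
        print(f"after {age} years: ~{factor:.0f}x behind the state of the art")
    # After 5 years a machine is roughly 10x behind, which is why an
    # infrastructure with staggered renewals across sites helps.
    ```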
  • 54. DEISA Extreme Computing Initiative
    Involvement in projects from DECI calls 2005, 2006, 2007:
    157 research institutes and universities from 15 European countries
    Austria, Finland, France, Germany, Hungary, Italy, Netherlands, Poland, Portugal, Romania, Russia, Spain, Sweden, Switzerland, UK
    with collaborators from four other continents:
    North America, South America, Asia, Australia
  • 55. DEISA Extreme Computing Initiative
    Calls for Proposals for challenging supercomputing projects from all areas of science
    DECI call 2005
    51 proposals, 12 European countries involved, co-investigator from the US
    30 million CPU-hours requested
    29 proposals accepted, 12 million CPU-hours awarded (normalized to IBM P4+)
    DECI call 2006
    41 proposals, 12 European countries involved
    co-investigators from North and South America and Asia (US, CA, AR, Israel)
    28 million CPU-hours requested
    23 proposals accepted, 12 million CPU-hours awarded (normalized to IBM P4+)
    DECI call 2007
    63 proposals, 14 European countries involved, co-investigators from North and South America, Asia and Australia (US, CA, BR, AR, Israel, AUS)
    70 million CPU-hours requested
    45 proposals accepted, ~30 million CPU-hours awarded (normalized to IBM P4+)
    DECI call 2008
    66 proposals, 15 European countries involved, co-investigators from North and South America, Asia and Australia
    134 million CPU-hours requested (normalized to IBM P4+; see the note after this slide)
    Evaluation in progress
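    The awarded hours above are “normalized to IBM P4+”. The slides do not spell out the scheme; a common approach, sketched here with invented performance ratios, is to weight raw core-hours by per-core performance relative to the reference machine.
    ```python
    # Invented per-core performance ratios relative to an IBM Power4+ core.
    relative_speed = {"IBM P4+": 1.0, "Cray XT4": 1.8, "IBM BlueGene/P": 0.5}

    def normalized_cpu_hours(machine: str, raw_core_hours: float) -> float:
        """Express raw core-hours as IBM P4+ equivalent core-hours."""
        return raw_core_hours * relative_speed[machine]

    print(normalized_cpu_hours("Cray XT4", 1e6))        # 1,800,000 P4+ hours
    print(normalized_cpu_hours("IBM BlueGene/P", 1e6))  # 500,000 P4+ hours
    ```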
  • 56. DECI Project POLYRES
    Cover Story of Nature - May 24, 2007
    Curvy membranes make proteins attractive
    For almost two decades, physicists have been on the track of membrane mediated interactions. Simulations in DEISA have now revealed that curvy membranes make proteins attractive.
    Nature 447 (2007), 461-464
    [Figure captions: proteins (red) adhere to a membrane (blue/yellow) and locally bend it; this triggers a growing invagination. Cross-section through an almost complete vesicle.]
    B. J. Reynwar et al.: Aggregation and vesiculation of membrane proteins by curvature-mediated interactions, Nature 447, 461-464 (24 May 2007), doi:10.1038/nature05840
  • 57. Achievements and Scientific Impact
    Brochures can be downloaded from http://www.deisa.eu/publications/results
  • 58. Evolution of User Categories in DEISA
    [Timeline 2002 to 2011: DEISA EoI (2002); start of FP6 DEISA (2004) with early adopters in the Joint Research Activities; single project support through the DEISA Extreme Computing Initiative; start of FP7 DEISA2 (2008) with support of virtual communities and EU projects; preparatory phase]
  • 59. Tier0/Tier1 Centers: Are there implications for the services?
    Main difference between T0 and T1 centers: policy and usage models!
    T1 centers can evolve to T0 for strategic/political reasons
    T0 machines automatically degrade to T1 level by aging
    T0 Centers
    Leadership-class European systems competing with the leading systems worldwide, cyclically renewed
    Governance structure to be provided by a European organization (PRACE)
    T1 Centers
    Leading national centers, cyclically renewed, optionally surpassing the performance of older T0 machines
    National governance structure
    Services have to be the same in T0/T1
    Because the status of a system changes over time
    For user transparency across the different systems
    (Only visible difference: some services could have different flavors for T0 and T1)
  • 60. Summary
    Evolution of this European infrastructure towards a robust and persistent European HPC ecosystem
    Enhancing the existing services, deploying new services including support for European virtual communities, and cooperating and collaborating with new European initiatives, especially PRACE
    DEISA2 as the vector for the integration of Tier-0 and Tier-1 systems in Europe
    To provide a lean and reliable turnkey operational solution for a persistent European HPC infrastructure
    Bridging worldwide HPC projects: facilitating the support of international science communities with computational needs traversing existing political boundaries
  • 61. EGEE Status April 2009
    Infrastructure
    • Number of sites connected to the EGEE infrastructure: 268
    • Number of countries connected to the EGEE infrastructure: 54
    • Number of CPUs (cores) available to users 24/7: ~139,000
    • Storage capacity available: ~25 PB disk + 38 PB tape MSS
    Users
    • Number of Virtual Organisations using the EGEE infrastructure: >170
    • Number of registered Virtual Organisations: >112
    • Number of registered users: >13,000
    • Number of people benefiting from the existence of the EGEE infrastructure: ~20,000
    • Number of jobs: >390k jobs/day
    • Number of application domains making use of the EGEE infrastructure: more than 15
  • Are we ready for the demand?
    [Diagram: evolution from testbeds to utility to productive use, across national, European e-Infrastructure and global levels]
  • 70. 38 National Grid Initiatives (www.eu-egi.org)
  • 71. EGI Objectives (1/3)
    Ensure the long-term sustainability of the European infrastructure
    Coordinate the integration and interaction between National Grid Infrastructures
    Operate the European level of the production Grid infrastructure for a wide range of scientific disciplines to link National Grid Infrastructures
    Provide global services and support that complement and/or coordinate national services
    Collaborate closely with industry as technology and service providers, as well as Grid users, to promote the rapid and successful uptake of Grid technology by European industry
  • 72. EGI Objectives (2/3)
    Coordinate middleware development and standardization to enhance the infrastructure by soliciting targeted developments from leading EU and National Grid middleware development projects
    Advise National and European Funding Agencies in establishing their programmes for future software developments based on agreed user needs and development standards
    Integrate, test, validate and package software from leading Grid middleware development projects and make it widely available
  • 73. EGI Objectives (3/3)
    Provide documentation and training material for the middleware and operations.
    Take into account developments made by national e-science projects which were aimed at supporting diverse communities
    Link the European infrastructure with similar infrastructures elsewhere
    Promote Grid interface standards based on practical experience gained from Grid operations and middleware integration activities, in consultation with relevant standards organizations
    EGI Vision Paper
    http://www.eu-egi.org/vision.pdf
  • 74. Integration and interoperability
    PRACE and EGI targeting a sustainable infrastructure
    DEISA-2 and EGEE-III are project-based
    Sometimes national stakeholders are partners in multiple initiatives
    Users do not necessarily care where they get the service as long as they get it
    Integration of PRACE and DEISA, and a transition from EGEE to EGI, are possible; going further requires creative thinking
  • 75. New HPC Ecosystem is being built…
  • 76. New market for European HPC
    The ESFRI list names 44 new research infrastructure projects, 34 of them running a preparatory phase project
    1-4 years
    1-7 MEUR * 2 (petaflop computing 10 MEUR * 2)
    Successful new research infrastructures start construction 2009-2011
    10-1000 MEUR per infrastructure
    First ones start to deploy: ESS in Lund etc.
    Existing research infrastructures are also developing
    CERN, EMBL, ESA, ESO, ECMWF, ITER, …
    Results:
    Growing RI market, considerably rising funding volume
    Need for horizontal activities (computing, data, networks, computational methods and scalability, application development,…)
    Real danger of building disciplinary silos instead of seeking IT synergy
    Several billion EUR for ICT
  • 77. Some Key Issues in building the ecosystem
    Sustainability
    EGEE and DEISA are projects with an end
    PRACE and EGI are targeted to be sustainable with no definitive end
    ESFRI and e-IRG
    How do the research side and infrastructure side work together?
    Two-directional input requested
    Requirement for horizontal services
    Let’s not create disciplinary IT silos
    Synergy required for cost efficiency and excellence
    ICT infrastructure is essential for research
    The role of computational science is growing
    Renewal and competence
    Will Europe run out of competent people?
    Will training and education programs react fast enough?
  • 78. Requirements of a sustainable HPC Ecosystem
    How to guarantee access to the top for selected groups?
    How to ensure there are competent users which can use the high end resources?
    How to involve all countries who can contribute?
    How to develop competence on home ground?
    How to boost collaboration between research and e-infrastructure providers?
    What are the principles of resource exchange (in-kind)?
    [Pyramid: European centers; national/regional centers, Grid collaboration; universities and local centers]
  • 79. Conclusions
  • 80. Some conclusions
    There are far too many acronyms in this field
    We need to collaborate in providing e-infrastructure
    From disciplinary silos to horizontal services
    Building trust between research and service providers
    Moving from project based work to sustainable research infrastructures
    Balanced approach: focus not only on computing but also on data, software development and competence
    Driven by user community needs – technology is a tool, not a target
    ESFRI list and other development plans will boost the market of ICT services in research
    Interoperability and integration of initiatives will be seriously discussed
  • 81. Final words to remember
    “The problems are not solved by computers nor by any other e-infrastructure, they are solved by people”
    Kimmo Koski, today
