
Nectar cloud workshop ndj 20110331.2




Thoughts from the east, on research in the cloud
NeCTAR Research Cloud Workshop, 30–31 March 2011, Melbourne, Australia
Nick Jones
Genomics, Genetics, Bioinformatics, Molecular Modelling:
- NZ Genomics Ltd
- Maurice Wilkins Centre
- Alan Wilson Centre
- Virtual Institute of Statistical Genetics

Wind Energy, Geothermal & Minerals Exploration:
- GNS Exploration
- Institute for Earth Science and Engineering
- Centre for Atmospheric Research

Nanotechnology and High Technology Materials:
- MacDiarmid Institute
- Materials TRST

Earthquakes, Tsunami, Volcanoes:
- Natural Hazards Research Platform
- DEVORA Auckland Volcanic Field
- GeoNet

Human Development, Bioengineering, Social Statistics:
- National Research Centre for Growth and Development
- Auckland Bioengineering Institute
- Liggins Institute
- Malaghan Institute
- Social Sciences Data Service

Invasive Species, Water / Land Use, Emissions:
- Agricultural Greenhouse Gas Centre
- Bio-Protection Research Centre
- National Climate Change Centre

Roadmap of NZ e-Infrastructure (layered diagram)
- Genomics: NZ Genomics Ltd
- Data: Research Data Infrastructure
- Identity: Tuakiri – Research Access Federation; BeSTGRID Federation
- HPC / eScience: National eScience Infrastructure; BeSTGRID – Compute / Data Services; BlueFern BlueGene/L; NIWA HPC – P575 (Weather / Atmosphere)
- Network: KAREN Advanced Research Network
NZ's eResearch Infrastructures
HPC & Distributed Computing
- BeSTGRID, since 2006
  - $2.5M + $800k
  - Largest national grid users in Australasia
  - Built capability, established a model for sharing resources
- NZ eScience Infrastructure (NeSI)
  - ~$50M ($27M Crown)
  - 4 HPC centres across NZ
  - Signing agreements now; running for 4 years > review > ongoing…
- NZ Genomics Ltd (NZGL)
  - National genomics service
  - 4 major partners
  - Funded in 2008
  - Almost operational…
NZ Genomics Ltd
- Centralised cloud (?)
- ~$3.5M
- Services federated with NeSI
Principal Investors (diagram)
Government:
- Invest over 4 years, plus out-years
Auckland:
- People
- New cluster
- New storage
- Portion of facilities, power, depreciation costs
Canterbury:
- People
- HPC
- New storage
- Portion of facilities, power, depreciation costs
NIWA:
- People
- HPC capacity
- Storage
- Portion of facilities, power, depreciation costs
AgResearch / Otago:
- People
- New cluster
- New storage
- Portion of facilities, power, depreciation costs
UJV, governed by the Principal Investors (transparent, collaborative), providing:
- Procurement planning
- Operational management
- Data & grid middleware to manage access
- Outreach
(Diagram legend: financial flows; access. Partners contribute HPC / cluster capacity and services to the UJV; science users gain managed access.)

Managed Access (for science users)
Research institutions (non-investors):
- Access through a "single front door"
- Capacity scaled out from partners' capabilities
- Managed and metered access to resources, across all resource types
- Access fee initially calculated as partial costs of metered use of resources, and reviewed annually
Principal and Associate Investors:
- Access through a "single front door"
- Specialised concentrations of capability at each institution
- Receive government co-investment
- Capacity available to institutions reflects their level of investment
- Managed and metered access to resources, across all resource types
- Institutions' access costs covered by investment
Industry:
- Access through a "single front door"
- Capacity scaled out from partners' capabilities
- Managed and metered access to resources, across all resource types
- Access fee calculated as full costs of metered use of resources

Governance
Board (7 members):
- Chair (independent, via UJV)
- 1 independent (via UJV & MoRST)
- 1 from each Principal Investor
- 1 nominated by Associate Investors
Strategic Advisory Group (7 members):
- Chair
- 3 NZ researchers
- 3 offshore researchers
User Advisory Group:
- 4 Board-nominated researchers
Management & Services:
- Director (full-time position)
- Management team: Programme & Communications; Finance & Administration
- Teams at AgResearch, Auckland, Canterbury, and NIWA
- The following roles are covered across all teams (in varying portions): Manager / Technical Lead / Systems / Middleware / Applications
Management & Services
- Management: 4 Board-nominated researchers
- Services delivered under SLAs by four site teams, each with Manager, Technical Lead, Applications, Middleware, and Systems roles:
  - AgResearch / Otago
  - Auckland
  - Canterbury
  - NIWA
Core Functions
Land of the long white cloud…
- Within NZGL and NeSI, we have reasonable-scale investments in virtualised infrastructure
- Our aim is to build at least a federated cloud platform, if not a single research cloud (perhaps with multiple availability zones)
Migrating users' tools into the Cloud
Who are our users?
- Researchers
- HPC users
- System Administrators
- Application Developers

What are the funders' expectations?
- Facebook for scientists?
- VM and development platforms
- Scaled-out server infrastructure & development platform efficiencies and coordination
- National collaborative infrastructure

Needs and solutions depend on which layer of the cloud you're creating: SaaS, PaaS, IaaS
*aaS Examples
Galaxy (Genetic Marker Design):
- analysis modules
- workflows
- Biocommons
- multisite

(Genomics):
- analysis modules
- pipelines

- Scale out analyses?
- Self service
- Customised / templated services
- Reuse of methods?
Applications & Services + HPC
Joining the Grid to the Cloud
Issues, early 2010:
- Maturity of solution
- Overhead of VM provisioning

Cloud Workshop, 18/02/2011:
- Examines the queue, launches VMs on Nimbus, EC2, Eucalyptus
- Let users define the computational image
- Provide a library for them to share, discover, launch
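The "examine the queue, launch VMs" pattern above can be sketched as a small loop: watch an idle-job queue, and boot one VM per job from a shared, user-published image library. This is an illustrative sketch only; the names (`Job`, `ImageLibrary`, `provision`) are hypothetical stand-ins, and a real deployment would call the provisioning APIs of Nimbus, EC2, or Eucalyptus rather than appending to a list.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    image: str  # the user-defined computational image this job needs

@dataclass
class ImageLibrary:
    # Shared library where users publish images for others to discover and launch
    images: dict = field(default_factory=dict)

    def publish(self, name, location):
        self.images[name] = location

    def lookup(self, name):
        return self.images[name]

def provision(queue, library, running, capacity):
    """Examine the idle-job queue and boot one VM per job, up to capacity."""
    launched = []
    while queue and len(running) < capacity:
        job = queue.pop(0)
        # In a real system this would be a cloud API call to start an instance
        vm = {"job": job.name, "image": library.lookup(job.image)}
        running.append(vm)
        launched.append(vm)
    return launched
```

The design point is that users, not operators, define the images: the scheduler only matches queued work against the library and enforces a capacity limit.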
Research Cloud
Users' Issues & Needs
What are their issues?
- Accessibility
- Usability
- Domain specificity
- Funding sources (opex, capex, none)
- Sustainability

What are their needs?
- Self service / on demand
- Applications registry
- Ease of discovery
- Consultancy
- Migration, adaptation
- Mature service delivery
Come… meet the cloud…
Manapouri underground power station
Derinkuyu Underground City, Cappadocia, Turkey
CONSOLE OPERATOR AT SAIGON SATELLITE TERMINAL. Soldier monitors satellite traffic and selects satellite to be used.
eResearch Tools development
- In the past, staff have been systems operators and middleware developers
- Few business analysts and user-centred designers
- Who is driving product and service development?
- Who discovers needs and translates them into eResearch tools?
Where are the users now?
- Under their desks!
- In institutional facilities
- In externally supported communities

Migrating users to the Cloud
- And we want them to join us!
Migrating eResearch Tools
- Monolithic systems aren't compatible with HPC, the Grid, or the Cloud
- Applications require scaling out, whether databases, web-based analysis engines, or desktop applications
Going the last mile?
- Who is supporting the researcher to understand their needs, define their requirements, and implement their solutions?
- Are there services and applications, either already in place or under development, to meet needs and fill the capacity created?
- Who will cross the boundaries between these groups?
Who are our users?
- Researchers – technically savvy ones…
- HPC users… maybe
- System Administrators
- Application Developers

"All researchers that need to use eResearch tools already know how to, and are doing so…"
"If you can't write the code yourself, how can you trust the results? Surely you must be able to write the code!?"
What do I talk about, when I talk about Cloud?
Instead of the machinery:
- VMs, hypervisors, schedulers, block storage, virtual networks, hosts, networks, firewalls

How about capabilities:
- Scalability: methods, communities
- Self service
- Sustainability (through preservation)

Where do we do this well?
- Data? Preservation, curation
- HPC? Algorithms, methods, hardware optimisation techniques

When communicating with end users, the common vocabularies in other communities aren't about the machinery; they're higher level…
What do I talk about… Scalability
- What is the equivalent of algorithms and optimal approaches in the Cloud era?
- The knowledge, skills, and methods to optimally use the cloud:
  - How to be elastic (scale up/down, on demand)
  - How to manage failure
  - Multi-tenancy
  - Building communities, leveraging network effects
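Two of the skills listed above, elasticity and failure management, can be made concrete in a few lines. This is a minimal sketch under stated assumptions: `desired_workers` and `run_with_retry` are hypothetical names, and a real system would drive a cloud provider's instance API from these decisions rather than returning numbers.

```python
import math

def desired_workers(queued_jobs, jobs_per_worker, min_workers=1, max_workers=10):
    """Elasticity: pick a worker count from current demand, scaling out as
    the queue grows and back down (to a floor) as it drains."""
    need = math.ceil(queued_jobs / jobs_per_worker) if queued_jobs else 0
    return max(min_workers, min(max_workers, need))

def run_with_retry(task, workers, attempts=3):
    """Failure management: if one worker (here, a callable standing in for a
    node) fails, hand the task to another instead of losing it."""
    last_error = None
    for worker in workers[:attempts]:
        try:
            return worker(task)
        except RuntimeError as exc:
            last_error = exc  # node failed; fall through to the next one
    raise last_error
```

The point of the sketch is that these are policies, not infrastructure: the same few lines of decision logic sit in front of whichever provisioning machinery a site runs.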
What do I talk about… Preservation
What are the research processes and artefacts we are interacting with?
- Methodologies… enshrined as VMs?
  - e.g. research codes, which take cuts and bruises to re-instantiate
  - = knowledge artefacts that have value and are scarce
- What about workflows? Shouldn't we wait until we have enshrined these methods into sophisticated workflows and ontologies?
  - …so we should throw away any method until it reaches a predefined threshold of maturity? Because before then, it has no value?
- Oh, so all methods are important?
  - That's up to the researcher to say… method construction and archiving, self service.
- Ahh, so we're close to being able to archive and preserve rich research methods?
  - For those that enshrine their methods in software-based systems… YES!
- Reusable workflows and services for large collaborative communities?
  - …like Virtual Organisations, they only suit certain levels of scale and maturity
- Individual research codes, workflows, pipelines
  - = research notebooks, for those who efficiently enshrine their thoughts and ideas in code
  - These are people we should support! And their experiments things we should preserve…
What did we learn from the grid?
So… what do we need to get right, to be successful?
Get the team right
- Composition & coordination of a distributed team is essential
- Fully committed FTEs
- Cross-site responsibilities, including shared systems administration
- Strong leadership and programme management
Strong guidance
- Which resources are getting used, and why?
- A prescriptive framework is necessary to ensure participation
- …need a well-defined service model
Should we build new institutions?
Why do we want to build institutions?
- Scale efficiencies
- Sustainability
- …and reliable, targeted, accountable services

- We often take project funding, and aspire to build new institutions
- Creating institutions is, and will continue to be, expensive and difficult…
- Our institutions are long lived, and can absorb most technologies, given care and enough time to mature
Really think about coordination
Why do we need Government money? Scale?
- We're partly seeing a coordination failure (coordination of capital and resources)
- It is the incentive to bring our institutions into alignment…

Funding:
- Work with funders to align opex and capex; the Crown and co-investors fund both in NZ, clarifying the intent of investments
- Otherwise, whoever funds the operating eats the cake
- Ensure KPIs incentivise sustainability

Incentives and obligations:
- …clarify expectations to co-investors and the sector

"Single Front Door":
- Coordinate with the broadest-scale IT community that has responsibility and capability
- Research labs can be good, though they will often be introspective / focus on local solutions / research outputs
- Drive innovations into core IT Services groups; strong coordination from top-level sponsors required
- Collaborate nationally and internationally
Usability really counts!!
Make it easy!
- Authentication … Authentication … Authentication …
- Consistent user interfaces?
- Don't get in my way
- Don't break the abstraction
- Don't give me middleware!!
Build communities
- Is the aim to build aggregations of communities?
- Support national-scale communities? …but these may not exist
- Will need to build communities (developers, end users, administrators, leaders)
- Who will mediate within and between communities, seek out commonalities, and take ownership of developments?
Support the long tail
- There's an endless list of applications and services that individual researchers need and want
- We often focus on the large communities, common platforms, and scalability
- To make a Research Cloud useful for the majority of the community, we need to design for great diversity
- We can do this incrementally
- We're looking to learn, fast, what works for researchers and what doesn't
- >>> we need agility, and very strong engagement with user communities
Thoughts from the east, on research in the cloud
Thanks
NeCTAR Research Cloud Workshop, 30–31 March 2011, Melbourne, Australia
Nick Jones
Image Credits
- New Zealand history online, Manatū Taonga. Licensed by Manatū Taonga for re-use under the Creative Commons Attribution-Non-Commercial 3.0 New Zealand Licence.
- Communications Electronics 1962-1970, Department of the Army
- Travel Blog, 7.08.2010
Goals
- Needs of users, as identified at this workshop
- Policy implications
- Applications & services that are in common national usage
- Enable provision of excellent cloud services to researchers
- Nodes operating within a prescribed framework

Framework:
- Consistent user interface to the range of applications and services running at the distributed nodes

Applications and Services:
- Selected by panel
- Data analysis, visualisation, collaboration, security, application and service platforms, portals / interfaces to HPC and commercial cloud providers
- Infrastructure as a Service
- Hosting Platform as a Service offerings for research communities

Distributed Nodes:
- Selected by panel
- Prescriptive operating model
- Lead Node will create & operate the framework, and monitor service delivery
- Access, interop, application migration, security, licensing, accounting, implementation practicalities, monitoring and maintenance
- Common layer that each node can interoperate with