Coupling Australia’s Researchers to the Global Innovation Economy



Eighth Lecture in the
Australian American Leadership Dialogue Scholar Tour
Australian National University
Canberra, Australia


Speaker notes:
  • 50 Mpx for A$50k
  • EXPReS-Oz: 1 Gbps lightpath to JIVE from each ATNF telescope; a 12-hour experiment sustained a data rate of 512 Mbps.
  • This is a production cluster with its own Force10 E1200 switch. It is connected to Quartzite and is labeled the “CAMERA Force10 E1200”. We built CAMERA this way because of technology deployed successfully in Quartzite.

    1. 1. “Coupling Australia’s Researchers to the Global Innovation Economy” Eighth Lecture in the Australian American Leadership Dialogue Scholar Tour, Australian National University, Canberra, Australia, October 15, 2008. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
    2. 2. Abstract An innovation economy begins with the “pull toward the future” provided by a robust public research sector. While the shared Internet has been rapidly diminishing Australia’s “tyranny of distance,” the 21st Century global competition, driven by public research innovation, requires Australia to have high performance connectivity second to none for its researchers. A major step toward this goal has been achieved during the last year through the Australian American Leadership Dialogue (AALD) Project Link, establishing a 1 Gigabit/sec dedicated end-to-end connection between a 100 megapixel OptIPortal at the University of Melbourne and Calit2@UC San Diego over AARNet, Australia's National Research and Education Network. From October 2-17 Larry Smarr, as the 2008 Leadership Dialogue Scholar, is visiting Australian universities from Perth to Brisbane in order to oversee the launching of the next phase of the Leadership Dialogue’s Project Link—the linking of Australia’s major research intensive universities and the CSIRO to each other and to innovation centres around the world with AARNet’s new 10 Gbps access product. At each university Dr. Smarr will facilitate discussions on what is needed in the local campus infrastructure to make this ultra-broadband available to data intensive researchers. With this unprecedented bandwidth, Australia will be able to join emerging global collaborative research—across disciplines as diverse as climate change, coral reefs, bush fires, biotechnology, and health care—bringing the best minds on the planet to bear on issues critical to Australia’s future.
    3. 3. “To ensure a competitive economy for the 21st century, the Australian Government should set a goal of making Australia the pre-eminent location to attract the best researchers and be a preferred partner for international research institutions, businesses and national governments.”
    4. 4. The OptIPuter Creates an OptIPlanet Collaboratory Using High Performance Bandwidth, Resolution, and Video Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent Just Finished Sixth and Final Year Scalable Adaptive Graphics Environment (SAGE) September 2007 Amsterdam Czech Republic Chicago
    5. 5. For Scientific and Engineering Details See Special Section of FGCS Journal A Dozen Peer Reviewed Articles on the OptIPuter Project Also 200 More Articles at
    6. 6. OptIPuter Step I: From Shared Internet to Dedicated Lightpaths
    7. 7. Shared Internet Bandwidth: Unpredictable, Widely Varying, Jitter, Asymmetric. [Chart: measured bandwidth from user computers to a Stanford gigabit server, in Megabits/sec, for computers in Australia, Canada, Czech Rep., India, Japan, Korea, Mexico, Moorea, Netherlands, Poland, Taiwan, and the United States; Stanford server limit shown. Source: Larry Smarr and friends.] Data-intensive sciences require fast, predictable bandwidth: dedicated lightpaths offer 100-1000x the normal Internet! Time to move a terabyte: 10 days vs. 12 minutes.
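The slide's terabyte-transfer contrast can be sanity-checked with a little arithmetic. The ~10 Mbps shared-Internet rate and the 10 Gbps lightpath rate below are illustrative assumptions, and the calculation ignores protocol overhead:

```python
# Rough check of the "time to move a terabyte" figures.
# Assumptions: 1 TB = 8e12 bits (decimal), ~10 Mbit/s for a typical
# shared-Internet path, 10 Gbit/s for a dedicated lightpath.
TERABYTE_BITS = 8e12

def transfer_time_seconds(size_bits: float, rate_bps: float) -> float:
    """Idealized transfer time, ignoring protocol overhead."""
    return size_bits / rate_bps

shared_days = transfer_time_seconds(TERABYTE_BITS, 10e6) / 86_400
lightpath_minutes = transfer_time_seconds(TERABYTE_BITS, 10e9) / 60

print(f"Shared Internet (~10 Mbps): {shared_days:.1f} days")            # ~9.3 days
print(f"Dedicated lightpath (10 Gbps): {lightpath_minutes:.1f} minutes")  # ~13.3 minutes
```

The ~9.3 days and ~13.3 minutes land close to the slide's rounded "10 days vs. 12 minutes."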
    8. 8. Dedicated 10Gbps Lightpaths Tie Together State and Regional Fiber Infrastructure NLR 40 x 10Gb Wavelengths Expanding with Darkstrand to 80 Interconnects Two Dozen State and Regional Optical Networks Internet2 Dynamic Circuit Network Under Development
    9. 9. Global Lambda Integrated Facility 1 to 10G Dedicated Lambda Infrastructure Source: Maxine Brown, UIC and Robert Patterson, NCSA Interconnects Global Public Research Innovation Centers
    10. 10. AARNet Provides the National and Global Bandwidth Required Between Campuses
      • 25 Gbps to US
      • 60 Gbps Brisbane - Sydney - Melbourne
      • 30 Gbps Melbourne - Adelaide
      • 10 Gbps Adelaide - Perth
    11. 11. Two New Calit2 Buildings Provide New Laboratories for “Living in the Future”
      • “Convergence” laboratory facilities: nanotech, BioMEMS, chips, radio, photonics; virtual reality, digital cinema, HDTV, gaming
      • Over 1000 researchers in two buildings, linked via dedicated optical networks
    UC Irvine. Preparing for a World in Which Distance is Eliminated…
    12. 12. iGrid 2005, The Global Lambda Integrated Facility: Discovering New Applications and Services Enabled by 1-10 Gbps Lambdas. September 26-30, 2005, Calit2 @ University of California, San Diego (California Institute for Telecommunications and Information Technology). Maxine Brown, Tom DeFanti, Co-Chairs. 21 Countries Driving 50 Demonstrations Using 1 or 10 Gbps Lightpaths.
    13. 13. iGrid Lambda Data Services: Sloan Sky Survey Data Transfer
      • SDSS-I imaged 1/4 of the sky in five bandpasses: 8000 sq-degrees at 0.4 arcsec accuracy, detecting nearly 200 million celestial objects (~200 gigapixels!)
      • Measured spectra of >675,000 galaxies, 90,000 quasars, and 185,000 stars
    iGrid 2005: “From Federal Express to Lambdas: Transporting Sloan Digital Sky Survey Data Using UDT,” Robert Grossman, UIC. Transferred the entire SDSS (3/4 terabyte) from Calit2 to Korea in 3.5 hours, at an average speed of 2/3 Gbps!
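As a rough check on the quoted SDSS transfer (a sketch; decimal units and zero protocol overhead assumed): 3/4 of a terabyte in 3.5 hours works out to just under 0.5 Gbps end-to-end, so the quoted 2/3 Gbps presumably describes the on-the-wire rate during active transfer rather than the wall-clock average:

```python
# Sanity-checking the SDSS transfer numbers from the slide.
# Assumption: 1 TB = 8e12 bits (decimal units).
size_bits = 0.75 * 8e12          # 3/4 terabyte
elapsed_s = 3.5 * 3600           # 3.5 hours wall clock

end_to_end_gbps = size_bits / elapsed_s / 1e9
print(f"End-to-end average: {end_to_end_gbps:.2f} Gbps")     # ~0.48 Gbps

# At the quoted 2/3 Gbps on-the-wire rate, pure transfer time would be:
wire_hours = size_bits / ((2 / 3) * 1e9) / 3600
print(f"Pure transfer at 2/3 Gbps: {wire_hours:.1f} hours")  # ~2.5 hours
```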
    14. 14. The Large Hadron Collider Uses a Global Fiber Infrastructure To Connect Its Users
      • The grid relies on optical fiber networks to distribute data from CERN to 11 major computer centers in Europe, North America, and Asia
      • The grid is capable of routinely processing 250,000 jobs a day
      • The data flow will be ~6 Gigabits/sec, or 15 million gigabytes a year, for 10 to 15 years
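The two LHC figures can be cross-checked: 15 million gigabytes per year corresponds to an average of roughly 3.8 Gbps, so the ~6 Gbps figure presumably includes headroom for peaks (decimal units assumed; both rates are my conversions, not slide values):

```python
# Converting between the slide's annual-volume and data-rate figures.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

annual_gb = 15e6                    # 15 million gigabytes per year
avg_gbps = annual_gb * 8e9 / SECONDS_PER_YEAR / 1e9
print(f"Average rate for 15M GB/year: {avg_gbps:.1f} Gbps")   # ~3.8 Gbps

gb_per_year_at_6gbps = 6e9 * SECONDS_PER_YEAR / 8e9 / 1e6
print(f"Volume at a sustained 6 Gbps: {gb_per_year_at_6gbps:.0f}M GB/year")  # ~24M GB/year
```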
    15. 15. EXPReS-Oz eVLBI Using 1 Gbps Lightpaths, October 2007: Data Streamed at 512 Mbps. Image created by Paul Boven, JIVE; satellite image: Blue Marble Next Generation, courtesy of NASA Visible Earth.
    16. 16. Next Great Planetary Instrument: The Square Kilometre Array Requires Dedicated Fiber. Worldwide transfers of 1 TByte images will be needed every minute!
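A back-of-the-envelope calculation shows why dedicated fiber is unavoidable at SKA scale: streaming a 1-terabyte image every minute requires on the order of 133 Gbps sustained, more than a dozen fully utilized 10 Gbps lightpaths (decimal units and zero overhead assumed):

```python
# What sustaining "1 TByte image per minute" implies for the network.
# Assumption: 1 TB = 8e12 bits, no protocol overhead.
required_gbps = 8e12 / 60 / 1e9
print(f"Required sustained rate: {required_gbps:.0f} Gbps")   # ~133 Gbps

# Equivalent number of fully utilized 10 Gbps lightpaths:
lightpaths = required_gbps / 10
print(f"Equivalent to {lightpaths:.1f} x 10 Gbps lightpaths")
```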
    17. 17. OptIPuter Step II: From User Analysis on PCs to OptIPortals
    18. 18. My OptIPortal™ – Affordable Termination Device for the OptIPuter Global Backplane
      • 20 dual-CPU nodes, 20 24” monitors, ~$50,000
      • 1/4 teraflop, 5 terabytes storage, 45 megapixels: nice PC!
      • Scalable Adaptive Graphics Environment (SAGE), Jason Leigh, EVL-UIC
    Source: Phil Papadopoulos, SDSC, Calit2
    19. 19. OptIPuter Scalable Displays Are Used for Multi-Scale Biomedical Imaging Green: Purkinje Cells Red: Glial Cells Light Blue: Nuclear DNA Source: Mark Ellisman, David Lee, Jason Leigh Two-Photon Laser Confocal Microscope Montage of 40x36=1440 Images in 3 Channels of a Mid-Sagittal Section of Rat Cerebellum Acquired Over an 8-hour Period 200 Megapixels!
    20. 20. Scalable Displays Allow Both Global Content and Fine Detail
    21. 21. Allows for Interactive Zooming from Cerebellum to Individual Neurons
    22. 22. On-Line Resources Help You Build Your Own OptIPuter
    23. 23. Prototyping the PC of 2015: Two Hundred Million Pixels Connected at 10Gbps Source: Falko Kuester, Calit2@UCI NSF Infrastructure Grant Data from the Transdisciplinary Imaging Genetics Center 50 Apple 30” Cinema Displays Driven by 25 Dual-Processor G5s
    24. 24. World’s Largest OptIPortal – 1/3 Billion Pixels NASA Earth Satellite Images Bushfires October 2007 San Diego
    25. 25. ASCI Brought Scalable Tiled Walls to Support Visual Analysis of Supercomputing Complexity An Early sPPM Simulation Run Source: LLNL 1999 LLNL Wall--20 MPixels (3x5 Projectors)
    26. 26. Challenge—How to Bring This Visualization Capability to the Supercomputer End User? 35Mpixel EVEREST Display ORNL 2004
    27. 27. The Livermore Lightcone: 8 Large AMR Simulations Covering 10 Billion Years “Look Back Time”
      • 1.5M SU on LLNL Thunder; generated 200 TB of data
      • 0.4M SU allocated on SDSC DataStar for data analysis alone
      • 512³ base grid, 7 levels of adaptive refinement: 65,000 spatial dynamic range
    Livermore Lightcone Tile 8. Source: Michael Norman, SDSC, UCSD
    28. 28. Using OptIPortals to Analyze Supercomputer Simulations Two 64K Images From a Cosmological Simulation of Galaxy Cluster Formation Each Side: 2 Billion Light Years Mike Norman, SDSC October 10, 2008 log of gas temperature log of gas density
    29. 29. CoreWall: Use of OptIPortal in Geosciences. Using high-resolution core images to study paleogeology, learning about the history of the planet to better understand causes of global warming. [Before/after comparison; 5 deployed in Antarctica.] Electronic Visualization Laboratory, University of Illinois at Chicago.
    30. 30. Students Learn Case Studies in the Context of Diverse Medical Evidence. UIC Anatomy Class. Electronic Visualization Laboratory, University of Illinois at Chicago.
    31. 31. OptIPuter Step III: From YouTube to Digital Cinema Streaming Video
    32. 32. AARNet Pioneered Uncompressed HD VTC with UWashington Research Channel-- Supercomputing 2004 Canberra Pittsburgh
    33. 33. e-Science Collaboratory Without Walls Enabled by iHDTV Uncompressed HD Telepresence Photo: Harry Ammons, SDSC John Delaney, PI LOOKING, Neptune May 23, 2007 1500 Mbits/sec Calit2 to UW Research Channel Over NLR
    34. 34. OptIPlanet Collaboratory Persistent Infrastructure Between Calit2 and U Washington Ginger Armbrust’s Diatoms: Micrographs, Chromosomes, Genetic Assembly Photo Credit: Alan Decker UW’s Research Channel Michael Wellings Feb. 29, 2008 iHDTV: 1500 Mbits/sec Calit2 to UW Research Channel Over NLR
    35. 35. OptIPuter Step IV: Integration of Lightpaths, OptIPortals, and Streaming Media
    36. 36. The Calit2 OptIPortals at UCSD and UCI Are Now a Gbit/s HD Collaboratory Calit2@ UCSD wall NASA Ames Visit Feb. 29, 2008 HiPerVerse: First ½ Gigapixel Distributed OptIPortal- 124 Tiles Sept. 15, 2008 UCSD cluster: 15 x Quad core Dell XPS with Dual nVIDIA 5600s UCI cluster: 25 x Dual Core Apple G5 Calit2@ UCI wall
    37. 37. Command and Control: Live Session with JPL and Mars Rover from Calit2 Source: Falko Kuester, Calit2; Michael Sims, NASA
    38. 38. New Year’s Challenge: Streaming Underwater Video From Taiwan’s Kenting Reef to Calit2’s OptIPortal. UCSD: Rajvikram Singh, Sameer Tilak, Jurgen Schulze, Tony Fountain, Peter Arzberger. NCHC: Ebbe Strandell, Sun-In Lin, Yao-Tsung Wang, Fang-Pang Lin. “My next plan is to stream stable and quality underwater images to Calit2, hopefully by PRAGMA 14.” (Fang-Pang to LS, Jan. 1, 2008.) March 6, 2008: Plan Accomplished! Local Images, Remote Videos, March 26, 2008.
    39. 39. Calit2 Microbial Metagenomics Cluster: Next-Generation Optically Linked Science Data Server. 512 processors (~5 teraflops), ~200 terabytes storage (Sun X4500), 1 GbE and 10 GbE switched/routed core. Source: Phil Papadopoulos, SDSC, Calit2
    40. 40. CAMERA’s Global Microbial Metagenomics CyberCommunity 2200 Registered Users From Over 50 Countries
    41. 41. OptIPuter Step V: The Campus Last Mile
    42. 42. CENIC’s New “Hybrid Network”: Traditional Routed IP and the New Switched Ethernet and Optical Services. ~$14M invested in the upgrade; now campuses need to upgrade. Source: Jim Dolgonas, CENIC
    43. 43. AARNet 10Gbps Access Product is Here!!!
      • HD and other high-bandwidth applications, combined with “big research” pushing large data sets, mean 1 Gbps is no longer adequate for all users
      • AARNet helps connect campus users or remote instruments
      • Will permit researchers to exchange large amounts of data within Australia, and internationally via SXTransPORT
    © 2008, AARNet Pty Ltd. Slide from Chris Hancock, CEO, AARNet
    44. 44. Use Campus Investment in Fiber and Networks to Physically Connect Campus Resources. UCSD resources at 10 Gbps: Storage, OptIPortal, Research Cluster, Digital Collections Manager, PetaScale Data Analysis Facility, HPC System, Cluster Condo, UC Grid Pilot, Research Instrument. Source: Phil Papadopoulos, SDSC/Calit2
    45. 45. Source: Maxine Brown, OptIPuter Project Manager Green Initiative: Can Optical Fiber Replace Airline Travel for Continuing Collaborations?
    46. 46. OptIPortals Are Being Adopted Globally: Russian Academy of Sciences, Moscow; SARA, Netherlands; Brno, Czech Republic; CICESE, Mexico; KISTI, Korea; AIST, Japan; CNIC, China; NCHC, Taiwan; Osaka U, Japan; U Melbourne; U Queensland; Canberra CSIRO Discovery Center last week; Monash University; and today, ANU!
    47. 47. “Using the Link to Build the Link”: Calit2 and Univ. Melbourne Technology Teams. No Calit2 Person Physically Flew to Australia to Bring This Up!
    48. 48. UM Professor Graeme Jackson Planning Brain Surgery for Severe Epilepsy
    49. 49. Smarr American Australian Leadership Dialogue OptIPlanet Collaboratory Lecture Tour, October 2008 (AARNet National Network)
      • Oct 2—University of Adelaide
      • Oct 6—Univ. of Western Australia
      • Oct 8—Monash Univ.; Swinburne Univ.
      • Oct 9—Univ. of Melbourne
      • Oct 10—Univ. of Queensland
      • Oct 13—Univ. of Technology Sydney
      • Oct 14—Univ. of New South Wales
      • Oct 15—ANU; AARNet; Leadership Dialogue Scholar Oration, Canberra
      • Oct 16—CSIRO, Canberra
      • Oct 17—Sydney Univ.
    50. 50. AARNet’s “EN4R” – Experimental Network For Researchers
      • Free access for researchers for up to 12 months
      • 2 circuits reserved for EN4R on each optical backbone segment
      • Access to North America via SXTransPORT
    Source: Chris Hancock, AARNet
    51. 51. EVL’s SAGE OptIPortal VisualCasting: Multi-Site OptIPuter Collaboratory. CENIC CalREN-XD Workshop, Sept. 15, 2008: EVL-UI Chicago and U Michigan streaming 4K. SC08 Bandwidth Challenge entry at Supercomputing 2008, Austin, Texas, November 2008; requires a 10 Gbps lightpath to each site. Source: Jason Leigh, Luc Renambot, EVL, UI Chicago
    52. 52. AARNet’s Roadmap Towards 2012 Source: Chris Hancock, AARNet
    53. 53. 21st Century Australian Information Infrastructure: Joining the Global Data-Intensive Collaboratory
      • All data-intensive Australian researchers, scientific instruments, and data repositories should have best-of-breed end-to-end connectivity
      • Today, that means 10 Gbps lightpaths
      • This requires a spirited partnership: Federal, State, Universities and CSIRO, AARNet
    The Mutuality Principle at Work!