Coupling Australia’s Researchers to the Global Innovation Economy

First Lecture in the Australian American Leadership Dialogue Scholar Tour
University of Adelaide, Adelaide, Australia
October 2, 2008



  1. 1. “Coupling Australia’s Researchers to the Global Innovation Economy” First Lecture in the Australian American Leadership Dialogue Scholar Tour University of Adelaide Adelaide, Australia October 2, 2008 Dr. Larry Smarr Director, California Institute for Telecommunications and Information Technology Harry E. Gruber Professor, Dept. of Computer Science and Engineering Jacobs School of Engineering, UCSD
  2. 2. Abstract An innovation economy begins with the “pull toward the future” provided by a robust public research sector. While the shared Internet has been rapidly diminishing Australia’s “tyranny of distance,” the 21st Century global competition, driven by public research innovation, requires Australia to have high performance connectivity second to none for its researchers. A major step toward this goal has been achieved during the last year through the Australian American Leadership Dialogue (AALD) Project Link, establishing a 1 Gigabit/sec dedicated end-to-end connection between a 100 megapixel OptIPortal at the University of Melbourne and Calit2@UC San Diego over AARNet, Australia's National Research and Education Network. From October 2-17 Larry Smarr, as the 2008 Leadership Dialogue Scholar, is visiting Australian universities from Perth to Brisbane in order to oversee the launching of the next phase of the Leadership Dialogue’s Project Link—the linking of Australia’s major research intensive universities and the CSIRO to each other and to innovation centres around the world with AARNet’s new 10 Gbps access product. At each university Dr. Smarr will facilitate discussions on what is needed in the local campus infrastructure to make this ultra-broadband available to data intensive researchers. With this unprecedented bandwidth, Australia will be able to join emerging global collaborative research—across disciplines as diverse as climate change, coral reefs, bush fires, biotechnology, and health care—bringing the best minds on the planet to bear on issues critical to Australia’s future.
  3. 3. The 20 Year Pursuit of a Dream: Shrinking the Planet. “What we really have to do is eliminate distance between individuals who want to interact with other people and with other computers.” ― Larry Smarr, Director, NCSA. • Televisualization: – Telepresence – Remote Interactive Visual Supercomputing – Multi-disciplinary Scientific Visualization. “We’re using satellite technology…to demo what it might be like to have high-speed fiber-optic links between advanced computers in two different geographic locations.” ― Al Gore, Senator and Chair, US Senate Subcommittee on Science, Technology and Space. [SIGGRAPH 1989 demonstration linking Illinois and Boston, with AT&T and Sun]
  4. 4. The OptIPuter Creates an OptIPlanet Collaboratory Using High Performance Bandwidth, Resolution, and Video: Scalable Adaptive Graphics Environment (SAGE). Just Finished Sixth and Final Year, September 2007. Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI. Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST. Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent. [Sites shown: Amsterdam, Chicago, Czech Republic]
  5. 5. OptIPuter Step I: From Shared Internet to Dedicated Lightpaths
  6. 6. The Unrelenting Exponential Growth of Data Requires an Exponential Growth in Bandwidth • “US Bancorp backs up 100 TeraBytes of financial data every night – now.” – David Grabski (VP Information Tech., US Bancorp), Qwest High Performance Networking Summit, Denver, CO, USA, June 2006 • “Each LHC experiment foresees a recorded raw data rate of 1 to several thousand TeraBytes/year.” – Dr. Harvey Newman, Professor of Physics, Caltech • “The VLA facility is now able to generate 700 Gbps of astronomical data and the Expanded VLA will reach 3200 Gigabits per second by 2009.” – Dr. Steven Durand, National Radio Astronomy Observatory, e-VLBI Workshop, MIT Haystack Observatory, Sep 2006 • “The Global Information Grid will need to store and access millions of Terabytes of data on a real-time basis by 2010.” – Dr. Henry Dardy (DOD), Optical Fiber Conference, Los Angeles, CA, USA, Mar 2006. Source: Jerry Sobieski, MAX / University of Maryland
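A back-of-envelope check makes these rates concrete. The Python sketch below is not from the deck; the backup windows are illustrative assumptions, and only the 100 TB/night figure is quoted above. It computes the sustained bandwidth needed to move a dataset within a fixed window:

    # Sustained bandwidth needed to move a dataset within a time window.
    # Window lengths are assumptions; 100 TB/night is from the slide.
    def required_gbps(terabytes: float, hours: float) -> float:
        bits = terabytes * 1e12 * 8            # decimal TB -> bits
        return bits / (hours * 3600) / 1e9     # -> gigabits per second

    print(f"100 TB in 8 h:  {required_gbps(100, 8):.1f} Gbps")   # ~27.8
    print(f"100 TB in 24 h: {required_gbps(100, 24):.1f} Gbps")  # ~9.3

Even with a full 24-hour window, the nightly backup alone needs roughly 9 Gbps sustained, which is why such flows belong on dedicated lightpaths rather than the shared Internet.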
  7. 7. Shared Internet Bandwidth: Unpredictable, Widely Varying, Jitter, Asymmetric. [Scatter plot: measured bandwidth in Mbps from user computers to a Stanford gigabit server (http://netspeed.stanford.edu/), inbound vs. outbound, for Australia, Canada, Czech Rep., India, Japan, Korea, Mexico, Moorea, Netherlands, Poland, Taiwan, and the United States; normal Internet connections fall 100-1000x below the server limit, and the time to move a terabyte spans 12 minutes to 10 days across the measured range.] Data Intensive Sciences Require Fast, Predictable Bandwidth. Source: Larry Smarr and Friends
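The 12-minutes-to-10-days spread on the chart is straightforward arithmetic. A minimal sketch (assuming decimal units, 1 TB = 8 x 10^6 megabits) that reproduces the chart's endpoints:

    # Time to move one terabyte at bandwidths spanning the chart's range.
    def hours_per_terabyte(mbps: float) -> float:
        return 8e6 / mbps / 3600               # 1 TB = 8e6 megabits

    for mbps in (10_000, 1_000, 100, 10):
        h = hours_per_terabyte(mbps)
        if h < 1:
            label = f"{h * 60:.0f} minutes"
        elif h < 48:
            label = f"{h:.1f} hours"
        else:
            label = f"{h / 24:.1f} days"
        print(f"1 TB at {mbps:>6} Mbps ~ {label}")

This prints roughly 13 minutes at 10 Gbps down to 9.3 days at 10 Mbps, matching the chart's annotations.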
  8. 8. Dedicated Optical Channels (“Lambdas”) Make High Performance Cyberinfrastructure Possible: wavelength division multiplexing (WDM) carries many independent channels on a single fiber, each on its own wavelength, related by c = λ × f. Source: Steve Wallach, Chiaro Networks
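The slide's relation fixes each lambda's carrier frequency. A worked example for the 1550 nm window used by DWDM systems (standard textbook values, not from the deck):

    f = \frac{c}{\lambda} = \frac{2.998 \times 10^{8}\ \mathrm{m/s}}{1550 \times 10^{-9}\ \mathrm{m}} \approx 193.4\ \mathrm{THz}

On the ITU 100 GHz grid, the corresponding channel spacing in wavelength is \Delta\lambda = \lambda^{2}\,\Delta f / c \approx 0.8 nm, which is how WDM packs tens of independent 10 Gbps channels onto a single fibre.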
  9. 9. 9Gbps Out of 10Gbps Disk-to-Disk Performance Using LambdaStream between EVL and Calit2. [Bar chart: throughput in Gbps for CAVEWave and TeraWave in both directions.] CAVEWave: 20 senders to 20 receivers (point to point); effective throughput 9.01 Gbps San Diego to Chicago (450.5 Mbps disk-to-disk per stream) and 9.30 Gbps Chicago to San Diego (465 Mbps per stream). TeraGrid: 20 senders to 20 receivers (point to point); effective throughput 9.02 Gbps San Diego to Chicago (451 Mbps per stream) and 9.22 Gbps Chicago to San Diego (461 Mbps per stream). Dataset: 220GB Satellite Imagery of Chicago courtesy USGS; each file is a 5000 x 5000 RGB image with a size of 75MB, i.e. ~3000 files. Source: Venkatram Vishwanath, UIC EVL
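The aggregate and per-stream figures above are mutually consistent, as a quick cross-check shows (all values copied from the slide):

    # Cross-check of the LambdaStream results quoted on the slide.
    streams, per_stream_mbps = 20, 450.5       # San Diego -> Chicago, CAVEWave
    print(f"{streams * per_stream_mbps / 1e3:.2f} Gbps aggregate")  # 9.01

    dataset_gb, file_mb = 220, 75
    print(f"~{dataset_gb * 1000 // file_mb} files")  # 2933, i.e. ~3000 files

Filling 90% of a 10 Gbps wave disk-to-disk takes this kind of parallelism because single-stream disk and TCP throughput sit well below the optical line rate.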
  10. 10. Dedicated 10Gbps Lambdas Provide Cyberinfrastructure Backbone for U.S. Researchers: 10 Gbps per User is ~200-1000x Shared Internet Throughput. Interconnects Two Dozen State and Regional Optical Networks. Internet2 Dynamic Circuit Network Under Development. NLR 40 x 10Gb Wavelengths, Expanding with Darkstrand to 80
  11. 11. Global Lambda Integrated Facility: 1 to 10G Dedicated Lambdas Interconnect Global Public Research Innovation Centers. Source: Maxine Brown, UIC and Robert Patterson, NCSA
  12. 12. AARNet Provides the National and Global Bandwidth Required Between Campuses: 25 Gbps to US; 60 Gbps Brisbane - Sydney - Melbourne; 30 Gbps Melbourne - Adelaide; 10 Gbps Adelaide - Perth
  13. 13. OptIPuter Step II: From User Analysis on PCs to OptIPortals
  14. 14. My OptIPortal™ – Affordable Termination Device for the OptIPuter Global Backplane • 20 Dual CPU Nodes, 20 24” Monitors, ~$50,000 • 1/4 Teraflop, 5 Terabyte Storage, 45 Mega Pixels--Nice PC! • Scalable Adaptive Graphics Environment (SAGE), Jason Leigh, EVL-UIC. Source: Phil Papadopoulos, SDSC, Calit2
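The 45-megapixel figure is simple tile arithmetic, and doubles as a sizing rule for anyone assembling their own wall. In the sketch below the 1920 x 1200 panel resolution is an assumption (the usual 24-inch format of that era); the tile count and price come from the slide:

    # OptIPortal sizing: total pixels and cost per megapixel.
    tiles, w, h, cost = 20, 1920, 1200, 50_000   # 1920x1200 panels assumed
    mpix = tiles * w * h / 1e6
    print(f"{mpix:.1f} Mpixels")                 # ~46, the slide's "45"
    print(f"~${cost / mpix:,.0f} per Mpixel")    # ~$1,100 per megapixel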
  15. 15. On-Line Resources Help You Build Your Own OptIPuter www.optiputer.net http://wiki.optiputer.net/optiportal www.evl.uic.edu/cavern/sage http://vis.ucsd.edu/~cglx/
  16. 16. Students Learn Case Studies in the Context of Diverse Medical Evidence: UIC Anatomy Class
  17. 17. CoreWall: Use of OptIPortal in Geosciences. Using High Resolution Core Images to Study Paleogeology, Learning about the History of the Planet to Better Understand Causes of Global Warming. Deployed in Antarctica. www.corewall.org
  18. 18. Group Analysis of Global Change Supercomputer Simulations: Latest Atmospheric Data is Displayed for Classes, Research Meetings, and Lunch Gatherings - A Truly Communal Wall. Source: U of Michigan Atmospheric Sciences Department
  19. 19. Using HIPerWall OptIPortals for Humanities and Social Sciences. Software Studies Initiative, Calit2@UCSD: Interface Designs for Cultural Analytics Research Environment, Jeremy Douglass (top) and Lev Manovich (bottom). Calit2@UCI 200 Mpixel HIPerWall: Second Annual Meeting of the Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC II), UC Irvine, May 23, 2008
  20. 20. Calit2 3D Immersive StarCAVE OptIPortal: Enables Exploration of High Resolution Simulations. Connected at 50 Gb/s to Quartzite. 15 Meyer Sound Speakers + Subwoofer; 30 HD Projectors! Passive Polarization--Optimized the Polarization Separation and Minimized Attenuation. Cluster with 30 Nvidia 5600 Cards--60 GB Texture Memory. Source: Tom DeFanti, Greg Dawe, Calit2
  21. 21. OptIPuter Step III: From YouTube to Digital Cinema Streaming Video
  22. 22. Traffic From YouTube on a Typical Day: Several Hundred Million Downloads per Day, But Each is Small. What if Users Need to Stream HD Video? Slide From Chris Hancock, CEO AARNet
  23. 23. AARNet Pioneered Uncompressed HD VTC with UWashington Research Channel--Supercomputing 2004. [Sites: Canberra, Pittsburgh]
  24. 24. e-Science Collaboratory Without Walls Enabled by iHDTV Uncompressed HD Telepresence 1500 Mbits/sec Calit2 to UW Research Channel Over NLR May 23, 2007 John Delaney, PI LOOKING, Neptune Photo: Harry Ammons, SDSC
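The 1500 Mbits/sec figure is essentially the SMPTE 292M serial rate for uncompressed HD. The sketch below reconstructs it, assuming 1080i at 30 frames/sec with 10-bit 4:2:2 sampling clocked across the full raster including blanking (standard HD-SDI parameters, not stated on the slide):

    # Why uncompressed HD telepresence runs at ~1.5 Gbps (SMPTE 292M).
    samples_per_line, lines = 2200, 1125   # full 1080i raster incl. blanking
    frames, bits = 30, 10                  # 30 frames/s, 10-bit samples
    rate = samples_per_line * lines * frames * 2 * bits   # Y plus Cb/Cr
    print(f"{rate / 1e9:.3f} Gbps")        # 1.485 Gbps ~ "1500 Mbits/sec"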
  25. 25. HD Talk to Monash University from Calit2 July 30, 2008 July 31, 2008
  26. 26. OptIPuter Step IV: Integration of Lightpaths, OptIPortals, and Streaming Media
  27. 27. The Calit2 OptIPortals at UCSD and UCI Are Now a Gbit/s HD Collaboratory. NASA Ames Visit, Feb. 29, 2008. HiPerVerse: First ½ Gigapixel Distributed OptIPortal - 124 Tiles, Sept. 15, 2008. UCSD cluster: 15 x Quad core Dell XPS with Dual nVIDIA 5600s. UCI cluster: 25 x Dual Core Apple G5
  28. 28. OptIPlanet Collaboratory Persistent Infrastructure Supporting Microbial Research. Ginger Armbrust’s Diatoms: Micrographs, Chromosomes, Genetic Assembly (U Washington). iHDTV: 1500 Mbits/sec Calit2 to UW Research Channel Over NLR. Photo Credit: Alan Decker, Feb. 29, 2008. UW’s Research Channel: Michael Wellings
  29. 29. EVL’s SAGE VisualCasting Multi-Site OptIPuter Collaboratory: CENIC CalREN-XD Workshop, Sept. 15, 2008; EVL-UI Chicago streaming 4K to U Michigan. Source: Jason Leigh, Luc Renambot, EVL, UI Chicago
  30. 30. OptIPortal Visualcasting SC08 Bandwidth Challenge Entry. On site: SARA (Amsterdam), GIST/KISTI (Korea), Osaka University, Masaryk University, Russian Academy of Science. Remote: U of Michigan, UIC/EVL, U of Queensland, Calit2. Source: Jason Leigh, EVL, UIC
  31. 31. OptIPuter Step V: The Campus Last Mile
  32. 32. How Do You Get From Your Lab to the Regional Optical Networks? “Research is being stalled by ‘information overload,’ Mr. Bement said, because data from digital instruments are piling up far faster than researchers can study [them]. In particular, he said, campus networks need to be improved. High-speed data lines crossing the nation are the equivalent of six-lane superhighways, he said. But networks at colleges and universities are not so capable. ‘Those massive conduits are reduced to two-lane roads at most college and university campuses,’ he said. Improving cyberinfrastructure, he said, ‘will transform the capabilities of campus-based scientists.’” -- Arden Bement, the director of the National Science Foundation. www.ctwatch.org
  33. 33. CENIC’s New “Hybrid Network” - Traditional Routed IP and the New Switched Ethernet and Optical Services. ~$14M Invested in Upgrade. Now Campuses Need to Upgrade. Source: Jim Dolgonas, CENIC
  34. 34. AARNet 10Gbps Access Product is Here!!! • HD and Other High Bandwidth Applications, Combined with “Big Research” Pushing Large Data Sets, Mean 1 Gbps is No Longer Adequate for All Users • Will Permit Researchers to Exchange Large Amounts of Data within Australia, and Internationally via SXTransPORT. Slide From Chris Hancock, CEO AARNet
  35. 35. AARNet’s “EN4R” – Experimental Network For Researchers • Free Access for Researchers for up to 12 Months • 2 Circuits Reserved for EN4R on Each Optical Backbone Segment • Access to North America via SXTransPORT. Source: Chris Hancock, AARNet
  36. 36. “NCN” - National Collaborative Network - Driving National Collaborative Research Infrastructure Strategy • Point to Point or Multipoint National Ethernet Service • Allows Researchers to Collaborate at Layer 2 – For Use with Applications that Don’t Tolerate IP Networks (e-VLBI) – Assists in Mitigating Firewalling and Security Concerns • Ready for Service by Q4’08. Source: Chris Hancock, AARNet
  37. 37. Connecting to 10G – AARNet 1. There are several factors involved in any decision to “connect at 10G”: a. Is it to be an Optical Circuit or General IP connection? AARNet’s IP backbone currently runs at 10G [Brisbane – Sydney – Canberra – Melbourne – Adelaide – Perth]. b. Is AARNet’s optical backbone within reach? AARNet’s optical backbone is currently lit with at least 20G to 30G [Brisbane – Sydney – Canberra – Melbourne – Adelaide]. c. How close is the relevant PoP? IP and optical PoPs may be at different locations – AARNet 10G for both is only provisioned to the PoP today. 2. Connection to the PoP, 5 categories: a. Co-located – like ANU and UTS: a patch cord is simply put in place to connect the customer. b. Metro – AARNet would use existing dark fibre where available, or use passive DWDM systems to connect the customer in. c. Regional – AARNet would use a 10G DWDM circuit on the regional optical network. d. Managed Services – a customer could choose to procure a managed 10G service from an alternative carrier to the AARNet PoP (unlikely, but AARNet will support it). e. Construction – either a dark fibre tail, or a DWDM network, or similar to meet customer needs.
  38. 38. Connecting to 10G – Customer 3. Campus Interconnection Requirements: a. 10G IP access – the customer plugs the 10G interface into their campus gateway router or firewall, or directly into their research network infrastructure (either logically or physically separated). b. 10G NCN/VPLS access – the customer plugs the 1G or 10G interface into their campus internal network or into their research network as above. They may choose to pass this through a firewall, but generally the NCN is for “trusted” parties (it’s a closed, known group). c. 10G Transmission – point to point 10G capacity, either bought or under EN4R; the customer can choose to bring this in as a regular WAN link attached to their WAN routers/switch, or directly between instruments/clusters. Since this product isn’t IP based there is no need for firewalling. 4. On-Campus Reticulation: the main options for the customer are: a. provide a physically separate network for their researchers; b. provide an overlay (MPLS/VLAN/VPLS) on campus for researchers. Both of these methods are being seen in practice and no difficulties are likely to exist for AARNet or customers in either approach; AARNet would work with any variation on these options.
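Whichever delivery and reticulation options a campus chooses, the usual first step on a new circuit is a memory-to-memory throughput test, so disk or application bottlenecks are not mistaken for network ones. Tools such as iperf do this; the Python sketch below shows the idea with plain sockets (the port number is a placeholder, and this is an illustration, not an AARNet-prescribed procedure):

    # Minimal memory-to-memory throughput probe (illustrative only;
    # use iperf or similar for real acceptance testing).
    import socket, time

    PORT, CHUNK = 5001, 1 << 20            # placeholder port, 1 MiB buffer

    def receive() -> None:                 # run on one end of the circuit
        srv = socket.create_server(("", PORT))
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while data := conn.recv(CHUNK):
            total += len(data)
        secs = time.time() - start
        print(f"{total * 8 / secs / 1e9:.2f} Gbps memory-to-memory")

    def send(host: str, seconds: float = 10.0) -> None:  # run on the other
        sock = socket.create_connection((host, PORT))
        deadline, buf = time.time() + seconds, b"\0" * CHUNK
        while time.time() < deadline:
            sock.sendall(buf)
        sock.close()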
  39. 39. To Build a Campus Dark Fiber Network— First, Find Out Where All the Campus Conduit Is!
  40. 40. The “Golden Spike” UCSD Experimental Optical Core: Ready to Couple Users to CENIC L1, L2, L3 Services. Quartzite Communications Core goals by 2008: >= 60 endpoints at 10 GigE; >= 30 packet-switched wavelengths; >= 30 switched wavelengths; >= 400 connected endpoints; approximately 0.5 Tbps arriving at the optical center of the hybrid campus network. [Diagram: the Quartzite core wavelength selective switch, Lucent and Glimmerglass production OOO switches, and a Force10 packet switch link 10GigE cluster nodes and GigE switches with dual 10GigE uplinks to CalREN-HPR, the campus research cloud, and the Cisco 6509 OptIPuter border router / Juniper T320 over 4-pair fiber.] Funded by an NSF MRI Grant. Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI)
  41. 41. Calit2 Sunlight Optical Exchange Contains Quartzite Maxine Brown, UIC OptIPuter Project Manager Feb. 21, 2008
  42. 42. Use Campus Investment in Fiber and Networks to Physically Connect Campus Resources at 10Gbps: HPC System, Cluster Condo, PetaScale Data Analysis Facility, UCSD Storage, UC Grid Pilot, Digital Collections Manager, Research Cluster, Research Instrument, and OptIPortal. Source: Phil Papadopoulos, SDSC/Calit2
  43. 43. AARNet’s Roadmap Towards 2012. Today (AARNet 3): EN4R and LightPaths research and collaboration tools; 1G customer access; P2P Ethernet network services; 10G IP backbone; near-national DWDM backbone of 40 x 10G. 1-3 Years (AARNet 3.5): D-EN4R and NCN; 10G access; L3 VPN and VPLS; 40G IP backbone; national DWDM backbone of 80 x 40G. 4-6 Years (AARNet 4): LambdaPaths; 40G access CPE; G.MPLS; 100G IP backbone; national DWDM backbone of 80 x 100G. Source: Chris Hancock, AARNet
  44. 44. OptIPuter Step VI: Applications Emerge
  45. 45. Two New Calit2 Buildings Provide New Laboratories for “Living in the Future” • “Convergence” Laboratory Facilities – Nanotech, BioMEMS, Chips, Radio, Photonics – Virtual Reality, Digital Cinema, HDTV, Gaming • Over 1000 Researchers in Two Buildings – Linked via Dedicated Optical Networks. [Photo: Calit2@UC Irvine] www.calit2.net Preparing for a World in Which Distance is Eliminated…
  46. 46. Discovering New Applications and Services Enabled by 1-10 Gbps Lambdas: iGrid 2005, The Global Lambda Integrated Facility, September 26-30, 2005, Calit2 @ University of California, San Diego. Maxine Brown, Tom DeFanti, Co-Chairs. www.igrid2005.org. 21 Countries Driving 50 Demonstrations Using 1 or 10Gbps Lightpaths; 100Gb of Bandwidth into the Calit2@UCSD Building, Sept 2005
  47. 47. iGrid Media Streaming Services: CineGrid @ iGrid2005 – 4K Distance Learning, 4K Virtual Reality, 4K Supercomputing Visualization, 4K Anime, 4K Digital Cinema. Source: Laurin Herr
  48. 48. iGrid Lambda Data Services: Sloan Sky Survey Data Transfer • SDSS-I Imaged 1/4 of the Sky in Five Bandpasses – 8000 sq-degrees at 0.4 arcsec Accuracy, ~200 GigaPixels – Detecting Nearly 200 Million Celestial Objects – Measured Spectra of >675,000 Galaxies, 90,000 Quasars, and 185,000 Stars. iGrid2005: “From Federal Express to Lambdas: Transporting Sloan Digital Sky Survey Data Using UDT,” Robert Grossman, UIC. Transferred the Entire SDSS (3/4 Terabyte) from Calit2 to Korea in 3.5 Hours—Average Speed 2/3 Gbps! www.sdss.org
  49. 49. iGrid Lambda Control Plane Services: Transform Batch to Real-Time Global e-Very Long Baseline Interferometry • Goal: Real-Time VLBI Radio Telescope Data Correlation • Achieved 512 Mbps Transfers from USA and Sweden to MIT • Results Streamed to iGrid2005 in San Diego. Optical Connections Dynamically Managed Using the DRAGON Control Plane and Internet2 HOPI Network. Source: Jerry Sobieski, DRAGON
  50. 50. iGrid Lambda Instrument Control Services: UCSD/Osaka Univ. Using Real-Time Instrument Steering and HDTV. The Most Powerful Electron Microscope in the World, in Osaka, Japan, Steered in Real Time from UCSD over the Southern California OptIPuter, with HDTV Feedback. Source: Mark Ellisman, UCSD
  51. 51. iGrid Scientific Instrument Services: Enable Remote Interactive HD Imaging of Deep Sea Vent. Canadian-U.S. Collaboration. Source: John Delaney & Deborah Kelley, UWash
  52. 52. Green Initiative: Can Optical Fiber Replace Airline Travel for Continuing Collaborations? Source: Maxine Brown, OptIPuter Project Manager
  53. 53. OptIPortals Are Being Adopted Globally: U Melbourne, U Queensland, and CSIRO Discovery Center Canberra (Australia); AIST and Osaka U (Japan); KISTI (Korea); CNIC (China); NCHC (Taiwan); UZurich; SARA (Netherlands); Brno (Czech Republic); EVL@UIC, Calit2@UCSD, and Calit2@UCI (USA)
  54. 54. New Year’s Challenge: Streaming Underwater Video From Taiwan’s Kenting Reef to Calit2’s OptIPortal. “My next plan is to stream stable and quality underwater images to Calit2, hopefully by PRAGMA 14.” -- Fang-Pang to LS, Jan. 1, 2008. Plan Accomplished! March 26, 2008. [Remote videos and local images shown March 6, 2008.] UCSD: Rajvikram Singh, Sameer Tilak, Jurgen Schulze, Tony Fountain, Peter Arzberger. NCHC: Ebbe Strandell, Sun-In Lin, Yao-Tsung Wang, Fang-Pang Lin
  55. 55. “Using the Link to Build the Link” Calit2 and Univ. Melbourne Technology Teams No Calit2 Person Physically Flew to Australia to Bring This Up! www.calit2.net/newsroom/release.php?id=1219
  56. 56. UM Professor Graeme Jackson Planning Brain Surgery for Severe Epilepsy www.calit2.net/newsroom/release.php?id=1219
  57. 57. Victoria Premier and Australian Deputy Prime Minister Asking Questions www.calit2.net/newsroom/release.php?id=1219
  58. 58. University of Melbourne Vice Chancellor Glyn Davis in Calit2 Replies to Question from Australia
  59. 59. Smarr Australian American Leadership Dialogue OptIPlanet Collaboratory Lecture Tour, October 2008. [Map: AARNet National Network] • Oct 2—University of Adelaide • Oct 6—Univ. of Western Australia • Oct 8—Monash Univ.; Swinburne Univ. • Oct 9—Univ. of Melbourne • Oct 10—Univ. of Queensland • Oct 13—Univ. of Technology Sydney • Oct 14—Univ. of New South Wales • Oct 15—ANU; AARNet; Leadership Dialogue Scholar Oration, Canberra • Oct 16—CSIRO, Canberra • Oct 16—Sydney Univ.
  60. 60. “To ensure a competitive economy for the 21st century, the Australian Government should set a goal of making Australia the pre-eminent location to attract the best researchers and be a preferred partner for international research institutions, businesses and national governments.”
  61. 61. Broadband Users in Japan: Over 10 Million Homes Have Fiber Connection, Eventually Enabling Gigabit/sec to the Home. [Chart: number of customers in millions, Dec. 2005 through Sep. 2007, for FTTH, ADSL, and CATV; FTTH will overtake ADSL soon.] Source: Takashi Shimizu, NTT Network Innovation Laboratories
  62. 62. In the Near Future, Walls of Homes and Offices will be Electroactive. Sharp Labs of America / EVL Public-Private Partnership. Chairman of Sharp: “In Ten Years' Time Entire Walls Could Be Screens,” Forbes, June 4, 2007. Studying User-Interaction Issues and Moving Image Synchronization Issues in Future Ultra High Resolution Environments
