Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure - Larry Smarr
09.11.03
Report to the Dept. of Energy Advanced Scientific Computing Advisory Committee
Title: Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Oak Ridge, TN
Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA) - Larry Smarr
06.07.31
Invited Talk
CONNECT Investment Community Meeting
Calit2@UCSD
Title: Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA)
La Jolla, CA
Towards GigaPixel Displays - Larry Smarr
06.12.13
Panelist
Panel on Issues, Challenges, and Future Directions of Multimedia Research
IEEE International Symposium on Multimedia (ISM 2006)
Title: Towards GigaPixel Displays
La Jolla, CA
The Energy Efficient Cyberinfrastructure in Slowing Climate Change - Larry Smarr
10.04.28
Invited Speaker
Community Alliance for Distributed Energy Resources
Scripps Forum, UCSD
Title: The Energy Efficient Cyberinfrastructure in Slowing Climate Change
La Jolla, CA
Opportunities for Advanced Technology in Telecommunications - Larry Smarr
06.12.07
Invited Talk
37th IEEE Semiconductor Interface Specialists Conference
Catamaran Resort Hotel
Title: Opportunities for Advanced Technology in Telecommunications
San Diego, CA
SC21: Larry Smarr on The Rise of Supernetwork Data Intensive Computing
Larry Smarr, founding director of Calit2 (now Distinguished Professor Emeritus at the University of California San Diego) and the first director of NCSA, is one of the seminal figures in the U.S. supercomputing community. What began as a personal drive, shared by others, to spur the creation of supercomputers in the U.S. for scientific use, later expanded into a drive to link those supercomputers with high-speed optical networks, and blossomed into the notion of building a distributed, high-performance computing infrastructure – replete with compute, storage and management capabilities – available broadly to the science community.
Remote Telepresence for Exploring Virtual Worlds - Larry Smarr
08.01.26
Foundational Talk
Virtual World and Immersive Environments
NASA Ames
Title: Remote Telepresence for Exploring Virtual Worlds
Mountain View, CA
Bringing Mexico Into the Global LambdaGrid - Larry Smarr
12.03.13
CENIC 2012 Conference Award Talk
2012 CENIC Innovations in Networking Award for High-Performance Research Applications: Enhancing Mexican/American Research Collaborations.
Title: Bringing Mexico Into the Global LambdaGrid
Palo Alto, CA
Why Researchers are Using Advanced Networks - Larry Smarr
07.07.03
Remote Talk from Calit2 to:
Building KAREN Communities for Collaboration Forum
Kiwi Advanced Research and Education Network (KAREN)
University of Auckland, Auckland City, New Zealand
Title: Why Researchers are Using Advanced Networks
La Jolla, CA
OptIPuter-A High Performance SOA LambdaGrid Enabling Scientific Applications - Larry Smarr
07.03.21
IEEE Computer Society Tsutomu Kanai Award Keynote
At the Joint Meeting of the: 8th International Symposium on Autonomous Decentralized Systems
2nd International Workshop on Ad Hoc, Sensor and P2P Networks
11th IEEE International Workshop on Future Trends of Distributed Computing Systems
Title: OptIPuter-A High Performance SOA LambdaGrid Enabling Scientific Applications
Sedona, AZ
Toward a Global Interactive Earth Observing Cyberinfrastructure - Larry Smarr
05.01.12
Invited Talk to the 21st International Conference on Interactive Information Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology Held at the 85th AMS Annual Meeting
Title: Toward a Global Interactive Earth Observing Cyberinfrastructure
San Diego, CA
The Jump to Light Speed - Data Intensive Earth Sciences are Leading the Way to the International LambdaGrid - Larry Smarr
05.06.14
Keynote to the 15th Federation of Earth Science Information Partners Assembly Meeting: Linking Data and Information to Decision Makers
Title: The Jump to Light Speed - Data Intensive Earth Sciences are Leading the Way to the International LambdaGrid
San Diego, CA
From the Shared Internet to Personal Lightwaves: How the OptIPuter is Transforming Scientific Research - Larry Smarr
08.04.03
Invited Talk
Cyberinfrastructure Colloquium
Clemson University
Title: From the Shared Internet to Personal Lightwaves: How the OptIPuter is Transforming Scientific Research
Clemson, SC
High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering - Larry Smarr
11.03.28
Remote Luncheon Presentation from Calit2@UCSD
National Science Board
Expert Panel Discussion on Data Policies
National Science Foundation
Title: High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering
Arlington, Virginia
Toward Greener Cyberinfrastructure - Larry Smarr
08.09.19
Invited Lecture to the Green IT Workshop
Canada-California Strategic Innovation Partnership
Title: Toward Greener Cyberinfrastructure
Palo Alto, CA
The OptiPuter, Quartzite, and Starlight Projects: A Campus to Global-Scale Testbed for Optical Technologies Enabling LambdaGrid Computing - Larry Smarr
05.03.09
Invited Talk
Optical Fiber Communication Conference (OFC2005)
Title: The OptiPuter, Quartzite, and Starlight Projects: A Campus to Global-Scale Testbed for Optical Technologies Enabling LambdaGrid Computing
Anaheim, CA
Science and Cyberinfrastructure in the Data-Dominated Era - Larry Smarr
10.02.22
Invited talk
Symposium #1610, How Computational Science Is Tackling the Grand Challenges Facing Science and Society
Title: Science and Cyberinfrastructure in the Data-Dominated Era
San Diego, CA
How to Terminate the GLIF by Building a Campus Big Data Freeway System - Larry Smarr
12.10.11
Keynote Lecture
12th Annual Global LambdaGrid Workshop
Title: How to Terminate the GLIF by Building a Campus Big Data Freeway System
Chicago, IL
High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World - Larry Smarr
10.10.28
Invited Speaker
Grand Challenges in Data-Intensive Discovery Conference
San Diego Supercomputer Center, UC San Diego
Title: High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World
La Jolla, CA
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research - Larry Smarr
11.12.12
Seminar Presentation
Princeton Institute for Computational Science and Engineering (PICSciE)
Princeton University
Title: A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research
Princeton, NJ
The Barcelona Supercomputing Center (BSC) was established in 2005 and hosts MareNostrum, one of the most powerful supercomputers in Spain. We are the pioneering supercomputing center in Spain. Our specialty is high-performance computing (HPC), and our mission is twofold: to offer supercomputing infrastructure and services to Spanish and European scientists, and to generate knowledge and technology and transfer them to society. We are a Severo Ochoa Center of Excellence, a top-tier member of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe), and we manage the Spanish Supercomputing Network (RES). As a research center, we have more than 456 experts from 45 countries, organized into four major research areas: computer sciences, life sciences, earth sciences, and computational applications in science and engineering.
In this deck from the 2019 Stanford HPC Conference, Rob Neely from Lawrence Livermore National Laboratory presents: Sierra - Science Unleashed.
"This talk will give an overview of Sierra and some of the early science results it has enabled. Sierra is an IBM system harnessing the power of over 17,000 NVIDIA Volta GPUs recently deployed at Lawrence Livermore National Laboratory and is currently ranked as the #2 system on the Top500. Before being turned over for use in the classified mission, Sierra spent months in an “open science campaign” where we got an early glimpse at some of the truly game-changing science this system will unleash – selected results of which will be presented."
Rob Neely is a Computer Scientist and Technical Manager at Lawrence Livermore National Laboratory, where he is the Weapon Simulation & Computing Program Coordinator for Computing Environments and the Associate Division Lead for the Center for Applied Scientific Computing (CASC). He is also the DOE Exascale Computing Project lead for Software Technologies Ecosystem and Delivery. He has been involved in high-performance computing for his entire 25+ year career.
Learn more: https://computation.llnl.gov/computers/sierra
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biomedical Sciences - Larry Smarr
11.04.06
Joint Presentation
UCSD School of Medicine Research Council
Larry Smarr, Calit2 & Phil Papadopoulos, SDSC/Calit2
Title: High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biomedical Sciences
In this deck from the 2017 MVAPICH User Group, Adam Moody from Lawrence Livermore National Laboratory presents: MVAPICH: How a Bunch of Buckeyes Crack Tough Nuts.
"High-performance computing is being applied to solve the world's most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand."
Watch the video: https://wp.me/p3RLHQ-hp6
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility - inside-BigData.com
In this deck from the Swiss HPC Conference, Mark Wilkinson presents: 40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility.
"DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved. As a single federated facility, DiRAC allows more effective and efficient use of computing resources, supporting the delivery of the science programs across the STFC research communities. It provides a common training and consultation framework and, crucially, provides critical mass and a coordinating structure for both small- and large-scale cross-discipline science projects, the technical support needed to run and develop a distributed HPC service, and a pool of expertise to support knowledge transfer and industrial partnership projects. The ongoing development and sharing of best practice for the delivery of productive, national HPC services with DiRAC enables STFC researchers to produce world-leading science across the entire STFC science theory program."
Watch the video: https://wp.me/p3RLHQ-k94
Learn more: https://dirac.ac.uk/
and
http://hpcadvisorycouncil.com/events/2019/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Programming Trends in High Performance Computing - Juris Vencels
Presented on June 3, 2016 @
University of Latvia, Faculty of Physics and Mathematics
Laboratory for mathematical modelling of environmental and technological processes
Enjoy, like, share, distribute, remix, tweak, credit is not required.
Riding the Light: How Dedicated Optical Circuits are Enabling New Science - Larry Smarr
06.08.15
Invited Talk
Future of Imaging Plenary Session
SPIE Optics and Photonics Convention
Title: Riding the Light: How Dedicated Optical Circuits are Enabling New Science
San Diego, CA
Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC
1. Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure (ANL * Calit2 * LBNL * NICS * ORNL * SDSC). Report to the Dept. of Energy Advanced Scientific Computing Advisory Committee, Oak Ridge, TN, November 3, 2009. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD. Twitter: lsmarr
7. Opening Up a 10Gbps Data Path from ORNL/NICS to ANL to SDSC. Connectivity provided by the ESnet Science Data Network; end-to-end coupling of the user with DOE/NSF HPC facilities.
Editor's Notes
NSF TeraGrid Review, January 10, 2006, Charlie Catlett (cec@uchicago.edu). Eureka, the visualization cluster at the ALCF: each node has 2 graphics cards, 8 processors, 32 GB RAM, a fast interconnect, and local disk. Server FLOPS = 2.0 GHz * 8 cores * 2 FLOPs per clock = 32 GFLOPS.
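The peak-performance arithmetic in the note above can be sketched as a small calculation. The formula and figures (2.0 GHz clock, 8 cores, 2 FLOPs per clock) come from the note itself; the helper name is hypothetical:

```python
# Peak-FLOPS sketch for one Eureka node, using the figures quoted in the
# editor's note: 2.0 GHz clock, 8 cores, 2 floating-point ops per clock.

def peak_gflops(clock_ghz: float, cores: int, flops_per_clock: int) -> float:
    """Theoretical peak in GFLOPS = clock (GHz) * cores * FLOPs per clock."""
    return clock_ghz * cores * flops_per_clock

if __name__ == "__main__":
    print(peak_gflops(2.0, 8, 2))  # 32.0, matching the 32 GFLOPS in the note
```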
One of Eureka's strengths is its speed and its ability to handle large data sets. The number of processors is a power of 2 for rendering plus 1 for compositing; with 2 graphics cards per node, half as many nodes as listed here are needed. Data I/O is clearly the bottleneck. When animating a single time step, the data is loaded only once, so rendering can be quite quick.
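The processor-count rule in the note above (a power-of-two set of render processes plus one compositor, with two render processes per dual-GPU node) can be sketched as follows; the function names and the example of 32 renderers are hypothetical:

```python
import math

# Sketch of the note's rule: rendering uses a power-of-two number of
# processes plus one extra process for compositing, and with 2 graphics
# cards per node each node hosts 2 of those processes.

def render_procs(k: int) -> int:
    """Total processes for 2**k renderers plus one compositor."""
    return 2 ** k + 1

def nodes_needed(total_procs: int, gpus_per_node: int = 2) -> int:
    """Nodes required when each node runs one process per graphics card."""
    return math.ceil(total_procs / gpus_per_node)

if __name__ == "__main__":
    p = render_procs(5)        # 33 processes: 32 renderers + 1 compositor
    print(p, nodes_needed(p))  # 33 processes fit on 17 dual-GPU nodes
```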