10.10.28
Invited Speaker
Grand Challenges in Data-Intensive Discovery Conference
San Diego Supercomputer Center, UC San Diego
Title: High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World
La Jolla, CA
An End-to-End Campus-Scale High Performance Cyberinfrastructure for Data-Intensive Research (Larry Smarr)
12.04.19
The Annual Robert Stewart Distinguished Lecture
Iowa State University
Title: An End-to-End Campus-Scale High Performance Cyberinfrastructure for Data-Intensive Research
Ames, IA
The Missing Link: Dedicated End-to-End 10Gbps Optical Lightpaths for Clusters, Grids, and Clouds (Larry Smarr)
11.05.24
Invited Keynote Presentation
11th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing
Title: The Missing Link: Dedicated End-to-End 10Gbps Optical Lightpaths for Clusters, Grids, and Clouds
Newport Beach, CA
New Applications of SuperNetworks and the Implications for Campus Networks (Larry Smarr)
07.10.09
Speaker
Fall 2007 Internet2 Member Meeting
Town and Country Resort and Convention Center
Title: New Applications of SuperNetworks and the Implications for Campus Networks
San Diego, CA
Cloud Computing: An Alternative Platform for Scientific Computing (David Ramirez)
After an overview of its fundamental technologies, Grid Computing is presented as the platform of choice for scientific High Performance Computing (HPC). The latest offerings in Cloud Computing (CC) could make it the basis for easy-to-deploy, on-demand, and widely accessible grids, putting HPC within the reach of most scientific and research communities. A case-study framework is proposed for future development.
High Performance Cyberinfrastructure Discovery Tools for Data Intensive Research (Larry Smarr)
10.05.03
Keynote Speaker
NAE Grand Challenges Summit
Title: High Performance Cyberinfrastructure Discovery Tools for Data Intensive Research
Seattle, WA
21st Century e-Knowledge Requires a High Performance e-Infrastructure (Larry Smarr)
11.12.09
Keynote Presentation
40-year anniversary Celebration of SARA
Title: 21st Century e-Knowledge Requires a High Performance e-Infrastructure
Amsterdam, Netherlands
UC Capabilities Supporting High-Performance Collaboration and Data-Intensive Sciences (Larry Smarr)
07.10.22
University of California Council of Research
UC Irvine
Title: UC Capabilities Supporting High-Performance Collaboration and Data-Intensive Sciences
Irvine, CA
For the full video of this presentation, please visit:
http://www.embedded-vision.com/platinum-members/altera/embedded-vision-training/videos/pages/may-2016-embedded-vision-summit
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Bill Jenkins, Senior Product Specialist for High Level Design Tools at Intel, presents the "Accelerating Deep Learning Using Altera FPGAs" tutorial at the May 2016 Embedded Vision Summit.
While large strides have recently been made in the development of high-performance systems for neural networks based on multi-core technology, significant challenges in power, cost, and performance scaling remain. Field-programmable gate arrays (FPGAs) are a natural choice for implementing neural networks because they can combine computing, logic, and memory resources in a single device. Intel's Programmable Solutions Group has developed a scalable convolutional neural network reference design for deep learning systems using the OpenCL programming language built with our SDK for OpenCL. The design performance is being benchmarked using several popular CNN benchmarks: CIFAR-10, ImageNet, and KITTI.
Building the CNN with OpenCL kernels allows true scaling of the design from smaller to larger devices and from one device generation to the next. New designs can be sized using different numbers of kernels at each layer. Performance scaling from one generation to the next also benefits from architectural advancements, such as floating-point engines and frequency scaling. Thus, you achieve greater than linear performance and performance per watt scaling with each new series of devices.
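The per-layer sizing idea above can be sketched numerically. The layer shapes and kernel budget below are illustrative assumptions, not Intel's reference design; the point is only that parallel kernel instances can be allocated in proportion to each layer's compute load.

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for one conv layer (stride 1, 'same' padding)."""
    return h * w * c_in * c_out * k * k

# Hypothetical layer shapes loosely modeled on a small CIFAR-10-style network.
layers = [
    ("conv1", conv_macs(32, 32, 3, 32, 3)),
    ("conv2", conv_macs(16, 16, 32, 64, 3)),
    ("conv3", conv_macs(8, 8, 64, 128, 3)),
]

total = sum(m for _, m in layers)
budget = 64  # assumed total parallel kernel instances the device can host

# Allocate kernel instances proportionally to each layer's share of the work,
# guaranteeing at least one instance per layer.
alloc = {name: max(1, round(budget * m / total)) for name, m in layers}
for name, m in layers:
    print(f"{name}: {m / 1e6:.1f} MMACs -> {alloc[name]} kernel instances")
```

Scaling to a larger device is then a matter of raising the budget and recomputing the allocation, which mirrors the resizing-per-generation argument in the text.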
Coupling Australia’s Researchers to the Global Innovation Economy (Larry Smarr)
08.10.02
First Lecture in the
Australian American Leadership Dialogue Scholar Tour
University of Adelaide
Title: Coupling Australia’s Researchers to the Global Innovation Economy
Adelaide, Australia
Coupling Australia’s Researchers to the Global Innovation Economy (Larry Smarr)
08.10.06
Second Lecture in the
Australian American Leadership Dialogue Scholar Tour
University of Western Australia
Title: Coupling Australia’s Researchers to the Global Innovation Economy
Perth, Australia
Coupling Australia’s Researchers to the Global Innovation Economy (Larry Smarr)
08.10.17
Ninth Lecture in the
Australian American Leadership Dialogue Scholar Tour
University of Sydney
Title: Coupling Australia’s Researchers to the Global Innovation Economy
Sydney, Australia
40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility (inside-BigData.com)
In this deck from the Swiss HPC Conference, Mark Wilkinson presents: 40 Powers of 10 - Simulating the Universe with the DiRAC HPC Facility.
"DiRAC is the integrated supercomputing facility for theoretical modeling and HPC-based research in particle physics, astrophysics, cosmology, and nuclear physics, all areas in which the UK is world-leading. DiRAC provides a variety of compute resources, matching machine architecture to the algorithm design and requirements of the research problems to be solved. As a single federated Facility, DiRAC allows more effective and efficient use of computing resources, supporting the delivery of the science programs across the STFC research communities. It provides a common training and consultation framework and, crucially, provides critical mass and a coordinating structure for both small- and large-scale cross-discipline science projects, the technical support needed to run and develop a distributed HPC service, and a pool of expertise to support knowledge transfer and industrial partnership projects. The ongoing development and sharing of best practice for the delivery of productive, national HPC services with DiRAC enables STFC researchers to produce world-leading science across the entire STFC science theory program."
Watch the video: https://wp.me/p3RLHQ-k94
Learn more: https://dirac.ac.uk/
and
http://hpcadvisorycouncil.com/events/2019/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Early Benchmarking Results for Neuromorphic Computing (Desmond Yuen)
An update on the Intel Neuromorphic Research Community’s growth and benchmark results, including the addition of new corporate members and numerous new benchmarking updates computed on Intel’s neuromorphic test chip, Loihi.
In this deck from the HPC User Forum at Argonne, Andrew Siegel from Argonne presents: ECP Application Development.
"The Exascale Computing Project is accelerating delivery of a capable exascale computing ecosystem for breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. ECP is chartered with accelerating delivery of a capable exascale computing ecosystem to provide breakthrough modeling and simulation solutions to address the most critical challenges in scientific discovery, energy assurance, economic competitiveness, and national security. This role goes far beyond the limited scope of a physical computing system. ECP’s work encompasses the development of an entire exascale ecosystem: applications, system software, hardware technologies and architectures, along with critical workforce development."
Watch the video: https://wp.me/p3RLHQ-kSL
Learn more: https://www.exascaleproject.org
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Metacomputer Architecture of the Global LambdaGrid: How Personal Light Paths are Transforming e-Science (Larry Smarr)
08.05.15
Departments of Computer Science / Physics and Astronomy
University of Missouri-Columbia
Title: Metacomputer Architecture of the Global LambdaGrid: How Personal Light Paths are Transforming e-Science
Columbia, MO
Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA) (Larry Smarr)
06.04.26
Invited Talk
CONNECT Board Meeting
Title: Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA)
La Jolla, CA
Coupling Australia’s Researchers to the Global Innovation Economy (Larry Smarr)
08.10.08
Fourth Lecture in the
Australian American Leadership Dialogue Scholar Tour
Swinburne University
Title: Coupling Australia’s Researchers to the Global Innovation Economy
Hawthorn, Australia
Limiting Global Climatic Disruption by Revolutionary Change in the Global Energy System (Larry Smarr)
10.06.08
Keynote Opening Talk
Xconomy Forum: The Rise of Smart Energy
Title: Limiting Global Climatic Disruption by Revolutionary Change in the Global Energy System
La Jolla, CA
The Growing Interdependence of the Internet and Climate Change (Larry Smarr)
10.04.30
Distinguished Lecture
Scientific Computing and Imaging (SCI) Institute
University of Utah
Title: The Growing Interdependence of the Internet and Climate Change
Salt Lake City, UT
The Importance of Large-Scale Computer Science Research Efforts (Larry Smarr)
05.10.20
Talk at Public Seminar on Large-Scale NSF Research Efforts for the Future Computer Museum
Title: The Importance of Large-Scale Computer Science Research Efforts
Mountain View, CA
Applying Photonics to User Needs: The Application Challenge (Larry Smarr)
05.02.28
Invited Talk to the 4th Annual On*VECTOR International Photonics Workshop
Sponsored by NTT Network Innovation Laboratories
Title: Applying Photonics to User Needs: The Application Challenge
University of California, San Diego
Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA) (Larry Smarr)
06.07.31
Invited Talk
CONNECT Investment Community Meeting
Calit2@UCSD
Title: Cyberinfrastructure for Advanced Marine Microbial Ecology Research and Analysis (CAMERA)
La Jolla, CA
Science and Cyberinfrastructure in the Data-Dominated Era (Larry Smarr)
10.02.22
Invited talk
Symposium #1610, How Computational Science Is Tackling the Grand Challenges Facing Science and Society
Title: Science and Cyberinfrastructure in the Data-Dominated Era
San Diego, CA
Making Sense of Information Through Planetary Scale Computing (Larry Smarr)
09.03.01
Invited Presentation to the
Diamond Exchange—Brave New World
Title: Making Sense of Information Through Planetary Scale Computing
Monterey, CA
How to Terminate the GLIF by Building a Campus Big Data Freeway System (Larry Smarr)
12.10.11
Keynote Lecture
12th Annual Global LambdaGrid Workshop
Title: How to Terminate the GLIF by Building a Campus Big Data Freeway System
Chicago, IL
End-to-end Optical Fiber Cyberinfrastructure for Data-Intensive Research: Implications for Your Campus (Larry Smarr)
10.10.13
Featured Speaker EDUCAUSE 2010
Anaheim Convention Center
Title: End-to-end Optical Fiber Cyberinfrastructure for Data-Intensive Research: Implications for Your Campus
Anaheim, CA
High Performance Cyberinfrastructure Required for Data Intensive Scientific Research (Larry Smarr)
11.06.08
Invited Presentation
National Science Foundation Advisory Committee on Cyberinfrastructure
Title: High Performance Cyberinfrastructure Required for Data Intensive Scientific Research
Arlington, VA
High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biomedical Sciences (Larry Smarr)
11.04.06
Joint Presentation
UCSD School of Medicine Research Council
Larry Smarr, Calit2 & Phil Papadopoulos, SDSC/Calit2
Title: High Performance Cyberinfrastructure Enabling Data-Driven Science in the Biomedical Sciences
Using Photonics to Prototype the Research Campus Infrastructure of the Future: The UCSD Quartzite Project (Larry Smarr)
08.02.21
Presentation
Philip Papadopoulos, Larry Smarr, Joseph Ford, Shaya Fainman, and Brian Dunne
University of California, San Diego
Title: Using Photonics to Prototype the Research Campus Infrastructure of the Future: The UCSD Quartzite Project
La Jolla, CA
06.07.26
Invited Talk
Cyberinfrastructure for Humanities, Arts, and Social Sciences, A Summer Institute, SDSC
Title: The OptIPuter and Its Applications
La Jolla, CA
Riding the Light: How Dedicated Optical Circuits are Enabling New Science (Larry Smarr)
06.08.15
Invited Talk
Future of Imaging Plenary Session
SPIE Optics and Photonics Convention
Title: Riding the Light: How Dedicated Optical Circuits are Enabling New Science
San Diego, CA
Project StarGate: An End-to-End 10Gbps HPC to User Cyberinfrastructure, ANL * Calit2 * LBNL * NICS * ORNL * SDSC (Larry Smarr)
09.11.03
Report to the
Dept. of Energy Advanced Scientific Computing Advisory Committee
Title: Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Oak Ridge, TN
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research (Larry Smarr)
11.12.12
Seminar Presentation
Princeton Institute for Computational Science and Engineering (PICSciE)
Princeton University
Title: A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research
Princeton, NJ
High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering (Larry Smarr)
11.03.28
Remote Luncheon Presentation from Calit2@UCSD
National Science Board
Expert Panel Discussion on Data Policies
National Science Foundation
Title: High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering
Arlington, Virginia
SDVis and In-Situ Visualization on TACC's Stampede (Intel® Software)
Speaker: Paul Navrátil, Texas Advanced Computing Center (TACC)
The design emphasis for supercomputing systems has moved from raw performance to performance-per-watt, and as a result, supercomputing architectures are converging on processors with wide vector units and many processing cores per chip. Such processors are capable of performant image rendering purely in software. This improved capability is fortuitous, since the prevailing homogeneous system designs lack dedicated, hardware-accelerated rendering subsystems for use in data visualization. Reliance on this “software-defined” rendering capability will grow in importance since, due to growing data sizes, visualizations must be performed on the same machine where the data is produced. Further, as data sizes outgrow disk I/O capacity, visualization will be increasingly incorporated into the simulation code itself (in situ visualization).
This talk presents recent work in high-fidelity visualization using the OSPRay ray tracing framework on TACC’s local and remote visualization systems. We present work using OSPRay within the ParaView Catalyst in situ framework from Kitware, including capitalizing on opportunities to reduce the cost of data migrating through VTK filters for visualization. We highlight the performance opportunities and advantages of Intel® Advanced Vector Extensions 512, the memory system improvements possible with Intel® Xeon Phi™ processor multi-channel DRAM (MCDRAM), and the Intel® Omni-Path Architecture interconnect.
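The in situ pattern described above, rendering each timestep in memory rather than writing raw data to disk for later visualization, can be sketched as follows. This is a minimal illustration, not the ParaView Catalyst or OSPRay API; all function names are hypothetical.

```python
def simulate_step(state):
    # Stand-in for one timestep of a real solver.
    return [x * 0.5 + 1.0 for x in state]

def in_situ_render(step, state):
    # Stand-in for a software renderer (e.g., a CPU ray tracer using wide
    # vector units); here we just summarize the field into a "frame".
    return f"step {step}: min={min(state):.2f} max={max(state):.2f}"

def run(steps, state, render_every=2):
    frames = []
    for step in range(steps):
        state = simulate_step(state)
        if step % render_every == 0:   # render in place, skipping disk I/O
            frames.append(in_situ_render(step, state))
    return frames

print(run(4, [0.0, 2.0, 4.0]))
```

The design point is that the renderer sees the simulation's data while it is still resident in memory, which is exactly what makes the approach attractive once data sizes outgrow disk I/O capacity.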
Why Researchers are Using Advanced Networks (Larry Smarr)
07.07.03
Remote Talk from Calit2 to:
Building KAREN Communities for Collaboration Forum
KIWI Advanced Research and Education Network
University of Auckland, Auckland City, New Zealand
Title: Why Researchers are Using Advanced Networks
La Jolla, CA
Larry Smarr - Making Sense of Information Through Planetary Scale Computing (Diamond Exchange)
"Brave New World" DiamondExchange
February 28 - March 3, 2009
Date: Sunday, March 1, 2009
Presenter: Larry Smarr
Presentation: Making Sense of Information Through Planetary Scale Computing
Introduction to Software Defined Visualization (SDVis) (Intel® Software)
Software defined visualization (SDVis) is an open-source initiative from Intel and industry collaborators. It aims to improve the visual fidelity, performance, and efficiency of prominent visualization solutions, while supporting rapidly growing big-data workloads, from workstations to high-performance computing (HPC) supercomputing clusters, without the memory limitations and cost of GPU-based solutions.
High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World
1. "High Performance Cyberinfrastructure Enables Data-Driven Science in the Globally Networked World"
Invited Speaker
Grand Challenges in Data-Intensive Discovery Conference
San Diego Supercomputer Center, UC San Diego
La Jolla, CA
October 28, 2010
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technology
Harry E. Gruber Professor, Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
Follow me on Twitter: lsmarr
2. Abstract
Today we are living in a data-dominated world where distributed scientific instruments,
as well as supercomputers, generate terabytes to petabytes of data. It was in response to
this challenge that the NSF funded the OptIPuter project to research how user-controlled
10Gbps dedicated lightpaths (or ―lambdas‖) could provide direct access to global data
repositories, scientific instruments, and computational resources from ―OptIPortals,‖ PC
clusters which provide scalable visualization, computing, and storage in the user's
campus laboratory. The use of dedicated lightpaths over fiber optic cables enables
individual researchers to experience ―clear channel‖ 10,000 megabits/sec, 100-1000
times faster than over today’s shared Internet—a critical capability for data-intensive
science. The seven-year OptIPuter computer science research project is now over, but it
stimulated a national and global build-out of dedicated fiber optic networks. U.S.
universities now have access to high bandwidth lambdas through the National
LambdaRail, Internet2's WaveCo, and the Global Lambda Integrated Facility. A few
pioneering campuses are now building on-campus lightpaths to connect the data-
intensive researchers, data generators, and vast storage systems to each other on
campus, as well as to the national network campus gateways. I will give examples of the
application use of this emerging high performance cyberinfrastructure in genomics,
ocean observatories, radio astronomy, and cosmology.
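To make the abstract's bandwidth comparison concrete, here is a quick back-of-the-envelope calculation (ours, not from the talk) of how long moving one terabyte takes at a 10 Gbps dedicated lightpath versus a 10-100 Mbps share of the commodity Internet:

```python
def transfer_hours(size_terabytes, rate_mbps):
    """Hours needed to move `size_terabytes` at `rate_mbps` (megabits/second)."""
    bits = size_terabytes * 1e12 * 8      # terabytes -> bits
    return bits / (rate_mbps * 1e6) / 3600

# Shared-Internet low/high estimates vs. a dedicated 10 Gbps lambda
for rate in (10, 100, 10_000):
    print(f"{rate:>6} Mbps: {transfer_hours(1, rate):8.2f} hours per terabyte")
```

At 10 Mbps a terabyte takes over nine days; over a clear-channel lambda it takes under 15 minutes, which is the 100-1000x gap the abstract describes.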
3. Academic Research "OptIPlatform" Cyberinfrastructure: A 10Gbps "End-to-End" Lightpath Cloud
[Diagram: HD/4k video cameras, HD/4k telepresence, instruments, and end-user HPC/OptIPortals connected over 10G lightpaths through a campus optical switch to the National LambdaRail, reaching data repositories & clusters and HD/4k video images.]
4. The OptIPuter Project: Creating High Resolution Portals
Over Dedicated Optical Channels to Global Science Data
Scalable Adaptive Graphics Environment (SAGE)
Picture Source: Mark Ellisman, David Lee, Jason Leigh
Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI
Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
5. On-Line Resources Help You Build Your Own OptIPortal
www.optiputer.net
http://wiki.optiputer.net/optiportal
www.evl.uic.edu/cavern/sage/
http://vis.ucsd.edu/~cglx/
OptIPortals Are Built From Commodity PC Clusters and LCDs To Create a 10Gbps Scalable Termination Device
6. Nearly Seamless AESOP OptIPortal
46" NEC Ultra-Narrow Bezel 720p LCD Monitors
Source: Tom DeFanti, Calit2@UCSD
7. 3D Stereo Head Tracked OptIPortal:
NexCAVE
Array of JVC HDTV 3D LCD Screens
KAUST NexCAVE = 22.5MPixels
www.calit2.net/newsroom/article.php?id=1584
Source: Tom DeFanti, Calit2@UCSD
8. Project StarGate Goals: Combining Supercomputers and Supernetworks
• Create an "End-to-End" 10Gbps Workflow
• Explore Use of OptIPortals as Petascale Supercomputer "Scalable Workstations"
• Exploit Dynamic 10Gbps Circuits on ESnet
• Connect Hardware Resources at ORNL, ANL, SDSC
• Show that Data Need Not be Trapped by the Network "Event Horizon"
[Image: OptIPortal@SDSC; pictured: Rick Wagner, Mike Norman]
Source: Michael Norman, SDSC, UCSD
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
9. Using Supernetworks to Couple End User's OptIPortal to Remote Supercomputers and Visualization Servers
Source: Mike Norman, Rick Wagner, SDSC
[Diagram: ESnet and SDSC 10 Gb/s fiber optic networks linking three sites:]
• Argonne NL, DOE Eureka (visualization): 100 Dual Quad Core Xeon Servers; 200 NVIDIA Quadro FX GPUs in 50 Quadro Plex S4 1U enclosures; 3.2 TB RAM rendering
• NICS/ORNL, NSF TeraGrid Kraken (Cray XT5, simulation): 8,256 Compute Nodes; 99,072 Compute Cores; 129 TB RAM
• Calit2/SDSC OptIPortal1: 20 30" (2560 x 1600 pixel) LCD panels; 10 NVIDIA Quadro FX 4600 graphics cards, > 80 megapixels; 10 Gb/s network throughout
ANL * Calit2 * LBNL * NICS * ORNL * SDSC
10. National-Scale Interactive Remote Rendering of Large Datasets
[Diagram: SDSC and the ALCF linked by the ESnet Science Data Network (SDN), a > 10 Gb/s fiber optic network with dynamic VLANs configured using OSCARS.]
• Visualization at SDSC: OptIPortal (40M pixel LCDs); 10 NVIDIA FX 4600 cards; 10 Gb/s network throughout
• Rendering at ALCF: Eureka, 100 Dual Quad Core Xeon Servers; 200 NVIDIA FX GPUs; 3.2 TB RAM
Interactive Remote Rendering: Real-Time Volume Rendering Streamed from ANL to SDSC at High Resolution (4K+, 15+ FPS)
Last Year: Command-Line Driven; Fixed Color Maps, Transfer Functions; Slow Exploration of Data
Last Week: Now Driven by a Simple Web GUI; Rotate, Pan, Zoom; GUI Works from Most Browsers; Manipulate Colors and Opacity; Fast Renderer Response Time
Source: Rick Wagner, SDSC
11. NSF OOI is a $400M Program; OOI CI is the $34M Part of This
30-40 Software Engineers Housed at Calit2@UCSD
Source: Matthew Arrott, Calit2 Program Manager for OOI CI
12. OOI CI is Built on NLR/I2 Optical Infrastructure
Physical Network Implementation
Source: John Orcutt,
Matthew Arrott, SIO/Calit2
13. California and Washington Universities Are Testing a 10Gbps Connected Commercial Data Cloud
• Amazon Experiment for Big Data
– Only Available Through CENIC & Pacific NW GigaPOP
– Private 10Gbps Peering Paths
– Includes Amazon EC2 Computing & S3 Storage Services
• Early Experiments Underway
– Robert Grossman, Open Cloud Consortium
– Phil Papadopoulos, Calit2/SDSC Rocks
14. Open Cloud OptIPuter Testbed: Manage and Compute Large Datasets Over 10Gbps Lambdas
[Diagram: racks linked across CENIC, NLR C-Wave, Dragon, and MREN.]
• 9 Racks
• 500 Nodes
• 1000+ Cores
• 10+ Gb/s Now
• Upgrading Portions to 100 Gb/s in 2010/2011
Open Source SW: Hadoop, Sector/Sphere, Nebula, Thrift, GPB, Eucalyptus, Benchmarks
Source: Robert Grossman, UChicago
15. Ocean Modeling HPC In the Cloud:
Tropical Pacific SST (2 Month Ave 2002)
MIT GCM 1/3 Degree Horizontal Resolution, 51 Levels, Forced by NCEP2.
Grid is 564x168x51, Model State is T,S,U,V,W and Sea Surface Height
Run on EC2 HPC Instance. In Collaboration with OOI CI/Calit2
Source: B. Cornuelle, N. Martinez, C. Papadopoulos, COMPAS, SIO
16. Run Timings of Tropical Pacific: Local SIO ATLAS Cluster and Amazon EC2 Cloud

                     ATLAS       ATLAS       ATLAS        EC2 HPC     EC2 HPC
                     Ethernet,   Myrinet,    Myrinet,     Ethernet,   Ethernet,
                     NFS         NFS         Local Disk   1 Node      Local Disk
Wall Time (sec)       4711        2986        2983         14428        2379
User Time (sec)       3833        2953        2933          1909        1590
System Time (sec)      798          17          19          2764         750

ATLAS: 128-node cluster @ SIO COMPAS; Myrinet 10G, 8 GB/node, ~3 yrs old
EC2: HPC Computing Instance, 2.93 GHz Nehalem, 24 GB/node, 10GbE
Compilers: Ethernet – GNU FORTRAN with OpenMPI; Myrinet – PGI FORTRAN with MPICH1
Single-node EC2 was oversubscribed with 48 processes; all other parallel instances used 6 physical nodes, 8 cores/node. The model code has been ported to run on ATLAS, Triton (@SDSC), and in EC2.
Source: B. Cornuelle, N. Martinez, C. Papadopoulos, COMPAS, SIO
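For readers who want the comparison as ratios, a short Python snippet (the timing data is copied from the slide's table; the snippet itself is not from the talk) computes wall-clock speedups relative to the slowest ATLAS run:

```python
# Wall-clock times (seconds) from the slide's table, keyed by configuration.
wall_times = {
    "ATLAS Ethernet/NFS": 4711,
    "ATLAS Myrinet/NFS": 2986,
    "ATLAS Myrinet/local disk": 2983,
    "EC2 HPC Ethernet, 1 node": 14428,
    "EC2 HPC Ethernet/local disk": 2379,
}

# Speedup of each configuration vs. the ATLAS Ethernet/NFS baseline.
baseline = wall_times["ATLAS Ethernet/NFS"]
for config, seconds in wall_times.items():
    print(f"{config:28s} {seconds:6d} s  ({baseline / seconds:5.2f}x)")
```

The EC2 local-disk run comes out roughly twice as fast as ATLAS over Ethernet/NFS, while the oversubscribed single EC2 node is about three times slower.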
17. Using Condor and Amazon EC2 on Adaptive Poisson-Boltzmann Solver (APBS)
• APBS Rocks Roll (NBCR) + EC2 Roll + Condor Roll = Amazon VM
• Cluster extension into Amazon using Condor
[Diagram: NBCR VMs on the local cluster, with additional NBCR VMs running in the Amazon EC2 cloud; APBS + EC2 + Condor.]
Source: Phil Papadopoulos, SDSC/Calit2
18. Moving into the Clouds: Rocks and EC2
• We Can Build Physical Hosting Clusters & Multiple, Isolated Virtual Clusters:
– Can I Use Rocks to Author "Images" Compatible with EC2? (We Use Xen, They Use Xen)
– Can I Automatically Integrate EC2 Virtual Machines into My Local Cluster (Cluster Extension)?
– Submit Locally
– My Own Private + Public Cloud
• What This Will Mean
– All Your Existing Software Runs Seamlessly Among Local and Remote Nodes
– User Home Directories Can Be Mounted
– Queue Systems Work
– Unmodified MPI Works
Source: Phil Papadopoulos, SDSC/Calit2
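The "submit locally" idea above can be sketched with a Condor submit description. The following is an illustrative sketch only: the `IsEC2Node` machine attribute and the `apbs` job are hypothetical placeholders, not taken from the Calit2/NBCR deployment, though `universe`, `requirements`, and the file-transfer commands are standard Condor submit-file keywords.

```python
def make_submit_description(executable, args, allow_ec2=True):
    """Build a vanilla-universe Condor submit description as a string.

    If EC2 extension nodes are not allowed, restrict matchmaking with a
    requirements expression on a hypothetical IsEC2Node machine attribute.
    """
    requirement = "TRUE" if allow_ec2 else "(IsEC2Node =!= True)"
    lines = [
        "universe = vanilla",
        f"executable = {executable}",
        f"arguments = {args}",
        f"requirements = {requirement}",
        "should_transfer_files = YES",       # ship input/output, since EC2
        "when_to_transfer_output = ON_EXIT", # nodes lack the shared filesystem
        "queue",
    ]
    return "\n".join(lines)

# The same local submit file works whether the matched node is in the
# machine room or in Amazon; that is the point of cluster extension.
print(make_submit_description("apbs", "input.in"))
```

The user's workflow is unchanged: they write an ordinary submit file and run `condor_submit`; whether the job lands on a local Rocks node or an EC2 virtual machine is a matchmaking decision.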
19. "Blueprint for the Digital University": Report of the UCSD Research Cyberinfrastructure Design Team (April 2009)
• Focus on Data-Intensive Cyberinfrastructure
• No Data Bottlenecks: Design for Gigabit/s Data Flows
http://research.ucsd.edu/documents/rcidt/RCIDTReportFinal2009.pdf
20. Current UCSD Optical Core: Bridging End-Users to CENIC L1, L2, L3 Services
[Diagram: the Quartzite Communications Core (Year 3), with a Glimmerglass OOO wavelength-selective switch, a Force10 packet switch, and Lucent equipment connecting production and research GigE switches (each with dual 10GigE uplinks) to 10GigE cluster node interfaces, the CalREN-HPR Research Cloud (Juniper T320), and the Campus Research Cloud (4 GigE over 4-pair fiber).]
Quartzite endpoints:
• >= 60 endpoints at 10 GigE
• >= 32 packet-switched
• >= 32 switched wavelengths
• >= 300 connected endpoints
Approximately 0.5 Tbit/s arrive at the "Optical" Center of Campus. Switching is a hybrid of packet, lambda, and circuit: OOO and packet switches.
Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI)
Quartzite Network MRI #CNS-0421555; OptIPuter #ANI-0225642
21. UCSD Campus Investment in Fiber Enables Consolidation of Energy Efficient Computing & Storage
[Diagram: N x 10Gb WAN links (CENIC, NLR, I2) feeding a campus core that connects Gordon (HPD system), a cluster condo, DataOasis central storage, Triton (petascale data analysis), scientific instruments, digital data collections, campus lab clusters, and OptIPortal tile display walls.]
Source: Philip Papadopoulos, SDSC/Calit2
22. UCSD Planned Optical Networked Biomedical Researchers and Instruments
• Connects at 10 Gbps:
– Microarrays
– Genome Sequencers
– Mass Spectrometry
– Light and Electron Microscopes
– Whole Body Imagers
– Computing
– Storage
[Campus map: sites include the CryoElectron Microscopy Facility, San Diego Supercomputer Center, Cellular & Molecular Medicine East and West, Calit2@UCSD, Bioengineering, Radiology Imaging Lab, National Center for Microscopy & Imaging, Center for Molecular Genetics, Pharmaceutical Sciences Building, and Biomedical Research.]
23. Moving to a Shared Campus Data Storage and Analysis Resource: Triton Resource @ SDSC
• Large Memory PSDAF (x28): 256/512 GB/sys; 9 TB total; 128 GB/sec; ~9 TF
• Shared Resource Cluster (x256): 24 GB/node; 6 TB total; 256 GB/sec; ~20 TF
• Large Scale Storage: 2 PB; 40-80 GB/sec; 3,000-6,000 disks; Phase 0: 1/3 PB, 8 GB/s
Connected to UCSD Research Labs over the Campus Research Network.
Source: Philip Papadopoulos, SDSC/Calit2
24. Calit2 Microbial Metagenomics Cluster: Next Generation Optically Linked Science Data Server
• 512 Processors, ~5 Teraflops
• ~200 Terabytes Storage (Sun X4500)
• 1GbE and 10GbE switched/routed core
Source: Phil Papadopoulos, SDSC, Calit2
25. Calit2 CAMERA Automatic Overflows into SDSC Triton
[Diagram: the CAMERA-managed job submit portal (VM) @ Calit2 transparently sends jobs to a submit portal on the Triton Resource @ SDSC; a 10Gbps direct mount means CAMERA == DATA, with no data staging.]
26. Prototyping Next Generation User Access and Large Data Analysis Between Calit2 and U Washington
Ginger Armbrust's Diatoms: Micrographs, Chromosomes, Genetic Assembly
iHDTV: 1500 Mbits/sec Calit2 to UW Research Channel Over NLR
Photo Credit: Alan Decker, Feb. 29, 2008
27. Rapid Evolution of 10GbE Port Prices Makes Campus-Scale 10Gbps CI Affordable
• Port Pricing is Falling
• Density is Rising – Dramatically
• Cost of 10GbE Approaching Cluster HPC Interconnects
[Chart of price per port over time: 2005, $80K/port, Chiaro (60 max); 2007, $5K, Force 10 (40 max); 2009, ~$1,000 (300+ max); 2010, $500 and $400, Arista (48 ports)]
Source: Philip Papadopoulos, SDSC/Calit2
28. 10G Switched Data Analysis Resource: Data Oasis (RFP Responses Due 10/29/2010)
[Diagram: a 10G switched fabric linking Triton, Trestles, Dash, Gordon, the OptIPuter, RCN, CalREN, and colo connections (2 to 100 links each) to existing 1500-2000 TB storage and the planned > 40 GB/s Oasis storage.]
Oasis Procurement (RFP):
• Phase 0: > 8 GB/s sustained, today
• RFP for Phase 1: > 40 GB/sec for Lustre
• Nodes must be able to function as Lustre OSS (Linux) or NFS (Solaris)
• Connectivity to network is 2 x 10GbE/node
• Likely reserve dollars for inexpensive replica servers
Source: Philip Papadopoulos, SDSC/Calit2