In this video from the 2017 Argonne Training Program on Extreme-Scale Computing, Keren Bergman from Columbia University presents: Silicon Photonics for Extreme Computing - Challenges and Opportunities.
"As they confront ever more complex and data-intensive problems, scientists and researchers increasingly look to the next generation of supercomputing--the high-end segment of high-performance computing. That next generation will play out in so-called exaflop computers--machines capable of executing at least a quintillion (1E18) floating-point operations per second. Such a computer would represent a thousand-fold improvement over the current standard, the petaflop machines that first came on line in 2008. But while exaflop computers already appear on funders' technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge."
Watch the video: https://wp.me/p3RLHQ-hvV
Learn more: https://extremecomputingtraining.anl.gov/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Ayar Labs TeraPHY: A Chiplet Technology for Low-Power, High-Bandwidth In-Pack... (inside-BigData.com)
Today at Hot Chips 2019, Intel engineers presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.
"To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. "Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”
Learn more: https://www.intel.ai/accelerating-for-ai/?elq_cid=1192980
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Benjamin Wohlfeil's presentation at the EPIC Online Technology Meeting explored how innovation in co-packaged optics is addressing key data center interconnect challenges.
*(This PPT was prepared for a 15-minute presentation.)
The topic of photonic integrated circuit (PIC) technology is so vast that it cannot be explained completely in a matter of minutes, so it is better to focus on one particular type of PIC throughout the presentation (the technology changes with the substrate material, and it is important to maintain a consistent flow).
Research the topic well, do your best, and leave the rest. :)
In this slidecast, Brian Welch from Luxtera presents: Silicon Photonics for HPC Interconnects. Luxtera is the first company to overcome the complex technical obstacles involved with integrating high performance optics directly with silicon electronics on a mainstream CMOS chip, bringing direct “fiber to the chip” connectivity to market.
Silicon photonics is an evolving technology in which data is transferred among computer chips by optical signals, which can carry far more data in less time than electrical conductors.
This presentation emphasizes the basics of silicon photonics.
SiC & GaN - Technology & Market Knowledge Update (PntPower.com)
Wide-bandgap semiconductors are increasingly used in power electronics. Silicon carbide and gallium nitride are now in the race to replace silicon. With huge R&D investments and start-ups facing established players, market and technology knowledge becomes key. Point The Gap presented a SiC & GaN market knowledge update: all the information needed to understand this market, gathered in a few slides by a power electronics market intelligence specialist. Also available on www.PointThePower.com.
Companies covered include Transphorm, Infineon, EPC Corp., Cambridge Electronics, Rohm, Cree, Toyota, FinSix, Zolt, Avogy, and others.
DesignCon 2019: 112-Gbps Electrical Interfaces - An OIF Update on CEI-112G (Leah Wilkinson)
Brian Holden, Kandou Bus
Cathy Liu, Broadcom
Steve Sekel, Keysight
Nathan Tracy, TE Connectivity
Epitaxy Growth Equipment for More Than Moore Devices: Technology and Market Tr... (Yole Developpement)
Driven by microLED displays and power devices, epitaxy equipment shipment volumes will multiply more than threefold over the next five years.
More info on: https://www.i-micronews.com/products/epitaxy-growth-equipment-for-more-than-moore-devices-technology-and-market-trends-2020/
Beyond communication, silicon photonics is penetrating consumer and automotive – heading to $1.1B in 2026.
More information: https://www.i-micronews.com/products/silicon-photonics-2021/
Market will more than double by 2025 driven by heavy investments in data centers.
More information: https://www.i-micronews.com/products/optical-transceivers-for-datacom-telecom-2020/
This narrated PowerPoint presentation examines the losses due to non-linear effects in optical fibers. The material will be useful for KTU final-year B.Tech students preparing for the subject EC 405, Optical Communications.
We offer a comprehensive high-speed networking solution, including 3.2T co-packaged optics (CPO); 100G, 200G, 400G, and 800G transceivers; DACs, AOCs, ACCs, and loopback modules. We can support your research (R&D) stage product development, DVT/EVT pre-production testing, mass production, and final application use.
Contact us for more product information.
Deep analysis of the 400Gb optical transceiver from a leading Chinese company.
More information: https://www.systemplus.fr/reverse-costing-reports/innolights-400g-qsfp-dd-optical-transceiver/
OIF's Network Operator Working Group Chair Junjie Li spoke on delivering 400ZR, FlexE, and 112 Gbps electrical interfaces to the datacom/telecom industry on September 4, 2019, during the CIOE (China International Optoelectronic Exposition).
#400ZR #FlexE #112G #datacom #telecom #interoperability
Electrical interfaces at 112 Gbps are a critical enabler of faster, more efficient, and more cost-effective networks and data centers. A panel of OIF contributors will discuss the ongoing CEI-112G electrical interface development projects and the new architectures they will enable, including chiplet packaging, co-packaged optics, and internal cable-based solutions. The panel will provide an update on the multiple interfaces being defined by the OIF, including CEI-112G MCM, XSR, VSR, MR, and LR for 112 Gbps applications of die-to-die, chip-to-module, chip-to-chip, and long reach over backplane and cables. Listen to thought leaders in the electrical interface industry debate the issues surrounding the CEI-112G projects and the architectures they will enable.
The GaN RF market enjoyed a healthy increase in 2015.
2015 was a significant year for the GaN RF industry, with a dramatic increase in wireless infrastructure market sales driven by the massive adoption of LTE networks in China. By year’s end, the total RF GaN market was close to $300M.
Sales will likely not soar as high over the next two years, but growth will continue, mainly driven by increased adoption of GaN technology in the wireless infrastructure and defense markets. A significant boost will occur around 2019 – 2020, led by the implementation of 5G networks. Market size will be multiplied by 2.5 by the end of 2022, posting a CAGR of 14%...
More information on that report at http://www.i-micronews.com/reports.html
Compound Semiconductors, RF, Power Electronics
Silicon Photonics: A Solution for Ultra High Speed Data Transfer (IDES Editor)
Silicon photonics is the integration of integrated optics and photonic IC technologies in silicon. It has recently attracted a great deal of attention because it offers low-cost solutions for applications ranging from telecommunications to chip-to-chip interconnects. Two keys to this advancement are the increased speed of communication (now at the speed of light) and the increased amount of data that can be transmitted at once (i.e., bandwidth). Silicon photonics is the study and application of photonic systems that use silicon as an optical medium. The silicon is usually patterned with sub-micrometer precision into microphotonic components. These operate in the infrared, most commonly at the 1.55-micrometer wavelength used by most fiber-optic telecommunication systems. The silicon typically lies on top of a layer of silica in what (by analogy with a similar construction in microelectronics) is known as silicon-on-insulator (SOI).
The problems associated with copper interconnects in today's multi-core processors are latency, bandwidth, power dissipation, electromagnetic interference, and signal integrity. Microprocessor designers have boosted computational horsepower by squeezing ever more transistors onto each chip, which in turn increased the amount of waste heat that must be dissipated from each square millimeter of silicon. One problem facing this effort is that microprocessors with large numbers of cores are not yet being manufactured. Fiber optics, meanwhile, has a reputation as an expensive solution because of the high cost of hardware, fabrication using costly exotic materials, and expensive assembly and packaging methods. A recent breakthrough in silicon photonics is the development of a laser modulator that encodes optical data at 40 billion bits per second. This reaches the goal of data transmission at 40 Gbps, matching the fastest devices deployed today at the lowest processing cost, and points to solutions for the problems associated both with copper interconnects in multi-core processors and with expensive fiber optics.
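As a back-of-the-envelope illustration of the 40 Gbps figure above, the short Python sketch below computes how long a transfer would take at a given line rate. The 10 GB payload and the efficiency factor are made-up example values, not numbers from the paper.

```python
# Illustrative link arithmetic for the 40 Gbps modulator figure quoted above.
# The payload size and efficiency factor below are example assumptions.

def transfer_time_seconds(payload_bytes: float, line_rate_bps: float,
                          efficiency: float = 1.0) -> float:
    """Time to move a payload over a serial link at a given line rate.

    `efficiency` models protocol/encoding overhead (1.0 = ideal link).
    """
    usable_bps = line_rate_bps * efficiency
    return payload_bytes * 8 / usable_bps

# A hypothetical 10 GB dataset over an ideal 40 Gbps optical link:
print(f"{transfer_time_seconds(10e9, 40e9):.1f} s")  # 2.0 s
```

Setting `efficiency` below 1.0 approximates real links, where encoding and protocol overhead reduce the usable fraction of the line rate.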
OIF experts presented project updates, discussed overcoming implementation challenges through interoperability and open optical networking and disaggregation at NGON & DCI World 2022 held in Barcelona, Spain June 21–23, 2022.
Speakers gave an overview of OIF’s 400ZR work, including results from a recent interoperability demonstration, co-packaging, Common Management Interface Specification (CMIS), common electrical interfaces (112G and 224G) and Transport Software Defined Networking (SDN) Application Program Interface (API).
RF Front End Modules and Components for Cellphones 2017 - Report by Yole Developpement
A dynamic market highly responsive to technical innovation, the RF front-end industry is set to grow at a 14% CAGR to reach $22.7B in 2022.
A market that will more than double in six years!
The radio-frequency (RF) front end and components market for cellphones is highly dynamic. From being worth $10.1B last year, it is expected to reach $22.7B in 2022. Such high growth is definitely something that players in other semiconductor markets would envy. However, the growth is not evenly distributed.
Filters represent the biggest business in the RF front end industry, and the value of this business will more than triple from 2016 to 2022. Most of this growth will derive from additional filtering needs from new antennas as well as the need for more filtering functionality due to multiple carrier aggregation (CA).
Power amplifiers (PAs) and low noise amplifiers (LNAs), the second biggest business, will be almost flat over the same period. High-end LTE PA market growth will be balanced by a shrinking 2G/3G market. The LNA market will grow steadily, especially thanks to the addition of new antennas.
Switches, the third biggest business, will double. This market will mainly be driven by antenna switches.
Lastly, antenna tuners, a small business today with an estimated $36M market value, will expand 7.5-fold to reach $272M in 2022. This growth is mainly due to tuning being added to both the main and the diversity antennas.
For more discussion, please visit our website: http://www.i-micronews.com/reports.html
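The headline growth figures above are internally consistent: a quick compound-annual-growth-rate check (Python, illustrative only) shows that growing from $10.1B in 2016 to $22.7B in 2022 corresponds to roughly the quoted 14% CAGR.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

# $10.1B (2016) -> $22.7B (2022): six growth years
print(f"{cagr(10.1, 22.7, 6):.2%}")  # close to the report's quoted 14% figure
```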
Flexibly Scalable High Performance Architectures with Embedded Photonics (inside-BigData.com)
In this deck from PASC 2019, Keren Bergman from Columbia University presents: Flexibly Scalable High Performance Architectures with Embedded Photonics.
"Computing systems are critically challenged to meet the performance demands of applications particularly driven by the explosive growth in data analytics. Data movement, dominated by energy costs and limited ‘chip-escape’ bandwidth densities, is a key physical layer roadblock to these systems’ scalability. Integrated silicon photonics with deeply embedded optical connectivity is on the cusp of enabling revolutionary data movement and extreme performance capabilities. Beyond alleviating the bandwidth/energy bottlenecks, embedded photonics can enable new disaggregated architectures that leverage the distance independence of optical transmission. We will discuss how the envisioned modular system interconnected by a unified photonic fabric can be flexibly composed to create custom architectures tailored for specific applications."
Watch the video: https://wp.me/p3RLHQ-kuh
Learn more: https://pasc19.pasc-conference.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Process Variation Aware Crosstalk Mitigation for DWDM based Photonic NoC Arch... (Ishan Thakkar)
Photonic network-on-chip (PNoC) architectures are a potential candidate for communication in future chip multi-processors as they can attain higher bandwidth with lower power dissipation than electrical NoCs. PNoCs typically employ dense wavelength division multiplexing (DWDM) for high bandwidth transfers. Unfortunately, DWDM increases crosstalk noise and decreases optical signal to noise ratio (SNR) in microring resonators (MRs) threatening the reliability of data communication. Additionally, process variations induce variations in the width and thickness of MRs causing shifts in resonance wavelengths of MRs, which further reduces signal integrity, leading to communication errors and bandwidth loss. In this paper, we propose a novel encoding mechanism that intelligently adapts to on-chip process variations, and improves worst-case SNR by reducing crosstalk noise in MRs used within DWDM-based PNoCs. Experimental results on the Corona PNoC architecture indicate that our approach improves worst-case SNR by up to 44.13%.
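To put the abstract's SNR numbers in more familiar units, the sketch below converts a signal-to-crosstalk power ratio to decibels. The power values are hypothetical, the paper's encoding mechanism is not reproduced here, and the 44.13% improvement is simply assumed to apply to the linear power ratio.

```python
import math

def snr_db(ratio: float) -> float:
    """Linear signal-to-noise power ratio expressed in decibels."""
    return 10 * math.log10(ratio)

signal_mw = 1.0        # hypothetical received signal power (mW)
crosstalk_mw = 0.05    # hypothetical aggregate crosstalk power (mW)

ratio = signal_mw / crosstalk_mw   # 20x signal-to-crosstalk ratio
improved = ratio * 1.4413          # a 44.13% improvement, assumed linear
print(f"{snr_db(ratio):.1f} dB -> {snr_db(improved):.1f} dB")  # 13.0 dB -> 14.6 dB
```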
Mike Novak
Tellabs
This session will focus on the underlying GPON (Gigabit Passive Optical Network) and All-Secure PON infrastructure, its implications for Layer-1 design, and the use of armored interlocking fiber to meet NIPR/SIPR data and voice requirements.
Kyeong Soo Kim, Academic Weeks Videoconference Session with Pakistan COMSATS Institute of Information Technology (CIIT), Swansea University, Swansea, Wales UK, Dec. 14, 2010.
In this deck from the 2019 OpenFabrics Workshop in Austin, Ariel Almog from Mellanox presents: To HDR and Beyond.
"Recently, deployment of 50 Gbps per lane (HDR) speeds has started, and 100 Gbps per lane (EDR), a future technology, is around the corner. These technologies expose various new physical interfaces for copper and optical links and new transceiver types like SFP-DD. Supporting these speeds also complicates the task of achieving a low BER (Bit Error Rate) through FEC (Forward Error Correction) algorithms. The high bandwidth might cause the NIC's PCIe interface to become a bottleneck, as PCIe Gen3 can handle up to a single 100 Gbps interface over 16 lanes and PCIe Gen4 up to a single 200 Gbps interface over 16 lanes. In addition, since the host might have dual CPU sockets, Socket Direct technology provides direct PCIe access to both CPU sockets, eliminates the need for network traffic to go over the inter-processor bus, and allows better utilization of PCIe, thus optimizing overall system performance."
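The PCIe arithmetic in the abstract can be sanity-checked with a short sketch. The transfer rates and 128b/130b encoding are standard PCIe Gen3/Gen4 figures; the comparison deliberately ignores TLP/DLLP protocol overhead, so it is an upper bound, not a precise model.

```python
# Rough usable bandwidth of a PCIe link, ignoring TLP/DLLP protocol overhead.
ENCODING = 128 / 130  # 128b/130b line encoding used by PCIe Gen3 and Gen4

def pcie_usable_gbps(gt_per_s: float, lanes: int) -> float:
    """Approximate usable PCIe link bandwidth in Gbps."""
    return gt_per_s * lanes * ENCODING

gen3_x16 = pcie_usable_gbps(8.0, 16)    # ~126 Gbps
gen4_x16 = pcie_usable_gbps(16.0, 16)   # ~252 Gbps
print(f"Gen3 x16: {gen3_x16:.0f} Gbps (one 100G port fits; one 200G port does not)")
print(f"Gen4 x16: {gen4_x16:.0f} Gbps (one 200G port fits)")
```

This matches the abstract's claim: a Gen3 x16 slot tops out just above one 100 Gbps port, while Gen4 x16 is needed for a single 200 Gbps port.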
Watch the video: https://wp.me/p3RLHQ-k0B
Learn more: http://mellanox.com
and
https://www.openfabrics.org/2019-workshop-agenda-and-abstracts/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Next Generation Fiber Structured Cabling and Migration to 40/100G (Panduit)
The new high-speed Ethernet standards, 40GBASE-SR4 and 100GBASE-SR10, will require a change in the fiber cable plant. Here we examine the media and connectivity solutions needed to ease the migration from 10 Gigabit Ethernet to 40 and 100 Gigabit Ethernet.
Transmission Systems Used for Optical Fibers (Jay Baria)
This presentation explains the various types of transmission systems used for optical transmission, describes the power-budget method to follow when selecting a source for optical fibers, and covers the factors to consider during source selection.
Benefits of multi layer bandwidth management in next generation core optical ... (Anuj Malik)
OFC 2013 Presentation
This presentation evaluates a multi-layer switching architecture versus all-optical and all-digital switching architectures. It also evaluates the value of incorporating super-channels to determine their benefit. A real-world network model is used to quantify and compare results.
Kashif Islam, Solutions Architect , Cisco
Jay Romero, Sr. Director, IT Operations , Erickson Living
Come and learn how Erickson Living achieved deployment success using a Cisco ME4600-based GPON solution. Guest presenter: Jay Romero, Sr. Director, IT Operations. Passive Optical Networks (PON) provide an effective and efficient way of delivering fiber-based high-speed access to residential and business users. With the ever-growing demand for higher bandwidth, service providers are looking for fiber solutions that are cost-effective and easy to deploy and manage. This session will provide an insight into PON technology, with a focus on Gigabit-capable PON. Attendees will learn basic design principles and applicable use cases for architecting a GPON network using the Cisco ME4600 OLT and ONT/ONU. The presentation will outline the requirements to configure and verify an end-to-end service over the ME4600 OLT. Redundancy mechanisms in a GPON-based environment, such as Type B protection, will also be covered. Attendees will walk away from this session with a firm understanding of GPON technology, a clear view of the applicability of GPON versus point-to-point Ethernet for various scenarios, and reference designs for an effective, fast, and reliable GPON network using the Cisco ME4600 series of OLT and ONT products.
Alexis Dacquay is a CCIE with over 10 years of experience in the networking industry. He has designed, deployed, and supported some large corporate LAN/WAN networks. For the last 4 years he has specialised in high-performance datacenter networking to satisfy the needs of cloud providers, web 2.0, big data, HPC, HFT, and any other enterprise for which a high-performing network is critical to the business. Originally from Bretagne; privately a huge fan of Polish cuisine.
Topic of Presentation: Handling high-bandwidth-consumption applications in a modern DC design
Language: English
Abstract: The modern data centre requires proper handling of high-bandwidth-consuming applications, like big data or IP storage. To achieve this, next-generation Ethernet speeds of 25, 50, and 100 Gbps are being pursued. We will show _why_ these new Ethernet speeds are vital from a technology standpoint and _how_ to cope with these new requirements through networking hardware. We will share Ethernet switch design considerations, with the biggest emphasis on the importance of big buffers and how they accommodate bursty traffic. Throughout the presentation we will also elaborate on the evolution of a variety of modern applications and how we can handle them with properly designed hardware, software, and the data centre itself.
In this deck from the Stanford HPC Conference, Shahin Khan from OrionX describes major market Shifts in IT.
"We will discuss the digital infrastructure of the future enterprise and the state of these trends."
"We work with clients on the impact of Digital Transformation (DX) on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, when they demand action, and how to formulate and execute an effective plan. If that describes you, we can help."
Watch the video: https://wp.me/p3RLHQ-lPP
Learn more: http://orionx.net
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Preparing to program Aurora at Exascale - Early experiences and future direct... (inside-BigData.com)
In this deck from IWOCL / SYCLcon 2020, Hal Finkel from Argonne National Laboratory presents: Preparing to program Aurora at Exascale - Early experiences and future directions.
"Argonne National Laboratory’s Leadership Computing Facility will be home to Aurora, our first exascale supercomputer. Aurora promises to take scientific computing to a whole new level, and scientists and engineers from many different fields will take advantage of Aurora’s unprecedented computational capabilities to push the boundaries of human knowledge. In addition, Aurora’s support for advanced machine-learning and big-data computations will enable scientific workflows incorporating these techniques along with traditional HPC algorithms. Programming the state-of-the-art hardware in Aurora will be accomplished using state-of-the-art programming models. Some of these models, such as OpenMP, are long-established in the HPC ecosystem. Other models, such as Intel’s oneAPI, based on SYCL, are relatively new models constructed with the benefit of significant experience. Many applications will not use these models directly, but rather will use C++ abstraction libraries such as Kokkos or RAJA. Python will also be a common entry point to high-performance capabilities. As we look toward the future, features in the C++ standard itself will become increasingly relevant for accessing the extreme parallelism of exascale platforms.
This presentation will summarize the experiences of our team as we prepare for Aurora, exploring how to port applications to Aurora’s architecture and programming models, and distilling the challenges and best practices we’ve developed to date. oneAPI/SYCL and OpenMP are both critical models in these efforts, and while the ecosystem for Aurora has yet to mature, we’ve already had a great deal of success. Importantly, we are not passive recipients of programming models developed by others. Our team works not only with vendor-provided compilers and tools, but also develops improved open-source LLVM-based technologies that feed both open-source and vendor-provided capabilities. In addition, we actively participate in the standardization of OpenMP, SYCL, and C++. To conclude, I’ll share our thoughts on how these models can best develop in the future to support exascale-class systems."
Watch the video: https://wp.me/p3RLHQ-lPT
Learn more: https://www.iwocl.org/iwocl-2020/conference-program/
and
https://www.anl.gov/topic/aurora
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Greg Wahl from Advantech presents: Transforming Private 5G Networks.
Advantech Networks & Communications Group is driving innovation in next-generation network solutions with their High Performance Servers. We provide business critical hardware to the world's leading telecom and networking equipment manufacturers with both standard and customized products. Our High Performance Servers are highly configurable platforms designed to balance the best in x86 server-class processing performance with maximum I/O and offload density. The systems are cost effective, highly available and optimized to meet next generation networking and media processing needs.
“Advantech’s Networks and Communication Group has been both an innovator and a trusted enabling partner in the telecommunications and network security markets for over a decade, designing and manufacturing products for OEMs that accelerate their network platform evolution and time to market,” said Advantech Vice President of Networks & Communications Group, Ween Niu. “In the new IP Infrastructure era, we will be expanding our expertise in Software Defined Networking (SDN) and Network Function Virtualization (NFV), two of the essential conduits to 5G infrastructure agility, making networks easier to install, secure, automate and manage in a cloud-based infrastructure.”
In addition to innovation in air interface technologies and architecture extensions, 5G will also need a new generation of network computing platforms to run the emerging software defined infrastructure, one that provides greater topology flexibility, essential to deliver on the promises of high availability, high coverage, low latency and high bandwidth connections. This will open up new parallel industry opportunities through dedicated 5G network slices reserved for specific industries dedicated to video traffic, augmented reality, IoT, connected cars etc. 5G unlocks many new doors and one of the keys to its enablement lies in the elasticity and flexibility of the underlying infrastructure.
Advantech’s corporate vision is to enable an intelligent planet. The company is a global leader in the fields of IoT intelligent systems and embedded platforms. To embrace the trends of IoT, big data, and artificial intelligence, Advantech promotes IoT hardware and software solutions with the Edge Intelligence WISE-PaaS core to assist business partners and clients in connecting their industrial chains. Advantech is also working with business partners to co-create business ecosystems that accelerate the goal of industrial intelligence.
Watch the video: https://wp.me/p3RLHQ-lPQ
* Company website: https://www.advantech.com/
* Solution page: https://www2.advantech.com/nc/newsletter/NCG/SKY/benefits.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
The Incorporation of Machine Learning into Scientific Simulations at Lawrence... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Katie Lewis from Lawrence Livermore National Laboratory presents: The Incorporation of Machine Learning into Scientific Simulations at Lawrence Livermore National Laboratory.
"Scientific simulations have driven computing at Lawrence Livermore National Laboratory (LLNL) for decades. During that time, we have seen significant changes in hardware, tools, and algorithms. Today, data science, including machine learning, is one of the fastest growing areas of computing, and LLNL is investing in hardware, applications, and algorithms in this space. While the use of simulations to focus and understand experiments is well accepted in our community, machine learning brings new challenges that need to be addressed. I will explore applications for machine learning in scientific simulations that are showing promising results and further investigation that is needed to better understand its usefulness."
Watch the video: https://youtu.be/NVwmvCWpZ6Y
Learn more: https://computing.llnl.gov/research-area/machine-learning
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod... (inside-BigData.com)
In this deck from the Stanford HPC Conference, DK Panda from Ohio State University presents: How to Achieve High-Performance, Scalable and Distributed DNN Training on Modern HPC Systems?
"This talk will start with an overview of challenges being faced by the AI community to achieve high-performance, scalable and distributed DNN training on modern HPC systems with both scale-up and scale-out strategies. After that, the talk will focus on a range of solutions being carried out in my group to address these challenges. The solutions will include: 1) MPI-driven Deep Learning, 2) Co-designing Deep Learning Stacks with High-Performance MPI, 3) Out-of-core DNN training, and 4) Hybrid (Data and Model) parallelism. Case studies to accelerate DNN training with popular frameworks like TensorFlow, PyTorch, MXNet and Caffe on modern HPC systems will be presented."
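Data parallelism, one of the strategies listed above, can be sketched in a few lines of plain Python. The toy lists below stand in for per-worker gradient buffers; in MPI-driven training the averaging step is what an MPI_Allreduce followed by a divide accomplishes.

```python
def allreduce_average(worker_grads):
    """Average per-worker gradient vectors element-wise.

    This mimics the collective step of data-parallel training:
    after it, every worker holds the same averaged gradient and
    applies the same parameter update.
    """
    n_workers = len(worker_grads)
    length = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers
            for i in range(length)]

# Each worker computed gradients on its own mini-batch shard:
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
avg = allreduce_average(grads)
```

Model parallelism and hybrid schemes split the network itself across workers instead of (or in addition to) splitting the data, which the talk treats as a separate strategy.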
Watch the video: https://youtu.be/LeUNoKZVuwQ
Learn more: http://web.cse.ohio-state.edu/~panda.2/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ... (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nick Nystrom and Paola Buitrago provide an update from the Pittsburgh Supercomputing Center.
Nick Nystrom is Chief Scientist at the Pittsburgh Supercomputing Center (PSC). Nick is architect and PI for Bridges, PSC's flagship system that successfully pioneered the convergence of HPC, AI, and Big Data. He is also PI for the NIH Human Biomolecular Atlas Program’s HIVE Infrastructure Component and co-PI for projects that bring emerging AI technologies to research (Open Compass), apply machine learning to biomedical data for breast and lung cancer (Big Data for Better Health), and identify causal relationships in biomedical big data (the Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence). His current research interests include hardware and software architecture, applications of machine learning to multimodal data (particularly for the life sciences) and to enhance simulation, and graph analytics.
Watch the video: https://youtu.be/LWEU1L1o7yY
Learn more: https://www.psc.edu/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Ryan Quick from Providentia Worldwide describes how DNNs can be used to improve EDA simulation runs.
"Systems Intelligence relies on a variety of methods for providing insight into the core mechanisms for driving automated behavioral changes in self-healing command and control platforms. This talk reports on initial efforts with leveraging Semiconductor Electronic Design Automation (EDA) telemetry data from cross-domain sources including power, network, storage, nodes, and applications in neural networks as a driving method for insight into SI automation systems."
Watch the video: https://youtu.be/2WbR8tq-XbM
Learn more: http://www.providentiaworldwide.com/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring (inside-BigData.com)
In this deck from the Stanford HPC Conference, Nicole Xu from Stanford University describes how she transformed a common jellyfish into a bionic creature that is part animal and part machine.
"Animal locomotion and bioinspiration have the potential to expand the performance capabilities of robots, but current implementations are limited. Mechanical soft robots leverage engineered materials and are highly controllable, but these biomimetic robots consume more power than their animal counterparts. Biological soft robots built from a bottom-up approach offer advantages such as speed and controllability but are limited to survival in cell media. Instead, biohybrid robots that combine live animals with self-contained microelectronic systems leverage the animals' own metabolism to reduce power constraints and use the body as a natural scaffold with damage tolerance. We demonstrate that by integrating onboard microelectronics into live jellyfish, we can enhance propulsion up to threefold, using only 10 mW of external power input to the microelectronics and at only a twofold increase in cost of transport to the animal. This robotic system uses 10 to 1000 times less external power per mass than existing swimming robots in the literature and can be used in future applications for ocean monitoring to track environmental changes."
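The "cost of transport" metric used above can be made concrete with its standard dimensionless form, COT = P / (m g v). The numbers below are purely illustrative; the abstract does not give the jellyfish's mass or swimming speed.

```python
def cost_of_transport(power_w, mass_kg, speed_m_s, g=9.81):
    """Dimensionless cost of transport: power spent per unit
    weight per unit speed (equivalently, energy per weight per
    distance travelled)."""
    return power_w / (mass_kg * g * speed_m_s)

# Hypothetical example: 10 mW of input driving a 50 g animal at 5 cm/s.
cot = cost_of_transport(power_w=0.010, mass_kg=0.05, speed_m_s=0.05)
```

Comparing COT (or external power per mass) across robots of very different sizes is what lets the authors claim the 10x to 1000x efficiency advantage over existing swimming robots.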
Watch the video: https://youtu.be/HrmJFyvInj8
Learn more: https://sanfrancisco.cbslocal.com/2020/02/05/stanford-research-project-common-jellyfish-bionic-sea-creatures/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Stanford HPC Conference, Peter Dueben from the European Centre for Medium-Range Weather Forecasts (ECMWF) presents: Machine Learning for Weather Forecasts.
"I will present recent studies that use deep learning to learn the equations of motion of the atmosphere, to emulate model components of weather forecast models, and to enhance usability of weather forecasts. I will then talk about the main challenges for the application of deep learning in cutting-edge weather forecasts and suggest approaches to improve usability in the future."
Peter contributes to the development and optimization of weather and climate models for modern supercomputers. He focuses on a better understanding of model error and model uncertainty, on the use of reduced numerical precision that is optimised for a given level of model error, on global cloud-resolving simulations with ECMWF's forecast model, and on the use of machine learning, and in particular deep learning, to improve the workflow and predictions. Peter graduated in Physics and wrote his PhD thesis at the Max Planck Institute for Meteorology in Germany. He worked as a postdoc with Tim Palmer at the University of Oxford and took up a position as a University Research Fellow of the Royal Society at the European Centre for Medium-Range Weather Forecasts (ECMWF) in 2017.
Watch the video: https://youtu.be/ks3fkRj8Iqc
Learn more: https://www.ecmwf.int/
and
http://www.hpcadvisorycouncil.com/events/2020/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Gilad Shainer from the HPC AI Advisory Council describes how this organization fosters innovation in the high performance computing community.
"The HPC-AI Advisory Council’s mission is to bridge the gap between high-performance computing (HPC) and Artificial Intelligence (AI) use and its potential, bring the beneficial capabilities of HPC and AI to new users for better research, education, innovation and product manufacturing, bring users the expertise needed to operate HPC and AI systems, provide application designers with the tools needed to enable parallel computing, and to strengthen the qualification and integration of HPC and AI system products."
Watch the video: https://wp.me/p3RLHQ-lNz
Learn more: http://hpcadvisorycouncil.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Today RIKEN in Japan announced that the Fugaku supercomputer will be made available for research projects aimed to combat COVID-19.
"Fugaku is currently being installed and is scheduled to be available to the public in 2021. However, faced with the devastating disaster unfolding before our eyes, RIKEN and MEXT decided to make a portion of the computational resources of Fugaku available for COVID-19-related projects ahead of schedule while continuing the installation process.
Fugaku is being developed not only for progress in science, but also to help build the society dubbed “Society 5.0” by the Japanese government, in which all people will live safe and comfortable lives. The current initiative to fight against the novel coronavirus is driven by the philosophy behind the development of Fugaku."
Initial Projects
Exploring new drug candidates for COVID-19 by "Fugaku"
Yasushi Okuno, RIKEN / Kyoto University
Prediction of conformational dynamics of proteins on the surface of SARS-Cov-2 using Fugaku
Yuji Sugita, RIKEN
Simulation analysis of pandemic phenomena
Nobuyasu Ito, RIKEN
Fragment molecular orbital calculations for COVID-19 proteins
Yuji Mochizuki, Rikkyo University
In this deck from the Performance Optimisation and Productivity group, Lubomir Riha from IT4Innovations presents: Energy Efficient Computing using Dynamic Tuning.
"We now live in a world of power-constrained architectures and systems, where power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centers and major vendors have developed new tools and methodologies to measure and optimize the energy consumption of large-scale high-performance system installations. Because energy consumption, power consumption and the execution time of an application are linked for the final user, it is important that these tools and methodologies consider all these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high-level objectives.
This webinar focused on tools designed to improve the energy efficiency of HPC applications using a methodology of dynamic tuning, developed under the H2020 READEX project. The READEX methodology was designed to exploit the dynamic behaviour of software. At design time, different runtime situations (RTSs) are detected and optimized system configurations are determined. RTSs with the same configuration are grouped into scenarios, forming the tuning model. At runtime, the tuning model is used to switch system configurations dynamically.
The MERIC tool, which implements the READEX methodology, is presented. It supports manual or binary instrumentation of the analysed applications to simplify the analysis. This instrumentation is used to identify and annotate the significant regions in the HPC application. Automatic binary instrumentation annotates regions with significant runtime. Manual instrumentation, which can be combined with automatic instrumentation, allows code developers to annotate regions of particular interest."
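The design-time/runtime split described above can be sketched as a small lookup-driven switcher. Region names, configuration knobs, and the hooks are all hypothetical here; MERIC's real interface and the actual DVFS knob-setting differ.

```python
# Design time: each runtime situation (RTS) was measured and the
# best-found system configuration recorded; RTSs sharing a
# configuration form a scenario in the tuning model.
tuning_model = {
    "dense_solver":  {"cpu_freq_ghz": 2.4, "uncore_freq_ghz": 2.0},
    "halo_exchange": {"cpu_freq_ghz": 1.6, "uncore_freq_ghz": 2.4},
}
DEFAULT = {"cpu_freq_ghz": 2.0, "uncore_freq_ghz": 2.2}

applied = []  # record of configurations switched to, for illustration

def apply_config(cfg):
    # Stand-in for the real knob-setting (core/uncore frequency, etc.).
    applied.append(cfg)

def enter_region(name):
    """Annotated-region entry hook: switch to that region's
    configuration from the tuning model, or fall back to a default."""
    apply_config(tuning_model.get(name, DEFAULT))

enter_region("halo_exchange")   # communication-bound: lower core freq
enter_region("fft")             # unknown region -> default configuration
```

The point of the methodology is that a communication-bound region tolerates a lower core frequency (saving energy) while a compute-bound region gets the higher one, and the switch happens automatically at region boundaries.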
Watch the video: https://wp.me/p3RLHQ-lJP
Learn more: https://pop-coe.eu/blog/14th-pop-webinar-energy-efficient-computing-using-dynamic-tuning
and
https://code.it4i.cz/vys0053/meric
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from GTC Digital, William Beaudin from DDN presents: HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD.
Enabling high performance computing through the use of GPUs requires an incredible amount of IO to sustain application performance. We'll cover architectures that enable extremely scalable applications through the use of NVIDIA’s SuperPOD and DDN’s A3I systems.
The NVIDIA DGX SuperPOD is a first-of-its-kind artificial intelligence (AI) supercomputing infrastructure. DDN A³I with the EXA5 parallel file system is a turnkey, AI data storage infrastructure for rapid deployment, featuring faster performance, effortless scale, and simplified operations through deeper integration. The combined solution delivers groundbreaking performance, deploys in weeks as a fully integrated system, and is designed to solve the world's most challenging AI problems.
Watch the video: https://wp.me/p3RLHQ-lIV
Learn more: https://www.ddn.com/download/nvidia-superpod-ddn-a3i-ai400-appliance-with-the-exa5-filesystem/
and
https://www.nvidia.com/en-us/gtc/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck, Paul Isaacs from Linaro presents: State of ARM-based HPC. This talk provides an overview of applications and infrastructure services successfully ported to AArch64 and benefiting from scale.
"With its debut on the TOP500, the 125,000-core Astra supercomputer at New Mexico’s Sandia Labs uses Cavium ThunderX2 chips to mark Arm’s entry into the petascale world. In Japan, the Fujitsu A64FX Arm-based CPU in the pending Fugaku supercomputer has been optimized to achieve high-level, real-world application performance, anticipating up to one hundred times the application execution performance of the K computer. K was the first computer to top 10 petaflops in 2011."
Watch the video: https://wp.me/p3RLHQ-lIT
Learn more: https://www.linaro.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Versal Premium ACAP for Network and Cloud Acceleration (inside-BigData.com)
Today Xilinx announced Versal Premium, the third series in the Versal ACAP portfolio. The Versal Premium series features highly integrated, networked and power-optimized cores and the industry’s highest bandwidth and compute density on an adaptable platform. Versal Premium is designed for the highest bandwidth networks operating in thermally and spatially constrained environments, as well as for cloud providers who need scalable, adaptable application acceleration.
Versal is the industry’s first adaptive compute acceleration platform (ACAP), a revolutionary new category of heterogeneous compute devices with capabilities that far exceed those of conventional silicon architectures. Developed on TSMC’s 7-nanometer process technology, Versal Premium combines software programmability with dynamically configurable hardware acceleration and pre-engineered connectivity and security features to enable a faster time-to-market. The Versal Premium series delivers up to 3X higher throughput compared to current-generation FPGAs, with built-in Ethernet, Interlaken, and cryptographic engines that enable fast and secure networks. The series doubles the compute density of currently deployed mainstream FPGAs and provides the adaptability to keep pace with increasingly diverse and evolving cloud and networking workloads.
Learn more: https://insidehpc.com/2020/03/xilinx-announces-versal-premium-acap-for-network-and-cloud-acceleration/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently (inside-BigData.com)
In this video from the Rice Oil & Gas Conference, Chin Fang from Zettar presents: Moving Massive Amounts of Data across Any Distance Efficiently.
The objective of this talk is to present two ongoing projects aimed at improving and ensuring highly efficient bulk transfer or streaming of massive amounts of data over digital connections across any distance. It examines the current state of the art, a few very common misconceptions, the differences among the three major types of data movement solutions, a current initiative attempting to improve data movement efficiency from the ground up, and another multi-stage project that shows how to conduct long-distance, large-scale data movement at speed and scale internationally. Both projects have real-world motivations, e.g. the ambitious data transfer requirements of the Linac Coherent Light Source II (LCLS-II) [1], a premier preparation project of the U.S. DOE Exascale Computing Initiative (ECI) [2]. Their immediate goals are described and explained, together with the solution used for each. Findings and early results are reported. Possible future work is outlined.
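To see why requirements like those of LCLS-II are demanding, a back-of-the-envelope transfer-time calculation helps. The figures below are illustrative, not from the talk; real pipelines rarely sustain full line rate, which the efficiency factor models.

```python
def transfer_hours(data_tb, link_gbps, efficiency=0.8):
    """Hours needed to move data_tb terabytes over a link_gbps link,
    assuming the end-to-end pipeline sustains `efficiency` of line rate."""
    bits = data_tb * 1e12 * 8               # terabytes -> bits
    rate = link_gbps * 1e9 * efficiency     # effective bits per second
    return bits / rate / 3600

# Hypothetical example: one petabyte over a 100 Gbps path at 80% efficiency
hours = transfer_hours(data_tb=1000, link_gbps=100)
```

At these scales, a modest drop in end-to-end efficiency costs hours per petabyte, which is why the talk distinguishes carefully between types of data movement solutions.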
Watch the video: https://wp.me/p3RLHQ-lBX
Learn more: https://www.zettar.com/
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the Rice Oil & Gas Conference, Bradley McCredie from AMD presents: Scaling TCO in a Post Moore's Law Era.
"While foundries bravely drive forward to overcome the technical and economic challenges posed by scaling to 5nm and beyond, Moore’s law alone can provide only a fraction of the performance / watt and performance / dollar gains needed to satisfy the demands of today’s high performance computing and artificial intelligence applications. To close the gap, multiple strategies are required. First, new levels of innovation and design efficiency will supplement technology gains to continue to deliver meaningful improvements in SoC performance. Second, heterogenous compute architectures will create x-factor increases of performance efficiency for the most critical applications. Finally, open software frameworks, APIs, and toolsets will enable broad ecosystems of application level innovation."
Watch the video:
Learn more: http://amd.com
and
https://rice2020oghpc.rice.edu/program-2/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
CUDA-Python and RAPIDS for blazing fast scientific computing (inside-BigData.com)
In this deck from the ECSS Symposium, Abe Stern from NVIDIA presents: CUDA-Python and RAPIDS for blazing fast scientific computing.
"We will introduce Numba and RAPIDS for GPU programming in Python. Numba allows us to write just-in-time compiled CUDA code in Python, giving us easy access to the power of GPUs from a powerful high-level language. RAPIDS is a suite of tools with a Python interface for machine learning and dataframe operations. Together, Numba and RAPIDS represent a potent set of tools for rapid prototyping, development, and analysis for scientific computing. We will cover the basics of each library and go over simple examples to get users started. Finally, we will briefly highlight several other relevant libraries for GPU programming."
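A minimal sketch of the kind of kernel the talk introduces: with Numba installed, the per-element function would carry a `@cuda.jit` decorator, get its index from `cuda.grid(1)`, and be launched as `vector_add[blocks, threads_per_block](a, b, out)`. Since that requires a GPU, the same per-thread body is driven here by a plain loop so the indexing logic is visible and runs anywhere.

```python
def vector_add(idx, a, b, out):
    # In a real Numba CUDA kernel, idx comes from cuda.grid(1) and
    # each GPU thread executes this body for its own idx.
    if idx < len(out):            # guard against out-of-range threads
        out[idx] = a[idx] + b[idx]

a = [1.0, 2.0, 3.0]
b = [10.0, 20.0, 30.0]
out = [0.0] * 3
for i in range(len(out)):         # stand-in for the CUDA grid launch
    vector_add(i, a, b, out)
```

The bounds guard is idiomatic in CUDA because the launch grid is usually rounded up past the array length; Numba keeps exactly that pattern in Python syntax.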
Watch the video: https://wp.me/p3RLHQ-lvu
Learn more: https://developer.nvidia.com/rapids
and
https://www.xsede.org/for-users/ecss/ecss-symposium
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from FOSDEM 2020, Colin Sauze from Aberystwyth University describes the development of a RaspberryPi cluster for teaching an introduction to HPC.
"The motivation for this was to overcome four key problems faced by new HPC users:
* The availability of a real HPC system and the effect that running training courses can have on it; conversely, the availability of spare resources on the real system can cause problems for the training course.
* A fear of using a large and expensive HPC system for the first time and worries that doing something wrong might damage the system.
* That HPC systems are very abstract systems sitting in data centres that users never see, making it difficult for them to understand exactly what it is they are using.
* That new users fail to understand resource limitations, in part because the vast resources in modern HPC systems allow a lot of mistakes to be made before running out of resources. A more resource-constrained system makes it easier to understand this.
The talk will also discuss some of the technical challenges in deploying an HPC environment to a Raspberry Pi and attempts to keep that environment as close to a "real" HPC system as possible. The issues in trying to automate the installation process will also be covered."
Learn more: https://github.com/colinsauze/pi_cluster
and
https://fosdem.org/2020/schedule/events/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from ATPESC 2019, Ken Raffenetti from Argonne presents an overview of HPC interconnects.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two-week training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-luc
Learn more: https://extremecomputingtraining.anl.gov/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
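Under the hood, JMeter's InfluxDB Backend Listener ships metrics as InfluxDB line protocol (`measurement,tags fields timestamp`). The sketch below builds one such line in Python; the tag and field names are illustrative rather than JMeter's exact schema.

```python
def influx_line(measurement, tags, fields, timestamp_ns):
    """Serialize one data point in InfluxDB line protocol:
    measurement,tag1=v1,... field1=v1,... timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# Hypothetical sample: one aggregated transaction from a load test
line = influx_line(
    "jmeter",
    {"application": "demo", "transaction": "login"},
    {"count": "42i", "avg": "153.2"},   # "i" suffix marks an integer field
    1700000000000000000,
)
```

Grafana then queries these measurements from InfluxDB to render the real-time dashboards shown in the demonstration.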
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Silicon Photonics for Extreme Computing - Challenges and Opportunities
1. Rev PA1
Silicon Photonics for Extreme Scale Computing – Challenges & Opportunities
Keren Bergman
Department of Electrical Engineering
Lightwave Research Lab, Columbia University
New York, NY, USA
2. Trends in extreme HPC
• Evolution of the top 10 in the last six years:
– Average total compute power: 0.86 PFlops → 21 PFlops (~24x increase)
– Average nodal compute power: 31 GFlops → 600 GFlops (~19x increase)
– Average number of nodes: 28k → 35k (~1.3x increase)
• Node compute power is the main contributor to performance growth
[Chart: node compute power (FLOPs), number of nodes, and total system compute power (FLOPs), averaged over the top 10 systems relative to 2010, 2010–2016; growth factors of 19x, 1.3x, and 24x respectively.]
[top500.org; S. Rumley, et al., "Optical Interconnects for Extreme Scale Computing Systems", Elsevier PARCO 64, 2017]
3. Interconnect trends
• Top 10 average node-level evolutions:
– Average node compute power: 31 GFlops → 600 GFlops (~19x increase)
– Average bandwidth available per node: 2.7 GB/s → 7.8 GB/s (~3.2x increase)
– Average byte-per-flop ratio: 0.06 B/Flop → 0.01 B/Flop (~6x decrease)
• Sunway TaihuLight (#1) shows 0.004 B/Flop!
• Growing gap in interconnect bandwidth
[Chart: node compute power (FLOPs), node bandwidth (bit/s), and byte-per-flop ratio, averaged over the top 10 systems relative to 2010, 2010–2016; factors of 19x, 3.2x, and 0.17x respectively.]
[top500.org; S. Rumley, et al., "Optical Interconnects for Extreme Scale Computing Systems", Elsevier PARCO 64, 2017]
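The byte-per-flop ratio above is simply node bandwidth divided by node compute power. A minimal sketch of that arithmetic (values are the 2016 top-10 averages quoted on the slide; note that the slide averages per-system ratios, which differs slightly from dividing the two averages):

```python
# Byte-per-flop ratio: bytes of node bandwidth available per flop.
def byte_per_flop(node_bw_gb_per_s, node_gflops):
    return node_bw_gb_per_s / node_gflops

# 2016 top-10 averages from the slide: 7.8 GB/s and 600 GFlops.
bf_2016 = byte_per_flop(7.8, 600)  # 0.013 B/Flop, the "~0.01" on the slide
```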
4. Exascale interconnects – power and cost constraints
• Real Exascale goal: reaching an Exaflop…
• …while satisfying constraints (20 MW, $200M)
• …with reasonably useful applications
• Sizing: 1.25 ExaFLOP × 0.01 B/FLOP = 125 Pb/s injection BW; × 4 hops = 500 Pb/s installed BW
• Assume 15% of the dollar budget for the interconnect:
– 15% × $200M / 500 Pb/s = 6 ¢/Gb/s
– Bi-directional links must thus be sold for ~10 ¢/Gb/s
– Today: optical ~$10/Gb/s, electrical $0.1–1/Gb/s
• Assume 15% of the power budget for the interconnect:
– 15% × 20 MW / 125 Pb/s = 24 mW/Gb/s = 24 pJ/bit = budget for communicating a bit end-to-end
– At 4 hops, that is 6 pJ/bit per hop: 4 pJ/bit for switching (today ~20 pJ/bit) and 2 pJ/bit for transmission (today ~10 pJ/bit electrical, ~20 pJ/bit optical)
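The two budget figures on the slide can be reproduced in a few lines. This is a sketch of the slide's own arithmetic, taking the 125 Pb/s injection and 500 Pb/s installed bandwidths as given:

```python
# Cost budget: fraction of the machine's price, spread over installed bandwidth.
def cost_per_gbps(machine_cost_usd, fraction, installed_bw_pb_s):
    gbps = installed_bw_pb_s * 1e6          # Pb/s -> Gb/s
    return machine_cost_usd * fraction / gbps

# Power budget: fraction of the machine's power, spread over injection bandwidth.
def energy_per_bit_pj(machine_power_w, fraction, injection_bw_pb_s):
    bits_per_s = injection_bw_pb_s * 1e15   # Pb/s -> b/s
    return machine_power_w * fraction / bits_per_s * 1e12  # J/bit -> pJ/bit

cost = cost_per_gbps(200e6, 0.15, 500)       # $0.06/Gb/s, i.e. 6 cents/Gb/s
energy = energy_per_bit_pj(20e6, 0.15, 125)  # 24 pJ/bit end-to-end
```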
5. The Photonic Opportunity for Data Movement
• Energy-efficient, low-latency, high-bandwidth data interconnectivity is the core challenge to continued scalability across computing platforms
• Energy consumption is completely dominated by the costs of data movement
• Bandwidth taper from chip to system forces extreme locality
• Goals: reduce energy consumption; eliminate the bandwidth taper
[Charts: bandwidth density (Gb/s/mm, 10 to 1,000,000) and data-movement energy (pJ/bit, 0.01 to 1,000) from chip edge to PCB, rack, and system, for electronic vs. photonic links.]
6. Default interconnect architecture
[Diagram: N compute nodes in the system, grouped into racks. Each node attaches to a router over a short-distance electrical node–router (NR) link; routers interconnect over short-distance and long-distance router–router links, the latter using optical transceivers (currently 10–20% of router–router links).]
7. Exascale optical interconnect?
• Requirements for that to happen:
– Divide cost by at least 1.5 orders of magnitude
– Improve energy efficiency by at least one order of magnitude
[Diagram: N compute nodes attached to router cores; long-distance router–router (RR) links use optical transceivers, node–router links use electrical transceivers.]
8. Exascale supercomputing node
• Consider 50k nodes
• Total injection bandwidth ~100 Pb/s
• 4 ZB per year at 1% utilization
• Total cumulated unidirectional bandwidth: ~500 Pb/s
• Per-node requirements for next-generation interconnects:
– Compute power: 10 to 30 Teraflop (TF)
– Interconnect bandwidth: 0.01 B/F → 0.8–2.4 Tb/s
– Bulk memory bandwidth: 0.1 B/F → 8–24 Tb/s
– Near-memory bandwidth: 10–30 TF × 8 bit × 0.5 B/F = 40–120 Tb/s
• Ideal case: back to 0.01 B/F to ensure well-fed nodes
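The per-node bandwidth figures above follow from compute power times the byte-per-flop ratio, times 8 bits per byte. A quick sketch of the slide's arithmetic:

```python
# Node bandwidth in Tb/s = TF x B/F x 8 bits/byte (as on the slide).
BITS_PER_BYTE = 8

def node_bw_tbps(teraflops, bytes_per_flop):
    return teraflops * bytes_per_flop * BITS_PER_BYTE

for tf in (10, 30):
    interconnect = node_bw_tbps(tf, 0.01)  # 0.8 and 2.4 Tb/s
    bulk_memory  = node_bw_tbps(tf, 0.1)   # 8 and 24 Tb/s
    near_memory  = node_bw_tbps(tf, 0.5)   # 40 and 120 Tb/s
```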
9. Silicon Photonics: all the parts
• Silicon as core material
– High refractive index contrast; sub-micron cross-sections; small bend radius
• Small-footprint devices
– 10 μm to 1 mm scale, compared to cm-level scale for telecom
• Low power consumption
– Can reach <1 pJ/bit per link
• Aggressive WDM platform
– Bandwidth densities of 1–2 Tb/s per pin of I/O
• Silicon wafer-scale CMOS
– Integration, density scaling; CMOS fabrication tools; 2.5D and 3D platforms
[Device micrographs: modulation, WDM modulation & demultiplexing, switching, detection.]
S. Rumley et al., "Silicon Photonics for Exascale Systems", IEEE JLT 33 (4), 2015.
10. Photonic Computing Architectures: Beyond Wires
• Leverage dense WDM bandwidth density
• Photonic switching: distance-independent, cut-through, bufferless
• Bandwidth-energy optimized interconnects
[Diagram: conventional hop-by-hop data movement across on-chip, short-distance PCB, and long-distance PCB links requires 12 conversions; a fully flattened end-to-end optical link requires no conversion.]
11. Silicon Photonic Link Design
• Co-existence of electronics and photonics
• Energy-bandwidth optimization
[Diagram: full link path – comb laser, vertical grating coupler, coupled waveguides on a silicon waveguide, OOK modulation (driver, data serialization, clock distribution), thermal tuning, fiber, demultiplexing filter, photodiode, TIA/amplifier, deserialization, and clock generation, carrying "0"/"1" symbols per wavelength λ.]
[1] M. Bahadori et al., "Comprehensive Design Space Exploration of Silicon Photonic Interconnects", IEEE JLT 34 (12), 2015.
12. Utilization of Optical Power Budget
[Diagram: a photonic link from electronics to electronics. The available laser power (max 5 dBm per λ) is successively eroded by coupler loss, the modulator array penalty, coupler loss, the demux array penalty, and a final coupler loss before the receiver. Whatever remains above the sensitivity level of the receiver is extra budget; this sets the maximum link utilization per λ.]
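This budget bookkeeping is simple addition in dB. The sketch below follows the slide's structure, but the specific loss and penalty values are made-up placeholders, not numbers from the talk:

```python
# Optical power budget: laser power minus all losses/penalties (in dB),
# compared against the receiver sensitivity. Positive margin = usable budget.
def extra_budget_db(laser_dbm, penalties_db, rx_sensitivity_dbm):
    received_dbm = laser_dbm - sum(penalties_db)
    return received_dbm - rx_sensitivity_dbm

# Hypothetical example: 5 dBm per wavelength (the slide's maximum), three
# couplers at 1.5 dB each, 3 dB modulator and demux array penalties,
# -20 dBm receiver sensitivity.
margin = extra_budget_db(5, [1.5, 3.0, 1.5, 3.0, 1.5], -20)  # 14.5 dB
```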
13. All-Parameter Optimization: Min Energy Design
[Charts: for a photonic link from electronics to electronics, the optimal design based on physical dimensions, and the breakdown of energy consumption for a given bandwidth, reaching ~1 pJ/bit.]
14. Highly parallel, "SERDES-less" links
• Investigate "many-channel" architectures with low-bitrate wavelengths
– Leads to poor laser utilization; only possible with high laser efficiency
– But may allow drastic simplification of drivers and SERDES blocks
[Chart: link energy efficiency (pJ/bit, 0–3) vs. optical channel (λ) linerate (3–25 Gb/s) for 10% and 30% laser efficiency, with a 400G design point marked.]
15. Cost per bandwidth – declining, but slowly
• Today (2017):
– 100G (EDR) has the best $/Gb/s figure
– Copper cables have shorter reach due to the higher bit rate
– Optics: not even half an order of magnitude of price drop over 4 years
– But the electrical–optical gap is shrinking
[Chart: cost ($/Gb/s, 0–12) vs. cable length (0–30 m) for Mellanox EDR 100 Gb/s, May 2017.]
[Besta et al., "Slim Fly: A Cost Effective Low-Diameter Network Topology", SuperComputing 2014]
17. Toward integrated optical transceivers
• Fundamental step to reduce the cost and power of optics: co-integration
[Diagram: nodes attached to router cores via short-distance electrical NR links; long-distance RR links use optical transceivers.]
18. Towards integrated, general-purpose optical transceivers
• Fundamental step to reduce the cost and power of optics
[Diagram: router cores with integrated optical transceivers on all router–router links; electrical transceivers remain only on the short-distance NR links.]
19. Cost vs. energy
• Transceiver procurement cost dominates TCO
– Reaching the same energy/procurement ratio as Cori (30% / 70%) would require 0.4 $/Gb/s transceivers
– This is a factor of 10 from the current cost figure
[Chart: split between lifetime energy cost (at $100/MWh) and procurement cost, 0–100%, for: Desktop PC (5 years, 100 W, $2,000); Roadrunner (5 years, 2.35 MW, $100M); Cori (8 years, 3.9 MW, $100M); Titan (6 years, 8.2 MW, $97M); typical 100G transceivers (8 years, 2.5 W, $500); hypothetical 1 $/Gb/s and 0.1 $/Gb/s transceivers (8 years).]
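The slide's procurement-vs-energy split can be sketched as follows, with energy priced at $100/MWh as on the slide; the example line uses the slide's 8-year, 2.5 W, $500 transceiver figures:

```python
# Lifetime energy cost vs. procurement cost, at $100/MWh.
HOURS_PER_YEAR = 24 * 365

def lifetime_energy_cost_usd(power_w, years, usd_per_mwh=100.0):
    mwh = power_w / 1e6 * HOURS_PER_YEAR * years
    return mwh * usd_per_mwh

def energy_share(power_w, years, procurement_usd):
    e = lifetime_energy_cost_usd(power_w, years)
    return e / (e + procurement_usd)

# Typical 100G transceiver: energy is only a few percent of its TCO,
# which is why procurement cost dominates.
share = energy_share(2.5, 8, 500)  # ~0.03
```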
20. The bright side of optical switching
• Optical switches can have astonishing aggregate bandwidths
– Absence of signal introspection
– Possibility to receive dense WDM signals (>1 Tb/s) on each port
• Translates into alluring $/Gb/s figures:

Type                     | # ports   | Bandwidth per port                                    | Total bandwidth | Price | Price per Gb/s
Calient S320 MEMS switch | 320 ports | 400 Gb/s (16 wavelengths at 25G – CDAUI-16 signaling) | 128 Tb/s        | $40k  | 0.3 $/Gb/s
Mellanox SB7790          | 36 ports  | 100 Gb/s (4x EDR InfiniBand)                          | 3.6 Tb/s        | $12k  | 3.3 $/Gb/s

• Optical switching is >10x cheaper than electrical packet routing…
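The table's last column is simply price divided by aggregate bandwidth; a minimal check of the slide's figures:

```python
# Price per Gb/s = price / (ports x bandwidth per port).
def price_per_gbps(ports, gbps_per_port, price_usd):
    return price_usd / (ports * gbps_per_port)

calient  = price_per_gbps(320, 400, 40_000)  # ~0.31 $/Gb/s
mellanox = price_per_gbps(36, 100, 12_000)   # ~3.33 $/Gb/s
ratio = mellanox / calient                   # >10x in favor of optical
```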
21. Optical switch vs. electrical packet router
• (Random-access) buffering plays a crucial role:
– Without buffering, end-to-end scheduling is required
– With buffering, scheduling is done link after link
– Without buffering, no back-to-back transmissions
• Packet routers also allow differentiated QoS, error correction, etc.
• A packet router offers much higher "value" than an optical switch
– My personal guess: at least 30x more value (for a 10 ns optical switching time)
• Optical switches are not competing with packet routers for "daily switching"
[Diagram: optical switch – bufferless; electrical packet router – buffered.]
22. Our FPGA-Controlled Switch Test-Bed
• 16 high-speed DACs enable test and control of an integrated photonic switch circuit
[Photos and chart: an FPGA with 16 high-speed DACs/ADCs driving a chip with PIN shifters, thermal tuners, 3 dB couplers, and a grating coupler array; measured on-chip loss (dB, 1–5) per path category (B: bar, C: cross, W: waveguide, X: crossing).]
23. Transitioning to Novel Modular Architectures…
• Modular architecture and control plane
• Avoids on-chip crossings
• Fully non-blocking
• Path-independent insertion loss
• Low crosstalk
[D. Nikolova, D. M. Calhoun, Y. Liu, S. Rumley, A. Novack, T. Baehr-Jones, M. Hochberg, K. Bergman, "Modular architecture for fully non-blocking silicon photonic switch fabric", Nature Microsystems & Nanoengineering 3 (1607), Jan 2017.]
24. AIM Datacom 2nd Tapeout Run
• More effort poured into monolithically integrated photonic devices
• Novel switch devices and test structures submitted to the AIM platform

#  | Device                                    | Area
1  | 4x4x4 λ space-and-wavelength switch       | 1.9 mm x 2.6 mm
2  | 4x4 Si space switch                       | 1.4 mm x 2.3 mm
3  | 4x4 Si/SiN two-layered space switch       | 1.5 mm x 2.3 mm
4  | 2x2 double-gated/single-gated ring switch | 0.8 mm x 1.4 mm
5  | Crossing and escalator test structure     | 0.6 mm x 1 mm
6  | 1x2x8 λ MUX with rings                    | 1.2 mm x 0.2 mm
7  | 1x2x4 λ MUX with micro-disks              | 0.6 mm x 0.2 mm
8  | 2x2 double-gated MZM switch               | 3 mm x 0.4 mm
25. What optical networks are good at: bandwidth steering
• Put your fibers where the traffic is
• Example: 3D stencil traffic over a Dragonfly topology
– 20 groups, 462 nodes per group, 24x24x16 stencil
• Per-workload reconfiguration – avoids switching-time overhead
• No need for ultra-large radices – 8x8 is sufficient
[1] K. Wen, et al., "Flexfly: Enabling a Reconfigurable Dragonfly through Silicon Photonics", SC 2016
27. Intra-node bandwidth steering
• Alternative concept: use many low-radix optical switches
– 8x8 is realizable with today's technology
– Tens of switches can be collocated on a single chip
• Somewhat less flexible than the packet-routed counterpart
– Not all-to-all
– Reconfiguration takes microseconds
• But transparent for packets
– Point-to-point latency
– Energy efficient
[Diagram: a node with CMP1/CMP2, GPU1–GPU4, NIC1/NIC2, and memories.]
28. Conventional architecture
[Diagram: two views of a node (CMP1/CMP2, GPU1–GPU4, NIC1/NIC2, memories), contrasting the conventional architecture with the bandwidth-steered alternative.]
30. Conclusions
• Lack of bandwidth is threatening scalability
• For Exascale, we need to work on (priority-sorted):
– Costs of the optical part: automated packaging and testing; increased integration; a larger market
– Power of the electrical part: packet routers
– Finely cost/performance-optimized topologies [1]: taper optical bandwidth, but not too much; get as much as we can from optical cables
– Power of the optical part: energy-wise optimized designs; improved laser efficiency (technology or "tricks"); technological advances
– Costs of the electrical part
[1] M.Y. Teh, et al., "Design space exploration of the Dragonfly topology", ExaComm workshop (best paper), 2017
31. Conclusions
• All-optical interconnects
• Optical switches and packet routers are not directly comparable
• Bandwidth-steering-based architectures should be explored
• Optical switches should be used in addition to regular routers
[Diagram: conventional vs. bandwidth-steered node architectures (CMP1/CMP2, GPU1–GPU4, NIC1/NIC2, memories).]