Silicon photonics and optical connectivity technologies are evolving rapidly to keep up with exponential data growth in data centers. Data center traffic is doubling every 12 months, straining existing networking infrastructure. New technologies like silicon photonics are being deployed to enable higher-bandwidth 100G and emerging 400G interconnects within and between data centers, allowing data center networks to scale cost-effectively to support continued growth in data and in technologies like machine learning. Emerging optical modules like QSFP-DD, along with embedded optics mounted directly on boards, will drive adoption of 400G and higher speeds in the near future.
3. Emergence of Hyper-Scale Data Centers
Data center networks are struggling to keep up with exponential data growth.
• Worldwide annual data center traffic: 5 ZB (~5x global internet traffic)
• Facebook data center network design: >200K servers, 10K+ switches, >$1B
• Optical connectivity as % of networking spend: 45%+
[Photos: Facebook Data Center, Fort Worth, Texas; Google Data Center]
Source: Estimates from Facebook, Google, Cisco publications, and Intel network model
Other names and brands may be claimed as the property of others.
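The deck's headline numbers compound quickly: if traffic really doubles every 12 months, a short sketch shows what that does to the quoted 5 ZB/year figure (the projection horizon is our assumption, not a claim from the slides):

```python
# Hypothetical projection based on the deck's "doubling every 12 months"
# claim and the quoted 5 ZB/year of worldwide data center traffic.
def projected_traffic_zb(base_zb: float, years: float,
                         doubling_period_years: float = 1.0) -> float:
    """Annual traffic after `years`, doubling every `doubling_period_years`."""
    return base_zb * 2 ** (years / doubling_period_years)

# Four years of annual doubling turns 5 ZB/year into 80 ZB/year.
print(projected_traffic_zb(5, 4))  # -> 80.0
```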
4. Data Center Connectivity TAM
Data center total spend on 100G and 400G interconnects (Data Center 100G+ TAM): $0.7B (2016), $2.5B (2018), $4.6B (2020), spanning in-rack, across-row, across-DC, and between-DC links.
• Connected world, machine-to-machine traffic, data analytics & machine learning driving exponential data growth
• Continual innovation needed to support data growth
Source: Intel 2016 Market Model based in part on Crehan and Dell'Oro reports
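The TAM figures on this slide imply a compound annual growth rate of roughly 60%. A quick check (the dollar figures and years come from the slide; the CAGR formula is the standard one):

```python
# Implied compound annual growth rate of the 100G+ TAM:
# $0.7B in 2016 growing to $4.6B in 2020.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

growth = cagr(0.7, 4.6, 2020 - 2016)
print(f"{growth:.1%}")  # roughly 60% per year
```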
6. Intel® Silicon Photonics Product Overview
100G PSM4 QSFP (4 fibers for 100G):
• Up to 2 km reach on parallel single-mode fiber
• Fully MSA compliant
• In volume production
100G CWDM4 QSFP (single fiber for 100G):
• 2 km and 10 km reach on duplex single-mode fiber
• Fully MSA compliant
• Ramping now
Both modules use on-die hybrid lasers; the CWDM4 transmitter is a single die.
7. Intel® Silicon Photonics – Technology Overview
Silicon integration:
• Most integrated optics approach, enabled by Intel's hybrid laser technology with >90% coupling efficiency
• Capable of multiple optical wavelengths and integration of multiple optical components
Silicon scale:
• Advanced CMOS manufacturing process at Intel fabs on 300 mm wafers
Silicon backend:
• High-volume wafer-level testing
• Comprehensive on-wafer optical, electrical, and high-speed (RF) test capabilities
8. Hybrid Silicon Laser Fabrication
1. Wafer-level bonding: a plasma activation and bonding process bonds indium phosphide (InP) die and transfers them in parallel to the silicon device wafer.
2. InP substrate removal: only the active epi layers remain on the device wafer.
3. Laser fabrication through wafer-level processing.
9. Hybrid Laser: Benefits of Wafer-Scale Processing
• Lithographically defined laser: no critical alignment steps, low coupling loss
• CMOS process: integratable with other silicon photonic components
• Bonded InP bulk epi: large arrays of lasers with high density, scalable to multiple wavelengths
[Images: single-die CWDM4 transmitter; Mach-Zehnder modulator]
11. Fab: Adopt and Reuse
Advantages of CMOS manufacturing infrastructure and methodologies:
• Cycle time: fast TPT and priority processing can achieve an additional 3-4x improvement
• Change control: all changes are documented and reviewed (whitepapers and xCCB)
• Process control: 1000's of process and tool parameters are monitored and recorded
• In-line disposition: in-line data and control/spec limits are used to disposition material early
[Chart: lots tracked by LOT ID#, 8 wafers each]
12. Test: Adopt and Retrofit
Vertical couplers enable on-wafer optical testing, plotting optical power vs. Ibias for each DOE ID# against the design target. This also enables expansive designs of experiments (DOEs) to quickly converge on design targets.
Source: http://www.tel.com
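One common reduction of the optical-power-vs.-Ibias data mentioned above is extracting each laser's threshold current so DOE splits can be compared against the design target. The sketch below is illustrative only (synthetic data, simple linear-fit method); it is not Intel's actual test flow:

```python
# Hypothetical sketch: estimate laser threshold current from an L-I sweep
# (optical power vs. bias current) by extrapolating the above-threshold
# linear region back to zero optical power.
def threshold_current(ibias_ma, power_mw, lasing_points=3):
    """Threshold = current where the fitted lasing line crosses P = 0."""
    # Least-squares line through the last few (clearly lasing) points.
    xs = ibias_ma[-lasing_points:]
    ys = power_mw[-lasing_points:]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return -intercept / slope

# Synthetic sweep: negligible power below ~20 mA, then a linear L-I slope.
ibias = [10, 15, 20, 25, 30, 35, 40]
power = [0.0, 0.0, 0.0, 0.5, 1.0, 1.5, 2.0]
print(round(threshold_current(ibias, power), 1))  # -> 20.0
```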
13. Wafer-Level Test Advantages
• Time to data: wafers can be tested as soon as they arrive, eliminating wait time for dicing, polish, AR coating...
• Volume of data: automated wafer test can measure far more devices than manual test, 24x7
• Consistency of data: well-defined, consistent, and repeatable test plans ensure experiments can be analyzed properly
• Non-destructive tests: wafers can be tested and sent for additional processing
• Die sorting: eliminate bad die from the supply chain
14. Beyond 100G – Next-Generation Interfaces for 400G+
QSFP-DD: same faceplate density as QSFP (32 ports per 1RU) but with an eight-lane electrical interface and improved ability to dissipate power. http://www.qsfp-dd.com/
OSFP: http://osfpmsa.org/
COBO (Consortium for On-Board Optics): developing specifications for interchangeable and interoperable optical modules that can be mounted onto printed circuit boards. http://cobo.azurewebsites.net/
[Logos: QSFP-DD founders/promoters; COBO founding members. Intel Optical Engine photo is for representative purposes only; QSFP-DD image from http://www.qsfp-dd.com/]
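A back-of-envelope check of the QSFP-DD density claim, assuming the eight-lane electrical interface runs at 50 Gb/s per lane (the lane rate is our assumption here, consistent with the 50G I/O mentioned later in the deck):

```python
# Faceplate bandwidth implied by the QSFP-DD form factor:
# 32 ports per 1RU x 8 electrical lanes x assumed 50 Gb/s per lane.
def faceplate_bandwidth_tbps(ports_per_ru: int, lanes_per_port: int,
                             gbps_per_lane: float) -> float:
    return ports_per_ru * lanes_per_port * gbps_per_lane / 1000

# 32 x 8 x 50 Gb/s = 12.8 Tb/s per 1RU, matching the 12.8 Tb/s
# switch ASICs discussed on the roadmap slide.
print(faceplate_bandwidth_tbps(32, 8, 50))  # -> 12.8
```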
15. Microsoft View – Intra-DC Generation 2
[Network diagram: servers connect to the ToR over <3 m DAC at 50/100Gb; the ToR (X x 100Gb) connects upward over <20 m links at 100Gb or <20 m AOCs at 400Gb; Tier 1, Tier 2, and Tier 3 switches (X by 400Gb each) interconnect over <2,000 m 400Gb links with optical fan-out; inter-DC connections run at 100/200Gb (400G FlexE) for <80 km and at 100/150/200Gb (400G FlexE) over intercity metro for >80 km]
• 400Gb ecosystem
• Primarily 50Gb-attached servers, with initial 100Gb deployments
• Optics to the server likely with 100Gb deployments
• All links above the ToR are 400Gb
• 500 m DR optics used for all links
• FlexEthernet allows 400Gb inter-DC links
• The 400Gb ecosystem looks like a good time to transition to on-board optics
Source: Tom Issenhuth, Microsoft. Panel discussion at IDF. IDF16 CLDPN01 — Silicon Photonics and the Future of Optical Connectivity in the Data Center. http://myeventagenda.com/sessions/0B9F4191-1C29-408A-8B61-65D7520025A8/14/5#sessionID=1486
16. Possible Network Architecture Evolution
Today: core network / inter data center, spine, leaf, ToR, servers.
Future: the same hierarchy with a super spine/core tier added above the spine.
18. Future of Optical Connectivity in Data Centers: Expected Form Factor and Bandwidth Evolution
• 2016/2017 — 100G MSA pluggable: enabling the transition to 100G switch-to-switch connectivity with 25G I/O
• 2018/2019 — 400G MSA pluggable: supporting next-generation switches (12.8Tb/s and 50G I/O)
• 2018/2019 — 400/800G embedded: embedded optics for density, signal integrity, and power
• 2020+ — high-density integrated: address electrical I/O constraints; highest density; lowest system power
QSFP-DD image from: http://www.qsfp-dd.com. Intel Optical engine photo is for representative purposes only.
Available/announced switch ASICs across these generations: 3.2/6.4 Tb/s, 6.5 Tb/s, 12.8 Tb/s, and 12.8 Tb/s or 25.6 Tb/s.
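The ASIC capacities above map directly to front-panel port counts. A small sketch, assuming all ASIC bandwidth is exposed through uniform non-blocking ports (a simplification; real switches mix port speeds):

```python
# Port counts implied by a given switch ASIC capacity, assuming the
# full bandwidth is exposed as uniform non-blocking front-panel ports.
def port_count(asic_tbps: float, port_gbps: int) -> int:
    """Ports of a given speed a non-blocking ASIC can drive."""
    return int(asic_tbps * 1000 // port_gbps)

for asic in (3.2, 6.4, 12.8, 25.6):
    print(f"{asic} Tb/s -> {port_count(asic, 400)} x 400G "
          f"or {port_count(asic, 100)} x 100G")
```

For example, a 12.8 Tb/s ASIC fills exactly one 1RU faceplate of 32 QSFP-DD ports at 400G each.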