19. EXOGENI
• 14 GPO-funded racks
– Partnership between RENCI, Duke and IBM
– IBM x3650 M3/M4 servers
• 1x146GB 10K SAS hard drive + 1x500GB secondary drive
• 48G RAM
• Dual-socket 8-core CPU w/ Sandy Bridge
• 10G dual-port Chelsio adapter
– BNT 8264 10G/40G OpenFlow switch
– DS3512 6TB sliverable storage
• iSCSI interface for head node image storage as well as
experimenter slivering
• Each rack is a small networked cloud
– OpenStack-based
– EC2 nomenclature for node sizes (m1.small, m1.large, etc.)
– Interconnected by combination of dynamic and static L2
circuits through regionals and national backbones
• http://wiki.exogeni.net
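Since each rack is an OpenStack cloud exposing EC2-style node sizes, slivering amounts to picking the smallest flavor that satisfies a request. A minimal sketch, assuming the classic OpenStack default m1.* flavor dimensions; actual values on any given ExoGENI rack may differ.

```python
# Illustrative sketch: EC2-style node sizes as used by OpenStack-based racks.
# The vCPU/RAM/disk values below are the classic OpenStack default flavors
# and are placeholders, not the definitive ExoGENI configuration.
FLAVORS = {
    "m1.small":  {"vcpus": 1, "ram_mb": 2048,  "disk_gb": 20},
    "m1.medium": {"vcpus": 2, "ram_mb": 4096,  "disk_gb": 40},
    "m1.large":  {"vcpus": 4, "ram_mb": 8192,  "disk_gb": 80},
    "m1.xlarge": {"vcpus": 8, "ram_mb": 16384, "disk_gb": 160},
}

def smallest_fitting_flavor(vcpus_needed: int, ram_mb_needed: int):
    """Return the name of the smallest flavor that satisfies the request."""
    for name, spec in sorted(FLAVORS.items(), key=lambda kv: kv[1]["vcpus"]):
        if spec["vcpus"] >= vcpus_needed and spec["ram_mb"] >= ram_mb_needed:
            return name
    return None  # request exceeds the largest flavor

print(smallest_fitting_flavor(2, 4096))   # m1.medium
print(smallest_fitting_flavor(6, 8192))   # m1.xlarge
```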
20. Configuration
Total hosts: 16

Management Node (x1)
  CPU: 2-socket, 8-core E5-2690 @ 2.9GHz
  Memory: 48GB
  HDD: 1x300GB 15K SAS, 1x500GB 7200 SATA
  NIC: 2x10GE (vNIC)

Worker Node (x15)
  CPU: 2-socket, 8-core E5-2690 @ 2.9GHz
  Memory: 192GB
  HDD: 1x300GB 15K SAS, 1x500GB 7200 SATA
  NIC: 2x10GE (vNIC)

Management Switch: Catalyst 2960S
OpenFlow Switch: Nexus 3064
VPN Appliance: ASA 5512 VPN Concentrator
UCS 5108 Blade Server Chassis x2
UCS 6248 Fabric Interconnect x2
NOTE: Solution can be scaled down by using just a single UCS B-series shelf
* 16 RU excludes the Nexus Switch element & NetApp storage
• Power, cooling, and space efficient, “Wire-once”, policy driven, self-provisioning
environment
• UCS B-series uses the Intel E5-2690 CPU, offering 240 cores for VM hosting in 16RU*
• Data-center class solution with flexible expansion options
– ability to scale to thousands of cores
24. MobilityFirst Design: Architecture Features
• Named devices, content, and context; public-key-based Global Identifier (GUID); human-readable names
• Heterogeneous wireless access; end-point mobility with multi-homing
• Network mobility & disconnected mode; ad-hoc p2p mode
• Routers with integrated storage & computing; in-network content cache
• Hop-by-hop file transport
• Storage-aware intra-domain routing; edge-aware inter-domain routing
• Service API with unicast, multi-homing, mcast, anycast, content query, etc.
• Strong authentication, privacy
• Connectionless packet-switched network with hybrid name/address routing
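Because MobilityFirst GUIDs are derived from public keys, the identifier is self-certifying: re-hashing a presented key verifies ownership. A minimal sketch; the specific hash function and the 160-bit truncation are assumptions for illustration, not the MobilityFirst specification.

```python
import hashlib

def derive_guid(public_key_bytes: bytes) -> str:
    """Derive a flat, self-certifying identifier by hashing a public key.
    The choice of SHA-256 truncated to 160 bits is an illustrative
    assumption, not the definitive MobilityFirst derivation."""
    digest = hashlib.sha256(public_key_bytes).digest()
    return digest[:20].hex()  # 160-bit GUID as 40 hex characters

# Anyone holding the key can prove ownership of the GUID; anyone holding
# the GUID can check a presented key simply by re-hashing it.
guid = derive_guid(b"example public key material")
print(len(guid))  # 40
```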
26. Potential use of GENI in NEBULA*
• GENI Technology:
- Enables experiments involving multiple sites
- Isolates NEBULA experiments to a single VLAN
- Eliminates need for special HW & Address Translation
• Potential Uses:
- Multisite student collaboration on Ncore (NEBULA Core)
- Testbed for NDP (NEBULA Data Plane) experiments
- Platform for NVENT (NEBULA extensible control plane)
- * No GENI-enabled switches on NEBULA campuses, so these are preliminary thoughts
27. NDN Experimental Infrastructure
• Performance analysis, API feedback, and future experiments
• LED lighting with embedded Linux interface: NDN control of industry-standard LED lighting by Philips Color Kinetics, which uses a proprietary UDP/IP protocol for fixture and power-supply configuration and control
• An embedded Linux controller (Gumstix Overo, 500 MHz ARM) connects to the NDN network on one interface and the lighting IP network on the other; power supplies are on the IP network, fixtures hang off a multi-drop serial bus
• Python and C software bootstraps, configures, and controls fixtures via NDN
• Every entity on the network has an asymmetric key pair: lighting fixtures, embedded interfaces, and applications (Figure 3: NDN Lighting Module)
• Replay attacks using signed interests are prevented by embedding a timestamp (or a smaller counter, per fixture) into each message before signing; maintaining this state per fixture is reasonable
• Key installation uses an interest "command" whose arguments are the symmetric key, encrypted under the device's public key, and the requested expiration date and time; the device replies with a ContentObject, encrypted with the symmetric key, confirming installation and the actual expiration date and time allowed
• CCNx C API feedback; experiments so far
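The replay defense described above (embed a timestamp or counter before signing; the fixture remembers the last value accepted) can be sketched as follows. HMAC stands in for NDN's signed interests, and the key name and message framing are illustrative assumptions.

```python
import hmac, hashlib

# Sketch of the slide's replay defense: a monotonically increasing counter
# is embedded in each command before signing, and the fixture rejects any
# command whose counter is not strictly newer than the last one accepted.
# HMAC is a stand-in for NDN signed interests; framing is illustrative.
KEY = b"per-fixture shared secret (illustrative)"

def sign_command(command: bytes, counter: int):
    msg = counter.to_bytes(8, "big") + command
    return msg, hmac.new(KEY, msg, hashlib.sha256).digest()

class Fixture:
    def __init__(self):
        self.last_counter = -1  # highest counter accepted so far

    def accept(self, msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False                      # bad signature
        counter = int.from_bytes(msg[:8], "big")
        if counter <= self.last_counter:
            return False                      # replayed or stale command
        self.last_counter = counter
        return True

fixture = Fixture()
msg, tag = sign_command(b"set-level 80", counter=1)
print(fixture.accept(msg, tag))   # True: fresh, correctly signed
print(fixture.accept(msg, tag))   # False: exact replay is rejected
```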
[Figure: field unit with watertight enclosure, WiFi router, computer, battery, and microphone]
♢ Pervasive/mobile computing “infrastructure-less”
testbeds with embedded hardware
♢ Real world settings for Internet-of-Things scenarios
♢ Open Network Lab (ONL)
♢ Controlled small-scale experiments, especially
forwarding
♢ NDN Overlay Testbed on public Internet
♢ Live application testing/use under realistic
conditions
♢ Routing and incremental deployment
♢ PlanetLab
♢ Large-scale experiments
♢ Supercharged PlanetLab Platform (SPP) Nodes
♢ High-performance CCNx/NDN forwarding
28. • Using SPP nodes
– Initial software running
on 5 nodes now
– Lead: Patrick Crowley
• No other clear needs
identified yet
• Possibilities:
– Large numbers of nodes
with significant
topology control
including local
broadcast
– Running natively over
something other than IP
– NDN “PlanetLab”
NDN and GENI
[Map: deployed SPP nodes in Salt Lake City, Kansas City, Houston, Atlanta, and Washington DC; NDN participating institutions: Yale, WashU, UIUC, Memphis, ColoState, PARC, UCLA, UCI, CAIDA/UCSD, Arizona]
29. eXpressive Internet Architecture (XIA)
CMU, BU, Wisconsin
XIA is exploring three concepts to address these issues:
• Diverse types of end-points
• Intrinsic security
• Flexible addressing
30. GENI and US Ignite
• US Ignite is a new organization that will promote advanced applications and infrastructure leveraging GENI research and technologies.
• GENI: research infrastructure for computer scientists, spanning CS research, CS experiments, campus and lab deployments, and regional and backbone networks, with its own members and policies
• US Ignite: public-private partnership for next-generation applications, spanning applied research, pre-commercial applications, and future commercial offerings over municipal and commercial networks, with app creation teams, service creators, and its own members and policies
• The two sides federate via GENI technology; experimental usage and demonstrations bridge them
• The US Ignite program promotes next-generation broadband applications in cities; GENI cooperates with it to carry research results into commercial use.
31. Internet2
• Brocade MLX Series
• Juniper MX Series edge routers, EX Series switches
• NEC ProgrammableFlow UNIVERGE PF5820
• GENI Rack:
– IBM RackSwitch G8264 (BNT 8264)
– Cisco Nexus 7004 / 3064
– HP E5406
32. Software-Defined Networking (SDN):
Support for OpenFlow 1.0 in true Layer 3 hybrid-port mode,
which enables OpenFlow forwarding and dynamic IPv4/IPv6
routing simultaneously on the same port
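OpenFlow 1.0, the version these hybrid ports support, frames every message with the same 8-byte header; a switch and controller open their session by exchanging HELLO messages. A minimal sketch of packing that header, per the OpenFlow 1.0 wire format.

```python
import struct

# Every OpenFlow message starts with an 8-byte header:
#   version (1 byte), type (1 byte), total length (2 bytes), xid (4 bytes).
# For OpenFlow 1.0 the version byte is 0x01; message type 0 is OFPT_HELLO.
OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0

def ofp_hello(xid: int) -> bytes:
    """Build a minimal OpenFlow 1.0 HELLO message (header only)."""
    length = 8  # a bare HELLO carries no body beyond the header
    return struct.pack("!BBHI", OFP_VERSION_1_0, OFPT_HELLO, length, xid)

msg = ofp_hello(xid=42)
print(msg.hex())  # 010000080000002a
```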
• Cisco GENI Rack OpenFlow-capable switches are used in "hybrid" mode, offering both traditional "learning" switch and OpenFlow 1.0 capabilities at the same time
• Nexus 3064 OpenFlow-capable switch provides high-density 10G ports
• Cisco Nexus 7000 Series Switches provide the foundation for Cisco® Unified Fabric. They are a modular data-center-class product line designed for highly scalable 1/10/40/100 Gigabit Ethernet networks with a fabric architecture that scales beyond 17 terabits per second (Tbps).
34. EX9200 programmable core switch for 10G, 40G, and 100Gbps software-defined networks. It runs the same 3+-year-old One/Trio ASIC as the MX, and the form factors are identical, but the chassis, line cards, switch fabric, routing engines, and software are not.
The EX9200 will be positioned against the Nexus 7000 "M" series in the data center, and against Cisco's Catalyst 6500E and HP's 7500/10500/12500 switches in campus cores with large physical or logical scale and/or 40/100G requirements.
Currently used in Internet2.
37. HP E5406 specifications
External I/O ports: 6 open module slots; supports a maximum of 24 10-GbE ports, or 144 autosensing 10/100/1000 ports, or 144 mini-GBICs, or a combination
Rack mounting: mounts in an EIA-standard 19 in. telco rack or equipment cabinet (hardware included); horizontal surface mounting only
Memory and processor: Gigabit module: ARM9 @ 200 MHz, 144 Mb QDR SDRAM packet buffer; 10G module: ARM9 @ 200 MHz, 36 Mb QDR SDRAM packet buffer; Management module: Freescale PowerPC 8540 @ 666 MHz, 4 MB flash, 128 MB compact flash, 256 MB DDR SDRAM
Latency: < 3.7 µs at 1000 Mb (FIFO, 64-byte packets); < 2.1 µs at 10 Gbps (FIFO, 64-byte packets)
Routing/switching capacity: 322.8 Gbps
Throughput: up to 240.2 million pps
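Datasheets quote latency and packet rate at 64-byte packets because minimum-size frames are the worst case for a forwarding engine. Standard Ethernet line-rate arithmetic (frame plus 8-byte preamble plus 12-byte inter-frame gap) shows how the quoted figures relate; this is a generic calculation, not taken from the E5406 datasheet.

```python
# Minimum-size frames maximize the packet rate a switch must sustain.
# On the wire, each Ethernet frame also carries an 8-byte preamble and
# a 12-byte inter-frame gap.
def line_rate_pps(link_bps: int, frame_bytes: int) -> int:
    wire_bytes = frame_bytes + 8 + 12      # frame + preamble + IFG
    return link_bps // (wire_bytes * 8)

# A single 10GbE port at 64-byte frames:
print(line_rate_pps(10_000_000_000, 64))   # 14880952 pps (~14.88 Mpps)

# So a quoted 240.2 Mpps corresponds to roughly sixteen 10GbE ports
# all forwarding minimum-size frames at full line rate.
print(round(240_200_000 / line_rate_pps(10_000_000_000, 64), 1))  # 16.1
```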