Presentation from Networkshop46.
Brunel’s experience of designing networks and systems to use the Jisc shared data centre - by Simon Furber, Brunel University
Developing an internal business case, and the associated benefits and savings, to support use of the Jisc shared datacentre - by Mike Cope, UCL
Hyper-convergence - our journey to the future - by George Ford, University for the Creative Arts
3. Design Networks and Systems to use the Jisc Shared Datacentre
Simon Furber
Network and Infrastructure Manager
4. Brunel University London
Build site for our new Teaching and Learning Centre
The John Crank Building
5. Brunel University London
The issues we had
• The primary data centre DC1
• One of the main network nodes and half of the fibre end points (the original node 0)
• Janet connections
• BT Connections
• Virgin Media Connections (** still in there **)
• The Computer Centre Staff and IT work areas
• There was no space on campus
• There was no time
• The construction of the new Teaching and Learning Centre was a high priority
6. Brunel University London
The constraints
• No new data centre on site
• Confusion over the cloud and why that didn’t fix all our problems
• Scale of the project
• The timing and pressure to demolish
• Separating the PoP from the Data Centre
• Planning and funding
7. Brunel University London
The John Crank Decant Project
• A programme board has oversight of all aspects of this project
• Re-site all DC1 assets into DC2 or move to the SDC
• Retire old hardware or virtualise
• Find a new home for the core network and its services
• Limited to a 55 m radius, to preserve and re-terminate as much of the existing fibre as possible
• Design and implement a new fibre network to remove single dependencies on John Crank
• Find a new home for the Computer Centre
• Staff
• Teaching IT Workareas
• DR Facilities
• Equipment and Storage
• New storage and VMware host procurement
8. Brunel University London
We had some luck
• We already had a second data centre, which gave us resilience for BC and DR
• We became part of the Janet Optical Network (we are very lucky)
• Links from Brunel to IC and from Brunel via RHUL to UCL - 2 x 40G bearers
• We have two nodes and avoided putting one of them in DC1
• The Jisc Shared Data Centre at Infinity (now Virtus) was just down the road
• This had the same Janet fibre infrastructure
• The timing of the VMware hardware and storage replacements
• We were also able to avoid using DC1 for new services
9. Brunel University London
We built a Schneider cube at the SDC
• Pod B, DH3 Slough SDC
• 12 racks with enclosed cold aisle and doors
• Utilising the Janet Optical Network, we ordered direct circuits to the SDC
• Designed the layouts and monitoring (see the sketch after this list)
• Environment
• CCTV
• Operate it as a lights-out secondary datacentre
• New VMware hosts and storage were deployed straight to the SDC
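Since the cube runs as a lights-out site, remote environment monitoring does the job that walking the aisles would otherwise do. As a minimal sketch of that idea, the Python below polls a temperature sensor over SNMP, assuming the classic pysnmp 4.x API; the hostname, OID, community string and threshold are all placeholders, not details of the Eaton/Sensorium kit credited later in the deck.

```python
# Hypothetical SNMP temperature poll for a lights-out site.
# Hostname, OID, community and threshold are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SENSOR_HOST = "env-monitor.sdc.example.ac.uk"  # placeholder appliance
TEMP_OID = "1.3.6.1.4.1.99999.1.1.0"           # placeholder OID, tenths of a degree C
ALERT_ABOVE_C = 27.0                           # assumed cold-aisle ceiling

def read_temperature() -> float:
    """Fetch one reading and convert tenths of a degree C to degrees C."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),  # SNMP v2c
        UdpTransportTarget((SENSOR_HOST, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(TEMP_OID)),
    ))
    if error_indication or error_status:
        raise RuntimeError(f"SNMP poll failed: {error_indication or error_status}")
    return int(var_binds[0][1]) / 10.0

if __name__ == "__main__":
    temp = read_temperature()
    print(f"Cold aisle: {temp:.1f} C")
    if temp > ALERT_ABOVE_C:
        print("ALERT: temperature above threshold at the lights-out site")
```

Wired into whatever alerting the NOC already watches, a poll like this is usually enough to catch a cooling failure before the hardware does.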
13. Brunel University London
So we built one
• 4 x 10G dual-path links via the Janet Network, point to point
• Dual Ciena nodes at either end
• 4 resilient layer 3 PtP links with Cisco OTV to transport the VLANs
• 4 x Nexus 7702, 2 x Nexus 5500s and 4 x MDS 9250
• Layer 2 stretched and layer 3 separation (see the reachability sketch after this list)
• FCIP for storage
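A stretched layer 2 domain can break quietly, so after any change it is worth verifying that each extended VLAN is still reachable from both sites. The sketch below is a hypothetical check, not part of the deployment described above: the VLAN IDs and test addresses are invented, and it assumes a Linux host where ping -c/-W is available.

```python
# Hypothetical reachability check for VLANs stretched between DC2 and the SDC.
# VLAN IDs and test addresses are invented; adapt to the real addressing plan.
import subprocess

# One test address per extended VLAN (e.g. the gateway at the far site).
EXTENDED_VLANS = {
    100: "10.100.0.1",
    101: "10.101.0.1",
    102: "10.102.0.1",
}

def reachable(addr: str) -> bool:
    """Send a single ICMP echo with a 2 s timeout (Linux ping syntax)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", addr],
                            capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    for vlan, addr in sorted(EXTENDED_VLANS.items()):
        print(f"VLAN {vlan} ({addr}): {'OK' if reachable(addr) else 'FAILED'}")
```

Run from a host at each site, a loop like this gives a quick view of which extended VLANs survived a change window.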
14. Brunel University London
We built a new PoP
• Modular containerised solution
• Generator and UPS backed
• Designed and built to our specification
• 7 racks with separate hot and cold aisles, with free cooling most of the time
• Not intended as a DC, but will host infrastructure services
• We will be moving the second Janet node into it
• Designed everything
• Operate it as a lights-out facility
• New VMware hosts and storage will not be deployed into it
• Phased migration/withdrawal of service about to begin
18. Brunel University London
The John Crank Decant Project – Strategic Outcome
• Moved one data centre off site into the Jisc shared data centre
• Virtualised and retired old server hardware (reduced migration; see the inventory sketch after this list)
• Moving services, where applicable, to the cloud
• Procured new server hardware and deployed it strategically (reduced migration)
• Procured new storage and deployed it strategically (reduced migration)
• Moved from VMware 5.x to 6.x
• New fibre infrastructure designed and being built
• Ability to optimise and reorganise infrastructure, minimising disruption
• Did not achieve Janet connectivity at the SDC - future work
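As a hedged illustration of the virtualise-or-retire step, the sketch below uses the pyVmomi SDK to list every VM with its host and power state, one way to build the inventory such decisions need. The vCenter hostname and credentials are placeholders, and this is not the tooling the project actually used.

```python
# Hypothetical inventory pass: list VMs to plan virtualisation and retirement.
# The vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host="vcenter.example.ac.uk", user="readonly", pwd="secret"):
    ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            s = vm.summary
            esx = s.runtime.host.name if s.runtime.host else "unknown"
            print(f"{s.config.name}: host={esx} power={s.runtime.powerState}")
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_vms()
```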
19. Brunel University London
The Experience
• Very complex
• Planning is key, and knowing that plans don't survive contact with reality
• Good partners who are willing to be flexible
• JISC / Virtus
• ON365 / Schneider / Kinetic IT
• MAVIN / Eaton / Sensorium
• HSL / EMCORE / Brandrex / Commscope / Excel / Prism
• Cisco / BT / Dell
• Strategic designs need attention to detail
• Programme Management – We had a good one who kept us honest and kept his head
56. MIGRATE CAMPUS VPNs
[Diagram: the legacy SAN, blade and network estate on 1 Gbps connections alongside the hyper-converged stack of resource units on 10 Gbps connections, both serving campus and public services]
60. LESSONS LEARNED
Design Before You Start
Factor in Disk Space Resilience
Factor in Transfer Method
Factor in Transfer Speed (see the estimate sketch after this list)
Hidden Costs of the Build Method
Solution Testing
Test, Test, Test
Plan for Expansion
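Transfer speed deserves numbers rather than instinct: a bulk copy that looks trivial on paper can consume the whole migration window on a slow link. A back-of-envelope estimate like the sketch below (all figures invented) makes that visible before committing to a transfer method.

```python
# Back-of-envelope migration transfer-time estimate. All inputs are examples.
def transfer_hours(data_tb: float, link_gbps: float,
                   efficiency: float = 0.7) -> float:
    """Hours to move data_tb terabytes over a link_gbps link, assuming an
    effective throughput fraction for protocol overhead and contention."""
    bits = data_tb * 8e12                          # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

if __name__ == "__main__":
    for gbps in (1, 4, 10, 40):
        print(f"50 TB over {gbps:>2} Gbps: ~{transfer_hours(50, gbps):.0f} h")
```

With these assumptions, 50 TB takes roughly a week at 1 Gbps but well under a day at 10 Gbps, a gap large enough to change the chosen transfer method.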
61. PLANNING NEXT STEPS
[Diagram: the hyper-converged resource units on 10 Gbps connections serving campus and public services, with a backup SAN and headroom for new services]
62. TRUSTED SUPPLIERS
63. SAFETY / SECURITY BY DESIGN