Virtualisation Wkshop 071206 - SCU.ppt
 

  • Good afternoon everyone. My name is Luke Walford and I will be speaking today about the journey undertaken by IT&TS at Southern Cross University in server and storage consolidation and virtualisation. We will have Q&A at the end of the presentation, so if you could hold any questions until then, that would be appreciated. Thank you.
  • The topics I will be covering include, firstly, a very brief overview of SCU as a university. We operate three campuses: Tweed Heads; Lismore, our main administrative centre; and Coffs Harbour, a co-location campus shared with TAFE and Year 11–12 secondary students. We also have a satellite campus at the Sydney Hotel School, an industry partnership, and a satellite campus in Melbourne. Approx. 14,000 students and 800 staff.
  • The topics I will be covering include: firstly, a very brief overview of the study itself to give some context to the rest of the presentation; secondly, some detail on the research problem, including the research aims and objectives; thirdly, the literature review; fourthly, a brief coverage of the methodology used for the study; and finally, an overview of the findings of the study before going into a more detailed discussion of those findings and presenting the “Security Practitioner’s Management Model”.
  • SCU consists of three main campuses: Tweed Heads, Lismore and Coffs Harbour (shared by SCU, Year 12 and NSW TAFE); satellite campuses at the Sydney Hotel School and the Melbourne Business School (partnerships); approx. 14,000 students, 50% external and 50% internal; approx. 800 full-time staff; and 1 Gb/s connectivity between campuses.
  • DEC Alpha systems to be phased out by the end of 2007.
  • State of play, the server journey, 2002: the SCU data centre was a mishmash of various server towers, PCs, Macintoshes, DEC Alpha DS20s and DS10s, workstations, antiquated stakeholder systems and scavenged PCs. For example, the University timetabling system operated on a standalone, no-name rebuilt 133 MHz / 64 MB PC running FileMaker Pro databases; master DNS, Kerberos and RADIUS services operated on a DEC system purchased in 1994 called CYCLOPS, only retired 12 months ago. A very well engineered platform, though always a worry when rebooting. Service levels suffered as a result of legacy platform capacity constraints, hardware failures, etc. Data centre space constraints started to bite as a result of the antiquated interlocked shelving systems used to house legacy tower formats, which did not lend themselves to effective use of space. Funding: the funding model for replacement was non-existent; the strategy was to keep extending maintenance, which proved costly in terms of both service levels and dollars. We could not continue.

Virtualisation Wkshop 071206 - SCU.ppt Presentation Transcript

  • SCU Warts & All: the Server and Storage Consolidation and Virtualisation Experience
  • Topics:
    • About SCU
    • Technical environment
    • Server Infrastructure
    • Storage Infrastructure
    • Virtualization
    • Next steps
    • ?
  • About SCU
    • Campuses located at Tweed Heads, Lismore and Coffs Harbour
    • Satellites located at Sydney (Hotel School) and Melbourne (School of Business)
    • Approx. 14K students
    • Approx. 800 staff
  • Technical Environment
    • Intercampus connectivity: 1 Gb/s via AARNet for the major campuses, with 100 Mb/s for Sydney and 512 kb/s ADSL at Melbourne,
    • Networking infrastructure: Cisco from edge to core, a mix of 6500s and 3750s,
    • Server operating systems include Novell NetWare, Windows 2003, Red Hat Linux, Sun Solaris and some Tru64,
    • Server platforms: rack-mount HP DLs, Dell PowerEdge 1650–1850, HP BL20 blades, various Sun UltraSPARC platforms, and some DEC Alpha
  • Technical Environment
    • Storage platforms: HP EVA 3000/4000, 56-spindle capacity, currently operating 146 GB FC drives (7 TB); MSA1500s currently operating 300 GB Ultra SCSI and 500 GB SATA drives,
    • Enterprise backup: HP Data Protector 6.0, streaming to an MSL6060 LTO2 library,
    • Virtualisation platform: ESX 2SNP, currently operating 8 licenses; guest environments are a mixture of Novell, Windows 2000–2003 and Red Hat Linux.
  • Server Infrastructure
    Server Infrastructure
    • 2002: the data centre was fast becoming a bottleneck for growth as a result of space constraints, due to the mishmash of various server configurations and footprints housed in interlocking, space-hungry shelving.
    • End of 2002: SCU entered into a lease agreement, opening the way for replacement of the entire server fleet, approx. 40 physical platforms at the time.
    • From 2003 to 2006 server numbers increased by around 50%, to approx. 95 physical platforms today,
    Server Infrastructure
    • A significant portion of this growth can be attributed to a normalization of the platforms required to deliver services reliably, i.e. the impact of deploying N+X tier architectures for high availability, etc.
    • 2003: server platforms were standardized around 19-inch rack-mount systems, a combination of Dell and HP platforms,
    • 2005 – 2006 saw the introduction of blade servers and VMware,
    Server Infrastructure
    • 2002 – 2005 saw a turnaround in data centre constraints from space to power and cooling, requiring additional investment. Power and air are still not as good as they could be; we seem to be playing catch-up,
    • Server explosion rather than consolidation, although there is some light on the horizon with VMware,
    • Status today: the server fleet is reliable and asset management is in place.
  • Storage Infrastructure
    Storage Infrastructure
    • 2002 state of play: all services operated with localized or direct-attached disk, some RAIDed and some not; services were constrained as drives reached capacity, and costs increased as more array devices were brought in to plug the gaps,
    • The writing was definitely on the wall,
    • Total storage under management: 1.2 TB, including system disks,
    Storage Infrastructure
    • We tendered for a Storage Area Network (SAN) sized at 3.5 TB, planning for 30% growth per annum and a 5-year replacement cycle for the SAN.
    • As a result of the tender we ended up with a 6 TB HP EVA 3000 with dual controllers and redundant fabric,
    • We also had to back this thing up, so the tender included a requirement for an enterprise-level backup and recovery solution: HP Data Protector and an MSL6060 LTO2 library,
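The sizing above lends itself to a quick back-of-envelope check. A minimal sketch of the compound-growth projection, using the 3.5 TB starting point, 30% per annum and 5-year cycle quoted on this slide (the calculation itself is a standard projection, not something presented in the talk):

```python
# Project SAN capacity demand under compound annual growth.
# Figures from the slide: 3.5 TB starting demand, 30%/annum,
# 5-year replacement cycle.

def projected_capacity(start_tb: float, annual_growth: float, years: int) -> float:
    """Capacity needed after `years` of compound growth."""
    return start_tb * (1 + annual_growth) ** years

for year in range(6):
    print(f"year {year}: {projected_capacity(3.5, 0.30, year):.1f} TB")
```

By this projection, roughly 13 TB is needed by year 5, and the 6 TB actually purchased covers only about two years of 30% growth, which is consistent with the capacity crunch described on the following slides.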
    Storage Infrastructure
    • Within 6 months, and with the assistance of the vendor, we had consolidated all service storage and backup requirements onto the SAN. Brilliant, with capacity to burn, or so we thought!
    • 2003 went past, then 2004, then halfway through 2005 we were at capacity; remember, we started with only 1.2 TB in 2003 and were now at 6 TB,
    Storage Infrastructure
    • Infrastructure rollover was due in 2008, so we needed to buy time; adding the remaining 650 GB of usable capacity got us to 2006.
    • 2006 was crunch time: the numbers on replacing a shelf of our 146 GB FC disks with 300 GB FC did not add up in terms of value.
    • For approximately the same cost we could obtain a tier-2 HP MSA1500 running 56 Ultra SCSI 3 300 GB drives: approx. 12 TB of usable space.
    Storage Infrastructure
    • We are now in the process of migrating around 4 TB off our SAN onto the MSA; services will include file and print, development environments and other unstructured data, plus disk-to-disk backup capacity,
    • We should now easily make the rollover timeframe of the current SAN,
    • We believe a large component of this storage uptake was a normalisation of usage, as the organisation had been constrained for so long by legacy infrastructure,
    Storage Infrastructure
    • Apart from our capacity issues we do love our SAN…
  • Virtualisation
    Virtualisation
    • Why? Primarily, it has allowed us to cost-effectively provision development/testing and less critical production environments, covering the shortfall for small project budgets that struggle to fund the necessary infrastructure,
    • Bigger picture! As we become more experienced with VMware, we plan to assess all critical services, in line with asset replacement schedules, for the virtualisation option,
    Virtualisation
    • Story to date: in 2005 we acquired 3 chassis of HP BL20 blades and 3 ESX 2-way licences with VirtualCenter,
    • By the end of 2006 we had grown to 8 ESX 2-way licences operating approx. 40 mixed environments, including Novell, Windows 2003 and Red Hat Linux (predominantly development/test systems),
    • Early 2007 will see our CRM system implemented completely in VMware, which should also lift the VM profile for all the doubters out there,
    Virtualisation
    • As a result of VMware we have seen an explosion of environments, though this can also be attributed to a normalisation of actual environment needs,
    • Provisioning controls: as VMware lends itself well to on-demand provisioning, controls to regulate growth are crucial, as ideas and whims can easily be acted upon,
    • Initial challenges: breaking stakeholder mindsets of physical infrastructure ownership versus virtualisation, and dispelling internal fear campaigns around performance and capacity when 90% of the server infrastructure operates at only 20% to 30% utilisation.
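That utilisation figure is the core of the consolidation case. A rough sketch of the arithmetic, where the 70% per-host target (headroom for peaks) and the sample numbers are illustrative assumptions, not figures from the talk:

```python
import math

# Illustrative consolidation estimate: if most physical servers sit
# at 20-30% utilisation, the aggregate load fits on far fewer
# virtualisation hosts of comparable size. target_util is an assumed
# planning ceiling per host, leaving headroom for peaks.

def hosts_needed(n_servers: int, avg_util: float, target_util: float = 0.7) -> int:
    """Equally sized hosts required to carry n_servers * avg_util of
    load without exceeding target_util per host (ceiling division)."""
    return math.ceil(n_servers * avg_util / target_util)

print(hosts_needed(40, 0.25))  # e.g. 40 servers averaging 25% load
```

Even this crude model suggests a fleet of lightly loaded servers collapses onto a fraction of the hardware, which is the answer to the performance-and-capacity fears mentioned above.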
    Virtualisation
    • Learning curve: the administrator learning curve is steep. Getting the VM farm architecture right at the start is crucial, and understanding the best way to provision storage and networking is essential (port, VLAN and LUN provisioning and structures, firewall nightmares),
    • Complexity: the combination of VMs and blades can add orders of magnitude of complexity; with blades operating 4 NICs and dual-plus HBAs, VM trunking can become a trifle complex (documentation and change management are crucial),
    Virtualisation
    • Server sizing: crucial to maximising your licence and service deployments. We currently operate standard 2-way, 9 GB VM environments, mainly due to funding constraints,
    • Licensing: we have found it cumbersome to manage yearly maintenance contracts for support, as they all fall due at different times. We have arranged co-termination with VMware, but we have to do this with each licence purchased. Is there a better way?
    Virtualisation
    • Today, VMware is a core component of our architecture plan going forward, with expected expansion into more critical architectures over the next couple of years.
    Next Steps
    • Determine consolidation and virtualisation targets,
    • Leverage asset replacement schedules to fund the re-engineering of targeted services onto the VM platform,
    • Integrate VMware into the design and development of the University's IT DRP/BCP planning,
    • Develop a framework for determining, deploying and managing virtual environments,
    • Continue with the expansion of the SAN architecture,
    • Investigate ILM options for unstructured data.
    Questions