Green Remote Office Re-Architecture

Presentation Transcript

    • Centralizing Client/Server Applications and User Data
    • The confluence of the end of support for most of the remote office servers and the request to “DO SOMETHING!” about the servers in less than ideal locations (laundry rooms for example) that keep failing drove OKDHS to search for “a better way” to serve these office networks. The recent success of a mobility project provided the starting point.
    • This project employs virtualization at several levels to achieve different objectives, ranging from increasing the number of users per server in the terminal server farm to providing an individual, customizable desktop for some users. Many consider a terminal server session to be a “virtual desktop,” which adds confusion to the discussion.
    • In computing, platform virtualization is a term that refers to the abstraction of computer resources. Virtualization hides the physical characteristics of computing resources from their users, be they applications or end users. The term has been widely used since the 1960s. (Wikipedia)
      • A Hypervisor runs “on top” of hardware (CPU, Motherboard, etc.)
      • A Virtual Machine (guest) runs “on top” of the hypervisor
      • The OS and applications run “in” the guest
      • The entire guest is contained in a few files on the hardware storage
      • The guest is now portable, can be copied and replicated
      Why do we have so many servers? Because each app wants its own OS. Virtualization lets us run lots of apps and their OSes on one piece of hardware (server), just as the mainframe has offered for 40 years now.
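The consolidation idea above can be sketched numerically; a minimal capacity estimate, where the per-guest and per-host sizes are illustrative assumptions, not OKDHS figures:

```python
import math

def hosts_needed(guests, guest_ram_gb=4, guest_vcpus=1,
                 host_ram_gb=64, host_cores=16, cpu_overcommit=2.0):
    """Estimate physical hosts needed to run `guests` virtual machines.
    RAM is usually the binding constraint; CPU is often overcommitted."""
    by_ram = math.ceil(guests * guest_ram_gb / host_ram_gb)
    by_cpu = math.ceil(guests * guest_vcpus / (host_cores * cpu_overcommit))
    return max(by_ram, by_cpu)

# One guest per former remote-office server:
print(hosts_needed(168))  # 11 hosts instead of 168 separate boxes
```

Even with these conservative assumptions, 168 single-app servers collapse onto roughly a rack of hosts.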
      • Historically, a server was required in each office for:
      • Network addresses *
      • User files
      • Line of Business Application files *
      • PC patching
      • PC re-imaging
      • PC management
      * mission critical
      • Each of the 168 remote offices has a server supporting the local network.
      • These servers are going out of support, presenting an opportunity to re-evaluate how the remote networks are architected.
      • Mobile user architecture provided an alternative that can be used by all.
      • Moving the “Line of Business” (LOB) applications to terminal server allows all processing and data to reside in the data center.
      • A County Office
      • The Data center
      • A line connecting them
      • A Mobile user
      • A Cell tower and the Internet to connect the mobile user
      • Each remote office contains a LAN with a router, switches, server, printers, and PCs.
      • A WAN line connects the office to the data center.
      • All database, email, web, and application servers are in the one data center.
      • The remote office server provides application code, file storage, IP addresses, and a remote distribution/management point for PC management.
      • There are 5949 desktops, 3197 tablet PCs, and 168 remote servers for 8652 employees at 168 sites.
      • PowerBuilder code for the LOB (eligibility, SACWIS, etc.) applications resides on the server (for a single point of distribution per site) but is executed on the PCs.
        • The applications that are run on the PC connect to servers in the data center to access/update data. This results in much data traversing the WAN lines in both directions.
        • The server will be removed, the applications will be run on the terminal server, not the PC.
      • Running the applications on terminal server in the data center keeps all the data traffic on the high speed backbone, with only the screen images, keystrokes and mouse movements, and print traffic on the WAN lines.
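The traffic claim can be illustrated with a toy model; all per-transaction sizes below are assumptions chosen only to show the shape of the saving, not measured OKDHS values:

```python
def daily_wan_kb(transactions, kb_per_transaction):
    """KB crossing the WAN line per user per day."""
    return transactions * kb_per_transaction

# Classic mode: query results and updates cross the WAN
# (assume 200 KB per transaction).
local_mode = daily_wan_kb(500, 200)

# Terminal server mode: only screen deltas, keystrokes, and mouse
# movements cross the WAN (assume 12 KB per transaction).
ts_mode = daily_wan_kb(500, 12)

print(local_mode, ts_mode)  # 100000 6000
```

The data itself never leaves the high-speed backbone, so the WAN line carries an order of magnitude less traffic per user.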
    • Each user creates many connections to the data center. The local server stores application code, the code runs on the PC, and all data is in the data center.
      • LOB applications (KIDS/FACS): PC gets code from server, runs code, gets data from the Data Center, writes data to the Data Center
      • Printing: PC prints directly to local MFP
      • Read Policy: PC connects to InfoNet web server in Data Center
      • Mapped drives (user & workgroup): PC connects to local server
      • Email: PC connects to Email server in Data Center
      • Internet Access: PC connects through Data Center to Internet server
      • Mobile users access the LOB applications through Windows Terminal Server instead of running them locally due to support, security, and performance issues.
      • They have access to all of the functionality that is available in the office.
      • Application response time is faster in areas with high speed access.
    • Each user shares part of a Terminal Server, with one connection to the data center. Applications run on the Terminal Server; most traffic and data stay in the data center.
      • Tablet PC connects to the Terminal Server farm
      • LOB applications (KIDS/FACS): Terminal server connects to Mainframe/other server
      • Printing: Terminal server prints to office MFP
      • Read Policy: Terminal server connects to InfoNet web server
      • Mapped drives (user & workgroup): Terminal server connects to file server
      • Email: Terminal server connects to Email server
      • Internet Access: either direct from the Tablet, or via the Terminal server out to the Internet
      • Using the Mobile user architecture for all offices means:
            • No application code to distribute
            • All user data is in the data center (no remote backups to manage/secure)
            • No server to maintain in the remote offices
            • PCs function as thin clients, extending their useful life
            • All data traffic stays on the data center backbone network
            • Less traffic over the WAN lines
    • Each user shares part of a Terminal Server, with one connection to the data center. All users run applications on the terminal server, and all data is in the data center. No servers in the remote offices.
      • LOB applications (KIDS/FACS): Terminal server connects to Mainframe/other server
      • Printing: Terminal server prints to office MFP
      • Read Policy: Terminal server connects to InfoNet web server
      • Mapped drives (user & workgroup): Terminal server connects to file server
      • Email: Terminal server connects to Email server
      • Internet Access: Terminal server out to Internet
      • Offices will be converted a few at a time, with the rest operating in the old mode.
      • An initial pool of 20 spare servers (old) exists to replace or repair the remote servers still in use when failures occur.
      • 10 new servers were purchased to augment the support pool.
      • OKDHS-DSD staff will either exchange or repair remaining remote servers requiring maintenance during the roll-out.
      • Servers that are removed from the field offices during the roll-out will be reconditioned and added to the support pool.
      • The self-support model has been tested and found to yield superior service to the remote offices.
      • Terminal server is great, if all users need the same desktop.
      • Users with special applications need special consideration.
      • Options:
        • Continue to run special apps on the PC
        • Dedicated terminal servers with all the special software for these users.
        • Individual PCs in the data center (each a ‘terminal server’ for a single user – virtual desktop infrastructure, VDI) such as:
          • Racks of PC blades - or
          • Many virtual PCs hosted on a few servers
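The trade-off between the last two options can be sized with simple arithmetic; the power-user count and the virtual-PC density per host are assumptions for illustration only:

```python
import math

POWER_USERS = 200      # assumed number of users needing special apps
VMS_PER_HOST = 16      # assumed virtual-PC density per host server

blade_pcs = POWER_USERS                         # one blade PC per user
vdi_hosts = math.ceil(POWER_USERS / VMS_PER_HOST)

print(blade_pcs, vdi_hosts)  # 200 blades vs. 13 host servers
```

Blade PCs keep full hardware isolation per user; hosted virtual PCs need far fewer physical boxes, at the cost of sharing host resources.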
    • Virtual Desktop: each user has their own virtual PC or blade PC. All applications run on individual PCs in the data center (virtual or blade), and all data is in the data center. No servers in the remote offices.
      • User A, QMF: Blade PC connects to Mainframe/other server
      • User B, Printing: Virtual PC prints to office MFP
      • User C, TeleLogic: Blade PC connects to TeleLogic server
      • User D, Mapped drives (user & workgroup): Blade PC connects to file server
      • User E, Email: Virtual PC connects to Email server
      • User F, Internet Access: Virtual PC out to Internet
      • Supply terminal server desktops for the majority of users – the same whether mobile or desk-bound.
      • Supply individual ‘single session’ desktops (VDI) for the power users.
      • Remove the ‘out of support’ servers from the remote offices.
      • Self-support the remaining remote servers during the roll-out period.
      • Centralized LOB applications without a re-write
      • No remote backups or server maintenance
      • Improved user LOB application response time (productivity)
      • Less WAN line usage
      • Centralized data management/security
      • Extended useful life of PCs (only used for Remote Desktop).
      • Re-allocate the 10 remote server support employees.
      • No “down side”, since change is good!
    • Current Status
      • Three small sites (group homes with servers in laundry rooms) have been operating successfully for several months.
      • The first multi program, full size office is testing this architecture as we speak.
      • We will run a full month business cycle to collect data that will determine the future deployment of this architecture.
      • While green IT was not the only driving force in this architecture, there are green benefits:
      • Removing 168 field servers (HP ML530) and adding 64 blade servers (HP BL465) = a net reduction of 1,022,112 kWh/yr, saving $51,106/yr in electricity.
      * UPS power and cooling cost reductions not included
      • Eliminating 337 trips (23,400 miles) per year to maintain servers in remote offices, saving $24,800 per year.
      • OKDHS will have the infrastructure to support telecommuting and other remote working options.
      • 2006: replaced approx. 10,000 CRT monitors with 17” LCD screens = a reduction of 1,080,000 kWh/yr, saving $125,500/yr in electricity charges.
      • 2008: “tuning” the data center cooling system by raising the temperature from 67°F to 70°F and turning one AC unit off – expecting to save 180,000 kWh/yr, or $9,000/yr.
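The server-swap figure above can be sanity-checked from the stated totals; the 800 W average draw assumed for the ML530 is an illustrative guess, since the slide gives only the net numbers:

```python
HOURS_PER_YEAR = 24 * 365                  # 8,760

net_kwh = 1_022_112                        # stated net reduction
savings = 51_106                           # stated annual savings, USD

rate = savings / net_kwh                   # implied electricity rate, $/kWh
net_watts = net_kwh * 1000 / HOURS_PER_YEAR  # continuous draw removed

# Under an ASSUMED 800 W average draw per ML530, the implied BL465 draw:
bl465_watts = (168 * 800 - net_watts) / 64

print(round(rate, 3), round(net_watts), round(bl465_watts))
```

The implied rate of about $0.05/kWh matches the cooling-tuning figure ($9,000 for 180,000 kWh), and an implied blade draw under 300 W is plausible, so the stated numbers hang together.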
      • Microsoft LiveMeeting, rolled out to all employees 6 months ago:
      • 205 scheduled meetings
      • 54 ad hoc meetings
      • a total of 680 attendees
      • a total duration of 318 hours
      • Average meeting is 28 minutes
      • Waiting for travel reduction numbers
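The 28-minute figure checks out as average duration per attendee rather than per meeting; a quick verification from the numbers above:

```python
meetings = 205 + 54          # scheduled + ad hoc
attendees = 680
total_hours = 318

avg_min_per_attendee = total_hours * 60 / attendees
avg_min_per_meeting = total_hours * 60 / meetings

print(round(avg_min_per_attendee), round(avg_min_per_meeting))  # 28 74
```

So the stated average comes from attendee-hours; the meeting-level average works out to roughly 74 minutes.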
      • Recently completed IBM CDAT study indicates that 128 of 193 servers can be virtualized onto 4 servers
        • 21 servers are good candidates for blade servers
      • This HAS to have green implications!
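Those implications can be roughed out; the 500 W per-server figure below is an assumption for illustration, not a number from the CDAT study:

```python
HOURS_PER_YEAR = 8_760
ASSUMED_WATTS = 500          # assumed average draw of one old server

virtualizable = 128
target_hosts = 4
ratio = virtualizable / target_hosts          # guests per host

kwh_saved = ((virtualizable - target_hosts)
             * ASSUMED_WATTS * HOURS_PER_YEAR / 1000)

print(ratio, round(kwh_saved))  # 32.0 543120
```

A 32:1 consolidation would retire 124 boxes; at the assumed draw that is on the order of half a million kWh per year, comparable to the remote-office savings already quantified.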
      • DR of virtualized servers requires less hardware
      • Deploy Thin Clients instead of new PCs ( less power )
      • 4-day work weeks of 10-hour days
      • Telecommuting
    • Thank You!