Datacenter of the Future - Jon Greaves


Evolution of the Datacenter model and scalability via virtualization and Cloud hosting services.


FEATURE ARTICLE

By Jon Greaves
Chief Technology Officer
Carpathia Hosting, Inc.

In the 60's and 70's it was commonplace for companies to use computer resources from a Service Bureau, whereby the Service Bureau sold time or computing services on a single mainframe. Pioneers of Service Bureau computing include IBM, Tymshare and GE. In fact, the 1973 Auerbach Guide to Timesharing lists 125 time-sharing services, many with a focus on specific applications [1]. Timesharing eventually gave way to client-server computing as users looked for richer graphical interfaces and realized the economics of moving to much smaller servers costing significantly less than mainframes. The 90's and the new millennium ushered in distributed, web-based applications that further distanced companies from a centralized, Service Bureau computing model. Today, however, we are seeing the reemergence of an old trend ... the rise of centralized, shared computing resources.

Going back to the 60's and 70's, Service Bureaus published access to their applications over dialup or packet-switched networks and used the time-sharing capabilities of mini or mainframe computers to allow multiple customers simultaneous access. This proved to be a successful business model when customers evaluated the expense of purchasing their own minicomputer, installing the applications and then running jobs vs. renting capacity on an existing system. As a result of users renting applications by time-slices, the utilization of mini/mainframe computers increased, making this a very natural solution for organizations with available computing power or for dedicated Service Bureaus.
THE DATACENTER OF THE FUTURE

With the advent of advances in networking technology, desktop computers became more powerful and readily available, allowing this time-sharing approach to evolve into client-server computing. Applications could now be distributed between a desktop computer and a back-end server.

Client-server gave way to three- or n-tiered architectures, where the business functions were extracted from the application's user interface or data. This allowed the functionality of applications to be easily extended without necessarily having to recode the user interface or database back-ends.

Fast forward to today's datacenter. With the advances in virtualization technology now providing the equivalent of time-sharing not only for computing resources, but for storage and networking, we see the re-emergence of the Service Bureau model in the form of Software as a Service (SaaS) vendors. These vendors publish their applications in much the same way as the original Service Bureaus.

While SaaS vendors focus on their application being delivered in this model, a number of companies have generalized this capability, allowing you to run your own applications in this mode with little or no modification. This technology has been labeled with many names, ranging from platform or infrastructure as a service to the overused (and abused) cloud computing.

These solutions offer great advantages to customers with workloads that fit this model - typically those comfortable with lower service level agreements (around 99.9% availability) and with customer data being hosted in such a shared cloud. Early adopters of such technology are the "Web 2.0" startups that do not see strategic value - at least at the current point in their evolution - in running and operating their own infrastructure, or those using such technology as a tier in their computing or storage strategy.

The benefits of adopting a highly virtualized computing platform have also caught the eye of many enterprises and government agencies. While they may not be able to adopt a shared cloud solution, they do desire the potential savings such a solution can provide. This area is evolving into what's basically a private or semi-private cloud, with the latter being infrastructure shared between multiple divisions or partners that have compatible availability and security profiles.

Some customers are creating tiers of computing. By blending both models of shared and private clouds - selecting a private infrastructure for core applications requiring a greater degree of control over availability and performance - they can then leverage on-demand shared cloud services to provide burstable capacity.

A great example of this is SmugMug, a photo and video editing, sharing and printing site. SmugMug uses Amazon's S3 service to store in excess of 500TB of digital photographs and videos. For its compute platform, SmugMug uses a combination of its own hosted servers and dynamically provisioned computing from Amazon's EC2 compute cloud.

SmugMug has taken this model to the next level by designing an autonomous control system called SkyNet. In essence, SkyNet watches for specific workloads that can be moved to Amazon's compute platform, such as the encoding of video or the application of watermarks to images. This can equate to the provisioning of several thousand virtual servers to complete these workloads. As the demand is met, these virtual servers are then decommissioned - all automatically, driven by demand and service level agreements. This is a far more cost-effective method of meeting customer demand, where the alternative would be purchasing and provisioning thousands of servers that may only become utilized once a week.

As customers begin to embrace these forms of dynamic computing, the demands on managed hosting providers - as well as what will be required to support these forms of computing - will evolve. A new breed of datacenter optimization services focused on the customer's workload will augment today's smart hands and basic managed services.
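The SkyNet behavior described above - watch a backlog of batch work, provision cloud servers to meet it, then decommission them when demand is met - can be sketched as a simple control loop. This is a hypothetical simplification, not SmugMug's actual system: the `CloudProvider` stub, the jobs-per-server ratio and the SLA cap are all illustrative assumptions.

```python
# Hypothetical sketch of a SkyNet-style autoscaler: size a pool of
# cloud servers to the depth of a batch-work queue (e.g. video encodes
# or watermark jobs), then shrink it to zero when demand is met.

JOBS_PER_SERVER = 50      # assumed throughput per virtual server
MAX_SERVERS = 5000        # illustrative cap from the SLA / budget


class CloudProvider:
    """Stand-in for an EC2-like API that launches/terminates instances."""

    def __init__(self):
        self.running = 0

    def scale_to(self, desired):
        # In a real system this would call the provider's API to launch
        # or terminate instances; here we just record the target count.
        self.running = desired


def desired_servers(pending_jobs):
    """Servers needed for the current backlog, capped by the SLA limit."""
    needed = -(-pending_jobs // JOBS_PER_SERVER)  # ceiling division
    return min(needed, MAX_SERVERS)


def control_step(cloud, pending_jobs):
    """One pass of the control loop: converge the pool on the backlog."""
    cloud.scale_to(desired_servers(pending_jobs))
    return cloud.running


cloud = CloudProvider()
control_step(cloud, 100_000)   # burst: large backlog of batch jobs
print(cloud.running)           # 2000 servers provisioned
control_step(cloud, 0)         # demand met: decommission everything
print(cloud.running)           # 0
```

The economic point of the article falls out of the last two calls: capacity exists only while the backlog does, instead of thousands of owned servers sitting idle between bursts.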
                                    WHAT'S OLD       WHAT'S NEW
Power consumption per square foot   100 watts        300 watts
Datacenter cooling type             Raised floor     Hot aisle containment
Virtualization                      None/hardware    Hypervisor
Total square feet                   20,000           100,000+
Server type                         Enterprise       Blade/volume
Workloads                           Static           Dynamic/portable

A good example of this is capacity planning. Today many enterprise customers consider capacity planning a quarterly, or potentially annual, event linked to purchasing cycles for new hardware and for square footage/power from hosting providers.

In the environments previously described, the discipline of capacity planning evolves to become "capacity prediction" - a constant activity in which the customer works closely with the managed service provider to ensure the most appropriate computing power is available when they need it. "Power" is the relevant term here. With the cost of electricity spiraling to 60-80% of an organization's IT costs, customers are looking for opportunities to optimize their power usage. In fact, power is such an important factor that many customers are now investigating or deploying "follow the moon" solutions, where applications are migrated to follow the inexpensive electricity typically available from energy companies between midnight and 5am - another door opened by highly virtualized environments, and one requiring close partnerships with managed service providers.

So what does all of this mean to hosting providers? The last datacenter construction boom occurred during the dot-com bubble nearly 10 years ago. Many of today's datacenters were constructed with a shelf life of 10-12 years, with the expectation of retrofits required to catch up with the power and cooling demands of modern infrastructure.

Let's consider the kinds of servers these datacenters were designed for. These "enterprise" class servers typically required 610 watts of power for a 4-rack-unit machine. In most cases rack space would be exhausted before the power. If you contrast this with today's modular blade servers, we see 2 to 3 times the density as compared to traditional rack mount systems: a fully populated chassis running at high load can require over 6500 watts in just 19 rack units.

From a datacenter point of view this changes the power provisioning equation from 100 watts per sq. ft. to areas requiring greater than 300 watts per sq. ft., putting strain on datacenter power plans and on the utility companies that provide the electricity. This often leads to datacenters running out of power while still being part vacant.

As power consumption increases, so does the heat that must be dissipated, leading to new blueprints for construction. We are seeing designs move away from raised floors paired with computer room air conditioning (CRAC) units toward hot aisle containment and in-rack cooling based on concrete slab construction.

As hosting providers begin this refresh cycle they need to consider both the changing workloads and the new services customers will undoubtedly require. The datacenter of the future will tie these changing workloads to intelligent facilities, allowing the customer to manage the risk, price and performance of their workloads. These workloads will become highly portable by design and capable of spanning datacenters - and potentially continents - to deliver the value of dynamic computing customers require.
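The density shift behind these figures is worth making explicit. Using the numbers from the article (610 W for a 4U enterprise server vs. over 6,500 W for a 19U blade chassis), a back-of-envelope calculation shows why per-square-foot provisioning roughly triples; the 42U rack size is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope check of the article's power density figures:
# 610 W per 4U enterprise server vs. 6,500 W per 19U blade chassis.

enterprise_w_per_u = 610 / 4     # ~152.5 W per rack unit
blade_w_per_u = 6500 / 19        # ~342 W per rack unit

density_ratio = blade_w_per_u / enterprise_w_per_u
print(f"Blade power density is {density_ratio:.1f}x enterprise")  # ~2.2x

# Assuming a fully populated standard 42U rack (illustrative):
rack_units = 42
enterprise_rack_w = enterprise_w_per_u * rack_units   # ~6,400 W
blade_rack_w = blade_w_per_u * rack_units             # ~14,400 W
print(f"{enterprise_rack_w:.0f} W vs {blade_rack_w:.0f} W per rack")
```

The roughly 2.2x per-rack-unit increase is consistent with the article's "2 to 3 times the density" claim, and it is this jump - multiplied across a full floor - that pushes provisioning from 100 to over 300 watts per square foot.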
Jon Greaves is a recognized leader in the information technology services industry, having spent a significant part of his career in managed services and operations with a particular emphasis on remote services delivery. Most recently he served as CTO and Distinguished Engineer of Sun Microsystems' Services business, and prior to that he played an instrumental role in the success of SevenSpace, a pioneer in remote IT operations services. Jon is an expert on systems security and privacy, having played a strategic role in representing the telecommunications industry in developing international standards in response to Critical Infrastructure Protection and U.S. Presidential Decision Directives 62 and 63. Jon has also held positions at British Telecom, MCI and Concert.

Carpathia Hosting is a leading provider of enterprise managed hosting services for government agencies and businesses that require colocation, managed services, data center management, and cloud computing. Employing dynamic technologies that remove hardware dependencies and improve efficiencies, Carpathia Hosting solutions strive to reduce operational costs while surpassing SLA requirements. As a datacenter-neutral company, Carpathia Hosting is quickly becoming the hosting company of choice for companies that demand security, quality and high performance.

CORPORATE: 43480 Yukon Drive, Suite 200, Ashburn, Virginia 20147
Voice: 1.703.740.1730 | Toll free: 1.888.200.9494 | Fax: 1.703.997.5577
DATA CENTERS: Ashburn, VA | Harrisonburg, VA | Phoenix, AZ | Los Angeles, CA

References to other products are made to show compatibility. All companies and/or products mentioned in this document are registered or trademarked by their respective organizations. The inclusion of third-party products does not imply endorsement by these parties, unless otherwise noted.